MODELING RATIONALITY, MORALITY, AND EVOLUTION Edited by Peter A. Danielson
Vancouver Studies in Cognitive Science is a series of volumes in cognitive science. The volumes will appear annually and cover topics relevant to the nature of the higher cognitive faculties as they appear in cognitive systems, either human or machine. These will include such topics as natural language processing, modularity, the language faculty, perception, logical reasoning, scientific reasoning, and social interaction. The topics and authors are to be drawn from philosophy, linguistics, artificial intelligence, and psychology. Each volume will contain original articles by scholars from two or more of these disciplines. The core of the volumes will be articles and comments on these articles to be delivered at a conference held in Vancouver. The volumes will be supplemented by articles especially solicited for each volume, and will undergo peer review. The volumes should be of interest to those in philosophy working in philosophy of mind and philosophy of language; to those in linguistics in psycholinguistics, syntax, language acquisition and semantics; to those in psychology in psycholinguistics, cognition, perception, and learning; and to those in computer science in artificial intelligence, computer vision, robotics, natural language processing, and scientific reasoning.
VANCOUVER STUDIES IN COGNITIVE SCIENCE Forthcoming volumes Volume 8
Visual Attention Editor, Richard Wright Psychology Simon Fraser University
Volume 9
Colour Perception: Philosophical, Psychological, Artistic, and Computational Perspectives Editor, Brian Funt School of Computing Science Simon Fraser University
SERIES EDITORS General Editor Steven Davis, Philosophy, Simon Fraser University Associate General Editors Kathleen Akins, Philosophy Department, Simon Fraser University Nancy Hedberg, Linguistics, Simon Fraser University Fred Popowich, Computing Science, Simon Fraser University Richard Wright, Psychology, Simon Fraser University
EDITORIAL ADVISORY BOARD Susan Carey, Psychology, Massachusetts Institute of Technology Elan Dresher, Linguistics, University of Toronto Janet Fodor, Linguistics, Graduate Center, City University of New York F. Jeffry Pelletier, Philosophy, Computing Science, University of Alberta John Perry, Philosophy, Stanford University Zenon Pylyshyn, Psychology, Rutgers University Len Schubert, Computing Science, University of Rochester Brian Smith, System Sciences Lab, Xerox Palo Alto Research Center, Center for the Study of Language and Information, Stanford University
BOARD OF READERS William Demopoulos, Philosophy, University of Western Ontario Allison Gopnik, Psychology, University of California at Berkeley Myrna Gopnik, Linguistics, McGill University David Kirsh, Cognitive Science, University of California at San Diego François Lepage, Philosophy, Université de Montréal Robert Levine, Linguistics, Ohio State University John Macnamara, Psychology, McGill University Georges Rey, Philosophy, University of Maryland Richard Rosenberg, Computing Science, University of British Columbia Edward P. Stabler, Jr., Linguistics, University of California at Los Angeles Susan Stucky, Center for the Study of Language and Information, Stanford University Paul Thagard, Philosophy Department, University of Waterloo
Modeling Rationality, Morality, and Evolution
edited by Peter A. Danielson
New York Oxford OXFORD UNIVERSITY PRESS 1998
Oxford University Press Oxford New York Athens Auckland Bangkok Bogota Buenos Aires Calcutta Cape Town Chennai Dar es Salaam Delhi Florence Hong Kong Istanbul Karachi Kuala Lumpur Madrid Melbourne Mexico City Mumbai Nairobi Paris Sao Paulo Singapore Taipei Tokyo Toronto Warsaw and associated companies in Berlin Ibadan
Copyright 1998 by Oxford University Press, Inc. Published by Oxford University Press, Inc., 198 Madison Avenue, New York, NY 10016 Oxford is a registered trademark of Oxford University Press All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.
Library of Congress Cataloging-in-Publication Data Modeling rationality, morality, and evolution / edited by Peter Danielson. p. cm. — (Vancouver studies in cognitive science: v. 7) Includes bibliographical references. ISBN 0-19-512549-5 (alk. paper). - ISBN 0-19-512550-9 (pbk.: alk. paper) 1. Ethics. 2. Rational choice theory. 3. Ethics, Evolutionary. 4. Prisoner's dilemma game. I. Danielson, Peter, 1946- . II. Series. BJ1031.M63 1998 110-dc21 98-27140 CIP
Printing 9 8 7 6 5 4 3 2 1 Printed in Canada on acid-free paper
Acknowledgments
I would like to thank the Social Sciences and Humanities Research Council of Canada, the Office of the Dean of Arts and the Publications Committee at Simon Fraser University, and the Centre for Applied Ethics at the University of British Columbia for their generous support of the volume and of the Seventh Annual Cognitive Science Conference, Modeling Rational and Moral Agents (Vancouver, Canada, February 1994), at which many of the papers were delivered. Many people helped to organize the conference and prepare the volume. My greatest thanks go to Steven Davis, who first suggested this project and invited me to serve as local chairman of the conference. Steven provided knowledge, support, and superb organizing skills at every point. The VSCS Committee provided help in organizing, advertising and running the conference. Fred Popowich deserves special thanks for helping me to keep in touch with the Internet community. Thanks to Tom Perry, head of the Cognitive Science program, Tanya Beaulieu, who did a remarkable job with local organizing, Lindsey Thomas Martin, who prepared the camera-ready copy from the authors' disks and texts, and Edward Wagstaff, who did the copy-editing and proofreading. I would also like to thank my colleague Leslie Burkholder who helped to plan the conference and my research assistant Chris MacDonald for help with proofreading. Thanks to all who attended the conference and contributed to its lively discussions, and to the authors of the volume for their cheerful co-operation and willingness to get material to me on time.
The following publishers were kind enough to allow me to reprint material: Andrew Irvine, "How Braess' Paradox solves Newcomb's Problem" is reprinted from International Studies in the Philosophy of Science, 7:2, with the permission of Carfax Publishing. Paul Churchland, "The Neural Representation of the Social World" is excerpted from chapters 6 and 10 of P. M. Churchland, The Engine of Reason, the Seat of the Soul: A Philosophical Essay on the Brain (Cambridge, 1994: Bradford Books/MIT Press). Reprinted with the permission of MIT Press. David Schmidtz, "Moral Dualism" contains material from Rational Choice and Moral Agency (Princeton, NJ: Princeton University Press, 1995) with permission of Princeton University Press.
Contents
Acknowledgments
Contributors
1 Introduction 3 Peter A. Danielson

RATIONALITY
2 Rationality and Rules 13 Edward F. McClennen
3 Intention and Deliberation 41 David Gauthier
4 Following Through with One's Plans: Reply to David Gauthier 55 Michael E. Bratman
5 How Braess' Paradox Solves Newcomb's Problem 67 A. D. Irvine
6 Economics of the Prisoner's Dilemma: A Background 92 Bryan R. Routledge
7 Modeling Rationality: Normative or Descriptive? 119 Ronald de Sousa

MODELING SOCIAL INTERACTION
8 Theorem 1 137 Leslie Burkholder
9 The Failure of Success: Intrafamilial Exploitation in the Prisoner's Dilemma 161 Louis Marinoff
10 Transforming Social Dilemmas: Group Identity and Co-operation 185 Peter Kollock
11 Beliefs and Co-operation 210 Bernardo A. Huberman and Natalie S. Glance
12 The Neural Representation of the Social World 236 Paul M. Churchland

MORALITY
13 Moral Dualism 257 David Schmidtz
14 Categorically Rational Preferences and the Structure of Morality 282 Duncan Macintosh
15 Why We Need a Moral Equilibrium Theory 302 William J. Talbott
16 Morality's Last Chance 340 Chantale LaCasse and Don Ross

EVOLUTION
17 Mutual Aid: Darwin Meets The Logic of Decision 379 Brian Skyrms
18 Three Differences between Deliberation and Evolution 408 Elliott Sober
19 Evolutionary Models of Co-operative Mechanisms: Artificial Morality and Genetic Programming 423 Peter A. Danielson
20 Norms as Emergent Properties of Adaptive Learning: The Case of Economic Routines 442 Giovanni Dosi, Luigi Marengo, Andrea Bassanini and Marco Valente
Contributors
Andrea Bassanini, Faculty of Statistics, University "La Sapienza," Rome Michael E. Bratman, Philosophy Department, Stanford University Leslie Burkholder, Department of Philosophy, University of British Columbia Paul M. Churchland, Department of Philosophy, University of California, San Diego Peter A. Danielson, Centre for Applied Ethics and Department of Philosophy, University of British Columbia Ronald de Sousa, Department of Philosophy, University of Toronto Giovanni Dosi, Department of Economics, University "La Sapienza," Rome and IIASA, Laxenburg David Gauthier, Department of Philosophy, University of Pittsburgh Natalie S. Glance, Rank Xerox Research Centre, Meylan, France Bernardo A. Huberman, Dynamics of Computation Group, Xerox Palo Alto Research Center A. D. Irvine, Department of Philosophy, University of British Columbia Peter Kollock, Department of Sociology, University of California, Los Angeles Chantale LaCasse, Department of Economics, University of Ottawa
Duncan Macintosh, Department of Philosophy, Dalhousie University Luigi Marengo, Department of Economics, University of Trento, and IIASA, Laxenburg Louis Marinoff, Department of Philosophy, The City College of CUNY Edward F. McClennen, Department of Philosophy, Bowling Green State University Don Ross, Department of Philosophy, University of Ottawa Bryan R. Routledge, Graduate School of Industrial Administration, Carnegie-Mellon University David Schmidtz, Departments of Philosophy and Economics, University of Arizona Brian Skyrms, Department of Philosophy, University of California, Irvine Elliott Sober, Department of Philosophy, University of Wisconsin, Madison William J. Talbott, Department of Philosophy, University of Washington Marco Valente, Faculty of Statistics, University "La Sapienza," Rome
Modeling Rationality, Morality, and Evolution
1
Introduction
Peter Danielson
This collection began as a conference on Modeling Rational and Moral Agents that combined two themes. First is the problematic place of morality within the received theory of rational choice. Decision theory, game theory, and economics are unfriendly to crucial features of morality, such as commitment to promises. But since morally constrained agents seem to do better than rational agents - say by co-operating in social situations like the Prisoner's Dilemma - it is difficult to dismiss them as simply irrational. The second theme is the use of modeling techniques. We model rational and moral agents because problems of decision and interaction are so complex that there is much to be learned even from idealized models. The two themes come together in the most obvious feature of the papers: the common use of games, like the Prisoner's Dilemma (PD), to model social interactions that are problematic for morality and rationality. The presentations and discussion at the conference enlarged the topic. First, many of the resulting papers are as much concerned with the modeling of situations, especially as games, as with the details of the agents modeled. Second, evolution, as a parallel and contrast to rationality, plays a large role in several of the papers. Therefore this volume has a broader title and a wider range of papers than the conference. Both of the original themes are broadened. On the first theme, contrasts between rationality and morality are complemented by the contrast of rationality and evolution and the effect of evolution on norms. On the second theme, the papers appeal to a wide range of models, from Irvine's abstraction that spans decision theory, game theory, queuing theory, and physics, to the particular and specialized models of minds in Churchland, and working models of the evolution of strategies in Danielson and Dosi et al.
The Papers
The papers are organized into four sections: Rationality, Modeling Social Interaction, Morality, and Evolution to capture some common elements. Here I will sketch the rationale for these groupings emphasizing the connections between the papers.
Rationality The volume opens with a central theme in cognitive science: planning. Agents' plans do not fit well within the received theory of rational choice, according to which one should choose the best option at each occasion for choice. The moralized interpretation of the pull of a plan - as a commitment or promise - worsens this problem. In the opening paper Edward McClennen develops his well-known work of revising the received theory to account for dynamic choice. He concludes "that a viable version of consequentialism will be a version of rule consequentialism, in which the notion of a rational commitment to extant rules has a central place" (McClennen, p. 32).1 Many of the papers focus on David Gauthier's account of how rational choice should be modified to allow for commitments to plans. In the next two papers, Michael Bratman and Gauthier debate the extent of the revision needed: "Bratman wants to replace the desire-belief theory of intention in action with what he calls the planning theory. But he does not go far enough; he accepts too much of the desire-belief model to exploit fully the resources that planning offers" (Gauthier, p. 40). Arguably, one of the biggest advances in our understanding of the Prisoner's Dilemma was the development of an analogous problem in decision theory: Newcomb's problem.2 A great advantage of abstract modeling techniques is the ability to draw strong analogies across widely different subject areas. Andrew Irvine argues that Braess' paradox (in physics) is structurally identical to Newcomb's problem. He sets out a fascinating matrix of analogies to educate our intuitions and show how four so-called paradoxes can be similarly resolved. Since many of the papers argue that the received theory of rational choice needs revision, it is important to set out what the received theory is. Bryan Routledge provides a review of the relevant economics literature. This section ends with Ronald de Sousa's paper, which discusses the interplay of normative and descriptive interpretations of rationality. De Sousa proposes a role for emotions in framing rational strategies and accounting for normativity. There are links between de Sousa's discussion and Macintosh's interest in fundamental values as well as many connections between de Sousa's discussion of evolution and the work in the fourth section. Modeling Social Interaction Robert Axelrod's The Evolution of Cooperation has stimulated much interest in the area of modeling interaction in mixed motive games like the Prisoner's Dilemma. The first two papers in this section build on Axelrod's work. Louis Marinoff works most closely within Axelrod's framework, elaborating Axelrod's tournament-testing device using
Axelrod's own game, the Iterated Prisoner's Dilemma. He illuminates one of the reasons why correlated strategies of conditional co-operation are problematic. Leslie Burkholder explains one source of complexity in his generalization of Axelrod's theorem 1: given the variety of possible opponents in a tournament environment, no one strategy is best even for the one-shot PD. The third paper in this section tests a game model against actual interaction of human subjects. Peter Kollock's experimental setting provides evidence of a gap between the payoffs offered and the utilities of the subjects. He finds "that people transform interdependent situations into essentially different games . . . [T]he motivational basis for many social dilemma situations is often best modeled by an Assurance Game rather than a Prisoner's Dilemma" (Kollock, p. 208). While most of the models in this collection focus on interaction between pairs of agents, Bernardo Huberman and Natalie Glance use a more complex, many player game. They introduce modeling techniques for allowing agents to form expectations for dealing with their complex social situation. They argue that co-operation requires (1) limits on group size and (2) "access to timely information about the overall productivity of the system" to avoid "complicated patterns of unpredictable and unavoidable opportunistic defections" (Huberman and Glance, p. 235). Paul Churchland's paper bridges this section to the next on Morality. He sketches how neural nets might represent social situations and thereby show "how social and moral knowledge ... might actually be embodied in the brains of living biological creatures" (Churchland, p. 238). The resulting "portrait of the moral person as a person who has acquired a certain family of perceptual and behavioural skills contrasts sharply with the more traditional accounts that picture the moral person as one who has agreed to follow a certain set of rules ... or alternatively, as one who has a certain set of overriding desires ... Both of these more traditional accounts are badly out of focus" (Churchland, p. 253).
Morality Where the opening papers of the first section reconstructed rationality with a view to morality, this section begins with three papers that reconstruct moral theory with a view to rationality. For example, we have characterized the tension in the Prisoner's Dilemma as one between rationality and morality, but David Schmidtz's theory of moral dualism gives both the personal and interpersonal elements a place within morality. His sketch of moral dualism has the attractive feature of remaining "structurally open-ended" (Schmidtz, p. 258). Moving in a different direction, Duncan Macintosh questions the instrumental
account of values assumed by most attempts to rationalize morality. He argues that, in addition to instrumental reasons to change one's values, there are metaphysical grounds for criticizing some "first values" as well. (Cf. de Sousa for another discussion of "foundational values.") William Talbott connects rational choice and morality in a different way. He criticizes conditionally co-operative strategies - such as Gauthier's constrained maximization - as deeply indeterminate. (This connects to the discussion of correlation in Skyrms and Danielson.) He sees the solution for this problem in the equilibrium techniques central to game theory. In contrast with the first three papers in this section, which are constructive, LaCasse and Ross claim to be decisively critical. "The leading advantage of this attempt to find (as we shall say) 'economic' foundations for morality is that the concepts to which it appeals have been very precisely refined. As usual, however, this sword is double-edged: it permits us to show, in a way that was not previously possible, that the concept of morality is ill-suited to the task of understanding social co-ordination" (LaCasse and Ross, p. 342). Hence their title: "Morality's Last Chance."
Evolution
Evolution provides another perspective on the rationality of co-operation. If co-operation is irrational, why is it found in nature? How can co-operation evolve given the analogy between rationality and evolution? "Selection and [rational] deliberation, understood in terms of the usual idealizations, are optimizing processes. Just as the (objectively) fittest trait evolves, so the (subjectively) best actions get performed. This isomorphism plays an important heuristic role in the way biologists think about the evolutionary system" (Sober, p. 407). Skyrms (p. 381) puts the problem neatly, having noted the fact of co-operative behaviour in a wide range of species: "Are ground squirrels and vampire rats using voodoo decision theory?" Skyrms' solution turns on the role of correlated as contrasted with random pairing of strategies. If co-operators have a greater chance of playing with each other rather than playing with other members of the population, then procedures - like Jeffrey's The Logic of Decision - that recommend co-operation are not defective. As Skyrms notes, there is a wide-ranging literature on detection of similarity in biology and game theory. Skyrms's model requires that correlation determines which strategies are paired, while some of the literature, including Gauthier and Danielson in this volume, use detection to determine strategy given random pairing. But when recognition of similar agents is used to generate moves in a game, complications arise (cf. Talbott and Marinoff).
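As a rough illustration of how correlated pairing can favour co-operation (the payoff numbers and the single correlation parameter below are a simplification of my own, not Skyrms's model), consider a population in which a fraction p co-operate and like types meet with extra probability e:

    # Illustrative only: standard PD payoffs and a crude correlation parameter e.
    # With e = 0 pairing is random; with e = 1 each type always meets its own kind.
    R, S, T, P = 3, 0, 5, 1   # reward, sucker, temptation, punishment (hypothetical)

    def expected_payoffs(p, e):
        """Expected payoff to a co-operator and to a defector when a fraction p
        of the population co-operates and pairing is correlated by e."""
        meet_c_given_c = p + e * (1 - p)   # chance a co-operator meets a co-operator
        meet_c_given_d = p * (1 - e)       # chance a defector meets a co-operator
        w_c = R * meet_c_given_c + S * (1 - meet_c_given_c)
        w_d = T * meet_c_given_d + P * (1 - meet_c_given_d)
        return w_c, w_d

    print(expected_payoffs(0.5, 0.0))   # random pairing: defectors earn more
    print(expected_payoffs(0.5, 0.8))   # strong correlation: co-operators earn more

With enough correlation the expected payoff to co-operating exceeds that of defecting, which is the sense in which correlated pairing, rather than a "voodoo" decision theory, does the work.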
Sober also discusses how correlation distinguishes evolution and deliberation, and adds two additional reasons for differences. First, he argues, the conclusion of the puzzling case of the PD with known finite length is different for evolutionary and rational game theory. Second, rational deliberation has a role for counterfactual reasoning that is missing from the evolutionary case. The final two papers descend to a lower level of modeling, where an evolutionary mechanism is used actually to construct various agents and strategies. Dosi et al. opens with a good survey of the evolutionary computing approach to modeling. Both papers use the genetic programming method to construct functions to test in interaction. Danielson's test case is the Extended Prisoner's Dilemma (see the story in Gauthier and critical discussion in LaCasse and Ross), and the successful players deploy correlated, conditional, co-operative strategies. Dosi et al. argue for theoretical limits on rational choice and illustrate the robustness of norms with a model of learning pricing procedures in an oligopolistic market.
Some Methodological Reflections
Prisoner's Dilemma
The game upon which most of the papers in this volume focus is the two-person Prisoner's Dilemma. Irvine recounts the classic story to which the game owes its name; Skyrms notes the variation played by two clones. In the round-table discussion that ended the conference, there was sharp criticism of the large role played by this one game. One must agree that both from the point of view of rationality and morality, it is a mistake to focus on a single game as if it represented the only problem in the theory of rational interaction. From the moral side, the PD is very simple; its single, equal, optimal outcome obviates problems of distributive justice. Therefore, on the rational side, the PD does not demand any bargaining. None the less, there is something to be said for the emphasis on this one game. First, while the Prisoner's Dilemma is not the only problem in the relation of rationality to morality, it is both crucial and controversial. Crucial, because were there no situation in which morally motivated agents did better, there would be little hope of a pragmatic justification of morality. Controversial, because, as the papers collected here attest, there is strong disagreement about the correct account of the "simple" PD situation. While McClennen, Gauthier, and Talbott defend the rationality of a commitment to co-operate, Bratman, Irvine, LaCasse and Ross, and Skyrms all criticize this conclusion.
Second, focusing on a single puzzle case has the advantage of unifying discussion. We are fortunate that each author has not invented an idiosyncratic model. Much beneficial critical interaction is gained by use of a common model. Third, the appearance of focus on a single game is a bit deceptive; the papers employ several variants on the Prisoner's Dilemma. While Irvine, Kollock, Macintosh, Talbott, and Skyrms all focus on the classic, one-play game, McClennen, Gauthier, Bratman, Danielson, and LaCasse and Ross employ an extended version. Marinoff focuses on the Iterated game, and Glance and Huberman and Dosi et al. employ an n-player variant. These variations form a family of models, but differ in crucial respects. For example, the gap between rationality and cooperation is narrowed in the Iterated and Extended versions of the game, and there are bargaining problems in the n-player variants. Finally, it must be admitted that emphasizing the Prisoner's Dilemma has the risk of oversimplifying morality. The moral virtue needed to overcome the compliance problem in the PD is very rudimentary, ignoring much of moral interest. Indeed, as Burkholder reminds us, the connection between game models like the PD and morality is not simple. It is not always moral to co-operate and immoral to defect in a situation modeled as a PD. The oversimplification of morality is a risk of an approach to ethics that begins with rationality. From this perspective, even simple social norms look moral in their contrast to act-by-act rational calculations (cf. Dosi, p. 442). The subtler discussions of morality in section three counterbalance this tendency to oversimplification.
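For readers new to the game, a payoff table makes the structure explicit (the particular numbers are illustrative and not drawn from any paper in the volume; the first entry in each cell is the row player's payoff):

                    Co-operate     Defect
     Co-operate     3, 3           0, 5
     Defect         5, 0           1, 1

Defection is the dominant choice for each player, yet mutual co-operation (3, 3) is the single, equal, optimal outcome referred to above; the Iterated, Extended, and n-player variants mentioned in this section modify this basic matrix in different ways.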
Modeling "This is one of the reasons why the recent artificial network models have made possible so much progress. We can learn things from the models that we might never have learned from the brain directly. And we can then return to the biological brain with some new and betterinformed experimental questions to pose .. ."(Churchland, p. 245). Similarly, artificial models of social evolution allow us to study the evolution of norms and dispositions in ways we might never have studied the historical evolution of real norms and dispositions. Abstract models have two faces. From one side, they simplify their subjects, allowing arguments and analogies to flow between apparently distant domains. The Irvine paper is the best example of the power of abstraction, but notice as well how McClennen and Bratman are able to apply the act/rule distinction from utilitarianism more generally. This allows Bratman to apply ]. J. C. Smart's criticism of rule worship to "plan worship" in the case of rationality (Bratman p. 10). The other side
of models - especially computerized models - is their generative power, which throws up myriad concrete examples and counterexamples. The discussions in Churchland, Huberman and Glance, Danielson, and Dosi et al. are all informed by the unexpected results that crop up when thought experiments get somewhat more realized. Of course, we must not confuse these artifacts with the real subject matter, as Kollock, Churchland, and Dosi et al. remind us: "stylized modeling exercises ... complement more inductive inquiries from, e.g., social psychology and organizational sciences" (Dosi et al. p. 458).
Application
Finally, the essays that follow are theoretical - extremely so - not practical. None the less there is reason to think that a unified theory of rationality and morality would have a wide and beneficial application. At the end of his conference paper, McClennen held out some tantalizing possibilities for an evolutionary social process leading to the success of a new and morally informed conception of rational agency:
Beyond theory, it is worth pondering on what might be the effect of a course of study in which the issue of what rationality requires in such choice situations was not begged in favour of [an] extremely limiting sort of model... Suppose, in particular, that a more concerted effort were made to make individuals aware of the complex nature of decision-making over time, and in interactive situations with other agents, and at least to mark out concretely the advantages to be realized by those who could resolutely serve as their own agents, and choose within the context of co-operative schemes in a principled fashion. One might then reasonably expect to see this more efficient mode of dynamic decision-making drive out more costly precommitment and enforcement methods, and this through nothing more than what economists like to describe as the ordinary competitive process.3
Notes
1 Page references in the text refer to articles in this volume.
2 See Campbell and Sowden (1985), which is organized around this analogy. Richmond Campbell's introduction, "Background for the uninitiated" may also be useful to readers of this volume.
3 Edward McClennen, "Rationality and Rules," as delivered at the Seventh Annual Cognitive Science Conference, Modeling Rational and Moral Agents, Vancouver, 11-12 February 1994, note 55.
References
Campbell, Richmond, and L. Sowden (eds.) (1985). Paradoxes of Rationality and Co-operation. Vancouver: University of British Columbia Press.
Rationality
2
Rationality and Rules
Edward F. McClennen
1. Introduction and Statement of the Problem
I propose to explore to what extent one can provide a grounding within a theory of individual, rational, instrumental choice for a commitment to being guided by the rules that define a practice. The approach I shall take - already signalled by the use of the term "instrumental" - is Humean, rather than Kantian, in spirit. That is, my concern is to determine to what extent a commitment to such rules can be defended by appeal to what is perhaps the least problematic sense of rationality - that which is effectively instrumental with respect to the objectives that one has deliberatively adopted. In speaking of the "rules of a practice," I mean to distinguish such rules from those that are merely maxims.1 The latter are generally understood to summarize past findings concerning the application of some general choice-supporting consideration to a particular case. Thus, taking exception to such a rule can always be justified, in principle at least, by direct appeal to the underlying choice-supporting considerations. Correspondingly, a person is typically understood to be entitled to reconsider the correctness of a maxim, and to question whether or not it is proper to follow it in a particular case. The rules of a practice have a very different status. While the rule itself may be defended by appeal to various considerations, those who participate in a practice cannot justify taking exception to this rule on a particular occasion by direct appeal to those considerations. Correspondingly, those who participate in the practice are not at liberty to decide for themselves on the propriety of following the rule in particular cases. The question that I want to address, then, is how a commitment to abide by the rules of practices can be defended. More specifically, I shall be concerned with how such a commitment could be defended by reference to what would effectively promote the objectives of the participants, that is, within the framework of a model of rational, instrumental choice. Now one natural starting point for such a defence is the consideration that in a wide variety of political, economic, and social settings, it will be mutually advantageous for participants not only if
various practices are adopted, but if each participant voluntarily accepts the constraints of the rules defining these practices. If mutual gains are to be secured, and if individuals are unable voluntarily to accept the constraints of the rules, more or less coercive sanctions will be needed to secure their compliance. But sanctions are an imperfect, or, as the economist would say, a "second-best" solution. They typically ensure only partial conformity, consume scarce resources, destroy personal privacy, and deprive participants of freedom. In a wide variety of settings, then, efficiently organized interaction requires that persons voluntarily respect constraints on the manner in which they pursue their own objectives - that is, mutually accept an "ethics of rules." There is, however, significant tension between this conclusion and the basic presupposition of contemporary rational choice theory, namely, that rational choice is choice that maximizes preferences for outcomes. The tension arises because the objectives that one seeks to promote by participating in a practice are typically realized by the participatory actions of others. Given that others will in fact participate, one can often do better yet by acting in some specific, non-participatory manner. In short, one confronts here a version of the familiar public goods problem. In such cases, any given participant can easily have a preference-based reason for violating the rule. This being so, it would seem that from the perspective of a theory of instrumental rationality, a "practice" could never be conceived as setting more than a guideline or "maxim" for choice. But this implies that the standard model of instrumental reasoning does not provide a secure footing for a rational commitment to practice rules.2 What is needed, then, is a solid way of rebutting this argument from the public goods nature of co-ordination schemes based on practices. I propose to begin somewhat obliquely, however, by exploring a quite distinct, but related, type of problem. In many cases what one faces is not interpersonal conflict - as suggested by the public goods story - but intrapersonal conflict. That is, in many situations the self appears to be divided against itself, and this, moreover, as a result of reasoning, both naturally and plausibly, by reference to the consequences of its own choices. The story to be told here can be clarified by studying some simple, abstract models of rational choice. What emerges is that the problem of an isolated individual making a rational commitment to rules turns out to be rooted in a way of thinking about individual rational choice in general, a way that is so deeply ingrained in our thinking as to virtually escape attention altogether, yet one that can and should be questioned. I shall suggest, then, that it is here that we find a model for the problem that arises at the interpersonal level, and one that offers an important insight into how the interpersonal problem can be resolved.
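To fix ideas about the public goods structure invoked above, a standard textbook parameterization (the unit endowment and the multiplier m below are illustrative, not McClennen's own example) runs as follows. Each of n participants holds an endowment of 1; contributions are multiplied by m, with 1 < m < n, and the proceeds are shared equally. If k others contribute, then

$$ \text{contributor: } \frac{m(k+1)}{n}, \qquad \text{non-contributor: } 1 + \frac{mk}{n}. $$

Since m/n < 1, withholding pays no matter what the others do; yet because m > 1, universal contribution leaves every participant better off than universal defection. This is the sense in which each participant can have a preference-based reason for violating the rule even though general compliance is mutually advantageous.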
The analysis commences in Section 2, with an exploration of some standard models of intrapersonal choice - models which suggest a serious limitation to the standard way of interpreting consequential reasoning. Section 3 develops the thesis that it is not consequentialism as such, but only an incremental version of consequentialism, that generates the problem. This paves the way for a presentation of an alternative, and more holistic or global way of thinking about consequences. Section 4 argues for the instrumental superiority of this alternative conception. Section 5 extends these results to problems of interdependent choice. In Section 6, these results are then explicitly brought to bear on the rationality of accepting an "ethics of rules" - of accepting the constraints of practices.
2. Intrapersonal Choice
Consider everyday situations in which you find yourself conflicted between more long-range goals and present desires. You want to reduce your weight, but right now what you want to do is have another helping of dessert. You want to save for the future, but right now you find yourself wanting to spend some of what could go into savings on a new stereo. The logic of this type of situation can be captured by appeal to a very simple abstract model, in which you must make a pair of choices in sequence (Figure 1). This neatly particularizes into a situation in which you face a problem of a change in your preferences, if we suppose an intelligible story can be told to the effect that at time t0 you will prefer outcome o3 to o2 and o2 to o4, but that at time t1 you will prefer o4 to o3, and would prefer o3 to o2, if the latter were (contrary to fact) to be available at that time.3
Figure 1: A simple sequential choice problem
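Since the figure itself is not reproduced here, the following minimal sketch records the structure described in the text (the dictionary encoding and the treatment of preferences as simple rankings are mine, not the author's):

    # Figure 1 as described in the text: at node 1 you choose a1 (move on to
    # node 2) or a2 (take outcome o2 outright); at node 2 you choose a3
    # (outcome o3) or a4 (outcome o4).
    tree = {
        "node1": {"a1": "node2", "a2": "o2"},
        "node2": {"a3": "o3", "a4": "o4"},
    }

    # Preference rankings (best first) at the two times, as given in the text:
    # at t0 you rank o3 over o2 over o4; at t1 you rank o4 over o3 (over o2).
    preferences = {
        "t0": ["o3", "o2", "o4"],
        "t1": ["o4", "o3", "o2"],
    }

The myopic, sophisticated, and resolute choosers discussed below differ in how, and whether, the t1 ranking is allowed to govern what happens at node 2.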
Consider now the plan that calls for you to move to the second choice node and then choose path a3 over a4. Call this plan a1-a3. Since at t0 you prefer the outcome of this plan, o3, to the outcome of choosing path a2 outright, namely o2, you might be inclined to pursue the former rather than the latter. Upon arriving at node 2, however, you will (so the argument goes) choose path a4 over a3, since, by hypothesis, you will then prefer o4 to o3. That is, you will end up abandoning the plan you adopted. To do this is, on the standard account, to choose in a dynamically inconsistent manner. Being dynamically inconsistent involves more than just changing plans in midstream. Having decided upon a plan, you may acquire new information that calls upon you, as a rational person, to alter your plans. In such a case there is no inconsistency.4 The charge of dynamic inconsistency arises when the change is not predicated on receipt of new information, and where, indeed, it is one that you should have anticipated. That is, more careful reflection would have led you to realize that the plan you are inclined to adopt is one that you will subsequently abandon. To be dynamically inconsistent, then, is to be a myopic chooser: you shortsightedly fail to trace out the implications of the situation for what you will do in the future. Being dynamically inconsistent in this sense means that your future self ends up confounding the preferences of your earlier self. As it turns out, however, myopia involves something worse. A myopic approach makes you liable to what are clearly, from a consequential perspective, unacceptable outcomes. The extensive literature on Dutch-books and money-pumps shows that myopic choosers can be "tricked" into accepting bets and making other choices that result in a sure net loss of scarce resources.5 Typically, this involves your being willing to pay to give up one option in exchange for another, and then, after certain events have occurred, being willing to trade once again, for another fee, in such a way that the net effect is that you end up being exposed all along to the original option, and thus have paid fees twice to no purpose. When matters are viewed from this perspective, it is not just one or the other of the selves whose preferences are confounded. Rather, both selves stand to lose as a result of myopia. Moreover, since the myopic chooser's loss is the exploiter's sure gain, myopic choosers must expect, at least in an entrepreneurial world, that they will be exploited: others will be eager to do business with them. All of this makes for a powerful pragmatic argument against myopic choice. Such unfortunate consequences can be avoided, however, by choosing in a sophisticated manner. To be sophisticated is to first project what you will prefer, and thus how you will choose, in the future, and then reject any plan that can be shown, by such a projection, to be one that
you would subsequently end up abandoning. Such plans, so the argument goes, are simply not feasible. For the problem given in Figure 1, plan a1-a3 is not a feasible plan. Despite your preference at t0 for outcome o3, the corresponding plan is one that you will not end up executing, and thus o3 cannot be secured. Given this, and given your preferences at the first-choice node, you should choose a2 outright, and thereby realize o2. To be sophisticated, then, is to tailor your ex ante choice of a plan to your projection of what you will prefer, and hence choose, ex post. Just what this implies will vary from case to case. Thus, for example, plan a2 might involve giving irrevocable instructions to a hired agent to choose o3 rather than o4 at the second-choice node. That is, you may be able to achieve dynamic consistency by precommitting.
3. Assessing Consequences
Myopic and sophisticated choice share certain things in common. Both make an explicit appeal to the principle of choosing plans so as to maximize with respect to your preferences for outcomes. That is, your assessment of the alternatives available is presumed to turn on your preferences for the outcomes realizable by your choices. Call this consequentialism.6 More importantly, however, in both myopic and sophisticated deliberation the assessment of consequences is perceived to take place in an incremental manner. What is relevant for deliberation and choice at a given node in a decision tree is not what preferences you had at the outset, when you first contemplated the whole decision tree, but simply those preferences that you just happen to have, at the given node, with respect to outcomes still realizable by your action.7 Now, certainly part of what is involved in this perspective is altogether plausible. On the standard preference (or desire) and belief model, it is your preferences together with your beliefs that causally determine your choice of an action. Intuitively, there can be no causal action at a distance. If preferences are to play a causal role, it must be the preferences you have (together with your beliefs) now, that determine your choice now. Notice, moreover, that such deliberation is consistent with having "backward-looking" concerns or commitments of various types.8 It may be, for example, that in certain cases what you now prefer to do is take your marching orders from some previous self. Alternatively, and less subserviently, you might now prefer to participate in a co-ordination scheme with earlier selves. The point, however, is that what counts in all such cases are the backward-regarding preferences that are entertained by you now.9 There is, however, a distinct and much more problematic assumption that appears to be implicit in this way of construing rational deliberation. This is that such a present concern for choosing to co-ordinate
your choice now with a choice made previously - to choose in a manner consistent with a prior commitment - cannot itself be grounded in a process of rational deliberation. That is, it would appear that, on the standard account, the logic of rational deliberation precludes that a preference on the part of your present self for co-ordinating its choice with the choices of earlier selves might issue from, as distinct from setting the stage for, rational deliberation itself. By way of seeing what is at issue here, it will prove helpful, first of all, to consider the case in which you do not have a present preference - deliberative or otherwise - for co-ordinating choice now with choice made earlier. In such a case, your present self will take the choices it previously made as simply setting constraints upon what outcomes are still possible - in exactly the same way that it will also take past natural events as setting such constraints. Suppose now that this situation also holds for your future self - that is, when it becomes the present self, it has no preference for co-ordinating with its earlier selves. This, in turn, implies that there could be no point to your present self trying to co-ordinate with its future self. On the assumption that this state of affairs exists for each present self, what is left open to you at each moment is not co-ordination with, but, at best, strategic adjustment to, the (hopefully) predictable behaviour of your future self.10 Each of your time-defined selves, then, will think of itself as an autonomous chooser, who has only to take cognizance of the choices that your other time-defined selves have made or will make, in the very same way that it must take cognizance of other "natural" events that can affect the outcome of its choices. That is, each of your time-defined selves will deliberate in an autarkic manner.11 Consider now the situation in which you do have a past-regarding preference. Suppose you prefer, now, that you act in conformity with a plan that you initiated at some earlier point in time. How are we to understand your coming to have such a preference? A variety of explanations here are possible, of course. Two possible explanations were briefly mentioned in Section 1 above (note 2), where appeal was made to models of genetic and social transmission. That is, your backward-looking concern for consistency may be the result of certain experiences triggering inborn dispositions; alternatively, such a concern may come about as a result of a process of socialization. Neither of these roads, however, leads us back to a model in which your several selves are deliberatively linked in any fashion. These linkages are due to non-deliberative causal, as distinct from essentially deliberative, processes.12 A pressing matter for consideration here, then, is under what conditions this sort of preference could arise as the result of a deliberative process. Now clearly there are going to be cases in which you deliberatively
decide to act in conformity with a previously adopted plan. For example, you have made a bet with someone that you could carry through with your plan, and now want to collect on that bet. Here the explanation is straightforwardly pragmatic. Your deliberative preference for acting in conformity with a previously adopted plan is based on wanting to secure a certain (future) reward. Correspondingly, at the earlier point in time when the plan was to be adopted, you had a sufficient sense of your continuing pragmatic interests to know that you could count on your subsequent self carrying through. That is, your choice at the earlier point in time is shaped by the sense that you will be able to respond in the appropriate manner at the subsequent point in time, and your subsequent choice is shaped by your sense of what you committed yourself to do at the earlier point. The earlier choice is, then, conditioned by an expectation; and the subsequent choice is conditioned by knowledge of what your earlier self has chosen. So here is a model in which a backward-looking preference is deliberatively based. The pattern here is, moreover, paradigmatic of such deliberatively based preferences. Specifically, on the standard account, looking backward is justified by looking forward. Notice that the relevant backward-looking preference in the case just discussed is the one that is held by the subsequent self. What, correspondingly, is the relevant forward-looking preference? The logic of the perspective under consideration is that it must be the preference held by the earlier, not the subsequent, self. It is the earlier self's interest in securing the reward that motivates it to attempt the co-ordination, not any interest that the subsequent self has. At the same time, however, that in which the earlier self has an interest is not its receiving the reward: the reward, by definition, comes later. So it must be supposed that the earlier self has an interest in what happens to its own subsequent self (or selves), that it has, in this sense, an other-regarding concern regarding its own future self (or selves). And given this, it is also clear that the only thing that could motivate the subsequent self to engage in a backward-looking exercise, is its interest in the reward (or its interest in what accrues to even more subsequent selves). The problem is that nothing ensures that the earlier self's concern for the subsequent self (or selves) coincides with the future self's own concerns. This can easily be overlooked, if one casually supposes that both the earlier and the subsequent self are concerned to promote the wellbeing or interests of the subsequent selves, and that "well-being" or "interest" can be given some sort of objective interpretation. On that reading, it would then follow that any lack of coincidence between their respective concerns would be due merely to misinformation (on the part of one or the other of the involved selves). But nothing in the
logic of preferences of the relevant sort assures that the only problem is one of misinformation. The prior self may simply have a different future-regarding agenda than the agenda of the subsequent self. The prior self may be concerned with long-range considerations, while the agenda of the subsequent self may turn out to be, at the moment that counts, markedly myopic - the focus being on satisfying a momentary concern (say, an urgent desire).13 Nothing is essentially changed if we postulate some ongoing prudential concern. Your present self must reckon with the consideration that your future selves, while themselves prudentially oriented, are so from their own perspective. Whether any future self can have a commitment to abide by plans previously settled upon is a matter of what can be prudentially justified from the perspective of that future self. That is, what prudence dictates at any given point in time is a matter of what can be shown to serve longer-range interests from that specific time forward. This means that any given present self has no assurance that it is party to an arrangement to which future selves are committed and in which its own concerns will be addressed. It all depends upon what, from some future vantage point, turns out to be prudential. This is built into the very logic of the concept of prudence: at any given point in time, what is or is not the prudential thing to do concerns what still lies in the future, not what lies in the past. Consider now what happens if there is a divergence between present and future motivating concerns, between present and future preferences. The model that emerges is not one in which selves are to be understood as compromising for the sake of mutual advantage. Rather, one of two alternative remedies is assumed to be appropriate. The structure of the problem may be such that the prior self is in a position to impose its will upon the subsequent self. This yields the model of precommitment that was discussed in the previous section, and which figures centrally in Elster's Ulysses and the Sirens. Alternatively, the structure of the problem may be such that no precommitment is possible: the prior self may have to recognize that, since the other self chooses subsequently, and with a view to maximizing from its own perspective, it has no choice but to anticipate this independent behaviour on the part of the subsequent self, and adjust its own choice behaviour accordingly, that is, to the reality of what it must expect the other self to do. In either case, then, the implicit model of choice is the one already encountered in the discussion of the case in which each present self has no backward-looking concern: that is, the model of the autarkic self. Suppose, finally, that there is as a matter of fact coincidence between present and future concerns. Clearly in this case there can be an adjustment
of choices, and a convergence of expectations. But note that no less in this case, than in the case just discussed, each self is presumed to maximize from its own perspective, given what it expects the other self (or selves) to do. That is, each, from its own perspective, comes to choose and comes to expect the others to choose, in a manner that yields the requisite sequence of behaviours. But the model that is applicable here, then, is simply the intrapersonal analogue to the model of "co-ordination" that is central to virtually all work in game theory, on interpersonal interaction, namely, the model in which the choices of the respective selves (distinct individuals) are in equilibrium. Within the framework of this model, the respective selves do not negotiate or bargain and thus compromise their respective interests or concerns with a view to reaching an arrangement that is mutually beneficial; rather, it just happens that the choice made by each maximizes with respect to that self's concern, given its expectation concerning how each other self will choose.14 Here, once again, then, the appropriate model is the one already encountered, in which each self proceeds to choose in a thoroughly autarkic manner. One can capture these conclusions in a somewhat more formal manner in the following way. It is natural, of course, to think of the initial node of a decision problem as such that whatever set of events led you to that node, those events simply fix the given against which, in accordance with consequential considerations, you now seek to maximize with respect to your present preferences for outcomes realizable within the tree. What the incremental, autarkic perspective presupposes is that each and every choice node in a decision tree presents you with just this sort of problem. Consider any choice node ni within a decision tree, T, and the truncated tree T(ni) that you would confront, were you to reach choice node ni, that is, the set of subplans that you could execute from ni on, together with their associated outcomes. Now construct a decision problem that is isomorphic to the original tree from the node ni onward, but which contains no choice nodes prior to ni, although it can contain reports as to how as a matter of fact you resolved earlier choice options. That is, in this modified tree, ni is now the node at which you are first called upon to make a choice. Call this modified version of the original tree T(n0 -> ni). The controlling assumption, then, is this:
Separability. The subplan you would prefer at a given node ni within a given tree T (on the assumption that you reach that node) must correspond to the plan that you would prefer at the initial node n0 = ni in the modified tree T(n0 -> ni).15
Separability, then, requires coincidence between choice at ni in T and choice at n0 = ni in T(n0 -> ni). On the standard account, it is consequentialism, as characterized above, that determines choice at n0 = ni in T(n0 -> ni) itself. But it is separability itself that drives the autarkic approach to deliberation and decision. It disposes you to reconsider, at each new point in time, whatever plan you originally adopted, and to settle upon a new plan on the basis of whatever, then and there, you judge will maximize your present preferences with respect to outcomes still available.
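Stated compactly (the preference symbol and the primes marking corresponding plans in the modified tree are my notation, not McClennen's): for subplans s and s* available at node ni in T, with counterparts s' and s*' available at the initial node of T(n0 -> ni),

$$ s \succeq_{n_i}^{T} s^{*} \quad\Longleftrightarrow\quad s' \succeq_{n_0}^{T(n_0 \to n_i)} s^{*\prime}. $$

Read this way, separability says that how the truncated tree was reached, and what you preferred before reaching it, can make no difference to the ranking of what remains.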
4. Dynamic Consistency Re-examined
Satisfaction of both consequentialism and separability does not ensure that you choose in a dynamically consistent fashion. This is, as we saw above, the lesson to be learned from being a myopic chooser. As a myopic chooser you satisfy both consequentialism and separability, but you adopt plans only to abandon them later. If you are committed to deliberating in a separable manner, you can achieve dynamic consistency by deliberating in a sophisticated manner. That is, you can confine choice at any given node to (sub)plans that are feasible. Correspondingly, the appropriate criterion of feasibility here is simply this: a (sub)plan at node ni is to be judged feasible if and only if it is consequentially acceptable at each and every successor node to ni at which it directs choice. Feasibility, then, can be determined by working recursively backward, by starting at each potentially last choice point in the decision tree, and then moving backward through the tree until one reaches the initial choice point. At each such point, you are to ask what you would choose there from a separable perspective, and then fold in that information in the form of a judgment of what is feasible at the immediately preceding choice point.16 To illustrate, consider once again the decision problem in Figure 1, where it is assumed that o3 is preferred to o2 and o2 is preferred to o4 at time t0, but o4 is preferred to o3 at time t1, and that these are the only relevant preferences. Here, consequentialism plus separability generates the standard results. Given this ordering of outcomes at t1, consequentialism implies that at t1 in T(n0 -> n1), you will prefer the truncated plan a4 to the truncated plan a3. And this, in turn, by appeal to separability, implies that, at t1 in T, you will prefer a4 to a3. That is, a3 is consequentially unacceptable. This, in turn, implies that at n0 the plan a1-a3 is not feasible, even though, at that time you consequentially prefer it to the other plan, a1-a4. Feasibility considerations, then, determine that only o4 and o2 are realizable by (rational) choice on your part, and since at n0 you prefer o2 to o4, consequentialism determines that the rational choice is plan a2.
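The recursive procedure just described can be made concrete in a few lines (a minimal sketch of sophisticated choice for the problem of Figure 1; the ranking encoding is mine, not the author's):

    # Sophisticated (backward-induction) choice for the problem of Figure 1.
    # Lower rank number = more preferred outcome.
    rank_t0 = {"o3": 0, "o2": 1, "o4": 2}   # preferences held at t0
    rank_t1 = {"o4": 0, "o3": 1, "o2": 2}   # preferences projected for t1

    # Step 1: project the choice at node 2 using the t1 preferences.
    node2 = {"a3": "o3", "a4": "o4"}
    chosen_at_node2 = min(node2, key=lambda a: rank_t1[node2[a]])   # -> "a4"
    # Hence plan a1-a3 is not feasible: it would be abandoned at node 2.

    # Step 2: compare the feasible plans at node 1 using the t0 preferences.
    feasible = {"a1-" + chosen_at_node2: node2[chosen_at_node2], "a2": "o2"}
    chosen_plan = min(feasible, key=lambda plan: rank_t0[feasible[plan]])
    print(chosen_plan, feasible[chosen_plan])   # -> a2 o2: choose a2 outright

A resolute chooser, by contrast, would rank the full plans by the t0 preferences alone and then execute a1-a3; that is exactly where the conflict with separability discussed below arises.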
This is certainly one way to achieve dynamic consistency. In effect, what you do is tailor your plans to what you would expect to do in the future, given various contingencies (including your present choice of an action). In principle, however, dynamic consistency can be achieved in a radically different way. What dynamic consistency requires is an alignment between earlier choice of a plan and subsequent choice of remaining subplans. Rather than regimenting present choice of a plan to projected future choice, the required alignment can be secured, in principle, in just the reverse manner, by regimenting future choice to the originally adopted plan. Let us call an agent who manages to achieve consistency in this way resolute. Conceptually, being resolute involves being committed to carrying out the plan that you initially selected. For example, with regard to Figure 1, to be resolute is to choose, and then proceed to execute, plan a1-a3. Being resolute does not mean being unconditionally committed to execute a chosen plan. It allows for changing plans in the light of new information. All that it requires is that if, on the basis of your preference for outcomes, you adopt a given plan, and if unfolding events are as you had expected them to be, you then proceed to execute that plan.17

It is consistent with being resolute that you simply tyrannize, at some point in time, over your own later selves. Were this to be the only way in which resolute choice was to be understood, this would surely cast doubt on your ability to be deliberatively resolute. Even if your present self desired to tyrannize over your future self, in the absence of some special hypothesis concerning the content of the preferences of your future self, what possible rational ground could your future self have for accepting such a regimen? In such a case, it would seem that you must, if you are realistic, expect that your deliberative resolve will unravel.18 But resoluteness need not be understood to be achievable only in this way. It can function, not as a regimen that your earlier self attempts to impose upon some later self, but as the means whereby the earlier and the later self co-ordinate their choices in a manner that each judges acceptable. The suggestion, to the exploration of which I shall shortly turn, is that such co-ordination can, in certain cases, prove mutually advantageous to one's several time-defined selves, and that this paves the way for a reappraisal of the rationality of being deliberatively resolute.

It must be acknowledged, however, that deliberative resoluteness, no matter how it is based, cannot be squared with the separability principle. In Figure 1, the agent who adopts and then resolutely executes the plan a1-a3, despite being disposed to rank a4 over a3, in the context of a modified decision problem in which the initial choice node is at node 2, violates the separability principle, which requires the choice of a4.19 Since many are convinced that separability is a necessary condition of
rational choice, they conclude that the model of resolute choice must be rejected. It recommends plans that are simply not feasible, at least for a rational agent. Conceptually, however, what we have here is a conflict between a method of achieving dynamic consistency and an alleged principle of rational choice. How is that conflict to be adjudicated? One could, of course, make an appeal to intuition at this point. Unfortunately, the continuing debates within the field of decision theory over the last fifty years suggest that intuitions of this sort tend not to be interpersonally transferable.20 Under such circumstances, an appeal to intuitions is essentially a rhetorical move.21

A more promising approach, I suggest, is to consider once again the kind of pragmatic perspective to which appeal has already been made. Consider once again the case that can be made for being sophisticated rather than myopic. The argument is that the myopic self is liable to being exploited in a manner that works to its own great disadvantage, and since what it stands to lose are resources that any of its time-defined selves could put to use, here is a thoroughly pragmatic argument, and one that can be addressed to each successive self, for being sophisticated rather than myopic. What, then, can be said from this sort of perspective regarding the present issue, namely, the comparative rationality of sophisticated and resolute choice? It is sophisticated rather than resolute choice that can be criticized from this perspective. To illustrate, consider once again the version of the problem in Figure 1 in which plan a2 constitutes a precommitment strategy of paying someone else to execute a choice of o3 over o4 at the second choice node. This is the best option consistent with the separability principle. But if you are resolute you can realize the very same outcome, without having to pay an agent. On the assumption that each of your time-defined selves prefers more to less money, to reason in a separable manner is to create a real intrapersonal dilemma for yourself, in which "rational" interaction with your own future selves leads to an outcome that is intrapersonally suboptimal, or "second-best." That is, each time-defined self does less well than it would have done, if the selves had simply co-ordinated effectively with each other.

Other values must be sacrificed as well. Precommitment devices limit your freedom, since they involve placing yourself in situations in which you do not choose, but have choices made for you. Moreover, they expose you to the risks associated with any procedure that is inflexible. In contrast, the resolute approach is not subject to any of these difficulties. Scarce resources do not have to be expended on precommitment devices or to pay agents; you are the one doing the choosing, and you retain the option of reconsideration insofar as events turn out to be different from what you had anticipated.
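Rough numbers may make the comparison vivid; the figures are hypothetical and serve only to display the structure of the argument. Suppose the outcome secured by keeping the resolution, o3, is worth 10 units of money-like resources, and the agency fee built into the precommitment plan a2 is 1 unit.

    # Hypothetical figures, not given in the text. Both routes end with the
    # o3-type outcome; the sophisticated route reaches it only by paying a
    # third party to foreclose a4 at the later choice node.
    value_of_o3 = 10   # measured in resources every time-defined self wants more of
    agency_fee = 1

    sophisticated_payoff = value_of_o3 - agency_fee   # plan a2: o2 = o3 minus the fee
    resolute_payoff = value_of_o3                     # plan a1-a3, executed as adopted

    assert resolute_payoff > sophisticated_payoff     # the pragmatic case for resoluteness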
Here, then, is a thoroughly pragmatic or consequentialist consideration in favour of being resolute, and against being sophisticated. There is a class of sequential problems in which acceptance of separability generates choice behaviour that is intrapersonally suboptimal, and where this unfortunate consequence can be avoided by choosing in a resolute manner. In at least some cases, I suggest, a more holistic but still consequentialist perspective can be marshalled in support of being resolute as opposed to being sophisticated, and hence, in support of a relaxation of the standard separability assumption.22

The pragmatic argument just rehearsed does not establish that it is always rational to be resolute. It only makes the case for being resolute in certain cases in which there are comparative advantages to being resolute rather than sophisticated, as measured in terms of standard "economic" values of the conservation of scarce resources, freedom, and flexibility. Moreover, nothing has been said about what, within a non-separable framework, a full theory of rational, intrapersonal, sequential choice would look like. At the very least what is needed, in addition, is a theory of what constitutes a fair bargain between one's different, time-defined selves. All that has been argued so far is that there are contexts within which being resolute is a necessary condition of rational, sequential choice. But even this limited conclusion has two connected, and quite powerful, implications. First, insofar as weakness of will is a manifestation of the agent's having succumbed to the "Siren's Song" of incremental reasoning, it may really be a sign of imperfect rationality; and, thus, second, talk of precommitment and the like in such cases is really best understood as addressed to those who are not fully rational.23

To understand the kind of case in which being resolute is pragmatically defensible is also to understand the relation between resoluteness and consequentialism. Being resolute involves, by definition, adopting a two-level, deliberative approach to consequentially oriented choice. At the first level, in settling upon a plan of action, you will compare the consequences of the various available plans, and reject all plans that fail the test of intrapersonal optimality. That is, consequentially oriented considerations will guide you to adopt plans as a means of effectively co-ordinating between your time-defined selves. But, at the second level, with respect to specific choices to be made as you move through a decision tree, the plan actually adopted will then set constraints on subsequent choice. That is, you will take the plan adopted as regulative of choice. Finally, these last remarks suggest that the model of resolute, as distinct from separable, choice provides an account of rationality in terms of which one can make sense of, and defend, a rational commitment to practice rules. But the story that has
been told so far concerns only the intrapersonal problem that arises for the isolated individual. Before turning specifically to the issue of practices, it will prove useful to explore the implications of resolute choice for cases of interpersonal interaction.
5. Interpersonal Choice under Ideal Conditions
What light does the foregoing analysis shed on the problems of interpersonal choice with which I began? I shall focus here on the logically special, but very important case of interactive games that are played under the following "ideal" conditions: (1) all of the players are fully rational; and (2) there is common knowledge of (a) the rationality of the players, (b) the strategy structure of the game for each player, and (c) the preferences that each has with respect to outcomes.24 Just what is implied by (1) remains to be spelled out, of course. (2) commits us to the assumption that there is no asymmetry in the information available to the different players. In particular, any conclusion reached by a player, regarding what choice to make, can be anticipated by the others: there are no hidden reasons. Here is a simple game of this type, one that involves the players choosing in sequence, rather than simultaneously (where the pair of numbers in parentheses to the right of each listed outcome oi gives A's and B's preference ranking, respectively, for that outcome - with a higher number indicating that the outcome is more preferred). See Figure 2.
Figure 2: An assurance game
Each of the outcomes o3 through o6 can be achieved by A and B co-ordinating on this or that plan. By contrast, outcome o2 can be reached by a unilateral move on A's part. Once again, for the sake of the argument to be explored, a2 can be interpreted as a "precommitment" plan whereby B can be assured that if she chooses b1 in response to a1, A will then respond with a3. That is, realizing o2 amounts to realizing o3, although at the cost of an agency fee to be paid to a third party from funds, say, contributed by A. Note also that this interactive situation has embedded in it, from node 2 on, a sequential version of the classic Prisoner's Dilemma game.

Given the specified preference rankings for outcomes, and the standard consequentialist assumption that plans are to be ranked according to the ranking of their associated outcomes, plan a1-b1-a3 might seem to be the most likely candidate for a co-ordination scheme. To be sure, A prefers the outcome associated with the plan a1-b1-a4, but it is unrealistic to suppose that B would agree to co-ordinate on that plan. On the standard view, however, under ideal conditions (of mutual rationality and common knowledge), talk of voluntarily co-ordinating their choices is pointless. Suppose A were to set out to implement the first stage of such a co-ordination scheme, by selecting a1, and suppose, for some reason or other, B were to reciprocate with b1. In such a case, so the argument goes, A would surely select a4. In effect, plan a1-b1-a3 is simply not feasible: it calls upon A to make a choice that A knows he would not make, and, under ideal conditions, B knows this as well. Moreover, B would end up with her least preferred outcome, as the result of a failed attempt at co-ordination. Suppose, alternatively, that A were to select a1, and B were to respond - for the reasons just outlined - by protectively selecting b2: under these conditions, A's best response at node 4 would be a6, and each would then end up with a second least preferred outcome. Once again, all of this is common knowledge. Against the background of these subjunctively characterized conclusions, then, A's best opening choice is not a1, but a2, yielding for each a third least preferred outcome. That is, the equilibrium outcome - and projected solution - for rational preference maximizers is o2.

Notice, however, that the problem in Figure 2 neatly mirrors the essential features of the intrapersonal problem given in Figure 1. The outcome associated with a1-b1-a3 is preferred by each to the outcome associated with a2. But, according to the story just told, the former outcome is not accessible. Why? Under conditions of common knowledge, and on the standard analysis, A cannot expect B to co-operate. Why? A cannot plead that B is basically disposed to be non-co-operative. B's maximizing response to an expectation that A will co-operate is to co-operate herself. A's expectation that B will play defensively derives
solely from the consideration that B must expect that A will, if and when node 3 is reached, choose a4, not a3. Thus, A's quarrel is with himself; or perhaps we should say (given the analysis of the last section), with his own future self!
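The reasoning just rehearsed can be checked with a short backward-induction sketch. Since Figure 2 is not reproduced here, the node labels beyond those named in the text and the exact rank numbers are my own reconstruction, chosen to be consistent with the rankings the text reports (o3 ranked (4,4), o2 third-best for each, o4 A's best and B's worst, o6 second-worst for each; the ranking of o5 is assumed).

    # Backward induction over a reconstruction of the Figure 2 game.
    # Payoffs are (A's rank, B's rank); higher means more preferred.
    payoffs = {"o2": (3, 3), "o3": (4, 4), "o4": (5, 1),
               "o5": (1, 5), "o6": (2, 2)}

    # Node 3 (A chooses a3 -> o3 or a4 -> o4): A picks the A-best outcome.
    at_node3 = max(["o3", "o4"], key=lambda o: payoffs[o][0])          # -> "o4"
    # Node 4 (A chooses between o5 and o6).
    at_node4 = max(["o5", "o6"], key=lambda o: payoffs[o][0])          # -> "o6"
    # Node 2 (B chooses b1, leading to node 3, or b2, leading to node 4).
    at_node2 = max([at_node3, at_node4], key=lambda o: payoffs[o][1])  # -> "o6"
    # Node 1 (A chooses a1, leading to node 2, or a2, yielding o2 directly).
    solution = max([at_node2, "o2"], key=lambda o: payoffs[o][0])      # -> "o2"

    print(solution)  # the standard, separable solution: o2, third-best for each;
                     # a resolute A, anticipated by B, would instead reach o3 (4,4)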
What this suggests, of course, is that the analysis of intrapersonal conflict applies to this situation as well. As already indicated, consequentialism can be invoked to argue that preferences for outcomes are controlling. And once again, it can be noted that this does not, in itself, settle the question of what would qualify as a rational choice for A at node 3. What is requisite, in addition, is an assumption to the effect that A will conceptualize his situation at node 3 as one in which his own past choice behaviour, and the choice behaviour of B, constitute givens against which he must maximize with respect to his preferences for outcomes still realizable at node 3. That is, A functions as an autarkic chooser at node 3. Thus, the argument turns once again on the separability assumption formulated above in Section 2. It is consequentialism together with separability that yields the standard result regarding how A must choose.

The conclusion of the last section is that separability in the context of intrapersonal choice must be rejected. That conclusion carries over to the context of the type of sequential problem just discussed, where interaction is sequential and takes place under conditions of common knowledge. Here as before, A's commitment to separability precludes both A and B from realizing gains that could otherwise be realized. Within this framework, then, separability cannot be taken as an unqualified condition of rational choice. Given common knowledge, the rational choice for A is to be resolute, and since B will anticipate that A will choose in this manner, her best move is to respond with b1 for an outcome which ranks second-best (4,4) on each participant's preference ordering.

Now this line of reasoning is subject to an important extension. Consider first a very minor modification of the interactive situation presented in Figure 2, according to which A and B must jointly decide at node 1 either to employ some sort of enforcement mechanism (a2), or to reach an agreement on a co-operative strategy, say, a1-b1-a3, which will govern the remainder of the interactive situation and which is to be voluntarily executed by each. Once again, of course, this interactive situation has embedded in it, from choice node 2 on, a sequential version of the classic Prisoner's Dilemma game. And once again the standard argument is that the rational outcome of this interactive situation must be o2 - for all the reasons just rehearsed in connection with the interactive situation in Figure 2. Were A and B to agree at node 1 to co-ordinate their choices at nodes 2 and 3, that would provide no deliberative
reason for A to choose a3 rather than a4 at node 3, unless, of course, A just happened to prefer to follow through on such an agreement. Now, modify the situation once more to that given in Figure 3. This is a sequential game, the second stage of which involves a simultaneous choice problem. At node 1, A and B must not only agree upon a plan (say, to jointly choose "co-operation" at the second stage) but also decide whether to institute an enforcement mechanism that will ensure that this agreement is binding (plan c2) or whether to proceed forward and attempt to voluntarily implement their agreement (c1). Once again, both players would be better off choosing c1 and then mutually honouring their agreement to co-ordinate, than they would be by agreeing to an enforcement mechanism. But, if they were to agree upon c1, what they then face is a classic, symmetrical version of the Prisoner's Dilemma, in which both the dominance principle and the equilibrium requirement mandate that each defect. Under these conditions, o2 is the rational outcome of this interactive situation, even when it is played under the ideal conditions of common knowledge.

What does the work here is not just the consequentialist assumption that each player is disposed to choose so as to maximize preferences for outcomes, but, rather, once again, an assumption about how such consequential reasoning is anchored. In effect, the problem of simultaneous interpersonal choice is conceptualized in the same manner that both the problem of intrapersonal choice and the problem of interpersonal sequential choice is conceptualized, notwithstanding that choices are now to be made simultaneously, rather than sequentially. That is, once again, the supposition is that you will choose in an autarkic manner. To see this, factor out the consideration that choices are to be made simultaneously, and focus on a sequential game, where the other person plays first and you play second. In this case, as we have already seen,
Figure 3: A joint assurance game
the supposition is that you will reactively maximize your present preferences for outcomes against the given behaviour of the other person. That is, the choice behaviour of the other person is taken to be a given, just like the given choice behaviour of your past self, and not a choice that calls for a co-ordinating move on your part. Now consider the same situation except that choices are to be made simultaneously. Here you cannot take the choice behaviour of the other person as a given. But suppose that you are in a position to make an estimate of how that other person will choose. Then the supposition is, once again, that you will reactively maximize your present preferences for outcomes, this time against your best estimate of how the other player will choose.25 In short, just as the distinction between your own past choice behaviour and the past choice behaviour of another person is strategically insignificant, so also is the distinction between choosing after another has chosen, and choosing simultaneously. In the latter case, you are simply thrown back on having to maximize against your best estimate of how the other person will choose.26 What is implicit in this way of thinking can be captured, then, in a separability principle that parallels the one for intrapersonal choice problems:

Separability (for two-person, interpersonal, synchronous choice): Let G be any two-person game, and let D be a problem that is isomorphic to G with respect to the strategy and payoff structure of the game for both you and the other player, except that the choice behaviour of the other player has been fixed at a certain value that you either know or can reliably estimate - so that what you face, in effect, is a situation in which all that remains to be resolved is your own choice of a strategy. In this case, what you choose in G must coincide with what you choose in D.27

However intuitively acceptable this principle is, within the context of ideal games it is subject to precisely the objection raised against the intrapersonal separability principle. As the classic Prisoner's Dilemma game illustrates, persons who are disposed to choose in this fashion simply do less well, in a significantly large class of ideal games, than those who are disposed to reason from a non-separable perspective, and with a view to realizing the gains that can be secured from effective co-operation. Here, then, is another context within which there is a pragmatic argument against separable and in favour of resolute choice.28

There are a number of plausible extensions of resolute reasoning, to ideal games involving more than two players, and to iterated games
played under ideal conditions. That is, the logic of the argument is not confined to the two-person, "one-shot" case. All of the interactive situations just considered, however, are overly simple in one important respect: there is only one outcome that is Pareto-efficient relative to the standard, non-co-operative, equilibrium solution. What has been offered, then, is at best only a necessary condition of an adequate solution concept for ideal games. What is needed is a theory of co-operative (as distinct from non-co-operative) games, that is, a well-developed theory of (explicit and/or tacit) bargaining, for selecting among outcomes that are Pareto-efficient relative to the equilibrium outcomes of the game (or some other appropriate baseline), and which are themselves Pareto-optimal.29 Moreover, for any theory of bargaining that can serve as a normative guide to help people avoid suboptimal outcomes, perhaps the key issue is what constitutes a fair bargain.30 There is also the very important question concerning to what extent the results for ideal games can be extended to games played under more realistic conditions, where players may be uncertain as to the rationality of the other players and where various informational asymmetries obtain. Here, questions of assurance are bound to loom large, even for agents who are otherwise predisposed, as a result of rational deliberation, to co-operate. However, all of this pertains more to the question of the scope of the argument just rehearsed, and the more pressing concern now is to see what implication this sort of argument has for the problem originally posed - the problem of whether an agent can make a rational commitment to act subject to the constraint of practice rules.

6. Rules, Resoluteness, and Rationality
I argued in Section 1 that the standard model of rational choice does not provide a secure footing for the rationality of choosing subject to the constraints of practice rules. What I now want to argue is that the alternative model presented in the intervening sections opens the door to understanding and defending a rational commitment to practice rules. Consider first the concept of a practice. One can mark in the abstract concept of being resolute a model for just the sort of hierarchical structure that characterizes practice-constrained choice, both for the kinds of practices that the isolated self may adopt, but also for many of those practices that structure our interactions with others. One has only to observe that a practice can be understood to be a type of plan, and to recall that it is precisely the resolute self that is capable of taking a plan that has been adopted as regulative of future choice, even in the face of what would otherwise count as good reasons to choose differently. But why is a practice to be taken as regulative? Because this is what is needed if individuals are to co-ordinate their actions, or if the
isolated individual is to co-ordinate actions over time. For co-ordination to take place, it is not enough that each does what each judges to be "best"; nor is it even enough that each conforms to some rule that each judges would best serve the ends in question, if all were to conform to it. To the contrary, co-ordination requires a mutual structuring of activity in terms of a prior, established rule having both normative and positive import: that is, a rule to which all are expected to adhere, and to which it is expected that all (or most) will in fact adhere.31 The rules defining a practice, then, are to be understood as prior to the choices of action that arise under it in both a temporal and a normative sense. There is temporal priority, because what is regulative is a rule that is already established. There is normative priority because the rule takes precedence, at least in principle, over any countervailing choice supporting consideration that can arise at the level of choice of action within the context of a situation to which the rule is applicable. The logic of practice rules, so conceived, then, involves the notion that one cannot decide to overrule such a constraint in a given situation to which the practice rule applies by directly appealing to whatever considerations could be adduced in support of the practice itself. Those who participate in such a practice abdicate, in effect, their "right" to make decisions case by case by direct appeal to such underlying considerations. The sense in which a practice rule is prior to, and establishes nondiscretionary constraints on choice, is already provided for in the model of resolute choice - in the notion that choice in certain sequential decision problems is constrained by a prior decision to pursue a plan, or a prior (tacit or explicit) understanding as to how choices by quite different individuals are to be co-ordinated. That is, the account of nonseparable deliberation and choice explored in previous sections provides a model of the kind of intentional co-ordination that is essential to adopting and choosing subject to the constraints of a practice. As I argued at the close of Section 4, the intrapersonal co-ordination problem is resolved by adopting a two-level approach to deliberation and choice. At the first level, consequentially oriented considerations will lead one to adopt a specific plan; and at the second level, the plan that is in fact adopted will set constraints on subsequent choice. In this setting, what is relevant to subsequent intrapersonal choice is not what plan one might have adopted, or what plan it would have been best for one to adopt (by reference to some underlying consideration), but what plan one did in fact adopt. Correspondingly, what is relevant in certain interpersonal decision problems is not what plan the participating individuals might have adopted, or what plan it might have been
best to adopt (once again, by reference to some underlying consideration), but what plans are already in place. In each case, then, there is both a positive or "fact of the matter" and a normative dimension to the reference point that emerges for deliberation and decision: what functions as directive for choice is the plan that as a matter of fact has been chosen. This conceptual account holds, it should be noted, even if resoluteness is conceived as merely the imposition, by the earlier self, of a regimen that the later self accepts, or, in the case of interpersonal choice, of a pure convention, among a group of people, regarding how each is to constrain choice in certain situations. Whether a given practice is fully justified turns, of course, on what arguments can be constructed for the rules themselves. What I have argued is that the logic of interactive (intrapersonal or interpersonal) situations is typically such that practice rules are required for the effective promotion of the objectives of the participants. The notion is that there are cases in which the concerns of each cannot be served unless the future is tied down and plans co-ordinated in advance. In such cases each person's deciding what to do by reference to her own concerns, case by case, will lead to confusion, and the attempt to co-ordinate behaviour simply by each trying to predict the behaviour of the others will fail.32 When this is the case, one can appeal to the model of non-separable deliberation and choice, to show that a commitment to practice rules can be defended pragmatically, by reference to consequences that are assessed from a non-separable, global perspective. Nothing need be presupposed here regarding what each takes to be the objectives that such a co-ordination scheme is to serve. In particular, there is no reason to suppose that there is some one or more objectives that all participants share. Divergence with respect to ends can be offset by convergence with respect to means - by a shared sense that the objectives of each can be more effectively promoted by the adoption of a co-ordination scheme. Correspondingly, there is no need to introduce some ad hoc assumption about persons just happening to attach value to choosing in accordance with such rules. Nor are such persons "rulebound" in a way that can be criticized from the perspective of a theory of consequential choice.33 The story to be told here can pivot fully and uncompromisingly on consequential concerns. It can be a story of individuals who come to regulate their interactions with themselves over time, and with one another, in accordance with constraints to which time-indexed selves, or distinct individuals, can mutually assent, and who do this from nothing more than a sense of the enhanced power that such a new form of activity gives them with respect to furthering their own projects and interests.34
7. Conclusion
I have sought to argue here a number of things. First, the standard way of thinking about rationality in both intrapersonal and interpersonal contexts unacceptably fails to yield a theory that can render intelligible the notion of having a commitment to practice rules, much less provide for the rationality of being so committed. Second, this feature of the standard theory can be traced back to a basic presupposition of virtually all contemporary accounts of rationality, namely, that consequential reasoning inevitably takes place within the framework of a separability principle. Third, there is a distinct account that renders the notion of a commitment to rules both intelligible and rational: the resolute model. Finally and more ambitiously, I have sought to show that the resolute model can be defended by appeal to consequentialism itself. The notion is that a consequential argument can be constructed for adopting a more holistic or global approach to deliberation and choice, and this entails, in turn, that in certain cases one should deliberatively suspend the separability principle. In terms of the more familiar notion of practices, the conclusion is that a commitment to practice rules can be defended endogenously, from within a consequentially oriented framework. Alternatively put, the logical structure of intrapersonal and interpersonal co-ordination problems is such that a viable version of consequentialism will be a version of rule consequentialism, in which the notion of a rational commitment to extant rules has a central place.
Acknowledgments
I am especially indebted to the following for helpful comments during the preparation of this paper: Bruno Verbeek, David Schmidtz, Christopher Morris, Mike Robins, and the graduate students in two separate seminars that were conducted at Bowling Green State University. Versions of this paper were read at a CREA Conference held in Normandy, France, in June of 1993, and at a Conference on Modeling Rational and Moral Agents, held at Simon Fraser University, Vancouver, Canada, in February of 1994.

Notes
1 The manner in which I shall characterize the distinction between maxims and the rules of practices overlaps, but does not quite coincide with the way in which that distinction is drawn in J. Rawls, "Two concepts of rules," Philosophical Review, 64 (1955): 3-32. Following B. J. Diggs, "Rules and utilitarianism," American Philosophical Quarterly, 1 (1964): 32-44, I want to focus on a somewhat broader class of practices than Rawls does.
2 The qualifier, "rational," is crucial here. I am not denying the obvious fact that one can offer other accounts of an individual's commitment to practice rules. At least all of the following are possible: (1) some persons simply have a preference for acting subject to the constraints of such rules; (2) the commitment is the result of an essentially "unconscious" process of socialization; (3) the disposition to make such commitments is the upshot of an evolutionary process. The problem with (1) is that it is ad hoc. Explanation (2) is the one most favoured, it would seem, by economists. See, for example, K. Arrow, "Political and Economic Evaluation of Social Effects and Externalities," in M. Intriligator (ed.), Frontiers of Quantitative Economics (Amsterdam: North-Holland, 1971), pp. 3-31; and M. W. Reder, "The place of ethics in the theory of production," in M. J. Boskin (ed.), Economics and Human Welfare: Essays in Honor of Tibor Scitovsky (New York: Academic Press, 1979), pp. 133-46. On this sort of account, of course, one does have a rational motive for ensuring that others are socialized, since a steady commitment on the part of others to an ethics of rules will typically work to one's own advantage. More recently, explanation (3) has been promoted by both economists and philosophers. For a powerfully argued version of this sort of explanation, see R. H. Frank, Passions within Reason: The Strategic Role of the Emotions (New York: W. W. Norton, 1988). I do not deny the relevance of either (2) or (3). My concern is rather with the nearly ubiquitous presupposition that such a commitment could not arise as the result of rational deliberation. That presupposition, I shall argue, is implausible.
3 For a sense of the extraordinary range of stories that can be told here, see, for example, R. H. Strotz, "Myopia and inconsistency in dynamic utility maximization," Review of Economic Studies, 23 (1956): 149-58; P. Hammond, "Changing tastes and coherent dynamic choice," Review of Economic Studies, 43 (1976): 159-73, and "Dynamic restrictions on metastatic choice," Economica, 44 (1977): 337-50; M. E. Yaari, "Endogenous changes in tastes: A philosophical discussion," Erkenntnis, 11 (1977): 157-96; J. Elster, Ulysses and the Sirens: Studies in Rationality and Irrationality (Cambridge: Cambridge University Press, 1979); E. F. McClennen, Rationality and Dynamic Choice: Foundational Explorations (Cambridge: Cambridge University Press, 1990); and G. Ainslie, Picoeconomics (Cambridge: Cambridge University Press, 1993).
4 For an illuminating discussion of planning in the context of changing information, see M. Bratman, Intention, Plans and Practical Reason (Cambridge: Harvard University Press, 1987).
5 See F. P. Ramsey, "Truth and probability," in R. B. Braithwaite (ed.), Foundations of Mathematics and Other Logical Essays (London: Routledge & Kegan Paul, 1931), pp. 156-98; D. Davidson, J. McKinsey, and P. Suppes, "Outlines of a formal theory of value, I," Philosophy of Science, 22 (1955): 60-80; F. Schick, "Dutch Bookies and Money Pumps," Journal of Philosophy, 83 (1986): 112-19; and E. F. McClennen and P. Found, "Dutch Books and Money Pumps," Theory and Decision (forthcoming).
6 Pertinent discussions of consequentialism are to be found in P. Hammond, "Consequential Foundations for Expected Utility," Theory and Decision, 25
(1988): 25-78; E. F. McClennen, Rationality and Dynamic Choice: Foundational Explorations (Cambridge: Cambridge University Press, 1990), pp. 144-46, in particular; I. Levi, "Consequentialism and sequential choice," in M. Bacharach and S. Hurley, Foundations of Decision Theory (Oxford: Basil Blackwell, 1991), pp. 92-122; and J. Broome, Weighing Goods (Oxford: Basil Blackwell, 1991), pp. 1-16, 123-26.
7 The qualifier "still" is important here, since as you move through the tree, certain opportunities are foregone; paths that were in fact not taken lead to outcomes that are, then, no longer possible.
8 Formally speaking, it would seem that concerns of this sort can be captured within a consequentialist framework by working with a more permissive notion of what counts as an outcome. Someone who now prefers to make choices that are consistent with choices made earlier can be said to view the path by which they reach a given outcome (in the more ordinary sense of that term) as part of the outcome. See the references in note 6. Once again, however, what is relevant are the preferences entertained by you now.
9 By the same token, of course, nothing on this account mandates that you have such preferences. That is, it is also possible that you have virtually no, or at best only a very imperfect, commitment to past decisions.
10 The point is that each of your future selves stands to its past selves in just the same relation that your present self stands to your past selves. But as just observed, in the case in question, the present self does not conceptualize its deliberative problem in such a way that deliberation could issue in a decision to co-ordinate with its past self.
11 Autarky ordinarily implies not just independence but also self-sufficiency. What will emerge shortly is that the time-defined self does less well by exercising independent choice than it would by entering into a co-operative scheme of choice. In this respect, it can realize only an imperfect form of self-sufficiency.
12 If I have not misunderstood D. Davidson's argument in "Deception and Division" (in J. Elster, ed., The Multiple Self [Cambridge: Cambridge University Press, 1986], pp. 79-92), this appears to be the position that he adopts when he argues that there is no reasoning that extends across the boundaries of the divided self, only causal or power relations. Since I propose to challenge this assumption, it seems clear to me that our views on both rationality and, for example, weakness of the will, significantly diverge. I must leave to another paper, however, the task of sorting out and comparing our respective viewpoints.
13 This is the case upon which Ainslie focuses in Picoeconomics. Once again, space considerations preclude my exploring the relation between my own account of dynamic intrapersonal choice and that which is to be found in Ainslie's most interesting and insightful work.
14 I shall return to this point in Section 5, below. For the present it perhaps will suffice to remark that the concept of choices that are in equilibrium is central to the work of T. Schelling, The Strategy of Conflict (Cambridge: Harvard University Press, 1960), David Lewis, Convention: A Philosophical Study (Cambridge: Harvard University Press, 1969), and virtually all those who have subsequently explored the nature of co-ordination games.
15 The condition formulated here constitutes a generalization of the one formulated in my Rationality and Dynamic Choice.
16 For a much fuller treatment of the more technical details of this, see my Rationality and Dynamic Choice, chaps. 6-8.
17 Supposing that preference tout court determines choice, could it be argued that if you are resolute, you will face no preference shift at all: what you end up doing, ex post, is what you prefer to do, both ex ante and ex post? To leave the matter there, I think, would be to provide an account suited only to what Sen characterizes as "rational fools." See Amartya Sen, "Rational fools: A critique of the behavioral foundations of economic theory," Philosophy and Public Affairs, 6 (1977): 317-44. A more plausible approach would involve an appeal to the notion of counter-preferential choice, or of second-order preferences. See, in particular, J. Raz, Practical Reason and Norms (Princeton, NJ: Princeton University Press, 1990), ch. 1, on exclusionary reasons; H. G. Frankfurt, "Freedom of the Will and the Concept of a Person," Journal of Philosophy, 67 (1971): 5-20; and Amartya Sen, "Rational fools."
18 It is interesting to note, in contrast, that tyranny is exactly what the sophisticated self achieves, by the device of precommitment. Ulysses before he hears the Sirens does not respect the preferences of Ulysses after he hears the Sirens, and once he precommits, his later self has no choice but to accept the constraints imposed by his earlier self. His resolve, then, does not unravel; but this is simply because he has tied his hands in advance.
19 Once again, of course, as discussed earlier, you might just happen to be the sort of person who values choosing in a manner that is consistent with earlier choices made. Given preferences of the type in question, however, you have no need to be resolute in the sense with which I am concerned: ordinary motivations carry you through.
20 In such cases, it surely makes more sense to invoke a principle of tolerance, and let each theorist nurse his or her own intuitions. On this reading, however, separability has only limited, inter-subjective standing, that is, standing only within the circle of the committed.
21 Some will insist that the separability principle, which validates sophisticated rather than resolute choice, speaks to a fundamental requirement of consistency, and thus that which appeals to pragmatic considerations can have no force here. The root notion is presumably that there must be a match between what you are prepared to choose at some particular node in a decision tree, and what you would choose in the modified version of the problem, in which that choice node becomes the initial choice node. But why is such a match required? If one appeals to intuition here, then one simply arrives back at the point already identified in the text, where argument leaves off
and rhetoric begins. Moreover, in this instance my sense is that the "intuition" which underpins the separability principle is essentially the product of a confusion arising from the manner in which two distinct conditions - consequentialism and separability - are intertwined, and that if any condition is intuitively secure, it is consequentialism rather than separability.
22 Some, of course, will be inclined to resist this conclusion. But those who resist must embrace the rather paradoxical position that a fully rational person, faced with making decisions over time, will do less well in terms of the promotion of these standard values than one who is capable of a special sort of "irrationality." The literature of the last two decades bears testimony to the great number who have, with varying degrees of reluctance, had to embrace this odd conclusion. For a sampling, see M. E. Yaari, "Endogenous changes in tastes"; J. Elster, Ulysses and the Sirens; D. Parfit, Reasons and Persons (Oxford: Clarendon Press, 1984); and R. Nozick, The Nature of Rationality (Princeton: Princeton University Press, 1993).
23 It is instructive to consider how Elster approaches this issue in Ulysses and the Sirens. A central claim of that book is that perfect rationality involves a capacity to relate to the future, not simply in the sense of being able to look farther ahead, but also being able to wait and employ indirect strategies. That is, it involves being able to say no to an attractive, short-run advantage, a local maximum, in order to achieve something even better, a global maximum. Elster also argues, however, that human beings manifest this capacity imperfectly and, thus, have to settle for the second-best strategy of precommitment. Now, precommitment is a form of sophisticated choice. How, then, are we to understand the first-best form of reasoning, the one that lies beyond the reach of the human deliberator? The global maximum, it would seem, is precisely what can be achieved by the resolute chooser. Why does Elster conclude that the global maximum is beyond our reach? Because we are subject to weakness of will. What accounts for the latter? Elster offers a variety of suggestions, but he also - following a suggestion of D. Davidson - concludes that weakness of will is a form of surdity in which the causal processes of the mind operate behind the back of the deliberating self. This is framed, moreover, by an insistence that to provide an explanation of weakness of will is different from offering a strategy for overcoming it. In contrast, the account I have offered interprets at least one form of weakness of will not as a surdity, but as a matter of an error in deliberation, arising from a conceptual conflation of consequentialism with separability - of confusing a particular manner of reasoning from consequences, with reasoning in general with respect to consequences. This diagnosis, moreover, contrary to both Davidson and Elster, does appear to smooth the way to a cure. To grasp that there is a confusion here is to realize that there is an alternative, and more consequentially defensible approach to dynamic choice, which is captured in the notion of being resolute when this works to the mutual advantage of one's several time-defined selves. For a more detailed discussion of the relation between the argument pursued here and Elster's work, see E. F. McClennen, Rationality and Dynamic Choice, Section 13.7.
24 I focus just on these games so as to keep the exposition within bounds. This class of games, however, is pivotal for the whole theory of games.
25 This, then, is a fixed point of the standard theory of games: if you are rational your choice must be a preference-maximizing response to what, at the moment of choice, you expect the other player to do. There is a huge literature on refinements in, and modifications of, this way of thinking about rational interpersonal choice. What is basic is the concept of an equilibrium of choices, as developed originally by J. F. Nash, in "Non-cooperative games," Annals of Mathematics, 54 (1951): 286-95. A most useful exposition is to be found in R. D. Luce and H. Raiffa, Games and Decisions (New York: John Wiley, 1958), ch. 4. For a sense of the wide range of variations on, and modifications in, this way of thinking, see in particular J. B. Kadane and P. D. Larkey, "Subjective probability and the theory of games," Management Science, 28 (1982): 113-20; B. D. Bernheim, "Axiomatic characterizations of rational choice in strategic environments," Scandinavian Journal of Economics, 88 (1986): 473-88; and W. Harper, "Ratifiability and refinements (in two-person noncooperative games)," in M. Bacharach and S. Hurley (eds.), Foundations of Decision Theory (Oxford: Basil Blackwell, 1991), pp. 263-93. I have tried to explain what I find unconvincing about all of these approaches in "The Theory of Rationality for Ideal Games," Philosophical Studies, 65 (1992): 193-215.
26 Note, however, that it is not a matter of strategic indifference whether you play first rather than simultaneously. The player who goes first, just like one's earlier self, is faced with the task of determining what will maximize her present preferences for outcomes, given that the player who goes second will maximize in an autarkic manner.
27 Once again I have modified the formulation of the relevant separability principle, specifically the one that I employ in "The Theory of Rationality for Ideal Games," so as to leave open the possibility that an agent might just happen (for some non-deliberative reason) to have a preference for coordinating her choice with the other participating agent.
28 It might be objected, of course, that in a game such as a simultaneous choice Prisoner's Dilemma, you will have a quite distinct reason for choosing the non-co-operative strategy, namely, so as to minimize the loss that the other person could impose on you. But this argument cannot be sustained within the context of ideal games played under conditions of common knowledge. Under such conditions, once the separability assumption is replaced by the assumption that rational players will resolutely act so as to secure gains that co-ordination can make possible, each will expect the other to co-operate, and thus the risk factor is eliminated. There is more that needs to be said here, of course, since two individuals each of whom
is disposed to conditionally co-operate may fail to co-operate even under conditions of common knowledge. The problem is simply that knowing the other to be a conditional co-operator does not ensure that the other will co-operate. There may, in effect, be no decoupling of consequent choice directives from their antecedent conditions. See H. Smith, "Deriving morality from rationality," in P. Vallentyne, ed., Contractarianism and Rational Choice (Cambridge: Cambridge University Press, 1991), pp. 229-53; and P. Danielson, Artificial Morality (London: Routledge, 1992).
29 It might also be objected that in such games there is a distinct argument for taking mutual non-co-operation as the rational outcome, an argument that is based on an appeal to a principle of dominance with respect to outcomes. But dominance considerations, no less than equilibrium considerations, carry little weight in contexts in which a case can be made for a co-ordinated approach to choice. Just how problematic dominance reasoning can be is revealed in N. Howard, The Paradoxes of Rationality (Cambridge: MIT Press, 1971). Howard is forced to admit that his reliance on the dominance principle in his own theory of "meta-games" generates a serious paradox of rationality. I am suggesting, in effect, that we must march in exactly the opposite direction to that taken by virtually the entire discipline in recent years. But, then, radical measures are needed if game theory is to be rescued from the absurdities generated by the standard theory. What the standard theory offers is a marvellous elaboration of the behaviour of "rational fools" (if I may be allowed to borrow that phrase from Sen). In this regard, I have found N. Howard, "A Piagetian approach to decision and game theory," in C. A. Hooker, J. J. Leach, and E. F. McClennen (eds.), Foundations and Applications of Decision Theory (Dordrecht: D. Reidel, 1978), pp. 205-25, most useful.
30 This is, of course, something that is central to the argument in Gauthier's Morals by Agreement. I have tried to say something along similar lines in "Justice and the problem of stability," Philosophy and Public Affairs, 18 (1989): 3-30, and in "Foundational explorations for a normative theory of political economy," Constitutional Political Economy, 1 (1990): 67-99.
31 Space considerations preclude my exploring the relationship between the sort of intentional co-ordination I have in mind here and that which Bratman discusses in "Shared cooperative activity," Philosophical Review, 102 (1993): 327-41. Clearly, however, what I have in mind is only a species of the genus that he delineates.
32 I take this phrasing from Rawls, "Two concepts of rules," p. 24.
33 Bratman would disagree. He levels this charge against resolute choice in "Planning and the stability of intention," Minds and Machines, 2 (1992): 1-16. See, however, the rejoinder by L. De Helian and E. F. McClennen, "Planning and the stability of intention: A comment," Minds and Machines, 3 (1993): 319-33.
34 The phrase, "a new form of activity" is taken from Rawls, "Two concepts of rules," p. 24.
3
Intention and Deliberation
David Gauthier
Successful deliberation about action in the future gives rise to an intention. Michael Bratman expresses the orthodox view about how deliberation, action, and intention are related when he says, "But in deliberation about the future we deliberate about what to do then, not what to intend now, though of course a decision about what to do later leads to an intention now so to act later ... This means that in such deliberation about the future the desire-belief reasons we are to consider are reasons for various ways we might act later" (Bratman 1987, p. 103). I intend to challenge this view, arguing that it rests on an incomplete account of the relations among deliberation, intention, and action. In particular, I want to reject the contrast that Bratman draws between deliberating about what to do in the future, and deliberating about what to intend now. And in rejecting this contrast, I want also to reject his claim that in deliberating about the future we are to consider only the desire-belief reasons that we should expect to have at the time of action. Bratman wants to replace the desire-belief theory of intention in action with what he calls the planning theory. But he does not go far enough; he accepts too much of the desire-belief model to exploit fully the resources that planning offers. But this paper is not directed primarily towards a critique of Bratman. Rather, I intend to use some of his views as a foil for my own position. Of course, I can develop only a small part of a theory of intention and deliberation in this paper. For my present purposes, I intend not to question parts of the desire-belief model that now seem to me to be doubtful. For Bratman, "the agent's desires and beliefs at a certain time provide her with reasons for acting in various ways at that time" (Bratman 1987, p. 15), where desires and beliefs are taken as severally necessary and jointly sufficient to provide these reasons. He accepts desire-belief reasons, although he rejects the simplistic account of their role that the desire-belief model gives. I have come to have serious misgivings about the very existence of desire-belief reasons, but in this paper I propose largely to ignore those misgivings. It would, I now
think, be better to say simply that an agent's beliefs, her representations of how the world would be at a certain time, and how it might be were she to perform the various actions possible for her at that time, give her reasons for and against performing these actions. But nothing in this paper will turn on whether we should employ the desire-belief model or an alternative pure belief model in formulating what I shall call the agent's outcome-oriented reasons for acting. I do not deny that an agent has outcome-oriented reasons. But I do deny both that she has only outcome-oriented reasons, and that in deliberating about the future, she should always consider only the outcome-oriented reasons she would expect to have at the time of action. Of course, deliberation about the future concerns what to do in that future. Successful deliberation about the future typically concludes in a decision now to perform some action then. (The decision may or may not be explicitly conditional, but for present purposes I shall leave conditionality to one side and take successful future-oriented deliberation to conclude simply in a decision.) And, of course, the reasons for acting that one expects to have then must enter into one's deliberation. Indeed, I want to insist that if one deliberates rationally, then what one decides to do must be what one expects to have adequate reason to do. In determining this, one certainly must take into account the outcome-oriented considerations that one expects would be relevant at the time of action. But there are, or may be, other reasons. Suppose there are not. Consider this situation. You offer to help me now provided I help you next week. Or so you may say, but you can't make helping me now literally dependent on my helping you next week. You can, of course, make your helping me now depend on your expectation of my help next week, where you seek to make your expectation depend on whether or not I sincerely agree to help you. You ask me whether I shall return your help, and if I say that indeed I shall, and if you take this to be a commitment, or an expression of firm intention, then you will help me. So now I am faced with deciding what to say, and whether to be sincere in what I say. Let us suppose that I judge that in all likelihood, I shall be better off if you help me now and I return the help next week than if you don't help me now. I think that I have good desire-belief or outcome-oriented reasons to bring it about that you help me, even if I were then to help you. And so, deliberating about what to do now, I think that I should say that I agree to return your help. But let us suppose that, for whatever reason, I think that I should say this only if I am sincere - only, then, if I actually intend to return your help. This leads me to deliberate further - not about what I shall do now, but about what I shall do next week. Unless I decide now that I shall return your help next week, I shall not decide to say that I agree to return your help.
But the outcome-oriented reasons that I expect to have next week may not favour my then returning your help. For, let us suppose, I am not concerned about our ongoing relationship, or about third-party reputation effects. Although I should like your help, I do not care greatly for you, I do not expect our paths to cross in future, and I do not expect to associate with those persons who would be likely to find out about my not returning your help and think ill of me in consequence. I can see quite clearly that come next week, if you have helped me I shall have gained what I want and nothing, or not enough, in my outcomeoriented reasons, will speak in favour of reciprocating. Faced with the need to decide what to say about what I shall do next week, I have deliberated about what I shall do then. And I have deliberated in terms of the outcome-oriented reasons for the various ways I might then act. Since they are reasons for not reciprocating, that is what I decide to do next week, should you help me now. Hence my intention - if I am rational - must be not to reciprocate. I cannot rationally intend to do what I have decided not to do. And so what I must say now, given that I am for whatever reason not willing to say that I agree to do what I intend not to do, is that I do not agree to return your help. And so you do not help me and we both lose out - each of us would do better from mutual assistance than no assistance. I have deliberated about what to do then, rather than what to intend now, and I have deliberated about what to do then in terms of the outcome-oriented reasons I should expect to have then. I have followed Bratman's lead, and I have paid a high price for it. For if I were to deliberate about what to intend now, then my outcome-oriented reasons would speak in favour of intending to reciprocate. And if I were to form the intention to reciprocate, then I should, of course, say that I agreed so to act. You would help me and - if I then acted on my intention - we should both benefit. Bratman contrasts deliberating about what to do later with deliberating about what to intend now. But this way of contrasting the two modes of deliberation is misleading. For just as deciding what to do in the future leads to an intention now so to act later, so forming an intention about what to do in the future leads to, or is, a decision now so to act later. To form the intention to reciprocate next week is to decide now to reciprocate. And so deliberating about what to intend now is, and must be, deliberating about what to do later. To be sure, there seems to be a difference between deliberating about what to do directly, and deliberating about what to do as a consequence of what to intend. The former proceeds in terms of reasons that one expects to have at the time of performance for acting in one way rather than another, whereas the latter proceeds in terms of reasons that one has at the time of deliberation for intending to act in one way rather than
another. In many contexts the two sets of reasons coincide. But in my example the outcome-oriented reasons do not coincide, since the intention to reciprocate has effects that actually reciprocating does not have, and these effects manifest themselves in the difference between my outcome-oriented reasons for intending and my outcome-oriented reasons for performing. The contrast that Bratman wanted to draw would seem to relate to this difference, between deliberating on the basis of outcome-oriented or desire-belief reasons for acting, and deliberating on the basis of outcome-oriented or desire-belief reasons for intending to act. But if we draw it in this way, it is surely evident that the latter mode of "deliberation" would be absurd. Suppose, for example, that you will confer some minor favour on me if you expect me to act later in a way that would be very costly to me - the cost far outweighing the benefit from your favour. So I have sufficient outcome-oriented reasons not to perform the costly act, and would have such reasons even if performing the act were the means to gain the favour. Deliberating on the basis of my reasons for acting, I should decide not to perform the costly act. But if I were to form the intention, I should gain the favour, and since intending is one thing and acting is another, it may seem that I have adequate outcome-oriented reasons to form the intention to perform the costly act. But this is to decide to perform the act - which is surely absurd. I agree. To focus deliberation purely on outcome-oriented reasons for intending, where the intention is divorced from the act, would indeed be absurd. But we should not therefore conclude that in deliberating about what to do, considerations relevant to the intention are to be dismissed. For some of these considerations, I shall argue, give the agent reasons for performing the intended act. These are not outcome-oriented, or desire-belief reasons for acting. They may concern the outcome of having the intention, but they do not concern the outcome of performing the intended action. However, I shall argue, they are reasons for acting none the less, and they are relevant to the agent's deliberation about what to do. When they are taken properly into account, deliberating about what to do later, and so what to intend now, and deliberating about what to intend now, and so what to do later, both appeal to reasons, oriented not to the expected outcome of the action alone, but rather to the expected outcome of the intention together with the action. Or so I shall now argue. Let us consider again my deliberation arising out of your conditional offer to assist me. Suppose that I must decide whether to assure you sincerely that I shall reciprocate. But I cannot decide this without deciding whether to reciprocate. Unless I intend to reciprocate I cannot sincerely assure you that I shall. A sincere assurance carries with it an
intention to perform the assured act, and the intention carries with it the decision so to act. Note that I am not claiming that intention is sufficient for assurance. If I intend to go to a film merely because I doubt that there will be anything worth watching on television, I could not sincerely assure you that I should go to the film. Not every intention is strong or firm enough to support an assurance. My claim is only that an intention is necessary support. So deliberating about whether to offer you a sincere assurance that I shall reciprocate involves me in deliberating about whether I shall reciprocate. And now my claim is that I should focus initially, neither on the outcome-oriented reasons I should have for and against reciprocating to the exclusion of all else, nor on the outcome-oriented reasons I should have for and against intending to reciprocate to the exclusion of all else, where the intention is divorced from the act, but rather on the outcome-oriented reasons for the overall course of action that would be involved in sincerely assuring you that I should reciprocate. If I am to give you this assurance, then I must decide to reciprocate; hence I consider my reasons for giving you the assurance and then, should you help me, reciprocating, in relation to my reasons for the other courses of action I might undertake, and that do not include my giving you a sincere assurance. And I conclude that my outcome-oriented reasons best support the first course of action. Note that I may not consider, among the alternatives, giving you a sincere assurance and then not reciprocating. Even if this would be better supported by my reasons for acting, and even though it is possible for me both to give you a sincere assurance and then not to honour it, and indeed to decide to give you a sincere assurance and then later to decide not to honour it, it is not possible for me to make a single decision to do both. For deciding to give you a sincere assurance involves deciding to honour it, so that to decide to give you a sincere assurance and not to honour it would be to decide both to honour and not honour it. Giving you a sincere assurance and not reciprocating is then not a possible course of action, considered as a single subject for deliberation. To this point, my account of deliberation about the future makes reference only to outcome-oriented reasons, but for the entire course of action from now until then, and not merely for the ways an agent might act then. But in deliberating among courses of action, an agent would be mistaken simply to conclude with a decision in favour of that course best supported by these outcome-oriented reasons. A course of action is not decided upon and then carried out as a whole. Insofar as it involves a succession of actions, it involves a succession of choices or possible choices, and these may not be ignored. If I am to decide rationally on a course of action, I must expect that I shall have good reason to carry it
out, and so to choose the successive particular actions that constitute the course. Hence, however desirable it may seem to me to decide on a course of action, I must consider whether it will also seem desirable to perform the various acts it includes, when or if the time comes. This further consideration does not simply reintroduce outcomeoriented reasons for these future acts. Their desirability turns on importantly different factors. But before I explain what these are, I should consider a challenge to the claim that any further consideration is necessary. For if an agent has adopted a course of action, and done so after rational deliberation, then she has reason to carry it out, and so to perform the particular actions it requires, unless she has reason to reconsider. Without reason to reconsider it would be unreasonable for her to ask, when or if the time comes, whether it would seem desirable to perform the various actions required by her course of action. And if it would be unreasonable for her to ask this, then why need she ask at the outset whether it will seem desirable to perform these actions when or if the time comes? Why need she consider more than the overall desirability of the course of action? But what determines whether it is rational for an agent to reconsider? For it may be rational for an agent to reconsider her course of action if she has failed to consider whether it will seem desirable to perform the particular actions it requires when or if the time comes. Now if we suppose the agent's values or concerns are to remain unchanged, then the grounds of rational reconsideration, it may be suggested, must relate to one or both of two possibilities. The first is that the agent has come to recognize that the circumstances in which she adopted her course of action were not what she reasonably took them to be in adopting it, so she should reconsider the rationality of adoption in the light of her new understanding. And the second is that the circumstances in which the agent now finds herself were not those that she reasonably expected in adopting her course of action, so she should reconsider the rationality of continuing in the light of the actual current circumstances. But if these are the only grounds of rational reconsideration, she will have no reason to consider the desirability of performing the various particular actions required by her overall course if she finds herself in any of those circumstances, present or future, that she envisages in adopting it. And so she has no reason to deliberate in advance about the desirability of carrying on with her course of action in the circumstances that she envisages as obtaining or arising. I reject this argument. Grant that it is unreasonable to reconsider one's course of action if one has adopted it as a result of rational deliberation and circumstances that are as one envisaged or expected. Nevertheless, adopting it as a result of rational deliberation requires more
than considering its overall expected desirability. One must also consider at the outset whether it would seem desirable at the time of performance to choose the various acts it includes - not because one will be rationally required to consider this desirability at the time of performance, but to obviate the need for this further consideration. For suppose there is no such need. Consider this situation. I want to get my way with you in some matter, and I think that the most effective way of doing so would be to threaten you with something pretty nasty if you do not agree. And suppose that for whatever reason I think that only a sincere threat will do - not perhaps because I have any compunction about being insincere but simply because I know from past experience that I am a hopeless bluffer. But if a threat is sincere, then it must involve the intention to carry it out in the event of non-compliance by the threatened party. I cannot decide sincerely to threaten you without deciding to carry out the threat if need be. So I am faced with the prospect of a course of action that includes issuing the threat and, should it fail, carrying it out. Of course, I expect it to succeed, but I must still consider the possibility that it will fail. And now it occurs to me that should it fail, then carrying it out will be something that I shall have good reason not to do. I do not think that carrying it out will enhance my credibility as a threatener to any great extent, so I have nothing to gain from carrying it out, and I should, let us suppose, have to do something that I should find quite costly in order to be nasty to you. Indeed, I think, were you not to comply, then I should be worse off carrying out the threat than if I had never made it. And this leads me to conclude that it would not make sense or be reasonable for me to carry out the threat, and that in fact I would not carry it out. But then I cannot intend or decide to do it - at least not in any direct or straightforward way. If I intend to do something, then I must at least not consciously believe that I shall definitely not do it. To be sure, it would not be rational for me to reconsider whether to carry out a failed threat, provided the circumstances of failure were what I envisaged they would be were the threat unfortunately to fail, if I had rationally issued the threat in the first place. But my point is that it would not be rational to issue the threat, and with it the intention to carry it out if need be, given that should it fail, one would then be worse off carrying it out than had one never issued it. That carrying out a threat would leave one worse off than had one not made it is a reason against carrying it out. It is this feature of the situation in which one would have to carry it out - a feature that one may be in a position to recognize at the outset - that leads one to judge that it would not be rational to carry out the threat, and thereby makes the whole course of action irrational for one to adopt.
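The same schematic device brings out why the threat case fares differently. Writing v for the value to me of your compliance and k for the cost to me of carrying out the threat (again my symbols, with v, k > 0 assumed purely for illustration), the comparison that matters is the one made at the point of failure:

\[
\text{threat fails and is carried out: } -k \;<\; 0 \;=\; \text{never having threatened at all},
\qquad\text{even though at the outset } v > 0 .
\]

Carrying out a failed threat leaves me below the no-threat baseline, and it is exactly this feature, recognizable in advance, that makes the whole course of action ineligible.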
Deliberating rationally about a course of action is a complex matter. An agent must consider both her reasons for choosing the course as a whole in preference to its alternatives, and also her reasons for choosing the particular actions it requires. But the latter need to be set in the context of the former, in a way that I shall now try to explain. In the situation that I have just considered, the problem that I identified in carrying out the threat was not merely that I could do better not to carry it out, but also that I should have done better not to have made it, than I should do to carry it out. If the threat fails, then the situation is no longer one in which my course of action is best. However reasonable my initial expectation may have been, the failure of my threat shows that the course of action consisting of issuing a threat and carrying it out if need be, actually turns out worse for me than an alternative in which no threat is issued. What this suggests is that within the context provided by a course of action, the reasons relevant to performing a particular action focus not on whether the action will yield the best outcome judged from the time of performance, but on whether it will yield a better outcome than the agent could have expected had she not undertaken that course of action. Consider again my deliberation in the light of your offer to help me provided I am willing to return your help next week. I judge that the best course of action that I can adopt is to offer you a sincere assurance to reciprocate, which brings with it my decision now to reciprocate next week. But I then ask myself whether I shall have reason next week to carry this course of action through. And I reflect that, although next week my outcome-oriented reasons would favour not reciprocating, yet I shall still in all likelihood judge the course of action that includes reciprocating as better than any alternative that I might have chosen. (Recall that giving you a sincere assurance and not reciprocating is not a course of action that I can choose.) And this, I claim, gives me sufficient reason to reciprocate. Suppose that I issue a threat and it fails. Then, even though the circumstances I may find myself in are exactly those that I anticipated should the threat fail, and even though I recognized that I might find myself in these circumstances, yet in making the threat, my expectation was that I should benefit thereby, and I now know that expectation to have been mistaken. Suppose on the other hand that I give an assurance and it succeeds - I agree to reciprocate and you in consequence help me. Then if the circumstances I find myself in are those that I anticipated should my assurance succeed, my expectation that I should benefit thereby has proved correct. And this difference, between the mistaken expectation associated with a failed threat, and the confirmed expectation associated with a successful assurance, is crucial to deciding the
rationality of continuing one's course of action. It is not rational to continue a course of action if the expectation associated with adopting it has proved mistaken, and if continuing it is not then supported by one's outcome-oriented reasons for acting. It is rational to continue a course of action if the expectation associated with adopting it has proved correct, even if continuing it would not then be supported by one's outcome-oriented reasons for acting. These two theses are at the core of my argument. Let us say that a course of action is confirmed at a given time, if at that time the agent may reasonably expect to do better continuing it than she would have expected to do had she not adopted it. Then to deliberate rationally about adopting a course of action, an agent must consider both whether it is adequately supported by her outcomeoriented reasons at the time of adoption, and whether her expectation is that the course would be fully confirmed - that is, confirmed in each possible situation in which it would require some particular action or decision. Full confirmation constitutes a filter; only courses of action that the agent expects would be fully confirmed are eligible for adoption. Among those she judges fully confirmed, the agent rationally adopts the course best supported by her outcome-oriented reasons for acting, and then does not rationally reconsider unless she becomes aware that circumstances at the time of adoption were in relevant ways not as she envisaged them, or that her present circumstances differ relevantly from what she expected at the time of adoption. If she comes to think that her circumstances at the time of adoption differed from what she then believed, she must ask whether she actually had adequate outcome-oriented reasons to adopt it. If she comes to think that her present circumstances differ from those she expected, she must ask whether in her actual circumstances it remains confirmed. And negative answers to either of these questions require her to reopen deliberation and consider in her present situation what course of action she should follow. This account of deliberation is intended only as a preliminary sketch. One complicating aspect that should be kept in mind is that one course of action may be adopted in the context of a more embracing course, which will serve as a further filter on eligibility. Suppose for example that I have a general policy of giving only sincere assurances; then courses of action involving insincere assurances would not be eligible for deliberative consideration, even if they might be eligible in themselves and best supported by my outcome-oriented reasons for acting. A fully confirmed course of action whose adoption is adequately supported by an agent's outcome-oriented reasons for acting may
require particular actions not best supported at the time of performance by the agent's outcome-oriented reasons. The agent may know this, but will nevertheless consider herself to have adequate reason to perform such actions. She might express this reason at the time of performance by noting that she expects to do better in terms of her outcome-oriented reasons, than if she had not adopted the course of which the action is part. And done what instead? What alternative does she compare to continuing with her adopted course of action? I suggest as a first approximation that the relevant alternative would be the best course of action that would not have required her to act against the weight of her outcome-oriented reasons at the time of performance. But this is a complex issue that I shall not pursue here. Instead, I shall conclude with two further matters. The first is to note the implications of my account of deliberation for Kavka's toxin puzzle. The second is to state very briefly the rationale for my account - why is deliberation that accords with it rational?
Imagine an individual who is an exceptionally astute judge of the intentions of her fellows, and who - perhaps partly in consequence - is extremely wealthy. Bored with making yet more money, she devotes herself to the experimental study of intention and deliberation. She selects persons in good health, but otherwise at random, and tells each of them that she will deposit $1,000,000 in his bank account at midnight, provided that at that time she believes that he intends, at 8 a.m. on the following morning, to drink a glass of a most unpleasant toxin whose well-known effect is to make the drinker quite violently ill for twenty-four hours, by which time it has passed entirely through his system leaving no after-effects. Her reputation as a judge of intentions is such that you think it very likely that at midnight, she will believe that you intend to drink the toxin if and only if you do then so intend. She puts her offer to you; how do you respond? This is Kavka's puzzle. Kavka - and Bratman - think that it would be irrational for you to drink the toxin. And one can hardly deny that when the time for drinking it is at hand, you would have excellent outcome-oriented reasons not to do so. But let us suppose that you would be willing to drink the toxin in order to gain a million dollars. Suppose, then, that the offer were to put $1,000,000 in your bank account if at 8 a.m. tomorrow you were to drink the toxin. Deliberating now about what to do, you would surely decide to drink tomorrow morning, thus forming the intention to do so. So you can form the intention to drink the toxin. Now return to the actual offer. Since you believe that forming the intention to drink will lead the experimenter to deposit the money in your bank account, then deliberating now about what to do, isn't it rational for you to form the intention to drink the toxin, and so to decide to drink it?
Bratman agrees that you have good reason now to have the intention to drink the toxin, if it is possible for you to have it without undue cost. But he denies that this reason can affect your deliberation about whether to drink it, since it will not apply tomorrow morning. He insists that you cannot be led to form the intention to drink by deliberating rationally about what to do tomorrow, since your deliberation will turn on what action will then be supported by your outcomeoriented reasons. Bratman thinks that you may have good reason to cause yourself to intend to drink the toxin tomorrow morning, but this way "will not be simply ordinary deliberation" (Bratman 1987, p. 103). Bratman agrees that it would not be rational to reconsider an intention if one has adopted it as a result of rational deliberation and circumstances are as one envisaged or expected. And so if one had acquired the intention to drink the toxin through deliberation, it would indeed be rational to drink it. But he refuses to extend this view of reconsideration to cases in which one has caused oneself to have an intention rather than adopting it through deliberation. Hence, even though he agrees that it would be rational for one to cause oneself before midnight to intend to drink the toxin, he denies that "reasonable habits of reconsideration ... would inhibit reconsideration" (Bratman 1987, p. 106), and claims that reconsideration would lead one to decide not to drink. I can agree with Bratman that if an intention is not acquired through rational deliberation, then it may be rational to reconsider it even if circumstances are as one envisaged. But, of course, I deny that the intention to drink the toxin is not acquired through rational deliberation. Faced with the experimenter's offer, you should consider what course of action to adopt. Outcome-oriented reasons support intending to drink the toxin, even though this carries with it the decision to drink it. You must then consider whether, come tomorrow morning, you will have good reason to implement your decision. And here you do not consider only outcome-oriented reasons. Instead, you ask yourself whether you would expect to be better off carrying it out, than if you had not adopted the course of action that requires you to make it. Had you not adopted the course of action, you would expect to find your bank balance unchanged. Having adopted it, you would expect to find it enhanced by $1,000,000 - well worth one day's illness. And so your course of action would be confirmed. It is rational for you to form the intention to drink the toxin, and to do so deliberatively. And it would not be rational for you to reconsider your decision to drink the toxin tomorrow morning. Bratman thinks it obvious that it would not be rational for you actually to drink the toxin. I think that it would be rational for you to drink it - although I do not claim that this is obvious. Why do I think this? Of
course, my account of rational deliberation endorses the course of action that embraces intending to drink the toxin and then actually drinking it, but why do I think that my account is correct? A full answer to this question would take me far beyond the confines of this present paper. Part of that answer is implicit in my criticism of focusing exclusively on outcome-oriented reasons in deliberating about what to do, but let me try to make it more explicit. Suppose that, agreeing with Bratman, we accept a planning theory of intention, or perhaps more generally of deliberation. The core of such a theory is the idea that we deliberate, not only to decide on particular actions, but also to decide on courses of actions, or plans. And when an agent deliberatively adopts a plan, she then confines her subsequent deliberation to actions conforming to her plan unless she has adequate reasons to reconsider, where such reasons depend on her recognition that she mistook her initial circumstances, or formed mistaken expectations about the circumstances in which some act is required by the plan. Consider three forms that a planning theory might take. The first is roughly akin to Bratman's, and maintains that a plan is eligible for adoption (henceforth 1-eligible) if and only if each intention that it requires the agent to form would be supported by the outcome-oriented reasons she expects to have in the situation in which she would act on the intention. The second is the one I have proposed, and maintains that a plan is eligible for adoption (2-eligible) if and only if each intention that it requires the agent to form would be confirmed in the situation in which she would act on the intention, in the sense that the agent may reasonably expect to do better in performing the action than she would have done had she not formed the intention, or had not adopted the plan requiring it, but had instead restricted herself to 1-eligible plans. And the third maintains that any plan is eligible for adoption. All three agree that the agent should choose among eligible plans on the basis of her outcome-oriented reasons. These three versions of the planning theory differ on the scope of eligible plans, with the first being the most restrictive. Now one might defend no restrictions, and so the third version of the theory, on the ground that whenever the agent would choose a different plan under the third rather than under either of the other two, it can only be because some plan ineligible under one of the first two versions is better supported by her outcome-oriented reasons than any eligible plan. But against this one may note that according to the third version, it may be irrational for an agent to reconsider a plan even if she recognizes that she would have done better not to have adopted it, and would now do better to abandon it. And if this seems a decisive objection, then we may naturally turn to the second version, since 2-eligibility excludes a
plan only if it would require being followed in the face of evident failure. And the superiority of the second to the first version is clear; if an agent would choose different plans under these versions, it can only be because some plan lacking 1-eligibility is better supported by the agent's outcome-oriented reasons both at the time of choice and at the times of execution than any 1-eligible plan. Rational deliberation about plans is responsive to the agent's outcome-oriented reasons for acting. But responsiveness is not a simple matter. The first version of the planning theory affords greater direct responsiveness at the time of performance, but less responsiveness overall, than the second. The third version affords greater direct responsiveness at the time of adoption than the second, but lacks responsiveness at the time of performance. I conclude that the second version - mine - best characterizes rational deliberation. What is the relation between intention and deliberation, as it emerges from this discussion? Intentions that an agent has rationally formed, and has no reason to reconsider, constrain her future deliberation; she considers only actions compatible with those intentions. On what grounds are intentions rationally formed? Here there are two different questions. What makes it rational to settle on some intention or other? The benefits of deciding matters in advance, more generally of planning, provide an answer. Given that it is rational to settle on some intention or other, what makes it rational to form a particular intention? The orthodox answer, accepted even by planning theorists such as Bratman, is that it is rational to form a particular intention on the basis of one's expected outcome-oriented reasons for performing the action. And this is frequently the case. But this answer overlooks one important benefit of deciding matters in advance - the effect it may have on the expectations, and consequent actions, of others. This effect may give one outcome-oriented reasons for intending quite unrelated to one's reasons for performing. Deliberation may be concerned not only with what to do then, and so in consequence what to intend now, but also with what to intend now, and so in consequence what to do then. I have tried to accommodate both of these deliberative concerns in a unified account. And the rationale of this account is pragmatic. The person who deliberates about future actions and forms intentions in the manner that I have proposed may expect to do better overall in terms of her outcome-oriented reasons for acting than were she to deliberate about future actions and form intentions solely on the basis of the outcome-oriented reasons that she would expect to have for performing those actions. But, of course, I have not shown that my account can be made fully precise, or that if it can, it must be the best account of intention and deliberation.
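The comparison among the three versions can also be made concrete in a small computational sketch. What follows is an editor's toy model rather than Gauthier's own formalism: a plan is reduced to a handful of expected payoffs, the three versions become three eligibility filters, and all of the names and numbers are illustrative assumptions only.

# Toy model of the three eligibility filters compared above.
# Every plan name and payoff figure is an illustrative placeholder,
# not a figure taken from the text.

from dataclasses import dataclass
from typing import List


@dataclass
class ChoicePoint:
    """A later point at which the plan requires a particular action."""
    continuing: float               # expected value of doing what the plan requires
    best_alternative: float         # best value then available by acting otherwise
    baseline_if_not_adopted: float  # expected value had the plan not been adopted
                                    # (the agent having kept to 1-eligible plans)


@dataclass
class Plan:
    name: str
    value_at_adoption: float        # expected value of the whole course, judged at adoption;
                                    # all three versions choose among eligible plans by this
    points: List[ChoicePoint]


def one_eligible(plan: Plan) -> bool:
    """First version: every required act must be supported by the
    outcome-oriented reasons expected at the time of performance."""
    return all(p.continuing >= p.best_alternative for p in plan.points)


def two_eligible(plan: Plan) -> bool:
    """Second version: every required act must be confirmed, i.e. leave the
    agent better off than she would have expected had she not adopted the plan."""
    return all(p.continuing >= p.baseline_if_not_adopted for p in plan.points)


def nfc_eligible(plan: Plan) -> bool:
    """Third version: any plan is eligible; no further constraint."""
    return True


toxin = Plan("intend to drink the toxin, then drink it", 950_000, [
    ChoicePoint(continuing=950_000,          # the million minus a day's illness
                best_alternative=1_000_000,  # money already banked; don't drink
                baseline_if_not_adopted=0)])

assurance = Plan("sincerely assure, then reciprocate", 40, [
    ChoicePoint(continuing=40,               # your help (100) minus my cost of helping back (60)
                best_alternative=100,        # take your help and not reciprocate
                baseline_if_not_adopted=0)])

threat = Plan("sincerely threaten, retaliate if defied", 80, [
    ChoicePoint(continuing=-60,              # the threat has failed; retaliation is costly
                best_alternative=0,
                baseline_if_not_adopted=0)]) # never threatening would have left me here

for plan in (toxin, assurance, threat):
    print(f"{plan.name}: "
          f"1-eligible={one_eligible(plan)}, "
          f"2-eligible={two_eligible(plan)}, "
          f"NFC-eligible={nfc_eligible(plan)}")

On these toy figures the toxin and assurance plans fail the first filter but pass the second, while the failed threat passes only the third; that is just the pattern appealed to above in preferring the second version.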
Acknowledgments
I am grateful to All Souls College, Oxford, and the John Simon Guggenheim Memorial Foundation, for support at the time this paper was written.

References
Bratman, Michael E. (1987). Intention, Plans, and Practical Reason. Cambridge, MA: Harvard University Press. All quotations and page numbers are from this book.
Kavka, Gregory S. (1983). The toxin puzzle. Analysis, 43: 33-36.
4
Following Through with One's Plans: Reply to David Gauthier
Michael E. Bratman
1. We are planning agents. Planning helps us to realize ends that we value or desire, in part by helping us achieve important kinds of coordination, both intra-personal and social. A theory of practical rationality for agents like us should include a theory of rational planning (Bratman 1987). At least part of such a theory, and the focus of this discussion, will be a theory of instrumentally rational planning agency, planning agency that is rational in the pursuit of basic desires, ends, and values taken as given. A theory of instrumentally rational planning agency puts to one side important questions about the possibility of rational criticism of basic desires and ends. My discussion here of rational planning will be limited in this way throughout, though I will usually take the liberty of speaking simply of rational planning and action to avoid circumlocution. A theory of instrumentally rational planning agency should tell us when it is rational, in deliberation, to settle on a plan for future action. It should also provide a theory of the stability of plans - a theory that tells us when it is rational to reconsider and abandon one's prior plans. Finally, these accounts of deliberation and of plan stability should be linked: I can rationally decide now, on the basis of deliberation, on a plan that calls for my A-ing in certain later circumstances in which I retain rational control over my action, only if I do not now believe that when the time and circumstances for A arrive I will, if rational, reconsider and abandon the intention to A in favour of an intention to perform some alternative to A. We may call this the linking principle.1 In all this I am, I believe, in broad agreement with David Gauthier, both in his "Intention and Deliberation" - to which this essay is intended as a brief and partial response - and in another recent study of his (1996). But there are also disagreements, and some of these will be the focus of this discussion.2
2. Let me begin not with explicit disagreement, but with a difference of emphasis. Gauthier's discussion in "Intention and Deliberation" focuses on certain puzzle cases: (1) Can a rational agent win the money in Kavka's toxin case? (Kavka 1983) (2) Can rational but mutually disinterested agents who will never see each other again (and in the absence of independent moral considerations and reputation effects) co-operate just this once? (3) When can a rational agent offer a credible threat? Elsewhere, Gauthier (1996) has also discussed puzzles posed for a theory of planning by the kind of preference change illustrated by versions of the case of Ulysses and the Sirens. These puzzle cases are fascinating and important; and I will shortly join in the fray. Nevertheless, I think we should be careful not to let these cases dominate our theorizing about planning. It is a striking fact about us that we manage at all - even in the simplest cases which do not raise the cited kinds of puzzles - to organize and co-ordinate our actions over time and socially. Normally, for us to be able to achieve such co-ordinated activity we need to be able reliably to predict what we will do; and we need to be able to do this despite both the complexity of the causes of our behaviour and our cognitive limitations. I see a theory of planning agency as part of an account of how limited agents like us are at least sometimes able, rationally, to do all this. That said, I will focus on the special issues raised by puzzle cases (1)-(3). I will put to one side here issues raised by preference change cases, though I believe that they pose important questions for a full account of rational planning agency.3 Gauthier's lucid discussion saves me from the need to spell out all the details concerning (1)-(3). Suffice it to say that these cases have the following structure: I consider at t1 whether I can rationally form an intention to perform a certain action (drink the toxin, help you if you have helped me, retaliate if you do not comply) at t3. I know that my so intending at t1 would or may have certain benefits - my becoming richer at t2; my being aided by you at t2; your compliance at t2. But these benefits, even if realized, would not depend causally on my actually doing at t3 what it is that at t1 I would intend to do then; by the time t3 arrives I will either have the benefits or I will not. They are, to use Kavka's term, "autonomous" benefits (Kavka 1987, p. 21). The execution at t3 of my intention would, if called for, only bring with it certain burdens: being sick from the toxin;4 the costs of helping you (if you have helped me); the costs of retaliating (if you have not complied). But at t1 I judge that (taking due account of the relevant likelihoods) the expectation of these burdens of execution is outweighed by the expectation of the associated autonomous benefits. And this judgment is based on desires/values which will endure throughout: these are not preference-change cases.
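The temporal structure just described can be summarized schematically (the symbols are mine, not Bratman's, and I set aside the probability weighting mentioned in the text): write B for the expected autonomous benefit that accrues at t2 merely from my intending at t1, and c for the burden of executing the intention at t3. Then the judgment made at t1 and the comparison that will present itself at t3 come apart:

\[
\text{at } t_1:\; B - c > 0,
\qquad
\text{at } t_3:\; \underbrace{B - c}_{\text{execute}} \;<\; \underbrace{B}_{\text{do not execute}} ,
\]

since by t3 the benefit is already in hand (or irretrievably lost) whatever I then do. This is what generates the tension with the linking principle discussed next.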
Given that I judge that the expected autonomous benefits would outweigh the expected burdens of execution, can I in such cases rationally settle at t1 on a plan that involves (conditionally or unconditionally) so acting at t3? It is not clear that I can. Recall the linking principle. It tells us that rationally to settle on such a plan I cannot judge that I would be rationally required at t3 to reconsider and abandon my intention concerning t3 - the intention to drink, to help (if you have helped me), or to retaliate (if you have not complied). But I know that under the relevant circumstances at t3 I would have available a superior alternative option: not drinking, not helping, or not retaliating. These alternatives would be superior from the point of view of the very desires and values on the basis of which I would settle on the plan in the first place. That suggests that I would at t3 be rationally required to abandon that intention. So, given the linking principle, I am not in a position rationally to decide on the plan in the first place. Should we then conclude that, despite their attractions, such plans are not ones on which a rational planning agent can decide on the basis of deliberation? Concerning at least cases (1) and (2) Gauthier thinks that a theory of rational planning should resist this conclusion, and so he seeks a different view about rational reconsideration. It is, of course, sometimes rational to abandon a prior plan if one discovers that in settling on that plan one had relied on a conception of one's present or future situation that was importantly inaccurate. But Gauthier's focus is on cases in which one makes no such discovery. Concerning such cases, Gauthier distinguishes between "three forms that a planning theory might take" (Gauthier 1997, p. 51). Each involves a different view of rational reconsideration. On the first theory, one should reconsider one's prior plan to A at t under circumstance c if one knows one has available at t, given c, an incompatible alternative that is favored over A by one's "outcome-oriented reasons."5 This approach to reconsideration leads, we have seen, to scepticism about whether a rational planner can settle on the cited plans in cases (1)-(3). On the third theory, one should not reconsider if one has discovered no relevant inaccuracy of belief, and the original decision in favour of the plan really was favoured by the expected balance of autonomous benefits and burdens of execution. Gauthier's present proposal falls between these two approaches. Suppose that at the time of action one has discovered no relevant inaccuracy of belief, and the original decision in favour of the plan was favoured by the expected balance of autonomous benefits and burdens of execution. One should at the time of action reconsider this plan if and only if one knows that following through with the plan then would be inferior to what one would have accomplished if one had not settled on this plan in the first place but had instead planned in
accordance with the constraints of the first theory. Since the first theory highlights a comparison of the planned action with alternatives available at the time of execution of the plan, we may call it the time of execution (TE) view. Gauthier's theory, in contrast, highlights a comparison of the planned action with certain counterfactual alternatives. So let us call it the counterfactual comparisons (CFC) view. Given the linking principle, these different views about rational reconsideration yield different views about which plans are eligible for adoption in the first place. Thus, Gauthier associates the first theory with a plan's being "1-eligible," and his theory with a plan's being "2eligible." Since the third theory provides no further constraint (over and above a concern with the expected balance of autonomous benefits and burdens of execution) on a plan's being eligible for adoption, we may call it the no further constraint (NFC) view. The NFC view is similar to Gauthier's (1984) earlier view about deterrent intentions. On this earlier view, it would be rational to adopt an intention to retaliate if attacked so long as the expected impact of one's settling on this plan was optimal. And if it was rational to adopt the deterrent intention it would be rational to execute it should the occasion arise and one discovers no relevant inaccuracy of belief. An implication was that one could be rational in retaliating even though so acting was completely at odds with one's outcome-oriented reasons for one's various alternatives at the time of the retaliation. Gauthier now rejects this view, replacing it with the CFC view. Very roughly, and ignoring issues about policies to which Gauthier alludes,6 on the CFC view such retaliation may well not be rational. In opting for the deterrent intention one in effect gambled that the deterrence would succeed; now that it has failed one can see that one is worse off, as assessed by one's outcome-oriented reasons, in retaliating than one would have been had one eschewed the deterrent intention in the first place. This approach to deterrent intentions allows Gauthier to drive a wedge between such cases, on the one hand, and cases (1) and (2), on the other. In the toxin case, for example, the intention to drink the toxin can pass the CFC test; for in drinking it one is completing a course of action that is superior, from the perspective of one's outcome-oriented reasons, to what one would have achieved had one earlier decided not to drink it. In contrast, on the standard interpretation of the toxin case, the intention to drink the toxin fails the TE test. At the time of execution one knows one has available a superior alternative to drinking, namely: not drinking. Gauthier's reason for preferring the CFC view to the TE view is pragmatic: "the person who deliberates about future actions and forms intentions in [this] manner ... may expect to do better overall in terms
of her outcome-oriented reasons" (Gauthier 1997, pp. 52-53). In particular, a CFC agent can rationally achieve the benefits at stake in cases (1) and (2). Note the form of argument: A general structure of planning is justified by appeal to its expected long-run impacts; the particular piece of planning is justified insofar as it conforms to this justified general structure. This is a two-tier account, one that seems similar in structure to versions of rule-utilitarianism. That said, we may wonder why Gauthier does not stick with the NFC view; for it seems that that is where a straightforward application of a two-tier pragmatic account will lead. With an eye on this concern, Gauthier writes: "against this one may note that according to the [NFC view] it may be irrational for an agent to reconsider a plan even if she recognizes that she would have done better not to have adopted it, and would now do better to abandon it" (Gauthier 1997, p. 52). But so far this is only a statement of the difference between these two views, not an independent argument. Perhaps the idea is that the two-tiered pragmatic account should be tempered by the intuition that it is rational to reconsider in certain cases of failed deterrence. But then we need to know why an analogous argument cannot be used, against CFC, in favour of TE; for there is also a strong intuition that it would be rational to reconsider the intention to drink the toxin were the time to come.7 So there is reason to worry that CFC may be an unstable compromise between NFC and TE. 3. I agree with Gauthier that a substantial component of a theory of rational reconsideration should have a pragmatic, two-tier structure (Bratman, 1987, ch. 5). But I do not think that such a pragmatic, two-tier approach exhausts the subject. Suppose at t1 I form the intention to A at t2. Suppose that when t2 arrives I discover that I simply do not have it in my power to A then. I will, of course, give up my prior intention; for, barring some very special story, I cannot coherently embark on an effort to execute my prior intention to A once I see that I cannot A. Of course, if I had earlier known that I would not be able so to act I would not have earlier formed the intention to A. But what baffles intention is not the newness of my information, but what information it is. Why am I rationally obliged to give up my prior intention on learning of my inability? Our answer need not appeal solely to the good consequences of a general strategy of giving up one's intention in such cases; though such a strategy no doubt would normally be useful. We can also appeal directly to a kind of incoherence involved in intending and attempting to A while knowing one cannot A. If I am at
all reflective, I cannot coherently see what I am doing as executing an intention to do what I know I cannot do. Now consider a different case. Earlier I formed the intention to A, and the time has arrived. But I have learned of a specific alternative to A that it is on-balance clearly superior to A as a way of realizing the very same long-standing, stable and coherent set of desires and values which were and continue to be the rational basis for my intention to A. My judgment of the superiority of this alternative has taken into account the various (perhaps substantial) costs of changing my mind - including the need for various forms of re-planning and the impact on my ability to co-ordinate with others. Does this new judgment, like the judgment that I simply cannot A, require me to abandon my prior intention? Though the issues here are complex, it seems to me that the answer is "yes." After all, in the case at issue I suppose that by abandoning my prior intention I best realize the very same long-standing, stable and coherent desires and values which are the basis for that intention in the first place. Now, no plausible theory will deny that many times such a judgment should trigger reconsideration and change of mind. But it might be responded that this is not always the case. Suppose there were a general strategy concerning reconsideration, a strategy that would have superior consequences over the long haul. And suppose that this general strategy supported non-reconsideration in certain special cases in which one has the cited judgment about the superiority of an alternative to A. Would it not be rational of me to endorse such a general strategy and so follow through with my intention to A? Suppose I say to myself: "I would, to be sure, do better this time by abandoning my prior intention - even taking into account the relevant costs of reconsideration, re-planning, and so on. But there is a general strategy of non-reconsideration that has a pragmatic rationale and that enjoins that I nevertheless follow through with my prior intention. So I will stick with my prior intention." But why do I give this general strategy such significance? The answer is: because of its expected impact on realizing long-standing, stable and coherent desires and values. But if that is the source of the significance to me of that strategy, how can I not be moved, on reflection, by the fact that I can even better realize those same desires and values by abandoning my prior intention in this particular case? Following through with my plan is, after all, not like following through with my golf swing: following through with my plan involves the real possibility of changing my mind midstream. Sticking with a general strategy of non-reconsideration in such a case solely on grounds of realizing long-standing, stable and coherent desires and values, while fully aware of a superior way of realizing those very same desires and values on this particular occasion, would
seem to be a kind of incoherence - as we might say, a kind of "plan worship."8 Or at least this is true for an agent who is reflective - who seeks to understand the rationale both for her general strategies of (non)reconsideration and for her particular choices. This suggests that a reasonably reflective planning agent will reconsider and abandon her prior intention to A, in the absence of a change in basic desires and/or values, if she
(A) believes that she cannot A

or

(B) believes of a specific alternative to A that it is on balance superior to A as a way of realizing the very same long-standing, stable and coherent desires and values that provide the rational support for the intention to A.
Or, at least, she will change her mind in this way if these beliefs are held confidently and without doubts about her judgment in arriving at them. Normally such beliefs will represent new information. But what obliges reconsideration is not the newness of the information, but what the information is. Of course, when the time comes to execute a prior intention one may not have either such belief and yet there may still be an issue about whether to reconsider. Indeed, this is probably the more common case. One may have new information and yet it may not be clear whether or not, in light of this new information, one could do better by changing one's plan. In such cases we frequently depend on various non-deliberative mechanisms of salience and problem-detection: we do not want constantly to be reflecting in a serious way on whether or not to reconsider (Bratman, 1987, ch. 5). Concerning such cases we can take a pragmatic, two-tier stance. We can evaluate relevant mechanisms and strategies of reconsideration in terms of their likely benefits and burdens, with a special eye on their contribution to the characteristic benefits of planning; and we can then see whether particular cases of (non)reconsideration are grounded in suitable mechanisms. Given our limits, the costs of reconsideration and re-planning, and the importance of reliability for a planning agent in a social world, such a pragmatic approach exerts clear pressures in the direction of stability. But this does not override a general demand to reconsider in the face of clear and unwavering beliefs along the lines of (A) or (B). We want mechanisms and strategies of reconsideration that will, in the long run, help us to achieve what we desire and value, given our needs for co-ordination and our cognitive limitations. We can assess
such strategies and mechanisms in a broadly pragmatic spirit; and we can then go on to assess many particular cases of reconsideration, or its absence, in light of the pragmatic reasonableness of the relevant general mechanisms of reconsideration. But there are also occasions when reconsideration is driven primarily by demands for consistency or coherence. This seems to be true in those cases in which one has beliefs along the lines of (A) or (B). Return now to the toxin puzzle. It follows from what I have been saying that if it is clear to me, when the time comes, that not drinking the toxin is superior, in the relevant sense, to drinking it then I should not drink. And if I know earlier that this will be clear later, at the time of action, then I cannot rationally form earlier, on the basis of deliberation, the intention to drink later. A similar point follows concerning cases (2) and (3): in all three cases rationality seems to stand in the way of the autonomous benefits.9 But how will planning agents achieve the benefits of sequential cooperation? Isn't the support of interpersonal co-operation one of the main jobs of planning? A planning agent will typically be able to profit from the kind of sequential co-operation at stake in case (2) by assuring the other that she will do her part if the time comes. This will work so long as planning agents can be expected to recognize that such assurances typically induce associated obligations. These obligations generate reasons for playing one's part later, after one has been helped by the other. When and why do such assurances generate obligations? A plausible answer might well include an appeal to our planning agency. The ability to settle matters in advance is crucial to planning agents - especially planning agents in a social world. The ability to put ourselves under assurance-based obligations would help us settle certain matters, both for others and for ourselves.10 So perhaps the fact that we are planning agents helps support principles of assurance-based obligation; and we intentionally put ourselves under such obligations in the pursuit of co-operation. In this way our planning agency might help to provide indirect support for co-operation, by way of its support for principles of assurance-based obligation. This would still contrast, however, with an approach like Gauthier's that seeks to explain the rationality of co-operation in such cases by a direct appeal to principles of rational planning. Such a story about assurance can allow that, as Gauthier says, "a sincere assurance carries with it an intention to perform the assured act" (Gauthier 1997, p. 44). It just notes that one's reasons for so acting, and so one's reasons for so intending, can be affected by the assurance itself. My assurance that I will help you if you help me gives me reason
to help you if the occasion arrives, because it generates an associated obligation which I recognize. If this reason is sufficiently strong it can make it rational for me later to help you, if you have helped me. Knowing this, I can rationally, in giving the assurance, intend to help you (if you help me). My reasons for A-ing that support my intention to A (the intention that qualifies my assurance as sincere) can depend on my assurance that I will A. My assurance can be, in effect, that I will help you (if you help me) in part because of this very assurance. Return again to the toxin case. In posing her challenge to me, the billionaire does not ask me to assure her that I will drink the toxin; she asks, instead, that I just intend to drink it. Indeed, it is an explicit condition for winning the money that I do not support this intention by making a promise to her or someone else that I will drink (Kavka 1983, p. 34). So the toxin case is very different from cases of sequential cooperation in which one achieves the benefits of co-operation by issuing appropriate assurances.11 4. A theory of rational planning should help to explain how limited agents like us manage to organize and co-ordinate so many of our activities over time and with each other. But that does not mean that the accessibility of all forms of valued co-ordination and co-operation can be explained by direct appeal to principles of rational planning. A modest theory of planning would emphasize the central roles of planning and of associated mechanisms of reconsideration. But such a theory would also draw on forms of assurance-based obligation as support for certain kinds of co-operation.12 Plans will normally have a significant degree of stability; intelligent planners will normally follow through with their plans and need not constantly start from scratch. But in versions of cases (l)-(3) in which a reflective agent sees clearly, without the need for further information or reasoning, that, in light of her long-standing, stable and coherent values, she does best by abandoning her prior plan, she should.
Acknowledgments This is a revised and recast version of my 1993 reply to David Gauthier's paper, "Intention and Deliberation." Both Gauthier's paper and my reply (titled "Toward a Modest Theory of Planning: Reply to Gauthier and Dupuy") were presented at the June, 1993 Cerisy, France, conference on "Limitations de la rationalite et constitution du collectif." My original reply, which also includes a response to Jean-Pierre Dupuy's paper "Time and Rationality: The Paradoxes of Backwards Induction," will appear in French translation in the proceedings of that
conference. My work on this essay was supported in part by the Center for the Study of Language and Information.
Notes 1 The linking principle is concerned with rational decision based on deliberation. It has nothing to say about the possibility of causing oneself to have certain intentions/plans by, say, taking a certain drug. My formulation here has benefited from discussions with Gilbert Harman, John Pollock, and Bryan Skyrms. I discuss the linking principle also in "Planning and temptation" and in "Toxin, temptation and the stability of intention." 2 Other elements of my response to Gauthier's recent essays are in "Toxin, temptation and the stability of intention." 3 I discuss issues raised by preference change cases in "Planning and temptation" and in "Toxin, temptation and the stability of intention." I believe such cases should be given an importantly different treatment than that proposed here for cases (l)-(3). 4 At least, this is how the toxin case is standardly understood. I will proceed on this assumption (which Gauthier shares), but raise a question about it below in note 11. 5 Gauthier uses this terminology in "Intention and Deliberation." I assume that, despite the talk of "outcome," such reasons can involve reference to the past. I assume, for example, that reasons of revenge or gratitude can count as "outcome-oriented." 6 Gauthier develops these remarks about policies in "Assure and threaten." 7 My discussion below, in section 3, offers one such argument; my discussion in "Toxin, temptation and the stability of intention" offers another. 8 I use this terminology to indicate a parallel between what I say here and Smart's (1967) criticism of rule-utilitarianism as supporting unacceptable "rule worship." I discuss this parallel further in "Planning and the stability of intention," where I also discuss related views of Edward McClennan (1990). My remarks here are similar in spirit to those of Pettit and Brennan (1986), p. 445. 9 I reach a similar conclusion, by way of a somewhat different (and more developed) argument, in "Toxin, temptation and the stability of intention." A main concern in that paper is to clarify differences between cases (l)-(3), on the one hand, and certain cases of preference change, or of intransitive preferences, on the other. The latter sorts of cases, as well as constraints imposed by our cognitive limitations, lead to significant divergences from the TE view. 10 See Scanlon (1990), esp. pp. 205-06. Of course, an account along such lines could not simply appeal to the usefulness of an ability to put ourselves under assurance-based obligations. There are many abilities that we do not have even though it would be useful to have them.
11 There may, however, be a complication. In the science-fiction circumstances of the toxin case, my formation of an intention to drink would be known directly by the billionaire; and I would want the billionaire to come to know this and to act accordingly. All this is common knowledge. It is also common knowledge that the billionaire cares about whether this intention is formed (though she does not care whether I actually drink the toxin) and will act accordingly if it is formed. So in this science-fiction setting it may be that my forming that intention, while not itself a promise, is an assurance to the billionaire that I will drink (in part because of this very assurance). Normally, an assurance to another person requires a public act. But the known science-fiction abilities of the billionaire may make such a public act unnecessary. If my forming the intention is my assuring the billionaire it then becomes an open question whether such an unusual type of assurance induces an obligation to act. If it did turn out that my intention to drink would itself generate an assurance-based obligation, it might then turn out that I could thereby rationally drink the toxin having (earlier) rationally intended to drink. So I might be able, in full rationality, to win the money! Having noted this possible line of argument, I put it aside here. For present purposes it suffices to note that even if we were led to its conclusion it would not be because of Gauthier's CFC view. 12 I believe we will also want to appeal to shared intention. See my "Shared intention."
References
Bratman, Michael E. (1987). Intention, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.
(1992). Planning and the stability of intention. Minds and Machines, 2: 1-16.
(1993). Shared intention. Ethics, 104: 97-113.
(1995). Planning and temptation. In Larry May, Marilyn Friedman, and Andy Clark (eds.), Mind and Morals (Cambridge, MA: Bradford Books/MIT Press), pp. 293-310.
(forthcoming). Toxin, temptation, and the stability of intention. In Jules Coleman and Christopher Morris (eds.), Rational Commitment and Social Justice (New York: Cambridge University Press).
Gauthier, David (1984). Deterrence, maximization, and rationality. Ethics, 94: 474-95.
(1994). Assure and threaten. Ethics, 104: 690-721.
(1996). Commitment and choice: An essay on the rationality of plans. In Francesco Farina, Frank Hahn, and Stefano Vannucci (eds.), Ethics, Rationality and Economic Behavior (Oxford: Clarendon Press; Oxford University Press), pp. 217-43.
(1997). Intention and deliberation. In Peter Danielson (ed.), Modeling Rational and Moral Agents (Oxford: Oxford University Press), pp. 40-53.
Kavka, Gregory (1983). The toxin puzzle. Analysis, 43: 33-36.
(1987). Some paradoxes of deterrence. In Gregory S. Kavka, Moral Paradoxes of Nuclear Deterrence (Cambridge: Cambridge University Press), pp. 15-32.
McClennen, Edward F. (1990). Rationality and Dynamic Choice: Foundational Explorations. Cambridge: Cambridge University Press.
Pettit, Philip, and Geoffrey Brennan (1986). Restrictive consequentialism. Australasian Journal of Philosophy, 64: 438-55.
Scanlon, Thomas (1990). Promises and practices. Philosophy and Public Affairs, 19: 199-226.
Smart, J. J. C. (1967). Extreme and restricted utilitarianism. In Philippa Foot (ed.), Theories of Ethics (Oxford: Oxford University Press), pp. 171-83.
5
How Braess' Paradox Solves Newcomb's Problem
A. D. Irvine
Newcomb's problem is regularly described as a problem arising from equally defensible yet contradictory models of rationality. In contrast, Braess' paradox is regularly described as nothing more than the existence of non-intuitive (but ultimately non-contradictory) equilibrium points within physical networks of various kinds. Yet it can be shown that, from a participant's point of view, Newcomb's problem is structurally identical to Braess' paradox. Both are instances of a well-known result in game theory, namely that equilibria of non-co-operative games are generally Pareto-inefficient. Newcomb's problem is simply a limiting case in which the number of players equals one. Braess' paradox is another limiting case in which the "players" need not be assumed to be discrete individuals. This paper consists of six sections. It begins, in section 1, with a discussion of how paradoxes are typically resolved. Sections 2 and 3 then introduce Braess' paradox and Newcomb's problem respectively. Section 4 summarizes an argument (due originally to Brams and Lewis) which identifies Newcomb's problem with the standard, two-person, Prisoner's Dilemma. Section 5 generalizes this result, first to an n-person Prisoner's Dilemma, then to the Cohen-Kelly queuing paradox and, finally, to Braess' paradox itself. The paper concludes, in section 6, with a discussion of the consequences of these identifications, not just for Newcomb's problem, but for all four of the paradoxes discussed. Newcomb's problem, it turns out, is no more difficult to solve than (the easy-to-solve) Braess' paradox.
1. Resolving Paradoxes
Traditionally, a paradox is said to obtain whenever there exist apparently conclusive arguments in favour of contradictory propositions. Equivalently, a paradox is said to obtain whenever there exist apparently conclusive arguments both for accepting, and for rejecting, the same proposition. Yet if either of these definitions is accepted, it follows
that many so-called "paradoxes" - including the Allais paradox, the Banach-Tarski paradox, the paradoxes of special relativity, Olbers' paradox, and others - are not genuine paradoxes at all. They are not paradoxes in this strict sense since they do not entail contradictions. Instead, each is better characterized as merely an unintuitive result, or a surprising consequence, of a particular theory or collection of beliefs. One way of making this observation more explicit is as follows: most arguments are developed relative to a theoretical context or background theory. Typically, the result of each such argument is to supplement this background theory with an additional belief, p. However, in cases where the relevant background theory already includes ~p, this expansion of the background theory results in a contradiction. The contradiction becomes paradoxical only if neither an error in the reasoning which led to p, nor an appropriately modified background theory which excludes ~p, can be found. In contrast, if either of these two conditions is met, the apparent "paradox" is resolved.1 In cases where the background theory is modified to avoid ~p, accepting p may still remain unintuitive or unexpected, but nothing more. Whether the argument which leads to p should be understood as paradoxical is therefore relative to the theoretical context in which the argument appears. Relative to one such context the acceptance of p will be paradoxical; relative to another it will not. Relative to any background theory broad enough to include some large selection of pre-theoretical, "common sense" beliefs, almost any startling or unintuitive result will be judged paradoxical. Yet it is in just these cases that paradoxes are most easily resolved. Consider, as an example, the well-known Allais paradox concerning preference selection under risk. The paradox arises as follows: You are given the option of choosing either of two alternatives, A1 and A2, such that
A1 = 100% chance of receiving $100;
A2 = 89% chance of receiving $100 + 10% chance of receiving $500 + 1% chance of receiving $0.
At the same time you are also given the option of choosing either of two separate alternatives, A3 and A4, such that
A3 = 89% chance of receiving $0 + 11% chance of receiving $100;
A4 = 90% chance of receiving $0 + 10% chance of receiving $500.
It turns out that most people prefer A1 to A2 and A4 to A3.2 Yet, to be consistent, an expected utility maximizer who prefers A1 to A2 should also prefer A3 to A4. The reason is straightforward. Letting "U" represent an expected utility function and "u" some (arbitrary) unit of utility that is proportional to the value of money, we have
U(A1) = (0.89)(100)u + (0.10)(100)u + (0.01)(100)u
and
U(A2) = (0.89)(100)u + (0.10)(500)u + (0.01)(0)u.
So preferring A1 to A2 entails that
(0.10)(100)u + (0.01)(100)u > (0.10)(500)u + (0.01)(0)u.
At the same time, we have
U(A3) = (0.89)(0)u + (0.10)(100)u + (0.01)(100)u
and
U(A4) = (0.89)(0)u + (0.10)(500)u + (0.01)(0)u.
So preferring A4 to A3 entails that
(0.10)(500)u + (0.01)(0)u > (0.10)(100)u + (0.01)(100)u.
Yet this contradicts the claim that
(0.10)(100)u + (0.01)(100)u > (0.10)(500)u + (0.01)(0)u.
Resolving this (apparent) contradiction is not difficult. One solution is to conclude that naive expected utility is inappropriate for modeling many cases of rational decision-making - including this one - since it is insensitive to factors of marginal utility and risk aversion. A second solution is simply to conclude that people sometimes act - at least in cases such as these - irrationally. Accepting either of these solutions is equivalent to accepting a slight modification to our background theory. In the first case we modify our background theory by rejecting the assumption that the application of an expected utility model will be appropriate in such contexts. In the second case we modify our background theory by rejecting the assumption that observed (real-life) epistemic agents will inevitably have fully consistent contexts of belief. In neither case will we be committed to a genuine
paradox since in neither case will we be committed to a contradiction. In the first case a contradiction arises only when an admittedly incomplete model of expected utility is employed. In the second case a contradiction arises only as reported from within someone else's (obviously fallible) belief context. Thus, the Allais paradox is a paradox in name only. Relative to some belief contexts there will indeed be a contradiction; but relative to other more carefully considered contexts the contradiction disappears. In this respect, the Allais paradox is not unique. In fact, there are many examples of this relative nature of paradox. Russell's paradox (which concludes that a set S both is and is not a member of itself) arises relative to naive set theory, but not relative to ZFC or any other theory based upon the iterative conception.3 The Banach-Tarski paradox (which concludes that it is both possible and impossible to divide a solid sphere into a finite number of pieces that can be rearranged in such a way as to produce two spheres each exactly the same size as the original) arises relative to pre-Cantorian theories of cardinality (supplemented by the axiom of choice) which disallow the existence of nonmeasurable sets, but not relative to any modern theory.4 Paradoxes of special relativity such as the clock (or twin) paradox (which concludes that two observers in motion relative to each other will each observe the other's clock running more slowly than his or her own, even though at most one can in fact do so) arise only if one retains within relativity theory the (unwarranted, classical) assumption that the rate of each clock is independent of how it is measured.5 Olbers' paradox (which concludes that there both is and is not sufficient extra-galactic background radiation within an isotropic universe to lighten the night sky) arises only relative to a static conception of an infinite universe, and not relative to any modern theory incorporating a finite but expanding model of space-time.6 In each of these cases the purported paradox is resolved by altering the relevant background theory. In some cases these alterations are comparatively minor; in others, they involve a major reworking of the most fundamental aspects of the discipline under investigation.
2. Braess' Paradox
Like all of the above paradoxes, Braess' paradox is a paradox in name only. It is a paradox only in the weak sense of describing the existence of non-intuitive equilibrium behaviour within classical networks of various kinds. Braess' original paradox concerned traffic flow. Despite the fact that each driver seeks to minimize his or her travel time across a system of roadways, it turns out that within congested systems the addition of
extra routes will sometimes decrease (rather than increase) the overall efficiency of the system.7 The result is surprising, since in uncongested systems the addition of new routes can only lower, or at worst not change, the travel time of each driver at equilibrium. In contrast, Braess' paradox shows that within congested systems, the addition of new routes can result in increased mean travel time for all drivers. Since Braess introduced the paradox in 1968, the same non-intuitive equilibrium behaviour has been discovered within a broad range of unrelated physical phenomena, ranging from thermal networks, to electrical circuits, to hydraulic systems.8 Within all such systems it turns out that the introduction of additional components need not increase capacity. To understand the paradox in a concrete case, consider a mechanical apparatus9 in which a mass, m, hangs from a spring, S2, which is suspended by a piece of string of length L1. This string is attached to a second spring, S1, identical to S2, which is fixed to the ceiling (Figure 1a). Should the connecting string be cut, mass m and the lower spring would fall to the floor. In order to avoid this possibility we add two "safety strings" to the original apparatus, one connecting the top of the lower spring to the ceiling, the other connecting the bottom of the upper spring to the top of mass m (Figure 1b). Assume further that both safety strings are equal in length, i.e., that L2 = L3, and that, by hypothesis, both of these strings will remain limp so long as the original centre string remains taut. Should the centre string be cut, it will be the two safety strings which will take up the tension. The question then arises, if the centre string is cut, will m find its new equilibrium below, above
Figure 1: Braess' paradox
or identical to its original resting point? In other words, will H2 > H1 (Figure 2a), will H2 < H1 (Figure 2c), or will H2 = H1 (Figure 2b)? Because the two safety strings are each longer than the combined height of the original string together with one of S1 or S2, one might at first conclude that H2 > H1. After all, it seems plausible that once the original centre string is cut, m will fall until stopped by the (longer!) safety strings. The new equilibrium will thus be lower than the original resting point and H2 > H1. However, contrary to this reasoning, for many combinations of components, this is not the case. For many combinations of springs, strings, and weights, the equilibrium point, once the supporting string is cut, will in fact be higher than it was originally. In other words, our original (hidden) assumption that S1 and S2 will remain constant in extension was mistaken. As an example, consider the case in which L1 = 3/8, L2 = L3 = 1, and both S1 and S2 have spring constant k. (In other words, the extension, x, of both S1 and S2 will be related to the force applied, F, by the formula F = kx.) If we further assume that k = 1, that F = 1/2, that the strings are massless and perfectly inelastic, and that the springs are massless and ideally elastic, then the distance from the ceiling to the bottom of L1 (or, equivalently, from the top of L1 to the weight) will be 1/2 + 3/8 = 7/8. Since we have assumed that both L2 and L3 (the safety strings) have length 1, it follows that they will initially be limp (Figure 1b), and that the total distance, H1, from the ceiling to m will be 1/2 + 3/8 + 1/2 = 1 3/8. To calculate the equilibrium after L1 has been cut, it
Figure 2: Predicting the equilibrium point
is sufficient to note that since each spring now bears only 1/2 its previous weight, S1 and S2 will each have extension 1/2 × 1/2 = 1/4, and the new distance, H2, from the ceiling to the weight will be 1 + 1/4 = 1 1/4, which is less than the previous distance of 1 3/8. Contrary to our original conclusion, H2 < H1!10 This conclusion becomes paradoxical only if we now fail to abandon our original but unwarranted claim that H2 > H1. (For then it would follow both that H2 > H1 and that H2 < H1.) Yet this is surely not the case. Our original implicit assumption that S1 and S2 would remain constant in extension was false. If additional evidence is required, one need only construct the appropriate apparatus and make the relevant measurements.11 In other words, Braess' paradox is not a paradox at all. It is a paradox in name only. What is perhaps more surprising is that Braess' paradox is structurally no different from Newcomb's problem. Both paradoxes are instances of a well-known result in the theory of non-co-operative games, namely that equilibria of non-co-operative games are generally Pareto-inefficient.12 Specifically, Newcomb's problem is a limiting case of the Prisoner's Dilemma, a dilemma which is itself a limiting case of the Cohen-Kelly queuing paradox. This paradox, in turn, is structurally identical to Braess' paradox, a result which has consequences for how we ought to view all four paradoxes.
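The arithmetic of this example, and of the variants mentioned in note 10 below, is easy to check mechanically. The following Python sketch simply encodes the idealizing assumptions just stated (massless, ideally elastic springs of negligible natural length, inextensible strings, k = 1, F = 1/2); the function and variable names are illustrative only.

```python
from fractions import Fraction as Fr

def equilibrium_heights(L1, L_safety, k, force):
    """Return (H1, H2) for the spring-and-string apparatus.

    H1: ceiling-to-mass distance with the centre string intact
        (springs in series, each stretched by the full force).
    H2: ceiling-to-mass distance once the centre string is cut
        (springs in parallel via the safety strings, each
        stretched by half the force).
    """
    stretch_series = force / k            # each spring bears the full weight
    H1 = stretch_series + L1 + stretch_series
    stretch_parallel = (force / 2) / k    # each spring now bears half the weight
    H2 = L_safety + stretch_parallel
    return H1, H2

# k = 1, F = 1/2, L2 = L3 = 1; L1 = 3/8 is the case in the text,
# 1/8 and 1/4 are the variants from note 10.
for L1 in (Fr(3, 8), Fr(1, 8), Fr(1, 4)):
    H1, H2 = equilibrium_heights(L1, Fr(1), Fr(1), Fr(1, 2))
    outcome = "higher" if H2 < H1 else ("lower" if H2 > H1 else "unchanged")
    print(f"L1 = {L1}: H1 = {H1}, H2 = {H2}, the mass ends up {outcome}")
```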
3. Newcomb's Problem
Newcomb's problem is traditionally described as a problem arising from equally defensible yet contradictory models of rationality.13 It arises as follows: consider the case in which you are presented with two opaque boxes, A and B. You may select either the contents of A and B together, or the contents of B alone. You are told that box A contains $10,000. You are also told that if a predictor has predicted that you would select box B alone, then box B contains $100,000. Similarly, if the same predictor has predicted that you would select boxes A and B together, then box B contains $0 (Figure 3). In the past the predictor's predictions have been highly accurate. For the sake of argument, let's assume that these predictions have been correct 70% of the time.14 Given that you wish to maximize your payoff, which is the more rational choice? Should you select the contents of A and B together, or should you select the contents of B alone? Two distinct strategies appear equally rational. On the one hand, dominance suggests that you should select the contents of A and B together. You know that there are only two possibilities: Either there is $100,000 in box B or $0 in box B. Yet in both cases the selection of boxes A and B together dominates the selection of box B alone. In both cases you will be $10,000 richer if you select A and B together.
                     Box B contains $0      Box B contains $100,000
Selection of A & B   $10,000 + $0           $10,000 + $100,000
Selection of B       $0                     $100,000

Figure 3: Pay-off matrix for Newcomb's problem
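As an illustration of the dominance reasoning just given, the pay-off matrix of Figure 3 can be written out and checked directly. The sketch below is not part of the original presentation; the labels and data layout are mine.

```python
# Pay-off to you (in dollars), indexed by (your choice, state of box B),
# transcribed from Figure 3.
payoff = {
    ("A&B", "B empty"): 10_000 + 0,
    ("A&B", "B full"):  10_000 + 100_000,
    ("B",   "B empty"): 0,
    ("B",   "B full"):  100_000,
}

# Taking both boxes strictly dominates taking B alone: it pays more
# whichever state box B is actually in.
dominates = all(payoff[("A&B", state)] > payoff[("B", state)]
                for state in ("B empty", "B full"))
print("A & B dominates B alone:", dominates)
```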
On the other hand, utility maximization suggests that you should select B alone, since it is this choice which has the higher expected utility. Simply calculated,15
U(B) = (0.3)(0)u + (0.7)(100,000)u = 70,000u
while
U(A & B) = (0.7)(10,000 + 0)u + (0.3)(10,000 + 100,000)u = 40,000u.
However, these two strategies clearly conflict. One must either select the contents of A and B together, or the contents of B alone. This is Newcomb's problem. To show that Newcomb's problem is a special case of Braess' paradox, we first consider the relationship between Newcomb's problem and another well-known paradox, the Prisoner's Dilemma.
4. Prisoner's Dilemma
Two prisoners - you and I, say - are forced into a situation in which we must either defect or not. (Perhaps defecting consists of agreeing to testify against the other prisoner.) In this context you are told that if you defect you will receive a one-year reduction from a potential maximum eleven-year prison sentence. At the same time, if I fail to defect, then you will receive a (possibly additional) ten-year reduction from the same potential maximum eleven-year sentence. (Perhaps this follows since without my testimony you will be unable to be convicted on anything but a minor charge.) You are also told that I have been given exactly the same information as you, and that I have been offered the same opportunity to defect or not (Figure 4). In the past the actions of two prisoners in similar circumstances have coincided a high percentage of the time. For the sake of argument, let us assume they have coin-
                   I defect                 I don't defect
You defect         You receive 10 years     You receive 0 years
                   (I receive 10 years)     (I receive 11 years)
You don't defect   You receive 11 years     You receive 1 year
                   (I receive 0 years)      (I receive 1 year)

Figure 4: Decision matrix for Prisoner's Dilemma
cided 70% of the time.16 Given that you wish to minimize your time in prison, which is the more rational choice? Should you defect or not? As both Brams and Lewis have pointed out, the differences between this problem and Newcomb's problem are differences merely of detail.17 To see this, one need only consider the following slightly altered version of our original Prisoner's Dilemma: Two prisoners you and I, say - are once again forced into a situation in which we must either defect or not. (Again, we can assume for the sake of argument that defecting consists of agreeing to testify against the other prisoner.) In this context you are told that if you defect you will receive a payment of $10,000. At the same time, if I fail to defect, then you will receive a (possibly additional) payment of $100,000. For the sake of argument, we might also assume that defecting in part consists of accepting the contents of a box, A, which is known to contain $10,000. (As Lewis points out, accepting the contents of box A might even be construed as the act of testifying against the other prisoner, just as accepting the Queen's shilling was once construed as an act of enlisting.) The contents of a second box, B - which may or may not contain $100,000 depending upon the choice made by the other prisoner18 will be given to you in any event. You are also told that I have been given exactly the same information as you, and that I have been offered the same opportunity to defect or not (Figure 5). As before we are each facing a potential maximum sentence of eleven years, but in this case it turns out that any monies we obtain can be paid to the court in lieu of time served. Escape from a one-year sentence costs $10,000; escape from a ten-year sentence costs $100,000. As before, the past actions of two prisoners in similar circumstances have coincided 70% of the time. But now, minimizing your time in prison turns out to be equivalent to maximizing your financial pay-off. Should you defect or not?
                   I defect                 I don't defect
You defect         You receive $10,000      You receive $110,000
                   (I receive $10,000)      (I receive $0)
You don't defect   You receive $0           You receive $100,000
                   (I receive $110,000)     (I receive $100,000)

Figure 5: Revised decision matrix for Prisoner's Dilemma
As with Newcomb's problem, two distinct strategies appear equally rational. On the one hand, dominance suggests that you should defect. You know that there are only two possibilities: Either I have failed to defect (in which case box B will contain $100,000), or I have defected (in which case box B will be empty). In both cases, defecting will give you an additional $10,000. On the other hand, utility maximization suggests that you should not defect, since it is this choice which has the higher expected utility. Simply calculated,
U(not defecting) = (0.3)(0)u + (0.7)(100,000)u = 70,000u
while
U(defecting) = (0.7)(10,000 + 0)u + (0.3)(10,000 + 100,000)u = 40,000u.
However, as in the case of Newcomb's problem, these two strategies clearly conflict. The reason is apparent: from the perspective of either prisoner, his decision matrix is no different from the pay-off matrix of a Newcomb's problem (cf. Figure 3 and Figure 5). The Prisoner's Dilemma is simply two Newcomb's problems run side by side. From the point of view of each individual prisoner, the Prisoner's Dilemma just is Newcomb's problem, albeit in slightly different guise.
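The structural identity claimed here can also be checked mechanically: the same dominance test and the same expected-utility calculation, applied to the matrix of Figure 3 and to the matrix of Figure 5 (each read from the choosing agent's own point of view), give exactly the same answers. The sketch below is illustrative only; it uses the 70% coincidence figure from the text as the relative frequency, and the labels are mine.

```python
def analyse(payoffs, p_match=0.7):
    """payoffs[my_choice][other] = dollar pay-off to me.

    'defect' = take both boxes / testify; 'cooperate' = take B alone /
    stay silent. p_match is the past frequency with which the other
    column (the prediction, or the other prisoner's choice) has
    matched my own choice.
    """
    dominant = all(payoffs["defect"][o] > payoffs["cooperate"][o]
                   for o in ("defect", "cooperate"))
    expected = {
        "defect":    p_match * payoffs["defect"]["defect"]
                     + (1 - p_match) * payoffs["defect"]["cooperate"],
        "cooperate": p_match * payoffs["cooperate"]["cooperate"]
                     + (1 - p_match) * payoffs["cooperate"]["defect"],
    }
    return dominant, expected

newcomb = {   # Figure 3, columns read as the predictor's prediction
    "defect":    {"defect": 10_000, "cooperate": 110_000},
    "cooperate": {"defect": 0,      "cooperate": 100_000},
}
prisoner = {  # Figure 5, columns read as the other prisoner's choice
    "defect":    {"defect": 10_000, "cooperate": 110_000},
    "cooperate": {"defect": 0,      "cooperate": 100_000},
}

for name, matrix in (("Newcomb", newcomb), ("Prisoner's Dilemma", prisoner)):
    dominant, expected = analyse(matrix)
    print(f"{name}: defection dominant = {dominant}, "
          f"expected utilities = {expected}")
```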
5. Generalizing the Prisoner's Dilemma
Just as Newcomb's problem can be generalized to form a two-person Prisoner's Dilemma, the two-person dilemma itself can be generalized to form the n-person case.19 To see this, substitute for the second prisoner an array of n - 1 prisoners and let the contents of box B become a function, not of the choice of a single second prisoner, but of the choices
of all n —1 other prisoners. The resulting n-person choice matrix can then be used to characterize a number of well-known phenomena, including several variants of the free-rider problem found in social choice theory.20 The free-rider problem arises whenever co-operative effort produces an improved corporate pay-off but when, at the same time, no single contribution is sufficient in its positive effect to result in compensation for an individual's contribution. In other words, the problem is how to justify, via utility alone, the cost involved in co-operation when it may be argued that, despite the non-negligible benefits of mutual co-operation, each individual's contribution also detracts from his or her personal pay-off.21 As in the original two-person dilemma, only two general conditions need to be met for the n-person version to arise. The first is that defecting - failing to co-operate - must be a dominant strategy for each participant. The second is that universal co-operation must be preferable to universal defection. In other words, although in every case defection will be preferable to the absence of defection, when compared to universal co-operation, universal defection will be worse for some - and typically for all - and better for none. In yet other words, universal defection is Pareto-inefficient when compared to universal co-operation, despite the fact that individual defection is dominant.22 How these conditions affect our particular dilemma can be summarized as follows: as before, you are given the opportunity to defect, in which case you will be awarded $10,000. If some given percentage23 of the other n—\ prisoners fail to defect, then you will receive (a possibly additional) $100,000 (Figure 6). For the sake of argument, let's assume that in the past the choice of (this percentage of) the other prisoners has coincided with your own choice 70% of the time. All prisoners have been given the same information and have been offered the same potential reward for defecting. Provided once again that the probability of your decision coinciding with that of (the required percentage of) the other prisoners is itself above the required threshold, it follows that the resulting n-person choice matrix is simply a generalized version of the two-person Prisoner's Dilemma (cf. Figure 5 and Figure 6). From the point of view of each individual prisoner there simply is no difference. This generalized Prisoner's Dilemma also turns out to be structurally equivalent to yet another paradox, the Cohen-Kelly queuing paradox. To see this, consider a queuing network (Figure 7a) which allows for the transfer of individuals (customers, traffic, messages, etc.) from entrance point A to exit point F.24 Two paths are available for the movement of traffic through the network: ABCF and ADEF. Individual travellers all have knowledge of the mean delays associated with each node in the network but not of instantaneous queue lengths. All arrival
                   Collective defection     Collective failure to defect
You defect         $10,000                  $110,000
You don't defect   $0                       $100,000

Figure 6: Decision matrix for n-person Prisoner's Dilemma
and exit streams are assumed to be independent Poisson flows. Infinite-Server nodes (IS nodes) represent queues at which individuals are delayed by some random time, x, the average of which is independent of the number of individuals awaiting transit. First-Come-First-Serve nodes (FCFS nodes) represent single-server queues at which individuals are processed in the order in which they arrive but in which time delays are assumed to be independent exponential random variables based upon the number of individuals arriving at the queue per unit of time. Thus, for a mean number, y, of individuals arriving per unit time, and for a capacity constant, k, with k > y, the mean delay time will be 1/(k - y). System equilibrium is defined as any case in which no individual can lower his or her transit time by altering his or her route whenever all other individuals retain their present routes. It is the goal of each individual to minimize his or her transit time across the network. Typically, in uncongested networks the addition of new routes will decrease, or at least not increase, the transit time of individuals. None the less, it turns out that within (some) congested networks, the introduction of additional capacity will lead to an increase in mean transit time at equilibrium. In other words, as long as individuals choose routes designed to minimize their individual travel time, the mean travel time at equilibrium is higher in the augmented network than in the initial network. To understand why, compare our initial queuing network to an augmented network (Figure 7b). The augmented network includes a new path from B to E which contains an additional IS node, G. In addition, we assume the following: that the mean delay for both C and D is 2 time units, that the mean delay for G is 1 time unit, that the total traffic per unit of time entering at A is n, and that n ≥ k - 1 > n/2 > 0
Figure 7: Cohen-Kelly queuing paradox
Figure 8: Predicting the equilibrium point
(Figure 8). It follows that at equilibrium the mean transit time in the augmented network is 3 time units while the mean transit time in the initial network is strictly less than 3 time units. To check this claim, note that in the case of the initial network, if the Poisson flow from A to B is a, then the mean transit time for route
ABCF is 1/(k - a) + 2 and the mean transit time for route ADEF is 1/(k - (n - a)) + 2. It follows that, at equilibrium, traffic will distribute itself as equally as possible between the two routes in order that a = n - a = n/2. The mean transit time for both routes will then equal 1/(k - n/2) + 2. But since k - 1 > n/2 > 0 it follows that 1 > 1/(k - n/2) > 0 and, thus, that the mean transit time for both routes at equilibrium is strictly less than 3 time units. However, as in the initial network, individual maximizers in the augmented network will also seek out and use those paths that minimize their travel time (i.e., those paths that maximize their preferences). As a result, individual maximizers will be unable to refrain from selecting route ABGEF despite the fact that this will lead inevitably to an overall increase in mean travel time (i.e., an overall decrease in efficiency) for all travellers. To see this, once again assume that the Poisson flow from A to B is a. If, in addition, we assume that the flow from B to C is b, it then follows that the flow from B to E is a - b, that the flow from A to D is n - a, and that the combined flow at E is the combination of that from B to E and from A to D, viz. (n - a) + (a - b) = n - b. The mean transit time for ABCF is then 1/(k - a) + 2 while the mean transit time for ABGEF is 1/(k - a) + 1 + 1/(k - (n - b)). But if a - b > 0, then the transit times at equilibrium for BCF and BGEF will be equal. Thus, 1/(k - (n - b)) = 1 and n - b = k - 1. In other words, the mean delay time at E is exactly 1 time unit, the total Poisson flow arriving at E is k - 1, and the mean transit time for ADEF is exactly 3 time units. It follows that the mean transit time for ABGEF is also 3 time units. Yet if this is so, then the delay at B must also be 1 and the mean transit time for all three paths will be identical. Thus, for all three paths the mean transit time at equilibrium is exactly 3 time units. If this is so, why use ABGEF at all? The answer is that when a - b = 0, a = n - a (as shown above), and so a = n - b. It then follows that the delay time at E is only 1/(k - a). But since a = n/2 and k - 1 > n/2, it also follows that 1/(k - a) is strictly less than 1. Thus, for those individuals in the augmented network at node B, delay time will be minimized by avoiding node C. In global terms, what has happened is that the introduction of path BGE allows individuals the opportunity to bypass both of the original IS nodes, C and D, with their mandatory 2 time unit delay. Use of this pathway is therefore inevitable. Despite this, use of path BGE increases the delay at the FCFS node, E, without any compensating reduction in delay time at either of the original IS nodes. The result is an overall increase in mean travel time. Identifying the Cohen-Kelly queuing paradox with an n-person choice matrix comparable to Prisoner's Dilemma is straightforward. Both are instances of non-co-operative pay-off structures in which
individual defection is dominant for all, but in which universal defection is Pareto-inefficient when compared to universal co-operation. Initially, the only significant difference is that the pay-off function used in the case of Prisoner's Dilemma is typically less fine-grained than that used in the queuing paradox. However, this difference can be eliminated by associating with the queuing network a stepped pay-off function similar to that used in Prisoner's Dilemma. As an example, consider a network similar to the one above, but in which k = 5, the mean delay time for both C and D is 2 hours, the mean delay for G is 1 hour, and n, the total traffic entering at A, is 4. Then n ≥ k - 1 > n/2 > 0. We then assume that you are one of the travellers and that you must decide whether to take advantage of route ABGEF. As before, you know that either a - b = 0 or a - b > 0. If a - b = 0, then at equilibrium the mean delay time at both B and E will be 1/(k - a) = 1/(k - (n - a)) = 1/(k - n/2) = 1/(5 - 2) = 20 minutes. Thus the mean travel time for each of ABCF and ADEF will be 2 hours and 20 minutes. In contrast, should a single traveller per unit of time elect to take route ABGEF, then a - b = 1. If, as before, a = n - a = n/2, then the mean delay time at B will be 1/(k - a) = 1/(5 - 2) = 20 minutes, while the mean delay time at E will be 1/(k - (n - b)) = 1/(5 - (4 - 1)) = 30 minutes. Thus, the mean travel time for ABCF will be 2 hours and 20 minutes, the mean travel time for ADEF will be 2 hours and 30 minutes, and the mean travel time for ABGEF will be 1 hour and 50 minutes. As more and more travellers select ABGEF this time will increase until a - b = n = 4 and the mean time becomes 1/(5 - 4) + 1 + 1/(5 - 4) = 1 + 1 + 1 = 3 hours. Thus, you know that, should you elect to use ABCF, your mean travel time would be 1/(5 - 4) + 2 = 3 hours; should you elect to use ADEF, your mean travel time would be 2 + 1/(5 - 4) = 3 hours; and should you elect to use ABGEF, your mean travel time would be 1/(5 - 4) + 1 + 1/(5 - 4) = 3 hours. Given these times it might initially appear that it makes no difference which of the three routes you elect to take. Yet this is not so. After all, you also know that if even one of the n - 1 travellers other than yourself fails to select route ABGEF, then your most efficient route will be through G. Dominance therefore dictates that you, too, will select ABGEF. To complete the identification of this paradox with the n-person Prisoner's Dilemma, all that remains is to associate the appropriate pay-offs with the required travel times and routes. We do so as follows: if you select route ABGEF (and thereby improve your travel time relative to both routes ABCF and ADEF) you will be awarded $10,000. In contrast, if some sufficient number of other travellers (which we call the "collective") fails to select route ABGEF, you will
be awarded (a possibly additional) $100,000. The number required to constitute the collective will be a function of the details of the particular queuing network. For the sake of argument let's assume that in the past your choice has coincided with that of the collective 70% of the time. As with the original Prisoner's Dilemma, four distinct pay-offs are then possible (Figure 9). Also as before, one strategy dominates: since nothing you do now can affect the routes taken by other travellers, you know that you will be $10,000 richer if you select route ABGEF. At the same time, you also know that it is much more likely that you will receive the $100,000 pay-off if you avoid route ABGEF. Expected utility therefore suggests that you should avoid route ABGEF. Simply calculated,
U(ABGEF) = (0.7)(10,000 + 0)u + (0.3)(10,000 + 100,000)u = 40,000u
while
U(~ABGEF) = (0.3)(0)u + (0.7)(100,000)u = 70,000u.
Provided that all travellers have been offered the same potential pay-offs, and that decreased travel time is seen solely as a means of maximizing one's pay-off, it turns out that the choice matrix used in the case of the queuing paradox is indistinguishable from that used in the case of the n-person Prisoner's Dilemma. From the point of view of the individual participant, it is also no different from that used in Newcomb's problem. Identifying the Cohen-Kelly queuing paradox with Braess' paradox is also comparatively straightforward. Provided that similar pay-offs are attached to the various equilibrium points of both networks, the
                                  Collective selection      Collective avoidance
                                  of route ABGEF            of route ABGEF
Your selection of route ABGEF     $10,000                   $110,000
Your avoidance of route ABGEF     $0                        $100,000

Figure 9: Decision matrix for Cohen-Kelly queuing paradox
two paradoxes can be shown to be structurally equivalent. Once again, both networks are cases in which system-determined equilibria fail to be Pareto-efficient. To understand why the Cohen-Kelly queuing network is structurally equivalent to the mechanical apparatus described in Section 2, simply map the components of the one system directly onto those of the other. Specifically, we begin by letting IS gates correspond to strings, FCFS gates correspond to springs, mean travel time correspond to equilibrium extension, and items of traffic correspond to units of downward force. (In other words, the correlates in the mechanical apparatus of the items of traffic in the queuing network need not be assumed to be discrete individuals. Nevertheless, to motivate the identification, one might imagine the strings and springs as constituting a system of hollow tubes through which "marbles" - units of downward force - flow. Each "marble" is given a "choice" as to which path it will follow. However, unlike the travellers in the queuing network - who are assumed to be rational agents attempting to maximize their individual pay-offs - the "marbles" in this network have their movements decided, not by the laws of rationality, but by the laws of physics.25) It follows that the three pathways in each of the two systems now also correspond. Route ABGEF corresponds to the line of tension, T1, consisting of two springs and the original centre string. Similarly, routes ABCF and ADEF correspond to the two remaining lines of tension, T2 and T3, each consisting of a single spring together with one of the two added safety strings. Just as additional traffic through route ABGEF increases mean travel time throughout the queuing network, increased force along T1 increases the equilibrium extension of the entire mechanical apparatus.26 Finally, we postulate that, just as shorter travel times in the queuing network result in larger pay-offs, shorter equilibrium extensions in our mechanical apparatus do the same. Specifically, for any designated unit of downward force (or marble) there will be a $10,000 pay-off if it follows path T1. There will also be a (possibly additional) $100,000 pay-off if some sufficient number of other units of force (or marbles) fail to follow path T1. The same in-principle combinations of pay-offs then appear here as appear in the case of the queuing network (Figure 9). In both cases, too, the likelihood that all travellers (or marbles) will act in concert can truthfully be claimed to be very high! Network equilibrium therefore fails to be Pareto-efficient. Both networks are systems in which individual defection is dominant even though this results in an overall decline in system efficiency. Just as travellers select route ABGEF even though both ABCF and ADEF are available, units of force place tension on T1 even though both T2 and T3 are also available. The
resulting pay-offs are therefore exactly the same in both cases and the identification between the two networks is complete.
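The travel times used in the concrete example above (k = 5, n = 4, two-hour IS delays at C and D, a one-hour IS delay at G) can be recovered from a few lines of arithmetic. The following sketch only evaluates the three route times for a given split of the flow, with a travellers per unit time reaching B and b of them continuing on to C; the function name and parameters are mine.

```python
def route_times(a, b, n=4, k=5, delay_cd=2.0, delay_g=1.0):
    """Mean travel times (in hours) on the augmented network.

    a: flow from A to B; b: flow from B to C (so a - b uses B-G-E);
    n: total flow entering at A; k: FCFS capacity constant.
    An FCFS node with arrival rate y imposes a mean delay of 1/(k - y);
    the IS delays at C/D and at G are fixed.
    """
    delay_B = 1 / (k - a)          # FCFS node B sees flow a
    delay_E = 1 / (k - (n - b))    # FCFS node E sees flow n - b
    return {
        "ABCF":  delay_B + delay_cd,
        "ADEF":  delay_cd + delay_E,
        "ABGEF": delay_B + delay_g + delay_E,
    }

print(route_times(a=2, b=2))  # a - b = 0: ABCF and ADEF both 2 h 20 min
print(route_times(a=2, b=1))  # one traveller on BGE: 2 h 20, 2 h 30, 1 h 50
print(route_times(a=4, b=0))  # everyone routes through G: all three take 3 h
```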
6. Resolving Newcomb's Problem
Newcomb's problem, Prisoner's Dilemma, the Cohen-Kelly queuing paradox and Braess' paradox are all instances of the same strategic, non-co-operative "game" (Figure 10). In other words, individual players - whether choosers, prisoners, items of traffic or units of force - select one of a variety of alternatives from within a given problem space in an attempt to maximize some goal - be it money, saved time, efficiency, or saved distance. Each alternative (or strategy) results in one of a variety of pay-offs. In some cases pay-off functions are initially assumed to be smooth, but all can be converted to stepped functions of the kind originally found in the two-person Prisoner's Dilemma or Newcomb's problem. In addition, in each case a single alternative dominates; it and it alone serves to maximize individual pay-off regardless of the choices made by other individuals. However, because of the collective effects that individual choices have upon the pay-offs of other individuals, these same actions also act as an equilibrium variable. The result is that user-determined equilibria vary from system-optimal equilibria: in cases where all individuals (or some other threshold percentage of individuals) select the dominant strategy, the resulting equilibrium fails to be system-optimal. It follows that universal defection is Pareto-inefficient when compared to universal co-operation, even

                       Newcomb's              Prisoner's             Queuing                Braess'
                       problem                dilemma                paradox                paradox
Problem space          Payoff Matrix          Decision Matrix        Queuing Network        Classical Mechanics
Players                Chooser                Two (or more)          Items of Traffic       Units of Force
                                              Prisoners
Strategy set           Dominance vs           Dominance vs           Dominance vs           Dominance vs
                       Expected Utility       Expected Utility       Expected Utility       Expected Utility
Payoff                 Money                  Time                   Efficiency             Distance
Equilibrium variable   Predictor's            Prisoner's             Variations in          Division of
                       Prediction             Decision               Traffic                Forces

Figure 10: Four paradoxes
though individual defection is dominant. Thus, just as a single Paretoinefficient equilibrium is guaranteed by the laws of physics in the case of Braess' paradox, the same type of Pareto-inefficient equilibrium is guaranteed by the laws of rationality in the Cohen-Kelly queuing paradox, Prisoner's Dilemma, and Newcomb's problem. What Braess' paradox shows is that there is nothing inconsistent in such an outcome. Just as with Braess' paradox, Newcomb's problem is a paradox in name only. Unintuitive it may be, but inconsistent or incoherent it is not. As with the Allais paradox, the Banach-Tarski paradox, Olbers' paradox, the paradoxes of special relativity and others, Newcomb's problem (like Braess' paradox) is therefore best characterized as simply a surprising consequence of a particular set of assumptions, rather than as a genuine contradiction. Yet if this is so, one question still remains: why is it that in the case of Newcomb's problem the argument from expected utility appears persuasive? Put in other words, why is it that it is regularly those players who avoid dominance who obtain the largest pay-offs? The same question also arises for both the Prisoner's Dilemma and the Cohen-Kelly queuing paradox. The argument from expected utility in both of these cases appears to be a strong one. Why is it that it is the travellers and prisoners who fail to defect who typically obtain the optimal outcomes? The answer is a simple one. It is to deny that such a situation in fact obtains. That is, it is to deny that, over the long run, it will continue to be those who avoid dominance, or who fail to defect, who will continue to receive the largest pay-offs. In yet other words, the solution to Newcomb's problem - like the solution to the Allais paradox, Olbers' paradox, the Banach-Tarski paradox, and others - involves an important modification to our original background assumptions. In this case the modification required is straightforward: we simply abandon the (false) assumption that past observed frequency is an infallible guide to probability and, with it, the claim that Newcomb's problem is in any sense a paradox of rationality. Expected utility does not conflict with dominance once it is realized that the appropriate probability assignments for any expected utility calculation will not be based solely upon past relative frequency. Instead, these assignments will be conditional upon both relative frequency and any additional information available. In the case of Newcomb's problem this additional information comes in the form of an argument from dominance. The situation is similar to one in which you know through independent means - e.g., careful physical measurements and a detailed structural analysis - that a tossed coin is fair even though it has landed tails some large number of times in a row. After all, it is perfectly consistent with the coin's being fair that it land tails
any number of times in a row. It is just that the probability of such an outcome decreases proportionally with the length of the uniformity. The same is true in the case of Newcomb's problem. The solution to the problem is simply to deny that selecting box B alone in any way affects or is indicative of the probability of its containing $100,000. Similarly in the case of the Prisoner's Dilemma, the solution is to deny that a single prisoner's failure to defect in any way affects or is indicative of the likelihood that other prisoners will also defect.27 Similarly in the case of the Cohen-Kelly queuing network, the solution is to deny that a single traveller's failure to select route ABGEF in any way affects or is indicative of the likelihood that other travellers will do the same. Such outcomes are no more likely in the case of Newcomb's problem, Prisoner's Dilemma, or the Cohen-Kelly queuing network than they are in the case of Braess' paradox. Which is to say, they are not likely at all.
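The point can be restated in expected-utility terms. Reading the 70% past frequency as a probability that is conditional on one's own choice makes one-boxing look better; treating the contents of box B as causally independent of the choice - the same probability p whichever choice is made - leaves two-boxing ahead by $10,000 for every value of p, so expected utility and dominance agree. The short sketch below is mine, not the author's, and simply tabulates that comparison.

```python
def expected_payoffs(p_full_if_one_box, p_full_if_two_box):
    """Expected dollar pay-offs of one-boxing and two-boxing, given the
    probability that box B holds $100,000 conditional on each choice."""
    one_box = p_full_if_one_box * 100_000
    two_box = 10_000 + p_full_if_two_box * 100_000
    return one_box, two_box

# Frequency read as a choice-dependent probability: one-boxing "wins".
one, two = expected_payoffs(0.7, 0.3)
print(f"choice-dependent probabilities: one-box {one:.0f}, two-box {two:.0f}")

# Causal independence: the same p applies to both choices, and
# two-boxing is $10,000 ahead no matter what p is.
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    one, two = expected_payoffs(p, p)
    print(f"p = {p}: one-box {one:.0f}, two-box {two:.0f}, difference {two - one:.0f}")
```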
Acknowledgments This paper is reprinted with minor revisions from International Studies in the Philosophy of Science, vol. 7, no. 2 (1993), pp. 141-60. Preliminary versions of the paper were read at the annual Dubrovnik Philosophy of Science Conference on 11 April 1992 and at the University of Victoria on 9 March 1993. Thanks go to members of both audiences, as well as to Eric Borm, Joel Cohen, Adam Constabaris, Colin Gartner, Joan Irvine, Kieren MacMillan, Steven Savitt, Howard Sobel, and Jeff Tupper for their helpful comments. In addition, special thanks go to Leslie Burkholder and Louis Marinoff whose many detailed suggestions and ideas were crucial in the writing of this paper.
Notes
1 In some cases both conditions will be met simultaneously.
2 Allais's original 1952 survey concluded that 65% of respondents prefer A1 to A2 while at the same time preferring A4 to A3. For example, see Munier (1991), p. 192.
3 For example, see Boolos (1971), pp. 215ff. The same is also true for some non-iterative set theories, including the theory of hypersets. For example, see Aczel (1988).
4 For example, see Jech (1977), p. 351.
5 For example, see Mermin (1969), chs. 16, 17 and passim. Similar "paradoxes" can be constructed with metre sticks or any other type of measuring device.
6 For example, see Irvine (1991), pp. 156-8. The paradox was first proposed in 1720 by Halley and restated more formally by the Swiss astronomer Cheseaux in 1743. The paradox gained its name after being popularized by the German astronomer Heinrich Wilhelm Matthaus Olbers in the 1820s.
7 See Braess (1968); Dafermos and Nagurney (1984a, 1984b); and Steinberg and Zangwill (1983). Also compare Cohen (1988).
8 For example, in electrical circuit design the addition of extra current-carrying routes can lead to a decrease, rather than an increase, in the flow of current. Similarly, in thermal networks an increased number of paths for heat flow can lead to a drop, rather than a rise, in temperature. See Cohen and Horowitz (1991).
9 The example is from Cohen and Horowitz (1991). The example is a helpful one to consider even though, strictly speaking, Braess' paradox originally concerned only congested transportation networks. The extension of the paradox to physical networks of this kind is due to Cohen and Horowitz.
10 Similar examples can be constructed to show that in some cases H2 > H1 and that in other cases H2 = H1. As an example of the former, let L1 = 1/8. It then follows that H1 = 1 1/8 and that H2 = 1 1/4. As an example of the latter, let L1 = 1/4. It then follows that H1 = H2 = 1 1/4.
11 Of course, compensation will have to be made for the idealizing assumptions made in the original thought experiment.
12 See Dubey (1986).
13 The problem, due originally to William Newcomb, was first published in Nozick (1969).
14 As is well known, in order for Newcomb's problem to arise, the predictive process need not be 100% accurate; any potentially predictive process above a given accuracy threshold will do. This threshold will be a function of the ratio, r, of the value of the contents of box A to the value of the contents of box B such that r = A / B and the threshold equals (1 + r)/2. In other words, as the difference between the values of the contents of the two boxes increases, the required accuracy threshold decreases, provided of course that the value of the contents of box A is not greater than the value of the contents of box B. In the above example this threshold will be (1 + 0.1)/2, or 55%, assuming that the value of the contents of both boxes is strictly proportional to the money the boxes contain.
15 As before, we let "U" represent an expected utility function and "u" some (arbitrary but, in this case, positive) unit of utility.
16 As with Newcomb's problem, any percentage of coincidence above a certain threshold will do. In this case the threshold will be a function of the ratio, r, of the value (to you) of your defection, to the value (to you) of my failure to defect, such that the threshold equals (1 + r)/2.
17 See Brams (1975) and Lewis (1979). For related discussion see Davis (1985); Marinoff (1992), esp. ch. 5; and Sobel (1985).
18 In other words, the role of the predictor is played by the other prisoner. For a real-life example in which an individual's action is believed to be predictive of the choices made by others in similar contexts, Quattrone and
Tversky present experimental evidence supporting the claim that many voters view their own choices as being diagnostic of the choices of other voters, despite the lack of causal connections. See Quattrone and Tversky (1986). However, it is important to note that, strictly speaking, the decision about whether to place $100,000 in box B need not be viewed as being based upon a prediction at all. Contra Lewis, Sobel and other commentators, any independent process - predictive or otherwise - will do the job. All that is required is that the appropriate coincidence threshold be met and that the contents of box B be determined in a manner that is causally independent of the decision which you make now.
19 As is well known, Prisoner's Dilemma can be generalized in another dimension as well, i.e. the dilemma can be iterated in such a way that each prisoner is required to make the decision about whether to defect, not once, but many times. See Axelrod (1980a, 1980b, 1984); and Danielson (1992).
20 Also called the "tragedy of the commons," the free-rider problem dates back at least to Hume and his meadow-draining project. See Hume (1739-40), III, ii, vii.
21 As described above, the free-rider problem admits of several variants. Philip Pettit, for example, distinguishes between two types of cases: (1) cases in which a single defection will lower the pay-off for all participants other than the defector, and (2) cases in which a single defection will fail to do so. The former of these two cases, in which even a single defector is capable of altering the collective outcome, Pettit calls the "foul-dealing problem"; the latter, in which each person's individual contribution is insufficient to alter the collective outcome, Pettit calls the "free-rider problem." The distinction arises since in the n-person case, unlike in the two-person dilemma, there is no uniquely preferred well-ordering of outcomes. Although the distinction may have strategic consequences, it can be passed over safely in this context since it is the general (n-person) case we are examining. Following Jean Hampton, I shall call both types of dilemma "free-rider" problems since in both cases individuals find it preferable to take advantage of others' contributions to the corporate good without themselves contributing. See Pettit (1986); and Hampton (1987). Compare Hardin (1971).
22 Although other conditions can be used in the construction of n-person choice matrices, these two conditions, and these two alone, are typically taken as standard. For example, see Sen (1969); Taylor (1976); and Pettit (1986).
23 This percentage of the other n - 1 prisoners will vary depending upon the details of each specific dilemma. As in both Newcomb's problem and the two-person Prisoner's Dilemma, the probability of your choice coinciding with the relevant "prediction" must be greater than (1 + r)/2 where, as
before, r is the ratio between the value (to you) of your defection and the value (to you) of a non-defecting "prediction". The only difference is that in this case the role of the predictor is played, not by a single second prisoner, but by the collective behaviour of the n - 1 prisoners other than yourself. If we assume that all n - 1 prisoners other than yourself are equally likely to defect, the (minimum) required percentage of agreement among them will be an implicit inverse function of (1 + r) / 2. In short, the greater the required accuracy of the prediction (or, equivalently, the smaller the difference in value between the two pay-offs), the lower the percentage of agreement among the n - 1 prisoners other than yourself that will be required. Specifically, Louis Marinoff has pointed out to me that if f is the frequency of co-operation on the part of the collective, then we can calculate P(f), the probability with which f exceeds the required threshold, as follows: Assume that the probability, x, of an individual's co-operation is uniform over all n - 1 players other than yourself. Then the probability that k other players will co-operate is the product of the number of possible states of k co-operative players and the probability that each such state obtains. Specifically,

    [(n - 1)! / (k! (n - 1 - k)!)] x^k (1 - x)^(n - 1 - k)

Similarly, the probability that k + 1 other players will co-operate is

    [(n - 1)! / ((k + 1)! (n - 2 - k)!)] x^(k + 1) (1 - x)^(n - 2 - k)
Now, if we let k be the smallest integer such that k / (n - 1) > (1 + r) / 2, then it follows that

    P(f) = Σ_{j=k}^{n-1} [(n - 1)! / (j! (n - 1 - j)!)] x^j (1 - x)^(n - 1 - j)

24 In its essentials, the example is from Cohen and Kelly (1990).
25 Should it be required - for the sake of analogy between Braess' paradox and the other paradoxes discussed - that rational agents be associated with the selection of outcomes, simply associate with each outcome a rational gambler who has bet on that outcome and who will receive the appropriate pay-off.
26 It is worth noting that in both systems there is also the equivalent of a conservation law. In the queuing network (at equilibrium), traffic in equals traffic out at every node, while in the mechanical apparatus (at equilibrium), mechanical force upward equals mechanical force downward at every point in the system.
27 Importantly, it is on just this point that the single-case Prisoner's Dilemma differs from the iterated case. As a result, the iterated case may better be viewed as a type of co-ordination problem. As with co-ordination problems in general, the iterated case then involves more than the simple preference rankings associated with the single case.
References
Aczel, Peter (1988). Non-well-founded Sets. CSLI Lecture Notes No. 14. Stanford, CA: Center for the Study of Language and Information.
Axelrod, Robert (1980a). Effective choice in the Prisoner's Dilemma. Journal of Conflict Resolution, 24: 3-25.
(1980b). More effective choice in the Prisoner's Dilemma. Journal of Conflict Resolution, 24: 379-403.
(1984). The Evolution of Cooperation. New York: Basic Books.
Boolos, George (1971). The iterative conception of set. Journal of Philosophy, 68: 215-32. Reprinted in Paul Benacerraf and Hilary Putnam (eds), Philosophy of Mathematics, 2nd ed. (Cambridge: Cambridge University Press, 1983), 486-502.
Braess, D. (1968). Über ein Paradoxon aus der Verkehrsplanung. Unternehmensforschung, 12: 258-68.
Brams, S. (1975). Newcomb's problem and Prisoner's Dilemma. Journal of Conflict Resolution, 19: 596-612.
Cohen, Joel E. (1988). The counterintuitive in conflict and cooperation. American Scientist, 76: 577-84.
Cohen, Joel E., and Paul Horowitz (1991). Paradoxical behaviour of mechanical and electrical networks. Nature, 352 (22 August): 699-701.
Cohen, Joel E., and Frank P. Kelly (1990). A paradox of congestion in a queuing network. Journal of Applied Probability, 27: 730-34.
Dafermos, S., and A. Nagurney (1984a). On some traffic equilibrium theory paradoxes. Transportation Research: Part B, 18: 101-10.
(1984b). Sensitivity analysis for the asymmetric network equilibrium problem. Mathematical Programming, 28: 174-84.
Danielson, Peter (1992). Artificial Morality. London: Routledge.
Davis, Lawrence H. (1985). Is the symmetry argument valid? In Richmond Campbell and Lanning Sowden (eds), Paradoxes of Rationality and Cooperation (Vancouver: University of British Columbia Press), pp. 255-63.
Dubey, Pradeep (1986). Inefficiency of Nash equilibria. Mathematics of Operations Research, 11: 1-8.
Hampton, Jean (1987). Free-rider problems in the production of collective goods. Economics and Philosophy, 3: 245-73.
Hardin, Russell (1971). Collective action as an agreeable n-Prisoner's Dilemma. Behavioural Science, 16: 472-81.
Hume, David (1739-40). A Treatise of Human Nature. London.
Irvine, A. D. (1991). Thought experiments in scientific reasoning. In Tamara Horowitz and Gerald J. Massey (eds), Thought Experiments in Science and Philosophy (Savage, MD: Rowman and Littlefield), pp. 149-65.
Jech, Thomas J. (1977). About the axiom of choice. In Jon Barwise (ed.), Handbook of Mathematical Logic (Amsterdam: North-Holland), pp. 345-70.
Lewis, David (1979). Prisoners' Dilemma is a Newcomb problem. Philosophy and Public Affairs, 8: 235-40. Reprinted in Richmond Campbell and Lanning Sowden (eds), Paradoxes of Rationality and Cooperation (Vancouver: University of British Columbia Press, 1985), pp. 251-55.
Marinoff, Louis (1992). Strategic Interaction in the Prisoner's Dilemma. Doctoral thesis, Department of the History and Philosophy of Science, University College London.
Mermin, N. David (1969). Space and Time in Special Relativity. Prospect Heights, IL: Waveland Press.
Munier, Bertrand R. (1991). The many other Allais paradoxes. Journal of Economic Perspectives, 5: 179-99.
Nozick, Robert (1969). Newcomb's problem and two principles of choice. In Nicholas Rescher et al. (eds), Essays in Honor of Carl G. Hempel (Dordrecht: Reidel), pp. 114-46. Abridged and reprinted in Richmond Campbell and Lanning Sowden (eds), Paradoxes of Rationality and Cooperation (Vancouver: University of British Columbia Press, 1985), pp. 107-33.
Pettit, Philip (1986). Free riding and foul dealing. Journal of Philosophy, 83: 361-79.
Quattrone, George A., and Amos Tversky (1986). Self-deception and the voter's illusion. In Jon Elster (ed.), The Multiple Self (Cambridge: Cambridge University Press), pp. 35-58.
Sen, Amartya (1969). A game-theoretic analysis of theories of collectivism in allocation. In Tapas Majumdar (ed.), Growth and Choice (London: Oxford University Press), pp. 1-17.
Sobel, J. Howard (1985). Not every Prisoner's Dilemma is a Newcomb problem. In Richmond Campbell and Lanning Sowden (eds), Paradoxes of Rationality and Cooperation (Vancouver: University of British Columbia Press), pp. 263-74.
Steinberg, R., and W. Zangwill (1983). The prevalence of Braess' paradox. Transportation Science, 17: 301-18.
Taylor, Michael (1976). Anarchy and Cooperation. London: Wiley.
6
Economics of the Prisoner's Dilemma: A Background
Bryan R. Routledge
1. Introduction
At the recent conference in Vancouver, scholars in philosophy, biology, economics, cognitive science and other fields gathered to discuss "Modeling Rational and Moral Agents."1 The vast majority of the papers used or referred to the Prisoner's Dilemma (PD). With its stark contrast between individual rationality and collective optimality, it is a natural vehicle for addressing questions about what is moral and what is rational. In addition, many of the papers presented (and included in this volume) build on ideas from evolutionary biology. Since individual versus species fitness runs parallel to individual versus collective rationality, the Prisoner's Dilemma also serves to link biologists and social scientists.2 This paper offers a brief overview of some of the research related to the PD in economics.
Since its introduction in the 1950s,3 the folk story associated with the basic PD has become well known. A clever jailer (DA/Crown Attorney/Sheriff) offers each thief a reduced jail term if he testifies against the other. Since the jail term is lower regardless of what the other chooses to do, both rationally fink on each other. This occurs despite the fact that mutual silence would produce less time in jail than both testifying. Individually rational behaviour leads to collective suboptimality.4 This point is underscored by the fact that staying silent is often called "co-operation" while finking is often referred to as "defection." Surprisingly, many economic and social situations are analogous to that of the two thieves, with fate playing the role of the clever jailer. Axelrod (1984), for example, offers many such examples.5 Despite this seeming simplicity, there are many subtle factors which can affect the behaviour of agents in a PD situation. Much of what makes the PD so useful to social scientists comes from these variations. Economic theory, particularly the field of game theory, has addressed many of these subtleties. This paper offers a brief (and far from complete)6 survey of economic theory and evidence related to
PD situations. For a more comprehensive consideration of strategic situations and game theory see any one of a number of excellent books on the subject.7
The chapter is organized as follows. Section 2 concerns one-shot PD situations. In this section the formal game is introduced and the nature of the equilibrium is discussed. Some of the "solutions" to the dilemma in the one-shot game are considered, including communication, commitment, and altruism. Section 3 considers repeated play of the PD. In repeated play, the strategy space available to the agents is much richer. This richness leads to issues of sub-game perfection, common knowledge of rationality, and renegotiation. Equilibria for both finitely and infinitely (or indefinitely) repeated games are considered, as are several modifications of the finitely repeated game which permit co-operation. The final section touches on experimental evidence, evolutionary arguments, and multi-person PDs.
2. One-shot game
A. Description of the Game
The traditional one-shot game can be represented as a normal form game (as opposed to an extensive form game, which is mentioned in the next section). A normal form game can be described by the set G:

    G = ⟨N, (S_i)_{i∈N}, (u_i)_{i∈N}⟩    (1)
The game is defined by three elements: a finite set of players (N), and a finite set of strategies (S_i) and preferences (u_i) for each player. In the PD, there are only two agents, so N = {1,2}. S_i is the set of pure strategies available to agent i.8 In the PD, this set is the same for both players and is S_1 = S_2 = {c,d}. A strategy profile is a combination of strategies for all the agents. The set of all possible strategy profiles, S, is defined as follows:

    S = S_1 × S_2    (2)
The set of all strategy profiles, or the possible outcomes from the game, in the PD contains only four elements:

    S = {(c,c), (c,d), (d,c), (d,d)}    (3)
where (s_1, s_2) are the strategies of agents 1 and 2 respectively. The final element in the description of the normal-form game is the agents' preferences. The preferences are over the set of possible strategy profiles. We will assume that these preferences are representable
by a von Neumann-Morgenstern utility function.9 The utility function maps possible strategy profiles (or outcomes of the game) to real numbers for each agent: u_i: S → ℝ. It is important to note that while we motivate the PD game with payoffs described by years in jail or money, formally preferences relate to the set of strategy profiles. This point comes up when we consider various resolutions of the PD in part (C) below. A PD game can be completely described by the set GPD as follows:10

    GPD = ⟨{1,2}, {c,d} × {c,d}, (u_1, u_2)⟩, with u_1(d,c) > u_1(c,c) > u_1(d,d) > u_1(c,d) and u_2(c,d) > u_2(c,c) > u_2(d,d) > u_2(d,c)    (4)
However, normal-form games can be more easily summarized in a matrix. The specific PD game used in this paper is presented in Figure 1.

                      Agent 2
                  c          d
    Agent 1  c   (2,2)      (0,3)
             d   (3,0)      (1,1)
Figure 1: The PD game in normal form. The utilities are shown as (agent 1, agent 2).

Before continuing it is important to note a few of the assumptions that are often implicit in game theory. The first is that agents make choices to maximize their well-being. In a narrow sense, agents are self-interested. However, preferences are defined only on S. The particular reason why an agent prefers (c,c) to (d,d) is not modeled. Utility only reflects the agents' preference over an outcome; whether that utility is derived out of altruism or spite is not important. This topic is addressed again in part (C). The second implicit assumption in the game is that GPD or the matrix in Figure 1 represents everything. In particular, time does not play a role in the game. Agent behaviour is not a reaction to past events nor does it consider the future. We can think, therefore, that the moves are simultaneous. In normal-form games, there is no distinction between behaviour and strategy. To consider other elements (like timing), the normal-form game needs to be changed or, as is often easier, an extensive form game is developed. This is considered further below and in the second portion of the paper on repeated games. Finally, GPD is assumed to be common knowledge. By this, it is assumed that the agents know the structure of the game. In addition,
agents know that their opponent knows GPD, that their opponent knows that they know GPD, and so on. Due to the nature of the equilibrium, which is discussed next, the issue of common knowledge is not important in the one-shot PD. However, it becomes crucial when considering the repeated PD game, as in Section 3. For more information on the role of common knowledge see Geanakoplos (1992) or Brandenburger (1992).
B. Equilibrium
i. Nash Equilibrium
The central equilibrium concept in game theory is due to Nash (1951). A Nash equilibrium is defined as a strategy profile in which no agent, taking the strategies of the other agents as given, wishes to change her strategy choice. The strategy profile forms a mutual best response. To state this concept precisely we need to define s\r_i. For s ∈ S and r_i ∈ S_i define

    s\r_i = (s_1, ..., s_{i-1}, r_i, s_{i+1}, ..., s_n)    (5)
In other words, s\r_i is a strategy profile identical to s except with agent i playing r_i instead of s_i. A Nash equilibrium is a strategy profile s such that

    u_i(s) ≥ u_i(s\r_i)    (6)
for all r_i ∈ S_i and i ∈ N.11 This definition and an interesting discussion of the concept can be found in Kreps (1987). Nash equilibria have the property that they always exist.12 However, they also raise several problems. In particular, they may not be unique. Furthermore, there are examples of games where the Nash equilibria lack Schelling's (1960) concept of "focal quality" (see Kreps (1987) for examples). Finally, some Nash equilibria are robust only to single-agent deviations. A group of agents (a coalition) may find it profitable to jointly change strategy and improve their welfare. While all these are potential difficulties with the Nash concept, they are not problems in the PD. The reason why the (d,d) equilibrium is so strong is discussed next.
ii. Dominant Strategies
It is easy to verify that (d,d) is the unique Nash equilibrium for the PD game.13 A simple check of all the strategy profiles, listed in (3), verifies this. However, the (d,d) equilibrium is stronger. There is no need for a subtle reasoning process in which the agent must conjecture about what the other agent will do. Regardless of what player 2 chooses, player 1 always does better with the choice of d.
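The exhaustive check just described is easy to mechanize. The following sketch (in Python; not part of the original chapter) simply transcribes the Figure 1 payoffs and confirms both that (d,d) is the only profile from which neither agent gains by a unilateral deviation and that d strictly dominates c.

```python
# Pure-strategy Nash check for the PD of Figure 1 (a sketch; the payoff
# numbers are those shown in the matrix).
from itertools import product

STRATS = ["c", "d"]
# u[(s1, s2)] = (payoff to agent 1, payoff to agent 2)
u = {("c", "c"): (2, 2), ("c", "d"): (0, 3),
     ("d", "c"): (3, 0), ("d", "d"): (1, 1)}

def is_nash(s1, s2):
    """True if neither agent can raise her own payoff by deviating alone."""
    best1 = all(u[(s1, s2)][0] >= u[(r, s2)][0] for r in STRATS)
    best2 = all(u[(s1, s2)][1] >= u[(s1, r)][1] for r in STRATS)
    return best1 and best2

print([s for s in product(STRATS, STRATS) if is_nash(*s)])      # [('d', 'd')]

# d also strictly dominates c for agent 1 (and, by symmetry, for agent 2):
print(all(u[("d", s2)][0] > u[("c", s2)][0] for s2 in STRATS))   # True
```

The same brute-force test applies unchanged to any finite two-player matrix game.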
Formally, for agent i, strategy q_i is strictly dominated by r_i if for all s ∈ S,

    u_i(s\r_i) > u_i(s\q_i)    (7)
Note that for each agent d strictly dominates c. The assumption that agents do not play dominated strategies yields the unique (d,d) equilibrium. Note that agents need not assume anything about their opponents. This is why the common knowledge assumption can be substantially relaxed without changing the equilibrium. This contrasts with the finitely repeated game considered later, where agents use the information that their opponent does not play dominated strategies.
iii. What Makes the PD a PD
There are two characteristics that make the game in GPD or Figure 1 a prisoner's dilemma. The first is the nature of the equilibrium as one where each player has one strategy which dominates all others (d dominates c). The second is that this unique equilibrium is Pareto-inefficient. Both players are made strictly better off by jointly playing c. The dilemma posed by these two defining characteristics is what makes the PD an interesting test case for rationality and morality. As in many economic models, externalities exist in the PD. Agent 1's choice of strategy affects agent 2's utility. Since this impact is external to agent 1's decision problem (i.e., it does not affect agent 1's utility), the optimal decision for agent 1 does not coincide with the jointly optimal decision. This form of inefficiency is common in many public policy decisions such as pollution control, common pool resources, employee training, and car pooling. The solution to externality problems often involves public policies (such as taxes) that cause agents to internalize these external effects.14
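To illustrate the internalization point, the sketch below imposes a hypothetical per-agent tax t on the defect action and re-runs the dominance comparison on the Figure 1 payoffs; the linear tax and the particular values of t are assumptions made purely for illustration, not part of the chapter's model.

```python
# Hypothetical Pigouvian-style tax on defection, applied to the Figure 1
# payoffs.  With these numbers the one-period gain from d over c is 1 for
# either opponent action, so any tax t > 1 removes the dominance of d.
u = {("c", "c"): (2, 2), ("c", "d"): (0, 3),
     ("d", "c"): (3, 0), ("d", "d"): (1, 1)}

def taxed(profile, t):
    p1, p2 = u[profile]
    s1, s2 = profile
    return (p1 - (t if s1 == "d" else 0), p2 - (t if s2 == "d" else 0))

for t in (0.5, 1.5):
    d_dominates = all(taxed(("d", s2), t)[0] > taxed(("c", s2), t)[0]
                      for s2 in ("c", "d"))
    print(f"t = {t}: d still strictly dominates c for agent 1? {d_dominates}")
# t = 0.5: True; t = 1.5: False
```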
C. "Solutions" to the Dilemma There are no "solutions" to the dilemma in the normal-form game, which is summarized in Figure 1. Aumann states this strongly as: "People who fail to co-operate for their own mutual benefit are not necessarily foolish or irrational; they may be acting perfectly rationally. The sooner we accept this, the sooner we can take steps to design the terms of social intercourse so as to encourage co-operation."15 Resolutions of the dilemma involve altering the nature of the game. This section discusses some of the more commonly suggested variations. In these cases where the inefficient equilibrium is circumvented, d is no longer a dominant strategy and the new game is no longer a PD. However, the alterations are interesting and their analysis is not trivial, so it is important that they be considered carefully.
There are two broad ways to alter the PD game: either the strategy space, S_i, or the preferences, u_i, can be modified. First, I will consider alterations to the strategy space. Second, the notions of altruism and other preference modifications will be considered.
i. Changing the Strategy Space
The description of the PD game embodied in GPD (or Figure 1) overlooks some features which are common in many social settings. In particular, this section discusses communication and commitment. Modifying the game to include repeated interactions of the PD, where reputation plays a role, is left to Section 3. The folk story of the PD often has the two thieves in separate cells, unable to communicate. However, simply expanding the strategy space to allow communication will not eliminate the inefficient equilibrium. For example, consider allowing agents to send m, the message that says "I will co-operate," or n, which can be interpreted as remaining silent. Since there is no truth-telling constraint on the message, the defecting strategies (md and nd) still dominate the co-operative ones (mc and nc). The normal-form matrix is presented in Figure 2. There are multiple equilibria for this game. However, they all exhibit the joint defection of both players.16 The second modification of the strategy space is to allow the agents to commit. The agents would be better off if they could commit to play a dominated strategy. However, for this to be an equilibrium this commitment itself cannot be dominated. For example, allowing our two thieves the opportunity to commit to c or d before conducting their crime does nothing, since both thieves will commit to d. Strategy d remains a dominant strategy. The sort of commitment that does solve the dilemma is a conditional commitment. A social mechanism that allows an agent to "co-operate only if the other co-operates, else defect" (called cc) is not a dominated strategy. This game is shown in Figure 3.
                          Agent 2
                  mc       nc       md       nd
    Agent 1  mc  (2,2)    (2,2)    (0,3)    (0,3)
             nc  (2,2)    (2,2)    (0,3)    (0,3)
             md  (3,0)    (3,0)    (1,1)    (1,1)
             nd  (3,0)    (3,0)    (1,1)    (1,1)
Figure 2: The PD game with communication. The utilities are shown as (agent 1, agent 2).
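The claim that cheap talk leaves the outcome unchanged can be verified directly. The sketch below enumerates the pure-strategy Nash equilibria of the communication game in Figure 2, assuming (as in the text) that only the c/d component of a strategy affects payoffs; every equilibrium it finds has both agents choosing a defecting strategy (md or nd).

```python
# Pure-strategy Nash equilibria of the communication game in Figure 2.
from itertools import product

STRATS = ["mc", "nc", "md", "nd"]

def payoff1(s1, s2):
    """Agent 1's payoff: the message is cheap talk, only the c/d part matters."""
    a1, a2 = s1[-1], s2[-1]          # last letter is the action actually played
    table = {("c", "c"): 2, ("c", "d"): 0, ("d", "c"): 3, ("d", "d"): 1}
    return table[(a1, a2)]

def is_nash(s1, s2):
    # the game is symmetric, so agent 2's payoff is payoff1 with roles swapped
    return (all(payoff1(s1, s2) >= payoff1(r, s2) for r in STRATS) and
            all(payoff1(s2, s1) >= payoff1(r, s1) for r in STRATS))

eqs = [s for s in product(STRATS, STRATS) if is_nash(*s)]
print(eqs)
print(all(s1 in ("md", "nd") and s2 in ("md", "nd") for s1, s2 in eqs))  # True
```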
There are two Nash equilibria in the game shown in Figure 3. The (d,d) equilibrium remains, but now (cc,cc), yielding the Pareto-superior outcome, is also an equilibrium.17 The PD game modified to allow conditional co-operation discussed here is similar to the game used in Danielson (1992) or Gauthier (1986) and used in the evolutionary environment of Danielson (this volume). In addition, the discussion by Frank (1987) of humans' ability to "read" one another's type can be interpreted as allowing this sort of conditional co-operation strategy.18

                          Agent 2
                  c        d        cc
    Agent 1  c   (2,2)    (0,3)    (2,2)
             d   (3,0)    (1,1)    (1,1)
             cc  (2,2)    (1,1)    (2,2)
Figure 3: The PD game with conditional commitment. The utilities are shown as (agent 1, agent 2).

ii. Changing the Preferences
The finger is often pointed at self-interest as the cause of the PD. If the two thieves were (completely) altruistic, agent 1 would co-operate (stay silent) hoping that agent 2 would defect (fink) so that agent 2 would be set free. If both acted this way they would achieve the Pareto-optimal (c,c) outcome. However, this solution is a bit hollow since it leaves open the possibility that a very clever sheriff (or just bad luck) could, by altering the jail terms, recreate the (d,d) equilibrium.19 Altruism may resolve the dilemma in some situations but not others. The following shows that the amount of altruism required to prevent all PDs is extremely precise. To consider the effect of altruism, consider the matrix of payoffs in Figure 4. Unlike Figure 1, these payoffs are dollars and not utilities. What preferences over these dollar payoffs will prevent PD situations? Answering this question will help us understand the role of altruism in the PD situation. To simplify things, consider a set of agents with identical, monotonically increasing and continuous preferences over the dollar payoffs. For x_1, x_2 ∈ ℝ, where x_1 is the dollars received by the agent and x_2 is the dollars received by the other agent, U(x_1, x_2) will represent the utility to the agent. A strongly self-interested agent puts no weight on the x_2 dollars in his utility, while a completely altruistic agent puts no weight on x_1 dollars.20 The following proposition demonstrates that agents must put equal weight on x_1 and x_2 to avoid PD situations.21

Proposition: No PD exists if and only if U(α,β) = U(β,α) for all α, β ∈ ℝ.

Proof: First note, using the Figure 4 dollar payoffs, that a PD exists when for some α, β, γ, η ∈ ℝ,

    U(α,β) > U(γ,γ) > U(η,η) > U(β,α)    (8)
Sufficiency: U(α,β) = U(β,α) for all α, β ∈ ℝ implies that (8) cannot hold. Necessity: If U(α,β) > U(β,α) for some α, β ∈ ℝ, then by the continuity and monotonicity of U, a γ and an η can be found such that (8) holds.
In this context, the amount of altruism required is exact. If agents are too selfish or too altruistic, PD situations can arise. For a series of interesting papers on altruism in this and similar contexts, see Stark (1989) and Bernheim and Stark (1988). In particular, these papers consider a different definition of altruism where my utility is a function of your utility rather than your dollars. The latter article considers the "surprising and unfortunate" ways altruism can affect a utility possibility frontier. Finally, Arend (1995) has, in the context of joint ventures, considered the role of a bidding process to eliminate the PD. The bidding process, by auctioning the sum of the dollars in each outcome, effectively creates the equally weighted altruism needed to alleviate the PD.
The second variation on preferences often considered is to introduce a preference for "doing the right thing." In the above example, utility depended solely on the dollar payoffs to the agent and her opponent and not on the acts c or d. Clearly, introducing preferences directly on the action can eliminate PD situations by ensuring that u_i(c,c) > u_i(d,c) regardless of the dollar payoffs, years in jail, or other such domains for utility. Margolis (1982) and Etzioni (1987), for example, discuss this issue in much more detail.

                      Agent 2
                  c          d
    Agent 1  c   (γ,γ)      (β,α)
             d   (α,β)      (η,η)
Figure 4: Dollar payoffs for strategy profiles. The dollar payoffs are shown as (agent 1, agent 2) with α, β, γ, η ∈ ℝ.
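A small numerical search illustrates the proposition. The weighted utility family U(x1, x2) = (1 - w)x1 + wx2 and the grid of dollar values below are assumptions introduced only for the illustration; the point is that the ordering in (8) can be produced for any weight except the equal one.

```python
# Searching for a PD ordering U(a,b) > U(g,g) > U(e,e) > U(b,a) under the
# weighted utility U(x1, x2) = (1 - w)*x1 + w*x2 (an illustrative family).
from itertools import product

def U(x1, x2, w):
    return (1 - w) * x1 + w * x2

def pd_exists(w, grid=range(0, 6)):
    for a, b, g, e in product(grid, repeat=4):
        if U(a, b, w) > U(g, g, w) > U(e, e, w) > U(b, a, w):
            return (a, b, g, e)
    return None

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(w, pd_exists(w))
# Only w = 0.5 (equal weight on own and other's dollars) returns None:
# with equal weights U(a,b) = U(b,a), so the chain in (8) cannot close.
```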
iii. Institutions
Clearly, if PD situations were indeed common, inefficiency would be chronic. Fortunately, many social institutions have evolved to implement various solutions, or more correctly modifications, to the PD. Some social institutions alter the strategy space while others foster altruism. The most common institutions are those which facilitate repeated interaction, where reputation and reciprocity can play a role. Repetition is the topic of the next section. Institutional and transaction-cost economics are beyond the scope of this survey. However, they do address many situations related to the PD. North (1991), for example, discusses the evolution of institutions relating to trade, which he models as a PD. Williamson (1985) offers a different perspective using problems similar in character to the PD.

3. Repeated Game
The most important modification of the PD to consider is the effect of repeatedly playing the game. When one introduces multi-stage play, notions of reputation and punishment can play a role. For this reason the repeated game is both more complicated and more realistic. This section begins with a few definitions for the repeated game. Next, the nature of the equilibrium is considered in both the finitely repeated game and the infinitely repeated game. The section concludes by considering some interesting modifications to the basic repeated PD. In particular, some of these modifications suggest co-operation is a possible equilibrium even in the finitely repeated game.
A. Description of the Repeated PD
A repeated game is often best described in extensive form. The extensive form is a set which includes a description of the players, their utilities, the order of play (often represented as a tree), the information available to the agents each time they are called to move, and the actions which are available. However, since the repeated PD game is easily decomposed into a series of periods or stages, a full discussion of the extensive form will be omitted.22 Instead, the repeated PD can be analyzed after considering strategies and preferences in the multi-stage environment. The stages of the game are denoted as t = 0, 1, 2, ..., T (where T may be finite or infinite). Unlike in the one-shot game (where T = 0), it is important to distinguish between actions (moves) and strategies. At each stage, the agents simultaneously take actions, s_i(t) ∈ S_i. Note that in the PD, the set of available actions, S_i = {c,d}, does not depend on time or past moves as it might in a more complex game.23 The pair of actions played by the agents at stage t is s(t) = (s_1(t), s_2(t)). Each s(t) is a member of S in (3) and contributes to the history of the game. Let h(t) = (s(0),
s(1), ..., s(t-1)) represent the history of the game (where h(0) = ∅) and let H(t) represent the set of all possible histories to stage t.24 In the repeated PD, when acting at stage t, agents know the history h(t). A strategy, therefore, must give a complete contingent plan, which provides an action for all stages and histories. Formally, a strategy for agent i is defined as a sequence of maps σ_i = (σ_i^t)_{t=0}^{T}, where each σ_i^t: H(t) → S_i maps possible histories into actions. Note that in the repeated PD, since S_i = {c,d} is fixed, we can abbreviate the notation by describing strategies as one (more complicated) function σ_i: H → {c,d}, where H = ∪_{t=0}^{T} H(t) is the set of all histories. The second element that needs consideration in the multi-stage game is agents' preferences. A strategy profile, σ = (σ_1, σ_2), determines the agents' actions at every stage, and agent i's utility from the profile is

    U_i(σ) = (1 - δ) Σ_{t=0}^{T} δ^t u_i(s_σ(t))    (9)
In this definition, s_σ(t) describes the actions induced for the agents by σ and the history to t. The u_i are the stage payoffs (or period utility) from Figure 1 (i.e., u_i: S → ℝ). Finally, 0 < δ < 1 is the discount factor, which plays an important role for infinite games (i.e., where T = ∞), and the (1 - δ) is simply a normalization.25
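For concreteness, the normalized discounted sum in (9) can be evaluated by simulation for any pair of history-dependent strategies. The sketch below does so for two all-d players and for two grim-trigger players using the Figure 1 stage payoffs; the finite truncation of the horizon is an approximation introduced here, justified by δ < 1.

```python
# Discounted utility (9) for the repeated PD, evaluated by simulation.
# Strategies are functions from the history (a list of past action pairs)
# to an action; the horizon is truncated at T, which approximates the
# infinite game arbitrarily well since delta < 1.
STAGE = {("c", "c"): (2, 2), ("c", "d"): (0, 3),
         ("d", "c"): (3, 0), ("d", "d"): (1, 1)}

def all_d(history):
    return "d"

def grim(history):
    return "c" if all(pair == ("c", "c") for pair in history) else "d"

def utilities(strat1, strat2, delta, T=500):
    history, u1, u2 = [], 0.0, 0.0
    for t in range(T + 1):
        # each strategy sees the history from its own point of view
        s = (strat1(history), strat2([(b, a) for a, b in history]))
        p1, p2 = STAGE[s]
        u1 += (1 - delta) * delta**t * p1
        u2 += (1 - delta) * delta**t * p2
        history.append(s)
    return u1, u2

print(utilities(all_d, all_d, delta=0.9))   # roughly (1, 1)
print(utilities(grim, grim, delta=0.9))     # roughly (2, 2)
```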
B. Finite Repeated Game - Equilibrium
With the exception of the obvious change in notation, the definition of a Nash equilibrium in (6) can be used. The actions induced by an equilibrium strategy profile are called the equilibrium path. In this section, the PD will be repeated a finite number of times. The striking feature of this game is that no matter how large the number of repetitions, as long as T is finite, the equilibrium path consists solely of (d,d). To establish that all equilibrium paths consist only of (d,d), first consider the final stage. In the final stage, T, strategies which call for d strictly dominate those that call for c. Thus, the equilibrium must have (d,d) at stage T. In the second-last period (T-1), action d yields a larger stage payoff. In addition, play at stage T-1 does not influence equilibrium play in stage T. Thus, strategies that call for d at T-1 dominate. This logic extends right back to the initial stage.
i. Sub-Game Perfection
The above discussion has focused on the actions which occur on the equilibrium path. It is possible that c's occur off the equilibrium path.
The off-equilibrium path consists of actions that would have been taken if certain histories (not on the equilibrium path) were reached. Since agents' action choice on these branches is payoff irrelevant, Nash equilibrium places little restriction on this behaviour. Thus, there are usually multiple Nash equilibria in repeated games. Selten proposed a refinement of Nash equilibria that requires that strategies form an equilibrium in every possible sub-game. In some games (not the repeated PD) this plays a role since never used "threats" can affect other agents' choices. Sub-game perfection requires these threats to be credible.26 Sub-game perfection in the finitely repeated PD only affects off-equilibrium behaviour. In particular, the only sub-game perfect equilibrium is one where the agents' strategies call for d at every stage and for all histories. However, sub-game perfection arguments are not required to establish that no co-operative play is observed in equilibrium. ii. Dominant Strategies
Despite the similarity in the equilibria, the repeated PD does differ from the one-shot PD. As in the one-shot game, the equilibria in the finitely repeated game are Pareto-inferior. However, in the repeated PD game there is no dominant strategy. For example, suppose agent 1 played a tit-for-tat (TFT) strategy.27 In this case, a strategy that played all-d would do poorly. The best response in the repeated game depends on the opponent's strategy. For this reason, the repeated PD game is not a true PD. In the one-shot game (T = 0), the assumption that agents do not play dominated strategies (i.e., they are rational)28 is sufficient to lead to the (d,d) equilibrium outcome. In a twice-repeated game (T = 1), this level of rationality predicts (d,d) in the final stage (stage 1). However, if in addition to being rational, agents know that their opponents are rational, then they realize that the action at stage 0 does not affect play in stage 1. Realizing this, d at stage 0 is dominant. In a three-stage game (T = 2) an additional level of rationality is required. Both knowing that each is rational gives (d,d) in stages 1 and 2. However, if agents know that their opponent knows they are both rational, then at stage 0 both know that future play is (d,d) and d is again dominant. This procedure is called iterated elimination of (strictly) dominated strategies.29 Common knowledge of rationality implies this reasoning. However, for finite games, a finite level of knowledge about knowledge suffices to establish that the equilibrium path consists solely of (d,d). As the length of the game increases, the level (or depth of recursion) of knowledge about mutual rationality increases. This point comes up again in part C below.30
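The absence of a dominant strategy is easy to see by computation: against a tit-for-tat opponent, always defecting earns less than co-operating until the final stage. The ten-stage horizon and the undiscounted total payoff used below are illustrative simplifications, not the criterion in (9).

```python
# Against TFT, all-d is not a best response in a ten-stage repeated PD
# (total stage payoffs, no discounting -- an illustrative simplification).
STAGE = {("c", "c"): 2, ("c", "d"): 0, ("d", "c"): 3, ("d", "d"): 1}

def tft(opponent_past_moves):
    return "c" if not opponent_past_moves else opponent_past_moves[-1]

def total_against_tft(my_plan):
    """my_plan: list of my moves, played in order against a TFT opponent."""
    my_past, total = [], 0
    for move in my_plan:
        opp = tft(my_past)          # TFT reacts to *my* previous move
        total += STAGE[(move, opp)]
        my_past.append(move)
    return total

T = 10
print(total_against_tft(["d"] * T))                # all-d:      3 + 9*1 = 12
print(total_against_tft(["c"] * (T - 1) + ["d"]))  # c,...,c,d:  9*2 + 3 = 21
```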
C. Infinitely Repeated Game - Equilibrium
Equilibria in the finitely repeated game depend heavily on the existence of a final stage. In the infinite game, no final stage exists. In fact, an alternative interpretation of the game is one of being indefinitely repeated, where the game length is finite but stochastic.31 The lack of a known terminal stage, where an agent's optimal action can be determined without regard to history or opponent's strategy, allows the possibility that equilibria exist in which c's are observed. In particular, perfectly co-operative (Pareto-efficient) equilibria may exist in which only (c,c) is observed. Unfortunately, many other different equilibria may also exist. In fact, almost any pattern of behaviour in the infinitely repeated game can form part of an equilibrium. This section addresses equilibria in the infinitely repeated game. Recall that a strategy is a complete specification for all stages and possible histories. In an infinite game, therefore, these strategies can become very complex. For this reason, and to avoid supporting co-operation with non-credible threats, the focus of this section is on sub-game perfect equilibria. Even with this restriction, one can imagine an equilibrium path which is supported by a complex hierarchy of punishments that induce the agents not to deviate from the equilibrium path and also not to deviate from a punishment should it be necessary. Fortunately, Abreu (1988) demonstrates that complex strategies are unnecessary and attention can be restricted to simple strategies. The strategies are simple in that the same punishment is used after any deviation (including deviations from punishment). As will be seen in the examples below, simple strategies induce a stationarity in the agents' decision problems. To determine if a strategy profile is an equilibrium, only one-period deviations need to be examined.32
i. Some of the Equilibria in the Infinitely Repeated PD
The first equilibrium to note is one where only (d,d) is observed. This equilibrium path is supported by both agents playing the all-d strategy (i.e., both play σ_d(h(t)) = d for all h(t) ∈ H). To verify a strategy profile as an equilibrium, one needs to consider all possible single-period deviations in all possible phases of play. For this strategy profile the only phase of play is (d,d) and the only possible deviation is to play a c. While it is perhaps obvious that this deviation is not profitable, this is formally established by calculating the utility of playing d and of playing c (using (9) and the stage payoffs in Figure 1):

    play d                      play c
    (1 - δ)·1 + δ·1  >  (1 - δ)·0 + δ·1    (10)
For each of the possible actions, the utility decomposes into the immediate payoff plus the discounted sum of future payoffs under the proposed strategy profile. Since a play of c does not affect future play, neither agent has an incentive to deviate and the all-d strategy profile is an equilibrium. The second equilibrium profile to consider is a perfectly co-operative one where only (c,c) is observed. Consider the grim trigger strategy in which each agent plays c so long as only (c,c) has been observed and otherwise plays d. Formally, both agents play σ_g, where:

    σ_g(h(t)) = c if h(t) contains only (c,c) outcomes (including h(0) = ∅), and σ_g(h(t)) = d otherwise    (11)
This strategy profile generates two possible phases: an initial co-operative phase where (c,c) occurs and a punishment phase where (d,d) is observed. First note that if the agents are in the punishment phase neither will deviate, since the punishment phase is equivalent to the all-d equilibrium discussed above. For an agent not to deviate in the co-operative phase, the following must hold:

    (1 - δ)·2 + δ·2  ≥  (1 - δ)·3 + δ·1    (12)
Note that when contemplating action d, its effect on future payoffs (i.e., initiating the punishment phase) is reflected. Equation (12) is true so long as δ ≥ 1/2. This accords with the intuition that for equilibrium co-operative play to exist the future must matter enough to offset the immediate gain from playing d.
ii. Tit-for-Tat
Axelrod (1984) focuses heavily on the tit-for-tat (TFT) strategy. TFT plays c on the initial move and thereafter mimics the opponent's action at the previous stage. While TFT has intuitive appeal, the strategy profile in which both agents use TFT is an example of a strategy profile which can be a Nash equilibrium but not a sub-game perfect equilibrium. The initial phase of the TFT strategy profile produces (c,c). For this to be an equilibrium the following must hold:

    (1 - δ)·2 + δ·2  ≥  (1 - δ)·3 + δ·(3δ / (1 + δ))    (13)
The left-hand side of (13) is the immediate benefit of playing a c plus the discounted future value of remaining in (c,c) play. The right-hand side
reflects the immediate and future value of playing a d. Note that playing a d causes future play to alternate between (d,c) and (c,d). Some algebra shows that as long as δ ≥ 1/2, (13) holds and the strategy profile is a Nash equilibrium. However, for the punishment to be credible, it must be the case that when the TFT strategy calls for a d, it is in the agent's interest to play a d. For this to be true, the following must hold:

    (1 - δ)·3 + δ·(3δ / (1 + δ))  ≥  (1 - δ)·2 + δ·2    (14)
Note that choosing action c instead of a d returns play to the co-operative phase. Since (14) is the mirror image of (13), both cannot be true (except for δ = 1/2). If agents are sufficiently patient (δ > 1/2) to play (c,c), then they are too patient to play a d should it be called for. The punishment threat is not credible and, therefore, the strategy profile is not sub-game perfect. It is possible to modify the TFT strategy to yield a sub-game perfect equilibrium. The modified strategy has the agent playing d only if the other agent has played d (strictly) more times in the past. This modified TFT strategy retains the reciprocity feature of TFT. However, this modified strategy profile avoids the alternating phase. Instead, after an agent is "punished," play returns to the co-operative phase.
iii. The Folk Theorem
Two different equilibria for the infinite game have been discussed. The Folk Theorem demonstrates that for sufficiently patient agents (high δ) many other equilibria also exist. To start, note that a strategy profile, σ, implies not only a path of play but also a utility profile (U_1(σ), U_2(σ)). By considering all possible strategy profiles, the set of feasible utility profiles can be constructed.33 Note that an agent can guarantee that her utility is never below 1 by repeatedly playing d. Thus, utility profiles where U_i(σ) ≥ 1 for each agent are said to be individually rational. The Folk Theorem states that for every feasible utility profile which is individually rational, a sub-game perfect equilibrium exists which provides the agents with that utility (for a high enough δ). Since the minimum guaranteed level of utility happens to coincide with the Nash equilibrium in the (one-shot) stage game, (d,d), it is easy to see how this works. Equilibrium strategies simply revert to the grim (all-d) punishment if either agent deviates from playing according to some sequence which generates the desired utility profile. Since agents are patient (high enough δ), and utility is above the all-d level, neither will deviate. Note that this is quite a large space, and equilibrium notions say little about which of the equilibria will occur.34
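Conditions (12), (13), and (14) can be checked numerically. The functions below simply restate those inequalities in closed form for the Figure 1 payoffs; the grid of discount factors is an arbitrary illustrative choice.

```python
# Numerical check of conditions (12), (13), and (14) on a grid of discount
# factors, using the Figure 1 stage payoffs (2 for (c,c), 3 for a successful
# defection, 1 for (d,d), 0 for being defected on).
def grim_no_deviation(d):        # condition (12)
    return (1 - d) * 2 + d * 2 >= (1 - d) * 3 + d * 1

def tft_cooperate(d):            # condition (13): stay with (c,c) against TFT
    return 2 >= (1 - d) * 3 + d * (3 * d / (1 + d))

def tft_punish(d):               # condition (14): carry out the d when called for
    return (1 - d) * 3 + d * (3 * d / (1 + d)) >= 2

for d in (0.3, 0.5, 0.7):
    print(d, grim_no_deviation(d), tft_cooperate(d), tft_punish(d))
# (12) and (13) hold exactly when d >= 1/2; (14) holds only when d <= 1/2,
# so (13) and (14) hold together only at d = 1/2.
```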
iv. Trigger Strategies, Renegotiation, and Noise
In order to support a co-operative equilibrium (or any of the many others), agents threaten to play d for a period of time (or forever in the grim strategies). This threat is credible because both agents revert to d after a deviation. Given that agent 1 is going to punish by playing d, the best agent 2 can do is to play d, and vice versa. However, this mutual support for playing d seems hollow. If the punishment stage actually occurred (for whatever reason), the agents could renegotiate and jointly agree to play c. Both are better off and both should agree (i.e., playing (c,c) is Pareto-superior). However, since players will anticipate that punishment can be escaped through negotiation, agents have no incentive to ever play c. Thus, it might seem that requiring equilibria to be immune from this problem, or be "renegotiation-proof," might reduce the set of possible equilibria. However, van Damme (1989) shows that this is not the case. He uses the modified TFT strategy (mentioned above) where the punishment stage ends with repentance. That is, the defecting player must play c when her opponent plays d in order to restart the co-operative phase. In this case both agents cannot mutually agree to forgo punishment (i.e., (c,c) is not Pareto-superior to a (d,c) outcome).35 There is perhaps a second concern with the trigger strategies used to support co-operative play. In particular, if there is some chance that agents can play d by mistake or can misinterpret an opponent's move, then punishments should not be too severe. In these imperfect information games, equilibria can be constructed where trigger strategies are used to "punish" deviations. However, the length of the punishment period is chosen to be as short as possible while still providing agents with the incentive to co-operate. The punishment period is minimized because, unlike in the perfect information games considered above, due to noise, punishment actually occurs in equilibrium. (For more on these games see Green and Porter 1984 and Abreu, Pearce, and Stacchetti 1990.) Alternatively, Fudenberg and Maskin (1990) use evolutionary arguments to suggest that punishment cannot be too severe.
D. Co-operation in Finite Games
In infinitely repeated games, many equilibria, including the co-operative one, are possible. However, as was discussed above, no matter how long the game, if it is repeated a known finite number of times, no co-operation exists in equilibrium. Not only is this result at odds with our intuition, it also seems to be in contrast with experimental results.36 Co-operation in the finitely repeated PD is addressed in this section. As with the one-shot game, various modifications of the repeated game that permit equilibrium co-operation are discussed. First, the role of
imperfect information about rationality is considered. Second, the stage game is modified so that multiple (Pareto-ranked) equilibria exist. Finally, the effect of strategic complexity and bounded rationality on co-operative behaviour is presented.
i. Imperfect Information and Reputation
Kreps, Milgrom, Roberts, and Wilson (1982) consider a finitely repeated PD but relax the assumption regarding the common knowledge of rationality. They assume that there is a very small chance (an ε-probability) that agent 2 is "irrational." In particular, an irrational agent plays the TFT strategy. This implies that there is some possibility agent 2 will play c even at the last stage. This could result from true "irrationality" or simply from different (not common knowledge) stage payoffs. The fact that agent 1 does not know agent 2's "type" makes this situation a game of imperfect information. The interesting result of this model is that this small chance of irrationality has a large impact on equilibrium behaviour. In particular, for long but finite games, the ε-level of irrationality is enough to dictate co-operation in equilibrium for most of the game. In order to see the effect of this ε-irrationality, consider why it is the case that in equilibrium (the rational) agent 2 will not play all-d. Given the all-d strategy for agent 2, what would agent 1 believe about agent 2's rationality if, on move zero, he observed a c? Clearly, he would assume that agent 2 is irrational and he should play c (inducing a c from TFT) until the final round, where d dominates. Given this response from agent 1, agent 2 has an incentive to mimic a TFT agent and play c on the initial move. In fact, what Kreps et al. show is that for a long enough (but finite) game agent 2 mimics the TFT player by playing a c on all but the last few stages. Given this, agent 1 also optimally plays c on all but the last few stages as well. Note that this model captures the intuition that reputation is important. Here the reputation that is fostered is one of a TFT-type agent.37 Recall from the discussion of domination in the finitely repeated game that rationality alone was not sufficient to show that all equilibria consist of (d,d). Common knowledge of rationality was sufficient so that for any finite T, agents could work back from T, eliminating dominated strategies to conclude that only (d,d) will be played. In the ε-irrationality model, rationality is no longer common knowledge. The ε-chance that player 2 is not rational acts as if the recursive knowledge about rationality is terminated at some fixed level. Thus, at some stage, I can play c knowing that you know that I know that you know I am rational, but not one level deeper.38 The fact that the irrationality is in the form of TFT guides the equilibrium to (c,c) on the initial moves. Fudenberg
and Maskin (1986) prove a Folk Theorem result: any individually rational payoffs can be achieved for most of the game with ε-irrationality, as long as the appropriate form of irrationality is chosen.
ii. Pareto-Ranked Equilibria in the Stage Game
In the co-operative equilibrium in the infinitely repeated game, co-operative play is rewarded with future co-operation. In the finitely repeated game, at stage T, such a reward is not possible; play at T cannot depend on the past. However, if one modifies the stage game to allow the players the option of not playing (action n), yielding a payoff of zero for both players (regardless of the other's action), this modified stage game now has two equilibria: (d,d), yielding the agents a stage payoff of 1, and (n,n), yielding the agents 0. In this repeated game, at the final stage it is possible for past mutual co-operation to be rewarded by playing d and past defection punished by playing n. Since both of these outcomes form Nash equilibria in the final stage, they are credible. Thus, threatening to revert to the "bad" stage-game equilibrium can support co-operation. Hirshleifer and Rasmusen (1989) interpret the n action as ostracism and consider its effect in inducing co-operative societies. Finally, note that Benoit and Krishna (1985), who develop this model, also show that Folk Theorem-like results again hold and such models have a large class of solutions.
iii. Strategic Complexity
Thus far, agents' reasoning ability has implicitly been assumed to be unlimited. Following Simon (1955, 1959), some recent research has considered the effects of bounding rationality. Strategic complexity has been developed in game theory to address the effects of bounded rationality. This research defines the complexity of a strategy by the number of states used to describe it. An automaton describes a strategy with a series of states which dictate an action, c or d. In addition, a transition function maps the current state to the next state based on the opponent's last move. For example, the all-d strategy consists of a single d state which, regardless of the other's action, it never leaves. The TFT strategy requires two states (a c and a d state) and transits between them accordingly (e.g., to the c state if the opponent plays a c). Formal definitions of automata and complexity can be found in the survey of Kalai (1990). Neyman (1985) notes that in a T-stage game, an automaton requires at least T states in order to be able to count the stages and issue a d on the last stage. Thus, if the complexity of agents' automata is restricted (exogenously), then agents may choose co-operative automata in equilibrium. If, for example, agent 1 used a grim trigger strategy (two states), agent 2 would like to be able to execute a strategy where she played a c
in stages 0 to T - 1 and d on the final move. However, since this strategy requires many states, it may not be feasible, in which case agent 2 may optimally select the grim trigger strategy. Essentially, not being able to count the stages transforms the finite game into an infinite game. Zemel (1989) extended these results to note that small talk (the passing of meaningless messages) can increase co-operation. Zemel's argument is that having to follow meaningless customs (handshakes, thank-you cards) uses up states in the automata and further reduces the number of stages that can be counted.39
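A minimal encoding of the automata discussed above (states labelled with an action, plus a transition function keyed on the opponent's last move) is sketched below. The dictionary representation is an assumption of this sketch rather than the formalism of Kalai (1990) or Neyman (1985), but it makes the state counts for all-d, TFT, and grim trigger concrete.

```python
# Finite automata for repeated-PD strategies: each state is labelled with the
# action it plays, and transitions depend only on the opponent's last move.
ALL_D = {"start": "D", "states": {"D": ("d", {"c": "D", "d": "D"})}}   # 1 state
TFT   = {"start": "C", "states": {"C": ("c", {"c": "C", "d": "D"}),
                                  "D": ("d", {"c": "C", "d": "D"})}}   # 2 states
GRIM  = {"start": "C", "states": {"C": ("c", {"c": "C", "d": "D"}),
                                  "D": ("d", {"c": "D", "d": "D"})}}   # 2 states

def play(auto1, auto2, stages):
    """Run two automata against each other; return the path of action pairs."""
    s1, s2, path = auto1["start"], auto2["start"], []
    for _ in range(stages):
        a1 = auto1["states"][s1][0]
        a2 = auto2["states"][s2][0]
        path.append((a1, a2))
        s1 = auto1["states"][s1][1][a2]   # transit on the opponent's action
        s2 = auto2["states"][s2][1][a1]
    return path

# None of these machines can count stages, so none can issue a one-time
# end-game defection; that would require at least T states (Neyman 1985).
print(play(TFT, GRIM, 5))    # [('c', 'c'), ('c', 'c'), ...]
print(play(ALL_D, TFT, 5))   # [('d', 'c'), ('d', 'd'), ('d', 'd'), ...]
```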
4. Other Interesting Things Related to the PD Thus far, one-shot, finitely repeated, and infinitely repeated PDs have been discussed. While this represents a large portion of PD research, it is far from exhaustive. This final section simply touches on some of the major topics which have yet to be addressed in this brief overview. First, a few pointers to the vast body of experimental evidence are presented. Next a mention is made of evolutionary economics related to the PD. Finally, multi-person PD situations are considered.
A. Experimental Evidence
Many laboratory experiments have investigated what behaviour is actually observed in PD situations. Roth (1988) and Dawes (1980) both include surveys of PD and PD-related studies. In general, experiments do report some co-operation in one-shot PDs (about 20% is typical).40 In finitely repeated games co-operation is again observed; however, it tends to decrease towards the end of the game.41 Finally, in an infinitely (or indefinitely) repeated game, co-operative play occurs less often as the discount factor decreases.42 Recent studies investigate the Kreps et al. (1982) ε-irrationality model. The results are mixed. Camerer and Weigelt (1988) and Andreoni and Miller (1993) find weak evidence that observed patterns of co-operation in finitely repeated games do form the sequential equilibria of Kreps et al. However, in the Camerer and Weigelt paper, which investigates a game similar to the PD, the observed behaviour was most consistent with a sequential equilibrium with a higher probability of irrationality than the experimenters induced. Alternatively, Kahn and Murnighan (1993) conclude that uncertainty tends to increase the tendency to play d, which is contrary to the model of Kreps et al. Finally, Cooper et al. (1996) look at both altruism (is the underlying stage game viewed as a PD?) and Kreps et al.'s ε-irrationality (uncertainty about the common knowledge of the stage game being a PD). They conclude that neither altruism nor reputation is sufficient to understand observed play.
B. Evolutionary Economics and the PD
Evolutionary game theory has developed primarily out of two concerns. First, games (particularly repeated games) often have a large number of equilibria and it is not obvious which will actually occur. Second, the unbounded rationality assumption is empirically false. Evolutionary game theory addresses both of these questions by assuming that agents simply follow rules of thumb which evolve based on past performance or fitness in the population. For introductions to recent symposia on evolutionary game theory see Mailath (1992) or Samuelson (1993). While a comprehensive survey of evolutionary game theory is beyond this paper's scope, there are several evolutionary papers specifically regarding the PD. For the finitely repeated game, Nachbar (1992) concludes that, in a randomly matched population, evolutionary dynamics lead to the all-d equilibrium. However, while he demonstrates that the system converges to all-d, he notes that populations exhibiting some co-operation can exist for a long time before all-d takes over. Binmore and Samuelson (1992) investigate the evolution of automata which play the infinitely repeated PD. Following Rubinstein (1986), fitness depends both on payoffs and on strategic complexity. They conclude that evolutionary dynamics lead to co-operative equilibria. However, the automata which evolve are tat-for-tit automata. These strategies play an initial d move and thereafter play co-operatively only if the opponent also began with d. This initial move acts as a handshake and prevents simple (but not stable) all-c strategies from surviving.
C. Multi-Person Social Situations
Most of the analysis in this paper focuses on two-person games. However, many situations are multi-person. Kandori (1992) and Ellison (1994) investigate co-operation in pairwise infinitely repeated PD games where agents from a population are randomly matched. Both of these papers consider social sanctions or norms which support co-operation under various information assumptions. In particular, Ellison demonstrates that co-operation is possible even if the individual interactions are anonymous, through a punishment mechanism that is contagious. An alternative to multi-person pairwise interaction is the n-person prisoner's dilemma. In these games, an individual agent's utility depends on the actions of all the other agents. For example, utility might be increasing in the proportion of agents who play c. However, to maintain the PD nature of these situations, utilities are such that each agent prefers d, regardless of other agents' strategies. For a comprehensive analysis of these situations see Bendor and Mookherjee (1987). Besides analyzing the repeated version of the n-person game, they
consider the situation in which other agents' actions are imperfectly observed. When the number of agents in a situation is large, a small imperfection in the ability of the agents to identify an agent's d move has a large impact on the equilibrium.
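One standard way to write down the n-person stage game just described is the linear specification sketched below, in which every co-operator confers a benefit b on each agent (herself included) at a private cost, with b < cost < n·b. The functional form and parameter values are illustrative assumptions, not the particular model of Bendor and Mookherjee (1987).

```python
# A linear n-person PD stage game: each co-operator confers a benefit b on
# every agent (herself included) at a private cost; with b < cost < n*b,
# d is privately optimal yet all-c beats all-d collectively.
def payoff(my_action, others_cooperating, b=1.0, cost=1.5):
    k = others_cooperating + (1 if my_action == "c" else 0)
    return k * b - (cost if my_action == "c" else 0.0)

n = 10
for k in range(n):                          # k = number of *other* co-operators
    assert payoff("d", k) > payoff("c", k)  # d preferred regardless of the others
print(payoff("c", n - 1) > payoff("d", 0))  # ...yet all-c beats all-d: True
```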
5. Conclusion
This paper has presented a brief overview of economic research related to the well-known prisoner's dilemma. Clearly, much more can be said about the topics introduced here, and many interesting lines of research have been omitted. However, the survey has dealt with many of the basic results on the one-shot, finitely repeated, and infinitely repeated PD. It is perhaps surprising that such a simple parable about two thieves and a clever jailer can spawn such a large volume of sophisticated research. However, the struggle to understand what is moral and rational is not easy. These are concepts which are intuitive and familiar from our everyday experience, yet developing their axiomatic properties can be complex. Simple problems like the PD serve as an excellent testing ground where the crucial elements can be sifted away from the noise of everyday experiences.
Acknowledgment
Thanks to Madan Pillutla and Tom Ross for helpful comments. I gratefully acknowledge the support of the Social Sciences and Humanities Research Council of Canada.
Notes
1 Held at Simon Fraser University on February 11 and 12, 1994.
2 The Economist (1993) comments on the cross-fertilization of biology and economics using the prisoner's dilemma model.
3 Axelrod (1984) attributes the invention of the game to Merrill Flood and Melvin Dresher. It was formalized by A. W. Tucker shortly after its invention. It is Tucker who receives credit for the invention in both Aumann (1987) and Rapoport (1987) in their contributions to the Palgrave Dictionary of Economics.
4 It is important to note that the sub-optimality is with respect to the two prisoners. Clearly, for the clever jailer this represents the best outcome.
5 I have not compiled a list of economic applications of the PD. The list would be extensive. For a few examples see Kreps (1990) or North (1991).
6 It is difficult to determine accurately just how many papers there are on the PD. Grofman (1975) estimated the number at over 2000. Aumann (1987) offers the more conservative lower bound of over 1000. Both of these estimates include the many papers on experiments using the PD from the social psychology literature. These papers are beyond the scope of this survey.
7 Myerson (1990) or Fudenberg and Tirole (1991), for example. There are of course many other excellent recent textbooks. For an interesting historical overview of game theory see Aumann (1987).
8 In many games it is important to consider mixed strategies. A mixed strategy, for agent i, is a probability measure over the set of pure strategies, S_i, which dictates the probability with which each pure strategy is chosen. Since mixed strategies play no role in the equilibria discussed in this paper, they are not explicitly considered.
9 Typically, preferences are expressed as a binary relation, R, over some set (S, for example). We say s_1 R s_2 when outcome s_1 is weakly preferred to (at least as good as) s_2. To say that a utility function, u, represents these preferences means that for all s_1, s_2 ∈ S, u(s_1) ≥ u(s_2) ⟺ s_1 R s_2. Finally, a von Neumann-Morgenstern utility function has the additional property that it is linear in the probabilities over the elements of S. Since mixed strategies are not discussed, this feature does not play a role. These foundations of decision theory are included in Myerson (1990). However, Savage (1954) is the classic reference.
10 The defining characteristics of the PD are discussed below. In addition to the preference orderings indicated in (4), additional conditions are often imposed. The first is symmetry in the utility functions, such that u_1(s_1,s_2) = u_2(s_2,s_1). The second is that u_1(c,d) + u_1(d,c) < 2u_1(c,c), and similarly for agent 2. In the one-shot game both of these restrictions are without loss of generality. They do, however, play a minor role in the infinitely repeated game.
11 This definition is actually of a Nash equilibrium in pure strategies. The definition is easily extended to mixed strategies, where the mixed strategy chosen by each agent is a best response to the other agents' mixed strategies. The randomizations agents perform in a mixed strategy profile are assumed to be mutually independent. Aumann (1974) considers the effect of allowing correlation among the randomization devices. However, these issues are not important in the PD game.
12 Nash (1951) proved that in games where the sets of players and strategies are finite, at least one Nash equilibrium exists. However, existence may require the use of mixed strategies.
13 There are no mixed strategy equilibria.
14 Dubey (1986) considers the generic inefficiency of Nash equilibria. He considers models where the strategy space is continuous (i.e., not finite).
15 Aumann (1987), page 468, emphasis included.
16 Note that this description assumes a normal-form game structure. Little is changed if we explicitly consider the timing of the messages. Note that the equilibria differ only in the communication portion of the agents' strategies. In any of the equilibria, agents are indifferent about communicating or not.
17 The Nash equilibrium concept is silent on choosing which equilibria in the game will be played. However, in this case one can note that once agent 2
realizes that agent 1 will not play c because it is weakly dominated by both d and cc, then cc weakly dominates d in the remaining matrix. This process yields (cc,cc) as the unique equilibrium. However, one must use caution when eliminating weakly dominated strategies iteratively since the order of elimination may matter. The iterated elimination of strictly dominated strategies is considered again in the finitely repeated PD game below.
18 The structure which generates the game in Frank (1987) is quite subtle. Playing A reduces your ability to "look like" a cc player since feelings of guilt cannot be perfectly masked.
19 This technique is often used by villains in the movies who, after getting no reaction from threatening the hero/heroine, turn to threaten the spouse.
20 Using the monotonicity of the utility function, for a completely self-interested agent, U(x₁, x₂) > U(x₁', x₂') ⇔ x₁ > x₁'. The formal definition of completely altruistic is analogous.
21 Rusciano (1990) discusses the relationship between Arrow's impossibility theorem and the PD. The author argues that the PD is simply a special case of the Arrow problem. This proposition is in this spirit.
22 See, for example, Kreps and Wilson (1982b) for a formal definition.
23 Note that the notation of sᵢ and Sᵢ is consistent with that used in describing the one-shot game. However, it should now be interpreted as actions or moves and not strategies.
24 H(t) = Sᵗ, the t-fold Cartesian product of the set of possible outcomes in a stage.
25 Note that if mixed strategies were to be considered, preferences would be taken as expected utility. There are alternative criteria such as the average stage payoffs (or their limit for infinite games). See Myerson (1990) Chapter 7 or Fudenberg and Tirole (1991) Chapter 5 for more discussion.
26 Selten's (1978) Chain Store Paradox is the classic example. Kreps and Wilson (1982a) and Milgrom and Roberts (1982) extend this concept to games where information is not perfect. They not only restrict out-of-equilibrium behaviour but also restrict out-of-equilibrium beliefs on inferences made from the hypothetical actions of others. For more on Nash refinements see Kohlberg and Mertens (1986).
27 The TFT strategy plays c at stage 0 and at stage t plays whatever the opponent played at t-1. See Axelrod (1984) for much more on this strategy, which was used in his finitely repeated PD tournament. The fact that this strategy is not part of an equilibrium does not matter when checking for dominance. Recall from (7) that a dominant strategy does better regardless of other agents' strategies.
28 If a player does not play d in the one-shot game we call this "irrational." However, this is simply a convenient way of describing an agent whose preferences (for whatever reason) do not imply d is a dominant strategy.
29 For more on this issue, see Bernheim (1984) and Pearce (1984) who introduce the concept of rationalizability as an equilibrium concept which begins with the contention that agents will never play strictly dominated strategies.
30 An important issue which is not addressed here is what agents are to believe if they were to observe a c. Is this an indication of irrationality or just a mistake? Reny (1992) addressed this issue.
31 The two interpretations are equivalent. The δ in (9) can be seen as composed of the agent's patience, λ, and the probability that the game ends after any stage, π, as δ = (1 - π)λ. Note that δ can change over time; however, for co-operative equilibria to exist it is necessary that δ remain large enough. Just how large δ needs to be is discussed below.
32 This uses fundamental results of dynamic programming. See Stokey and Lucas (1989) for a discussion of dynamic programming.
33 For the specific repeated PD game considered here, this set is the convex hull of {(2,2),(3,0),(0,3),(1,1)}. That is, all points that are convex combinations of the stage payoffs. This is why the (1-δ) normalization in (9) is a convenient normalization. Note that a profile,
ity is that of sequential equilibrium from Kreps and Wilson (1982b). A sequential equilibrium is an equilibrium in strategies (i.e., mutual best response) and also in beliefs. Beliefs about unknown types must be consistent. See Kreps and Wilson (1982b) for more details.
38 See Reny (1992) for more on this topic.
39 Strategic complexity issues have also been considered in the infinitely repeated game. Rubinstein (1986) and Abreu and Rubinstein (1988) consider infinitely repeated games played with finite automata. Instead of arbitrarily restricting the number of states, these papers assume a cost of complexity. The cost is introduced lexicographically in that given two strategies with the same sum of discounted stage payoffs, the agent prefers the least complex. The authors demonstrate that the relatively mild complexity assumptions can dramatically reduce the number of equilibria in infinitely repeated games.
40 See Cooper et al. (1996) for a recent study. Frank, Gilovich, and Regan (1993) provide an interesting study of the effect of studying economics on the tendency to co-operate.
41 See Selten and Stoecker (1986). Their experiments not only repeat the PD, but also repeat the entire game. They find that defection tends to appear sooner in later trials. See also Cooper et al. (1996).
42 See Feinberg and Husted (1993) who alter the discount rate by changing the probability that the repeated game is terminated.
References

Abreu, D. (1988). On the theory of infinitely repeated games with discounting. Econometrica, 56: 383-96.
Abreu, D., and A. Rubinstein (1988). The structure of Nash equilibrium in repeated games with finite automata. Econometrica, 56: 1259-81.
Abreu, D., D. Pearce, and E. Stacchetti (1990). Toward a theory of discounted repeated games with imperfect monitoring. Econometrica, 58: 1041-63.
(1993). Renegotiation and symmetry in repeated games. Journal of Economic Theory, 60: 217-40.
Andreoni, J., and J. H. Miller (1993). Rational cooperation in the finitely repeated prisoner's dilemma: experimental evidence. The Economic Journal, 103: 570-85.
Arend, R. J. (1995). Essays in Policy Analysis and Strategy. PhD dissertation, University of British Columbia.
Aumann, R. J. (1974). Subjectivity and correlation in randomized strategies. Journal of Mathematical Economics, 1: 67-96.
(1987). Game theory. In J. Eatwell, M. Milgate and P. Newman (eds.), The New Palgrave: A Dictionary of Economics, Vol. 2 (London: Macmillan Press), pp. 460-82.
Axelrod, R. M. (1984). The Evolution of Cooperation. New York: Basic Books.
Bendor, J., and D. Mookherjee (1987). Institutional structure and the logic of ongoing collective action. American Political Science Review, 81: 129-54.
Benoit, J. P., and V. Krishna (1985). Finitely repeated games. Econometrica, 53: 905-22.
Bernheim, B. D. (1984). Rationalizable strategic behavior. Econometrica, 52: 1007-28.
Bernheim, B. D., and O. Stark (1988). Altruism within the family reconsidered: Do nice guys finish last? American Economic Review, 78: 1034-45.
Binmore, K. G., and L. Samuelson (1992). Evolutionary stability in repeated games played by finite automata. Journal of Economic Theory, 57: 278-305.
Brandenburger, A. (1992). Knowledge and equilibrium in games. Journal of Economic Perspectives, 6(4): 83-101.
Camerer, C., and K. Weigelt (1988). Experimental tests of a sequential equilibrium reputation model. Econometrica, 56: 1-36.
Cooper, R., D. V. DeJong, R. Forsythe, and T. W. Ross (1996). Cooperation without reputation: Experimental evidence from Prisoner's Dilemma games. Games and Economic Behavior, 12: 187-218.
Danielson, P. (1992). Artificial Morality: Virtuous Robots for Virtual Games. London: Routledge.
(1996). Evolutionary models of co-operative mechanisms: Artificial morality and genetic programming. This volume.
Dawes, R. M. (1980). Social dilemmas. Annual Review of Psychology, 31: 169-93.
Dubey, P. (1986). Inefficiency of Nash equilibria. Mathematics of Operations Research, 11: 1-8.
Economist (1993). Evo-economics: Biology meets the dismal science. 25 (Dec.): 93-95.
Ellison, G. (1994). Cooperation in the prisoner's dilemma with anonymous random matching. Review of Economic Studies, 61: 567-88.
Etzioni, A. (1987). The Moral Dimension: Toward a New Economics. New York: Free Press.
Farrell, J., and E. Maskin (1989). Renegotiation in repeated games. Games and Economic Behaviour, 1: 327-60.
Feinberg, R. M., and T. A. Husted (1993). An experimental test of discount-rate effects on collusive behaviour in duopoly markets. Journal of Industrial Economics, 61: 153-60.
Frank, R. H. (1987). If homo economicus could choose his own utility function, would he want one with a conscience? American Economic Review, 77: 593-604.
Frank, R. H., T. Gilovich, and D. T. Regan (1993). Does studying economics inhibit cooperation? Journal of Economic Perspectives, 7(2): 159-71.
Friedman, J. W. (1971). A noncooperative equilibrium for supergames. Review of Economic Studies, 38: 1-12.
Fudenberg, D., and E. Maskin (1986). The folk theorem in repeated games with discounting or with incomplete information. Econometrica, 54: 533-54.
(1990). Evolution and cooperation in noisy repeated games. American Economic Review (Papers and Proceedings), 80: 274-79.
Fudenberg, D., and J. Tirole (1991). Game Theory. Cambridge, MA: MIT Press.
Gauthier, D. P. (1986). Morals by Agreement. Oxford: Oxford University Press.
Geanakoplos, J. (1992). Common knowledge. Journal of Economic Perspectives, 6(4): 53-82.
Green, E. J., and R. H. Porter (1984). Noncooperative collusion under imperfect price information. Econometrica, 52: 87-100.
Grofman, B. (1975). Bayesian models for iterated Prisoner's Dilemma games. General Systems, 20: 185-94.
Hirshleifer, D., and E. Rasmusen (1989). Cooperation in a repeated Prisoners' Dilemma with ostracism. Journal of Economic Behaviour and Organization, 12: 87-106.
Kahn, L. M., and J. K. Murnighan (1993). Conjecture, uncertainty and cooperation in Prisoner's Dilemma games. Journal of Economic Behaviour and Organization, 22: 91-117.
Kalai, E. (1990). Bounded rationality and strategic complexity in repeated games. In T. Ichiishi, A. Neyman and Y. Tauman (eds.), Game Theory and Applications (San Diego, CA: Academic Press), pp. 131-57.
Kandori, M. (1992). Social norms and community enforcement. Review of Economic Studies, 59: 63-80.
Kohlberg, E., and J.-F. Mertens (1986). On the strategic stability of equilibria. Econometrica, 54: 1003-38.
Kreps, D. M. (1987). Nash equilibrium. In J. Eatwell, M. Milgate and P. Newman (eds.), The New Palgrave: A Dictionary of Economics, Vol. 3 (London: Macmillan), pp. 584-88.
(1990). Corporate culture and economic theory. In J. E. Alt and K. A. Shepsle (eds.), Perspectives on Positive Political Economy (Cambridge: Cambridge University Press), pp. 90-143.
Kreps, D. M., P. Milgrom, J. Roberts and R. Wilson (1982). Rational cooperation in the finitely repeated Prisoners' Dilemma. Journal of Economic Theory, 27: 245-52.
Kreps, D. M., and R. Wilson (1982a). Reputation and imperfect information. Journal of Economic Theory, 27: 253-79.
(1982b). Sequential equilibria. Econometrica, 50: 863-94.
Mailath, G. J. (1992). Introduction: Symposium on evolutionary game theory. Journal of Economic Theory, 57: 259-77.
Margolis, H. (1982). Selfishness, Altruism and Rationality: A Theory of Social Choice. Cambridge: Cambridge University Press.
Milgrom, P., and J. Roberts (1982). Predation, reputation, and entry deterrence. Journal of Economic Theory, 27: 280-312.
Myerson, R. B. (1990). Game Theory: Analysis of Conflict. Cambridge, MA: Harvard University Press.
Nachbar, J. H. (1992). Evolution in the finitely repeated Prisoner's Dilemma. Journal of Economic Behaviour and Organization, 19: 307-26.
Nash, J. F. (1951). Non-cooperative games. Annals of Mathematics, 54: 286-95.
Neyman, A. (1985). Bounded complexity justifies cooperation in the finitely repeated Prisoner's Dilemma. Economics Letters, 19: 227-29.
North, D. C. (1991). Institutions. Journal of Economic Perspectives, 5(1): 97-112.
Pearce, D. G. (1984). Rationalizable strategic behavior and the problem of perfection. Econometrica, 52: 1029-50.
Rapoport, A. (1987). Prisoner's dilemma. In J. Eatwell, M. Milgate and P. Newman (eds.), The New Palgrave: A Dictionary of Economics, Vol. 3 (London: Macmillan), pp. 973-76.
Reny, P. J. (1992). Rationality in extensive-form games. Journal of Economic Perspectives, 6(4): 103-18.
Roth, A. E. (1988). Laboratory experimentation in economics: A methodological overview. The Economic Journal, 98: 974-1031.
Rubinstein, A. (1986). Finite automata play the repeated prisoner's dilemma. Journal of Economic Theory, 39: 83-96.
Rusciano, L. (1990). The Prisoners' Dilemma problem as an extended Arrow problem. Western Political Quarterly, 43: 495-510.
Samuelson, L. (1993). Recent advances in evolutionary economics: Comments. Economics Letters, 42: 313-19.
Savage, L. J. (1954). The Foundations of Statistics. New York: Wiley.
Schelling, T. C. (1960). The Strategy of Conflict. Cambridge, MA: Harvard University Press.
Selten, R. (1978). The chain-store paradox. Theory and Decision, 9: 127-59.
Selten, R., and R. Stoecker (1986). End behaviour in sequences of finite prisoner's dilemma supergames. Journal of Economic Behaviour and Organization, 7: 47-70.
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69: 99-118.
(1959). Theories of decision-making in economics and behavioral science. American Economic Review, 49: 253-83.
Stark, O. (1989). Altruism and the quality of life. American Economic Review (Papers and Proceedings), 79: 86-90.
Stokey, N. L., R. E. Lucas Jr., and E. C. Prescott (1989). Recursive Methods in Economic Dynamics. Cambridge, MA: Harvard University Press.
van Damme, E. (1989). Renegotiation-proof equilibria in repeated Prisoners' Dilemma. Journal of Economic Theory, 47: 206-17.
Williamson, O. (1985). The Economic Institutions of Capitalism. New York: Free Press.
Zemel, E. (1989). Small talk and cooperation: a note on bounded rationality. Journal of Economic Theory, 49: 1-9.
7
Modeling Rationality: A Normative or Descriptive Task? Ronald de Sousa
The Rationality of Suicide: Two Problems

Suppose your friend, who is wholly committed to rationality, proposes to commit suicide. How - without moralizing or exhortation - might you try to dissuade her? The standard method, springing from the type of Bayesian decision theory propounded by Richard Jeffrey (1965) and others, goes something like this. First establish, on the basis of past decisions, a general picture of the subject's preference rankings. Then compare the present choice to that established picture. If they match, that indicates the choice is rational. If not, it is not. Obviously, this procedure makes several assumptions: that the subject has not changed her mind; that the inference from choices or reports to the original preference ranking was accurate; that the information from present choice is correct and so forth. These might well be mistaken. They might also be virtually impossible to verify. But these are not problems I want to dwell on here. The issues I want to draw attention to stem from two additional considerations. First, suicide seems especially resistant to rational assessment. The past, in this case, can, strictly speaking, afford little guidance as to the rationality of the present. The reason is that the choice of death is a choice that negates all the alternatives among which previous choices were made. Previous preference for a Chevy over a Lancia, or for eating lemon ice over viewing a Picasso, can provide no guidance where one alternative is neither. Nor can the choice of pleasure over pain afford comparison with nothing. So here, it seems, the Bayesian schema loses its grip. Yet surely the choice of suicide can sometimes be assessed for rationality. The second problem is that suicide involves, among other considerations, an assessment of what we might call foundational values: values - if there are any such - which are chosen for their own sake and not as
a consequence of their relation to any other values. Is there a special way in which such foundational values are related to the Bayesian scale? And is there any adequate evidence embodied in past choices that can determine uniquely what they are?
Normative/Descriptive Ambiguity in Rational Models

These questions are actually a symptom of a wholly general problem. Roughly put, the problem arises from the fact that any normative model of rationality presupposes a corresponding descriptive model. In the case just considered, the model lost its grip precisely because there was no applicable description of previous suicide behaviour. In fact, we should rather say that every normative model is identical with a descriptive model: the difference depends merely on context of use. Yet they may draw us in different directions. In one mode, I infer my subject's preference ranking from her actual past decisions. In another, I infer from her choices that she is irrational, on the ground that they are inconsistent with her preference rankings as previously ascertained. (This is what we cannot do in the case of suicide.) But what guarantees that the original assignment was correct? Perhaps it was a mistaken inference, reflecting irrationality in the original choice. In inferring from it to the existence of a certain preference structure, I must assume it was rational. This assumption rests on some sort of "principle of charity" (Quine 1960; Davidson 1982). Otherwise I might not have been able to make it look coherent. On the other hand, when I use the scheme to criticize irrationality, I take a previously established structure for granted. Then, for my inability to fit the subject's present choice into that structure, I choose to blame my subject rather than myself. This ambiguity of the normative and the descriptive is pervasive in efforts to model human rationality.1 But what exactly is the relation between them? The history of philosophy affords examples of two reactions: naturalism, which attempts to reduce the normative element to some sort of natural process, and normativism, which claims that within all attempts to model actual reasoning processes there must be an ineliminable element of normativity. Naturalism must be distinguished from two neighbouring positions. First, it must not be confused with physicalism. A functionalist, for example, need not be a physicalist with respect to mental entities, while still claiming that all there is to be said about the norms of rationality can be accounted for in terms of natural truths. What marks out truth (as the proper object of belief) from the proper objects of other mental states is that truths alone must pass the test of simple consistency. The proper objects of other mental states, such as wants, need pass no such test.
Consistency does not require that the propositions we desire be possibly all true together: it is sufficient that they be possibly all good together.2 Second, however, the contrast between naturalism and normativism may not be assimilable to the fact-value distinction. That rationality involves norms should not pre-empt the question whether all norms of rationality refer to the good. I hazard no judgment on that question here.

Some Related Forms of the Problem of Naturalism

The present problem is an ancient one. Its first avatar in our tradition is Plato's puzzle about error in the Theaetetus: If a representation (belief, reference, or perception) is the imprint on our mind caused by the thing or fact which is its object, then how can it ever be mistaken? (Theaetetus, 187d ff.) If for R to be a representation of A is to be caused by A, then R must either represent A correctly, or represent something else correctly, or else fail to represent anything at all. In no case can it erroneously represent A. In more modern forms, the problem of misrepresentation has been widely discussed (Fodor 1987; Dretske 1986; Millikan 1993). If I mistake a cow for a horse, does this not mean my word 'horse' really means (to me) 'horse-or-cow'? (In that case I was not mistaken after all.) A normativist might view this as a piece of evidence for the irreducibility of the normative. Once all the mechanics of causation have been accounted for, it will still be possible to draw the distinction between what is correct and what is not. To counter this, the naturalist's strategy must first be to show that the "rightness" or "wrongness" associated with biological functions can be accounted for without normative residue. Naturalism need not eliminate teleology: it need only tame it. We need to show that it makes sense to claim that something is meant to be this rather than that, without resorting to ineliminable normativity. A couple of examples will remind you of the flavour of the resulting debates.
(i) The "frog's eye" problem: if what sets off the frog's eye is a black moving speck, must we say that it just accidentally finds flies, or rather that by means of the capacity to detect moving specks, it serves to detect flies? (Dretske 1986; Millikan 1991).
(ii) In Elliott Sober's sorting machine problem, a series of sieves finds green balls, because the balls' colour happens to be correlated with size. But is the sorting machine to be described as having the function of sorting green balls, or that of sorting for small ones which sometimes happen to be green? (Sober 1984, pp. 99ff.)
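Sober's toy device can be made concrete in a few lines. The sketch below is my own illustration, not anything from Sober or from this chapter; the ball data and the size threshold are invented, and colour is simply stipulated to be correlated with size, as in the example just stated.

```python
# Illustrative sketch only: a toy version of the sorting-machine example.
balls = [
    {"size": 1, "colour": "green"},
    {"size": 1, "colour": "green"},
    {"size": 2, "colour": "red"},
    {"size": 3, "colour": "blue"},
]

# The sieve's causal criterion mentions size only.
selected = [b for b in balls if b["size"] <= 1]

# Yet the effect of running it is a uniformly green collection.
print(all(b["colour"] == "green" for b in selected))   # True
```

The device's causal test mentions only size, yet what it yields is uniformly green, which is exactly what makes it tempting to describe either property as what the device is "for."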
On the first problem, Ruth Millikan quotes an unpublished tract of Fodor's as remarking: Nature "doesn't sort under any labels" (Millikan 1991, p. 159). If that were strictly true, then the plight of naturalism would be far worse when we are judging rationality, for there labels are almost all. Without labels, there is unlikely to be any way of arriving at a sufficiently unambiguous ascription of belief or want to prove rationality or convict of irrationality (de Sousa 1971). Even without positing that nature sorts under labels, however, we can hazard hypotheses about the "real" function of these processes, by looking at the causal origin of their teleology. The mechanism of teleology in these cases may be difficult to demonstrate conclusively, but it seems reasonable to assume that it is not magical: that there is some naturalistic explanation for it. Can we distinguish, from a biological standpoint, between stimuli that are apparently causally equivalent in actual situations?3 The answer favoured by both Sober and Millikan is that we can, providing we delve into the history of the selection process. In the case of the ball-selecting toy, what the device selects are green balls, but what it selects for are small ones. The reason is that the colour of the balls is causally irrelevant to the selection even though the effect of the selection is to select green ones. The selectionist equivalent for the frog's eye is this: the frog's detection mechanism was selected for finding flies, but it selects mechanisms that find both flies and specks. We can safely insist that it must have an unambiguous meaning (even in the absence of labels subjectively assigned by a language-speaking creature). We need only say that it means whatever it has been caused to find: "'Selection of' pertains to the effects of a selection process, whereas 'selection for' describes its causes" (Sober 1984, p. 100). The moral is that evolutionary considerations are probably capable of assigning a definite function to some mechanisms or processes, without resorting to some sort of externally imposed normativity. One use of this idea is that it might be possible to explain, in evolutionary terms, why we have certain propensities to follow given strategies. But can the reference to an evolutionary story actually avoid the question of normativity? Some, in the tradition of Hume (1975) or Goodman (1983), have answered that it can. In fact this is arguably Hume's essential lesson: just say no to the demand for justification. Instead, change the subject: do not ask why we do it, just ask what it is we do. Sometimes rejecting a question is a good strategy. Witness Newton and Darwin: Newton's genius was to insist on not answering the classic question about what keeps the arrow in flight. Darwin's was to insist on not answering the classic question about what is the cause of biological diversity. After Newton, we do not ask why the arrow keeps
going, we ask why it stops. After Darwin, we do not ask why living things are so diverse or why they fail to be true to type, we ask instead what makes them cluster around apparent types. Many people feel cheated by Hume's answer: "So maybe I do it naturally, but why is this a reason to do it? Wasn't it you, Hume, who famously told us you can't go from an 'is' to an 'ought'?" There are obviously cases where not to ask is just an evasion. What makes it the right strategy in some cases and not in others? One possible answer is that it must be the right strategy if it is wired in, i.e., if it is embodied in the system's "functional architecture." This is Pylyshyn's (1984) term for a level of explanation at which some mechanism carries out a function merely in virtue of its physical configuration: given that it is set up in just such a way, physical laws have just that effect. The fact that we perform modus ponens is due to some such basic functional architecture. We just do what we are programmed to do. But does this not beg the question? Is it not an empirical question whether natural selection results in mistakes? Consider Richard Dawkins's (1982) discussion of the case of the digger wasps. Digger wasps hide paralyzed prey in burrows; when they fight over a burrow the time they spend fighting is proportional to their own efforts in stocking the burrow, not to the "true value" of the burrow measured in terms of the number of prey it contained. At first sight, these wasps appear to be committing the "sunk costs" or "Concorde fallacy." But this example too embodies the sort of descriptive/normative ambiguity I have been discussing. The digger wasps would not be there if their policy had not worked out as well as or better than available alternatives. So who are we to carp? Dawkins's recommendation is instructive: "assume that an animal is optimizing something under a given set of constraints ... try to work out what those constraints are" (Dawkins 1982, p. 48). Sure enough, the digger wasps commit no fallacy under the constraints entailed by their epistemic position. In other words: the biologist's real task is to explain why the apparent alternatives that might have avoided the "sunk costs fallacy" were not actually "available alternatives." So even if one agrees that "sunk costs" reasoning is a normative mistake, the constraints of descriptive adequacy will not let us actually blame the wasp (or natural selection) for committing it. If one cannot even blame wasps for being irrational, what are the prospects of making charges of irrationality stick to the "rational animal" par excellence? Someone might object to this whole discussion that talk of evolutionary rationality is irrelevant. The questions we should be raising concern rational agents, where the word 'rational' just means "capable of irrationality." Natural processes can maximize this or that parameter, but they cannot exhibit irrationality.
The special rationality of persons, then, consists essentially in the capacity to be irrational. Some have argued that humans could never be systematically irrational (Jonathan Cohen 1981; against him, see Stich 1990). But if this is so, what accounts for our actual irrationality in particular cases? And how, from a naturalist point of view, can we be convicted of such irrationality? If our models were strictly descriptive, would the appearance of irrationality not merely indicate inadequacy in the model? This, then, returns us to my original question: Are the models constructed to account for human rationality purely descriptive, or must they contain an irreducibly normative element?
Four Classes of Models

Models of rationality fall into two pairs of distinct classes: (i) strongly or (ii) weakly compulsory; and (iii) weakly or (iv) strongly optional. These cases are significantly different with respect to their origins, to the role played in their determination by natural selection, and to the way in which they are subject to the two problems of descriptive/normative ambiguity and foundational status. The third and fourth classes are particularly interesting. In the remainder of this paper, I propose first to tease out some characteristics of the two compulsory types, particularly the duality of descriptive and normative aspects, and then to examine the special role of emotions in relation to both sorts of optional principles.
(i) Strongly Compulsory

Example: modus ponens/tollens. One who does not observe these rules is straightforwardly irrational. Nevertheless, a non-question-begging justification of the rule has not yet been stumbled on. This fact makes it plausible to argue that these principles are irreducibly and categorically normative. For if they were merely conditionally normative, one would be able to offer the conditions on which their prescriptive force depends. The fact is, however, that even in strongly compulsory cases the normative force of the argument falls far short of "logical compulsion." To see this, consider modus ponens. Two facts stand out: the first is that in this case normativity entails naturalism. The second is that even in this case no argument ever compels.

Why Normativity Entails Naturalism

A naturalistic theory is exactly what we need at precisely the point where normativism is supposed to triumph. The proof lacks freshness, because it is really an amalgam of Hume (1975), Quine (1966), Goodman (1983), Lewis Carroll (n.d.), and Wittgenstein (1958). But here it is, in terms of Carroll's classic dialogue between Achilles and the Tortoise.
Achilles: "If p then q, and p. So you must accept q."

Tortoise: "Why must I - oh, never mind, I know you'll never satisfactorily answer that one. Don't even try. [Hume] Instead, let me accept this imperative without justification. Let me accept it, in fact, as a categorical imperative of thought. Or if that sounds too grandiloquent, let's just call it a convention [Carnap (1956)]. But, please, write it down for me."

Achilles: All right, then (writes): "p and (if p then q). But, if p and (if p then q) and (if p and (if p then q) then q) then q." See? Now you must accept q.

Tortoise: "I've agreed to your rule, but now how do I know that this is a relevant instance for its application? [Quine] I need a principle of interpretation that will indicate to me when and how I must apply this categorical rule or convention that I have agreed not to question. [Wittgenstein]"
Hence the Quine/Wittgenstein/Carroll dilemma: Either you will need to give me a rule of interpretation for every new case - and then you will have a doubly exploding process: each new case will not only require a new rule of interpretation but also an additional rule of interpretation to interpret the application of the rule of interpretation - ad infinitum. Or the answer has to be at some point that we do not follow a rule at all: we just naturally do this. In short, it is just the way we are wired.

Why No Argument Compels

I have called modus ponens (and its converse) the most compelling cases, because their violation requires heroic twists in the application of the principle of charity. Even in this most compelling case, it is important to see that no one is actually compelled to believe the conclusion of a valid argument. Arguments are maps, not guides. The most any deductive argument can give us is a set of alternatives: believe the conclusion together with the premises, or continue to reject the conclusion, but then also reject one or more premises. And in this situation, what is the most reasonable thing to do? The most reasonable thing to do is, surely, to believe the least incredible alternative. But what can determine which that is? Since not everyone will agree, the relevant determinant must be something essentially subjective. At best it can be discerned, at the end of a process of reflection, by the place at which a "reflective equilibrium" is reached. But if we need to appeal to a
reflective equilibrium even in the most compelling case, then a fortiori we shall need to understand what it is that guides our choice of rational strategies in other cases. That, I venture, is where the unique structural role of our emotional dispositions comes in. More of this in a moment. Let me first complete the sketch of my taxonomy.
(ii) Weakly Compulsory

Sometimes, it seems that sub specie aeternitatis there is a clear answer to the question of what is the correct way to interpret a given situation and produce a rational outcome. Some of the Kahneman-Tversky problems seem to be of this sort: the usual claim about them is that we tend to make mistakes about them (Kahneman and Tversky 1982). Another good example is a problem that has come to be known as the "Monty Hall Problem":

Three cards are face down; one is an Ace, two are Kings: you do not know which. I ask you to put one finger on one card at random (you may hope it is the Ace). I then turn one other card up, which is a King. Now I ask you to bet on which of the remaining cards is the Ace: the one you had your finger on, or the other one? It is tempting to reason: since there are just two cards, it makes no difference. You could switch or stay at random. But actually, if you switch, you stand to win; if you stay, you stand to lose, two thirds of the time. For, of all the times you start playing this game, your finger will be on the Ace just one third of the time.4
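If the two-thirds claim still feels wrong, it is easy to check by brute force. The short simulation below is my own sketch of the card version just described (the function and variable names are mine, not the author's); turning up a King from the two unchosen cards is always possible, since at most one of them can be the Ace.

```python
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        cards = ["Ace", "King", "King"]
        random.shuffle(cards)
        pick = random.randrange(3)   # finger on a card at random
        # Turn up a King from among the two cards not picked.
        revealed = next(i for i in range(3) if i != pick and cards[i] == "King")
        if switch:
            pick = next(i for i in range(3) if i not in (pick, revealed))
        wins += cards[pick] == "Ace"
    return wins / trials

print("stay:  ", play(switch=False))   # about 1/3
print("switch:", play(switch=True))    # about 2/3
```

Running it shows the player who stays finding the Ace about a third of the time, and the player who switches about two thirds of the time, exactly as claimed.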
In cases such as these, it is clear that our intuitive answers are just plain wrong. It does not follow, needless to say, that "evolution failed us," since one can imagine constraints under which the decision procedure in question might turn out to be the best of all possible procedures. Besides, while we are bad at working out probability problems, we are actually quite sensitive to frequency differences in practice (Whitlow and Estes 1979). In some cases, principles such as "anchoring" or "representativeness" may involve significant savings of cognitive resources, and yield approximately correct results enough of the time to outweigh their disadvantages in the cases generally highlighted in the literature (Kahneman and Tversky 1982). In these compulsory cases, we might expect that once the problem is sufficiently well defined, we can give conclusive reasons for the superiority of one argument or method over another. This class of examples differs from the "strongly compulsory" ones in that they make no claim to foundational status. As a result, they admit of (conclusive) justification. I call these weakly compulsory because an argument is required
to see that they are correct principles. (By contrast, as we just saw, in the case of modus ponens the cause is lost as soon as one starts asking for an argument.) Arguments in their favour will be normative in tone; but once understood, they will be seen to be as compelling as any argument can be - within the limits just discussed. Anyone (including, notoriously, a number of "experts") inclined to dispute the standard solution to the Monty Hall problem can be invited to put their money behind their principle, and soon come up against the necessity of admitting that either it is time for them to give up the ordinary principles of induction, or they must take their monetary losses as evidence of their mistake.
(iii) Weakly Optional

Optional cases are those where no "compelling" (in quotation marks because of the qualification just made that no argument is really so) solutions can be shown to be correct. There are two separate reasons why this might be so. In one case, there are alternative solutions that have the feel of an antinomy: equally compelling arguments seem to line up on either side of the issue. Newcomb's problem, for example, pits dominance arguments (nothing wrong with them) against probabilistic arguments (nothing wrong with these either). Yet the arguments' conclusions are radically incompatible (Nozick 1969).
(iv) Strongly Optional

In other types of situation, there is no definitely or demonstrably right answer to the questions at all. When offered a choice between betting and not betting in a zero-sum game, for example, there is, ex hypothesi, no Bayesian reason to choose either. Such a decision is a paradigm case of the strongly optional. In other cases there are competing goals that cannot be reconciled at any permanently optimal point. In the ethics of belief, for example, the two competing goals are "maximize truth," and "minimize falsehood." Either goal could be fully satisfied at the expense of completely ignoring the other. So any policy is, in effect, a compromise between contrary risks. Still other cases seem to involve rules that are reasonable under certain evolutionary constraints. These are reminiscent of biological cases such as the digger wasp. Principles such as anchoring or representativeness may belong here rather than in the compelling class. For having a stable policy enabling quick decisions may lead, in the long run, to better results even if the policy is a relatively coarse one. As Mill once put it, a sailor would not get on better by calculating the Nautical Almanac afresh before every turn of the tiller. This consideration casts doubt on the claim that these are definitely mistaken (Mill 1971).
The Necessity of Biological Economics

In all the above cases, it might be tempting to claim that there is no real possibility of discovering that the processes of evolution are "irrational." The normative correctness of these processes is built into the conditions of adequacy for their description. The reason is that the economic model is more straightforwardly applicable in biology than it is in economics itself. For economics applies to people only insofar as they can be construed as economic agents - an idealization. In biology we can give the model a literal interpretation: probability of this gene's reproduction can simply be taken as the actual frequency (in some run considered long enough) of the gene, and the benefit can be interpreted as the difference between this frequency and the corresponding future frequency of its alleles (or some other acceptable measure of fitness). If there were constraints that prevented an organism from attaining some "ideal" condition, these are automatically included in the equation. There is one qualification, however. Sometimes, we can see something in nature that we would have to rate plainly irrational if we thought that God had invented it, because, if God had invented it, there would have been no special constraints on the mode of its engineering. Take, for example, the ratio between the sexes among vertebrates in general and primates in particular. If you were God, you would surely arrange for that to be as close to 0:1 - to parthenogenesis - as was compatible with the gene-mixing function of sex (grant, for the sake of this almost-serious argument, that hypothesis about the function of sex) (Williams 1975). In a stable environment, the best bet would be to settle on a satisfactory model. That means parthenogenesis. A parthenogenetic species needs only half the resources for every offspring produced, and moreover, the offspring, being clones, are of guaranteed quality. In unstable environments, however, all clones might be threatened at once. So we need variation, kept up by the gene shuffling of sex, to increase the chance of there being some variant pre-adapted to the new conditions. Males, however, are notoriously murderous and wasteful, and their presence in such large numbers clearly manifests that this is not the best of all possible worlds. One in a hundred would easily suffice. But the trouble is that the mechanism that secures the actual ratio takes no account of the normative considerations just adduced. It secures the result purely mechanically, for if there is a tendency for the genes to favour one sex, the members of the other immediately acquire an advantage, in that they will, on average, necessarily have occasion to contribute their genes to a larger number of members of the future generation.
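The purely mechanical argument in that last sentence amounts to two lines of arithmetic. The sketch below is my own illustration with invented numbers (a female-biased population of 10 males and 90 females producing 200 offspring): since every offspring has exactly one father and one mother, the rarer sex gets the larger average share of parentage, so genes biasing their carriers toward producing that sex spread until the ratio is restored.

```python
def expected_offspring_per_parent(n_males, n_females, n_offspring):
    # Total paternity and total maternity both equal n_offspring.
    return n_offspring / n_males, n_offspring / n_females

# In a female-biased population, the average male fathers far more offspring
# than the average female mothers, so producing sons pays.
per_male, per_female = expected_offspring_per_parent(n_males=10, n_females=90, n_offspring=200)
print(per_male, per_female)   # 20.0 versus roughly 2.2
```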
Optional-Foundational Principles

In the case of human policies, in contrast to the biological cases just described, any constraints placed upon us by the facts of natural life
limit only what we can do, not what we can judge to be desirable. Here, perhaps, is the crucial difference between biological models and models of genuine intentional behaviour. Can we make a clear distinction between biological principles of rationality and those that are determined at the level of the actual life of the individual? Cognitive science commonly posits two levels of brain programming. First-level evolutionary programming determines innate operational mechanisms; its advantages are stability and early access. Second-level evolutionary programming acts via a first-level capacity for learning. Its advantages are flexibility, power, and lower evolutionary cost. They outweigh the disadvantage of complete dependence at birth, as well as the risks entailed by the need for a developmental process crucially contingent on environmental circumstances. The hypothesis I want to advance is that in the case of the "optional" types, the range of principles of rationality available actually corresponds, at least in certain cases, to emotional dispositions that determine the framework of rationality rather than its content. They represent an analogue, in the sphere of evaluation and reaction in real-life situations, of the evolutionary trade-off just alluded to: when in the grip of an emotional state, we tend to act with a speed that easily becomes haste; our behaviour tends to be stereotypical, but it is generally efficient. Emotions, like instinctual behaviour patterns, trade flexibility for early access to a response. We can, therefore, think of them as analogous to genetic constraints on rationality, without being committed to the view that they are genetically programmed. This, then, is the simple idea I want to promote: it is that the pattern of dual programming (the first for fixed responses and the second for the learning of new ones) can be extended to illuminate the "optional" principles of rationality. A partly acquired emotional repertoire might provide substitutes for innate rationality principles such as the ones governing strongly compulsory practices such as modus ponens. Thus, for example, an emotional fear or love of risk can determine the choice between betting and not; a tendency to attachment might anchor the policy of anchoring; and some sense of emotional identification with a certain kind of scenario might promote the policy of "representativeness." Moreover, emotions present characteristics peculiarly well suited to temporarily mimicking the rigidity of constraints acting on the economic models of evolution.

Emotions as Foundation Substitutes

There are a number of parallels between the temporary role played by emotion in the determination of our cognitive strategies, and the evolutionary role illustrated above in the case of the structures determining functional architecture. I noted above that even in the most compelling
case, the choice of what to believe is actually to be determined by subjective factors. In less compelling cases, such as the case of suicide, or of temptation (see Elster 1979; Ainslie 1992; Nozick 1993), present emotion is pitted against future or past emotion in ways that reflect individual "temperament" (our wired-in emotional dispositions) but also another source of variability tied to factors that depend on individual biography. In all cases, however, the emotions seem to play at least the following roles (de Sousa 1989):
(1) they filter information to the point of temporary exclusion of normally relevant facts;
(2) they offer strong motivational focus;
(3) they have quasi-foundational status, and can therefore be modified only by the kind of reflection that can change a reflective equilibrium.
This is why emotions are said to "transcend reason" insofar as the latter is the mere working out of the means to the emotionally fixed goal. (Reason is and ought to be, in the words of Hume, nought but the slave of the passions.) To grasp the significance of this last point, it is important to see that emotions are not merely desires. They have motivational force of some sort, to be sure, but that force is structurally different from that of desires, because of their "foundational" status. Let me explain. The recently dominant Bayesian-derived economic models of rational decision and agency are essentially assimilative models - two-factor theories, which view emotion either as a species of belief, or as a species of desire. They make it look as if all behaviour can be explained in terms of a suitable pair (or pair of groups) taken from each category. That enviably resilient Bayesian model has been cracked, however, by the refractory phenomenon of akrasia or "weakness of will." In cases of akrasia, traditional descriptive rationality seems to be violated, insofar as the "strongest" desire does not win, even when paired with the appropriate belief (Davidson 1980). Emotion is ready to pick up the slack: it determines what is to count as input into the Bayesian machine. Emotions are often credited with the power to change beliefs and influence desire. But they can play a determining role even without doing either of these things. By controlling attention, emotions can fix, for the duration of what we suggestively call their "spell," what data to attend to and what desires to act on, without actually changing our stock of either beliefs or desires.
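The structural point in those last sentences can be made concrete with a toy calculation. The sketch below is entirely my own construction, not de Sousa's formal proposal: the scenario, numbers, and function names are invented, beliefs and desires are held fixed, and the only thing the "emotion" does is restrict which outcomes are attended to. That alone is enough to flip the choice the Bayesian machinery delivers.

```python
def expected_utility(action, beliefs, desires, attended_outcomes):
    """Expected utility of an action, computed only over attended outcomes."""
    outcomes = [o for o in desires if o in attended_outcomes]
    probs = [beliefs[(action, o)] for o in outcomes]
    total = sum(probs)
    if total == 0:
        return float("-inf")
    # Renormalize beliefs over the outcomes the agent is currently attending to.
    return sum((p / total) * desires[o] for p, o in zip(probs, outcomes))

def choose(actions, beliefs, desires, attended_outcomes):
    return max(actions, key=lambda a: expected_utility(a, beliefs, desires, attended_outcomes))

actions = ["speak up", "stay silent"]
outcomes = ["praised", "ignored", "humiliated"]
desires = {"praised": 10, "ignored": 0, "humiliated": -20}
beliefs = {
    ("speak up", "praised"): 0.5, ("speak up", "ignored"): 0.3, ("speak up", "humiliated"): 0.2,
    ("stay silent", "praised"): 0.0, ("stay silent", "ignored"): 1.0, ("stay silent", "humiliated"): 0.0,
}

# With full attention, speaking up maximizes expected utility (0.5*10 + 0.2*-20 = 1 > 0).
print(choose(actions, beliefs, desires, set(outcomes)))              # "speak up"

# In the grip of fear, "praised" drops out of attention; beliefs and desires are
# untouched, yet the choice flips.
print(choose(actions, beliefs, desires, {"ignored", "humiliated"}))  # "stay silent"
```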
The Normative Factor in Emotions

It remains to sketch how the role of emotions in framing optional rational strategies is compatible with naturalism, and how it can explain the appearance of irreducible normativity. The hypothesis I have just sketched could be rephrased in these terms: that our repertoire of emotions constitutes the temporary functional architecture of a given person's rationality rules. The way that they are built up involves playing out basic scripts or "paradigm scenarios," in terms of which the emotion is, in effect, defined. Individual temperament plays a crucial role in the writing of these scripts, and individual differences in temperament account for a good deal of the individual differences between scripts. But so, of course, does individual history: since roles are first played out in social contexts (albeit a society that may consist only of child and caretakers), these are also in large part conditioned by social sanctions. We learn to "conform," we learn to "rebel." Neither concept makes sense without a social norm. And these norms are subjectively experienced as if they had objective reality. Felt norms, while in the grip of an emotion, will be experienced as compulsory, and so appear to be categorically normative: hence the temptation to reject naturalism. The qualification "categorically" is important, because conditional injunctions do not really pose a problem for naturalism: those who insist on irreducible normativism need categorical imperatives. It may be difficult to prove that a certain strategy is best given certain goals, but it is at least clear how this could be a purely factual question. The difficulty of doing any more has led to a tradition of thinking of reason as essentially limited to the elaboration of means (Wiggins 1976). As we saw in the case of suicide, the hardest cases for naturalism are those that involve foundational choices, i.e., choices that are not themselves conditional on pre-existing choices. But once we see the emotions as forming the framework of our deliberations and the limiting conditions of our rational strategies, it is no longer surprising that there should be a category of "optional" models, strongly backed by social norms, which in certain situations might be experienced as categorical norms. But what, in turn, is a social norm? I conclude by hazarding a coarse guess. A social norm is nothing more, I venture, than a collection of facts about the individual reactions (actual and counterfactual) of individual members of the society. To be sure, those individuals will refer to a norm in their reaction. That is because their reactions are partly internalized as immediate emotional states, which are experienced as guided by norms. But there is no reason to take this experience at face value. For if the norm itself is merely embodied in further counterfactuals about the
reactions of members of the society, there is a self-feeding loop here that is capable both of accounting for the powerful appearance of irreducible normativity, and of explaining it away, as reducible without remainder to natural facts.
Notes

1 This is so not only in those models that apply to behaviour. Hintikka (1962) exemplifies the problem for a purely cognitive domain. His notion of "virtual consistency" evades the issue, however, since it presupposes complete logical transparency. But, of course, a description that would fit only an ideally consistent subject is not a merely factual description of any real belief set.
2 I have argued this in de Sousa (1974). The gap between simple consistency and rational policy is attested by the lottery paradox (Kyburg 1961, pp. 196ff.). It seems rational to believe of each lottery ticket that it will lose, while believing that one will win. Yet these are strictly inconsistent. The question of rationality here concerns a policy about believing, not the closure of the set of propositions believed. The issue in the lottery paradox is whether the necessity that some of the propositions in that set be false - which is not controversial - constitutes a sufficient reason for holding that the policy of jointly believing all of them must be irrational - which is.
3 In the face of counter-examples raised, e.g., by Boorse (1976), to the classic Wright-type analysis (Wright 1973), Robert Nozick has suggested that genuine teleology has to obey a second-order condition, combining Wright's insight that if G is the function of X it explains why X exists, with Nagel's analysis of homeostatic systems: "The Nagel and Wright views can be combined, I suggest, to present a more complete picture of function. Z is a function of X when Z is a consequence (effect, result, property) of X and X's producing Z is itself the goal-state of some homeostatic mechanism M satisfying the Nagel analysis, and X was produced or is maintained by this homeostatic mechanism M (through its pursuit of the goal: X's producing Z)" (Nozick 1993, p. 118, referring to Nagel 1961). This condition excludes Boorse-type counter-examples. It may, however, be too stringent, since in most cases of natural selection there is little reason to think the processes involved were homeostatic, insofar as that implies centring on some fixed point.
4 I do not know the origin of this puzzle, which has been around for some years, becoming widely known a few years ago as the Monty Hall problem. Hundreds of mathematicians and statisticians, it was reported, got it wrong (Martin 1992, p. 43).
References

Ainslie, G. (1992). Picoeconomics: The Strategic Interaction of Successive Motivational States within the Person. Cambridge: Cambridge University Press.
Boorse, C. (1976). Wright on functions. Philosophical Review, 85: 70-86.
Carnap, R. (1956). Meaning and Necessity: A Study in Semantics and Modal Logic. Chicago: University of Chicago Press.
Carroll, L. (n.d.). What the Tortoise said to Achilles. In Lewis Carroll, The Complete Works of Lewis Carroll (New York: Random House), pp. 1225-30.
Cohen, J. L. (1981). Can human irrationality be demonstrated? Behavioral and Brain Sciences, 4: 317-30.
Davidson, D. (1980). How is weakness of the will possible? In D. Davidson, Essays on Actions and Events (Oxford: Oxford University Press), pp. 21-43.
(1982). Inquiries into Truth and Interpretation. Oxford: Oxford University Press.
Dawkins, R. (1982). The Extended Phenotype: The Gene as Unit of Selection. Oxford: Oxford University Press.
de Sousa, R. (1971). How to give a piece of your mind, or the logic of belief and assent. Review of Metaphysics, 25: 51-79.
(1989). The Rationality of Emotion. Cambridge, MA: Bradford Books/MIT Press.
Dretske, F. (1986). Misrepresentation. In R. J. Bogdan (ed.), Belief: Form, Content and Function (Oxford: Oxford University Press), pp. 17-36.
Elster, J. (1979). Ulysses and the Sirens: Studies in Rationality and Irrationality. Cambridge: Cambridge University Press.
Fodor, J. (1987). Psychosemantics. Cambridge, MA: Bradford Books/MIT Press.
Goodman, N. (1983). Fact, Fiction, and Forecast. 4th ed. Cambridge, MA: Harvard University Press.
Hintikka, J. (1962). Knowledge and Belief. Ithaca, NY: Cornell University Press.
Hume, D. (1975). Enquiry Concerning Human Understanding. Edited by L. A. Selby-Bigge, 3rd ed., revised by P. H. Nidditch. Oxford: Oxford University Press.
Jeffrey, R. C. (1965). The Logic of Decision. New York: McGraw Hill.
Kahneman, D., and A. Tversky (1982). On the study of statistical intuitions. In D. Kahneman, P. Slovic, and A. Tversky (eds.), Judgment under Uncertainty: Heuristics and Biases (Cambridge: Cambridge University Press), pp. 493-508.
Kyburg, H. (1961). Probability and the Logic of Rational Belief. Middletown, CT: Wesleyan University Press.
Martin, R. M. (1992). There Are Two Errors in the the Title of This Book: A Sourcebook of Philosophical Puzzles, Problems, and Paradoxes. Peterborough, ON: Broadview Press.
Mill, J. S. (1971). Utilitarianism. In Max Lerner (ed.), Essential Works of John Stuart Mill (New York: Bantam).
Millikan, R. (1991). Speaking up for Darwin. In B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and his Critics (Oxford: Blackwell), pp. 151-65.
(1993). White Queen Psychology and Other Essays for Alice. Cambridge, MA: Bradford Books/MIT Press.
Nagel, E. (1961). The Structure of Science. New York: Harcourt, Brace and World.
Nozick, R. (1969). Newcomb's problem and two principles of choice. In N. Rescher (ed.), Essays in Honor of Carl G. Hempel (Dordrecht: Reidel), pp. 114-46.
Nozick, R. (1993). The Nature of Rationality. Princeton: Princeton University Press.
Pylyshyn, Z. (1984). Computation and Cognition. Cambridge, MA: Bradford Books/MIT Press.
Quine, W. V. O. (1960). Word and Object. Cambridge, MA: MIT Press.
(1966). Truth by convention. In W. V. O. Quine, The Ways of Paradox and Other Essays (New York: Random House), pp. 70-99.
Sober, E. (1984). The Nature of Selection: Evolutionary Theory in Philosophical Focus. Cambridge, MA: MIT Press.
Stich, S. (1990). The Fragmentation of Reason. Cambridge, MA: MIT Press.
Whitlow, J. W., Jr., and W. K. Estes (1979). Judgment of relative frequency in relation to shifts of event frequency: evidence for a limited capacity model. Journal of Experimental Psychology: Human Learning and Memory, 5: 395-408.
Wiggins, D. R. P. (1976). Truth, invention and the meaning of life. Proceedings of the British Academy, 62: 331-78.
Williams, G. C. (1975). Sex and Evolution. Princeton: Princeton University Press.
Wittgenstein, L. (1958). Philosophical Investigations. Translated by G. E. M. Anscombe (New York: Macmillan).
Wright, L. (1973). Functions. Philosophical Review, 82: 139-68.
Modeling Social Interaction
8
Theorem 1 Leslie Burkholder
In a typical single-iteration prisoner's dilemma (PD) game, each player has available two choices or actions, co-operate and defect. Each player gets a better pay-off by defecting, no matter what the other players do. But when all defect, each gets less than when all co-operate. A single-iteration PD might stand on its own. Agents or players might encounter each other only during a single play of the game, never before and never again. Or a single-iteration PD game might be part of a sequence of games involving the same players. In particular, it might be the last or terminating game or stage in a finite sequence of games.1 More particularly still, and the details will be gone into later in this paper, a PD game might be the last or terminating game in a sequence of games all but the last of which have no pay-offs or benefits or losses to any of the players but acquiring information or providing disinformation about how the players will act in the terminating PD game. Now suppose we define a PD-co-operator to be any agent or player so constituted that at least sometimes it co-operates in the terminating PD game of such a sequence of information/disinformation games. Suppose also that we define a PD-non-co-operator to be an agent or player that always defects in the terminating PD game. Then the following question arises: Does it reliably pay the best, is it rational, is it most advantageous, is it expected utility-maximizing, is it optimal, to be a PD-co-operator or a PD-non-co-operator for the sequence of games under consideration? This is the question that will occupy us in this paper. Although we will usually talk about agents or players, we can also phrase our topic question as one about strategies or rules for making choices in the sequence of games of interest here. Suppose we say that a PD-co-operator strategy or choice-making rule is one which sometimes dictates co-operation in the terminating PD. A PD-co-operator agent or player is just one that faithfully executes or carries out this strategy. A PD-non-co-operator strategy is one which always dictates defect in the terminating PD game and a PD-non-co-operator agent is one whose choices are governed completely by this strategy.
                               Second Player
                        cooperate           defect
First Player
  cooperate            (R1, R2)            (S1, T2)
  defect               (T1, S2)            (P1, P2)

where T > R > P > S; the first player's pay-off is listed first in each cell.
Figure 1: Schematic pay-off matrix for the Prisoner's Dilemma game.
can phrase our topic question as: Does it pay the most, is it optimal, to adopt a PD-co-operator strategy or a PD-non-co-operator one for our sequence of games? Plainly, PD-co-operators include a large class of different kinds of agents or players (or strategies). A PD-co-operator can come in a variety of more specific types. One kind of PD-co-operator always co-operates in a terminating PD game, no matter what has happened in any previous information or disinformation games in the sequence. This might be called a PD-unconditional-co-operator. Another kind co-operates if certain things have been learned in any information games preceding the terminating PD game. This type of PD-co-operator is a PD-conditional-co-operator. Naturally, this kind can itself be divided into sub-kinds. Yet a third kind is a PD-random-co-operator. Members of this kind select co-operate at random in the terminating PD game. And cutting across the division between PD-co-operators and PD-non-co-operators, there is the division among those agents that provide full or partial information about themselves, or even deliberate misinformation, in the information/disinformation games in our sequence of games. Later in this essay, we shall have occasion to specify some of these different types of agents. Obviously our topic question - Does it pay the most to be a PD-co-operator or a PD-non-co-operator for the sequence of games under consideration? - might be answered "yes" for some kinds of PD-co-operator and "no" for others, or perhaps "yes" when perfect information about how agents will behave in the ending PD can be acquired and "no" otherwise. There is, as many know, a recent literature full of answers to this topic question.2 Philosopher David Gauthier says that if agents or players
acquire sufficient information about when other agents will co-operate in the terminating PD, it pays best for agents to be PD-co-operators, in particular a kind of PD-conditional-co-operator he calls a constrained maximizer (Gauthier 1986, ch. 6). Other philosophers have agreed with Gauthier (Harman 1988; McClennen 1988; Resnik 1987, sec. 5-4e). Philosopher Peter Danielson also says that it pays the most for agents to be PD-co-operators when they can gather sufficient information about the behaviour of others in the terminating PD, and again a particular kind of PD-conditional-co-operator but not the same kind as Gauthier favours (Danielson 1991, 1992, chs. 4, 5). Economist Robert Frank has urged that if all agents or players can acquire perfect information about when other agents will co-operate in the ending PD and the only kinds of agents there are include just PD-unconditional-co-operators and PD-non-co-operators, it is utility-maximizing for agents to be PD-unconditional-co-operators (Frank 1987; Frank, Gilovich, and Regan 1993). In Frank's work, these PD-unconditional-co-operator agents are described as honest or the possessors of consciences. And, of course, on the other side there is the traditional argument that in a terminating PD game it is only rational to be a PD-non-co-operator because, whatever kind of agent the other player is, a PD-non-co-operator always fares better (Binmore 1993, sec. 2). The idea advocated here is that these answers to the topic question are all mistaken and that a new method needs to be used to get the right answer. The idea defended here is very like that provided by political scientist Robert Axelrod's investigations of the indefinitely repeated PD game (Axelrod 1984). The indefinitely repeated PD game is just a sequence of games in which the single-iteration PD game is repeated an unknown finite number of times with the same players or agents. Axelrod wanted to know whether there is any kind of agent or player that it is most advantageous, utility-maximizing, or always best-paying to be in this kind of game. One thing Axelrod showed was that there is no such dominant agent or player (Axelrod 1981, 1984, ch. 1). There is no uniquely most advantageous kind of agent or player to be, no dominant agent or strategy, independent of what the other agents are like, for the kind of game that interested him. This is Axelrod's famous Theorem 1. The same thing is true for the sequence of games of interest in this paper, at least when players can acquire sufficient information about what other agents are like. In addition, as everyone knows, Axelrod ran game-based computer tournaments to try to find out whether there is, given that there is no dominant agent or player or strategy, at least a robustly effective agent or player for the indefinitely repeated PD game. A robustly effective agent or player is one which, although not dominant, pays very well when
playing against a representative wide variety of other kinds of agents or players. The proposal here is that, since there is no dominant agent or player or strategy for our sequence of games, a sound method for answering the topic question is also to run game-based computer tournaments and look for robustly effective agents, agents that can be counted on to pay fairly well against a representative wide variety of other kinds of agents or players. The order of business in the remainder of this essay will be as follows. We will first say something more about the sequence of games we are interested in, and particularly about the information and deliberate misinformation games in that sequence. Then we shall describe in more detail some of the different kinds of PD-co-operator and PD-non-co-operator agents that can play this sequence of games. In both cases, we'll be describing things in a manner that shows how they could be employed in suitable game-based computer tournaments. Next, we'll show that the answers in the literature mentioned above to our topic question are mistaken and say why using the method of running game-based computer tournaments is appropriate. Finally, we'll end by considering the relationship between our topic question and a question of traditional interest to theoretical and practical ethicists: Can it be expected to pay the most, is it advantageous, is it optimal, to be a moral agent or player?
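To make the tournament proposal concrete, here is a minimal sketch of the kind of round-robin, game-based computer tournament meant. It is only an illustration: the function names, the agent interface, and the scoring scheme are our own assumptions rather than anything fixed by this paper.

    import itertools

    def run_tournament(agents, play_sequence):
        # agents: dict mapping a name to a strategy object.
        # play_sequence(a, b) is assumed to return the pair of pay-offs (to a, to b)
        # from one run of the information/disinformation game followed by the
        # terminating PD.
        scores = {name: 0 for name in agents}
        for (name_a, a), (name_b, b) in itertools.combinations(agents.items(), 2):
            pay_a, pay_b = play_sequence(a, b)
            scores[name_a] += pay_a
            scores[name_b] += pay_b
        # No agent can be dominant; a "robustly effective" one simply scores well
        # against this representative field.
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

On this sketch, finding a robustly effective agent is just a matter of inspecting the top of the sorted standings once the field of competitors is representative enough.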
The Games
The basic idea for the sequence of games of interest in this paper is simple. The sequence is finite and known to be so. The sequence always terminates in a single iteration of a PD game. We will restrict our attention here to games with just two players. So the PD game has a familiar pay-off matrix, any instance of the schematic matrix in Figure 1. Plainly the sequence could terminate in other kinds of games. An obvious example worth considering among just two-player games is the PD-with-Exit game (Vanberg and Congleton 1992; Kitcher 1993). This is a PD game in which each player has a third choice amounting to not participating in the contained PD game. We can imagine a slight variant of our topic question for this game: Does it pay best to be a PD-with-Exit-co-operator (where this game, of course, terminates a sequence of suitable information/disinformation games)? Among many-player games there is obviously a host of interesting kinds of games: the minimal contributing set game investigated by social scientist Robyn Dawes (van de Kragt, Orbell, and Dawes 1985), free-rider and foul-dealer many-player PDs (Pettit 1986), and so on. And in the other direction, with fewer players or agents, there are interesting decision-problems which might be used to terminate our sequence of information and
[Figure 2 shows part of the pay-off matrix for the two-player information game: the rows (first player) and columns (second player) are choices of the form tell (string) + listen and tell (string) + don't listen, for strings such as UC, CC, and NC, and every entry awards a pay-off of 0 to each player.]
Figure 2: Part of pay-off matrix for the information game.
disinformation games like the Newcomb problem (Nozick 1969; Irvine 1993; Gauthier 1989). Our reason for restricting attention to the PD game and the two-player version in particular, rather than considering these other games, is simply that we want to address the literature mentioned above on our topic question. That literature, to a large extent, restricts itself to this game. The first part of the sequence of games is made up of appropriate information/disinformation games. In our examples there is but one iteration of this kind of game. A partial pay-off matrix for a two-player example of this game is in Figure 2. Each player has an indefinitely large number of pairs of choices, each of the form tell (string) + listen and tell (string) + don't listen. Examples of the told strings include: UC, for PD-unconditional-co-operator; CC, for PD-conditional-co-operator; NC, for PD-non-co-operator; and so on. The pay-offs for each player, no matter what the other does, are always zero. So this game itself brings no benefits or costs to the players. (It is possible to imagine versions with costs for listening or for telling, but these will not be considered
here.) Since there is but one iteration, no player or agent can select to tell (string) + listen or not, depending on what the other player does. Nor, for that matter, can any player listen and then, depending on what it hears, tell different things. The choice has to be made in ignorance of what the other player does. The obvious questions are: What is one player or agent doing when it selects to, for example, tell UC + listen or tell UC + don't listen? And why has each player an indefinite number of pairs of choices? Each choice alternative in the information game has two parts, the tell (string) part and the listen or don't listen part. The idea for the tell (string) part is that each agent tells something about when or under what conditions it will co-operate in the terminating PD. Exactly what string it tells depends on its programming; this will be described in the next section. But an obvious example is that an agent might tell UC, i.e., that it is a PD-unconditional-co-operator. What it tells need not be true or it might only be part of the truth; players could attempt to deliberately misinform or deceive others. We will return to this point in a later section of the essay. The idea for the listen and don't listen part of the choice alternatives is that players may listen or not to what they are told by other players or agents and then, presumably, react in some differential manner to this. What an agent or player might hear if it selects to listen depends entirely on what the other player is programmed to tell. (We do not include informational noise in our game, although plainly this could be introduced.) An agent or player could hear that the other player is a PD-unconditional-co-operator if that is what the other player tells about itself. And, of course, what it hears could be false or only a part of the truth or the whole truth. Players have a choice between listening and not listening; they have no option not to tell some (string). On the other hand, a possible string is the empty string, and this amounts to having not telling as a choice alternative. This explains why the choices in the information game come in pairs, tell (string) + listen and tell (string) + don't listen. Clearly, different players or agents can tell different things. The only limit on the number of pairs of choice alternatives is the number of strings that can be constructed from the vocabulary. We hold this vocabulary constant for any group of competing players or agents in a manner to be explained below. (Another variant, not explored here, allows some players or agents greater abilities in this respect than others.) This explains why there is an indefinitely large number of pairs of choice alternatives in the information game. We end this section with two miscellaneous remarks about our information and disinformation games. The first is this. These games were inspired by games in the economics and game theory literature often
known as cheap talk or sometimes as signalling games (Kreps 1990, chs. 12.1, 12.6, 17; Binmore 1992, ch. 7.5). There may be some differences, however. In cheap talk or signalling games pay-off values seem to be conceived of as a function of just telling or signalling or talking costs. Pay-off values are zero when these costs are zero. Here, pay-off values are conceived to be functions of both telling or signalling or talking costs and listening costs. Both of these costs are assumed to be zero in our examples. The result is that cheap talk or cheap signalling and our games look like they are the same thing even though they may not be. The second remark is that there are plainly other mechanisms by which information or deliberate misinformation about their prospective behaviour in a terminating PD game could be exchanged between players. Peter Danielson's mechanism, for example, is to allow each agent or player to directly inspect the other's heart or mind or program for behaviour (Danielson 1992, ch. 4). In the mechanism used here, players can only indirectly inspect another's heart or mind through what the other tells about what its program for behaviour in the terminating PD is. We do not here explore the disadvantages and advantages of one of these mechanisms over the other.
The Agents
We now describe the agents or players that make or select choices in our sequence of games. Equivalently, since our agents or players are just the executors of choice-making strategies for our sequence of games, we now have to say something about the strategies available for playing the games. Recall that one of the goals of this paper is to describe the sequence of games and the agents which play them in a manner which shows how they could be employed in game-based computer tournaments. In keeping with that goal, we conceive of our agents or players as programmed computing machines or automata. The strategies they execute or carry out are just the programs governing their behaviour. Of course, this may have been guessed from remarks made earlier about the programs that agents carry out. The agents we will describe here are, in particular, programmed state machines. The idea of using programmed state machines as agents when investigating a question similar to our topic question - What kind of agent does it reliably pay the most to be in a given sequence of games? - is certainly not original (Marks 1992 contains a sort of review of the use of state machines as agents or players in finitely iterated PD games). In addition, other writers have used other kinds of computing devices or automata for this purpose. These have included, for example, Fortran program executors in Axelrod's indefinitely repeated PD game investigations (Axelrod 1984, ch. 2),
Tell-UC-don't-listen-PD-unconditional-cooperator: If information game, tell UC + don't listen. If PD, cooperate.
Tell-UC-don't-listen-PD-non-cooperator: If information game, tell UC + don't listen. If PD, defect.
Tell-50RNDM-don't-listen-PD-random-50%-cooperator: Solicit random number, integer 1-100. If information game and 51 ...
Figure 3: Some programmed state machine agents.
Prolog program executors (Danielson 1992), Lisp program executors (Koza 1992, Koza 1991; Fujiki and Dickinson 1987; Danielson 1996), and others. State machines, at least those that can have an infinite number of states, are no less computationally capable than these other computing devices.3 But state machines have the advantage of being, at least for the kinds of agents we are mostly concerned with here, easy to comprehend and construct, or design, or program. We should say something about the details of these computing devices and their programs. Each programmed state machine, whether it is employed as an agent in the sequence of games we are interested in or for some other purpose, has a number of features:
(1) It has a finite or infinite set of internal states, S.
(2) It has a designated start state, s, in S.
(3) It has available a finite set of vocabulary symbols for input, I.
(4) It has available a finite set of vocabulary symbols for output, O.
(5) It has a state transition function, from S × I to S.
(6) It has an output function, from S to O.
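As a concrete gloss on features (1)-(6), here is a minimal sketch of such a programmed state machine in Python. The class name, the dictionary encoding of the transition and output functions, and the step method are our own illustrative choices, not anything prescribed by the paper.

    class StateMachineAgent:
        # S: set of internal states; start: designated start state s in S;
        # inputs (I) and outputs (O): finite vocabularies of symbols;
        # transition: dict mapping (state, input symbol) -> next state  (S x I -> S);
        # output: dict mapping state -> output symbol                   (S -> O).
        def __init__(self, states, start, inputs, outputs, transition, output):
            self.states, self.start = states, start
            self.inputs, self.outputs = inputs, outputs
            self.transition, self.output = transition, output
            self.state = start

        def step(self, symbol):
            # Read one input symbol, move to the next state, and emit the
            # output symbol attached to that state.
            self.state = self.transition[(self.state, symbol)]
            return self.output[self.state]

        def reset(self):
            self.state = self.start

On this encoding, a program is just the pair of dictionaries; running an agent through the sequence of games amounts to feeding it one input symbol per game and reading off the symbol it outputs.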
Examples of computer programs for state machines are typically given by presenting a transition table or a graph. Here we will use just
transition graphs to present state machine programs. These specify the set of internal states the programmed machine has, its start state, the input symbols it uses (which may not be all the symbols in the input vocabulary set), the output symbols it uses (which also may not be all the symbols in the output vocabulary set), its state transition function, and its output function. Figures 3 and 4 give some examples of programs governing the behaviour of state machine agents for our sequence of games together with translations into English of these programs. We assume that whatever resources are available for the construction or programming of one state machine agent or player are also available to every other. Given the nature of state machines, what this essentially
Tell-RC-listen-PD-reciprocal-cooperator: If information game, tell RC + listen. If PD and hear RC, cooperate. Otherwise defect.
Tell-GC-listen-PD-Gauthier-cooperator: If information game, tell GC + listen. If PD and hear GC or UC, cooperate. Otherwise defect.
Tell-CC-listen-PD-conditional-cooperator: If information game, tell CC + listen. If PD and hear RC or GC or UC, cooperate. Otherwise defect.
Figure 4: Three PD-conditional-co-operator programmed state machine agents.
amounts to is that a common vocabulary of input symbols and a common vocabulary of output symbols is available to all. (Recall that we earlier said that whatever strings were available for one agent or player to use in an information/disinformation game were also available to all other agents.) In the examples of state machine programs given in Figures 3 and 4 and elsewhere in this paper, the input and output sets of vocabulary symbols available to all include the following:
Input set I, subset for random number: {1, 2, 3, ..., 100}
Input set I, subset for information game: {UC, NC, GC, RC, CC, ...}
Output set O, subset for random number: {RNDM#}
Output set O, subset for information game (+L marks the listen choices, +¬L the don't listen choices): {50RNDM+L, C+L, UC+L, NC+L, GC+L, RC+L, CC+L, NC&NC+L, GC&NC+L, RC&NC+L, UC&NC+L, 50RNDM+¬L, C+¬L, UC+¬L, NC+¬L, GC+¬L, RC+¬L, CC+¬L, NC&NC+¬L, GC&NC+¬L, RC&NC+¬L, UC&NC+¬L}
Output set O, subset for PD game: {C, D}
Of course, we may augment this vocabulary. But every state machine agent can be governed only by a program expressed using the designated sets of vocabulary symbols. This restriction can be of theoretical interest when it comes to formalizing the idea of agents with bounded or limited capacities. Such bounded or limited agents are usually formalized as programmed state machines with a limited size, where size is a function of just the number of internal states in the programmed machine or a joint function of the number of internal states and the number of transitions between these states (Marks 1992). While we do not pursue the issue here, another obvious way of formalizing the idea of limited or bounded agents is to represent them as state machines with limited input and output sets of vocabulary symbols. As stated earlier, the programs governing the behaviour of these state machine agents are strategies or rules for making choices or decisions in our sequence of games translated into computer programs.4 This certainly raises a question: Can all the strategies for making choices or decisions in our sequence of games that might be devised be successfully translated into computer programs for state machines? This question might be put in another way: Are there agents who might play our sequence of games, whether PD-co-operators or of other kinds, capable of carrying out tasks or doing things programmed state machine agents are not capable of doing? If there are such agents or players, then by restricting ourselves to just programmed state machine agents we may fail to find some that reliably do the best in our sequence of games and so not answer our topic question correctly.
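To illustrate how one of the English descriptions in Figure 4 becomes a program over these vocabularies, here is a hedged sketch of the Tell-GC-listen-PD-Gauthier-cooperator written as a simple two-phase decision rule. The function name and the way the two games are passed in are our own assumptions; a full transition-graph encoding would spell the same thing out state by state.

    def gauthier_cooperator(game, heard=None):
        # Phase 1: the information game. Output the symbol "GC+L": tell GC and listen.
        if game == "information":
            return "GC+L"
        # Phase 2: the terminating PD. `heard` is whatever string the other player
        # told (None if nothing was heard). Co-operate only on hearing GC or UC.
        if game == "PD":
            return "C" if heard in ("GC", "UC") else "D"
        raise ValueError("unknown game: %r" % game)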
Given the remarks above about restricted vocabularies, it may seem that the answer to either version of the question must be "no." But, of course, input and output vocabularies for state machine agents or their programs may be altered and expanded as needed. So this is not a difficulty. And since (infinite) state machines with appropriate vocabularies of input and output symbols are as computationally able as any other kind of computing device, the question about the translation of strategies into state machine computer programs is the same as the question whether all the strategies of interest can be successfully translated into computer programs for the most powerful possible kind of computing machine, and the question about the nature or capabilities of agents is the same as the question whether all the agents or players to be considered are capable of carrying out only those tasks the most powerful of computing machines can carry out. It has to be admitted that there can theoretically be strategies, even for players in our sequence of games, that are not translatable into programs successfully executable by even the most powerful of computers in a finite length of time. To design these we need only insert any noncomputable function into the statement of the strategy; the result will be a non-computable strategy. Carrying out or executing such a strategy will require determining values of the non-computable function. That is, successfully being governed by such a strategy will require that an agent or player in our sequence of games be capable of determining values of the non-computable function. Such an agent or strategy executor will undoubtedly have some computational abilities, and perhaps even be the equivalent in abilities to the most powerful of computing devices. But it will also have some non-computational abilities. So it seems that there must be strategies not expressible as the computer programs we propose to govern agents or players in the sequence of games we are interested in, or, equivalently, there must be agents with abilities beyond those of the state-machine agents that we propose as agents or players in this sequence of games. Should this concern us? Perhaps not. Many of those interested in our topic question and questions like it are interested for very practical reasons. They want to be able to give advice to people who might find themselves in circumstances characterizable as a sequence of games like the one we are interested in (e.g., Axelrod 1984, chs. 6, 7; Feldman 1988). They believe that people should make themselves into agents of certain kinds if being agents of that kind will pay the most (Gauthier 1986, ch. 4). Unless we think that human beings or other agents can successfully execute or carry out non-computable strategies for decision-making in our sequence of games or somehow make themselves into agents capable of exercising non-computational powers,
discoveries about strategies which can only be successfully executed by agents with powers beyond those of any computing device do not seem to be of much relevance. That is, restricting ourselves only to strategies translatable into computer programs, or to consideration only of agents with the abilities of the most capable of computers, seems not to have any detrimental practical consequences for the topic we are interested in here (cf. Norman 1994). Indeed, this restriction may be too generous. It is commonly remarked that people have, even when they are aided by the most powerful supercomputers, practical limits or bounds on their computing abilities (Simon 1959). This fact is the source of the interest in formal models of limited agents or players referred to earlier (Marks 1992; Norman 1994). So if we are ultimately interested in providing practical advice to human and other computationally limited agents, our restriction to strategies that can be translated into computer programs to be executed by state machine agents with unbounded computing abilities may even be more generous than it should be.5 Now we turn to the last issue for this section of the paper: Where do the computer programs for our state machine agents or the strategies they execute or embody, or equivalently, the programmed state machines, come from? The developers of these strategies, or equivalently, the designers of the computer programs, can be of various kinds. They can be decision-makers who determine a strategy for playing our sequence of games and then, so to speak, embody that strategy in a suitably programmed computing device. These decision-makers might be rational or they might not be. Recall, for instance, that ordinary people designed the strategies or computer programs governing agents or players in Axelrod's game-based computer tournaments (Axelrod 1984, ch. 2). The program or strategy developers could also be some kind of non-deliberative and non-selective mechanism like the random generation from appropriate elements (states, input and output vocabulary symbols, and transition arcs) of programmed computing devices. Or the mechanism could involve more complicated genetic recombination and selection processes of the kind found in natural or artificial evolution (Axelrod 1987; Koza 1992, 1991; Fujiki and Dickinson 1987; Danielson 1996). For our purposes in this paper, it does not matter which one of these is the source of the programmed state machines which play or are agents in our sequence of games. A consequence of this promiscuity is that our investigations will not assume common knowledge of rationality on the part of each decision-maker or other mechanism by which agents or strategies are selected or designed (cf. Skyrms 1990, ch. 6).
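As a hedged illustration of the non-deliberative, non-selective design mechanism just mentioned, the following sketch randomly assembles a programmed state machine from a handful of states and the designated vocabularies. The particular sizes, defaults, and helper name are our own assumptions.

    import random

    def random_state_machine(inputs, outputs, n_states=4):
        # Randomly generate the elements of a programmed state machine:
        # internal states, a start state, a transition arc for every
        # (state, input symbol) pair, and an output symbol for every state.
        # The result could be handed to the StateMachineAgent sketch above.
        states = list(range(n_states))
        start = random.choice(states)
        transition = {(s, i): random.choice(states) for s in states for i in inputs}
        output = {s: random.choice(list(outputs)) for s in states}
        return states, start, transition, output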
Theorem 1
In this section we provide a simple result, one, as we hinted earlier in the paper, comparable to Axelrod's Theorem 1 for indefinitely iterated prisoner's dilemmas. The result is that, when sufficient knowledge is available about the character of agents, there is no one kind of agent or player it reliably pays to be in our sequence of games, independent of the kind of agent the other player is. This result is our reason for turning, for our sequence of games, to the search for a robustly effective agent rather than a dominant one as the kind of agent it reliably pays to be, and to the use of game-based computer tournaments as a means of discovering such agents or players or the strategies they embody. First, however, we make some distinctions. It is worth distinguishing two aspects or features of what one agent might hear and another agent might tell in an information game. We referred to these two features in an earlier section. In the first place, an agent might hear or be told just truths or it might hear or be told some falsehoods. In the second place, what it hears or is told might be, as we shall say, complete or incomplete. For example, an agent or player might be told by the other player that it always co-operates in the terminating PD. If the other player is indeed a PD-unconditional-co-operator then it obviously hears nothing but the truth. (For just such an agent see Figure 3, top example.) But if the other agent is really a PD-conditional-co-operator then it does not hear just truths, it hears and is told at least some falsehoods. (See Figure 3, middle example.) On the other hand, an agent may be told nothing more precise or informative than that the other player sometimes co-operates in the terminating PD (i.e., that it is a PD-co-operator). In that event, the agent may be hearing truths but what it hears is incomplete; it is not told everything that could be told about exactly when the other player will co-operate. (See Figure 4, bottom example.) In contrast, when an agent or player hears that the other is a PD-unconditional-co-operator and what it hears is the truth, it also learns the whole or complete truth about when the other player will co-operate. We can now describe three different conditions under which our sequence of games can be played and potentially known by designers of agents or strategies to be played. The conditions are borrowed from Gauthier but are adapted to our sequence of games (Gauthier 1986, ch. 6). The conditions specify how complete and true what any agent tells others about itself in an information game has to be. Equivalently, they may be thought of as restrictions on which agents are allowed to play. Can only those that provide (true or complete) information play, or can those that instead provide deceptive disinformation participate?
The first of these information-telling conditions is called complete transparency by Gauthier. This requires that what every player tells about itself must be the truth and the whole truth. In other words, agents must be able to hear or learn the whole truth in the information game about each other player's prospective behaviour in the terminating PD. In effect, each player can reliably and completely read the heart or mind of every other player if it selects the listen choice in the information game. Thus, the middle agent in Figure 3 is not allowed. The second condition under which the sequence of games can be played is mutual complete opacity. In this condition, whatever an agent tells others need be no more complete or true than what these others could learn about the agent by being told nothing in the information/disinformation game. Equivalently, this condition may be described as requiring that whatever an agent hears, if it chooses the listen action in the information/disinformation game, leaves it no more correctly or fully informed about how the other agent will behave in the terminating PD than had it selected not to listen. So all the agents in, for example, Figures 3 and 4 are permitted. There may not be much to gain, however, from being one of the truth-telling agents. The third information-telling condition is the translucency condition. It includes anything in between complete transparency and opacity. In the translucency condition, an agent is guaranteed to be left better informed about how other agents or players will behave in the terminating PD if it selects the listen choice in the information/disinformation game than if it chooses not to listen, but not so well informed as in the condition of complete transparency. In this condition an agent may tell anything compatible with that restriction. This means that an agent like the middle one in Figure 3 is disallowed, but all the other agents in Figures 3 and 4 are permitted. Although it will not affect anything later in this paper, we should perhaps note in passing an unjustified symmetry assumption often made about the completeness and truth of what different agents must tell about themselves in this translucency condition. In the transparency condition, each agent is supposed to provide the whole truth about its own character or behaviour in the ending PD game. So each agent must tell all the others the same thing: everything. Agent A tells agent B just as much about itself as B tells A. A also tells agent C just as much about itself as it tells B, and C and B tell each other just as much as they each tell A. They all tell everything there is to say that is true about their behaviour in the concluding PD. In the opacity condition, agents or players can also be counted on only to provide all others the same information, essentially nothing. Whatever agent A tells B, it can leave B having learned nothing more about A's behaviour in the terminating
PD than had A told B nothing. The same is true for whatever B tells A, A and C tell one another, and B and C tell one another. But the translucency condition is different. Our definition of this condition - and in this we do not depart in any way from Gauthier or others - does not require that A provide as much of the truth about itself to B as B gives to A, or as much of the truth about itself as B gives to C and C provides to B. A may be more deceptive or successfully disinformative in what it tells about itself than some or all of the other agents are in what they tell A or in what they tell each other. Of course, it is possible to stipulate or assume that there is no such asymmetry. Some discussions and derivations of answers to our topic question do this (Gauthier 1986, ch. 6; Resnik 1987, sec. 5-4e). But such a stipulation or assumption seems unwarranted. Further, it seems clear that once the stipulation or assumption is abandoned, these derivations of answers to our topic question do not succeed. But this is a matter which will not be pursued here. We now turn to our Theorem 1 result. All of our considerations will apply explicitly only to the complete transparency condition. But it will be clear that variations of them will give similar results when agents are not transparent to one another but all sufficiently translucent to one another. First we want to show that it cannot be established by analytical means that the kind of agent it always pays to be, no matter what other agents are like, is a tell-NC-listen-PD-non-co-operator or a tell-NC-don't-listen-PD-non-co-operator (see Figure 3 for this agent or its program). This is the only possible version of a PD-non-co-operator when agents are fully transparent. Then we show that there is no other kind of dominant agent either. The obvious reasoning for the conclusion that a tell-NC-x-PD-non-co-operator (x = listen or x = don't listen) agent is the best paying or most effective one for our sequence of games is the following backwards induction reasoning. No matter what action the other agent selects in the terminating PD, it pays more to defect. This means that the optimal kind of agent to be or strategy to follow is a PD-non-co-operator one. Of course, this also means that in the information/disinformation game, at least under the complete transparency condition, the agent must tell that it is a PD-non-co-operator. So the optimal kind of agent to be or strategy to follow in our sequence of games in the transparency condition is a tell-NC-x-PD-non-co-operator. We think the mistake in this reasoning is the assumption that the choice in the terminating PD made by the opposing agent or player is causally independent of what it hears or is told in the information/disinformation game by the first agent. There are strategies or programs an opponent agent or player might be following which will make that opponent choose to co-operate in the terminating PD when and only when it hears that
the first player is a PD-co-operator of some kind in the information/disinformation game. Examples of these strategies are displayed in Figure 4. Since under the transparency condition an agent can say it is a PD-co-operator of some kind only if it is true, it will not necessarily pay the best to be a PD-non-co-operator if the opposing player is one of these kinds of players. It will pay more to be a PD-co-operator, because being so will secure an R pay-off with such an opponent rather than a P pay-off (and R > P). Of course, there is no guarantee that an opposing agent or player is executing any such strategy. But the fact that an opposing player could be following such a strategy shows that the tell-NC-x-PD-non-co-operator strategy or agent cannot always be guaranteed to pay the best no matter what the other agent is like. We now make one remark about the backwards induction reasoning and our criticism. (The criticism, of course, merely varies an argument found in Gauthier and elsewhere (Gauthier 1986, ch. 6; Resnik 1987, sec. 5-4e).) The criticism assumes that one agent or player can influence what the other does in the ending PD game by what it tells the other in the preceding information/disinformation game. The criticism assumes, in other words, that each agent is possibly programmed with a strategy that will make the player or agent react differentially to any information it receives about the other player in that game. Now suppose that each designer or programmer of an agent or strategy is confident that each other designer or programmer is rational, confident that the others are sure that all the others are rational, and so on. That is, suppose that, contrary to our remark at the end of the previous section, common rationality obtains. Then, each designer or programmer of an agent should conclude that the other will reason backwards from the act it pays to choose in the terminating PD. So each should conclude that every other agent will be a PD-non-co-operator and, in fact, a tell-NC-x-PD-non-co-operator. In that event, nothing an agent or player tells in the information/disinformation game will affect the choice the other makes in the terminating PD. We think this conclusion presents a dilemma for anyone concerned to answer our topic question who would like both not to surrender the common rationality assumption and to assert that some kind of PD-co-operator is the best agent to be in our sequence of games in the complete transparency condition. It is, however, not a problem for our approach since, as explained at the end of the last section, we are concerned to discover answers to our topic question when this assumption is not made. Now we want to show that there is no other kind of agent it always pays to be, no matter what other agents are like, at least when agents play our sequence of games under complete transparency. For this, it is sufficient to point out that there can be, among the agents that a player
Tell-NC&NC-listen-PD-favor-NC-cooperator: If information game, tell NC&NC + listen. If PD and hear NC, cooperate. Otherwise defect.
Tell-GC&NC-listen-PD-favor-GC-cooperator: If information game, tell GC&NC + listen. If PD and hear GC, cooperate. Otherwise defect.
Tell-UC&NC-listen-PD-favor-UC-cooperator: If information game, tell UC&NC + listen. If PD and hear UC, cooperate. Otherwise defect.
Figure 5: Some discriminator programmed state machine agents.
might have as an opponent, any number of different kinds of what we shall call discriminator agents. These agents differentially discriminate between any candidate for the best agent and some other agent, giving points to the second agent in a manner that guarantees that the candidate for best agent is not a dominant one. Some examples of these agents are in Figure 5, and a partial pay-off matrix showing the results of various combinations of agents is in Figure 6. Plainly, no matter what agent or player is a candidate for being the agent it always pays
                                              Second Player
                   tell-NC-   tell-RC-   tell-GC-   tell-NC&NC-   tell-GC&NC-   tell-UC&NC-
                   ¬L-NC      L-RC       L-GC       L-NC&NC       L-GC&NC       L-UC&NC
First Player
  tell-NC-¬L-NC      P          P          P           T             P             P
  tell-RC-L-RC       P          R          P           P             P             P
  tell-GC-L-GC       P          P          R           P             T             P
  tell-UC-¬L-UC      S          S          R           S             S             R

where T > R > P > S and only payoffs for the first player are entered.
Figure 6: Part of pay-off matrix for the sequence of games.
to be, no matter what other agents are like, we can invent a discriminator agent which will make it not necessarily pay the best. So, there is no dominant agent or player or strategy for our sequence of games, at least when agents are transparent to one another. Now it may be thought that no mechanism for designing agents or developing strategies to govern their behaviour would ever as a matter of fact design or develop these kinds of discriminator agents. We doubt this to be a fact. Discriminator agents can certainly be the product of genetic recombination or mutation if, for example, the designing mechanism includes this (Axelrod 1987; Koza 1992, 1991; Fujiki and Dickinson 1987; Danielson 1996). No doubt these agents will get weeded out by the selection part of the mechanism very quickly because they will probably not reliably pay at all well. But they can also be the result of temporary imperfect rationality on the part of usually rational but human decision-makers or the result of a belief that they constitute the best response to the kind of agent an opponent decision-maker is designing. But suppose it is a fact that these agents are unlikely or rare, nearly to the point of non-existence. Then, of course, there will be a kind of agent that is robustly effective but still not dominant. It will be robustly effective because, as it happens, the kinds of agents it plays against in our sequence of games generally do not include discriminator agents that reduce its comparative score. The agent will still not
be a dominant one, guaranteed to pay at least as well as any other and sometimes better no matter what the agents it plays against are like, because the discriminator agents are still possible although improbable opponents. It is obvious that the trick here is that agents under the transparency condition are able to reliably know what other players are like. When players or agents are opaque to one another, discriminators have no basis on which to differentially make gifts to some kinds of players. Thus, our result is relevant only when players are sufficiently transparent to others. We have shown the result explicitly only for the case when they are completely transparent to one another. But clearly it also applies when agents can gather sufficient but not perfect knowledge about the kind of agents others are. This leaves us with the following conclusion: There is no agent guaranteed to pay the most in our sequence of games, at least when agents or players can find out enough about the conditions under which others will co-operate in the terminating PD. Which agent it is best to be depends very much on what the other agents or players are like. This conclusion does not say that the correct dominant strategy or agent has not been discovered yet. The conclusion says that the literature referred to at the beginning of the paper is mistaken in assuming that there is such a dominant agent or strategy to be found. One possible response to this conclusion is scientific despair. As Axelrod pointed out in his discussion of the similar result for iterated PD games, perhaps nothing of any general interest on our topic question can be hoped for (Axelrod 1984, ch. 2; cf. also Skyrms 1990, ch. 6). But we think we are now in the same position as Axelrod was when he turned to game-based computer tournaments. That is, while these tournaments may discover no general results in the end - and they certainly cannot discover a dominant agent - they may generate results that support one or another agent as a reliably effective or pretty good paying agent for our sequence of games.
Moral Agents
We have been dealing with the question: Does it reliably pay the most to be a PD-co-operator of some kind or a PD-non-co-operator in a sequence of information and disinformation games terminating in a PD? We shall conclude this paper by turning to another question. This question is: Does it pay the most to be a virtuous or moral agent, an agent that, at least when this is the morally proper thing to do, keeps promises, does not steal from or cheat others, and so on, or does it pay, is it utility maximizing, to be immoral and a cheater? We turn to this question because many of those interested in our topic question have been so at
least in part because they are interested in this question. The idea is that dealing with our topic question will in some way lead to an answer to the question about the benefits of being a moral or virtuous agent. The most obvious relationship between the two issues is this. In some instances of the PD game, and this can include PDs which are terminating games for a sequence which includes information and disinformation games of the kind considered earlier, selecting the co-operate choice or action is morally acceptable and taking the defect choice or move is morally unacceptable. Another way of saying this is that at least some conflicts between morality and immorality can be represented or modeled by a PD. It is easy to construct such instances of the PD. But there are ethically simple and ethically more complicated ones. We start with a simple example: Next Tuesday, A will have the opportunity to steal a piece of computer hardware from B. At the same time, B will have an opportunity to steal $100 from A. A and B will never encounter each other again. There is no chance that either thief will be punished. If A and B each count the status quo as better than a potential new state in which A has the computer hardware but lacks her $100 and B has the $100 but lacks her computer hardware, this is a PD. In this PD the defect or steal choice or action is immoral and the co-operate or don't steal choice or move is the only morally acceptable one. Further, their moral status is unaffected by the likely behaviour of either agent or player. If A finds out in some information/disinformation games before next Tuesday that B is almost certainly a PD-non-co-operator and so almost certain to take A's $100 given the opportunity, this does not make it morally permissible for A to steal B's property. The only morally acceptable agent for the situation seems to be a PD-unconditional-co-operator. Does it always or at least fairly reliably pay to be such an agent in these circumstances? The answer - and this should be unsurprising given our results in the section just completed - depends very much on what the other players are like. Conceivably, the other agents are good at information gathering (that is, agents or players are fairly translucent to them) and will react favourably to players they discover to have a virtuous constitution. That is, they could react differentially to information that an agent is a PD-unconditional-co-operator by selecting the co-operate or don't steal choice in the terminating PD only for such agents. To many, this will seem like wishful thinking. But only empirical work will tell us; a priori reasoning will not tell us whether other players typically are or are not differential rewarders of a virtuous character. Here is a second example in which the conflict between morality and immorality can be represented or modeled by a PD. But it is ethically more complicated than the first example. A has promised to mail B $100 on Tuesday for the purchase of a second-hand piece of computer
hardware; B has promised to mail A the piece of hardware on Tuesday. If there is no way for either to respond to failure by the other to keep her part of the bargain and joint completion of the bargain is Pareto-superior to joint failure, this is a PD. Between now and Tuesday, when they each have to carry out their parts of the agreement, A and B may have the opportunity to acquire information about the character of the other. Does it pay, either always or at least with some reliability, to be a moral agent in this kind of situation? The answer depends, of course, on what kind of agent a moral agent must be in these circumstances. Here is where things seem to be more complicated than in the previous example. According to some, no matter what either player finds out about the other in any information or disinformation games before Tuesday, selecting the defect choice in the terminating PD is morally reprehensible and taking the co-operate choice or act is morally required. It is just a matter of keeping a promise. The only ethically acceptable agent is a PD-unconditional-co-operator. No matter what the other agent or player is discovered to be like, this agent co-operates and does the morally proper thing. Other kinds of PD-co-operators, and certainly PD-non-co-operators, are not morally adequate agents. According to others, taking the co-operate choice in the ending PD in this situation is morally not required if it is reasonably certain that the other agent will not complete her part of the bargain. If, for example, player A acquires good evidence that B is a PD-non-co-operator, then there is no moral duty for A to co-operate or comply with the terms of the bargain. On this account of what is ethically required or permitted, the moral agent should be a PD-conditional-co-operator and in particular something like a PD-Gauthier-co-operator (see Figure 4). It is worth noting that some other kinds of PD-conditional-co-operators are certainly morally unacceptable. PD-reciprocal-co-operators (see Figure 4) will select defect when playing against another agent found to be a PD-unconditional-co-operator, an agent sure to live up to her end of the bargain. This exploitation seems to many to be morally reprehensible (Lackey 1990, pt. 3; Danielson 1992). So this kind of PD-conditional-co-operator is an ethically unacceptable player or agent. But whichever of these answers about what the moral agent in these circumstances is like is correct, whether it pays to be this kind of agent will depend, at least when other agents can acquire sufficient information about what agents are like, on how those other agents respond to the information they acquire. They may respond in a self-sacrificial fashion to players they learn will take advantage of them, although this seems unlikely. We have considered two examples where co-operation in a terminating PD might be morally required and so being some kind of PD-
co-operator agent might be what virtue requires. We should not exaggerate this coincidence between morality and co-operation in the PD, however. It is not as though co-operation is always required or even allowed by morality in every PD, as some writers, like biologist Richard Dawkins and economist Robert Frank, seem to think (Dawkins 1986; Frank 1987). When businesses collude to keep their prices up, they can be selecting the co-operate choice in a PD created by the competitive market (Ullmann-Margalit 1977, pt. 2, ch. 7.2). In doing this they are hardly being moral or virtuous agents.6
Notes
1 A single-iteration PD game, which is a stand-alone interaction among some players, is, of course, just the last game in a finite sequence of 1 game.
2 Sometimes this literature talks about which kinds of agents will gain the most, sometimes it talks about which strategies are optimal, sometimes about which dispositions it is rational to have, sometimes about which is the rational resolute choice or commitment, sometimes about which is the best habit to have or principle to follow, and so forth. For our purposes here, there does not seem to be any relevant difference among these.
3 Finite state machines are, of course, less computationally able than these other kinds of computing machines.
4 By translation we mean something fairly weak, something like: written up as a computer program which computes the function the strategy specifies.
5 For disagreement with this conclusion see Levi (1990).
6 We wish to thank Peter Danielson and the Centre for Applied Ethics at the University of British Columbia for the opportunity to present earlier versions of some of this material.
References
Axelrod, R. (1981). The emergence of cooperation among egoists. American Political Science Review, 75: 306-18. Reprinted in R. Campbell and L. Sowden (eds.), Paradoxes of Rationality and Cooperation (Vancouver: University of British Columbia Press, 1985).
(1984). The Evolution of Cooperation. New York: Basic Books.
(1987). The evolution of strategies in the iterated Prisoner's Dilemma. In L. Davis (ed.), Genetic Algorithms and Simulated Annealing (London: Pittman), pp. 32-41.
Binmore, K. G. (1987). Modeling rational players I. Economics and Philosophy, 3: 9-55.
(1992). Fun and Games. Lexington MA: Heath.
(1993). Bargaining and morality. In D. Gauthier and R. Sugden (eds.), Rationality, Justice, and the Social Contract (Ann Arbor, MI: University of Michigan Press).
Danielson, Peter A. (1991). Closing the compliance dilemma: How it's rational to be moral in a Lamarckian world. In P. Vallentyne (ed.), Contractarianism and Rational Choice (New York: Cambridge University Press), pp. 291-322.
(1992). Artificial Morality. London: Routledge.
(forthcoming). Evolutionary models of co-operative mechanisms: Artificial morality and genetic programming. In Peter A. Danielson (ed.), Modeling Rationality, Morality, and Evolution (Oxford: Oxford University Press), pp. 421-40.
Dawkins, R. (1986). Nice guys finish first. Horizon. London: BBC Video.
Feldman, F. (1988). On the advantages of cooperativeness. In P. A. French, T. E. Uehling Jr., and H. K. Wettstein (eds.), Midwest Studies in Philosophy, 13. Ethical Theory: Character and Virtue (Notre Dame IN: University of Notre Dame Press), pp. 308-23.
Frank, R. H. (1987). If homo economicus could choose his own utility function, would he want one with a conscience? The American Economic Review, 77(4): 593-604.
Frank, R. H., T. Gilovich, and D. T. Regan (1993). The evolution of one-shot cooperation: An experiment. Ethology and Sociobiology, 14: 247-56.
Fujiki, C., and J. Dickinson (1987). Using the genetic algorithm to generate LISP source code to solve the Prisoner's Dilemma. In J. J. Grefenstette (ed.), Genetic Algorithms and their Applications (Hillsdale NJ: Lawrence Erlbaum), pp. 236-40.
Gauthier, D. (1986). Morals by Agreement. Oxford: Oxford University Press.
(1989). In the neighbourhood of the Newcomb-predictor. Proceedings of the Aristotelian Society, 89: 179-94.
Harman, G. (1988). Rationality in agreement: A commentary on Gauthier's Morals by Agreement. Social Philosophy and Policy, 5: 1-16.
Irvine, A. D. (1993). How Braess' paradox solves Newcomb's problem. International Studies in the Philosophy of Science, 7: 141-60. Reprinted in Peter A. Danielson (ed.), Modeling Rationality, Morality, and Evolution (Oxford: Oxford University Press), pp. 421-40.
Kitcher, P. (1993). The evolution of human altruism. The Journal of Philosophy, 90: 497-516.
Koza, J. R. (1991). Genetic evolution and co-evolution of computer programs. In C. G. Langton, C. Taylor, J. D. Farmer, and S. Rasmussen (eds.), Artificial Life II (Redwood City CA: Addison Wesley), pp. 603-29.
(1992). Genetic Programming. Cambridge MA: MIT Press.
Kreps, D. M. (1990). A Course in Microeconomic Theory. Princeton: Princeton University Press.
Lackey, D. (1990). God, Immortality, and Ethics. Belmont CA: Wadsworth.
Levi, I. (1990). Rationality unbound. In W. Sieg (ed.), Acting and Reflecting (Dordrecht: Kluwer Academic Publishers), pp. 211-21.
Marks, R. E. (1992). Repeated games and finite automata. In J. Creedy,
J. Borland, and J. Eichberger (eds.), Recent Developments in Game Theory (London: Edward Elgar), pp. 43-64.
McClennen, E. F. (1985). Prisoner's Dilemma and resolute choice. In R. Campbell and L. Sowden (eds.), Paradoxes of Rationality and Cooperation (Vancouver: University of British Columbia Press, 1985), pp. 94-104.
(1988). Constrained maximization and resolute choice. Social Philosophy and Policy, 5: 95-118.
Norman, A. L. (1994). Computability, complexity, and economics. Computational Economics, 7: 1-21.
Nozick, R. (1969). Newcomb's problem and two principles of choice. In N. Rescher (ed.), Essays in Honor of Carl G. Hempel (Dordrecht: Reidel), pp. 114-46. Reprinted in R. Campbell and L. Sowden (eds.), Paradoxes of Rationality and Cooperation (Vancouver: University of British Columbia Press, 1985).
Pettit, P. (1986). Free riding and foul dealing. The Journal of Philosophy, 73: 361-79.
Resnik, M. D. (1987). Choices: An Introduction to Decision Theory. Minneapolis: University of Minnesota Press.
Simon, H. A. (1959). Theories of bounded rationality. American Economic Review, 49: 253-83.
Skyrms, B. (1990). The Dynamics of Rational Deliberation. Cambridge MA: Harvard University Press.
Ullmann-Margalit, E. (1977). The Emergence of Norms. Oxford: Oxford University Press.
Vanberg, V. J., and R. D. Congleton (1992). Rationality, morality, and exit. American Political Science Review, 86: 418-31.
van de Kragt, A. J. C., J. M. Orbell, and R. M. Dawes (1985). The minimal contributing set as a solution to public goods problems. American Political Science Review, 77: 112-22.
9 The Failure of Success: Intrafamilial Exploitation in the Prisoner's Dilemma Louis Marinoff
1. Introduction
A recent n-pair computer tournament for the repeated Prisoner's Dilemma (Marinoff 1992) amplifies and extends Axelrod's (1980a, 1980b) results, and demonstrates the relative robustness of co-operative maximization of expected utilities. For empirical and analytical purposes, the twenty competing strategies in the tournament are grouped into five "families," whose respective members share either common program structures, or similar conceptual functions. The five families are: the probabilistic family, the tit-for-tat family, the maximization family, the optimization family, and the hybrid family. Individual strategies or entire families can be selectively "bred" to exhibit (or to exclude) particular traits, or combinations of traits. MAC, the most co-operatively-weighted member of the maximization family, is the most robust strategy in the tournament. MAC plays randomly during the first one hundred moves of each thousand-move game, with a co-operative weighting of 9/10. It records the joint outcomes of the first hundred moves in an "event matrix," from which it computes its expected utility of co-operation (EUC) and of defection (EUD), from move 101 onward. The tournament payoff matrix, the maximizer's event matrix, and the expected utilities to which these matrices give rise are displayed in Figure 1. MAC updates its event matrix after every move, and maximizes its expected utilities accordingly. That is to say, on its nth move (where 100 < n ≤ 1000), MAC either co-operates or defects according to whichever of its expected utilities is the greater, based upon the previous n-1 moves. MAC has three siblings in the tournament: MAE, MEU, and MAD. These siblings' program structures are identical to MAC's, but their respective co-operative weightings for their hundred random moves are 5/7, 1/2, and 1/10. A maximization family member's robustness in the tournament increases strictly with its initial co-operativeness. In the
                    Column Player
                      c         d
Row Player    C     (3,3)     (0,5)
              D     (5,0)     (1,1)

C, c denote co-operation; D, d denote defection.

                      Opponent
                      c         d
Maximizing    C       W         X
Strategy      D       Y         Z

W = frequency of (C,c); X = frequency of (C,d); Y = frequency of (D,c); Z = frequency of (D,d);
expected utility of co-operation = 3W/(W + X)
expected utility of defection = (5Y + Z)/(Y + Z)

Figure 1: Prisoner's Dilemma payoff matrix and event matrix.
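The two expected utilities in Figure 1 can be computed directly from the four counts. The following is a minimal sketch in Python; the function name and the guards against empty rows are ours, not the tournament program's:

    def expected_utilities(W, X, Y, Z):
        # Expected utilities of co-operation and defection, given the counts
        # W = (C,c), X = (C,d), Y = (D,c), Z = (D,d) and the payoffs of Figure 1.
        euc = 3 * W / (W + X) if (W + X) > 0 else 0.0
        eud = (5 * Y + Z) / (Y + Z) if (Y + Z) > 0 else 0.0
        return euc, eud

    # The most probable MAD-MAD matrix after 100 random moves (see Section 2):
    print(expected_utilities(1, 9, 9, 81))     # (0.3, 1.4)
    # The most probable MAC-MAC matrix (see Section 3):
    print(expected_utilities(81, 9, 9, 1))     # (2.7, 4.6)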
In the overall standings, out of twenty strategies, MAC placed first; MAE, third; MEU, eighth; MAD, tenth. Like its siblings, MAC is neither nice (where niceness means never defecting first) nor rude (where rudeness means always defecting first); rather, it is nide (where nideness means indeterminacy with respect to primacy of defection). And like its siblings, MAC is both provocable and exploitive. But unlike its siblings, MAC is initially co-operative enough to attain perpetual mutual co-operation with other provocable yet forgiving strategies, such as TFT (tit-for-tat). Thus MAC can become nice. These attributes, among others, account for MAC's success in the environment under discussion.

The tournament reveals a number of interesting performance characteristics of the maximization strategies, but also exposes an ironic deficiency in their intrafamilial encounters: members of this family often fail to recognize one another, and their twins, in competition. While the random phase of its play permits a maximization strategy to "learn" about its opponent's responses by constructing an event matrix of joint outcomes, a pair of competing maximization strategies can "misconstrue" one another as random strategies, on the basis of their respective hundred-move event matrix profiles. Such cases of mistaken identity can result in perpetual mutual defection from move 101 onward. Pure defection is the optimal strategy against a random player, and as such is prescribed by the maximization calculus.
Figure 2 (reprinted from Marinoff 1992, p. 214) illustrates this general deficiency in intrafamilial maximization performance,1 and also raises several perplexing questions. First, given that all members of this family experience sharp decreases between their average tournament scores and average intrafamilial scores, why is it that the magnitudes of the differences do not correspond to the order of increasing initial co-operativeness? From greatest to smallest, the strategic order of difference is: MAC, MAD, MAE, MEU (while the corresponding order of initial co-operativeness is MAC, MAE, MEU, MAD).

Next, within the family itself, the order of success - MAE, MEU, MAC, MAD - is again altered with respect to initial co-operativeness. The second-most co-operative strategy finishes first within the family; the third-most co-operative strategy, second; the most co-operative strategy, third. Why does this unexpected order obtain? Only in competition against MAD, the least co-operative strategy, are the other strategies exploited in strict order of their increasing initial co-operativeness. Why?

Finally, in competition against respective twins, the most successful pair is MAE-MAE (averaging 2594 points per game), followed by MEU-MEU (averaging 2384 points per game). But the MAC-MAC twins, which are weighted far more co-operatively than the others, average only 1807 points per game. Why does MAC's overwhelming probability of co-operation during the first 100 moves (9/10, as opposed to 5/7 for MAE and 1/2 for MEU) result in a relatively poor performance between MAC-MAC twins? This is the most surprising and counter-intuitive result of the tournament.

2. Normally Distributed Scores

In order to appreciate what takes place when a maximization family member encounters a sibling, or a twin, one must recognize a strategic property peculiar to this family; namely, its members' sequential use of probabilistic, then deterministic algorithms. Thus, one observes two different phases in a maximization strategy's play: first, its construction of the initial event matrix for 100 moves; second, its calculation of expected utilities (and updating of the event matrix) for the subsequent 900 moves. But when maximization family members encounter
        MAC    MAE    MEU    MAD    Intrafamilial Average    Tournament Average
MAC    1807   1849   1741    971          1592                     2645
MAE    2123   2594   2356    987          2015                     2503
MEU    1887   2396   2384   1003          1918                     2362
MAD    1332   1266   1181   1029          1202                     2086

Figure 2: The maximization family - intrafamilial competition.
one another, their play takes on a reflected aspect, wherein certain symmetries, as well as asymmetries, become apparent.

One can identify five different kinds of algorithmic function in the tournament environment: predetermined, purely probabilistic, purely deterministic, mixed probabilistic and deterministic, and sequential probabilistic and deterministic (see Marinoff 1992 for a detailed description of each strategic agent). If two pre-determined and/or deterministic strategies are paired in a sequence of games, the scores of the given pair obviously do not vary from one game to another. If a probabilistic (or mixed probabilistic and deterministic) strategy is paired with any strategy other than a sequential strategy in a sequence of games, the scores of the given pair vary according to a normal distribution, in which the mean score approaches the most probable score as the number of games increases. When a maximization strategy meets a strategy that uses a mixed probabilistic and deterministic algorithm, the former's scores tend to be highly concentrated; the latter's normally distributed.

The maximization family members' scores against one another, however, are neither concentrated nor distributed normally, with one noteworthy exception. In consequence, their average scores do not, as a rule, approach their most probable scores as the number of games increases. Let the exception to the rule, which occurs in games involving MAD, be considered first. The extreme case of this exception obtains when MAD plays itself. Recall that during its first 100 moves, MAD co-operates randomly with probability 1/10. Thus, the a priori probabilistic outcomes for a MAD-MAD pair are: p(C,c) = 1/100; p(C,d) = p(D,c) = 9/100; p(D,d) = 81/100. So after 100 moves, the most probable event matrix contains entries W = 1; X = Y = 9; Z = 81, with associated expected utilities EUC = 0.3, EUD = 1.4, and the score tied at 129. The deterministic play that ensues from this matrix, from moves 101 to 1000, consists of 900 consecutive mutual defections. The game ends with the score tied at 1029. Since this score is a deterministic end-product of the most probable event matrix, it is the most probable score. Empirically, after five hundred games, MAD's average score was found to be 1029. The scores themselves appear to be distributed normally.
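Because a twin pair with a symmetric event matrix (X = Y) always makes identical choices from move 101 onward, the deterministic phase can be replayed in a few lines. The following is a sketch under the assumption that a maximizer co-operates whenever its expected utility of co-operation at least equals that of defection, and updates its matrix after every move; under that assumption it reproduces the tied scores quoted here (1029 for the MAD-MAD pair, and 2625 for the MEU-MEU pair discussed in the next section):

    def twin_deterministic_phase(W, X, Y, Z, total_moves=1000):
        # Play out moves 101..total_moves for maximization twins whose shared
        # event matrix is symmetric (X == Y), so both always act alike.
        score = 3 * W + 5 * Y + Z           # each twin's score after the 100 random moves
        for _ in range(total_moves - 100):
            euc = 3 * W / (W + X)
            eud = (5 * Y + Z) / (Y + Z)
            if euc >= eud:                  # both co-operate
                W += 1
                score += 3
            else:                           # both defect
                Z += 1
                score += 1
        return score

    print(twin_deterministic_phase(1, 9, 9, 81))       # MAD-MAD: 1029
    print(twin_deterministic_phase(25, 25, 25, 25))    # MEU-MEU: 2625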
3. Non-Normally Distributed Scores

Next, consider an encounter between the MEU-MEU pair. Since MEU co-operates with probability 1/2 during its first 100 moves, the MEU-MEU pair has equiprobable outcomes during this phase: p(C,c) = p(C,d) = p(D,c) = p(D,d) = 1/4. Thus, after 100 moves, the most probable event matrix has equal entries: W = X = Y = Z = 25, with associated
expected utilities EUC = 1.499, EUD = 2.999, and the score tied at 225. The deterministic phase of their encounter proceeds as follows. One hundred and fifty consecutive mutual defections obtain between moves 101 and 250, with a concomitant steady decrease in the value of EUD.2 By move 251, the value of EUD is driven below that of EUC, and 750 consecutive mutual co-operations ensue. After 1000 moves, the score is tied at 2625. Again, it is the most probable score.

Empirically, however, after 500 games of MEU versus MEU, the average score is found to be 2384. This is substantially less than the most probable value. The cause of the discrepancy is revealed in a histogram showing the distribution of scores for 500 games of MEU versus MEU. Figure 3 displays a non-normal distribution, with a minor prominence in the 1100-1200 point range, and a skewed distribution across the middle and upper ranges. The peak of the skewed distribution indeed coincides with the a priori most probable score, in the 2600-2700 point range. But the minor feature at the low end of the range, along with the overall skewness, diminishes the average score.

Next, consider an encounter between the MAE-MAE pair. Since MAE co-operates with probability 5/7 during its first 100 moves, the MAE-MAE pair's a priori probabilistic outcomes are: p(C,c) = 25/49; p(C,d) = p(D,c) = 10/49; p(D,d) = 4/49. Thus, after 100 moves, the most
Figure 3: MEU versus MEU. Histogram of scores for 500 games. MEU's average score: 2384.
probable event matrix has these entries: W = 52; X = Y = 20; Z = 8. The associated expected utilities are EUC = 2.159, EUD = 3.879, and the score is tied at 264. The deterministic phase of their encounter proceeds as follows. Forty-one mutual defections take place between moves 101 and 141, followed by 859 mutual co-operations. After 1000 moves, the score is tied at 2882 points. Again, this represents the most probable score.

Empirically, however, after 500 games of MAE versus MAE, the average score is found to be 2594 points. Again, a histogram (Figure 4) reveals the cause of the discrepancy between the most probable and the average scores. Figure 4 displays a non-normal distribution. While the most frequent scores by far occur in the 2800-2900 point range, which is the range of the most probable score, the skew of the distribution towards the lower ranges diminishes the average score by some 250 points. Other features of increasing prominence appear in the 1900-2000, 2300-2400, and 2600-2700 point ranges.

Finally, consider an encounter between the MAC-MAC pair. Since MAC co-operates with probability 9/10 during its first 100 moves, the MAC-MAC pair's a priori probabilistic outcomes are: p(C,c) = 81/100; p(C,d) = p(D,c) = 9/100; p(D,d) = 1/100. Thus, after 100 moves, the most probable event matrix has these entries: W = 81; X = Y = 9; Z = 1. The
Figure 4: MAE versus MAE. Histogram of scores for 500 games. MAE's average score: 2594.
associated expected utilities are EUC = 2.699, EUD = 4.599, and the score is tied at 289. In the deterministic phase, mutual co-operation commences on move 113, after only twelve consecutive mutual defections. The string of 888 prescribed mutual co-operations between moves 113 and 1000, in addition to the 81 probabilistic mutual co-operations during the first 100 moves, yields a total of 969 instances of mutual co-operation in a game of 1000 moves. The resultant score, which again represents the most probable score, is tied at 2965 points.

The competing MAC-MAC pair, however, realizes the largest empirical deviation in its family. After 500 games of MAC versus MAC, the average score is found to be 1807 points, a remarkable difference of 1158 points between the a priori most probable and a posteriori average scores. Again, a histogram (Figure 5) reveals the cause of this large discrepancy. Figure 5 shows a fragmented distribution of scores, with prominent features in the 1300-1400, 1600-1700, 2300-2400, and 2900-3000 point ranges. Empirically, the most probable score is 1300-1400 points. In addition, troughs appear between 1900-2200 and 2600-2700 points, from which ranges scores seem to be excluded. The histogram clearly illustrates how the average score for the MAC-MAC pair falls well below the most probable predicted score. But this illustration merely begs the question: Why does the distribution become so fragmented?
Figure 5: MAC versus MAC. Histogram of scores for 500 games. MAC's average score: 1807.
Indeed, this is one of a number of questions raised by an examination of the distributions of scores between members of the maximization family. In the four cases considered, in increasing order of initial co-operative weighting, one finds: first, a concentration of scores at the low end of the scale; second, a skewed distribution with a minor prominence at the low end; third, a skewed distribution in the preliminary stages of fragmentation; and fourth, a fragmented distribution. One may wonder why these differences occur, given that each distribution represents a range of deterministic results stemming from a domain of probabilistic initial conditions. What causes such pronounced changes in the profiles of the distributions?

4. Analysis of Symmetric Event Matrices

Answers are found in an analysis of the event matrix itself. There are 176,851 different combinations of 100 trials of the four possible outcomes; in other words, for the first 100 moves in the iterated Prisoner's Dilemma, there are 176,851 possible event matrices. To facilitate analysis, one seeks to formulate a few general principles that extend to the many possible cases.

First, consider those matrices which are symmetric across their major diagonals; that is, event matrices in which the numbers of (C,d) and (D,c) outcomes are identical after 100 moves. As we have seen, such matrices obtain from a priori probabilistic encounters between maximization family twins. As a most general example, suppose that any maximization strategy MAX, with an initial co-operative weighting of p, encounters its twin. Then, during their first 100 moves, both strategies co-operate randomly with probability p, and defect with probability (1 - p). Thus, after 100 moves, the entries in the most probable event matrix are: W = 100p²; X = Y = 100p(1 - p); Z = 100(1 - p)². The expected utilities are EUC = 3p, EUD = 4p + 1; and the score is tied at 100(1 + 3p - p²).

The significance of symmetry across the major diagonal is as follows. When the number of (C,d) outcomes equals the number of (D,c) outcomes, then both competitors have identical expected utilities of co-operation and of defection. In consequence, from move 101 onward, their joint play is identical, with symmetric outcomes of either (D,d) or (C,c). In the a priori evaluations of most probable scores for the MAD-MAD, MEU-MEU, MAE-MAE, and MAC-MAC twins, one naturally finds increasing tied scores (1029, 2625, 2882, and 2965 points respectively) as the co-operative weighting increases. MAD's most probable score against its twin is far lower than MAD's siblings' most probable scores against their respective twins because, unlike MAD, the other siblings sooner or later attain mutual co-operation with their respective twins.
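These closed forms follow from substituting the most probable entries into the expected-utility and scoring formulas of Figure 1. A quick symbolic check, sketched with sympy (our choice of tool rather than anything used in the original analysis):

    import sympy as sp

    p = sp.symbols('p', positive=True)
    W = 100 * p**2
    X = Y = 100 * p * (1 - p)
    Z = 100 * (1 - p)**2

    print(sp.simplify(3 * W / (W + X)))             # 3*p
    print(sp.simplify((5 * Y + Z) / (Y + Z)))       # 4*p + 1
    print(sp.expand(3 * W + 5 * Y + Z))             # each twin's score after 100 moves:
                                                    # -100*p**2 + 300*p + 100, i.e., 100(1 + 3p - p²)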
Empirically, it is found that the threshold co-operative weighting for the eventual attainment of mutual co-operation is p = 37/100 (in a game of 1000 moves with the payoffs of Figure 1). This is not a highly co-operative weighting; nevertheless, it does result in mutual co-operation from move 719 onward. The initial and final event matrices for this threshold weighting are displayed in Figure 6. Now, compare this result with that of a game in which the co-operative weighting of the competitors is 36/100, or just below the threshold value, as displayed in Figure 7. While the initial conditions in Figures 6 and 7 scarcely differ, the final results admit of considerable difference.

Having established that the threshold weighting of p = 37/100 leads to the eventual attainment of mutual co-operation at move 719, one might next find the maximum rapidity with which such co-operation can be attained. The highest admissible value of p, to the nearest 1/100, is p = 99/100. (If p equals unity, then EUD is undefined owing to division by zero.) After 100 moves, the event matrix for this maximum value of p contains entries W = 98; X = Y = 1; Z = 0. The expected utilities are EUC = 2.97, EUD = 5; the score is tied at 299. Two subsequent mutual defections, at moves 101 and 102, suffice to drive the value of EUD below that of EUC. Perpetual mutual co-operation ensues from move 103, with a resultant final score tied at 2995. (This is comparable to a final score between two nice strategies, which is tied at 3000.)

Evidently, for symmetric event matrices, the number of mutual defections required to bring on mutual co-operation can be represented
100 Moves:

                 MAX
              c       d
   MAX   C   14      23
         D   23      40

EUC = 1.14; EUD = 2.46; score tied at 197

1000 Moves:

                 MAX
              c       d
   MAX   C  296      23
         D   23     658

EUC = 2.78; EUD = 1.14; score tied at 1661

Figure 6: MAX versus MAX [p(C) = p(c) = 37/100], initial and final event matrices.
100 Moves:

                 MAX
              c       d
   MAX   C   13      23
         D   23      41

EUC = 1.08; EUD = 2.44; score tied at 195

1000 Moves:

                 MAX
              c       d
   MAX   C   13      23
         D   23     941

EUC = 1.08; EUD = 1.10; score tied at 1095

Figure 7: MAX versus MAX [p(C) = p(c) = 36/100], initial and final event matrices.
as a decreasing exponential function of initial co-operative weighting. An exponential curve-fit yields the following equation:

n = f(p) = 7093e^(-7.164p)

where n is the number of mutual defections between move 101 and the onset of perpetual mutual co-operation and p (37/100 ≤ p < 1) is the co-operative weighting. The coefficient of determination for this exponential equation is 0.985. Similarly, the final scores that result from these initial distributions can be fitted to a second exponential curve:
where s is the score after 1000 moves. The coefficient of determination for this expression is 0.9997. Needless to say, while the numerical coefficients of both curves depend upon the particular payoff structure and the length of the game, the form of the curves is independent of these coefficients. In general then, both the play that ensues from event matrices exhibiting symmetry across their major diagonals, and the scores which result from this play, conform to simple mathematical expressions. This class of event matrix gives rise to regular and readily comprehensible outcomes.
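As a rough illustration, the first fitted curve can be evaluated at the co-operative weightings used in the tournament. This is a sketch only; the constants 7093 and 7.164 are simply those reported above, so the outputs are approximate:

    import math

    def fitted_defections(p):
        # n = 7093 * exp(-7.164 * p), reported as valid for 37/100 <= p < 1.
        return 7093 * math.exp(-7.164 * p)

    for p in (37/100, 1/2, 5/7, 9/10):
        print(round(p, 2), round(fitted_defections(p)))

For MAE (p = 5/7) and MAC (p = 9/10) the predictions, roughly 43 and 11 mutual defections, sit close to the exact counts of 41 and 12 derived earlier in the chapter.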
5. Analysis of Asymmetric Event Matrices

That class of event matrices whose members do not exhibit symmetry across their main diagonals is unfortunately (from the viewpoint of simplicity of analysis) the far larger of the two classes. The event matrices in this class give rise to the non-normal distributions displayed in Figures 3 through 5. It is possible (and desirable) to gain an understanding of why these distributions arise without having to analyze tens of thousands, nor even thousands, of such matrices. Fortuitously, the process can be well represented by the tabling of results of a few dozen small probabilistic fluctuations about the most probable outcome, for each of the strategic pairs.

One first considers the case of MEU versus MEU, displayed in Figure 8. Recall the notation for entries in the generalized event matrix: W, X, Y, and Z are the respective numbers of (C,c), (C,d), (D,c), and (D,d) outcomes. Columns labelled "Initial W,X,Y,Z" contain differing values of these variables after the first 100 moves, i.e., contain different probabilistically-generated event matrices. With each initial event matrix, described by a set {W,X,Y,Z}, is associated the move number on which perpetual mutual co-operation commences (column labelled "Perpetual (C,c)") in the deterministic phase of the game (moves 101-1000) arising from that set. If no mutual co-operation occurs between moves 101-1000, the entry for that set reads "none." The column labelled "Final Score" gives the score (after 1000 moves) which results from the given initial set {W,X,Y,Z}.

The sets of {W,X,Y,Z} values are arranged in blocks. Within each block, the values of W and Z are held constant, while the difference between X and Y increases. Each column of blocks holds the value of W constant, while the value of Z increases from block to block. Similarly, each row of blocks holds the value of Z constant, while the value of W increases from block to block. Thus, Figure 8 can be read both vertically and horizontally. Reading down a column shows the effect of increasing initial difference in asymmetric outcomes (X,Y) within blocks, and of increasing initial mutual defections (Z) between blocks, upon the attainment of perpetual mutual co-operation and upon the final score. Reading across a row shows the effect of increasing the number of initial mutual co-operations (W) upon the attainment of perpetual mutual co-operation and upon the final score, with the number of initial mutual defections (Z) held constant and the variance in difference between asymmetric (X minus Y) outcomes held to unity.

Recall that for MEU versus MEU the most probable {W,X,Y,Z} is {25,25,25,25}. In Figure 8, the sets of initial event matrices are representative
Initial W,X,Y,Z    Perpetual (C,c)    Final Score

{20,X,Y,15}:
20,33,32,15        none               1139-1139
20,34,31,15        none               1138-1143
20,35,30,15        none               1137-1147
20,36,29,15        none               1136-1151
20,37,28,15        none               1131-1156

{20,X,Y,20}:
20,30,30,20        move 651           1830-1830
20,31,29,20        move 755           1625-1625
20,32,28,20        move 885           1368-1368
20,33,27,20        none               1139-1139
20,34,26,20        none               1138-1143

{20,X,Y,25}:
20,28,27,25        move 497           2132-2132
20,29,26,25        move 567           1995-1995
20,30,25,25        move 651           1830-1830
20,31,24,25        move 755           1625-1625
20,32,23,25        move 885           1368-1368

{20,X,Y,30}:
20,25,25,30        move 346           2425-2425
20,26,24,30        move 389           2342-2342
20,27,23,30        move 439           2245-2245
20,28,22,30        move 497           2132-2132
20,29,21,30        move 567           1995-1995

{20,X,Y,35}:
20,23,22,35        move 277           2557-2557
20,24,21,35        move 309           2496-2496
20,25,20,35        move 346           2425-2425
20,26,19,35        move 389           2342-2342
20,27,18,35        move 439           2245-2245

{25,X,Y,15}:
25,30,30,15        move 386           2370-2370
25,31,29,15        move 423           2299-2299
25,32,28,15        move 464           2220-2220
25,33,27,15        move 510           2131-2131
25,34,26,15        move 562           2030-2030

{25,X,Y,20}:
25,28,27,20        move 324           2488-2488
25,29,26,20        move 354           2431-2431
25,30,25,20        move 386           2370-2370
25,31,24,20        move 423           2299-2299
25,32,23,20        move 464           2220-2220

{25,X,Y,25}:
25,25,25,25        move 251           2625-2625
25,26,24,25        move 273           2584-2584
25,27,23,25        move 298           2537-2537
25,28,22,25        move 324           2488-2488
25,29,21,25        move 354           2431-2431

{25,X,Y,30}:
25,23,22,30        move 213           2695-2695
25,24,21,30        move 231           2662-2662
25,25,20,30        move 251           2625-2625
25,26,19,30        move 273           2584-2584
25,27,18,30        move 298           2537-2537

{25,X,Y,35}:
25,20,20,35        move 166           2780-2780
25,21,19,35        move 181           2753-2753
25,22,18,35        move 196           2726-2726
25,23,17,35        move 213           2695-2695
25,24,16,35        move 231           2662-2662

{30,X,Y,15}:
30,28,27,15        move 262           2622-2622
30,29,26,15        move 281           2587-2587
30,30,25,15        move 301           2550-2550
30,31,24,15        move 323           2509-2509
30,32,23,15        move 347           2464-2464

{30,X,Y,20}:
30,25,25,20        move 213           2709-2709
30,26,24,20        move 229           2682-2682
30,27,23,20        move 245           2653-2653
30,28,22,20        move 262           2622-2622
30,29,21,20        move 281           2587-2587

{30,X,Y,25}:
30,23,22,25        move 185           2759-2759
30,24,21,25        move 199           2736-2736
30,25,20,25        move 214           2709-2709
30,26,19,25        move 229           2682-2682
30,27,18,25        move 245           2653-2653

{30,X,Y,30}:
30,20,20,30        move 151           2820-2820
30,21,19,30        move 162           2801-2801
30,22,18,30        move 168           2788-2788
30,23,17,30        move 186           2759-2759
30,24,16,30        move 199           2736-2736

{30,X,Y,35}:
30,18,17,35        move 127           2858-2863
30,19,16,35        move 127           2853-2868
30,20,15,35        move 126           2850-2875
30,21,14,35        move 126           2845-2880
30,22,13,35        move 125           2842-2887

Figure 8: MEU versus MEU, varying event matrices and scores.
of some probabilistic fluctuations in these values that would naturally occur in empirical trials. Three main tendencies, and one interesting exception to them, quickly become apparent.

First, within each block, the onset of perpetual mutual co-operation (when it occurs) is increasingly delayed by increases in the difference between X and Y. For a given number of mutual co-operations, a given number of mutual defections, and an initial unequal number of (C,d) and (D,c) outcomes, the MEU-MEU pair first proceeds to equalize the number of (C,d) and (D,c) outcomes. Once that happens, their expected utilities become equal, and the pair then defects until the value of EUD is driven below that of EUC. Perpetual mutual co-operation then ensues, and a tied final score results.3 The greater the initial difference between X and Y, the greater the number of moves required for their equalization, and the still greater the number of moves that must be made before mutual co-operation is attained. Thus, for a given W and Z, the smaller the initial difference between X and Y, the larger the final score.

Second, reading down the columns, one perceives that for a constant value of W, the onset of perpetual mutual co-operation is actually hastened as the initial number of mutual defections increases. Within certain probabilistic limits, which vary according to their initial weightings, the maximization strategies demonstrate the capacity of enlisting mutual defections in the service of perpetual mutual co-operation. While one wishes to refrain from lapsing into trite moralization, this counter-intuitive capacity suggests that, in certain instances, the game-theoretic end may justify the game-theoretic means.

Third, reading across the rows, one perceives that for a constant value of Z, the onset of perpetual mutual co-operation is hastened as the initial number of mutual co-operations increases. This tendency is not surprising, but reassuring in terms of the integrity of the maximization strategy.
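The equalization dynamic just described can be replayed mechanically. The following sketch, which assumes (as before) that each maximizer co-operates whenever its expected utility of co-operation at least equals that of defection and updates its own event matrix after every joint outcome, pits two maximizers whose initial matrices are mirror images of one another; with the Figure 8 entries {25,28,27,20} and {30,18,17,35} it yields the tabulated onsets and final scores:

    def pair_deterministic_phase(W, X, Y, Z, total_moves=1000):
        # Player A starts with event-matrix counts [W, X, Y, Z] for (C,c), (C,d),
        # (D,c), (D,d); its partner B holds the mirror image [W, Y, X, Z].
        # Returns (A's score, B's score, onset move of perpetual mutual
        # co-operation, or None if it never occurs).
        a, b = [W, X, Y, Z], [W, Y, X, Z]
        onset = None

        def cooperates(m):
            return 3 * m[0] / (m[0] + m[1]) >= (5 * m[2] + m[3]) / (m[2] + m[3])

        for move in range(101, total_moves + 1):
            ca, cb = cooperates(a), cooperates(b)
            if ca and cb:        # (C,c): once reached, EUC only rises, so it persists
                a[0] += 1; b[0] += 1
                onset = onset or move
            elif ca:             # A co-operates, B defects
                a[1] += 1; b[2] += 1
                onset = None
            elif cb:             # B co-operates, A defects
                a[2] += 1; b[1] += 1
                onset = None
            else:                # (D,d): mutual defection
                a[3] += 1; b[3] += 1
                onset = None

        def score(m):
            return 3 * m[0] + 5 * m[2] + m[3]

        return score(a), score(b), onset

    # Two entries from Figure 8, reproduced under these assumptions:
    print(pair_deterministic_phase(25, 28, 27, 20))    # (2488, 2488, 324)
    print(pair_deterministic_phase(30, 18, 17, 35))    # (2858, 2863, 127)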
In general, Figure 8 shows that perpetual mutual co-operation between MEU-MEU pairs, and thus their final scores, depend upon three factors. The scores tend to increase as W increases with Z fixed, as Z increases with W fixed, and as the difference between X and Y decreases with both W and Z fixed. One can amalgamate the first two tendencies, and observe that the final scores tend to increase as the sum of similar outcomes (W plus Z) increases; or, equivalently, as the sum of dissimilar outcomes (X plus Y) decreases.

This observation, however, leads to the aforementioned exception. The {30,X,Y,35} block boasts the largest W and Z values in Figure 8, yet the results that stem from this block are not altogether consistent with the tendencies so uniformly prevalent in the rest of the table. To begin with, the onset of perpetual mutual co-operation is hastened (albeit only slightly) as the difference between X and Y increases, not decreases. And, as evidenced by the absence of tied final scores, the MEU-MEU pairs in this block attain perpetual mutual co-operation without having first equalized X and Y values, and without ever equalizing them. The scores themselves are the highest in the table, in keeping with this block's highest W + Z sum. The significance of this unusual block will be brought to light in subsequent tables.

Meanwhile, Figure 8 does indeed account for the distribution of scores in Figure 3. One can observe the contributions towards skewness, with a majority of scores occurring in the 2400-2700 point range, and none exceeding 2900 points. Contributions to the minor prominence in the 1100-1200 point range occur when the sum of W plus Z falls below a certain threshold, making mutual co-operation unattainable within 1000 moves; or when the sum of W plus Z is theoretically sufficient for perpetual mutual co-operation, but the difference between X and Y is large enough to prevent its onset. These latter conditions prevail in the {20,X,Y,15} and {20,X,Y,20} blocks, respectively.

Next, a similar table is generated for MAE versus MAE. Recall that the most probable {W,X,Y,Z} for the MAE-MAE pair is {52,20,20,8}. Figure 9 displays corresponding fluctuations about these most probable values, and the results to which they give rise. Reading down the first column of Figure 9, one observes that the two previous tendencies hold until the {40,X,Y,14} block; that is, the onset of perpetual mutual co-operation is hastened as the difference X minus Y decreases within blocks, and as the sum X plus Y decreases between blocks. The {40,23,23,14} matrix of the {40,X,Y,14} block also conforms to these tendencies. But the other matrices in that block yield results comparable to those of the {30,X,Y,35} block in Figure 8; that is, they give rise to perpetual mutual co-operation without first equalizing X and Y values, and the onset of mutual co-operation is hastened slightly as the difference X minus Y increases.

Reading down the second column, one observes that this departure from precedent tendency now becomes the norm itself. With the obvious exception of matrices in which X equals Y initially, the second column of blocks behaves as the last block in the first column. Note that, within each block except the first, the order of the onset of perpetual mutual co-operation is increasingly jumbled. The most important overall effect of this departure, exemplified in the first three blocks of column two, is reflected in the final scores. Because the X and Y values are not equalized prior to perpetual mutual co-operation, the gap between the final scores increases as the initial difference between X and Y increases. Owing to the vicissitudes of chance during the first 100 moves, one member of the MAE-MAE pair
Initial W,X,Y,Z    Perpetual (C,c)    Final Score

{40,X,Y,2}:
40,29,29,2         move 227           2715-2715
40,30,28,2         move 239           2694-2694
40,31,27,2         move 252           2671-2671
40,32,26,2         move 265           2648-2648
40,33,25,2         move 280           2621-2621

{40,X,Y,5}:
40,28,27,5         move 216           2734-2734
40,29,26,5         move 227           2715-2715
40,30,25,5         move 239           2694-2694
40,31,24,5         move 252           2671-2671
40,32,23,5         move 265           2648-2648

{40,X,Y,8}:
40,26,26,8         move 195           2770-2770
40,27,25,8         move 205           2753-2753
40,28,24,8         move 216           2734-2734
40,29,23,8         move 227           2715-2715
40,30,22,8         move 239           2694-2694

{40,X,Y,11}:
40,25,24,11        move 180           2793-2798
40,26,23,11        move 195           2770-2770
40,27,22,11        move 205           2753-2753
40,28,21,11        move 216           2734-2734
40,29,20,11        move 227           2715-2715

{40,X,Y,14}:
40,23,23,14        move 140           2819-2819
40,24,22,14        move 166           2814-2824
40,25,21,14        move 166           2809-2829
40,26,20,14        move 165           2806-2836
40,27,19,14        move 165           2801-2841

{50,X,Y,2}:
50,24,24,2         move 169           2836-2836
50,25,23,2         move 203           2754-2809
50,26,22,2         move 206           2742-2812
50,27,21,2         move 209           2730-2815
50,28,20,2         move 211           2720-2820

{50,X,Y,5}:
50,23,22,5         move 158           2851-2856
50,24,21,5         move 209           2730-2815
50,25,20,5         move 211           2720-2820
50,26,19,5         move 210           2717-2827
50,27,18,5         move 212           2707-2832

{50,X,Y,8}:
50,21,21,8         move 148           2869-2869
50,22,20,8         move 211           2720-2820
50,23,19,8         move 210           2717-2827
50,24,18,8         move 212           2707-2832
50,25,17,8         move 214           2697-2837

{50,X,Y,11}:
50,20,19,11        move 210           2717-2827
50,21,18,11        move 212           2707-2832
50,22,17,11        move 214           2697-2837
50,23,16,11        move 443           2357-2357
50,24,15,11        move 213           2688-2853

{50,X,Y,14}:
50,18,18,14        move 129           2898-2898
50,19,17,14        move 214           2697-2837
50,20,16,14        move 443           2357-2357
50,21,15,14        move 213           2688-2853
50,22,14,14        move 523           2209-2209

{60,X,Y,2}:
60,19,19,2         move 140           2899-2899
60,20,18,2         move 257           2622-2622
60,21,17,2         move 623           2059-2059
60,22,16,2         move 668           1975-1975
60,23,15,2         move 717           1883-1883

{60,X,Y,5}:
60,18,17,5         move 623           2059-2059
60,19,16,5         move 668           1975-1975
60,20,15,5         move 717           1883-1883
60,21,14,5         move 263           2585-2850
60,22,13,5         move 859           1614-1614

{60,X,Y,8}:
60,16,16,8         move 124           2922-2922
60,17,15,8         move 717           1883-1883
60,18,14,8         move 263           2585-2850
60,19,13,8         move 859           1614-1614
60,20,12,8         move 265           2568-2868

{60,X,Y,11}:
60,15,14,11        move 263           2585-2850
60,16,13,11        move 859           1614-1614
60,17,12,11        move 265           2568-2868
60,18,11,11        move 268           2555-2875
60,19,10,11        none               1334-1359

{60,X,Y,14}:
60,13,13,14        move 110           2941-2941
60,14,12,14        move 265           2568-2868
60,15,11,14        move 268           2555-2875
60,16,10,14        none               1334-1359
60,17,9,14         none               1327-1372

Figure 9: MAE versus MAE, varying event matrices and scores.
finds that joint occurrences of its co-operation and its twin's defection outnumber joint occurrences of its defection and its twin's co-operation. In the {W,X,Y,Z} region under consideration, this member's final score decreases, while its twin's increases, as the initial difference X minus Y becomes larger.

Then, suddenly, in the {50,X,Y,11} block, a new phenomenon is manifest. Four of five sets in this block give rise to perpetual mutual co-operation between moves 210-214, with respective final scores within the 2688-2853 point range. But the {50,23,16,11} matrix, which contains neither the largest nor the smallest (X,Y) difference in the block, gives rise to an unexpectedly large number of mutual defections, with the onset of perpetual mutual co-operation delayed until move 443. The resultant final score, tied at 2357 points, indicates that X and Y values are once again equalized during the game.

This phenomenon becomes increasingly frequent, and more drastic, through the balance of column two, and throughout column three. For instance, consider what takes place in the {60,X,Y,8} block. The first matrix, {60,16,16,8}, gives rise to early perpetual mutual co-operation, commencing on move 124, and the MAE-MAE twins attain a correspondingly high score, tied at 2922 points. But the second matrix, {60,17,15,8}, leads to comparative disaster: perpetual mutual co-operation does not commence until move 717, and the pair attains a correspondingly low final score, tied at 1883 points. Hence, a small increment in the difference between X and Y produces a momentous delay in the onset of perpetual mutual co-operation, with a correspondingly large decrement in the final scores.

The third matrix in the block, {60,18,14,8}, reverses the previous disaster. Perpetual mutual co-operation begins at move 263, which is now explicable in light of the initial (X,Y) difference. No equalization of (X,Y) values takes place, and the final scores are therefore fairly high but disparate, at 2585-2850 points. But the fourth matrix, {60,19,13,8}, leads to renewed disaster, with perpetual mutual co-operation commencing only on move 859, and a resultant low tied score of 1614 points.

The culmination of these alternating radical changes appears in the last two blocks of column three. The combination of a sufficiently large W plus Z sum and a sufficiently large X minus Y difference can result in perpetual mutual defection from move 101 to the end of the game. In such cases, the MAE-MAE pair attains scores of less than 1400 points. Evidently, the event matrix becomes increasingly unstable as the sum of similar outcomes (W + Z) begins to exceed that of dissimilar outcomes (X + Y). The expected utilities associated with these outcomes begin to reverse their prescriptions with each increment of the (X,Y) difference, and the pendulum of joint outcomes swings steadily
away from perpetual mutual co-operation, and towards perpetual mutual defection, as W plus Z grows and X minus Y diminishes.

Figure 9 indeed accounts for the distribution of scores in Figure 4, albeit in an unexpected fashion. When random fluctuations about the most probable event matrix, {52,20,20,8}, are relatively small, the scores attained are fairly high. Larger fluctuations which reduce the sum W + Z do not substantially reduce the final scores. But larger fluctuations which increase the sum W + Z produce both the highest scores in the distribution (when X equals Y), as well as the lowest scores (when X minus Y is sufficiently large).

Next, a similar table is generated for the MAC-MAC pair. The process leading to the fragmented distribution of scores for 500 games of MAC versus MAC (displayed in Figure 5) is well depicted in Figure 10. Figure 10 shows a continuation of the new tendency observed in Figure 9; namely, a transition to increasingly unstable event matrices. Recall that the most probable event matrix for the MAC-MAC pair is {81,9,9,1}. This set of values evidently lies in a highly unstable region of the {W,X,Y,Z} spectrum, in which probabilistic fluctuation gives rise to one of three situations. Together, the three situations account for the fragmentation of the MAC-MAC pair's distribution of scores.

First, perpetual mutual co-operation can be attained very rapidly, as on move 115 in the {79,X,Y,1} block, or even immediately, as on move 101 in the {83,X,Y,8} block. The onset of rapid perpetual mutual co-operation, when it occurs, is hastened as the sum W plus Z increases. And when it does occur, it results in very high (though not necessarily equal) scores for both twins, in the 2960-2992 point range. This situation contributes to the prominence at the high end of the distribution in Figure 5.

Second, the onset of perpetual mutual co-operation can be noticeably retarded, occurring anywhere between move 364 and move 396 in Figure 10. The delay increases with the sum of W plus Z. And the delay, when it occurs, marks a disparity in the final scores. One pair-member attains roughly 2800-2950 points; the other, roughly 2200-2500 points. This situation thus contributes to the high-range prominence, and it forms the prominence in the next-lowest point range in Figure 5. The trough from 2600-2800 points occurs, self-evidently, because no probabilistic event matrix in this region of the {W,X,Y,Z} spectrum can give rise to a deterministic score in that range.

Third, there may be no onset of perpetual mutual co-operation. Such cases give rise to disparate, low final scores. The range of the disparity varies roughly from 250 points to 550 points. This range increases, between blocks, with the sum W plus Z; and it increases, within blocks, with the difference X minus Y. A typical score is 1621-1306 points. This
Initial W,X,Y,Z    Perpetual (C,c)    Final Score

{79,X,Y,1}:
79,10,10,1         move 115           2960-2960
79,11,9,1          move 364           2352-2887
79,12,8,1          none               1330-1565
79,13,7,1          none               1318-1583
79,14,6,1          none               1306-1601

{79,X,Y,2}:
79,10,9,2          move 364           2352-2960
79,11,8,2          none               1330-1565
79,12,7,2          none               1318-1583
79,13,6,2          none               1306-1601
79,14,5,2          move 375           2298-2933

{79,X,Y,3}:
79,9,9,3           move 111           2965-2965
79,10,8,3          none               1330-1565
79,11,7,3          none               1318-1583
79,12,6,3          none               1306-1601
79,13,5,3          move 375           2298-2933

{79,X,Y,5}:
79,8,8,5           move 107           2970-2970
79,9,7,5           none               1318-1583
79,10,6,5          none               1306-1601
79,11,5,5          move 375           2298-2933
79,12,4,5          none               1285-1640

{79,X,Y,8}:
79,7,6,8           none               1306-1601
79,8,5,8           move 375           2298-2933
79,9,4,8           none               1285-1640
79,10,3,8          none               1272-1662
79,11,2,8          none               1259-1684

{81,X,Y,1}:
81,9,9,1           move 113           2965-2965
81,10,8,1          none               1330-1585
81,11,7,1          none               1318-1603
81,12,6,1          none               1306-1621
81,13,5,1          move 385           2278-2933

{81,X,Y,2}:
81,9,8,2           none               1330-1585
81,10,7,2          none               1318-1603
81,11,6,2          none               1306-1621
81,12,5,2          move 385           2278-2933
81,13,4,2          none               1285-1660

{81,X,Y,3}:
81,8,8,3           move 109           2970-2970
81,9,7,3           none               1318-1603
81,10,6,3          none               1306-1621
81,11,5,3          move 385           2278-2933
81,12,4,3          none               1285-1660

{81,X,Y,5}:
81,7,7,5           move 105           2975-2975
81,8,6,5           none               1306-1621
81,9,5,5           move 385           2278-2933
81,10,4,5          none               1285-1660
81,11,3,5          none               1272-1682

{81,X,Y,8}:
81,6,5,8           move 101           2976-2981
81,7,4,8           none               1285-1660
81,8,3,8           none               1272-1682
81,9,2,8           none               1259-1704
81,10,1,8          none               1245-1730

{83,X,Y,1}:
83,8,8,1           move 111           2970-2970
83,9,7,1           none               1318-1623
83,10,6,1          none               1306-1641
83,11,5,1          move 396           2256-2931
83,12,4,1          none               1285-1680

{83,X,Y,2}:
83,8,7,2           none               1318-1623
83,9,6,2           none               1306-1641
83,10,5,2          move 396           2256-2931
83,11,4,2          none               1285-1680
83,12,3,2          none               1272-1702

{83,X,Y,3}:
83,7,7,3           move 107           2975-2975
83,8,6,3           none               1306-1641
83,9,5,3           move 396           2256-2931
83,10,4,3          none               1285-1680
83,11,3,3          none               1272-1702

{83,X,Y,5}:
83,6,6,5           move 104           2978-2978
83,7,5,5           move 396           2256-2931
83,8,4,5           none               1285-1680
83,9,3,5           none               1272-1702
83,10,2,5          none               1259-1724

{83,X,Y,8}:
83,5,4,8           move 101           2977-2982
83,6,3,8           move 101           2972-2987
83,7,2,8           move 101           2967-2992
83,8,1,8           none               1245-1750
83,9,0,8           none               1231-1776

Figure 10: MAC versus MAC, varying event matrices and scores.
situation contributes to the two other prominences, in the 1500-1700 and 1300 point ranges of Figure 5. Again, troughs occur in the 1900-2200 and 1000-1200 point ranges because such scores are deterministically inaccessible from the event matrices in this probabilistic region of the {W,X,Y,Z} spectrum.

These three different situations occur consecutively in the {83,X,Y,5} block of Figure 10. The instability of the event matrix is well evidenced in this block. The matrix {83,6,6,5} gives rise to perpetual mutual co-operation on move 104, and results in a final score tied at 2978 points. When the (X,Y) values fluctuate from (6,6) to (7,5), perpetual mutual co-operation does not begin until move 396, with a resultant score of 2256-2931. One further fluctuation in (X,Y) values, from (7,5) to (8,4), debars further perpetual mutual co-operation from occurring in this block, and results in scores such as 1285-1680. Thus, in this block, an initial (X,Y) difference of only 4 causes severe decrements, of 1693 and 1298 points, to the final scores of the MAC-MAC pair.

In sum, Figures 8, 9, and 10 account for the different non-normal distributions of final scores in repeated encounters between MEU-MEU, MAE-MAE and MAC-MAC pairs. Moreover, these tables reveal some unexpected, interesting and shifting tendencies across the spectrum of possible event matrices. These tendencies convey an appreciation of the general nature of the relationship between the probabilistic and deterministic phases of the maximization family's play.

This appreciation extends to cases in which siblings, rather than twins, are paired. One need not resort to further analyses of numerous representative probabilistic fluctuations, but one might outline just one case to illustrate how the understanding can be applied. One hundred games of MAC versus MAE generate the non-normal distributions of final scores displayed in Figure 11. The most probable event matrix for MAC versus MAE is {64,26,7,3}, which gives rise to perpetual mutual co-operation on move 295, and thence to the most probable score of MAC 2473, MAE 2913. But the average score for 100 games is found to be MAC 1849, MAE 2123. Again, the distributions explain the discrepancy.

But what gives rise to the distributions? In the initial event matrix, let W and Z be held constant at their most probable respective values of 64 and 3, and let (X,Y) fluctuate from (25,8) to (29,4). The results are displayed in Figure 12, which illustrates how the distributions in Figure 11 arise. The probabilistic event matrices for MAC versus MAE lie in an unstable region of the {W,X,Y,Z} spectrum, from which two main deterministic states are accessible. Perpetual mutual co-operation either commences around move 300, or it does not commence at all. The first state contributes to the higher
point-range features in the respective distributions; the second, to the lower. In the first situation, MAE outpoints MAC by a typical score of 2900-2500; in the second situation, by a typical score of 1600-1350.

Figure 11: MAE versus MAC. Histogram of scores for 100 games. Average score: MAE 2123, MAC 1849.
6. An Appeal to Evolution

Similar outlines could naturally be drawn to account for the results of other encounters between maximization family siblings. But the foregoing analyses explain the reasons for MAC's relatively poor performances against its twin and its siblings, as revealed in Figure 2. MAC's initially high co-operative weighting, which stands MAC in better stead than its siblings in competition against other strategic families, militates against MAC in intrafamilial competition.

Initial W,X,Y,Z    Perpetual (C,c)    Final Score
64,25,8,3          none               1320-1425
64,26,7,3          move 295           2473-2913
64,27,6,3          move 299           2457-2922
64,28,5,3          move 302           2443-2933
64,29,4,3          none               1280-1495

Figure 12: MAC versus MAE, varying event matrices and scores.
MAC's probabilistic event matrices span an unstable region of the {W,X,Y,Z} spectrum, and the instability causes moderate to extreme discrepancies between MAC's most probable and average intrafamilial scores. MAC's less co-operatively weighted siblings, MAE and MEU, are also afflicted by this familial syndrome, but to correspondingly lesser extents. MAD is immune to it; hence MAD's most probable and average scores coincide. But MAD's immunity is conferred by a property which entails far worse consequences in the tournament environment; namely, the inability to cross the threshold of perpetual mutual co-operation. Hence, MAD's prophylactic measure is more debilitating than the syndrome which it prevents.

Does the lack of sibling recognition among maximization family members lend itself to any social or biological interpretation? One might be tempted to draw a superficial moral from this story, to the effect that since maximization strategies embody the property of exploitiveness, then even if they find no exploitable strategies in their environment they cannot refrain from exploiting one another. Simplistic sociopolitical and ethological allegories abound. One might envision a proverbial pack of thieves falling out over their spoils, instantiating Hobbes's (1651, ch. 13) notion of fleeting or insincere alliances in his natural war "of every man against every man." One might imagine a school of sharks devouring one another during a feeding frenzy, in the spirit of Spencer's (1898, pp. 530-31) "survival of the fittest," which naturally applies to predators as well as to prey.4

But these allegorical interpretations do not account for the mathematical niceties of the maximization family's interactions. If exploitiveness were a pivotal determinant of strategic robustness, then MAD would out-perform its siblings in intrafamilial competition, followed by MEU, MAE, and MAC. As Figure 2 shows, this does not occur. Moreover, as Axelrod (1980b) predicts and my (1992) tournament demonstrates, the most robust strategy (MAC) is able both to exploit the exploitable and to co-operate with the provocable. Hence, if a game-theoretic analogue of sociobiological fitness is strategic robustness, then exploitiveness alone does not make a strategy relatively robust.

A deeper interpretation of the maximization family's performance does not preclude biological and ethological analogies; rather, it suggests that a fundamental comparison be made between species recognition and strategic identification. Mechanisms of species recognition are as yet relatively little-understood across the broad zoological spectrum; however, it appears that many forms of conspecific recognition and subsequent behaviour are mediated by pheromones (e.g., see Stoddart 1976; Birch and Haynes 1982). Hosts of intraspecific biochemical messages are transmitted and received in the animal kingdom - humans included -
and it is easy to appreciate why natural selection would have favoured the evolution of this general mechanism across diverse ranges of species.

While n-pair, repeated Prisoner's Dilemma tournaments are susceptible to ecological modeling (see Axelrod 1980b; Marinoff 1992), they are also amenable to evolutionary change. We are not now referring to Maynard Smith's (1982) evolutionary games theory, which ingeniously models population genetics using game-theoretic constructs;5 rather, we are invoking a cognitive scientific approach, which effects strategic evolution by simulating aspects of the neo-Darwinian paradigm using computer technology and high-level programming languages. For example, Koza's (1991) LISP programs generate the co-evolution of minimax strategies in a generic two-person, zero-sum game. Fujiki and Dickinson (1987) adapt genetic algorithms to manipulate LISP expressions, thereby evolving strategies for the Prisoner's Dilemma. Danielson (1992, pp. 133-42) uses PROLOG to simulate both strategic adaptation and learning in the Prisoner's Dilemma. Although evolved strategies may incorporate meta-strategic properties, their paradigmatic development is distinctly evolutionary (rather than meta-game-theoretic6). Danielson (1992, pp. 51-52) calls this approach "moral engineering."

None the less, it can be predicted that Axelrod's (1984, p. 15) maxim for the iterated Prisoner's Dilemma, "... there is no best strategy independent of the strategy used by the other player," is unlikely to be threatened by the emergence of an evolutionary "super-strategy." To see why the maxim holds in evolutionary scenarios, consider the following argument. Hypothesize that the maximization family evolved some reliable mechanism of familial identification.7 A suitable strategic analogue of a pheromone could be a designated substring of co-operations and defections (e.g., CCCCCDDDDD) nested somewhere within the 100 random moves. When two such evolved maximization siblings compete, they construct their event matrices as usual. But at the same time, they also monitor their opponent's string of moves. When either maximization strategy detects the predetermined substring that identifies its opponent as "conspecific," it immediately sends back the same substring in reply, then initiates perpetual co-operation. Its opponent detects this identifying substring, and responds with perpetual co-operation.

If the maximization strategies had been vouchsafed the capacity for such behaviour, then their performances would have been considerably enhanced. MAE would have finished second instead of third in overall robustness, while MAC's margin of victory would have been even wider. However, this mechanism of identification would not render MAC the "best" strategy, independent of environment. The mechanism could also backfire, in at least three ways.
First, any randomizing strategy (or member of the probabilistic family) could fortuitously generate the predetermined recognition-string, and would thereby elicit perpetual co-operation from a maximizer. The maximization strategy would then be exploited. Second, given the knowledge that the maximization family employs a predetermined recognition-string, and given also a sufficient number and length of encounters, then presumably any evolving strategy could, by trial and error, eventually learn to generate the recognition-string itself. Again, the maximization strategy would be exploited. Third, given an evolutionary scenario, one might witness the emergence of a "rogue" maximization strategy, which first produces the identification necessary to elicit perpetual co-operation from its maximizing sibling, and then proceeds to defect perpetually itself, thus maximizing its own long-term gains (except against its twin). Such a strategy, ironically, would be exploiting the very mechanism that evolved to circumvent intraspecific exploitation. As Danielson (1992, p. 135) notes, "Flexibility makes new predatory tricks possible and requires co-operative players to be more cautious."

So, notwithstanding emergent evolutionary models, one confidently predicts the reassertion of the problematic nature of the Prisoner's Dilemma. Ever-more successful strategies may evolve, but - shades of the antlers of the Irish elk - any attribute that guarantees today's success may also seal tomorrow's doom.
Notes
1 The intrafamilial data in this table is based on 100 games between siblings, and 500 games between twins.
2 As previously noted (Marinoff 1992, p. 214), an occurrence of mutual defection lowers the expected utility of further defection. Thus, for maximization family encounters, mutual defection increases the propensity for mutual co-operation.
3 But a tied final score does not result uniquely from this process. For example, the {20,33,32,15} matrix in Figure 8 generates a single unilateral co-operative play (D,c) at move 911, which produces a symmetric matrix and hence a tied final score in the absence of mutual co-operation.
4 A deeply entrenched fallacy attributes the phrase "survival of the fittest" to Darwin. In fact, it was coined by Spencer in 1863/4 as a synonym for "natural selection." Darwin long resisted adopting this term, despite Wallace's (1866) and others' promptings.
5 Contrary to Axelrod's and Hamilton's (1981) assertion, there is no evolutionarily stable strategy in the repeated Prisoner's Dilemma. See Boyd and Lorberbaum (1987), Axelrod and Dion (1988), Marinoff (1990).
6 Meta-game theory was formalized by N. Howard (1971).
7 J. Howard (1988) lists the source code, in BASIC, of a self-recognizing Prisoner's Dilemma strategy.
References
Axelrod, R. (1980a). Effective choice in the Prisoner's Dilemma. Journal of Conflict Resolution, 24: 3-25.
(1980b). More effective choice in the Prisoner's Dilemma. Journal of Conflict Resolution, 24: 379-403.
(1984). The Evolution of Cooperation. New York: Basic Books.
Axelrod, R., and D. Dion (1988). The further evolution of cooperation. Science, 242: 1385-90.
Birch, M., and K. Haynes (1982). Insect Pheromones. The Institute of Biology's Studies in Biology, no. 147. London: Edward Arnold.
Boyd, R., and J. Lorberbaum (1987). No pure strategy is evolutionarily stable in the repeated Prisoner's Dilemma game. Nature, 327: 58-59.
Danielson, Peter (1992). Artificial Morality. London: Routledge.
Fujiki, C., and J. Dickinson (1987). Using the genetic algorithm to generate lisp source code to solve the Prisoner's Dilemma. In J. Grefenstette (ed.), Genetic Algorithms and Their Applications: Proceedings of the Second International Conference on Genetic Algorithms (Hillsdale, NJ: Lawrence Erlbaum), pp. 236-40.
Hobbes, T. (1957 [1651]). Leviathan. Edited by M. Oakeshott. Oxford: Basil Blackwell.
Howard, J. (1988). Cooperation in the Prisoner's Dilemma. Theory and Decision, 24: 203-13.
Howard, N. (1971). Paradoxes of Rationality: Theory of Metagames and Political Behaviour. Cambridge, MA: MIT Press.
Koza, J. (1991). Evolution and co-evolution of computer programs to control independently-acting agents. In J. Meyer and S. Wilson (eds.), From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behaviour (Cambridge: MIT Press), pp. 366-75.
Marinoff, L. (1990). The inapplicability of evolutionarily stable strategy to the Prisoner's Dilemma. The British Journal for the Philosophy of Science, 41: 462-71.
(1992). Maximizing expected utilities in the Prisoner's Dilemma. Journal of Conflict Resolution, 36: 183-216.
Maynard Smith, John (1982). Evolution and the Theory of Games. Cambridge: Cambridge University Press.
Spencer, H. (1898 [1863-4]). The Principles of Biology, Vol. 1. London: Williams and Norgate.
Stoddart, D. (1976). Mammalian Odours and Pheromones. The Institute of Biology's Studies in Biology, no. 173. London: Edward Arnold.
10 Transforming Social Dilemmas: Group Identity and Co-operation Peter Kollock
Introduction

In situations where there is a temptation to behave selfishly even though all would be made better off by co-operating (i.e., a social dilemma), why do people co-operate or defect? The dominant assumption is that people act so as to maximize their own outcomes. Thus, if people fail to contribute to the provision of a public good or overuse a commons, it is because of the temptation to free-ride. But narrow self-interest may not be the only motivation at work. What drives the decision to co-operate? What is the motivational basis of co-operation?

Answering this question means that researchers must tap into people's subjective definitions of strategic interaction. However, most experimental research on social dilemmas has simply assumed (implicitly or explicitly) that people are playing strategic games according to the pay-off matrices given them by the experimenter. Unfortunately there is no guarantee that subjects play an experimental game as intended by the researcher. Value is a subjective thing and for any of a variety of different reasons people might value particular outcomes more or less than the objective pay-off they receive. In other words, most researchers have assumed that when they sit subjects down to play a Prisoner's Dilemma, subjects are in fact playing that game. I wish to turn that assumption into a question and ask, What game are people playing? What is the subjective structure of the game, what variables affect this subjective structure, and to what end?

My purpose is to map out, at least in an approximate way, the subjective landscape of these games, to obtain some sense of people's preference for co-operation in different settings. This study is a step in that direction, and it concentrates on a variable which has been shown to have a significant impact on interaction - group identity. Through two experimental studies I argue that there is a general tendency to subjectively transform a Prisoner's Dilemma Game into an Assurance Game (discussed below), and that group identity has a significant, patterned
effect on this transformation. In pursuing these issues I will question whether the Prisoner's Dilemma is the most appropriate metaphor for understanding co-operation.1
The Given and the Effective Matrix

A useful model to frame this discussion is the work by Kelley and Thibaut (Kelley and Thibaut 1978; Kelley 1979) on the distinction between the given and the effective matrix:

The given matrix summarizes each person's direct outcomes as these are determined by [the person's] own and [his or her] partner's actions and without any account being taken of the effect of those actions on the partner's outcomes. In game research the given matrix is the set of pay-offs specified for each person by the experimenter. In real life the given matrix is the set of outcomes provided to the person by external reward and incentive systems as these relate to [her or his] interests, needs, abilities, and the like ... The theory simply states that there is a process (called the "transformation process") in which the interdependent persons perceive certain properties of the given pattern and govern their behaviour according to those properties (and not simply according to their own outcomes in the given matrix). They may, for example, act so as to maximize the joint or total outcomes in the given matrix. By adopting a criterion of this sort, they in effect act according to a different matrix. That is, they transform the given matrix into a different one, which is the effective one. It is the effective matrix that is directly linked to behaviour. (Kelley 1979, pp. 69, 71; emphasis in original)
The given matrix in most research on co-operation (and the starting point of the present study) is the Prisoner's Dilemma. In its simplest form two actors (self and partner) are faced with a single choice between co-operation and defection. An example of this game is shown below.

                            Partner
                     Co-operate    Defect
 Self   Co-operate     (3,3)       (1,4)
        Defect         (4,1)       (2,2)
Whatever choice the partner makes, one is better off defecting (the first figure in each pair is the outcome to self). As this is true for the partner as well, both actors converge on an outcome of mutual defection (in this example, 2 units for each), although both would have been better off if they had managed to make the "irrational" choice and co-operate (an outcome of 3 for each). Hence, the dilemma. If actors play the game literally (that is, their subjective rankings coincide with the objective rankings), it will often be very difficult to find a path to stable co-operation. This will be especially true if actors meet only a few times or if the dilemma involves many people rather than just two (termed an n-person Prisoner's Dilemma; see, e.g., Taylor 1987; R. Hardin 1982). The sorts of solutions that have been suggested are sometimes quite severe (e.g., strong monitoring and sanctioning systems) and suffer from numerous shortcomings: sanctioning systems can be very difficult and costly to establish and the end result may be morally objectionable to many people.2 Less drastic solutions have been proposed, such as factors that increase interpersonal trust, but if the situation is literally a Prisoner's Dilemma, trust will not be a way out - trusting that one's partner will co-operate provides all the more temptation to defect.

The Prisoner's Dilemma is not the only possible model for social dilemmas. The possibility of transformations leads me to consider another game that has received much less attention in the literature. It differs from the Prisoner's Dilemma in one crucial respect: mutual co-operation is more highly valued than exploiting one's partner. An example is presented below.

                            Partner
                     Co-operate    Defect
 Self   Co-operate     (4,4)       (1,3)
        Defect         (3,1)       (2,2)
This game is known as the Assurance Game (Sen 1974). Its name derives from the fact that a player would be willing to co-operate if he or she could be assured that the partner would co-operate. The game still has the structure of a social dilemma in that unless an actor can be sure that the other player is likely to co-operate, the actor will defect in order to avoid exploitation. However, trust is a solution in the Assurance Game, and the finding that trust-inducing manipulations do increase co-operation in subjects playing a Prisoner's Dilemma (Messick and Brewer 1983) may suggest that people routinely transform the dilemma into an Assurance Game. If this is in fact the case, it indicates that a common underlying motivation in social dilemma situations is
wanting to avoid being taken advantage of (alternatively, wanting to be efficacious), rather than wanting to get the best possible outcome. The implication is that it is a concern for efficacy rather than narrow greed that may be motivating behaviour. The challenge becomes to find a way to describe and measure the transformations actors might be making and to see if these transformations are patterned and reliable.
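To make the contrast between the two games concrete, the following minimal sketch (using the illustrative payoffs from the matrices above; the function name is mine) checks the best response to each possible partner choice and shows why assurance of the partner's co-operation is enough in one game but not in the other.

```python
# Payoff to self, indexed by (own_choice, partner_choice); "C" = co-operate, "D" = defect.
# The values are the illustrative rankings used in the example matrices above.
PRISONERS_DILEMMA = {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 4, ("D", "D"): 2}
ASSURANCE_GAME    = {("C", "C"): 4, ("C", "D"): 1, ("D", "C"): 3, ("D", "D"): 2}

def best_response(payoffs, partner_choice):
    """Return the choice that maximizes own payoff against a given partner choice."""
    return max(("C", "D"), key=lambda own: payoffs[(own, partner_choice)])

for name, game in [("Prisoner's Dilemma", PRISONERS_DILEMMA),
                   ("Assurance Game", ASSURANCE_GAME)]:
    print(name)
    for partner in ("C", "D"):
        print(f"  if partner plays {partner}, best response is {best_response(game, partner)}")

# In the Prisoner's Dilemma the best response is D whatever the partner does, so
# assurance of the partner's co-operation does not help. In the Assurance Game the
# best response is C against C and D against D, so trust is a solution.
```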
Studying Transformations
Kelley and Thibaut (1978) provide a logical analysis of possible transformations. Most relevant to this study is their discussion of the set of transformations that come from various linear combinations of actors' concern for their own outcomes and their partner's outcomes. One already mentioned possibility is that an actor might behave so as to maximize joint outcomes (this is described as a co-operative orientation in this literature). An actor might also desire to maximize the relative difference between self and partner (a competitive orientation), or to minimize that difference (a concern for equality). Other transformations include maximizing the partner's outcome without regard for own outcome (altruism), or maximizing own outcome without any concern for the partner's outcome (individualism). This last possibility is the limiting case of no transformation, that is, where the given and effective matrix are identical. In general then, one very important class of transformations comes from varying the weight actors assign to their own and their partners' outcomes.

There is a great deal of evidence (reviewed by Kelley and Thibaut 1978, pp. 184-202) that suggests that actors routinely behave in ways that depart from simple self-interest (i.e., depart from the given matrix). However, much of this evidence is indirect: the existence of transformations is inferred rather than directly assessed. One well-developed area of research that has made an attempt to measure transformations is the work by psychologists on social value orientation (e.g., Kuhlman and Marshello 1975; McClintock and Liebrand 1988). The working assumption in this literature is that individuals have a general tendency to transform an interdependent situation in a particular way. That is, it concentrates on transformations that are a result of some aspect of an individual's personality. A well-established literature in psychology has found that "a substantial portion of the subject populations observed in a number of Eastern and Western countries do indeed systematically assign varying weights to their own and to others' outcomes. These weights are consistent with maximizing one of several value orientations: the other's gain (altruism), joint gain (co-operation), own gain (individualism), and relative gain
(competition)" (McClintock and Liebrand 1988, p. 397). The researchers in this area have also shown that these orientations are relatively stable over time (Kuhlman, Camac, and Cunha 1986), and have developed methods for measuring these orientations. The method that will be used in this study is the Ring Measure of Social Values (Liebrand 1984), one of the most prominent measures in current use. As useful as this psychological research is, it has largely concentrated on assessing transformations that are the results of personality traits. I wish to also consider structural sources of transformations, and in this study examine the effects of group boundaries. Following a large literature that has examined the impact of group identity on intergroup behaviour (e.g., Tajfel and Turner 1986), work has been done on the effect of group boundaries on co-operation in social dilemma situations (Brewer and Kramer 1986; Kramer and Brewer 1984). While this research has uncovered evidence of important in-group and out-group biases, it has not sought to assess what effect group boundaries may have on possible transformations of social dilemmas, as this study does.3
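One way to see what this class of transformations amounts to is to apply different own/other weightings to the given Prisoner's Dilemma matrix shown earlier and inspect the resulting effective matrices. The sketch below is only a stylized illustration of this idea, not Kelley and Thibaut's formal apparatus; the particular weights attached to each orientation are my own choices here.

```python
# Given matrix: (own outcome, partner's outcome) for each pair of choices.
GIVEN = {("C", "C"): (3, 3), ("C", "D"): (1, 4), ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

# Stylized orientations as (weight on own outcome, weight on partner's outcome).
ORIENTATIONS = {
    "individualism": (1.0, 0.0),   # no transformation: effective matrix equals given matrix
    "co-operation":  (1.0, 1.0),   # maximize joint outcomes
    "competition":   (1.0, -1.0),  # maximize relative advantage over the partner
    "altruism":      (0.0, 1.0),   # maximize the partner's outcome
}

def effective_matrix(given, w_own, w_other):
    """Transform the given matrix into an effective matrix via a linear re-weighting."""
    return {cell: w_own * own + w_other * other for cell, (own, other) in given.items()}

for label, (w_own, w_other) in ORIENTATIONS.items():
    eff = effective_matrix(GIVEN, w_own, w_other)
    # Does the effective matrix rank mutual co-operation above exploiting the partner?
    assurance_like = eff[("C", "C")] > eff[("D", "C")]
    print(f"{label:14s} CC preferred to DC: {assurance_like}  effective values: {eff}")

# Under the joint-maximization ("co-operation") weighting the effective matrix ranks
# mutual co-operation (value 6) above exploitation of the partner (value 5): the given
# Prisoner's Dilemma acquires the ordinal structure of an Assurance Game.
```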
Overview of the Two Studies
If actors reliably transform a strategic situation and subjectively play it as if it were a structurally different game, one needs to investigate (1) what variables create these transformations, and (2) what the end results of these transformations are. In this study I used a questionnaire vignette design to examine people's subjective preferences for various outcomes in a Prisoner's Dilemma, that is, the manner in which they might transform the game. The impact of three variables on subjects' transformations was investigated: group boundaries (a structural variable), motivational orientation (a personality variable), and sex.

In the first study only one variable was examined (the group identity of one's partner), using members of a college fraternity. The point of this initial experiment was to study members of a highly cohesive, naturally occurring group. Fraternities very purposefully nurture a strong sense of in-group identity. A pledge in a fraternity must master an often large and detailed body of knowledge regarding the history of the fraternity, its ideals and goals, and its rituals. It is expected that fellow members ("brothers") will actively support each other not simply during their college years, but beyond as well. Once the general transformational trends within this solidaristic group were uncovered, a second experiment was conducted to examine if similar effects could be found when the group identity was much more tenuous. In addition, two other variables were examined: the sex of the subject and the subject's general motivational orientation.
Experiment 1
Subjects
The study was conducted during the spring of 1992. Subjects consisted of the members of a fraternity at UCLA. A total of 16 subjects participated and each was paid $5.00 for one-half hour of their time. The age of the subjects ranged from 19 to 22 with a mean age of 20.25 years.

Design
The single factor varied (among subjects) was the group membership of the partner in the various scenarios describing outcomes of a Prisoner's Dilemma game. Six different group memberships were examined.

Procedure
A questionnaire was used to assess individuals' subjective transformation of a Prisoner's Dilemma situation. The subjects received the following instructions:

This questionnaire describes a game. A random sample of people filling out this questionnaire will have an opportunity to play this game for actual money at a later date. Read through the directions carefully and ask us if you have any questions. You will be asked to make a number of choices. In each case you will be given $10 and will be asked to decide if you want to keep the $10 for yourself or give it all to the person you are playing with. The amount you give away will be doubled and given to the other person. However, you do not receive any return from the money you give out. The person you are playing with will be asked to make exactly the same decision about you. If s/he contributes some money to you, it will be doubled and you will receive it.

Thus, the situation has the structure of a Prisoner's Dilemma: the greatest possible return comes from keeping all of one's money while one's partner contributes all of his or her money ($30 - the original $10 plus the $20 that are a result of the partner's contribution), but if both actors follow this strategy each will end up with only $10 (having contributed none to each other) rather than the $20 each could accumulate if both contributed all their money. The subjects were taken through a number of examples to make sure they understood the structure of the dilemma they faced. Following this introduction they were presented with six scenarios. In each case they were asked how they would rate the desirability of each of the following possible outcomes:4
(1) Both you and your partner contribute $0 (thus, each of you earns $10).
(2) You contribute $10 and your partner contributes $0 (thus, you earn $0 and your partner earns $30).
(3) You contribute $0 and your partner contributes $10 (thus, you earn $30 and your partner earns $0).
(4) Both you and your partner contribute $10 (thus, each of you earns $20).

The situations represent the four outcomes of mutual co-operation, mutual defection, exploiting one's partner, and being exploited (note that all instructions to subjects were phrased in as neutral a fashion as possible - while I use the term "exploit" in my descriptions, such emotionally charged words were avoided in the questionnaire). This method of assessing transformations follows Kelley (1979) and Wyer (1969). Subjects' answers to questionnaire items such as these have been shown to be predictive of their actual choice behaviour in games involving money (Wyer 1969).

The six scenarios differed in terms of the group identity of the partner. Initially, they were asked to rate the desirability of the four different outcomes if they were playing the game with another person, without giving any information about that person's group identity (and before the subjects realized that this was the theme of the questionnaire). Following this they were asked for their ratings given that they were playing this game with:

(1) A fellow fraternity brother;
(2) A member of the Greek system on campus (from another fraternity or sorority);
(3) A UCLA student outside of the Greek system;
(4) A student from the University of Southern California (USC);
(5) An officer from the UCLA Police Department.

These group identities were chosen to represent increasingly less cohesive and eventually hostile groups. I expected that subjects would feel an important bond with their fellow fraternity members and a less intense sense of shared identity with other Greeks and non-Greek
UCLA students. USC was chosen as a group identity likely to invoke a sense of competition or even active hostility - there is an exceptionally keen rivalry between the two institutions that is nurtured by both schools. UCLA's campus police were included as a group identity because it was known that the fraternity had run into trouble with the police and perceived the department as having unfairly harassed the fraternity. Thus, it is an instance of a group with which the members have actively been in conflict. Finally, subjects were also asked in each case to give open-ended accounts explaining their ratings. In particular, for each group subjects were asked to look back at their ratings for the four outcomes, note which outcome they rated highest, and briefly explain why this was the outcome they most preferred.
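For reference, the monetary consequences of the four outcomes listed above follow directly from the rules given to subjects (keep or give away the $10 endowment; whatever is given away is doubled for the recipient). A minimal sketch of that arithmetic, with constant and function names of my own:

```python
ENDOWMENT = 10   # dollars given to each player in every choice
MULTIPLIER = 2   # the amount given away is doubled for the recipient

def earnings(own_contribution, partner_contribution):
    """Own earnings: what you keep plus double whatever your partner gives you."""
    return (ENDOWMENT - own_contribution) + MULTIPLIER * partner_contribution

for own, partner, label in [(0, 0, "mutual defection"),
                            (10, 0, "being exploited"),
                            (0, 10, "exploiting one's partner"),
                            (10, 10, "mutual co-operation")]:
    print(f"{label:25s} self earns ${earnings(own, partner)}, "
          f"partner earns ${earnings(partner, own)}")

# mutual defection:          $10 each
# being exploited:           $0 for self, $30 for the partner
# exploiting one's partner:  $30 for self, $0 for the partner
# mutual co-operation:       $20 each
```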
Hypotheses
First, I expect that there will be a general trend towards transforming this Prisoner's Dilemma into an Assurance Game. That is, in the scenario in which they are given no information about the group membership of their partner, subjects will tend to rank mutual co-operation as more desirable despite the fact that they would be better off (in terms of money earned) if they successfully exploited their partners. Second, I expect that there will be a positive in-group bias effect in that this transformation will be especially pronounced for the situation in which subjects are asked about playing the game with a fellow fraternity brother (i.e., the preference for mutual co-operation will be higher and the preference for exploitation of the partner will be lower), and less pronounced for the scenarios involving interaction with another Greek or UCLA student. Finally, I expect a negative out-group bias effect in that the tendency to transform this situation into an Assurance Game will be minimal or non-existent for the two scenarios involving playing the game with a USC student and a campus police officer.
Experiment 1 Results
Subjects' ratings of the different scenarios are reported first, followed by the open-ended explanations they gave for their choices.

Preference Ratings
Figure 1 displays the mean rating for the four outcomes in each condition. The Y axis corresponds to the rated desirability of the outcome (again, the scale ranged from 1 to 7) and the X axis lists the group identity of the partner in each case. Group identity had a significant impact on the desirability of mutual co-operation (F(5, 11) = 8.13,
p < .01) and exploiting one's partner (F(5, 11) = 9.52, p < .001). Group identity did not significantly affect the desirability of being exploited (F(5, 11) = 1.66, p = .224), or subjects' satisfaction with mutual defection (F(5, 11) = 2.12, p = .139). In the condition in which subjects were given no information about the group identity of the partner, mutual co-operation was in fact rated higher than exploiting one's partner. Mutual defection was rated as less desirable and, not surprisingly, being exploited was rated as very undesirable. In other words, subjects on average did rank the game as an Assurance Game, as had been hypothesized. Also as expected, this transformation was especially pronounced when the partner's group identity was a fellow fraternity brother. Subjects' preference for mutual co-operation rises to a mean score of 6.88 (near the maximum of 7) and their preference for exploiting their partner drops sharply to 3.63 (despite the fact that this is the outcome that results in the highest objective reward). Note also that there is not much difference between their ratings of exploiting their partner and mutual defection, and that there is a rise in subjects' satisfaction with being exploited.5 Thus, in interaction with a fellow fraternity member they are very satisfied with an outcome of mutual co-operation, dissatisfied with either exploiting their brother or mutual defection, and although they are very dissatisfied with being exploited by their brother, they are more tolerant of this in-group exploitation than they are of being exploited by non-members.
Figure 1: Effect of group identity on subjective preference structure: experiment 1.
In the scenarios involving interaction with another Greek or UCLA student, the ratings still have the structure of an Assurance Game; however, the degree to which mutual co-operation is preferred over exploitation of one's partner is not as great as in the previous case. And consistent with the final hypothesis, the transformation into an Assurance Game did not occur when the group identity of the partner was either a USC student or a campus police officer. As can be seen in Figure 1, exploiting one's partner is now ranked higher than mutual co-operation. In other words, for the first time the subjective ranking of the game corresponds (at least ordinally) with the objective structure of the game. Note that there are no significant changes in the ratings for mutual defection or being exploited - the effect of these hostile outgroup identities is seen solely in their impact on the ratings for outcomes of mutual co-operation and exploitation of partner.
Subjects' Accounts
Given that exploiting one's partner yields the greatest objective rewards, what explains the fact that subjects ranked mutual co-operation as a more satisfactory outcome? The accounts subjects gave for their ratings provide some interesting information. When the identity of the partner was a fellow fraternity brother, every subject ranked mutual co-operation as the most preferred outcome. Subjects' explanations centred on two themes: (1) that mutual co-operation maximized the total amount of money that would be earned (e.g., "The last [outcome] earns $40.00 total, thus it is the most desirable"); and (2) that the outcome results in an equal pay-off for both members (e.g., "Both brothers benefit equally, both demonstrate good will toward each other"). One subject even wrote that he would not mind exploiting his brother or being exploited (not his words) because he was confident that whoever came out ahead would split the money with the other. This deep sense of solidarity did not transfer to other Greeks (there is a high degree of competitiveness among many of the fraternities in the Greek system) or other UCLA students. Although on average mutual co-operation was still ranked higher than exploiting one's partner, the difference was slight, and a number of subjects ranked exploitation of partner as the most desirable outcome. Typical explanations were: "If I don't know the person, then who cares if they don't get any money," and "In this situation I have no ties to the other player so I would like to receive the maximum amount of money." Finally, when the identity of the partner was a hostile out-group member (either a USC student or a campus police officer), 80% of the subjects preferred exploiting their partner over mutual co-operation. Their reasons ranged from the belief that USC students did not need the money
("It's USC! The people there have enough money."), to statements of rivalry ("Rivalry brings out selfishness"), to simple declarations of hate or profanity ("I hate the police and I want the most money"). Experiment 2 The purpose of the second experiment is to see if similar trends are observed when a much more tenuous group identity manipulation is used with a larger and more diverse group of subjects. This group of subjects included both males and females and the Ring Measure of Social Values was administered in order to evaluate their general transformational tendencies. Design A 2 x 3 x 3 factorial design was used for the second experiment which included two between-subject factors and one within-subject factor. The between-subject factors were sex (male, female) and motivational orientation (co-operator, individualist, competitor). The within-subject factor was the group identity of the partner in the scenarios (no-group, in-group, out-group).
Subjects
This experiment was also run in the spring of 1992. Subjects were recruited from undergraduate social science courses at UCLA. A total of 124 subjects participated. The age of the subjects ranged from 20 to 39, with a mean age of 22.42 years. There were 89 female subjects and 35 male subjects.

Procedure
The questionnaire for this experiment had a similar structure to the one used in the fraternity study. It differed in only two respects. First, a series of questions were included to measure subjects' general motivational orientation. Second, a much milder manipulation of group identity was used.

Subjects' motivational orientation was evaluated using the Ring Measure of Social Values (described in detail in Liebrand 1984). Subjects are asked to choose 24 times between two outcomes that specify alternative allocations of monetary rewards or punishments. An example of one of the choices is reproduced below.

              Alternative A            Alternative B
              self       other         self       other
              +$15.00    $0.00         +$14.50    -$3.90
The choices are sampled from a two-dimensional space which plots the value for subjects of both their own and their partner's outcomes (the X axis corresponding to the weight given one's own outcomes, and the Y axis corresponding to the weight given one's partner's outcomes). Once again, someone who gave little weight to his or her own outcomes and was primarily concerned about the partner's outcomes would be termed an altruist. Someone who gave approximately equal weight to both own and other's outcomes would be termed a co-operator. An individualist is defined as an actor who gives little weight to other's outcomes and great weight to own outcomes. Finally, a competitor is defined as someone who is interested in maximizing the relative difference between own and other's outcomes. Following McClintock and Liebrand (1988), the version of the measure used here uses choices sampled from a circle projected on this space with its centre at the origin ($0.00) and with a radius of $15.00. The measure is calculated as follows: Adding up the chosen amounts separately for the subject and for the other provides an estimate of the weights assigned by the subject to his or her own and the other's pay-offs. These weights are used to estimate the slope of the subject's value vector extending from the origin of the own-other outcome plane. All values between 67.5 degrees and 22.5 degrees (other's outcome = 90 degrees; own outcome = 0 degrees) were classified as co-operative; those between 22.5 degrees and 337.5 degrees as individualistic; and those between 337.5 degrees and 292.5 degrees as competitive ... The length of the value vector provides an index of the consistency of a subject's choices in this linear choice model. The maximum vector length is twice the radius of the circle, and random choices result in a zero vector length. (McClintock and Liebrand 1988, p. 401) Vectors between 112.5 degrees and 67.5 degrees were classified as altruistic.6 Following McClintock and Liebrand, subjects whose vector lengths were less than one quarter of the maximum length were excluded from the analyses. As a result the data from 8 subjects were not included. No subjects were classified as altruists. The other difference with the fraternity study was the manner in which group identity was manipulated. Subjects were presented with three scenarios. In the first case they were given no information about the partner's identity. In the second case they were asked to rate the various outcomes if they were playing the game with a fellow UCLA student. In the third case they were asked for their ratings if playing the game with a student from another, unspecified university. This group identity manipulation was milder than the previous study,
which involved a smaller, highly solidaristic in-group and concretely specified out-groups with which the in-group had been in conflict. Finally, subjects were also asked in each case to give open-ended accounts explaining their ratings.
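The classification rule quoted above from McClintock and Liebrand (1988) can be summarized as follows: sum the amounts allocated to self and to other over the 24 choices, treat the two sums as a vector in the own-other plane, classify the subject by the vector's angle, and exclude subjects whose vector length is less than one quarter of the maximum. The sketch below follows that description; the example totals are invented for illustration and the function name is mine.

```python
import math

RADIUS = 15.0                   # dollars: radius of the circle from which choices are sampled
MAX_VECTOR_LENGTH = 2 * RADIUS  # maximum value vector length, per McClintock and Liebrand

def classify(own_total, other_total):
    """Classify a social value orientation from the summed own/other allocations."""
    length = math.hypot(own_total, other_total)
    if length < MAX_VECTOR_LENGTH / 4:      # choices too inconsistent: exclude the subject
        return None
    angle = math.degrees(math.atan2(other_total, own_total)) % 360
    if 67.5 <= angle < 112.5:
        return "altruist"
    if 22.5 <= angle < 67.5:
        return "co-operator"
    if angle < 22.5 or angle >= 337.5:
        return "individualist"
    if 292.5 <= angle < 337.5:
        return "competitor"
    return None                              # vectors outside these sectors are left unclassified

# Invented totals for illustration:
print(classify(20.0, 18.0))    # vector at roughly 42 degrees -> "co-operator"
print(classify(25.0, -11.0))   # vector at roughly 336 degrees -> "competitor"
```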
Hypotheses
As before, I expect that subjects will in general transform the Prisoner's Dilemma into an Assurance Game when no group identity information is given concerning the partner, that this transformation will be accentuated when the partner is an in-group member (UCLA student), and will be attenuated when the partner's group identity is a student from another university (out-group). Because this is a milder manipulation of group identity, I also expect to see evidence of an Assurance Game transformation even in the out-group condition. Based on previous work (e.g., Kuhlman and Marshello 1975; McClintock and Liebrand 1988), I expect that there will be an effect for motivational orientation in that co-operators will be more likely to rate mutual co-operation as more preferable and exploitation of partner as less preferable than individualists and competitors. No specific prediction was made regarding the effect of sex.
Experiment 2 Results
A preliminary analysis found that there were no significant effects involving sex. Consequently males and females were pooled in the following analysis and the experiment becomes a 3 x 3 design.
Preference Ratings
The effects of group identity are presented first. Figure 2 displays the mean rating for the four outcomes in the three group identity conditions. As with the fraternity study, the desirability ratings for mutual defection did not differ significantly across conditions (F(2, 112) = 0.14, p = .866). Also like the fraternity study, group identity had a significant impact on how satisfied subjects were with mutual co-operation (F(2, 112) = 14.03, p < .001), and with exploiting one's partner (F(2, 112) = 18.97, p < .001). Finally, group identity also had a significant effect on the desirability of being exploited (F(2, 112) = 4.79, p < .01). As expected, subjects did transform the game into an Assurance Game, and this transformation was accentuated in the in-group condition and attenuated in the out-group condition. Subjects were on average also less dissatisfied with being exploited by an in-group member, repeating a pattern found in the first study. These results were all statistically significant despite the mild group identity manipulation. However, comparing Figure 2 with Figure 1 demonstrates that the
Figure 2: Effect of group identity on subjective preference structure: experiment 2.
transformations were not as pronounced in the second study as they were in some of the conditions in the fraternity study.

Only one effect involving motivational orientation was significant. This was the main effect of motivation on the desirability of exploiting one's partner (F(2, 113) = 5.00, p < .01). The ratings for this outcome can be seen in Figure 3, which displays the mean rating for the three groups of subjects (co-operators, individualists, and competitors) across the three group identity conditions. Co-operators are significantly less satisfied with exploiting their partners than either individualists or competitors across all three group identity conditions.

Figure 3: Effect of motivational orientation on desirability of exploiting one's partner: experiment 2.

Individuals' Game Structure
A different way of organizing the data is to use a single measure to capture the crucial comparison between mutual co-operation and exploiting one's partner. A measure was created in which subjects' rating of the desirability of exploiting their partners was subtracted from their rating for mutual co-operation. This creates a measure which is positive if the game's subjective structure is consistent with an Assurance Game, negative if it is consistent with the structure of a Prisoner's Dilemma, and zero if the subject rates both these outcomes as equally desirable. Further, the greater the positive number, the more the subject prefers mutual co-operation relative to exploitation of one's partner. Likewise, the lower the negative number, the more the subject prefers exploitation of partner over mutual co-operation. Thus, this one measure neatly summarizes many of the features of a subject's transformation.

Figure 4 displays the mean level of this measure for the three motivational orientations across the three group identity conditions. The main effect of group identity was significant (F(2, 112) = 31.77, p < .001), as was the main effect of motivational orientation (F(2, 113) = 5.33, p < .01). The Assurance Game transformation is accentuated in the in-group condition, attenuated in the out-group condition, and co-operators' Assurance Game transformations are more pronounced than individualists' and competitors' in all conditions. Perhaps the most interesting finding is the significant interaction effect between group identity and motivational orientation (F(4, 226) = 2.44, p < .05). Figure 4 reveals that while individualists and competitors behave in an essentially identical fashion in the no-group and in-group conditions, competitors in the out-group condition on average rank exploitation of partner as more desirable than mutual co-operation. This is the one case in this second study where a group of subjects did not on average transform the game into an Assurance Game - for these people the game is a Prisoner's Dilemma subjectively as well as objectively.

Figure 4: Effect of motivational orientation on subjective game structure: experiment 2.
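The index just described is simply a difference of two ratings. A minimal sketch (the ratings shown are invented for illustration, and the function name is mine):

```python
def game_structure_index(rating_mutual_cooperation, rating_exploiting_partner):
    """Positive: subjectively an Assurance Game; negative: subjectively a Prisoner's
    Dilemma; zero: the two outcomes are rated as equally desirable."""
    return rating_mutual_cooperation - rating_exploiting_partner

# Invented ratings on the 1-7 desirability scale used in the questionnaires.
print(game_structure_index(7, 4))   # +3: a strong Assurance Game transformation
print(game_structure_index(4, 4))   #  0: indifferent between the two outcomes
print(game_structure_index(3, 6))   # -3: the game is subjectively a Prisoner's Dilemma
```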
Subjects' Accounts
As in the first experiment, subjects were asked to note which outcome they had rated highest and to give an open-ended explanation of why this was the outcome they most preferred. Once again these accounts provide interesting additional insights into how and why individuals subjectively transformed the game. The fact that many more people participated in the second experiment means that there is a much larger and richer collection of explanations to evaluate. Because of the large number of accounts, a coding scheme was developed so that the variety of explanations people gave could be summarized. This summary is shown in Table 1. In the first column is the list of reasons given for the choices subjects made. Subjects are grouped according to the group identity of the partner and the outcome they rated as most desirable. Thus, for each group identity the data is organized in three columns: one for subjects who rated mutual co-operation as most desirable, one for those who ranked exploitation of partner as most desirable, and one for subjects who ranked these two outcomes as equally desirable. The explanations subjects gave for their choices seem to fall into two broad categories: statements concerning the simple monetary outcome they desired (e.g., making a profit for oneself, making a profit for both, benefiting both equally), and reasons that were more elaborative and evaluative (e.g., the importance of being fair, group solidarity, avoiding bad feelings). The reasons listed in Table 1 have been grouped according to these two categories. Note also that a distinction is made between subjects who wrote about wanting to make a profit for themselves (or for both), and subjects who explicitly stated they were interested in making the maximum possible profit. As I will discuss below, this turned out to be an important distinction.
Table 1: Summary of subjects' accounts (percentages): experiment 2. For each group identity of the partner, and separately for subjects whose most preferred outcome was mutual co-operation, exploitation of partner, or both equally preferred, the table gives the percentage of subjects citing each coded reason: make profit for self; make maximum profit for self; beat partner; make profit for both; make maximum profit for both; benefit both equally; avoid loss to self; avoid loss to both; likely to interact again; will not see partner again; group solidarity; group rivalry; avoid guilt or bad feeling; be fair; to co-operate or work together; for other reasons; no reason given. Note that, because many subjects listed more than one reason, percentages add up to more than 100.
I begin by examining the accounts of subjects who rated mutual co-operation as their most desired outcome. Again, the question is, Why would someone prefer this outcome given that objectively greater rewards come from exploiting one's partner? A reason given by over half of these subjects is that they wanted both people to make a profit. It is interesting that the next most common explanation was to avoid the feelings of guilt that would come from successfully exploiting their partner. Just as common was the reason that they wanted to avoid a loss to either themselves or their partners. When the partner was an in-group member, several subjects said that they ranked mutual co-operation as most desirable because they believed they would likely interact with this person in the future. A number of subjects also listed as reasons the importance of being fair and the importance of co-operating or working together.

Other subjects ranked mutual co-operation and exploitation of their partner as equally desirable. What explanations did this group of subjects give for their rankings? The most common reason subjects gave was wanting to make a profit for themselves. The next most common explanations were wanting to make a profit for both self and partner, followed by wanting to benefit both equally.7 When the partner was an out-group member, wanting to beat the partner and wanting to make the maximum possible amount of profit for themselves were also common explanations.

While the majority of subjects ranked mutual co-operation as more desirable or equally desirable to other outcomes, there was also a group of subjects whose most preferred outcome was exploiting their partner (i.e., the game was subjectively as well as objectively a Prisoner's Dilemma). The most common explanation subjects gave for making this choice was to make the maximum possible profit for themselves. Here an interesting contrast emerges with subjects in the previous group. Whereas subjects who ranked the two outcomes equally seemed concerned with simply making some profit, those who ranked exploitation of their partner as the most preferred outcome were more likely to explicitly state that they wanted to make the maximum amount of money they could. This seems to correspond (at least in an approximate sense) to the distinction between satisficing (Simon 1985) and maximizing. The next most frequent reason given was wanting to beat the partner. This reason was more common when the partner was an out-group member. In the out-group condition, several subjects also gave as reasons for preferring the exploitation of their partner the belief that they were unlikely to see the person again, and that they felt a rivalry with students from another school.

As an additional note, it is interesting that no one gave as a reason for their choice such possibilities as wanting to make a profit for their
partner (without any mention of profit to self), or wanting to avoid a loss to the partner. In other words, while subjects were often positively concerned about their partner's outcomes, it was never their sole concern - pure altruism was not seen in this collection of explanations.
Discussion
In the experiments a number of common patterns emerged. First, when no information is given about the group identity of the partner there is a very consistent tendency to rank mutual co-operation above exploitation of partner, despite the fact that exploitation brings the greater objective outcome. Second, when in-group/out-group distinctions were made the preference for mutual co-operation was significantly higher and the preference for exploitation of one's partner was significantly lower when the partner was identified as an in-group member. Third, while subjects always ranked the outcome of being exploited by their partner as very unsatisfactory, they were significantly less dissatisfied with the outcome when the partner was an in-group member. This is somewhat surprising - one might have thought that being exploited by a fellow group member would be especially disturbing, but this was not the case. While not explored, one possibility might be that because of the in-group identification, the subject might take some small satisfaction in the winnings of the in-group member, either because the winnings had at least not gone to an out-group member or because the subject might benefit in some way. A very explicit example of this last possibility is seen in the fraternity member's comment reported in the first study that if he were exploited he was confident that his brother would split the earnings with him later.

Finally, there were no significant differences in subjects' satisfaction with the outcome of mutual defection. In both studies the desirability of this outcome was essentially constant across all conditions. It seemed to represent a "neutral" outcome in that while subjects did not double or triple their pay-off, at least nothing was lost (as one subject commented, "no harm, no foul"). This may be a framing effect that is a result of the way the game was constructed. What defines a Prisoner's Dilemma is the relative rankings of the outcomes, not their absolute amounts. Thus, if the game had been constructed so that mutual defection brought about a slight loss, a different pattern might have emerged. Some recent work has already demonstrated the significant effects framing can have on co-operation (Brewer and Kramer 1986), and one direction for future work would be to explore the effects of various biases and heuristics (Tversky and Kahneman 1986) on behaviour in social dilemmas.
The fraternity data uncovered just how great the change in preferences can be as the identity of the partner varies. The second study demonstrated that the same general effects (less pronounced but still significant) could be created with a very mild group identity manipulation. The second study also showed an interesting interaction effect between group boundaries and people's general motivational orientation, that is, the way in which structural and personality variables can combine together to produce joint effects. In looking at the two variables, however, it is interesting to note that group identity seemed to have a greater effect on transformations than motivational orientation. The general approach used has a number of advantages. By asking subjects to rank their satisfaction with various outcomes on a scale, it becomes possible to examine degrees of satisfaction rather than simply making binary distinctions between whether the game is subjectively a Prisoner's Dilemma or an Assurance Game. And by asking subjects to evaluate the four outcomes separately, one can uncover whether variables affect the desirability of these outcomes differentially. This point turns out to be especially important as the results show that factors can selectively affect various aspects of a social dilemma. Note, for example, that group identity had a significant impact on the desirability of mutual co-operation, but not on ratings of mutual defection. The open-ended accounts subjects gave for their rating provided another interesting source of information. The dominant reasons subjects gave for ranking mutual co-operation as most desirable were wanting both people to make a profit and wanting to avoid feelings of guilt. This second reason suggests the important effects emotions and moral beliefs might have on strategic behaviour. Subjects who ranked exploitation of the partner as most desirable seemed particularly concerned with maximizing their outcomes, whereas other subjects (whether they were interested in their partner's outcomes or only in their own) seemed more concerned with some form of satisficing, that is, earning some income. Finally, interacting with an out-group member seemed to encourage rivalry and the desire to beat the partner among those who ranked exploitation as most desirable.
Signalling and Co-operation
Many social dilemmas objectively have the structure of an Assurance Game (Taylor 1987). In addition, the current study provides evidence that subjects routinely transform an objective Prisoner's Dilemma into an Assurance Game. Thus, a strong argument can be made for making the Assurance Game a central metaphor in studying co-operation. I do not mean to suggest that the Prisoner's Dilemma is unimportant, or
that every situation will be transformed into an Assurance Game (the data presented here show that this is not so), but rather to ask what benefits might come from concentrating more on a different model. The hegemony of the Prisoner's Dilemma model in the literature and the widespread neglect of the subjective structure of interdependent situations argue for researchers turning their attention to new ground.

Using the Assurance Game as one's metaphor redirects research in a number of important ways. Trust and everything that contributes to its production and maintenance become key issues. Given that within an Assurance Game people are conditionally willing co-operators, the task becomes to assure actors that others can be counted on to co-operate. Hence, attempts to signal and advertise one's commitment to co-operate will be critical. This might be as simple as a public pledge to co-operate or an act that is more symbolic. Signs that one is committed to a group or to a particular goal would be important in encouraging others to co-operate (e.g., wearing a crucifix, a lapel pin from a fraternal organization, gang colours, a union pin, etc.). More broadly, using the Assurance Game as one's model makes the presentation of self (i.e., dramaturgy) centrally relevant to a study of human co-operation (Kollock and O'Brien 1992). In the absence of detailed, binding contracts and an institutional system to enforce them (which is to say, in the vast majority of interactions), people must find ways to present themselves as willing co-operators, to infer whether others are likely to co-operate, and to co-ordinate their activities so as to be efficacious.8

Future Directions
Although there are many advantages to the attitudinal measures used in this study, and these measures have been shown to be predictive of actual behaviour, they are not exact substitutes for behaviour - the correlation between attitudes and behaviour is never perfect. One can also question the open-ended accounts people gave for their ratings. The advantage of these accounts is that they can provide information that cannot be derived from a simple series of choices in a strategic game. The explanations can help isolate what transformations might be taking place as well as why people are making the transformations (e.g., moral concerns). However, it is also possible that these accounts do not accurately reflect the true reasons for people's behaviour. One may not have conscious access to the decision process that resulted in the choice, and explanations might be crafted in order to present a favourable image of oneself. At the very least though, these records of public explanations for strategic behaviour are an interesting complement to other sources of data.
Given these concerns, a behavioural measure of transformations would be especially valuable. The Ring Measure (if played for actual money) is one possibility, but it takes a fair amount of time to administer, and does not allow one to directly assess the transformed matrix (one infers the transformed matrix from the general motivational tendency of the subject). I pursue the development of a behavioural measure of transformations that avoids these shortcomings in a separate experimental study (Kollock 1996). Other topics that should be examined in future research include the effects of other structural variables on transformations (e.g., the likelihood of future interaction), examining how these variables transform other objective games (e.g., Chicken), and how the history of interaction transforms situations (e.g., how a history of consistent co-operation or a sudden defection might impact a person's transformation). Finally, another interesting topic would be to explore why people make transformations. In theorizing about the possible function of transformations, Kelley and Thibaut (1978) point out that making particular transformations can be advantageous to actors: "First, [transformations] may provide a basis for action where none exists in the given matrix [e.g., certain co-ordination problems] ... Second, they may enable the person more certainly to attain better given outcomes than [he or she] would otherwise" (Kelley and Thibaut 1978, p. 170; emphasis in original). Not all transformations may be functional, however. Transformations learned in one setting might be inappropriately applied in a different setting, perhaps applied mindlessly in a habitual way. Sorting out the origins and reasons for transformations would be an extremely valuable area for research.
Conclusion
The evidence that people transform interdependent situations into essentially different games argues that a broad research program is needed to study what variables systematically and reliably affect these transformations. This would complement current work which has detailed the various solutions that can help promote co-operation in particular games. Thus, ideally there would be a literature on how games are transformed as well as a literature on solutions to particular games. This distinction is important because in some instances it may be easier to encourage co-operation by first transforming a game and then using a set of solutions that might be more viable, rather than trying to solve the original game. Testing this indirect route to co-operation would be an important direction for future work.
As for the present study, I am suggesting that in many situations our model of the social actor should be an individual who is perhaps wary and not interested in contributing to a lost cause, but is ultimately willing to co-operate if others do so. This is in contrast to a model of the actor who is narrowly interested only in his or her own gains. In other words, the motivational basis for many social dilemma situations is often best modeled by an Assurance Game rather than a Prisoner's Dilemma. The former model opens up a great many interesting research questions and seems to correspond better to observed behaviour.
Acknowledgments
A draft of this paper was presented at the Annual Meeting of the American Sociological Association, Miami, 1993. I thank Ronald Obvious for comments on earlier drafts.
Notes
1 Note that I am interested in developing descriptive rather than prescriptive models of strategic interaction.
2 See G. Hardin's (1968) arguments that justice and freedom might have to be sacrificed in order to ensure co-operation.
3 Note also that if certain variables are found to have a systematic and reliable effect on how pay-off structures are transformed, then this research may have something to say about the origin of value. This is an important topic because many of the intellectual traditions that have been concerned with the question of co-operation (exchange theory, game theory, micro-economics, and rational choice perspectives in general) have had little to say about this issue. The particular values (utilities) that actors have are usually taken to be outside the scope of these models (de gustibus non est disputandum); cf. Stigler and Becker (1977); for exceptions to this trend see, e.g., Emerson (1987), Frank (1987).
4 The instructions read: "If you were to play this game with , how would you rate each of the following possible outcomes as to the degree of satisfaction or dissatisfaction you would feel?" Subjects responded on a seven-point scale that ranged from 1 (very dissatisfied) to 7 (very satisfied).
5 While the rise in satisfaction for being exploited is not statistically significant, this is probably due to the small sample size. The second experiment provides further evidence of this pattern.
6 Note that these sets of choices are an example of using decomposed games to measure motivational orientation. See Messick and McClintock (1968).
7 If these explanations do not seem entirely consistent, it is because subjects often gave a separate set of reasons for each of the two outcomes they equally preferred.
8 An excellent example of these processes is given by Fantasia (1988) in his case-study of a wildcat strike.
References
Brewer, Marilynn B., and Roderick M. Kramer (1986). Choice behavior in social dilemmas: Effects of social identity, group size, and decision framing. Journal of Personality and Social Psychology, 50(3): 543-49.
Emerson, Richard M. (1987). Toward a theory of value in social exchange. In Karen Cook (ed.), Social Exchange Theory (Newbury Park, CA: Sage), pp. 11-46.
Fantasia, Rick (1988). Cultures of Solidarity: Consciousness, Action, and Contemporary American Workers. Berkeley, CA: University of California Press.
Frank, Robert H. (1987). If homo economicus could choose his own utility function, would he want one with a conscience? American Economic Review, 77: 593-604.
Hardin, Garrett (1968). The tragedy of the commons. Science, 162: 1243-48. Reprinted in Garrett Hardin and John Baden (eds.), Managing the Commons (San Francisco: Freeman, 1977), pp. 16-31.
Hardin, Russell (1982). Collective Action. Baltimore: Johns Hopkins.
Kelley, Harold H. (1979). Personal Relationships: Their Structures and Processes. Hillsdale, NJ: Lawrence Erlbaum.
Kelley, Harold H., and John W. Thibaut (1978). Interpersonal Relations: A Theory of Interdependence. New York: Wiley.
Kollock, Peter (1995). The bases of cooperation: Subjective transformations of strategic interaction. Paper presented at the first International Conference on Theory and Research in Group Processes, Jagiellonian University, Krakow, Poland, 1996.
Kollock, Peter, and Jodi O'Brien (1992). The social construction of exchange. In E. J. Lawler, B. Markovsky, C. Ridgeway, and H. A. Walker (eds.), Advances in Group Processes, Vol. 9 (Greenwich, CT: JAI Press), pp. 89-112.
Kramer, Roderick M., and Marilynn B. Brewer (1984). Effects of group identity on resource use in a simulated commons dilemma. Journal of Personality and Social Psychology, 46(5): 1044-57.
Kuhlman, D. Michael, and Alfred F. J. Marshello (1975). Individual differences in game motivation as moderators of preprogrammed strategy effects in Prisoner's Dilemma. Journal of Personality and Social Psychology, 32(5): 922-31.
Kuhlman, D. Michael, C. R. Camac, and D. A. Cunha (1986). Individual differences in social orientation. In H. Wilke, D. Messick, and C. Rutte (eds.), Experimental Social Dilemmas (Frankfurt: Peter Lang), pp. 151-74.
Liebrand, Wim B. G. (1984). The effect of social motives, communication and group size on behavior in an n-person multi-stage mixed-motive game. European Journal of Social Psychology, 14: 239-64.
McClintock, Charles G., and Wim B. G. Liebrand (1988). Role of interdependence structure, individual value orientation, and another's strategy in social decision making: A transformational analysis. Journal of Personality and Social Psychology, 55(3): 396-409.
Messick, David M., and C. G. McClintock (1968). Motivational basis of choice in experimental games. Journal of Experimental Social Psychology, 4: 1-25.
Messick, David M., and Marilynn B. Brewer (1983). Solving social dilemmas. In L. Wheeler and P. Shaver (eds.), Review of Personality and Social Psychology, Vol. 4 (Beverly Hills, CA: Sage), pp. 11-44.
Sen, Amartya (1974). Choice, orderings and morality. In Stephan Korner (ed.), Practical Reason (New Haven, CT: Yale University Press), pp. 54-67.
Simon, Herbert A. (1985). Human nature in politics: The dialogue of psychology with political science. American Political Science Review, 79: 293-304.
Stigler, George J., and Gary S. Becker (1977). De gustibus non est disputandum. American Economic Review, 67: 76-90.
Tajfel, H., and J. C. Turner (1986). The social identity theory of intergroup behavior. In S. Worchel and W. Austin (eds.), Psychology of Intergroup Relations (Chicago: Nelson-Hall), pp. 7-24.
Taylor, Michael (1987). The Possibility of Cooperation. Cambridge: Cambridge University Press.
Tversky, Amos, and Daniel Kahneman (1986). Rational choice and the framing of decisions. Journal of Business, 59: S251-78.
Wyer, R. S. (1969). Prediction of behavior in two-person games. Journal of Personality and Social Psychology, 13: 222-38.
11
Beliefs and Co-operation
Bernardo A. Huberman and Natalie S. Glance
1. Introduction
Social dilemmas have long attracted the attention of sociologists, economists and political scientists because they are central to issues that range from securing ongoing co-operation in volunteer organizations, such as unions and environmental groups, to the possibility of having a workable society without a government. Environmental pollution, nuclear arms proliferation, population explosion, conservation of electricity and fuel, and giving to charity are a few more examples of situations where an individual benefits by not contributing to the common cause, but if all individuals shirk, everyone is worse off. Although there are no simple solutions to social dilemmas, studying them sheds light on the nature of interactions among people and the emergence of social compacts. And because such dilemmas involve the interplay between individual actions and global behaviour, they elucidate how the actions of a group of individuals making personal choices give rise to social phenomena (Glance and Huberman 1994).

Discovering the global behaviour of a large system of many individual parts, such as the dynamics of social dilemmas, calls for a bottom-up approach. Aggregate behaviour stems from the actions of individuals who act to maximize their utility on the basis of uncertain and possibly delayed information. As long as one is careful both in constructing the model and in making clear its limitations and its underlying assumptions, the insights obtained through such an approach can be very valuable.

Beliefs and expectations are at the core of human choices and preferences. They arise from the intentional nature of people and reflect the way decision-makers convolve the future as well as the past into decisions that are made in the present. For example, individuals acting within the context of a larger group may take into account the effect of their actions both on personal welfare and on the welfare of the larger group. In other words, individuals form their own models of how the group dynamics works based on some set of beliefs that colour their preferences.
In order to better model the dynamics of social dilemmas, we extend our previous framework of individual expectations (Glance and Huberman 1993a) to one that can accommodate a wider range of beliefs. We describe, in particular, two classes of beliefs, bandwagon expectations and opportunistic expectations. Our framework of expectations rests on two pillars: the first is that agents believe the influence of their actions to decrease with the size of the group; the second is that individuals believe their actions influence others to a degree that depends on how many already contribute to the common good. How agents believe their degree of influence to vary is left unspecified within the broader framework and is instead instantiated through different classes of expectations. Agents with bandwagon expectations believe that their actions will be imitated by others to a greater extent when the overall state tends towards co-operation and to a lesser extent when it tends towards defection. For this class of expectations, agents believe that co-operation encourages co-operation and that defection encourages defection at a rate which depends linearly on the proportion of the group co-operating. Bandwagon agents are thus more likely to co-operate the greater the observed level of co-operation. Opportunistic expectations are similar to bandwagon expectations when the proportion co-operating is small. However, when the proportion is large, opportunistic agents believe that the rate at which co-operation encourages co-operation and defection encourages defection decreases linearly with the fraction co-operating. As a result, agents with opportunistic expectations will "free ride" in a mostly co-operating group since they expect that a small amount of free riding will go unnoticed. The concept of beliefs and expectations, of course, covers a much broader spectrum than the narrow domain of how one's actions might affect another's. Cultural beliefs concerning "good" and "bad," social norms, personal morality, life-style, external pressure, and many other biases enter into an individual's preferences. These variations among individuals we model in a general way using the notion of diversity (Glance and Huberman 1993b). We found in earlier work that in scenarios where diversity can be modeled as a spread about a common set of beliefs then it effectively acts as an additional source of uncertainty. Having merged together the various facets of diversity in this fashion, we can now concentrate on how different sorts of expectations affect group dynamics. For the above two classes of expectations, we show that there is a critical group size beyond which co-operation is no longer sustainable and that below this critical size there is a regime in which there are two fixed points. The stability of these fixed points and their dynamic characteristics are very similar for both classes of beliefs when there are at
most small delays in information but differ when the delays become large. For bandwagon expectations, the two fixed points are stable, and in the presence of uncertain information, large-scale fluctuations eventually take the group over from the metastable fixed point to the optimal one. These transitions are both unpredictable and sudden, and take place over a time scale that grows exponentially with the size of the group. The same kind of behaviour is observed in groups with opportunistic expectations. However, in this case delays in information can play an important role. When the delays are long enough, the more co-operative equilibrium becomes unstable due to oscillations and chaos.

Section 2 introduces social dilemmas and their game-theoretical representation and Section 3 develops the theory of their dynamics, including expectations. In Section 4, analytical techniques are used to solve for the dynamics: a set of general results covers both types of expectation. Section 5 presents the results of computer simulations which further elucidate the behaviour of the system for the two classes of bandwagon and opportunistic expectations. In particular, the computer experiments show how a group can suddenly and unexpectedly move from one equilibrium to another and how the dynamics of an opportunistic group with delayed information can exhibit oscillations and bursty chaos.
2. Social Dilemmas

There is a long history of interest in collective action problems in political science, sociology, and economics (Schelling 1978; R. Hardin 1982). Hardin coined the phrase "the tragedy of the commons" to reflect the fate of the human species if it fails to successfully resolve the social dilemma of limiting population growth (G. Hardin 1968). Furthermore, Olson argued that the logic of collective action implies that only small groups can successfully provide themselves with a common good (Olson 1965). Others, from Smith (1937) to Taylor (1976, 1987), have taken the problem of social dilemmas as central to the justification of the existence of the state. In economics and sociology, the study of social dilemmas sheds light on, for example, the adoption of new technologies (Friedman 1990) and the mobilization of political movements (Oliver and Marwell 1988).

In a general social dilemma, a group of people attempts to obtain a common good in the absence of central authority. The dilemma can be represented using game theory. Each individual has two choices: either to contribute to the common good, or to shirk and free ride on the work of others. The payoffs are structured so that the incentives in the game mirror those present in social dilemmas. All individuals share equally in the common good, regardless of their actions. However, each person that co-operates increases the amount of the common good by a fixed
amount, but receives only a fraction of that amount in return. Since the cost of co-operating is greater than the marginal benefit, the individual defects. Now the dilemma rears its ugly head: each individual faces the same choice; thus, all defect and the common good is not produced at all. The individually rational strategy of weighing costs against benefits results in an inferior outcome and no common good is produced.

However, the logic behind the decision to co-operate or not changes when the interaction is ongoing, since future expected utility gains will join present ones in influencing the rational individual's decision. In particular, individual expectations concerning the future evolution of the game can play a significant role in each member's decisions. The importance given the future depends on how long the individuals expect the interaction to last. If they expect the game to end soon, then, rationally, future expected returns should be discounted heavily with respect to known immediate returns. On the other hand, if the interaction is likely to continue for a long time, then members may be wise to discount the future only slightly and make choices that maximize their returns in the long run. Notice that making present choices that depend on the future is rational only if, and to the extent that, a member believes its choices influence the decisions others make.

One may then ask the following questions about situations of this kind: if agents make decisions on whether or not to co-operate on the basis of imperfect information about group activity, and incorporate expectations on how their decision will affect other agents, then how will the evolution of co-operation proceed? In particular, which behaviours are specific to the type of expectations and which are more general?
The Economics of Free Riding

In our mathematical treatment of the collective action problem we state the benefits and costs to the individual associated with the two actions of co-operation and defection, i.e., contributing or not to the social good. The problem thus posed, referred to in the literature as the n-person Prisoner's Dilemma (R. Hardin 1971; Taylor 1976; Bendor and Mookherjee 1987), is presented below. In Section 3 we will then show how beliefs and expectations about other individuals' actions in the future can influence a member's perceptions of which action, co-operation or defection, will benefit him most in the long run. We will also discuss the different classes of expectations and convolve expectations into individual utility. Using these preference functions, in Section 4 we will apply the stability function formalism (Ceccatto and Huberman 1989) to provide an understanding of the dynamics of co-operation.

In our model of social dilemmas, each individual can either contribute (co-operate) to the production of the good, or not (defect). While
no individual can directly observe the effort of another, each member observes instead the collective output and can deduce overall group participation using knowledge of individual and group production functions. We also introduce an amount of uncertainty into the relation between members' efforts and group performance. There are many possible causes for this uncertainty (Bendor and Mookherjee 1987); for example, a member may try but fail to contribute due to unforeseen obstacles. Alternatively, another type of uncertainty might arise due to individuals with bounded rationality occasionally making suboptimal decisions (Selten 1975; Simon 1969). In any case, we treat here only idiosyncratic disturbances or errors, whose occurrences are purely uncorrelated.

Consequently, we assume that a group member intending to participate does so successfully with probability p and fails with probability 1 − p, with an effect equivalent to a defection. Similarly, an attempt to defect results in zero contribution with probability q, but results in unintentional co-operation with probability 1 − q. Then, as all attempts are assumed to be uncorrelated, the number of successfully co-operating members, $\hat{n}_c$, is a mixture of two binomial random variables with mean $\langle \hat{n}_c \rangle = p\,n_c + (1-q)(n - n_c)$, where $n_c$ is the number of members attempting to co-operate within a group of size n. Let $k_i$ denote whether member i intends to co-operate ($k_i = 1$) or defect ($k_i = 0$), and let $\hat{k}_i$ denote whether member i is co-operating or defecting in effect. The number of members co-operating is $\hat{n}_c = \sum_i \hat{k}_i$. The limit p and q equal to 1 corresponds to an error-free world of complete information, while p and q equal to 0.5 reflect the case where the effect of an action is completely divorced from intent. Whenever p and q deviate from 1, the perceived level of co-operation will differ from the actual attempted amount.

In a simple, but general limit, collective benefits increase linearly in the contributions of the members, at a rate b per co-operating member. Each contributing individual bears a personal cost, c. Then the utility at time t for member i is

$$U_i(t) = \frac{b}{n}\,\hat{n}_c(t) - c\,k_i \qquad (1)$$
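As a concrete illustration of the error model and the linear utility just described, the following sketch (not the authors' code) draws effective contributions from the two binomial error channels and evaluates the benefit/cost trade-off. The linear form of Equation 1 is reconstructed from the surrounding description, and the values b = 2.5, c = 1 and p = q = 0.93 are borrowed from examples later in the chapter.

```python
# Minimal sketch of the uncertainty model and linear utility described above
# (not the authors' implementation). Intended co-operation succeeds with
# probability p; intended defection becomes an accidental contribution with
# probability 1 - q. Utility: shared benefit b/n per effective contributor,
# minus the personal cost c for those who intend to co-operate.
import numpy as np

rng = np.random.default_rng(0)

def effective_contributions(intends, p=0.93, q=0.93):
    """Map intended actions (boolean array) to effective contributions (0/1)."""
    intends = np.asarray(intends, dtype=bool)
    u = rng.random(intends.shape)
    return np.where(intends, u < p, u < 1.0 - q).astype(int)

def utility(k_i, n_c_hat, n, b=2.5, c=1.0):
    """Member i's utility given the number effectively co-operating."""
    return (b / n) * n_c_hat - c * k_i

n = 12
intentions = np.ones(n, dtype=bool)            # everyone intends to co-operate
n_c_hat = effective_contributions(intentions).sum()
print(n_c_hat, utility(1, n_c_hat, n))         # perceived co-operators, utility
```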
Using its knowledge of the functional form of the utility function,1 each individual can deduce the number of individuals effectively co-operating from the utility collected at time t by inverting Equation 1:

$$\hat{n}_c(t) = \frac{n}{b}\left[U_i(t) + c\,k_i\right] \qquad (2)$$
This estimation differs from the actual number of individuals intending to co-operate in a manner described by the mixture of two
binomial distributions. We also define $\hat{f}_c(t)$ to denote the fraction, $\hat{n}_c(t)/n$, of individuals effectively co-operating at time t. When all members contribute successfully, each receives net benefits bn/n − c = b − c, independent of the group size. The production of the collective good becomes a dilemma when

$$b - c > 0 > \frac{b}{n} - c$$
Thus, although the good of all is maximized when everyone co-operates (b − c > 0), the dominant strategy in the one-shot game is to defect since the additional gain of personal participation is less than the private cost (b/n − c < 0).

3. Expectations and Beliefs

How agents take into account the future is wrapped into their expectations. The barest notion of expectations comes from the economic concept of horizon length. The horizon length is how far an agent looks into the future, or how long the agent expects to continue interacting with the other agents in the group. The horizon length may be limited by an agent's lifetime, by the agent's projection of the group's lifetime, by bank interest rates and so forth.

In our framework, agents believe that their present actions will affect those of others in the future. In particular, the agents expect that defection encourages defection and co-operation encourages co-operation,2 but to a degree that depends on the size of the group and the present level of production. How agents believe their degree of influence to vary is left unspecified within the broader framework and is instead instantiated within the various classes of expectations. In addition, the larger the group, the less significance agents accord to their actions: the benefit produced by an agent is diluted by the size of the group when it is shared among all agents. Agents that free ride can expect the effect to be very noticeable in a small group, but less so in a larger group. This is similar to the reasoning students might use when deciding whether or not to attend a lecture they would prefer to skip. Among an audience of 500, one's absence would probably go unnoticed (but if all students in the class reason similarly ...). On the other hand, in a small seminar of ten, one might fear the personal censure of the professor.

This framework of expectations leaves unspecified how individuals believe their actions will affect others in the future. The specification of these beliefs can be covered by five different classes, which are represented qualitatively in Figure 1 and quantitatively by the expectation function, E(f_c), a measure of individuals' perception of their influence on others. Low values of E(f_c) indicate that individuals believe their influence to be small, so that how they act will have little effect on
the future of the group. High values, on the other hand, indicate that individuals believe their influence to be large. The expectation functions shown are intended to be representative of the five different classes and are not to be thought of as constrained to the exact shape shown. In general, agent expectations will be some mixture of these sets of beliefs, often with some class dominating. The first set of beliefs pictured in Figure 1(a) we call flat expectations, according to which agents
[Figure 1 panels: (a) Flat expectations; (b) Bandwagon expectations; (c) Opportunistic expectations; (d) Contrarian expectations; (e) Extreme expectations. Horizontal axis: fraction co-operating.]
Figure 1: The different classes of expectations that fit within our framework are (a) flat expectations, (b) bandwagon expectations, (c) opportunistic expectations, (d) inverse expectations, and (e) extreme expectations. The expectation functions shown are intended to be representative of the five different classes and are not to be thought of as constrained to the exact shape shown. The value of the expectation function, E(f_c), indicates how strongly individuals believe that their actions will encourage similar behaviour by the rest of the group. We emphasize bandwagon and opportunistic expectations. Agents with bandwagon expectations believe that the group will imitate their actions to the extent that its members are already behaving similarly. Agents with opportunistic expectations believe instead that when most of the group is co-operating, then the imitative surge in response to their actions will be small.
believe that the effect of their actions is independent of the proportion of the group co-operating. The "contrarian" and "inverse" classes of expectations shown in (d) and (e) we consider unrealistic because agents with these expectations believe that co-operation will induce co-operation at high rates even when the level of co-operation in the group is very low.

Figure 1(b) shows a type of expectation, E(f_c) ~ f_c, we call "bandwagon" expectations, which assumes that agents believe that the group will imitate their actions to the extent that its members are already behaving similarly. Bandwagon agents expect that if they decide to free ride in a group of contributors, others will eventually choose to defect as well. The agents also believe that the rate at which the switch occurs over time depends on the fraction of the group presently co-operating. The more agents already co-operating, the faster the expected transition to defection. Similarly, agents expect that if they start co-operating in a group of free riders, others will start co-operating over time. Once again the agents believe that the rate depends on the proportion of co-operators, which in this case is very low. The key assumption behind bandwagon expectations is that agents believe their actions influence the contributors more than the sluggards.

Consider the set of beliefs the agent expects of others in the context of recycling programs. Recycling has a strong public good component because its benefits are available to all regardless of participation. Not too long ago very few towns had such programs. Perhaps you would read in the paper that a small town in Oregon had started a recycling program. Big deal. But several years later, when you read that many cities have jumped onto the recycling bandwagon, then suddenly the long-term benefits of recycling seem more visible: recycled products proliferate in the stores, companies turn green, and so forth. Alternatively, imagine some futuristic time when everyone recycles; in fact, your town has been recycling for years, everything from cans to newspapers to plastic milk jugs. Then you hear that some places are cutting back their recycling efforts because of the expense and because they now believe that the programs do not do that much good after all. You think about all your wasted effort and imagine that the other towns still recycling are reaching the same conclusion. In view of this trend, your commitment to recycling may falter.

The "opportunistic" expectations, E(f_c) ~ f_c(1 − f_c), of Figure 1(c) resemble bandwagon expectations when the proportion of co-operating members is small. However, when most of the group is co-operating, the agent believes that the imitative surge in response to his action will be small: the higher the fraction co-operating, the lower the expected response to an occasional defection or a marginal co-operation. Thus,
the agent may be tempted to opportunistically defect, enjoying the fruits of co-operation without incurring the cost. Opportunistic expectations merge bandwagon expectations at the low end and contrarian expectations at the high end; when the amount of co-operation is small, opportunistic agents believe that their co-operation will encourage co-operation and their defection will encourage defection at a rate that depends on the fraction co-operating, but that when the level of co-operation is high, the predicted rate depends on the fraction defecting instead.

Opportunistic expectations apply, for example, to situations in which agents believe that the common good can be produced by a subset of the group (even when that is not the case). Consider a common good such as public radio. The more voluntary contributions the radio station receives, the better the quality of the programming. An opportunistic listener who perceives that the station's received voluntary contributions are adequate will decide not to send his yearly contribution, believing that his own action has little consequence.
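The two emphasized classes are easy to instantiate in code. The sketch below uses the proportionalities given in the text, E(f_c) ~ f_c for bandwagon and E(f_c) ~ f_c(1 − f_c) for opportunistic expectations; the steepness parameter m follows the later discussion, while the normalization, and the fact that this particular quadratic peaks at f_c = 1/2 rather than at a tunable maximum, are simplifying assumptions of the sketch.

```python
# Illustrative instantiations of the two emphasized expectation classes.
# The proportional forms come from the text; the normalization and the
# location of the opportunistic maximum are assumptions of this sketch.
def bandwagon_expectation(f_c, m=1.0):
    # believed influence grows with the fraction already co-operating
    return m * f_c

def opportunistic_expectation(f_c, m=1.0):
    # believed influence is largest at intermediate levels of co-operation
    # and small when nearly everyone co-operates or nearly everyone defects
    return m * f_c * (1.0 - f_c)

for f in (0.1, 0.5, 0.9):
    print(f, bandwagon_expectation(f), opportunistic_expectation(f))
```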
Expectations and Utility

In Section 2 we presented the utility function for an individual faced with a social dilemma, neglecting the effect of expectations. We now return to the utility calculation to include the influence of expectations on preferences. This can be done, first, in a fairly general way using the expectation function, E(f_c). Individuals use their expectation function to extrapolate perceived levels of co-operation into the future. The framework underlying the different classes of expectations assumes that the members of the group expect the game to be of finite duration, parametrized by their horizon length, H.3 Since finite horizons mean that a dollar today is worth more than a dollar tomorrow, agents discount future returns expected at a time t' from the present at the rate $e^{-t'/H}$ with respect to immediate expected returns. Second, members expect that their choice of action, when reflected in the net benefits received by the others, will influence future levels of co-operation. Since, however, the decision of one individual affects another's return by only ±b/n, we assume that members perceive their influence as decreasing with increasing group size. The time scale of the dynamics is normalized using the parameter α, which is the rate at which members of the group re-examine their choices.

Individuals deduce the level of co-operation, f_c, from their received utility, as per Equation 2. Since the amount of utility obtained may depend on past (instead of present) levels of co-operation, the deduced fraction co-operating may actually correspond to a previous state of the group. We use the parameter τ to represent this delay in information. The deduced fraction co-operating also differs from the past value because of uncertainty, as discussed earlier.
Figure 2: Agents extrapolate the evolution of the group's behaviour using delayed information about the fraction co-operating. The upper curve represents how a member expects the dynamics to evolve if he co-operates, the lower if he defects. The rate at which the two curves respectively rise and fall depends on the member's class of expectation functions. Agents with bandwagon expectations extrapolate out very flat curves when the fraction co-operating is small and much steeper curves when this fraction is large. Opportunistic agents, in contrast, extrapolate out very flat curves whenever the fraction co-operating becomes either small or large, but predict sharper rises and declines for intermediate levels of co-operation.
Below, we develop mathematically how individual expectations convolve with instantaneous utility to determine the condition for co-operation. Let t represent the present time and let $\Delta f_c(t + t')$ denote the expected future difference (at time t + t') between the fraction of agents co-operating and the fraction of those defecting. Figure 2 shows how agents extrapolate the observed aggregate behaviour of the group into the future. Because of delays, the agents do not know the evolution of the dynamics in the grey area of the figure. Instead, they use the delayed value of the fraction co-operating, f_c(t − τ), to extrapolate the group's behaviour into the present and then into the future.

The difference between the two curves in Figure 2 corresponds to the time-varying expected deviation $\Delta f_c(t + t')$. The upper curve in the figure represents a member's extrapolation of group behaviour if he co-operates, the lower curve if he defects. The initial slope of the curves is proportional to the individual's expectation function. Thus, when the expectation function has a small value, the expected deviation grows very slowly - representing the individual's belief that one's actions count for little. On the other hand, when the expectation function yields a large value, the curves respectively rise and fall rapidly - indicating the belief that one's action influences others strongly. The rate at which the extrapolated curves rise and fall also depends on the size of the group and the re-evaluation rate: the first because of the assumption that influence declines with group size and the second because α sets the time scale.
Because a member's choice causes an instantaneous difference at t' = 0 of $\Delta f_c(t, t' = 0) = 1/n$, the extrapolated deviation at time t + t' becomes

In summary, members re-evaluate their decision whether or not to contribute to the production of the good at an average rate α, deducing the value f_c(t − τ) and following some set of expectations about the future, represented by the expectation function, E(f_c). From the members' prediction of how they expect f_c to evolve in relation to their choice and discounting the future appropriately, they then make the decision on whether to co-operate or defect by estimating their expected utility over time. Putting it all together, individuals perceive the advantage of co-operating over defecting at time t to be the net benefit
An individual co-operates when $\Delta B_i(t) > 0$, defects when $\Delta B_i(t) < 0$, and chooses at random between defection and co-operation when $\Delta B_i(t) = 0$. The decision is based on the fraction of the group perceived as co-operating at a time τ in the past, f_c(t − τ). These criteria reduce to the following condition for co-operation at time t:
The condition for co-operation derived above depends explicitly on the class of expectations and can be visualized using Figure 1. In the absence of uncertainty, an agent with bandwagon expectations co-operates in the dark-grey region where
while an opportunistic agent co-operates in the dark-grey region of (c) where
The parameter m indicates the steepness of the expectation functions and $f_c^{max}$ indicates the level of co-operation at which opportunistic expectations reach a maximum. Figure 1(b) and (c) should help make these conditions for co-operation more clear: in the cross-hatched regions where E(f_c) > E_crit agents will prefer co-operation (neglecting imperfect information), while in the lightly-shaded regions where E(f_c) < E_crit agents will prefer defection.

Since f_c(t − τ) is a mixture of two binomially distributed variables, Equation 6 provides a full prescription of the stochastic evolution of the interaction, depending on the relevant class of expectations. In particular, members co-operate with probability P(f_c(t − τ)) that they perceive co-operation as maximizing their expected future accumulated utility, given the actual attempted level of co-operation f_c(t − τ).
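The resulting choice rule is straightforward to state in code: an agent co-operates exactly when its expectation function, evaluated at the delayed, perceived fraction co-operating, exceeds the critical value defined by the chapter's Equation 6. Since that closed form is not reproduced here, the sketch below (an illustration, not the authors' code) treats E_crit as a free parameter supplied by the caller.

```python
# Sketch of the individual choice criterion. E_crit bundles the group size,
# benefit, cost, horizon length and re-evaluation rate (the chapter's
# Equation 6); it is left as a caller-supplied parameter here.
import random

def decide(expectation_fn, f_c_perceived, E_crit):
    e = expectation_fn(f_c_perceived)
    if e > E_crit:
        return 1                      # co-operate
    if e < E_crit:
        return 0                      # defect
    return random.randint(0, 1)       # indifferent: choose at random

# Example with a bandwagon-style expectation function E(f) = f
choice = decide(lambda f: f, f_c_perceived=0.7, E_crit=0.4)
```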
4. Theory

As in Glance and Huberman (1993a), we borrow methods from statistical thermodynamics (van Kampen 1981) in order to study the evolution of social co-operation. This field attempts to derive the macroscopic properties of matter (such as liquid versus solid, metal or insulator) from knowledge of the underlying interactions among the constituent atoms and molecules. In the context of social dilemmas, we adapt this methodology to study the aggregate behaviour of a group composed of intentional individuals confronted with social choices.
Dynamics

A differential equation describing the stochastic discrete interaction specified in the previous section can be derived using the mean-field approximation. This entails assuming that: (1) the size, n, of the group is large; and (2) the average value of a function of some variable is well approximated by the value of the function at the average of that variable. The equations developed below will allow us to determine the equilibrium points and their stability characteristics.

We specialize to the symmetric case p = q, in which an individual is equally likely to be perceived as defecting when he intended to co-operate as co-operating when he intended to defect. By the Central Limit Theorem, for large n, the random variable $\hat{f}_c$ tends to a Gaussian distribution with mean $\langle \hat{f}_c \rangle = p f_c + (1-p)(1-f_c)$. Using the distribution of $\hat{f}_c$, we now want to find the mean probability, $\langle \rho_c(f_c) \rangle$, that $E(\hat{f}_c) > E_{crit}$. Within the mean-field approximation, $\langle \rho_c(f_c) \rangle$ is given by the probability that $E(\langle \hat{f}_c \rangle) > E_{crit}$ (i.e., $\langle \rho_c(f_c) \rangle = \rho_c(\langle \hat{f}_c \rangle)$). Thus, the mean probability that $E(\hat{f}_c) > E_{crit}$ becomes
where $\operatorname{erf}(x/\sqrt{2}\sigma)$ represents the error function associated with the normal curve. The uncertainty parameter σ captures imperfect information and the spread of diversity of beliefs within the group (Glance and Huberman 1993b). The evolution of the number of agents co-operating in time is then described by the dynamical equation (Huberman and Hogg 1988)

$$\frac{df_c(t)}{dt} = \alpha\left[\rho_c\big(f_c(t-\tau)\big) - f_c(t)\right] \qquad (10)$$
where α is the re-evaluation rate and τ is the delay parameter, as defined earlier. Figure 3 depicts this differential equation for the bandwagon expectations in (a) and for opportunistic ones in (b). The figures superimpose y = ρ_c(f_c) and y = f_c. When the difference f_c − ρ_c(f_c) is positive, group members prefer defection (region D); when it is negative, they prefer co-operation (region C). The fixed points of the differential equation are given by the points of intersection of the two curves; the second of the three fixed points is unstable since a small co-operative perturbation takes the system into the more co-operative equilibrium,
Figure 3: The fixed points of the differential equation describing the dynamics of group co-operation are given by the intersection of y = ρ_c(f_c) and y = f_c. In region C, where ρ_c(f_c) > f_c, the dynamics evolve towards more co-operation, while in region D, where f_c > ρ_c(f_c), the dynamics evolve towards more defection. Contrasting (a) and (b) hints at the difference in the dynamical behaviour for bandwagon versus opportunistic expectations. For opportunistic expectations, the more co-operative fixed point occurs where ρ_c(f_c) intersects f_c with a negative slope. When the slope at this point of intersection is steep enough, the fixed point becomes unstable, causing oscillations and chaos, behaviour that is not observed for bandwagon expectations.
while a small perturbation in the other direction takes the system into the fixed point of overall defection. As the size of the group increases, the curve y = ρ_c(f_c) shifts so that it intersects y = f_c only near f_c = 0. By solving for the fixed points of Equation 10,
we can obtain the critical sizes beyond which co-operation can no longer be sustained for the different classes of expectations. In the case of perfect certainty (p = q = 1), these critical sizes can be expressed in simple analytical form. Consider first an instantiation of bandwagon expectations:
In this case, the critical group size beyond which co-operation is no longer a fixed point occurs when E_crit > 1, i.e., for groups of size greater than
This can be seen both pictorially in Figure 1(b) and from Equation 6: with perfect information, no individual will co-operate when E_crit is greater than the expectation function E(f_c) for all f_c. Similarly, if we choose the opportunistic expectations
the solution to Equation 10 yields only one non-co-operative fixed point for n > n*. Co-operation is the only possible global outcome if the group size falls below a second critical size n_min. For perfect information,
for bandwagon agents and
(approximately) for opportunistic agents. Notice that in either case these two critical sizes are not equal; in other words, there is a range of sizes between n_min and n* for which both co-operation and defection are possible fixed points.

An estimate of the possible critical group sizes can be obtained if one assumes, for example, a horizon length H = 50 (which corresponds to a continuation probability δ = 0.98), the re-evaluation rate α = 1, the benefit for co-operation b = 2.5 and the cost of co-operation c = 1. In this case one obtains n* = 77 and n_min = 10 in the case of bandwagon expectations, and n_min = 18 for opportunistic agents. Observe that an increase in the horizon length would lead to corresponding increases in the critical sizes.

Finally, linear stability analysis of the differential equation (Equation 10) shows that for bandwagon expectations the stability of the equilibrium points is independent of the value of the delay τ and of the re-evaluation rate α. Thus, the asymptotic behaviour of the group interaction does not depend on the delay; this observation is corroborated for the discrete model by numerical computer simulations such as those presented in Section 5. Moreover, the equilibrium points belong to one of two types: stable fixed point attractors or unstable fixed point repellors. Due to the linearity of the condition for co-operation for bandwagon expectations (Equation 6), the dynamical portrait of the continuous model contains no limit cycles or chaotic attractors. However, for opportunistic expectations the more co-operative equilibrium point becomes unstable for large delays, although the fixed point near f_c = 0 remains stable. Integrating the differential equation above reveals a panoply of dynamical behaviour, from oscillations to chaos. For small delays, the dynamics of opportunistic expectations is very similar to that of bandwagon expectations: two stable fixed points separated by a barrier.

We now examine how the stochastic fluctuations arising from uncertain information affect the dynamics of social dilemmas.

Fluctuations

Equation 10 determines the average properties of a collection of agents having to choose between co-operation and defection. The asymptotic behaviour generated by the dynamics provides the fixed points of the system and an indication of their stability; it does not address the question of how fluctuations away from the equilibrium state evolve in the presence of uncertainty. These fluctuations are important for two reasons: (1) the time necessary for the system to relax back to equilibrium after small changes in the number of agents co-operating or defecting might be long compared to the timescale of the collective task to be performed
or the measuring time of an outside observer; (2) large enough fluctuations in the number of defecting or collaborating agents could shift the state of the system from co-operating to defecting and vice versa. If that is the case, it becomes important to know how probable these large fluctuations are, and how they evolve in time.

In what follows we will use a formalism developed in Ceccatto and Huberman (1989) that is well suited for studying fluctuations away from the equilibrium behaviour of the system. This formalism relies on the existence of an optimality function, Ω, that can be constructed from knowledge of the utility function. The Ω function has the important property that its local minima are the equilibria of the system as well as the most probable configurations of the system. Depending on the complexity of the Ω function, there may be several equilibria, with the overall global minimum being the optimal state of the system. Specifically, the equilibrium probability distribution P(f_c) is given by
where the optimality function Ω for our model of ongoing collective action is given by
in terms of the mean probability ρ_c(f_c) that co-operation is preferred. Thus, the optimal configuration corresponds to the value of f_c at which Ω reaches its global minimum.

Within this formalism it is easy to study the dynamics of fluctuations away from the minima in the absence of delays. First, consider the case where there is a single equilibrium (which can be either co-operative or defecting). Fluctuations away from this state relax back exponentially fast to the equilibrium point, with a characteristic time of the order of 1/α, which is the average re-evaluation time for the individuals. Alternatively, there may be multiple equilibria, with the optimal state of the system given by the global minimum of the Ω function. A situation in which the system has two equilibria is schematically illustrated in Figure 4 for both bandwagon and opportunistic expectations. In (a) an Ω function characteristic of bandwagon expectations is shown whose minima are at either extreme, and in (b) an Ω function characteristic of opportunistic expectations is shown with one minimum at f_c ≈ 0 and a second minimum occurring at an intermediate value of f_c. The quadratic form of this second minimum accounts for the discrepancy in the dynamics of opportunistic expectations versus bandwagon expectations.
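An optimality function of this kind can be pictured numerically. The sketch below assumes that the derivative of Ω is the drift toward defection, dΩ/df_c = f_c − ρ_c(f_c), so that its local minima coincide with the stable fixed points of the mean-field dynamics; the error-function form of ρ_c follows the theory section, while the particular threshold used for illustration is an assumption, not the chapter's.

```python
# Numerical sketch of an optimality function whose minima sit at the stable
# fixed points. Assumption: dOmega/df = f - rho_c(f). rho_c is modelled with
# the error function around an illustrative (assumed) threshold f_crit.
import numpy as np
from scipy.special import erf

def rho_c(f, f_crit=0.45, sigma=0.1):
    # mean probability that co-operation is preferred
    return 0.5 * (1.0 + erf((f - f_crit) / (np.sqrt(2.0) * sigma)))

def omega(f_grid, f_crit=0.45, sigma=0.1):
    drift = f_grid - rho_c(f_grid, f_crit, sigma)
    # cumulative trapezoidal integral of the drift
    steps = 0.5 * (drift[1:] + drift[:-1]) * np.diff(f_grid)
    return np.concatenate(([0.0], np.cumsum(steps)))

f = np.linspace(0.0, 1.0, 501)
Omega = omega(f)
print("global minimum near f_c =", f[np.argmin(Omega)])
```

With the threshold below one half, the co-operative minimum is deeper than the defecting one, reproducing the bistable shape sketched in Figure 4.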
Figure 4: The optimality function Ω versus f_c, the fraction of agents co-operating, for bandwagon expectations in (a) and opportunistic expectations in (b). The global minimum is at A, the local minimum at B, and h is the height of the barrier separating state B from A.
If the system is initially in an equilibrium which corresponds to the global minimum (e.g., state A), fluctuations away from this state will relax back exponentially fast to that state. But if the system is initially trapped in a metastable state (state B), the dynamics away from this state is both more complicated and interesting. As was shown in Ceccatto and Huberman (1989), within a short time scale, fluctuations away from a local minimum relax back to it, but within a long time scale, a giant fluctuation can take place in which a large fraction of the agents switches strategies, pushing the system over the barrier maximum. Once the critical mass required for a giant fluctuation accumulates, the remaining agents rapidly switch to the new strategy and the system slides into the global equilibrium.

The time scale over which the nucleation of a giant fluctuation occurs is exponential in the number of agents. However, when such transitions take place, they do so very rapidly - the total time it takes for all agents to cross over is logarithmic in the number of agents. Since the logarithm of a large number is very small when compared to its exponential, the theory predicts that nothing much happens for a long time, but when the transition occurs, it does so very rapidly.

The process of escaping from the metastable state depends on the amount of imperfect knowledge that individuals have about the state of the system, in other words, on what individuals think the other agents are doing. In the absence of imperfect knowledge the system would always stay in the local minimum downhill from the initial conditions, since small excursions away from it by a few agents would reduce their utility. Only in the case of imperfect knowledge, which causes occasional large errors in the individual's estimation of the actual number co-operating, can a critical mass of individuals change their behaviour.
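A quick back-of-the-envelope calculation brings out the contrast between the two time scales just described: the waiting time for a giant fluctuation grows exponentially with group size, while the duration of the transition itself grows only logarithmically. The barrier-to-uncertainty ratio used below is an arbitrary illustration.

```python
# Contrast of the two time scales: exponential waiting time for a giant
# fluctuation versus logarithmic duration of the transition itself.
# The ratio h/sigma is an arbitrary illustrative value.
import math

h_over_sigma = 0.5
for n in (6, 12, 24, 48):
    waiting = math.exp(n * h_over_sigma)     # nucleation of the fluctuation
    sweep = math.log(n)                      # time for the group to cross over
    print(f"n = {n:2d}: waiting ~ {waiting:,.0f}, crossover ~ {sweep:.2f}")
```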
Determining the time that it takes for the group to cross over to the global minimum is a calculation analogous to particle decay in a bistable potential and has been performed many times (van Kampen 1981; Suzuki 1977). The time, t, that it takes for a group of size n to cross over from a metastable Nash equilibrium to the optimal one is given by

$$t \propto e^{\,nh/\sigma} \qquad (19)$$
with h the height of the barrier as shown in Figure 4 and σ a measure of the imperfectness of information. We should point out, however, that in our model the barrier height itself also depends on n, H, and p, making simple analytical estimates of the crossover time considerably more difficult.
5. Computer Simulations

The theory presented in the previous section has a number of limitations. The mean-field dynamics provides an approximation to the model in the extreme limit of infinite group size, as does the Ω function formalism. In addition, the theory can predict whether or not the equilibrium points are stable, but not the characteristics of the dynamics when they are unstable. Finally, although the Ω function formalism predicts sudden transitions away from the metastable state into the global equilibrium and shows that the average time to transition is exponential in the group size, in order to calibrate the transition time we must run computer experiments. While the theory may tell us that co-operation is the overall equilibrium point for a group of size 10 with horizon length 12, it does not indicate on average how long it will take for the system to break out of an initially defecting state (only that this time is exponential in the size of the group).

With this in mind, we ran a number of event-driven Monte Carlo simulations that we used both to test the analytical predictions of the theory and to calibrate the time constants with which co-operation and defection appear in a system of given size. These simulations run asynchronously: agents wake up at random intervals at an average rate α and re-evaluate their decision to co-operate or defect. They deduce the fraction co-operating from their accrued utility. Because of uncertainty, their deduction may differ from the true fraction co-operating. Higher levels of uncertainty result in larger deviations from the true value. The agents then decide whether or not to co-operate according to the choice criterion given by Equation 6. This choice criterion varies, of course, depending on the appropriate class of expectations. We present below a few representative results among the wide range of group dynamics and statistics collected for many different group sizes, horizon lengths and uncertainty levels.
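The simulation loop itself is compact. The sketch below is not the authors' event-driven code, but it follows the same recipe: each sweep, agents re-evaluate in random order, perceive a noisy, delayed reading of the attempted level of co-operation (the mixture of two binomials from Section 2), and apply the expectation-based choice rule. Because the closed forms of E_crit and the expectation functions are not reproduced here, both are supplied by the caller.

```python
# Minimal asynchronous Monte Carlo sketch of the simulations described above
# (not the authors' event-driven code). E_crit and the expectation function
# are caller-supplied assumptions; the delay tau is measured in sweeps.
import numpy as np

def perceived_fraction(n_c, n, p, q, rng):
    # intended co-operators register with probability p; intended defectors
    # register as co-operating with probability 1 - q
    return (rng.binomial(n_c, p) + rng.binomial(n - n_c, 1.0 - q)) / n

def simulate(n=12, sweeps=4000, p=0.93, q=0.93, tau=1, E_crit=0.4,
             expectation=lambda f: f, seed=0):
    rng = np.random.default_rng(seed)
    intends = np.ones(n, dtype=int)               # start fully co-operative
    history = [int(intends.sum())]                # attempted co-operators per sweep
    for _ in range(sweeps):
        n_c_delayed = history[max(0, len(history) - 1 - tau)]
        for i in rng.permutation(n):
            f_hat = perceived_fraction(n_c_delayed, n, p, q, rng)
            intends[i] = 1 if expectation(f_hat) > E_crit else 0
        history.append(int(intends.sum()))
    return np.array(history) / n

traj = simulate()
print("fraction co-operating: min", traj.min(), "max", traj.max())
```

Whether and when a run started from full co-operation collapses into defection depends sensitively on p, E_crit, the delay and the expectation class, which is what the calibration runs described below are for.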
Fixed Points and Fluctuations

The dynamics of bandwagon and opportunistic expectations are very similar when the delay in information vanishes (i.e., the limit τ → 0). In this case, as shown in the previous sections, our model of the collective action problem yields dynamics with two fixed points for a large range of group sizes, one an optimal equilibrium, the other a metastable equilibrium. If the group initially finds itself at or near the metastable equilibrium, it may be trapped there for very long times, making it effectively a stable state when the time scale of the interaction is shorter than the crossover time to the optimal configuration. This crossover time, t, given in Equation 19, is exponential in the size n of the group, in the height of the barrier, h, and in the inverse uncertainty, 1/σ. Note that the barrier height is also a function of the uncertainty, σ, and indirectly of n and the horizon length H, so the functional dependencies of the crossover time are more complex than appears at first sight. Nevertheless, as verified through simulations of groups of individuals engaged in collective action problems, crossover times are exponentially long. As a result, a large group which has an initial tendency to co-operate may remain in a co-operative state for a very long time even though the optimal state is defection. Conversely, small groups whose initial tendency is to defect can persist in non-optimal defection, until a large fluctuation finally takes the group over to co-operation.

As a concrete example, consider two small co-operating groups of size n = 6, with horizon length H = 9.5 and bandwagon expectations, for which the optimal state is co-operation. At t = 0, the groups merge to form a larger, co-operating group of size n = 12. For the larger group, co-operation is now a metastable state: no one individual will find it beneficial to defect, and the metastable co-operative state can be maintained for very long times, especially if p is close to 1. As shown in Figure 5, in one case mutual co-operation lasts for about 4,000 time steps, until a sudden transition (of duration proportional to the logarithm of the size of the group) to mutual defection occurs, from which the system will almost never recover (the time scale of recovery is many orders of magnitude larger than the crossover time). In this example, p = 0.93. If the amount of error increases so that p now equals 0.91 (thus reducing the height of the barrier between co-operation and defection by 21%), the crossover to defection occurs on the order of hundreds of time steps, instead of thousands.

Groups of agents with opportunistic instead of bandwagon expectations behave qualitatively similarly in the absence of information delays. For a range of group sizes, there are again two equilibrium points; however, as predicted by the Ω function (e.g., Figure 4(b)), the more co-operative equilibrium is actually a mixture of co-operation
Figure 5: Outbreak of defection. At t = 0, two co-operating groups of size n = 6 merge to form a larger, co-operating group of size n = 12. The agents follow bandwagon expectations and have horizon length H = 9.5 throughout, with p = 0.93, b = 2.5, c = 1, α = 1 and τ = 1. For these parameters, co-operation is the optimal state for a group of size n = 6, but for the combined group of size n = 12, co-operation is metastable. Indeed, as the figure shows, metastable co-operation persists for almost 4,000 time steps in this example. Uncertainty (p less than one) ensures that eventually a large fluctuation in the perceived number of agents co-operating takes the group over into a state of mutual defection, which is optimal.
and defection. Thus, co-operation and defection can coexist, with some agents free riding while other agents contribute. The dynamics, however, is ergodic as long as the agents have identical preferences and beliefs: all individuals "take turns" co-operating and defecting. Adding a bit of diversity, on the other hand, is enough to disrupt this parity: with diversity, at the mixed equilibrium point, some individuals will always tend to free ride while others contribute.

A second difference between opportunistic and bandwagon expectations at zero delays is that the typical size of fluctuations at the more co-operative equilibrium is larger for opportunistic groups. The quadratic curvature of the Ω function (Figure 4(b)) at the second equilibrium explains the increase in fluctuations. For systems with bandwagon expectations, fluctuations from the co-operative equilibrium can take the system away from equilibrium in only one direction. (With high uncertainty the equilibrium points move away from the extremes; however, high uncertainty also washes away the difference between opportunistic and bandwagon expectations in more general ways.) For example, Figure 6 gives a graph of an opportunistic group that has a long-term equilibrium that is a mixture of co-operation and defection: group size n = 10, horizon length H = 10, with p = 0.95. In this case, the number co-operating fluctuates around eight and the typical size of fluctuations is ±1, with an occasional fluctuation of size two.
Figure 6: Fluctuations about the mixed co-operative/defecting fixed point for an opportunistic group of size n = 10, horizon length H = 10, with p = 0.95. The number co-operating fluctuates around eight and the typical size of fluctuations is ±1, with an occasional fluctuation of size two.
We also collected statistics to verify the exponential dependence of the mean crossover time on the probability p that an agent's action is misperceived. For the data in Figure 7, the crossover time from metastable co-operation to overall defection was averaged over 600 simulations at each value of the uncertainty parameter p. Fitting an exponential to the data indicates that the mean crossover time goes as t ∝ exp(f(p)), where f(p) is quadratic in p. However, the more relevant relationship is between the mean crossover time and the height of the barrier of the Ω function. The mean crossover time t appears to be given by t = constant × exp(h(n, σ)), where the barrier height depends on the group size, n, and
Figure 7: Exponential increase of the crossover time as a function of the probability p that an agent's action is misperceived. The mean crossover time was obtained by averaging over 600 simulations at each value of the uncertainty parameter p. In all the runs, n = 12, H = 9.5, b = 2.5, c = 1, α = 1 and τ = 1, with bandwagon expectations. The grey curve is an exponential fit to the data.
the uncertainty, σ. For the example in Figure 7, the barrier height depends quadratically on the parameter p. The dependence of barrier height on group size is less straightforward, but apart from some outliers, the barrier height also increases quadratically in group size.
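The fit reported above amounts to regressing the logarithm of the mean crossover time on a quadratic in p. The sketch below shows the shape of that calculation; the data values are placeholders standing in for the averaged simulation output, not numbers from the chapter.

```python
# Fit log(mean crossover time) to a quadratic in p. The arrays below are
# placeholders; substitute the averages collected from the simulation runs.
import numpy as np

p_values = np.array([0.90, 0.91, 0.92, 0.93, 0.94])   # placeholder grid
mean_crossover = np.array([2e2, 4e2, 9e2, 2e3, 5e3])  # placeholder averages
a2, a1, a0 = np.polyfit(p_values, np.log(mean_crossover), deg=2)
print("f(p) =", a2, "* p^2 +", a1, "* p +", a0)
```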
Opportunistic Oscillations

When there are significant delays in information (in units of 1/α), the dynamical behaviour of opportunistic agents becomes very different from that of agents with bandwagon expectations, in particular for small amounts of uncertainty. For large delays and low uncertainty, the equilibrium points of the system become unstable. However, for any given group of agents with opportunistic expectations, the dynamics can be stabilized regardless of the delay by simply increasing the amount of uncertainty (or, alternatively, adding diversity). This claim can be verified by performing a stability analysis about the fixed points of Equation 10 and has been confirmed through computer experiments.

For long delays and low uncertainty, we observe a number of different behaviours. For example, when the agents' horizon length is long enough that the mixed equilibrium becomes more likely than the defecting one, with a high barrier between the two, we observe large oscillations at low levels of uncertainty. Figure 8 shows the time series of a computer experiment run in this regime. The size of the group is n = 10, the horizon length is H = 10, the delay is τ = 1, and p = 0.99. Because of the small number of agents in the group, the stochastic dynamics is not purely oscillatory as predicted by the differential equation of Equation 10, although the approximation improves for larger groups.
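The delay-induced instability can be seen directly by integrating a delayed mean-field equation. The sketch below assumes the relaxation form df_c/dt = α[ρ_c(f_c(t − τ)) − f_c(t)] suggested by the fixed-point picture of Figure 3, and models ρ_c with an error function around an opportunistic-style expectation; the specific threshold, steepness and parameter values are illustrative assumptions. With a long enough delay and small σ, the co-operative fixed point destabilizes into oscillations reminiscent of Figure 8.

```python
# Euler integration of an assumed delayed mean-field equation,
#   df/dt = alpha * (rho_c(f(t - tau)) - f(t)).
# rho_c uses an opportunistic-style expectation E(f) = 4 f (1 - f) compared
# against an illustrative critical value of 0.8; all numbers are assumptions.
import numpy as np
from scipy.special import erf

def rho_c(f, sigma=0.02):
    e = 4.0 * f * (1.0 - f)                       # opportunistic expectation
    return 0.5 * (1.0 + erf((e - 0.8) / (np.sqrt(2.0) * sigma)))

def integrate(alpha=1.0, tau=1.0, dt=0.01, t_max=200.0, f0=0.9, sigma=0.02):
    steps, lag = int(t_max / dt), int(tau / dt)
    f = np.empty(steps)
    f[: lag + 1] = f0                             # constant history over the delay
    for k in range(lag, steps - 1):
        f[k + 1] = f[k] + dt * alpha * (rho_c(f[k - lag], sigma) - f[k])
    return f

trajectory = integrate()
print(trajectory[-10:])                           # tail of the oscillating series
```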
Figure 8: Oscillatory dynamics of a group of opportunistic agents of size n = 10. The agents have horizon length H = 10, with p = 0.99, re-evaluation rate α = 1, information delay τ = 1, and the same benefit and cost for co-operation as before.
As the uncertainty rises (which corresponds to decreasing p in the experiments), the amplitude of the oscillations decreases, until, for high enough uncertainty, the equilibrium point becomes stable and stochastic fluctuations dominate. The distinction between stochastic fluctuations and instability is an important one, and brings up the possibility that one might be able to distinguish between randomness and deterministic chaos.

Bursty Chaos

If we decrease the horizon length of the opportunistic agents, we compromise the optimality of the mixed equilibrium in favour of that of the defecting one (i.e., the left minimum of the Ω function shifts downwards while the right one shifts upwards). Computer experiments show that for the shorter horizon length of H = 5.5, for example, the dynamics now flip-flops between two different behaviours, as can be seen in Figure 9 (with τ = 4 and with greater uncertainty, p = 0.9, which speeds up the flip-flops). For long times, the system remains trapped either in the equilibrium point at f_c = 0, with only small fluctuations away, or in a chaotic wandering state. The latter behaviour implies sensitivity to initial conditions even in the limit of small uncertainty, which makes long-term predictions about the group's behaviour impossible. Occasionally, the system breaks away from one of these two attractors only to be drawn back into the other, resulting in a bursty type of dynamics. Temporal averages are consequently a misleading indicator in such situations, since bursty behaviour implies that the typical is not the average for this kind of dynamics: given a brief snapshot, we cannot envision the longer history.
Figure 9: Bursty stable and chaotic dynamics of a group of opportunistic agents of size n = 10. The agents have horizon length H = 5.5, with p = 0.9, re-evaluation rate α = 1, information delay τ = 4, and the same benefit and cost for co-operation as before. The flip-flops between the two attractors - chaotic oscillations and the fixed point at overall defection - continue beyond the end of the time series shown here.
6. Conclusion

In this paper we have extended our previous study of the dynamics of social dilemmas to encompass a wide range of different individual beliefs. Within our framework of expectations all individuals are assumed to believe that a co-operative action on their part encourages further co-operation while defection encourages further defection. The unspecified part of the framework is how strong individuals believe their influence to be. Originally, we chose bandwagon expectations as a simple way to capture the individual belief that the group will imitate one's actions to the extent that it is already behaving similarly. However, this runs counter to many people's intuition (thanks to self-reflection?) that when the rest of the group co-operates, people are very likely to free ride, expecting to have very little effect. The opportunistic class of expectations was introduced to capture this intuition and in the hope of anchoring our results in a more general set of expectations. We attempted to map out the different types of beliefs using five classes of expectation functions whose functional forms are to be considered loosely as characterizing the classes rather than constraining them.

We explored the two classes of bandwagon and opportunistic expectations in greater detail, since they seemed most realistic and interesting. We derived two general results for both types: (1) that there is an upper limit to the group size beyond which co-operation cannot be sustained; and (2) that, as long as the delays in information are small, there is a range of group sizes for which there are two fixed points for the dynamics, and that there can be sudden and abrupt transitions from the metastable equilibrium to the overall equilibrium if the group is observed for a long enough period of time. The two fixed points for agents with bandwagon expectations are at the two extremes of mutual defection and mutual co-operation, while for opportunistic agents, the second fixed point is mixed: a primarily co-operating group supports a defecting minority. For the case of opportunistic agents, we also found that delays in information caused the second, more co-operative fixed point to become unstable. As a result, the dynamics exhibits a panoply of behaviours, from opportunistic oscillations to bursty chaos, thus excluding the possibility of sustained co-operation over very long times.

There are clear implications that follow from this work for the possibility of co-operation in social groups and organizations. In order to achieve spontaneous co-operation over long periods of time, an organization made up of individuals with different beliefs and expectations should be structured into small subunits, with their members having access to timely information about the overall productivity of the system. This allows for the spontaneous emergence of co-operation
and its persistence over long times. Failure to make available information about the overall utility accrued by the organization in a timely manner can lead to complicated patterns of unpredictable and unavoidable opportunistic defections, thus lowering the average level of co-operation in the system.

Notes

1. For a justification of the form of the individual utility function in the context of either divisible goods or pure public goods, see Bendor and Mookherjee (1987).

2. This assumption is justified in part by Quattrone and Tversky's experimental findings, in the context of voting, that many people view their own choices as being diagnostic of the choices of others, despite the lack of causal connections (Quattrone and Tversky 1986).

3. The concept of a horizon is formally related to a discount δ, which reflects the perceived probability that the game will continue through the next time step. The two are connected through the relation
which implies H = 1/(1 − δ).
References

Bendor, Jonathan, and Dilip Mookherjee (1987). Institutional structure and the logic of ongoing collective action. American Political Science Review, 81(1): 129-54.

Ceccatto, H. A., and B. A. Huberman (1989). Persistence of nonoptimal strategies. Proceedings of the National Academy of Sciences, USA, 86: 3443-46.

Friedman, David D. (1990). Price Theory. Cincinnati, OH: South-Western.

Glance, Natalie S., and Bernardo A. Huberman (1993a). The outbreak of co-operation. Journal of Mathematical Sociology, 17(4): 281-302.

(1993b). Diversity and collective action. In H. Haken and A. Mikhailov (eds.), Interdisciplinary Approaches to Complex Nonlinear Phenomena (New York: Springer), pp. 44-64.

(1994). Dynamics of social dilemmas. Scientific American, (March): 76-81.

Hardin, Garrett (1968). The tragedy of the commons. Science, 162: 1243-48.

Hardin, Russell (1971). Collective action as an agreeable n-Prisoners' Dilemma. Behavioral Science, 16(5): 472-81.

(1982). Collective Action. Baltimore: Johns Hopkins University Press.

Huberman, Bernardo A., and Tad Hogg (1988). The behavior of computational ecologies. In B. A. Huberman (ed.), The Ecology of Computation (Amsterdam: North-Holland), pp. 77-115.

Oliver, Pamela B., and Gerald Marwell (1988). The paradox of group size in collective action: A theory of the critical mass, II. American Sociological Review, 53: 1-8.
Olson, Mancur (1965). The Logic of Collective Action. Cambridge, MA: Harvard University Press.

Quattrone, George A., and Amos Tversky (1986). Self-deception and the voter's illusion. In Jon Elster (ed.), The Multiple Self (Cambridge: Cambridge University Press), pp. 35-38.

Schelling, Thomas C. (1978). Micromotives and Macrobehavior. New York: W. W. Norton.

Selten, Reinhard (1975). Re-examination of the perfectness concept for equilibrium points in extensive games. International Journal of Game Theory, 4: 25-55.

Simon, Herbert (1969). The Sciences of the Artificial. Cambridge, MA: MIT Press.

Smith, Adam (1937). The Wealth of Nations. New York: Random House.

Suzuki, Masuo (1977). Scaling theory of transient phenomena near the instability point. Journal of Statistical Physics, 16: 11-32.

Taylor, Michael (1976). Anarchy and Cooperation. New York: John Wiley.

(1987). The Possibility of Cooperation. Cambridge: Cambridge University Press.

van Kampen, N. G. (1981). Stochastic Processes in Physics and Chemistry. Amsterdam: North-Holland.
12
The Neural Representation of the Social World

Paul M. Churchland
1. Social Space

A crab lives in a submarine space of rocks and open sand and hidden recesses. A ground squirrel, in a space of bolt holes and branching tunnels and leaf-lined bedrooms. A human occupies a physical space of comparable complexity, but in our case it is overwhelmingly obvious that we live also in an intricate space of obligations, duties, entitlements, prohibitions, appointments, debts, affections, insults, allies, contracts, enemies, infatuations, compromises, mutual love, legitimate expectations, and collective ideals. Learning the structure of this social space, learning to recognize the current position of oneself and others within it, and learning to navigate one's way through that space without personal or social destruction, is at least as important to any human as learning the counterpart skills for purely physical space.

This is not to slight the squirrels and crabs, nor the bees and ants and termites either, come to think of it. The social dimensions of their cognitive lives, if simpler than ours, are still intricate and no doubt of comparable importance to them. What is important, at all levels of the phylogenetic scale, is that each creature lives in a world not just of physical objects, but of other creatures as well, creatures that can perceive and plan and act, both for and against one's interests. Those other creatures, therefore, bear systematic attention.

Even non-social animals must learn to perceive, and to respond to, the threat of predators or the opportunity for prey. Social animals must learn, in addition, the interactive culture that structures their collective life. This means that their nervous systems must learn to represent the many dimensions of the local social space, a space that embeds them as surely and as relevantly as does the local physical space. They must learn a hierarchy of categories for social agents, events, positions, configurations, and processes. They must learn to recognize instances of those many categories through the veil of degraded inputs, chronic ambiguity, and the occasional deliberate deception. Above all, they must learn to generate
appropriate behavioural outputs in that social space, just as surely as they must learn to locomote, grasp food, and find shelter. In confronting these additional necessities, a social creature must use the same sorts of neuronal resources and coding strategies that it uses for its representation of the sheerly physical world. The job may be special, but the tools available are the same. The creature must configure the many millions of synaptic connection strengths within its brain so as to represent the structure of the social reality in which it lives. Further, it must learn to generate sequences of neuronal activation-patterns that will produce socially acceptable or socially advantageous behavioural outputs. As we will see in what follows, social and moral reality is also the province of the physical brain. Social and moral cognition, social and moral behaviour, are no less activities of the brain than is any other kind of cognition or behaviour. We need to confront this fact, squarely and forthrightly, if we are ever to understand our own moral natures. We need to confront it if we are ever to deal both effectively and humanely with our too-frequent social pathologies. And we need to confront it if we are ever to realize our full social and moral potential. Inevitably, these sentiments will evoke discomfort in some readers, as if, by being located in the purely physical brain, social and moral knowledge were about to be devalued in some way. Let me say, most emphatically, that devaluation is not my purpose. As I see it, social and moral comprehension has just as much right to the term "knowledge" as does scientific or theoretical comprehension. No more right, but no less. In the case of gregarious creatures such as humans, social and moral understanding is as hard won, it is as robustly empirical and objective, and it is as vital to our well-being as is any piece of scientific knowledge. It also shows progress over time, both within an individual's lifetime and over the course of many centuries. It adjusts itself steadily to the pressures of cruel experience. And it is drawn ever forward by the hope of a surer peace, a more fruitful commerce, and a deeper enlightenment. Beyond these brief remarks, the philosophical defence of moral realism must find another occasion. With the patient reader fairly forewarned, let us put this issue aside for now and approach the focal issue of how social and moral knowledge, whatever its metaphysical status, might actually be embodied in the brains of living biological creatures. It can't be too difficult. Ants and bees live intricate social lives, but their neural resources are minuscule: for an ant, 10⁴ neurons, tops. However tiny those resources may be, evidently they are adequate. A worker ant's neural network learns to recognize a wide variety of
socially relevant things: pheromonal trail markings to be pursued or avoided; a vocabulary of antennae exchanges to steer one another's behaviour; the occasions for general defence, or attack, or fission of the colony; fertile pasture for the nest's aphid herd; the complex needs of the queen and her developing eggs; and so forth.
Presumably the challenge of social cognition and social behaviour is not fundamentally different from that of physical cognition and behaviour. The social features or processes to be discriminated may be subtle and complex, but as recent research with artificial neural networks illustrates, a high-dimensional vectorial representation - that is, a complex pattern of activation levels across a large population of neurons can successfully capture all of them. To see how this might be so, let us start with a simple case: the principal emotional states as they are displayed in human faces. 2. EMPATH: A Network for Recognizing Human Emotions Neural-net researchers have recently succeeded in modeling some elementary examples of social perception. I here draw on the work of Garrison Cottrell and Janet Metcalfe at the University of California, San Diego. Their four-stage artificial network is schematically portrayed in Figure 1. Its input layer or "retina" is a 64 X 64-pixel grid whose elements each admit of 256 different levels of activation or "brightness." This resolution, both in space and in brightness, is adequate to code recognizable representations of real faces. The input cells each project an axonal end-branch to every cell at the second layer of 1024 cells, a way station that compresses the retinal information in useful ways that need not detain us here. Each of the cells in that second layer projects an axonal filament to each and every one of the 80 cells in the third layer, which layer represents an abstract space of 80 dimensions in which the input faces are explicitly coded. This third layer projects finally to an output layer of only eight cells. These output cells have the job of explicitly representing the specific emotional expression present in the current input photograph. In all, the network contains (64 X 64) + (32 X 32) + 80 + 8 = 5208 cells, and a grand total of 4,276,864 synaptic connections. Cottrell and Metcalfe trained this network on eight familiar emotional states, as they were willingly feigned in the co-operating faces of twenty undergraduate subjects, ten male and ten female. Three of these charming subjects are displayed eight times in Figure 2, one for each of the eight emotions. In sequence, you will there see astonishment, delight, pleasure, relaxation, sleepiness, boredom, misery, and anger. The aim was to discover if a network of the modest size at issue could
Figure 1: EMPATH, a feedforward network for recognizing eight salient human emotions. (Layer One: Input image, 4096 cells; Layer Two: Compression, 1024 cells; Layer Three: Face Space, 80 cells; Layer Four: Detection, 8 cells.)
learn to discriminate features at this level of subtlety, across a real diversity of human faces. The answer is yes, but it must be qualified. On the training set of (8 emotions X 20 faces =) 160 photos in all, the network reached - after 1000 presentations of the entire training set, with incremental synaptic
Figure 2: Eight familiar emotional states, as feigned in the facial expressions of three human subjects. From the left, they are astonishment, delight, pleasure, relaxation, sleepiness, boredom, misery, and anger. These photos, and those for seventeen other human subjects, were used to train EMPATH, a network for discriminating emotions as they are displayed in human faces.
adjustments after each presentation - high levels of accuracy on the four positive emotions (about 80%), but very poor levels on the negative emotions, with the sole exception of anger, which was correctly identified 85% of the time. Withal, it did learn. And it did generalize successfully to photographs of people it had never seen before. Its performance was robustly accurate for five of the eight emotions, and its weakest performance parallels a similar performance weakness in humans. (Subsequent testing on real humans showed that they too had trouble discriminating sleepiness, boredom, and misery, as displayed in the training photographs. Look again at Figure 2 and you will appreciate the problem.) This means that the emotional expressions at issue are indeed within the grasp of a neural network, and it indicates that a larger network and a larger training set might do a great deal better. EMPATH is an "existence proof," if you like: a proof that for some networks, and for some socially relevant human behaviours, the one can learn to discriminate the other. Examination of the activation patterns produced at the third layer - by the presentation of any particular face - reveals that the network has developed a set of eight different prototypical activation patterns, one for each of the eight emotions it has learned, although the patterns for the three problematic negative emotions were diffuse and rather indistinct. These eight prototypical patterns are what each of the final eight output units is tuned to detect. 3. Social Features and Prototypical Sequences EMPATH's level of sophistication is of course quite low. The "patterns" to which it has become tuned are timeless snapshots. It has no grasp of any expressive sequences. In stark contrast to a normal human, it will recognize sadness in a series of heaving sobs no more reliably than in a single photo of one slice of that tell-tale sequence. For both the human and the network, a single photo might be ambiguous. But to the human, that distressing sequence of behaviour certainly will not be. Lacking any recurrent pathways, EMPATH cannot tap into the rich palette of information contained in how perceivable patterns unfold in time. For this reason, no network with a purely feedforward architecture, no matter how large, could ever equal the recognitional capacities of a human. Lacking any grasp of temporal patterns carries a further price. EMPATH has no conception of what sorts of causal antecedents typically produce the principal emotions, and no conception of what effects those emotions have on the ongoing cognitive, social, and physical behaviour of the people who have them. That the discovered loss of a loved one typically causes grief; that grief typically causes some degree
Figure 3: An elementary recurrent network, fed by sensory input at its lowest layer. The recurrent pathways are in boldface.
of social paralysis; these things are utterly beyond EMPATH's ken. In short, the prototypical causal roles of the several emotions are also beyond any network like EMPATH. Just as researchers have already discovered in the realm of purely physical cognition, sophisticated social cognition requires a grasp of patterns in time, and this requires that the successful network be richly endowed with recurrent pathways, additional pathways that cycle information from higher neuronal layers back to earlier neuronal layers. This alone will permit the recognition of causal sequences. Figure 3 provides a cartoon example of a recurrent network. Such networks are trained, not on timeless snapshots, as was EMPATH. They are trained on appropriate sequences of input patterns. An important subset of causal sequences is the set of ritual or conventional sequences. To take some prototypical examples, consider a social introduction, an exchange of pleasantries, an extended negotiation, a closing of a deal, a proper leave-taking, and so on. All of these mutual exchanges require, for their recognition as well as their execution, a well-tuned recurrent network. And they require of the network a considerable history spent embedded within a social space already filled with such prototypical activities on every side. After all, those prototypes must be learned, and this will require both instructive examples and plenty of time to internalize them. In the end, the acquired library of social prototypes hierarchically embedded in the vast neuronal activation space of any normally
socialized human must rival, if it does not exceed, the acquired library of purely natural or non-social prototypes. One need only read a novel by someone like George Eliot or Henry James to appreciate the intricate structure of human social space and the complexity of human social dynamics. More simply, just recall your teenage years. Mastering that complexity is a cognitive achievement at least equal to earning a degree in physics. And yet with few exceptions, all of us do it. 4. Are There "Social Areas" in the Brain? Experimental neuroscience in the twentieth century has focused almost exclusively on finding the neuroanatomical (i.e., structural) and the neurophysiological (i.e., activational) correlates of perceptual properties that are purely natural or physical in nature. The central and programmatic question has been as follows. Where in the brain, and by what processes, do we recognize such properties as colour, shape, motion, sound, taste, aroma, temperature, texture, bodily damage, relative distance, and so on? The pursuit of such questions has led to real insights, and we have long been able to provide a map of the various areas in the brain that seem centrally involved in each of the functions mentioned. The discovery technique is simple in concept. Just insert a long, thin microelectrode into any one of the cells in the cortical area in question (the brain has no pain sensors, so the experimental animal is utterly unaware of this telephone tap), and then see whether and how that cell responds when the animal is shown colour or motion, or hears tones, or feels warmth and cold, and so on. In this fashion, a functional map is painstakingly produced. Figure 4 provides a quick look at the several primary and secondary sensory cortices and their positions within the rear half of a typical primate cerebral cortex. But what about the front half of the cortex, the so-called "frontal lobe"? What is it for? The conventional but vague answer is, "to formulate potential motor behaviours for delivery to and execution by the motor cortex." Here we possess much less insight into the significance of these cortical structures and their neuronal activities. We cannot manipulate the input to those areas in any detail, as we can with the several sensory areas, because the input received by the premotor areas comes ultimately from all over the brain. It comes from areas that are already high up in the processing hierarchy, areas a long way from the sensory periphery where we can easily control what is and isn't presented. On the other hand, we can insert microelectrodes as before, but this time stimulate the target cell rather than record from it. In the motor cortex itself, this works beautifully. If we briefly stimulate the cells in
Figure 4: The location of some of the primary and secondary sensory areas within the primate cerebral cortex. (Subcortical structures are not shown.) The "motor strip" or motor output cortex is also shown. Note the broad cortical areas that lie outside these easily identified areas.
certain areas, certain muscles in the body twitch, and there is a systematic correspondence between the areas of motor cortex and the muscles they control. In short, the motor strip itself constitutes a well-ordered map of the body's many muscles, much as the primary visual cortex is a map of the eye's retina. Stimulating single cells outside of and upstream from the motor areas, however, produces little or nothing in the way of behavioural response, presumably because the production of actual behaviour requires smooth sequences of large activation vectors involving many thousands of cells at once. That kind of stimulation we still lack the technology to produce. A conventional education in neuroscience thus leaves one wondering exactly how the entire spectrum of sensory inputs processed in the rear half of the brain finally gets transformed into some appropriate motor outputs formulated in the front half of the brain. This is indeed a genuine problem, and it is no wonder that researchers have found it so difficult. From the perspective we have gained from our study of artificial networks, we can see how complex the business of vector coding and vector transformation must be in something as large as the brain. Plainly, sleuthing out the brain's complete sensorimotor strategy would be a daunting task even if the brain were an artificial network, a
network whose every synaptic connection strength were known and all of whose neuronal activation levels were open to continuous and simultaneous monitoring. But a living brain is not so accommodating. Its connection strengths are mostly inaccessible, and monitoring the activity of more than a few cells at a time is currently impossible. This is one of the reasons why the recent artificial network models have made possible so much progress. We can learn things from the models that we might never have learned from the brain directly. And we can then return to the biological brain with some new and betterinformed experimental questions to pose, questions concerning the empirical faithfulness of our network models, questions that we do have some hope of answering. Accordingly, the hidden transformations that produce behaviour from perceptual input need not remain hidden after all. If we aspire to track them down, however, we need to broaden our conception of the problem. In particular, we should be wary of the assumption that perception is first and foremost the perception of purely physical features in the world. And we should be wary of the correlative assumption that behavioural output is first and primarily the manipulation of physical objects. We should be wary because we already know that humans and other social animals are keenly sensitive, perceptually, to social features of their surroundings. And because we already know that humans and social animals manipulate their social environment as well as their purely physical surroundings. And above all, because we already know that infants in most social species begin acquiring their social coordination at least as early as they begin learning sensorimotor co-ordination in its purely physical sense. Even infants can discriminate a smile from a scowl, a kind tone of voice from a hostile tone, a humorous exchange from a fractious one. And even an infant can successfully call for protection, induce feeding behaviour, and invite affection and play. I do not mean to suggest that social properties are anything more, ultimately, than just intricate aspects of the purely physical world. Nor do I wish to suggest that they have independent causal properties over and above what is captured by physics and chemistry. What I do wish to assert is that, in learning to represent the world, the brains of infant social creatures focus naturally and relentlessly on the social features of their local environment, often slighting physical features that will later seem unmissable. Human children, for example, typically do not acquire command of the basic colour vocabulary until their third or fourth year of life, long after they have gained linguistic competence on matters such as anger, promises, friendship, ownership, and love. As a parent, I was quite surprised to discover this in my own children, and
surprised again to learn that the pattern is quite general. But perhaps I should not have been. The social features listed are far more important to a young child's practical life than are the endlessly various colours. The general lesson is plain. As social infants partition their activation spaces, the categories that form are just as often social categories as they are natural or physical categories. In apportioning neuronal resources for important cognitive tasks, the brain expends roughly as much of those resources on representing and controlling social reality as it does on representing and controlling physical reality. Look once again, in light of these remarks, at the brain in Figure 4. Note the unmapped frontal half, and the large unmapped areas of the rear half. Might some of these areas be principally involved in social perception and action? Might they be teeming with vast vectorial sequences representing social realities of one sort or other? Indeed, once the question is raised, why stop with these areas? Might the so-called "primary" sensory cortical areas - for touch, vision, and hearing especially - be as much in the business of grasping and processing social facts as they are in the business of grasping and processing purely physical facts? These two functions are certainly not mutually exclusive. I think the answer is almost certainly yes to all of these questions. We lack intricate brain maps for social features comparable to existing brain maps for physical features, not because they aren't there to be found, I suggest, but rather because we have not looked for them with a determination comparable to the physical case.
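To make the notion of vectorial representation a little more concrete, here is a minimal sketch in Python of the coding strategy described above: an input is pushed through a small feedforward layer to yield an activation vector, and the input is then categorized by whichever learned prototype pattern that vector most resembles. The layer sizes, the random weights, and the two category labels are invented for illustration; this is a toy instance of the strategy, not a model of EMPATH or of any cortical area.

```python
import math
import random

random.seed(0)

def layer(x, weights):
    """One feedforward step: weighted sums followed by a logistic squashing
    function, yielding an activation level between 0 and 1 for each cell."""
    return [1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
            for row in weights]

def nearest_prototype(activation, prototypes):
    """Categorize an activation vector by its closest prototype pattern."""
    def distance(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(prototypes, key=lambda label: distance(activation, prototypes[label]))

# Toy dimensions: a 16-"pixel" input and a 6-cell hidden layer standing in for
# EMPATH's 80-dimensional face space. A trained network would have learned its
# weights; here they are random stand-ins.
weights = [[random.uniform(-1.0, 1.0) for _ in range(16)] for _ in range(6)]

# Invented prototype activation patterns, one per made-up social category.
prototypes = {
    "smile": [0.9, 0.1, 0.8, 0.2, 0.7, 0.1],
    "scowl": [0.1, 0.9, 0.2, 0.8, 0.1, 0.7],
}

stimulus = [random.random() for _ in range(16)]   # a stand-in input image
hidden = layer(stimulus, weights)                 # the vectorial representation
print(nearest_prototype(hidden, prototypes))      # e.g., "smile" or "scowl"
```

The details are disposable; the point is the shape of the computation. Perception, on this picture, terminates not in a sentence but in a position in a high-dimensional activation space, and recognition - social or otherwise - is proximity to a learned prototype region of that space.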
5. Moral Perception and Moral Understanding Though there is no room to detail the case here, an examination of how neural networks sustain scientific understanding reveals that the role of learned prototypes and their continual redeployment in new domains of phenomena is central to the scientific process (Kuhn 1962; Churchland 1989,1995). Specific rules or "laws of nature" play an undeniably important but none the less secondary role, mostly in the social business of communicating or teaching scientific skills. One's scientific understanding is lodged primarily in one's acquired hierarchy of structural and dynamical prototypes, not primarily in a set of linguistic formulae. In a parallel fashion, neural network research has revealed how our knowledge of a language may be embodied in a hierarchy of prototypes for verbal sequences that admit of varied instances and indefinitely many combinations, rather than in a set of specific rules-to-be-followed (Elman 1992). Of course we can and do state grammatical rules, but a child's grammatical competence in no way depends upon ever
hearing them uttered or being able to state them. It may be that the main function of such rules resides in the social business of describing and refining our linguistic skills. One's grammatical capacity, at its original core, may consist of something other than a list of internalized rules-to-be-followed. With these two points in mind, let us now turn to the celebrated matter of our moral capacity. Let us address our ability to recognize cruelty and kindness, avarice and generosity, treachery and honour, mendacity and honesty, the Cowardly Way Out and the Right Thing to Do. Here, once again, the intellectual tradition of western moral philosophy is focused on rules, on specific laws or principles. These are supposed to govern one's behaviour, to the extent that one's behaviour is moral at all. And the discussion has always centred on which rules are the truly valid, correct, or binding rules. I have no wish whatever to minimize the importance of that ongoing moral conversation. It is an essential part of mankind's collective cognitive adventure, and I would be honoured to make even the most modest of contributions to it. Nevertheless, it may be that a normal human's capacity for moral perception, cognition, deliberation, and action has rather less to do with rules, whether internal or external, than is commonly supposed. What is the alternative to a rule-based account of our moral capacity? The alternative is a hierarchy of learned prototypes, for both moral perception and moral behaviour, prototypes embodied in the well-tuned configuration of a neural network's synaptic weights. We may here find a more fruitful path to understanding the nature of moral learning, moral insight, moral disagreements, moral failings, moral pathologies, and moral growth at the level of entire societies. Let us explore this alternative, just to see how some familiar territory looks from a new and different hilltop. One of the lessons of neural network research is that one's capacity for recognizing and discriminating perceptual properties usually outstrips one's ability to articulate or express the basis of such discriminations in words. Tastes and colours are the leading examples, but the point quickly shows itself to have a much broader application. Faces, too, are something we can discriminate, recognize, and remember to a degree that exceeds any verbal articulation we could possibly provide. The facial expression of emotions is evidently a third example. The recognition of sounds is a fourth. In fact, the cognitive priority of the preverbal over the verbal shows itself, upon examination, to be a feature of almost all of our cognitive categories. This supra-verbal grasp of the world's many dimensions of variation is perhaps the main point of having concepts: it allows us to deal
appropriately with the always novel but never-entirely-novel situations flowing endlessly towards us from an open-ended future. That same flexible readiness characterizes our social and moral concepts no less than our physical concepts. And our moral concepts show the same penetration and supra-verbal sophistication shown by nonmoral concepts. One's ability to recognize instances of cruelty, patience, meanness, and courage, for instance, far outstrips one's capacity for verbal definition of those notions. One's diffuse expectations of their likely consequences similarly exceed any verbal formulae that one could offer or construct, and those expectations are much the more penetrating because of it. All told, moral cognition would seem to display the same profile or signature that in other domains indicates the activity of a well-tuned neural network underlying the whole process. If this is so, then moral perception will be subject to the same ambiguities that characterize perception generally. Moral perception will be subject to the same modulation, shaping, and occasional "prejudice" that recurrent pathways make possible. By the same token, moral perception will occasionally be capable of the same cognitive "reversals" that we saw in such examples as the old/young woman in Figure 5. Pursuing the parallel further, it should also display cases where one's first moral reaction to a novel social situation is simply moral confusion, but where a little background knowledge or collateral information suddenly resolves that confusion into an example of something familiar, into an unexpected instance of some familiar moral prototype.
Figure 5. The Old Woman / Young Woman. A classic case of a visually ambiguous figure. The old woman is looking left and slightly towards us with her chin buried in her ruff. The young woman is looking to the left but away from us; her nose is barely visible, but her left ear, jaw line, and choker necklace are directly before us.
On these same assumptions, moral learning will be a matter of slowly generating a hierarchy of moral prototypes, presumably from a substantial number of relevant examples of the moral kinds at issue. Hence the relevance of stories and fables, and above all, the ongoing relevance of the parental example of interpersonal behaviour, and parental commentary on and consistent guidance of childhood behaviour. No child can learn the route to love and laughter entirely unaided, and no child will escape the pitfalls of selfishness and chronic conflict without an environment filled with examples to the contrary. People with moral perception will be people who have learned those lessons well. People with reliable moral perception will be those who can protect their moral perception from the predations of self-deception and the corruptions of self-service. And, let us add, from the predations of group-think and the corruptions of fanaticism, which involves a rapacious disrespect for the moral cognition of others. People with unusually penetrating moral insight will be those who can see a problematic moral situation in more than one way, and who can evaluate the relative accuracy and relevance of those competing interpretations. Such people will be those with unusual moral imagination, and a critical capacity to match. The former virtue will require a rich library of moral prototypes from which to draw, and especial skills in the recurrent manipulation of one's moral perception. The latter virtue will require a keen eye for local divergences from any presumptive prototype, and a willingness to take them seriously as grounds for finding some alternative understanding. Such people will by definition be rare, although all of us have some moral imagination, and all of us some capacity for criticism. Accordingly, moral disagreements will be less a matter of interpersonal conflict over what "moral rules" to follow, and more a matter of interpersonal divergence as to what moral prototype best characterizes the situation at issue. It will be more a matter, that is, of divergences over what kind of case we are confronting in the first place. Moral argument and moral persuasion, on this view, will most typically be a matter of trying to make salient this, that, or the other feature of the problematic situation, in hopes of winning one's opponent's assent to the local appropriateness of one general moral prototype over another. A non-moral parallel of this phenomenon can again be found in the old/young woman example of Figure 5. If that figure were a photograph, say, and if there were some issue as to what it was really a picture of, I think we would agree that the young-woman interpretation is by far the more "realistic" of the two. The old-woman interpretation, by comparison, asks us to believe in the reality of a hyperbolic cartoon.
A genuinely moral example of this point about the nature of moral disagreement can be found in the current issue over a woman's right to abort a first-trimester pregnancy without legal impediment. One side of the debate considers the status of the early fetus and invokes the moral prototype of a Person, albeit a very tiny and incomplete person, a person who is defenceless for that very reason. The other side of the debate addresses the same situation and invokes the prototype of a tiny and unwelcome Growth, as yet no more a person than is a cyst or a cluster of one's own skin cells. The first prototype bids us bring to bear all the presumptive rights of protection due any person, especially one that is young and defenceless. The second prototype bids us leave the woman to deal with the tiny growth as she sees fit, depending on the value it may or may not currently have for her, relative to her own longterm plans as an independently rightful human. Moral argument, in this case as elsewhere, typically consists in urging the accuracy or the poverty of the prototypes at issue as portrayals of the situation at hand. I cite this example not to enter into this debate, nor to presume on the patience of either party. I cite it to illustrate a point about the nature of moral disagreements and the nature of moral arguments. The point is that real disagreements need not be and seldom are about what explicit moral rules are true or false: the adversaries in this case might even agree on the obvious principles lurking in the area, such as, "It is prima facie wrong to kill any person." The disagreement here lies at a level deeper than that glib anodyne. It lies in a disagreement about the boundaries of the category "person," and hence about whether the explicit principle even applies to the case at hand. It lies in a divergence in the way people perceive or interpret the social world they encounter, and in their inevitably divergent behavioural responses to that world. Whatever the eventual resolution of this divergence of moral cognition, it is antecedently plain that both parties to this debate are driven by some or other application of a moral prototype. But not all conflicts are thus morally grounded. Interpersonal conflicts are regularly no more principled than that between a jackal and a buzzard quarrelling over a steaming carcass. Or a pair of two-year-old human children screaming in frustration at a tug-of-war over the same toy. This returns us, naturally enough, to the matter of moral development in children, and to the matter of the occasional failures of such development. How do such failures look, on the trained-network model here being explored? Some of them recall a view from antiquity. Plato was inclined to argue, at least in his occasional voice as Socrates, that no man ever knowingly does wrong. For if he recognizes the action as being genuinely wrong - rather than just "thought to be wrong by others" - what
motive could he possibly have to perform it? Generations of students have rejected Plato's suggestion, and rightly so. But Plato's point, however overstated, remains an instructive one: an enormous portion of human moral misbehaviour is due primarily to cognitive failures of one kind or another. Such failures are inevitable. We have neither infinite intelligence nor total information. No one is perfect. But some people, as we know, are notably less perfect than the norm, and their failures are systematic. In fact, some people are rightly judged to be chronic troublemakers, terminal narcissists, thoughtless blockheads, and treacherous snakes; not to mention bullies and sadists. Whence stem these sorry failings? From many sources, no doubt. But we may note right at the beginning that a simple failure to develop the normal range of moral perception and social skills will account for a great deal here. Consider the child who, for whatever reasons, learns only very slowly to distinguish the minute-by-minute flux of rights, expectations, entitlements, and duties as they are created and cancelled in the course of an afternoon at the day-care centre, an outing with one's siblings, or a playground game of hide-and-seek. Such a child is doomed to chronic conflict with other children - doomed to cause them disappointment, frustration, and eventually anger, all of it directed at him. Moreover, he has all of it coming, despite the fact that a flinty-eyed determination to "flout the rules" is not what lies behind his unacceptable behaviour. The child is a moral cretin because he has not acquired the skills already flourishing in the others. He is missing skills of recognition to begin with, and also the skills of matching his behaviour to the moral circumstance at hand, even when it is dimly recognized. The child steps out of turn, seizes disallowed advantages, reacts badly to constraints binding on everyone, denies earned approval to others, and is blind to opportunities for profitable co-operation. His failure to develop and deploy a roughly normal hierarchy of social and moral prototypes may seem tragic, and it is. But one's sympathies must lie with the other children when, after due patience runs out, they drive the miscreant howling from the playground. What holds for a playground community holds for adult communities as well. We all know adult humans whose behaviour recalls to some degree the bleak portrait just outlined. They are, to put the point gently, unskilled in social practices. Moreover, all of them pay a stiff and continuing price for their failure. Overt retribution aside, they miss out on the profound and ever-compounding advantages that successful socialization brings, specifically, the intricate practical, cognitive, and emotional commerce that lifts everyone in its embrace.
6. The Basis of Moral Character This quick portrait of the moral miscreant invites a correspondingly altered portrait of the morally successful person. The common picture of the Moral Man as one who has acquiesced in a set of explicit rules imposed from the outside - from God, perhaps, or from Society - is dubious in the extreme. A relentless commitment to a handful of explicit rules does not make one a morally successful or a morally insightful person. That is the path of the Bible Thumper and the Waver of Mao's Little Red Book. The price of virtue is a good deal higher than that, and the path thereto is a good deal longer. It is much more accurate to see the moral person as one who has acquired a complex set of subtle and enviable skills: perceptual, cognitive, and behavioural. This was of course the view of Aristotle, to recall another name from antiquity. Moral virtue, as he saw it, was something acquired and refined over a lifetime of social experience, not something swallowed whole from an outside authority. It was a matter of developing a set of largely inarticulable skills, a matter of practical wisdom. Aristotle's perspective and the neural network perspective here converge. To see this more clearly, focus now on the single individual, one who grows up among creatures with a more-or-less common human nature, in an environment of ongoing social practices and presumptive moral wisdom already in place. The child's initiation into that smooth collective practice takes time, time to learn how to recognize a large variety of prototypical social situations, time to learn how to deal with those situations, time to learn how to balance or arbitrate conflicting perceptions and conflicting demands, and time to learn the sorts of patience and self-control that characterize mature skills in any domain of activity. After all, there is nothing essentially moral about learning to defer immediate gratification in favour of later or more diffuse rewards. So far as the child's brain is concerned, such learning, such neural representation, and such deployment of those prototypical resources are all indistinguishable from their counterparts in the acquisition of skills generally. There are real successes, real failures, real confusions, and real rewards in the long-term quality of life that one's moral skills produce. As in the case of internalizing mankind's scientific knowledge, a person who internalizes mankind's moral knowledge is a more powerful, effective, and resourceful creature because of it. To draw the parallels here drawn is to emphasize the practical or pragmatic nature of both scientific and broadly normative knowledge. It is to emphasize the fact that both embody different forms of know-how: how to navigate the natural world in the former case, and how to navigate the social world in the latter.
This portrait of the moral person as a person who has acquired a certain family of perceptual and behavioural skills contrasts sharply with the more traditional accounts that picture the moral person as one who has agreed to follow a certain set of rules (e.g., "Always keep your promises," etc.), or alternatively, as one who has a certain set of overriding desires (e.g., to maximize the general happiness, etc.). Both of these more traditional accounts are badly out of focus. For one thing, it is just not possible to capture, in a set of explicit imperative sentences or rules, more than a small part of the practical wisdom possessed by a mature moral individual. It is no more possible here than in the case of any other form of expertise - scientific, athletic, technological, artistic, or political. The sheer amount of information stored in a well-trained network the size of a human brain, and the massively distributed and exquisitely context-sensitive ways in which it is stored therein, preclude its complete expression in a handful of sentences, or even a large bookful. Statable rules are not the basis of one's moral character. They are merely its pale and partial reflection at the comparatively impotent level of language. If rules do not do it, neither are suitable desires the true basis of anyone's moral character. Certainly they are not sufficient. A person might have an all-consuming desire to maximize human happiness. But if that person has no comprehension of what sorts of things genuinely serve lasting human happiness; no capacity for recognizing other people's emotions, aspirations, and current purposes; no ability to engage in smoothly co-operative undertakings; no skills whatever at pursuing that all-consuming desire; then that person is not a moral saint. He is a pathetic fool, a hopeless busybody, a loose cannon, and a serious menace to his society. Neither are canonical desires obviously necessary. A man may have, as his most basic and overriding desire in life, the desire to see his own children mature and prosper. To him, let us suppose, everything else is distantly secondary. And yet, such a person may still be numbered among the most consummately moral people of his community, so long as he pursues his personal goal, as others may pursue theirs, in a fashion that is scrupulously fair to the aspirations of others and ever protective of the practices that serve everyone's aspirations indifferently. Attempting to portray either accepted rules or canonical desires as the basis of moral character has the further disadvantage of inviting the sceptic's hostile question: "Why should I follow those rules?" in the first case, and "What if I don't have those desires?" in the second. If, however, we reconceive strong moral character as the possession of a broad family of perceptual, cognitive, and behavioural skills in the social domain, then the sceptic's question must become, "Why should
I acquire those skills?" To which the honest answer is, "Because they are easily the most important skills you will ever learn." This novel perspective on the nature of human cognition, both scientific and moral, comes to us from two disciplines - cognitive neuroscience and connectionist AI - that had no prior interest or connection with either the philosophy of science or moral theory. And yet the impact on both these philosophical disciplines is destined to be revolutionary. Not because an understanding of neural networks will obviate the task of scientists or of moral/political philosophers. Not for a second. Substantive science and substantive ethics will still have to be done, by scientists and by moralists and mostly in the empirical trenches. Rather, what will change is our conception of the nature of scientific and moral knowledge, as it lives and breathes within the brains of real creatures. Thus, the impact on metaethics is modestly obvious already. And no doubt a revolution in moral psychology will eventually have some impact on substantive ethics as well, on matters of moral training, moral pathology, and moral correction, for example. But that is for moral philosophers to work through, not cognitive theorists. The message of this paper is that an ongoing conversation between these two communities has now become essential.
Acknowledgments This paper is excerpted from chapters 6 and 10 of P. M. Churchland, The Engine of Reason, The Seat of the Soul: A Philosophical Journey into the Brain (Cambridge, MA: Bradford Books/MIT Press, 1995). Reprinted here with permission of the MIT Press.
References Churchland, P. M. (1989). A Neurocomputational Perspective: The Nature of Mind and the Structure of Science. Cambridge, MA: Bradford Books/MIT Press. (1995). The Engine of Reason, The Seat of the Soul: A Philosophical Journey into the Brain. Cambridge, MA: Bradford Books/MIT Press. Elman, J. L. (1992). Grammatical structure and distributed representations. In Steven Davis (ed.), Connectionism: Theory and Practice (New York: Oxford University Press), pp. 138-78. Kuhn, T. S. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Morality
13 Moral Dualism David Schmidtz
1. The Formal Structure of a Moral Theory When I teach Introductory Ethics, I spend several weeks on deontology and utilitarianism. Then I ask students to evaluate the two theories and every year a few of them say the truth is a little bit of both. A little bit of "rightness is determined solely by consequences" and a little bit of "consequences have nothing to do with it." I used to get exasperated. Now I think my students are right, and I have been trying to develop a theory that reflects my students' intuitions in a coherent way. What follows is a progress report. My approach to moral theory begins by borrowing from H. L. A. Hart. Hart's legal theory distinguishes between primary and secondary legal rules (1961, pp. 89-93). Primary rules comprise what we normally think of as the law. They define our legal rights and obligations. We use secondary rules, especially rules of recognition, to determine what the law is. For example, among the primary rules in my neighbourhood is a law saying the speed limit is thirty miles per hour. The secondary rule by which we recognize the speed limit is: read the signs. Exceeding speed limits is illegal, but there is no further law obliging us to read signs that post the speed limit. So long as I stay within the speed limit, the police do not worry about whether I read the signs. In reading the signs, we follow a secondary rule, not a primary rule. We can think of moral theories in a similar way.1 For example, utilitarianism's recognition rule is the principle of utility: X is moral if and only if X maximizes utility. As it stands, the principle defines a family of moral theories rather than any particular member thereof. The different flavours of utilitarianism are produced by replacing X with a specific subject matter. Act-utilitarianism applies the principle of utility to actions themselves. Act-utilitarianism's fully specified recognition rule - an act is moral if and only if it maximizes utility - then translates directly into act-utilitarianism's single rule of conduct: maximize utility. Rule-utilitarianism applies the principle of utility to sets of action-guiding rules. The resulting recognition rule states that an action guide is moral if and only if following it has more utility than
would following any alternative action guide. Of course, the utility-maximizing set of primary rules might boil down to a single rule of conduct saying "maximize utility." Then again, it might not.2 Deontological theories are harder to characterize. We could begin with a generic recognition rule saying X is moral if X is universalizable. Applying the rule to maxims yields a more specific recognition rule (something like: a maxim is moral if acting on it is universalizable), which in turn yields a set of imperatives, reverence towards which is grounded in considerations of universalizability. Perhaps the idea of universalizability does not have enough content to yield determinate imperatives on its own. Deontology may need a second recognition rule formulated in terms of respect for persons as ends in themselves, so that the two rules can converge on a set of concrete imperatives. But that is another story. A moral theory consists of a recognition rule applied to a particular subject matter. Given a subject matter, a rule of recognition for morals specifies grounds for regarding things of that kind as moral. By "grounds" I do not mean necessary and sufficient conditions. In act-utilitarianism, the principle of utility presents itself as necessary and sufficient for an act's morality, but trying to set up necessary and sufficient conditions is not the only way of doing moral theory. To have a genuine recognition rule, all we really need is what I call a supporting condition. A supporting condition is a condition that suffices as a basis for endorsement in the absence of countervailing conditions. Intuitionists claim that we could never fully articulate all the considerations relevant to moral judgment. We can allow for that possibility (without letting it stop us from doing moral theory) by formulating recognition rules in terms of supporting conditions - conditions that suffice to shift the burden of proof without claiming to rule out the possibility of the burden being shifted back again, perhaps by considerations we have yet to articulate. As an example of a supporting condition, we might say, along the lines of act-utilitarianism, that an act is moral if it maximizes utility, barring countervailing conditions. In two ways, act-utilitarianism properly so-called goes beyond merely offering a supporting condition. First, it denies that there are countervailing conditions, thereby representing the principle of utility as a proper sufficient condition, not just a supporting condition. Second, act-utilitarianism says an act is moral only if the act maximizes utility, thereby representing the principle of utility not only as sufficient but also as necessary for an act's morality. Personally, I do not think we will ever have a complete analysis of morality, any more than we will ever have a complete analysis of
knowledge. We use such terms in a variety of related ways, and there is no single principle nor any biconditional analysis to which the varying uses can all be reduced. That is not an admission of defeat, though, for the important thing is not to find the one true principle, but rather to look for principles that can form a backbone for a useful rule of recognition. Three points are worth highlighting. 1. A moral theory can range over more than one subject matter. We devise moral theories to help us answer questions raised by the subject of individual choice and action, of course. Yet, we might also want to assess individual character. Or we might want to assess the morality of the institutional frameworks within which individuals choose and act and develop their characters. These are distinct subject matters. 2. A moral theory can incorporate more than one recognition rule. There is nothing in the nature of morality to indicate that we should aim to answer all questions with a single recognition rule, because there is nothing in the nature of recognition rules to suggest there cannot be more than one. Modern ethical enquiry is often interpreted (maybe less often today than a few years ago) as a search for a single-stranded theory - a single rule of recognition applied to a single subject matter, usually the subject of what moral agents ought to do. Maybe Kant and Mill intended to promulgate single-stranded theories; they often are taken to have done so by friends and foes alike. In any case, when interpreted in that way, their theories can capture no more than a fragment of the truth. The truth is: morality is more than one thing. A theory will not give us an accurate picture of morality unless it reflects the fact that morality has more than one strand. Accordingly, I will not try to derive all of morality from a single recognition rule. I once began a paper by noting that utilitarianism (which says rightness is determined by consequences) and deontology (which says it is not) both express powerful insights into the nature of morality. "On the one hand, doing as much good as one can is surely right. On the other hand, it is also right to keep promises, sometimes even in cases where breaking them has better consequences" (1990, p. 622). The paper concluded on a grim note. "We have intuitions about morality that seem essentially embedded in theories that contradict each other. Something has to give" (p. 627). At the time, I was stumped by this dilemma, but it has become clear that what can and should give is the assumption that morality is single-stranded. When we come to despair of finding the single property shared by all things moral, we can stop looking for essence and start looking for family resemblance. By abandoning the search for a single-stranded moral theory, we put ourselves in a position
                                     Personal Strand                             Interpersonal Strand

Generic Recognition Rule             Is X individually rational?                 Is Y collectively rational?

Subject Matter                       X = personal goals.                         Y = interpersonal constraints.

Fully Specified Recognition Rule     A goal is moral if pursuing it is           A constraint is moral if pursuing goals
                                     individually rational.                      within it is collectively rational.
                                     (subject to countervailing conditions)      (subject to countervailing conditions)

Figure 1: A multi-stranded moral theory.
to notice that whether rightness is determined solely by consequences might depend on the subject matter. 3. A moral theory can be structurally open-ended. Utilitarianism and deontology, or single-stranded interpretations thereof, try to capture the whole truth about morality with a single recognition rule. By the lights of either theory, the other theory is a rival competing for the same turf. The theories are closed systems in the sense that, having incorporated one recognition rule, and claiming to capture the whole of morality with it, they have no room for others. By contrast, I see morality as an open-ended series of structurally parallel strands, each with its own recognition rule, each contributing different threads of morality's action guide. Any particular recognition rule has a naturally limited range, applying only to its own subject matter. No particular recognition rule pretends to capture the whole of morality, and so verifying that they do not do so will not refute the theory.
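Readers who like structure displayed explicitly may find the following sketch useful. It renders the formal proposal of this section in Python: a theory is an open-ended collection of strands, each strand pairs a subject matter with a recognition rule, and each rule supplies a supporting condition that can be defeated by countervailing conditions. The class names and the placeholder predicates are mine, introduced only for illustration; the two strands simply mirror Figure 1.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Strand:
    """One strand of a moral theory: a subject matter plus a recognition rule.

    The rule offers a supporting condition rather than a biconditional
    analysis: it endorses an item of its subject matter unless some
    countervailing condition defeats that endorsement."""
    subject_matter: str
    supporting_condition: Callable[[object], bool]
    countervailing_conditions: List[Callable[[object], bool]] = field(default_factory=list)

    def endorses(self, item: object) -> bool:
        if not self.supporting_condition(item):
            return False
        return not any(defeats(item) for defeats in self.countervailing_conditions)

@dataclass
class MoralTheory:
    """An open-ended series of structurally parallel strands."""
    strands: List[Strand]

# Placeholder predicates: the substantive work the theory still has to do.
def individually_rational(goal: object) -> bool:
    raise NotImplementedError

def collectively_rational(constraint: object) -> bool:
    raise NotImplementedError

moral_dualism = MoralTheory(strands=[
    Strand("personal goals", individually_rational),
    Strand("interpersonal constraints", collectively_rational),
])
```

Nothing here settles what the predicates should say - that is the substantive work taken up next - but it does make vivid how further strands could be added without disturbing the ones already in place.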
2. The Substance of Moral Dualism Those are my thoughts about the formal structure of moral theory. As far as substance goes, Figure 1 shows one way to flesh out a multistranded, open-ended moral theory. According to this theory, a goal is moral if pursuing it is individually rational; a constraint is moral if operating within it is collectively rational. Putting the two strands together, we conclude that to be moral is to pursue individually rational goals within collectively rational constraints. The two subject matters do not jointly exhaust the logical space of morality's possible subject matters, of course, but ranging over all possible subject matters is not necessarily a prerequisite for
comprehensiveness, because a theory ranging over all possible subjects of moral enquiry might exhibit a great deal of redundancy. (In any case, the theory depicted in Figure 1 could be made more comprehensive by adding further strands, or more simply, by broadening the subject matters of the existing strands.) This is roughly the theory with which I began. The form of the theory does not dictate its content, though, and I was not satisfied with the content. More than once, I changed my mind about how the subject matters and recognition rules should be defined. I was not always sure why I was changing my mind, either. What should we expect of a theory? To what questions is a moral theory supposed to give answers? Barbara Herman writes, "In general, judgment is possible only when the material to be judged is presented in a manner that fits the form of judgment. Moral judgment is not the first step in moral deliberation" (1985, p. 417). In different words, a subject matter must raise moral questions before we ever start devising recognition rules to help us answer them. Herman also seems to be saying that we can and sometimes must reconstruct a pre-theoretically given subject matter. If a subject matter raises questions that cannot all be answered by a single recognition rule, then we need to simplify and clarify and separate the materials to be judged before we can devise recognition rules to fit them. If my experience is any indication, Herman is right. I wanted to apply recognition rules only to subjects that make us feel a need for moral theory in the first place. I began with the general idea of assessing the morality of personal goals (What goals should we pursue?) and interpersonal constraints (How should the interests of others constrain our pursuits?). My subsequent search for recognition rules for those two subject matters was guided by objectives that are not easily reconciled. A recognition rule must home in on a property with normative force, one that constitutes reason for endorsement. To complicate matters, the property's normative force must be independent of morality. It must give us reasons for endorsement of an ordinary kind, ordinary in the sense of appealing to interests and desires. (If we said we recognize Y as moral in virtue of its having the property of being moral, that would be circular. Our method of recognizing what is moral must not presuppose that we already recognize what is moral.) On the other hand, morality is, after all, what a recognition rule is supposed to be recognizing. The rule has to be a basis for endorsing something as moral despite having a normative force that is not essentially moral. We are looking for what Peter Danielson (1992, p. 19) calls a fundamental justification within the realm of morality, which is a justification that does not appeal to concepts of the realm.3 If we recognize X as moral by recognizing a reason to endorse X that is not an essentially
moral reason, our recognition rule serves its epistemic function by recognizing what Danielson would call X's fundamental justification. Individual and collective rationality can provide a fundamental justification in the realm of morality because neither one is an essentially moral concept, yet each has its own kind of normative force - each embodies a kind of reason for action or reason for endorsement. In that sense, they are qualified to provide a fundamental justification within the realm of morality, whereas essentially moral concepts like justice are not. But why think of these conditions, particularly individual rationality, as supporting characteristically moral endorsement? To answer that, I need to step back. I originally got involved in trying to develop a moral theory because I was writing a book on the "Why be moral?" question. For my purposes, the question "What is morality?" translates into: what is it that people are calling into question when they ask "why be moral?" And the latter question has a real answer. "Why be moral" is not an idle question. When I ask "Why be moral?" I am questioning something from a first person singular perspective when I and my interlocutors take it for granted that there is a separate perspective from which we will be upset if it turns out that I have no reason to be moral from my singular perspective. That separate perspective is a first person plural perspective.4 Intuitively, my endorsement begins to look like characteristically moral endorsement when grounded in the thought, not that I have reason for endorsement, but that we have reason for endorsement. If moral endorsement involves taking a plural perspective, then we can imagine how being moral could be disadvantageous for you or me and yet we could still have clear reason to endorse being moral. For example, many theorists now think of co-operating in a Prisoner's Dilemma as a paradigm case of being moral. Co-operating in a Prisoner's Dilemma is disadvantageous from an I-perspective, yet it remains rational in the sense of being to our advantage from a we-perspective. It is from a plural perspective that, in a Prisoner's Dilemma, we find something stupid about individual rationality. When you and I each decide not to co-operate, you and I each do as well as possible, yet we do not do as well as possible. On the other hand, if being moral was pointless not only from a singular perspective but from a plural perspective as well, then it would be pointless, period. In that case, we would have no interest in asking "Why be moral?" On the contrary, being moral would be something we would have good reason to avoid in ourselves and condemn in others. Being moral, though, is not like that. It need not be rational from a singular perspective, but part of the essence of being moral is that we have reason to endorse it from a plural perspective.5 Accordingly,
when morality's recognition rules pick out X as moral, they do so by recognizing that X has properties we have reason to endorse from a plural perspective.6 By that standard, individual rationality does not make the grade, hence the first change to the theory depicted in Figure 1. Recognizing something as individually rational is weak at best as a basis for endorsing it from a plural perspective. One alternative is to talk in terms of what I have elsewhere called reflective rationality.7 Reflective rationality is individual means-end rationality together with a particular psychological profile. Reflectively rational people are introspective. In particular, they realize that their self-regard cannot be taken for granted. Self-regard waxes and wanes. People lose interest in themselves all the time. The strength of people's self-regard is a key variable in their preference functions, so reflectively rational people take steps to develop and enhance a healthy self-interest. Further, reflectively rational people conclude that to develop and enhance their interest in themselves, they need to give their lives instrumental value. And one of the best ways of giving their lives instrumental value is to make themselves valuable to others, to be a force for good in their communities, to be esteemed by others. And such a person is introspective enough to want it to be his or her real self, not merely a false façade, that is esteemed by others. A reflectively rational person has an interest in being a person of up-front principle - a person of virtue. So when I talk about reflective rationality, you can think of it as an Aristotelian idealization of self-interest. It is a standard means-end conception of rationality informed by empirical psychological assumptions about the need for self-respect and about what people have to do to develop and enhance their self-respect in a social setting. In short, a reflectively rational person asks not only "what do I want to get?" but also "what do I want to be?" The former question does not wear moral significance on its sleeve; the latter one does. Is reflective rationality grounds for specifically moral endorsement? Not necessarily. It depends on the subject matter. When we look at how people treat themselves, though, and especially at how their pursuits affect their characters, reflective rationality surely does matter to us from a plural perspective. We try to raise our children to be reflectively rational not only because we care about how they treat others but also because we care about how they treat themselves. We approve of people developing in a reflectively rational way, at least insofar as our question is about how people should treat themselves. Reflective rationality's relevance may extend beyond the issue of how people affect themselves, but it seemed especially apt for that subject.
Second change. What I call reflective rationality - this Aristotelian idealization of self-interest - is strongest as a basis for assessing how we treat ourselves. We could assess the goal of being a good professor or a good tennis player by asking whether it would be reflectively rational to develop yourself along those lines, but suppose we look at the goal of being a good mugger. We could argue that trying to be a good mugger is not reflectively rational, and therefore is not moral. But even if the argument goes through, it seems to get the right answer for the wrong reason. What makes it wrong to be a mugger is not that wise men find it unfulfilling, but rather that it violates interpersonal constraints. For that reason, we are well-advised to apply reflective rationality only to the subject of how our pursuits affect ourselves. Aside from obligations to others that derive from obligations to ourselves, the subject of how we ought to treat others is something we leave for the interpersonal strand. We thus have the makings of one part of morality: We can assess personal goals, in terms of how they affect the agent, by asking whether choosing and pursuing those goals is reflectively rational. Third change. Insofar as the interpersonal strand looks like rule-utilitarianism, it threatens to collapse into act-utilitarianism. If constraints are things we impose on ourselves, why would it not be collectively rational to impose upon ourselves a constraint to do all and only that which maximizes utility? If that happens, the personal strand pretty much disappears and we are left with old-fashioned act-utilitarianism. But the subject matter I actually have in mind is externally imposed constraints, not internally imposed constraints. Our social matrix imposes all kinds of formal and informal constraints on us, and I am interested in how existing social structures, formal and informal, actually constrain people. My theory is that these constraints, as they work through social structure, are moral if their actual effect is to make people better off. And when it comes to which structurally embedded constraints are collectively rational, we are talking about something that will not collapse into act-utilitarianism. Laws against fraud and murder are collectively rational, and replacing them with a single law directing us to maximize utility would not be. Fourth and final change. Actually, just a clarification. We need to say what it means to be collectively rational. One thing I would not do is equate collective rationality with maximizing aggregate utility. Reason is: generally speaking, we do not recognize our social matrix as maximizing aggregate utility. We do not have that kind of information. But we often do have information from which we can conclude that a certain norm makes people in general better off. When you do know that an institution is maximizing utility, I do not object to using that information as a basis for endorsement. From a plural perspective, I
approve of institutions that maximize utility, barring countervailing conditions. I also approve of institutions that make literally everyone better off. But I want a recognition rule with a little more breadth of application. So, I have been working with the idea of making people in general better off. That is something about which we often do have evidence: impressionistic, first-hand evidence, a priori theoretical evidence, and statistical evidence. That is what I shall be referring to when I speak of collective rationality. Of course, as a supporting condition, such a recognition rule can be subject to countervailing conditions. For example, we might reject an institution even though it makes most people better off, if we learn that it does so by exploitive means. (Suppose it involves using some patients as involuntary organ donors. That sort of thing.) In fact, concerns about exploitation (using some people as mere means to the ends of others) are so obvious and pervasive that we might consider building a caveat to that effect into the recognition rule in the first place, thereby turning the recognition rule into something that provided stronger support within a narrower range. Note that there are two ways to do this. First, we could build that caveat into the generic recognition rule, in which case it would enter the theory as a partial repudiation of the theory's consequentialist thrust. Alternatively, instead of simply tacking a non-exploitation proviso onto the generic recognition rule, we could derive it from the fully specified rule that we produce when we apply the generic rule to the subject of constraints embedded in institutional structure. On this view, the caveat concerning exploitation is a clarification of, rather than a partial retraction of, the proposal that an institution is moral if it is collectively rational. Just as we could argue that even deontologists have reason to reject institutions that make people worse off, so too can we argue that even utilitarians have reason to reject institutions that use some people as mere means to the ends of others. It is a familiar idea that a prohibition of exploitation is not built into the concept of utility maximization. Indeed, exploitation may well be utility-maximizing in particular cases and in such cases would command endorsement on act-utilitarian grounds. But the results of exploitive institutions are not like the results of exploitive acts. Exploitive institutions, like exploitive acts, can have utility in isolated cases, but an institution is not an isolated event. Whether an institution makes people better off is not a matter of how it functions on a particular day but rather of how it functions over its lifetime. An institution that operates by exploitive methods may have arguably good results on a particular day, but as time passes, the institution's overall record is increasingly likely to reflect the tendency of exploitation to have bad results.
If institutions with the power to act by exploitive methods could be trusted to do so only when the benefits would be great and widespread, the situation might be different. Indeed, we might say such institutions are moral after all. (Note that this would not be a problem for the theory as it stands. It would prove only that it was right to insist that making people better off by non-exploitive means is a supporting condition for institutional morality rather than a necessary condition.) Exploitive institutions, though, tend to be costly in an ongoing way, and those costs have a history of getting out of control. Therefore, when a conception of serving the common good joins the idea of making people better off to the idea of operating by non-exploitive methods, the latter idea is not merely a bit of deontology arbitrarily grafted onto the theory so as to save it from the worst excesses of utilitarianism. On the contrary, we are talking about institutions, and in that special context, a restriction against exploitation naturally follows from a consideration of how institutions make people better off. Indeed, limiting opportunities for exploitation is one of the primary methods by which institutions make people in general better off. This gives us the makings of a second part of morality: We can assess constraints imposed on us by social structure by asking whether such impositions are collectively rational in the sense of making people in general better off by non-exploitive methods. By this standard, there may not be a sharp separation between our having reason to endorse an institution and our not having reason to endorse it. The strength of our reasons for endorsement may instead form a continuum. At one end, an institution benefits no one and there is no reason to endorse it; at the other extreme, an institution genuinely makes everyone better off and there is no reason not to endorse it. As we move towards the latter extreme, we encounter institutions that make more and more people better off and which we have increasingly clear reason to recognize as making people in general better off. A theory that pretends to give us a precise cutoff (between institutions that are recognizably moral and those that are not) is, in most cases, merely pretending. When a theory answers our questions about the morality of real-world institutions by presenting us with spuriously precise cutoffs, it fails to take our questions seriously.8 This is not to deny that before we can say whether institutions make people better off, we need to know what counts as evidence that people are better off. Better off compared to what? What is the baseline? Should we say institutions serve the common good if people are better off under them than they would be in a hypothetical state of nature? It is hard to specify a baseline in a non-arbitrary way. There also will be problems of moral epistemology; that is, how do we know how well off people would be under counterfactual baseline conditions?
My thought is to look at how people answer such questions in the real world. No one begins by trying to imagine a hypothetical baseline array of institutions. We do not need to know about the state of nature or about how to characterize a hypothetical original position. It is easier and more relevant to ask how people's lives actually change through contact with particular institutions. Does a given institution solve a real problem? It is no easy task to say whether a particular institution solves a real problem, thereby affecting people in a positive way. Nevertheless, we do it all the time. We have fairly sophisticated ways of assessing whether rent controls or agricultural price support programs, for example, make people in general better off. We do not need to posit arbitrary baselines, either. It is enough to look at people's prospects before and after an institution's emergence, to compare communities with the institution to communities without it, and so on. These comparisons, and others like them, give us baselines, and such baselines are not arbitrary.9 In assessing institutions, we use any information we happen to have. Evidence that an institution is making people better off could come in the form of information that people who come in contact with the institution have higher life expectancies or higher average incomes, for example. No such measure would be plausible as an analysis of what it means to make people better off. But such measures comprise the kind of information people actually have. Therefore, such measures comprise the kind of information on which people must base their decisions about whether to regard an institution as moral. The more specific indices are surrogates for the idea of making people better off. Or, if you like, they are part of the common good's ostensive definition, in the sense that one way to make people better off, other things equal, is to increase their life expectancy; another way is to increase their income, and so on. Note that I am trying to capture the spirit of interpersonal morality here, not the letter. The idea of making people in general better off is a vague idea, but I believe that this vague idea accurately represents the spirit of morality. If we need a more precise working definition of this vague idea, then we turn to any of the more precise theoretical or statistical surrogates mentioned above, depending on what kind of concrete information we actually have available, so long as the surrogate is in keeping with the spirit of making people in general better off. Figure 2 depicts the gist of moral dualism. One strand focuses on matters of character; the other focuses on how constraints function when externally imposed on people by formal and informal social structures. According to moral dualism, our question about what serves as a recognition rule for morals has at least two answers. One rule recognizes, as moral, goals that help the agent develop and sustain
a reflectively rational character. The other rule recognizes, as moral, constraints that function in a collectively rational way as embedded in social structure. In plainer words, we owe it to ourselves to nurture virtuous character, and we owe it to others to abide by social norms that serve the common good.
[Figure 2: Moral dualism's two strands. Personal strand - generic recognition rule: Is X reflectively rational? Subject matter: X = goals, in terms of how their pursuit affects the agent's character. Fully specified recognition rule: a goal is moral if pursuing it helps the agent to develop in a reflectively rational way (subject to countervailing conditions). Interpersonal strand - generic recognition rule: Is Y collectively rational? Subject matter: Y = constraints on conduct as embedded in social structure. Fully specified recognition rule: a constraint on conduct is moral if it works through social structure in a collectively rational way (subject to countervailing conditions).]
Moral dualism's recognition rules pick out parts of common-sense morality that serve our ends from a plural perspective and thus are not merely "moral as the term is commonly understood" but also are independently shown to warrant endorsement. Reflectively rational character and collectively rational institutions command our endorsement from a plural perspective even when they fail to motivate some of us from a singular perspective, and one of the marks of a moral reason is that it engages us in that way. When applied to suitable subject matters, moral dualism's recognition rules identify parts of common sense morality as warranting endorsement on grounds that go beyond matters of occurrent desire. They home in on concerns crucial to human wellbeing and to a concept of the good life for human beings living together. The interpersonal strand will be seen by some as a kind of rule-utilitarianism, and I have no objection to that, so long as it is understood how far we stretch the label if and when we label moral dualism's interpersonal strand as rule-utilitarian. First, the interpersonal strand does not purport to be the whole of morality. Second, it does not ask us to search the universe of logically possible sets of action-guiding rules. It does not ask us to adopt the action guide that would lead us to produce more utility than we would produce by following any alternative action guide. It does not ask us to adopt the action guide that would have the most utility if it were adopted by everyone. Instead, it asks us
to be aware, as citizens, of existing formal and informal constraints and to respect those that are actually working through social structure in such a way as to make people in general better off. The interpersonal strand ties moral obligations to the content of existing social structure. (It does not do so blindly, though. It ties only one strand of morality to existing social structure, and that strand ties itself to existing social structures only to the degree that we have reason to endorse them.) I do not know whether moral dualism's two recognition rules are morality's ultimate foundation. But they are fundamental enough to support endorsement, and that is what matters. We could try to reduce matters of character to matters of serving the common good, or vice versa. We could try to reduce both to variations on some third and even more basic theme. We might even succeed. But neither rule's status as a supporting condition depends on whether we can reduce it to something else. When we get to either rule, we get to something that, applied to an appropriate subject matter, supports endorsement. We do not need to reduce either rule to anything more fundamental. Nor do we need to look for grounds that are unique in supporting endorsement. There are truths about how our choices affect our characters, and there are truths about whether our social structures make us better off; therein lie genuine and related grounds for endorsement. Our grounds for endorsement must be real; they need not be uniquely so.
3. The Two Strands Form a Unified Theory
This section argues that moral dualism's two strands really do go together; they are parts of the same theory. First, they help to determine each other's content. Second, they help to determine each other's limits. The next section talks about what happens when moral goals come into conflict with moral constraints. The previous section defined collective rationality in terms of making people in general better off. If we were working with a single-stranded utilitarian theory of the right, there would not be much else to say. We would be more or less at a dead end, needing to look beyond the theory for a conception of goodness - what counts as making people better off. In contrast, one interesting thing about moral dualism is that, since the theory has two distinct recognition rules, they can look to each other rather than beyond the theory. For example, we can look to the personal strand for ideas about what social structure is supposed to accomplish. Specifically, we can define collective rationality as the property of being conducive to the flourishing of people in general as reflectively rational agents. The theory's two strands are therefore connected via their recognition rules, since they each incorporate a conception of rationality, one of which is defined in terms of the other. See Figure 3.
[Figure 3: Defining collective rationality. The two-strand layout of Figure 2, with an arrow labelled "helps define" running from the personal strand's generic recognition rule ("Is X reflectively rational?") to the interpersonal strand's generic recognition rule ("Is Y collectively rational?").]
It does not go both ways. I would not try to define reflective rationality in terms of collective rationality. As the personal strand's recognition rule, though, reflective rationality has a kind of definitional link to collective rationality, since it is subject to countervailing conditions that get their content from the interpersonal strand. Specifically, moral goals must be pursued within moral constraints. Kate's goal of going to medical school and becoming a surgeon may be reflectively rational, but the morality of her goal does not give her license to raise tuition money by fraudulent means. Pursuits that sustain a reflectively rational character are presumptively moral, but the presumption can be reversed by showing that the pursuit has run afoul of collectively rational interpersonal constraints. See Figure 4. We have looked at two definitional links between moral dualism's recognition rules. By definition, collective rationality involves helping individuals to flourish as reflectively rational beings. Reflective rationality's countervailing conditions, in turn, are defined in terms of collective rationality. There are contingent links as well, to the extent that establishing honest rapport with others, and living peacefully and productively within one's community are contingently part of being reflectively rational in a social setting.10 Such contingent links between reflective and collective rationality are secured not only by the tendency of reflectively rational agents to want to play a role in a collectively rational community, but also (from the other direction) by the tendency of a collectively rational community to make room for reflectively rational pursuits. Because structurally embedded constraints impose themselves from outside rather than from within, they serve the common good only insofar as they
induce people to serve the common good. Collectively rational communities create incentives and opportunities such that individually rational agents, reflective or not, normally have reasons to act in ways that serve the common good. See Figure 5.
[Figure 4: Defining the personal strand's countervailing conditions. The layout of Figure 3, with a further arrow labelled "helps define" running from the interpersonal strand's fully specified recognition rule to the personal strand's countervailing conditions.]
[Figure 5: Contingent links between moral goals and moral constraints. The layout of Figure 4, with an additional link labelled "contingent links" between the two strands.]
We should not think of either strand as independently specifying a sufficient condition for something being moral. No single strand speaks for morality as a whole. If a goal's choice and subsequent pursuit help Kate to sustain a reflectively rational character, then the goal is moral by the lights of the personal strand. If Kate pursues that goal within constraints imposed by collectively rational social structures, then her pursuit is moral by the lights of the interpersonal strand. To be moral, period, her choice and subsequent pursuit must pass both tests (and maybe other tests as well, if there are further strands of morality bearing on action). In terms of their action-guiding function, the two strands are complementary parts of a unified theory. In concert, they converge on an action guide that says something about both ends and means: One should pursue reflectively rational ends via means permitted by collectively rational social structures. Morally, one seeks to make oneself a better person - a person with more to live for - within constraints imposed by social structures that serve the common good.
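The action-guiding upshot can be put schematically. Below is a minimal sketch - my illustration, not Schmidtz's own formalism - of the point that a choice and its pursuit count as moral, period, only if they pass both strands' tests; the predicate names and the flags in the Kate example are placeholders for the substantive judgments the theory leaves to agents, and the sketch ignores any further strands the theory leaves room for.

    def personal_strand_ok(goal):
        # Does choosing and pursuing the goal help the agent develop and
        # sustain a reflectively rational character, with no countervailing
        # conditions triggered?
        return goal["reflectively_rational"] and not goal["countervailed"]

    def interpersonal_strand_ok(pursuit):
        # Is the pursuit conducted within constraints that work through
        # social structure in a collectively rational way?
        return (pursuit["within_collectively_rational_constraints"]
                and not pursuit["countervailed"])

    def moral_period(goal, pursuit):
        # Neither strand alone speaks for morality as a whole.
        return personal_strand_ok(goal) and interpersonal_strand_ok(pursuit)

    # Kate: becoming a surgeon is reflectively rational, but raising the
    # tuition by fraud falls outside collectively rational constraints.
    goal = {"reflectively_rational": True, "countervailed": False}
    pursuit = {"within_collectively_rational_constraints": False,
               "countervailed": False}
    print(moral_period(goal, pursuit))  # False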
4. Conflict
Like reflective rationality as applied to personal goals, collective rationality as applied to interpersonal constraints admits of countervailing conditions. Suppose you have a medical emergency on your hands. You are bleeding badly, and the most straightforward way of getting to a doctor involves parking illegally, on a street where parking laws serve the common good. My remarks on how the strands intertwine, and my conclusion that they constitute a unified theory, may seem to suggest that they never yield conflicting guidance. Are the strands really so neatly woven together? I think that, although morality's interpersonal constraints mesh with morality's personal strand in normal cases, the two can clash in extreme cases. The kind of moral force that real-world regulations have, when they have any, cannot rule out the possibility of cases where people have decisive reasons to disregard pertinent regulations and, for example, park illegally in an emergency. It can sometimes be reflectively rational, even for people who appreciate and respect moral social structures, to react in ways that are not "by the book." The fact that a given pursuit is reflectively rational is not enough to override collectively rational constraints. On the other hand, if a reflectively rational pursuit is sufficiently important on its own terms, that can make it a matter of collective rationality as well. After all, collective rationality ultimately is a matter of what makes people generally better off as reflectively rational agents. In situations of conflict, morality's interpersonal constraints normally trump - they constrain - personal goals, but when reflectively rational goals become overwhelmingly important, moral agents ought to realize that forcing people to ignore overwhelmingly important moral goals is no part of a moral social structure's purpose. Forcing us to ignore overwhelmingly important goals would not serve the common good. Parking regulations become morally irrelevant, for example, to a person seeking medical attention for a seriously injured family member.11
Single-stranded theories have dominated modern moral philosophy in part because of a widespread view that theories incorporating more than one recognition rule inevitably are beset by internal conflict. Moreover, so the story goes, conflicts are resolvable only by some overarching principle of adjudication. The over-arching principle will be the theory's real and only fundamental recognition rule. Thus, some theorists would say there is an instability in theories proposing more than one recognition rule. With a little pushing, all such theories collapse into single-stranded theories. Perhaps it is already clear why this view is mistaken when it comes to moral dualism. Moral social structures impose constraints that can conflict with reflective rationality, but the two strands can settle their disputes without having to resort to outside arbitration. In most cases, the personal strand defers to the interpersonal strand. Reflectively rational agents have reason to defer to (and even to internalize constraints imposed by) a collectively rational social structure, insofar as the structure serves their ends from a plural perspective and also insofar as the structure gives them various first person singular incentives to defer. In extraordinary cases, though, our reflectively rational goals can become overwhelmingly important. Reflectively rational people have reasons to comply with moral social structures in normal cases, but to comply with moral parking laws in a situation in which we are in danger of bleeding to death would not be reflectively rational. In a case like that, the personal strand cannot defer, but the interpersonal strand can, and it has good reason to do so. Moral social structures do not try to change us in such a way that we would rather die than double-park. The attempt to command such blind allegiance would not serve the common good. Therefore, both strands, each on their own ground, agree that in normal cases, reflectively rational goals must be pursued within constraints imposed by collectively rational social structures. Likewise, both strands, each on their own ground, agree there can be cases in which abiding by collectively rational parking laws would be wrong: neither individually nor collectively rational. If it seems paradoxical to say abiding by collectively rational rules is not always collectively rational, then recall that the collective rationality of laws is a matter of how they function over the course of their existence. What we are talking about now is the collective rationality of abiding by moral institutions in abnormal cases. For example, a rule against shouting in libraries is moral because it makes people generally better off. The constraint makes people better off in virtue of how it channels behaviour in normal cases. The
constraint does not spell out what people should do in literally every contingency that might arise. Library patrons generally are better off with simple norms, and the fact that they are generally better off with simple norms is what gives simple norms their moral force. Shouting in a library might be reflectively or even collectively rational in a given case, but that by itself does not undermine the moral standing of a prohibition that (in virtue of making it possible for patrons to count on peace and quiet) remains collectively rational as an ongoing norm. If you notice smoke pouring from the ventilator shaft, though, the situation has deviated so far from the norm that normal constraints no longer apply. And it is obvious that they no longer apply. When smoke is pouring from the ventilator shaft, the background conditions that give the norm its ongoing utility, and thus its moral force, can no longer be taken as given. The norm's moral force is predicated on the ongoing existence of the library and its patrons, but their ongoing existence is now the issue. When the library begins to look more like a death trap than like a quiet place to do research, disturbing fellow patrons becomes morally okay. The personal strand dictates violating the constraint against disturbing fellow patrons, and the interpersonal strand does not demand compliance (or even permit it, given that different norms apply in emergencies). Note that there is no direct appeal here to the plural perspective as an over-arching moral principle. Instead, as in normal cases, the plural perspective is the perspective from which we identify reasons for endorsement; it is not itself a reason for endorsement. Having identified collective rationality as grounds for endorsing social structures as moral, we conclude that a rule against disturbing fellow patrons, grounded in considerations of collective rationality, reaches its natural limit when the building is on fire. The only kind of rule against disturbing library patrons that could be considered collectively rational is a rule that is understood not to apply when you notice smoke pouring from the ventilator shaft. Likewise, it would run counter to interpersonal morality for a government to hire police officers to run down and handcuff people who double-park in a desperate attempt to avoid bleeding to death. This is so even when laws forbidding double parking are moral. That is how moral dualism explains the occurrence and resolution of moral conflict. There is no need - there is not even room - for an over-arching principle of adjudication. Moral dualism's two strands provide the source of each other's countervailing conditions. Conflict between the two strands is the source of the personal strand's countervailing conditions in normal cases, and is the source of the interpersonal strand's countervailing
conditions in abnormal cases. The latter point is the puzzle's final piece; we have characterized the theory's recognition rules, its subject matters, and now its main countervailing conditions as well. Reflective rationality is defined exogenously, by a theory of humanly rational choice. Collective rationality is defined in terms of reflective rationality. Personal morality's limits are defined in normal cases by the constraints of interpersonal morality. And interpersonal morality's limits are likewise defined in abnormal cases by considerations of reflective rationality so important that obeying a moral constraint would defeat the constraint's purpose, which is to help people flourish as reflectively rational beings in a social context. See Figure 6.
[Figure 6: Defining the interpersonal strand's countervailing conditions. The layout of Figure 5, with a further arrow labelled "helps define" running from the personal strand's fully specified recognition rule to the interpersonal strand's countervailing conditions.]
In passing, it can be obvious what to do when the library is on fire, or when you need to double-park to save your life, but it is not always obvious how to resolve conflicts. We saw that the two strands can conflict, and that they can resolve conflicts without outside arbitration, but it is also true that morality does not give us precisely articulated rules of conduct. That lack of precisely articulated guidance can be especially daunting in cases where morality's strands conflict. Real-world moral agents need to exercise judgment. There is no handy rule by which we discern the bright line between emergency and non-emergency cases. The bright line may not even exist; the boundary between the two categories may instead be a grey area in which there simply is no definite answer, never mind a definite procedure for identifying the answer. If there are cases in which it never becomes clear which course of action is morally best or morally required, then we have no choice but to pick something and get on with our lives as moral agents.
The other thing to say is that legislators create bright lines when framing laws, and it can be moral for them to do so. For the sake of argument, suppose we abstract from morally justified constraints imposed on us by institutions. Imagine that, in this abstract situation, there is no definitive moral answer to the question of what we should do when we have to choose between killing one person and letting five people die. Even so, legislators can make sure the question has a definitive even though arbitrary legal answer. An arbitrarily drawn bright legal line can then present itself as a morally justified constraint, justified because it serves the common good to have some bright line or other rather than none. Thus, not all moral constraints are timeless. They can be born with changing laws and social norms, and they can pass away.
5. The Right and the Good
I began by allowing that both utilitarianism and deontology capture important truths about morality. Morality is teleological at the level of recognition rules. How would moral dualism make room for deontological intuitions? Do we need a third strand of morality, generated by applying a principle of universalizability to constraints one might legislate for oneself as a member of a kingdom of ends? Probably not. Universalizability is a standard of rightness rather than a standard of goodness, so in that respect it is not on a par with collective or reflective rationality. It has a sometimes powerful claim on us as a standard of rightness precisely because it is well-grounded. It is grounded in two ways. One kind of support for our commitment to universalizability derives from our commitment to collective rationality as grounds for endorsement (or criticism). This is why people take "What if everybody did that?" to be an important question. This aspect of our concern for universalizability points to real or imagined consequences - consequences of a kind that bear on our collective interests and thus fall within the province of the interpersonal strand. Our commitment to universalizability has a second kind of support as well, which is why deontology is not a form of consequentialism. This second aspect of our commitment is grounded in personal integrity, where having integrity involves being true to ourselves and thus to our principles. (To test one's integrity is to test one's commitment to one's principles.) To say one should be willing to universalize the maxim of one's action is to say one should be willing to let one's actions stand as an example for others, which is a matter of principle in the most literal way. Like the notion of virtue, the notion of universalizability is not really action-guiding; it lacks the kind of content it would need to guide action. But matters of principle are matters of integrity, and matters of integrity are matters of virtue. So the second
part of common-sense morality's commitment to universalizability falls within the domain of the personal strand, having to do with developing one's own character in such a way as to earn one's self-respect. This accords not only with common sense, but with much of the spirit of Kant's defence of deontology as well. If these two proposals fail to exhaust the extent of our commitment to universalizability, then we may need a third strand to capture the residue of deontological intuition. But as far as I can see, the two strands of moral dualism combine to capture what is worth capturing in the common-sense commitment to universalizability. The theory as it stands may be incomplete in various ways, but it seems not guilty of a failure to capture deontology's most important insights. There is, of course, a controversy in moral philosophy over whether the right is prior to the good. Some theorists dismiss the idea that morality's recognition rules are teleological; they assume it contradicts their belief that the right is prior to the good. It would be a mistake to dismiss moral dualism on that basis, though, because moral dualism allows that the right is prior to the good at the action-guiding level. We should keep promises because it is right, and at the action-guiding level this is all that needs to be said. But that does not tell us what makes promise-keeping right, or even (in cases of doubt) whether promise-keeping is right. When it comes to recognizing what is right, the good is prior to the right, and must be so. We judge acts in terms of the right, but when we need to explain what makes an act right, or whether it is right in a doubtful case, we can do so only in terms of the good. Accordingly, my position regarding the controversy over the relative priority of the right and the good is (1) the right is prior at the action-guiding level, and (2) the good is prior at the level of recognition rules.12 Teleological considerations need not enter a moral agent's deliberations about what to do. If we cannot act without breaking a promise, then under the circumstances that may be all we need to know in order to know we categorically should not act. Sometimes, though, we do not know what morality requires of us. Some promises should not be kept, and we do not always know which is which. When we do not know, we need to fall back on recognition rules, which identify the point of being categorically required (that is, required on grounds that do not appeal to the agent's interests and desires) to act in one way rather than another. In any event, it is uncontroversial that the moral significance of institutions (and of the constraints they impose on us) is bound up with how they function, and in particular with how they affect human beings. It may seem that this view defines me as a utilitarian. What really separates utilitarians from deontologists, though, is that utilitarians apply a
principle of utility not only to institutions but to personal conduct as well. Deontologists do not deny that institutions ought to be good for something. Persons may be ends in themselves, but institutions are not persons. They are not ends in themselves, and deontologists need not view them as such. Institutions can command respect as means to the ends of human agents. Unlike human agents, they cannot command respect as ends in themselves. Their morality must be understood in functional terms.
6. Conclusions
My most important conclusions (and those in which I have the most confidence) were stated near the beginning. First, a moral theory can range over more than one subject matter. Second, a moral theory can incorporate more than one recognition rule. Third, a theory can be structurally open-ended, constituted by strands that leave room for adding further strands or for accepting that some of our moral knowledge may be irremediably inarticulate. I gave a substantive example of a two-stranded theory, showing how conflict between the strands is resolved using conceptual resources internal to the two strands, thus showing how the strands weave together to form a unified whole in the absence of an over-arching principle of adjudication. Since my ultimate objective is to examine the degree to which being moral is coextensive with being rational, I isolated these two strands of morality in part because they have interesting connections to rationality. The analysis permits the theory to be open-ended, which lets us explore the reconciliation of rationality and morality without first needing to produce an all-encompassing account of morality. Section 1 accused modern ethical theories of capturing only a fragment of the truth. I do not purport to have captured the whole truth myself. But at least moral dualism leaves room for its own incompleteness. Unlike standard versions of utilitarianism and deontology, it can accommodate more than one strand. If necessary, it can accommodate more than two. Those who want to make the analysis of morality more complete can add other subject matters and apply other recognition rules, as needed.
Acknowledgments
The author thanks Princeton University Press for permission to reprint material from Rational Choice and Moral Agency.
Notes
1 H. L. A. Hart, himself a legal positivist, argued that rules of recognition for law may or may not pick out what is moral when they pick out law. Herein
lies a crucial disanalogy between rules of recognition for morals and for laws. Questions about legality are sometimes answered by simply "looking it up." Arguably, we do not need to know we have moral reason to obey a law in order to recognize it as law. Legal positivism is, roughly speaking, the thesis that a recognition rule can correctly pick out a rule of conduct as legal even though the rule is immoral. But there can be no such a thing as moral positivism, since it is manifestly impossible for a rule of recognition to correctly pick out a rule of conduct as moral when it is not moral. It may not be essential to laws that they have an inner morality, but we can entertain no such agnosticism about morality itself. A recognition rule must pick out actions as morally required only if there is decisive reason (in the absence of countervailing conditions) to perform them. Morality's recognition rules perform their epistemological function by homing in on properties that have genuine normative force. 2 I see no reason to think it would. Recognition rules are not ultimate rules of conduct; primary rules are not mere rules of thumb. Primary rules do not defer to the "ultimate" rules in cases of conflict. Again, consider the legal analogy. In a situation where obeying the speed limit somehow interferes with reading the signs, the primary rule is still binding. The speed limit does not give way to a higher law bidding us to read the signs. Likewise, in ethics, if we recognize that, in the world we actually live in, following the rule "keep promises no matter what" has better consequences than following alternative rules like "keep promises if and only if doing so maximizes utility," then the principle of utility (qua recognition rule) picks out "keep promises no matter what" as being among morality's rules of conduct. 3 Because collective rationality, for example, is not an essentially moral concept, it can provide a fundamental justification within the realm of morality and thus can form the backbone of a useful recognition rule for morals, whereas essentially moral concepts (justice, for example) cannot. 4 Unfortunately, while the scope of a person's I-perspective is fixed (encompassing the person's own interests and preferences), we-perspectives do not have fixed borders. It should go without saying, though, that the plural perspective is no mere fiction. It is not for nothing that natural languages have words like 'we' and 'us' for plural self-reference. The plural perspective we implicitly take when we sincerely worry about the "Why be moral?" question usually does not encompass the whole world. Its scope expands and contracts along with our awareness of whose interests are at stake. (The scope of my plural perspective will not always coincide with the scope of yours, which is one reason why we sometimes disagree about what is moral. Discussing our differences often helps us extend our perspectives in ways that bring them into alignment, though, so disagreement stemming from differences in perspectival scope need not be intractable.)
5 A moral perspective is more specific than a plural perspective not because it is a more narrowly defined perspective but rather because it consists of taking a plural perspective only with respect to issues already defined as moral issues. Given an intuitive understanding of the subject matters of moral enquiry - of the kinds of things that raise moral questions - we have something about which we can theorize. We can devise a theory about how those questions should be answered and why. (Let me stress that I am not offering my intuitions as recognition rules for morals. Intuition enters the picture as a source of questions, not as a tool for answering them.) Given a predefined subject matter, my proposal is that we capture the normative force of morality's recognition rules when we say they home in on properties that, with respect to that subject matter, we have reason to endorse from a plural perspective. 6 This is a characterization of the perspective from which we formulate recognition rules. Whether being moral necessarily involves taking a plural perspective is a separate question. 7 See Schmidtz (1995), chs. 1-5. 8 The objection here is to building precise cutoffs into recognition rules for moral institutions. I have no objection to institutions themselves imposing arbitrary cutoffs that mark, for example, sixteen years as the age when it becomes legal to drive a car. Precise legal cutoffs might be recognizably moral - they might serve the common good - despite being somewhat arbitrary. I return to this point in the section on conflict. 9 That is, we look at how institution X functions in its actual context, which means we take the rest of the institutional context as given. We have good reason to use this baseline not only because it tends to be the one baseline regarding which we have actual data, but also because it is the baseline with respect to which institution X's functioning is of immediate practical significance. By this method, we can evaluate each part of the institutional array in turn. If (for some reason) we wanted to evaluate everything at once, we would need some other method. 10 Schmidtz (1995, ch. 5) argues that this normally is the case. 11 In passing, one person's moral goals can come into conflict with another person's goals in the sense that A achieving his moral goal might be incompatible with B achieving hers. Such conflict strikes me as a genuine practical problem, not an artifact of the theory. Such conflicts actually occur in the real world, and a theory that allows for the possibility of such conflicts is merely telling it like it is. For example, there may be several people for whom it is moral to seek to discover penicillin, or the shortest route to the East Indies, or the structure of DNA. They cannot all achieve their goals, but that fact does not make their goals any less moral. Indeed, it might serve the common good for social structures to encourage them to pursue
their goals, the incompatibility notwithstanding. I thank C. B. Daniels and Duncan Macintosh for discussions of this point. 12 Although John Rawls's official position is that in justice as fairness the right is prior to the good (1971, p. 31), his theory's recognition rule is paradigmatically teleological. We are to recognize a principle as just by asking whether people behind a veil of ignorance would perceive a basic structure informed by the principle as being to their advantage. "The evaluation of principles must proceed in terms of the general consequences of their public recognition and universal application" (p. 138). This is not the sort of statement one expects to find at the core of a theory in which the right is supposed to be prior to the good. Perhaps what Rawls really wants to say is that the right is prior to the good at the action-guiding level.
References
Danielson, Peter (1992). Artificial Morality. New York: Routledge.
Hart, H. L. A. (1961). The Concept of Law. Oxford: Clarendon Press.
Herman, B. (1985). The practice of moral judgment. Journal of Philosophy, 82: 414-36.
Rawls, J. (1971). A Theory of Justice. Cambridge: Belknap.
Schmidtz, D. (1990). Scheffler's hybrid theory of the right. Noûs, 24: 622-27.
Schmidtz, D. (1995). Rational Choice and Moral Agency. Princeton: Princeton University Press.
14 Categorically Rational Preferences and the Structure of Morality
Duncan Macintosh
1. Introduction: The Reduction of Morality to Rationality
Infamously, David Gauthier (1986) has sought to reduce morality to rationality. Simplifying (and exaggerating) his position somewhat, he claims moral problems are partial conflicts of interest, game-theoretically depictable as the Prisoner's Dilemma. Here, one agent can do well only if another does poorly, but both can do fairly well by making and keeping agreements to comply with Pareto-optimal solutions to their conflict; in the PD, this consists in making and keeping agreements to co-operate. And co-operating seems to be the morally required action, since it consists in refraining from exploiting another agent for one's own gain. However, in the PD, it maximizes for each agent, no matter what the other does, to defect. In complying with optimal compromises, then, agents must refrain from maximizing their individual expected utilities, something prima facie irrational on the standard theory of rationality as maximization. So the other part of Gauthier's reduction is the claim that, if it is rational to adopt dispositions constraining one's tendencies to maximize (as it is in PDs), then it is rational to act from that constraint (act as per the compromise) with other agents inclined to do likewise. Gauthier applies maximization first to choice of disposition rather than of action; rationality then dictates acting out maximizing dispositions. Both claims have been criticized. It has been doubted, first, whether all moral problems are really PDs, and, second, whether it is there rational to act "morally" - co-operatively. In this paper, I defend the reduction of morality to rationality. I first introduce claims for which I have argued elsewhere: it is rational to co-operate in PDs. For it is instrumentally rationally obligatory to revise the preferences on which one maximizes when about to face a PD so that one would then find co-operating maximizing and so
rational with others with (rationally) appropriate preferences.1 This instances a more general feature of rational preferences: given preferences are rationally obligatory just in case having them maximizes on those held prior to them; and it maximizes on those one has when about to face a PD, to adopt different ones in going in to a PD.2 I then claim that PDs do not capture all moral problems, for there are ways to fail to be in a partial conflict (PD) with someone that are themselves morally problematic. To be fully moral, not only must one, when about to be in partial conflicts with fellow rational agents, so revise one's values that one is inclined to co-operate with such agents; sometimes, one's values must be such as to place one either in a pure coordination game, or in a partial conflict, both situations where, because of one's values, in order to increase one's own utility, one must increase that of another agent. To fully reduce morality to rationality, it must be shown irrational for agents to have preferences which would fail to put them in PDs or co-ordination games when morality requires them to be in such games. It would then follow that, for agents with fully rational preferences, all remaining moral problems really are PDs, and that since it is rational to be moral in PDs (because rational to acquire "moral" preferences when going in to PDs), morality does reduce to rationality. I try to achieve this result by analyzing the conditions on having fully rational preferences. I claim one's preferences are fully rational just if they could have been arrived at by a series of preference choices of the sort described above (where at each stage in the history of one's preferences, one had ones maximizing on one's prior ones), from the first preferences one ever had, were they rational. And I argue that there are unsuspected constraints on rationally permissible first values, ones deriving from three facts: preferences are just reasons for choices; in choosing one's first preferences, the factors which normally make it merely agent-relative which preferences are rational either do not operate, or will not permit immoral preferences; and in choosing preferences to have in situations where others too have preferences, one rationally must co-ordinate one's own with theirs to ensure that everyone's satisfy the condition of being able to be reasons for choices (on pain of contradicting the assumption that other agents had certain preferences in the imagined situations). These constraints impose on rationality in application to first values, a character like that of the categorical rationality envisioned by Kant. And the only preferences rational on these tests are moral ones. Thereafter, in any later choices of values given first values, if all agents are rational, all will always arrive at moral values.
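The stated condition on fully rational preferences has a simple recursive form. The following is a minimal sketch - my gloss, not Macintosh's own formalism - in which first_ok and maximizes_on are placeholder predicates standing in for the substantive tests the paper goes on to discuss.

    def fully_rational(history, first_ok, maximizes_on):
        # history: the agent's successive preference states, earliest first.
        # Fully rational iff the first preferences were rational and each
        # later state maximized on the state held immediately before it.
        if not history or not first_ok(history[0]):
            return False
        return all(maximizes_on(prev, nxt)
                   for prev, nxt in zip(history, history[1:]))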
2. Instrumental Rationality and the Rationality of Preference-Revision; PD Co-operation Rationalized
Many philosophers doubt that it is rational to act from the constraint Gauthier advocates for the PD.3 I have tried to solve this problem by arguing (1991a, 1991b, 1991c) that it is rational to revise one's preferences so as to value keeping agreements which resolve partial conflicts; then, since one's values have changed, even if one maximizes on one's (new) values (as one should as an instrumentally rational agent), one will keep the agreement if the other agent has similar values. This is possible because - as I argue in my (1993) and (1992) - the structure of hypothetical or instrumental rationality allows it rationally to evaluate the ends instrumentally served. Thus, we may speak not just of the rationality of a choice of actions given one's preferences and beliefs, but of the rationality of one's preferences given that it is rational to do things - possibly including revising one's preferences that would help cause the conditions targeted by one's preferences. Since sometimes other people will act to help cause the targets of one's preferences just if one were to change those preferences, it can sometimes be rationally required that one adopt different ends. For example, in a paradoxical choice situation (PCS) like the PD, others will help one to reduce one's jail time (the aim of one's original preferences) only if one stops caring only about that, and comes more to prefer to keep agreements. More technically, hypothetical or instrumental rationality consists in maximizing (one's individual expected utility as defined) on one's preferences given one's beliefs. But rational agents would use this principle to criticize and revise the preferences on which they would maximize; they would ask whether them holding the preferences they now hold maximizes on those preferences. Where it would not, as in PCSs, the agents would revise their preferences in whatever ways would be maximizing on their original preferences.
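To make the maximizing structure concrete, here is a minimal sketch under illustrative assumptions: the jail terms are standard PD-style numbers of my choosing, not Macintosh's, and the revised agent is modelled crudely as attaching extra value to keeping the co-operative agreement with a partner whose preferences have been similarly revised. On the original preferences, defection maximizes whatever the other does; on the revised preferences, co-operation maximizes against a like-minded partner, while defection still maximizes against an unreconstructed maximizer.

    # Years in jail for (my move, your move); lower is better.
    jail = {("C", "C"): 2, ("C", "D"): 10,
            ("D", "C"): 0, ("D", "D"): 6}

    def original_utility(me, you, partner_revised):
        # Cares only about own jail time.
        return -jail[(me, you)]

    def revised_utility(me, you, partner_revised):
        # Also values keeping the co-operative agreement, but only with a
        # partner whose preferences have been similarly revised.
        bonus = 5 if (me == "C" and partner_revised) else 0
        return -jail[(me, you)] + bonus

    def best_reply(utility, you, partner_revised):
        return max(("C", "D"), key=lambda me: utility(me, you, partner_revised))

    for you in ("C", "D"):
        print("partner plays", you)
        print("  original preferences:          ", best_reply(original_utility, you, True))
        print("  revised, partner also revised: ", best_reply(revised_utility, you, True))
        print("  revised, partner not revised:  ", best_reply(revised_utility, you, False))

The particular numbers matter only for the ordering: any payoffs with the usual PD structure, together with a large enough premium on agreement-keeping, yield the same reversal.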
3. Moral Problems and the Prisoner's Dilemma
So if agents are about to find themselves in PDs, it is rational for them to acquire values which would rationalize co-operation, and then to co-operate accordingly; and were all moral problems merely PDs, and if being moral simply consisted in co-operating in them, then morality would reduce to rationality. But are all moral problems partial conflicts of interests? It might seem so; for surely if there is no conflict in our interests, there are no moral objections to us each acting on our interests - neither of us harms
the other in the sense of preventing him from advancing his interests. While if there is a total conflict in our interests, it would surely be morally arbitrary to require one of us to yield any advantage we may have to the other; so there can be no moral duty to do so. Thus, since our interests are either non-conflicting, totally conflicting, or partially conflicting, since it seems there can be a moral issue only if they are the latter, and since it is rational to compromise here, surely morality reduces to rationality. But what determines whether we are in a partial conflict? Our values, our circumstances of choice, and our powers to act. Suppose we are in a total conflict because, while you have values prima facie morally innocent, I have ones defined this way: whatever your preferences, I prefer that they not be satisfied. Surely I may be malevolent here; and if so, surely I cannot claim it morally indifferent which of us gets what we want? It may be that I have morally problematic values. Or suppose we are in a total conflict because, while if I had to fear you, or needed your help in some project, I would find it reasonable to compromise, I need not fear you, or do not need your help, because I am strong, you weak. Surely the mere fact of our power asymmetry should not mean our situation cannot be morally problematic? And suppose we have no conflict, but the reason for this is that I raised you to be a willing slave to my preferences: whenever I want something, you want that my want be satisfied. (For example, you are the heir to a fortune, I am your guardian, I covet your inheritance, and I raise you into wanting the satisfaction of my preferences, and so cause you to want to give me all your money when you come of age; or think of feminist arguments against the self-oppressing patriarchal values of "real" women, women who value only or primarily the satisfaction of the values of others - their men and their family members.) Surely there could be something morally problematic about this way of there failing to be a partial conflict? Further, it is one thing for my rational actions not to profit me at your expense, for my acting on my goals not to be an interference with you acting on yours. But it often seems insufficient for one not having been immoral that one not have interfered; sometimes, one must offer help, even at one's own expense. This connects with moral innocence not necessarily resulting from power asymmetries between agents. It could happen that, given my powers and values, I need not do anything that would enhance your utility in order to enhance mine. But our intuitive morality requires agents to have some fellow-feeling. Sometimes my values must be such that, even if no action of yours is required in order that my values be advanced, some of your values must be advanced in order for some of mine to be advanced. That is, I should care, at least a little, about the fate of those affected by my
actions and omissions - and this amounts to an obligation that one's values not fail to put one in a PD with another agent just because one's utility does not depend on his actions; rather, one's values may only fail to put one in a PD either because, given his values, your utility and his are independent, or because, given your values and his, your utility is dependent on his utility. So we can have a moral problem not just if there is a partial conflict, but also if the reason we have, instead, a total conflict, is the malevolence of an agent, or the excessive power or weakness of one of the parties; or if the reason we have, instead, no conflict, is the unsavoury origin of the values of one party; or if, while we have no conflict, one of us has a duty to aid the other, to confer a benefit on someone even where our own utility is not (unlike in the partial conflict situation of the PD) even partially contingent on his actions - translating into values, we may have duties to have our utility be partly conditional on the utility of others. So to fully reduce morality to rationality, we need somehow to show that there are rational constraints on the contents and origins of people's values. If we could show it was irrational to have malevolent values, and to have slavish ones, and to fail to have inclinations to help another where that would mean little cost to oneself, we could say it is irrational for people to be in total conflicts due to malevolence, or in non-conflicts due to value enslavement, or to find themselves without a kindness inclining them to aid the needy; and that, if, after our values have passed these tests, we are still sometimes in a partial conflict, it is rationally obligatory there to compromise as per Gauthier's innovation to the theory of rational choice given values, or mine to the theory of the rationality of values given values. Morality then would reduce to rationality. So how can we assess the rationality of our values? We might apply my criterion of instrumentally rational preference-revision to the history of our each arriving at our current values. But we may each have made rational choices from values with which we were first endowed by nature or rearing. And unless something can rationally criticize those, our current values may be of the problematic sort, and yet still rationally permissible. I might have been born with or raised to malevolent or stingy values, or you to slavish ones; or events might have made it rational for us to have arrived at evil values, whatever ones we began with. The only hope of reducing morality to rationality, then, is to find a way of rationally evaluating agents' first values. For it is from the vantage of these first values that agents choose their later values; and it is from the vantage of these later values that they choose their current
values. So our first values determine, by way of a series of rational choices of values given prior values, the values we now hold (and determine their rationality). The reduction project requires it to be irrational to have first values morally problematic (assuming everyone else is to have rational first values), and irrational, assuming everyone else is rational, to move to values morally problematic. I shall now try to get this by deriving rational constraints on first values from the principles of rationality applied to the special case of first values, and from reflection on the metaphysics of values qua reasons, reflection, in particular, on the nature of preferences. I shall argue that, applied to the choice of first values, the metaphysics of values qua reasons give rationality a character akin to the categorical rationality articulated by Kant.
4. Hypothetical Rationality and the Rationality of First Values
It is sometimes said that morality is indifferent as between persons, being universal, impartial and fair, while rationality is particular as between persons, being individual, partial, and self-serving. But in fact, rationality is indifferent as between persons too; it is just that it is not indifferent as between values: what is rational for someone in a given circumstance depends on what she values. Different people with different values, in the same circumstances, will find different actions rational. However, people with the same values, in the same circumstances, will find the same actions rational. Still, the whole reason for doubting that morality could reduce to rationality is that, while one's moral duties hold categorically, independent of one's values, one's rational duties seem to hold only hypothetically, depending on one's values. And it seems it could only be a lucky accident if the actions recommended by hypothetical rationality to a person given her values were to be the same as those obligated by morality. For suppose some behaviours to be morally required independently of rationality (e.g., by the requirements of universality and impartiality, or of equal concern and respect for all affected parties); and suppose rationality, at least instrumental rationality, to be some preference-sensitive function (like the maximization function) from preferences to choices of behaviour (preference-sensitive in that the actions it recommends vary with preferences): then for any such function, it is possible to specify preferences which the function does not take over into moral behaviour. And so agents who happen to have such preferences will not find it rational to behave morally. There are only three ways out of this: revise our conception of what is morally required; find further constraints on the rationality of
preferences; or show that the functions determining the morality and rationality of actions are co-intensional. I shall develop the latter two options, deriving the third from new arguments for the second. The project of reducing morality to rationality was never to show that, given arbitrary values, moral conduct is rationally obligatory. Indeed, no one has ever held this. Hobbes and Gauthier are sometimes wrongly believed to have held it. But Hobbes does not describe morality as emerging rationally from arbitrary values. Rather, he assumes that people have selfish values, but not malevolent ones. They care about themselves, but they do not have values logically (definitionally) able to be satisfied only by the non-satisfaction of the values of another agent. They are, if you will, not malevolently tuistic (where to be tuistic is to see intrinsic value in the welfare or illfare of others, to be altruistic, to see it in their welfare, malevolent, in their illfare). Meanwhile, Gauthier thinks morality emerges from rationality for agents assumed to have non-tuistic values, values (again) not defined as targeting the satisfaction or non-satisfaction of the values of others. (He was trying not to assume that agents had fellow-feeling, which assumption would have trivialized the reduction of morality to rationality, but only by basing it on a mere contingency). But in stipulating non-tuism, he in effect helped himself to the assumption that agents do not value each other's illfare. Indeed, it is because these authors merely assume these constraints on values, rather than showing the rational obligatoriness of one's so restricting one's values, and of having somewhat altruistic values, that their projects of reducing morality to rationality fail. The closest one comes to a philosopher who thinks he can motivate arbitrary agents, with arbitrary values, into acting morally, is Kant. And he only got this by stipulating that agents had the power to go against hypothetical rationality given their values, and to choose instead by principles they could will to be laws of nature. (Or rather, he claimed it was a condition on agents being moral agents, ones able to do the right thing simply because it is right; if agents could not so choose, they were just machines controlled by the values given them by nature and culture - they did not properly choose at all, and so their behaviours, not really being actions, would not be subjectable to moral evaluation.4 But he also argued that actual agents can make categorically rational choices, by arguing that the structure of nature might accord with that of noumenal choice.) In effect, then, while he took agents as he presumed they were and claimed they had motivations to morality, this was not provided by their values, which, therefore, serve as idle cogs in the generation of moral behaviours. The more plausible hope of a reduction will not consist in rationally deriving morality from arbitrary preferences, but from the conditions
on the possibility and rationality of preferences. There are two reasons one might not think this possible. First, since Hume, it has been thought that the only rationality constraint on one's values is that they be consistent with one's beliefs: if you value A, and believe doing B will get you A, ceteris paribus, you must come to value doing B. That apart, rationally, you may value anything. If, however, I was right that it can be rationally obligatory to acquire certain preferences when that would advance one's prior preferences, then there is a further constraint on rational value: if one values X, and if coming to value Y for itself would help to get you X, you must come to value Y - see my (1992). But furthermore, I shall argue, the very concept of a value - and so of the values which can figure in arguments for the hypothetical rationality of values given prior values - can constrain the rationality of initial values, of values given no values. For it is in the concept of preferences that they must be able to serve as reasons in the circumstances in which they are held, which in turn requires that there be actions there available to agents which these values could give them reason to do. And this constrains rationally willable first values: you cannot will to have those that no action could serve in the circumstance of holding them. The second reason one may not have thought it possible to derive morality from rationality is that while morality has a universalizability requirement, rationality seems not to. But in fact, both do. What made hypothetical rationality particular to individual persons, rather than indifferent as between them, was that it was sensitive to values, which vary across persons. But in a rational choice of values made prior to having any values, rationality becomes impartial to individual persons, indifferent as between them, because there is not yet anything for it to be partial to a person by. The principles determining the rationality of actions and of new values are only non-indifferent to persons so far as persons have (possibly different) values. The principles are only particular to or dependent on values, and thereby persons, for choice situations where the principles operate given people's prior values. But if I am right on the first point, that rationality also operates where an agent has no prior values, we get the surprising result that if it here recommends anything to one agent, it recommends the same to all agents: it is indifferent to persons because, in this condition, persons do not have different values to make rationality yield different enjoinments for them. So rationality has a universalizability requirement in the situation of choice of first values: it is only rationally permissible to will to have values one could will everyone to have in the same circumstance. This requirement follows from the one normally operant even in standard applications of instrumental rationality to the choice of values given values: values can be ones I rationally should adopt given
my prior ones in my circumstances, only if they would be ones rational for anyone to adopt had they similar prior values, and were they in similar circumstances. Now, if the antecedent of this requirement is false, as it is for everyone when they have no prior values, then everyone, prior to their having values, is in the same circumstance, that of having no values. So whatever rationality would there recommend for one valueless person is what it would recommend for any - indeed, every - valueless person; thus, a value can be rational for one person only if everyone could rationally have it. And I shall argue that values which would pass these constraints of rationality in this circumstance would be moral values, ones rationally inducing of moral behaviour. We shall proceed, then, in two steps. First, we show that rationality constrains first values. Second, we show that values are choosable by oneself as one's first ones, only if so choosable by everybody; in choosing one's first values, one must ask if everyone could have them, i.e., one must, in effect, choose for one's self only values one could still choose if one were choosing that everyone have them. The only values which will pass this test will be moral ones.
5. On Rationally Possible Values
It might be thought that making a rational choice of first values is impossible. For in choosing without prior preferences, one has none to use in deciding which to adopt. But there are other constraints on rationally appropriate first preferences. To see these, we must consider what preferences are, what role they play in rational choice. Preferences are just reasons for choices; they rationalize the choices believed to make most probable the conditions they target. A preference is characteristically a reason for choice of action, but it may also be a reason to be a certain way, e.g., to have some other preference; for sometimes, as we saw for the PD, the very having of a new preference can advance the targets of ones now held. It is then rational, given the old preferences, to supplant them with the new. Here, we might speak of the old as being reasons for choosing the new. And for simplicity, we shall count revising, adopting, having, and keeping a preference as possible choosable actions - see my (1992). That a preference is a reason for choice, and that the having of a current preference cannot be rational if having some other one would better advance the former's target, are connected: having a preference is rational only if having it advances its target. Normally, a preference has this effect by rationally motivating its holder to act so as to make the obtaining of its target more likely. A preference can do this only if its target is one for which, in the circumstances where the preference is held, there is some action available to its holder whose performance
would advance its target. That is, a preference can do this only if it is "actionable." So, normally, a preference for some target can exist only if, when the preference is held, there is some action it then gives its holder reason to do, one that would advance that target. And that a preference is, normally, nothing but what can play this role means that, normally, one can only prefer targets some action available to one in the circumstances can help make obtain. (So it is not enough for a preference now to be rationally permissible that its target is one some action would advance in some other possible circumstance than that in which the preference is actually held; rather, what you can prefer varies with which possible targets are such that you can now do something to advance their obtaining - see my (1994).) Summarizing: a preference is just a reason for a choice; so one can only have a preference for conditions which are such that, when preferring them, there is some action one could do that would better the odds of those conditions' obtaining (would "probabilify" those conditions), so that, in preferring those conditions, one has reason to do that action. But it is also possible for you to have a preference you cannot act on, but which I will act on for you. For recall that being a certain way, namely, having a certain preference, can sometimes be a way of causing states - e.g., in PCSs, you having certain preferences induces others to act so as to cause certain states, as when you acquire the preference to co-operate in the PD in order to induce others to co-operate with you, those who will co-operate with those who prefer to co-operate too. So a preference you cannot act on yourself remains one you can have if you having it makes it likely that someone else will advance it for you. Here, you merely having or adopting that preference is an "action" which advances its target, since it gets another agent to advance its target; and so the preference is permissible. We can assimilate a preference's having, in order to be rationally permissible, to motivate its holder or someone else to advance its target, into one compendious condition on the rationality of a preference: a rationally permissible preference must be "self-advancing," or "self-maximizing" when held. There are two other constraints on the preferences which might be rational as one's first ones. One derives from the fact that some states of affairs are impossible (either inherently, or given the circumstances): since no action can probabilify an impossible state, and since preferences are only individuated by which actions would probabilify their target states, there can be no preferences for impossible states - see my (1993). Because one cannot prefer impossible conditions, nor (as we saw earlier) possible ones not probabilifiable by any of one's available actions (not even the "actions" of merely having some preferences), excluded as possible targets of preferences are things like the past, the
contradictory, the known-to-be-contrary to fact, the already-known-to-be-a-fact, and anything contrary to or ineluctable given natural law. (Unless it is one's preferring it that makes it ineluctable; the point is, you cannot prefer anything whose likelihood your motivated actions cannot increase. But because your actions can sometimes make a difference, determinism would not mean nothing can be preferred.) The final constraint derives from the preceding one: since were there to be preferences for impossibilia, they could not be satisfied (for what satisfies a preference is the obtaining of its target state, and an impossible state is one that cannot obtain, so a preference for it could not be satisfied), not only can one not prefer impossible states, one cannot have impossible-to-satisfy preferences. This also follows from how having a preference relates one to its possible satisfaction. Say you prefer that condition x obtain. For x to obtain, given that you prefer this, is for the preference for x to be satisfied. So in preferring x, either you are also preferring that the preference for x be satisfied, or once you saw that to get x, you had to satisfy the preference for x, rationally you would prefer its satisfaction. But then you cannot have an unsatisfiable preference. For in having one, you would be preferring its satisfaction; it cannot be satisfied, so you would be preferring the impossible; but you cannot prefer the impossible, and so cannot have that preference. A necessary condition of having a preference, then, is that it is at least possible that it be satisfied in the circumstance in which it is held. So we have found four constraints on the possibility and so rationality of a preference, first or otherwise: it must be self-advancing to have it, it must be actionable, it must not target the impossible, and it must be satisfiable. We may now apply all this to the rational choice of first values. Perhaps someone with no preferences would have no reason to come to prefer anything. Still, were he to form preferences, for them to be rationally permissible they must meet our four constraints. And we shall take the question, which first preferences is it rational to have? as the question, given that one has no preferences, if one were to form some, which would it be permissible to form after all the irrational or impossible ones are ruled out? If we combine the constraints on first preferences with its being rationally obligatory to maximize on one's preferences, and with my extension of this into the rationality of preferences given prior preferences, we have a general theory of rationality in values. If one has, as yet, no preferences, it is rationally permissible to adopt any ones, in any combination, provided they jointly meet our four conditions. Thereafter, it is rationally obligatory to revise one's preferences in whatever ways would be maximizing on them (likewise for one's
revised preferences, and so on), provided the revisants meet the four constraints; and it would be permissible at any time to acquire any new preferences provided they meet the constraints, and provided it would not be anti-maximizing on preferences one already has. Thus, one's current preferences are rationally permissible just if derived from permissible first preferences by maximizing revisions and non-anti-maximizing accretions. Rational actions, in turn, are ones maximizing on rationally permitted current preferences.
6. Categorical Rationality and the Values Rational for Me Only if Rational for Everyone
Supposing then that it is necessary to values being rational as one's first ones that they be (among other things) actionable and satisfiable when held, we turn to the second matter: showing that they are rational only if actionable and satisfiable if all agents had them as their first values. Recall that one reason it seems morality cannot be reduced to rationality is that morality is universalized and categorical, rationality, particular and hypothetical. But rationality is like this only because it is particular to the individual values of agents. If agents have no values, as before acquiring their first ones, then such rational constraints as there may be on their first values are neither particular nor hypothetical. So if it is rationally required or permitted for anyone that she have certain first values, it is also rationally required or permitted for everyone. But then it can only be rationally required or permitted for anyone to have certain first values if it can be so for everyone; one can only rationally will that one's self have a certain first value if one could rationally will that everyone have it. This constitutes a fifth constraint on rationally permissible individual first values. But this may be too quick. That agents choosing their first values are in the same situation value-wise (that of having no values as yet), does not mean each agent rationally must choose her values as if by choosing them for all agents. For even if agents could only rationally choose actionable and satisfiable preferences, it is false that the only thing which makes rationality particular to persons is their individual values. Agents also differ in their powers; and so surely they differ in which values they can have, because they differ in whether their powers give them an action to advance some goal which might be targeted by their values. Thus, values which may be rational for me to adopt, because my powers would let me advance them, might not be ones rational for you; you might be too weak to advance them. But then different values are rationally permissible for different agents. So if the agents knew their individual powers, they could, rationally, choose different first values; I can rationally will to have certain values, without
having to be able to will that everyone have them. But then, if I knew I was strong and well-situated, surely I could adopt, say, malevolent values, confident of them being actionable for me. The constraints of rationality would not filter out immoral values. So to keep faith with our moral intuitions it seems we must deny agents such knowledge in choosing their first values. Our reduction project must show it rational for agents to choose their values indifferently to their individual powers and circumstances. But how can we justify this using only rationality as the test of proper first values? How do we get from one's choosing values prior to having values, to choosing ones prior to knowing one's circumstances and powers? Well, persons are individuated by their values; so if a hypothetical first-value-chooser is valueless, it is no given person; and nor, then, does it yet have such and such powers, in such and such circumstances. So it cannot help but choose values in ignorance of such features of determinate persons. One's identity as a person is given by the rationally self-updating conating cognizer one is. If no one yet has any conations (as where all agents are choosing their first values), no one yet has a personal identity; no one is yet you, for example. So no one is such that the principles rationalizing values figure with partiality for you. So, that some person will have special powers does not make it rationally permissible for an ex ante chooser to pick values which, were she the powerful person, would mean she had satisfiable values, though a weaker person would not. For no chooser of first values knows she is especially powerful. But in imagining yourself choosing your first values, if you are no person then, in what sense are you choosing your values? Well, it is not that you are no one, but that who you are is indeterminate as between the agents for whom you are choosing values. Thus, your choice problem is to choose values rationally appropriate no matter which of the people who will come to exist by the infusion of values will be you. This makes the choice of first values a problem whose solution requires agents to imagine the choice from an original position behind something like a Rawlsian veil of ignorance about the identities of those who must live with the values chosen. For Rawls, this was just a thought-experimental apparatus for elucidating the concept of fairness, something of no interest to, and with no power to influence, those currently indifferent to justice - someone could make Rawls' calculations, and yet not be rationally required to change his values or behaviour. But our original position represents the vantage from which is decided the rationality of one's first values, and so of all later ones derived from them. And so it speaks to all rational agents, not just ones who already value, say, justice. An agent whose current values could not have been derived from first values chosen from our original position, would have irrational values.
So the question is now, which preferences is it rationally permissible to prefer people to have, given that you know no particular facts about your powers and circumstances, but only general facts about everyone's? You know about logic, laws of nature, the different powers of different agents, and so on, but not about whether you are an agent with such and such powers. As we argued above, it is only rationally permissible to prefer to have satisfiable first preferences. To decide which ones are rational, then, we must consider what makes a preference impossible to satisfy, thence to see what it is impossible to prefer. We saw that one cannot have preferences for things one can now do nothing about. But there are also social limits on what one can prefer, on which, more shortly. Another reason morality has seemed irreducible to rationality is that morality seems to have a determinate content computable from the moral requirement on everyone to treat everyone with equal concern and respect; but no such content seemed present in the requirement that one have had rationally permissible first values. For it seemed that, prior to one's having any values, there are none for instrumental rationality (long thought to exhaust practical rationality) to permute into a determinate recommendation for choice, neither for choice of values, nor of actions. Without given values, the duty of instrumental rationality - to advance one's values - seems empty. But it has proved wrong to think rationality contentless where one has no values; for there are constraints on rationally choosable first values: to value x is to have a reason to act, and to value satisfaction of the value for x; thus, x can be valued only if the preference for x is actionable and satisfiable, and so the first values one may will to have must have these properties. If a choice of first values is rational for a given person only if rational for all persons (which follows from all first-value-choosers being in the same predicament), and if, to be rational for one person, the values must be actionable and satisfiable for him, then they can be rationally havable for him only if they would remain actionable and satisfiable for all persons were all to have them. But now we have determinate constraints on the content of rational first values. Values are rationally permissible just if, were all agents to have them as their first values, all would find them actionable and satisfiable. So a rational agent can only will to have as his first values, ones he could will that everyone have; and in so willing, he is willing that everyone have ones everyone could act on and satisfy should every person have those values. This gives us a formal structure for rational values, one very like that Kant proposed with his test for categorically rational (and so possibly morally obligatory) actions: for him, an action is categorically rational just if you could will its motive principle to be a law of nature, one all agents followed in like circumstances. For us, a value is rational just if
you could will that all agents have it, which they could only if all could then act on and satisfy it. Both tests work independently of the values agents happen now to have; and for both, something can pass them for one agent only if it could do so for all. Both yield principles of rational choice appropriate no matter who you are, nor what your circumstance, principles categorical in holding no matter what, rather than hypothetical in holding only on hypothesis of certain givens which may vary across individuals and circumstances. But while Kant's test seemed imposed on the motivations of agents, ours derives from what it is to have a motive. But now we must show how our criterion entails the rational impermissibility of the values which posed difficulties for Gauthier's project.
7. Values Rationally Permissible and Impermissible
We identified several values as inconsonant with morality in morally criticizing Gauthier. To complete the reduction project, we must show the irrationality of malevolent values (ones logically defined as aiming at the non-satisfaction of the values of others), slavish values (ones aiming only at satisfying the values of others), bullying values (ones inclining one to profit at the expense of others), and stingy values (ones inclining one to withhold aid to another, even where it would involve little cost to oneself). Are these values irrational by our measure? The question what values to have, asked prior to having any, must be answerable the same way for everyone. So I can only say I ought to (or may) have such and such values, if I can say everyone ought to (or may). Values are reasons for action. A proposed value is only such a reason if some action is such that doing it advances that value. If the value is for an impossible state, there is no such action, and so that state cannot be the target of a value. Suppose (as we just argued) that values can only be those one may have if everyone can have them. Suppose everyone can have them only if, if everyone had them, everyone would have an action to advance them. At least two sorts of ends are ruled out by this test: the end of having (only) others' ends attained (the end of slavish values, those feminists bemoan in "real women"), and the end of having the non-attainment of others' ends (the end of malevolent values). The first fails because if everyone has as their only end, the obtaining of others' ends, no action advances anyone's ends; it is impossible that the state obtain that everyone's ends so defined are satisfied, because unless someone has ends defined independently of the attainment of the ends of others, no state of affairs is described as an end as such, and so no action is made such that one has reason to do it by virtue of its probabilifying some state. The second fails because, if everyone wants that no one (else) get what they want, again, their
wants do not combine to define a state such that some choice could help procure it. Unless one of us wants something other than that others not get what they want, no state is one everyone wants not to be brought about, and so again, because of the circular interdependence of these values, they are not actionable, nor satisfiable. That leaves bully and stingy preferences. Bully preferences are ones to profit at the expense of another. If everyone so preferred, would everyone have actions to advance their preferences? No. For everyone can profit only if no one is successfully bullied; for if you are bullied, you have been deprived of possible profit - impossible if everyone profits. So it is impossible for everyone to advance their preferences if everyone has bully preferences (for there is no such state as the one in which everyone's bully preferences are satisfied). And it is rationally permissible to have bully preferences only if, if everyone had them, everyone could advance them. They could not, so you may not have them. But what of stingy preferences? These are ones not to give help, even at little cost to oneself. Could everyone prefer to refrain from helping? Would everyone's so preferring be consistent with everyone's having an action available to advance their preferences? Let us consider. To have stingy preferences is to prefer to withhold aid from those who need it to advance their values. They only need it if they cannot advance their values without it. If you have stingy preferences, either others have preferences they can only advance/satisfy if you help, or they do not. If they do not, your preference is idle. If they do, your preference is advanced/satisfied only if theirs is not. But a distribution of preferences among agents is permitted only if the preferences so distributed are co-advanceable and co-satisfiable. That is false of distributions featuring non-idle stingy preferences. So in any situation where a moral problem could arise, i.e., where another's welfare (qua preference satisfaction) is at risk, you cannot have such first preferences. It would seem, then, that it is rationally impermissible to have as one's first values, malevolent, slavish, bully and stingy preferences; and since these exhaust immoral preferences, it has proved rationally impermissible to have immoral preferences. The requirements on a satisfactory reduction of morality to rationality are met for first preferences.
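The verdicts of this section can be displayed as a small consistency check on universal value profiles. The Python sketch below is my own gloss, offered only as an illustration: the way each value type is encoded, the population size, and the restriction of the stingy case to its non-idle form are assumptions of mine, and the code mirrors the conclusions argued for above rather than deriving them independently.

```python
# A toy rendering of the universalizability test applied to first values.
# The encodings are an illustrative gloss on the text, not the author's formalism.
from itertools import product

N = 3  # any small population suffices for the illustration

def grounded_when_universal(value_type):
    # Slavish and malevolent ends are defined only by reference to the
    # (non)attainment of others' ends; if everyone holds them, no end ever
    # bottoms out in an independently specified state of affairs.
    return value_type not in ("slavish", "malevolent")

def co_satisfiable_when_universal(value_type):
    if not grounded_when_universal(value_type):
        return False  # no state is even described, so nothing is actionable or satisfiable
    if value_type == "bully":
        # Each agent's end: she profits, and someone else is thereby deprived of profit.
        for profits in product([True, False], repeat=N):
            if all(profits) and any(not p for p in profits):
                return True  # never reached: "everyone profits" excludes "someone is deprived"
        return False
    if value_type == "stingy":
        # Non-idle case: another agent's end needs exactly the help the stingy
        # end consists in withholding, so the two cannot both be satisfied.
        return False
    return True  # e.g. ordinary, independently specified projects

for v in ("independent project", "slavish", "malevolent", "bully", "stingy"):
    status = "permissible" if co_satisfiable_when_universal(v) else "ruled out"
    print(f"{v}: {status} as a universal first value")
```

As in the text, the stingy verdict covers only the non-idle case; where no one needs help, the stingy preference is merely idle and is not caught by this check.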
8. On Values Rationally Permissible for Me Given the Possible Values of Others (Objections and Possible Renovations)
The argument (in Section 6, above) from the identity conditions on persons to the notion that all agents are in the same predicament in selecting first values, and so to each agent's representing all agents in such choices, is pretty shaky. Can we dispense with it? Perhaps. Here is an argument I build on in my (1994). Suppose your identity is determinate
when you choose your first values, so that you know your powers and circumstances. In acquiring preferences, you are forming a preference-ranking of every possible state of affairs. Some of such states contain other people. People have preferences. The conditions on preferences possible for them are the same as for you - their preferences must be self-advancing, actionable, must not target the impossible, and must be satisfiable. Your preferences determine how you will use your powers, and this may lead you to do things that would make certain preferences of others unactionable, impossible of satisfaction, and so on, contradicting the supposition that they have those preferences. So you cannot prefer conditions in which they have preferences which would not satisfy the conditions on possible preferences given the preferences you have, given your powers, and given the actions these two things will conspire to make you do. Even knowing your powers, then, the preferences you can will to have are limited to those action on which would not make impossible others having the preferences you suppose them to have in the situations as you preference-rank them. Rationally, you must limit the preferences you choose for various situations to those action on which is compatible with other agents' preferences in those situations remaining actionable, satisfiable, etc. But then you cannot rationally will to have stingy, bully, or malevolent preferences. For the obtaining of their targets requires that other people have certain preferences, but, paradoxically, have them when, due to the actions your preferences would rationalize, the conditions needed for their preferences even to be possible, would fail. That is, to will to have such preferences would be to will to prefer the impossible, which is itself impossible. So no matter how powerful you (know you) are, it is rationally impossible for you to have any of those three sorts of immoral preferences, since they are preferences both that other agents have certain preferences, and that the conditions required in order for them to be able to have such preferences not be satisfied. Since that is an impossible compound state, and since one cannot prefer the impossible, one cannot will to have those immoral preferences. This approach is also better in another way. For arguably the universalizability test for rational first values was more strict than was appropriate given our analysis of the nature of preferences and of the implications of this for which ones are individually rationally permissible. Our analysis showed that a preference is possible (and so rationally permissible) only if actionable and satisfiable. This does not entail that a preference is rational only if it would remain actionable (etc.) were all to have one like it. It only shows that one cannot will both that one have certain preferences in certain circumstances, and will that others there have preferences unactionable and unsatisfiable given one's proposed preferences and the consequences of one's holding them in those circumstances.
But there may be trade-offs in this approach. True, one need not restrict one's preferences to ones everyone could jointly have; one need only restrict them to ones consistent with those others are imagined to have in the circumstances in which one proposes to have certain preferences of one's own. But then are not prima facie morally problematic values rationally permissible? For could one not will to have the values of masters in situations in which others are stipulated to have the values of slaves? If so, either we must repudiate that part of our morality, developed by feminists, which holds it morally problematic for people to prefer nothing but the satisfaction of the preferences of others; or we have not found a way to perfectly moralize values from rationality alone. Fortunately, such values would not be rationally permissible, at least on a worst-case reading of their content; for they would not be co-satisfiable. Suppose slaves have the slavish values discussed above, masters, the malevolent ones: then we have, again, circularity. I am the slave, you the master. I want that you get what you want; you want that I not get what I want. But unless one of us has an independently defined want, nothing is such that you want it and I want you to get it. And there is in this something like the liar paradox: if I want that you get what you want, and you want that I not get what I want, then you want that you not get what you want. To get what you want, you must both get it and not. Impossible. So even by this test, one cannot rationally will to have either of malevolent or slavish values in situations where other agents are imagined to have the other value. Still, it seems possible to will to have slavish values where others would have otherwise morally innocent first-order values. Fortunately, I think I have a way around this, but I must leave it for another occasion, namely, my (1994).
9. The Reconciliation of Morality and Rationality
I argued that certain values are rationally impermissible. But the foregoing is also informative on which ones are permissible, and on which moral system these would constitute. You may have any preferences provided if everyone had them, everyone could advance and satisfy them; and agents may differ in their preferences, provided given those they have, all may advance and satisfy them. Some values are such that a given agent can only pass these tests in having them if her having them happens under a co-ordination constraint with other agents, such that the resulting pattern of values among agents consists of co-satisfiable (etc.) values. And in some patterns, some of the values in them are satisfiable just if some of the value-holders in the pattern have values inclining them to help some of those with certain other values. For example, one agent can only have the desire, as a handicapped person,
to live a relatively normal life, if non-handicapped agents desire to see that the handicapped live such a life, even should they need help in this. So that pattern of distribution of rational first values in a community must be such that the several values of the agents in effect implement social arrangements and goods distributions akin to those which agents would choose in something like Rawlsian ignorance (thence to ensure the actionability of the values of the least powerful agent). And while it is impossible that everyone have logically tuistic benevolent values, everyone can have ones consistent with those that citizens of a Rawlsian-chosen society would have; for such citizens would have various individual projects, but would share the project of having arrangements advantageous to the least well-off agent. Agents could still get into PDs, for sometimes, by compromise in pursuing their several life-projects, the agents could enjoy a co-operative surplus; and our previous results on the rationality of revising one's values when about to face a PD will guarantee that agents behave non-exploitively when in the PD. But it would not be possible for agents to fail to be in PDs as a result of the malevolence of agents' values, nor as a result of bullying or stingy inclinations. (And nor would it be possible as a result of agents having slavish values, at least not if the universalizability test works; or failing that, at least not if I can defuse the objection to the co-ordination test that it leaves slavish values rationally permissible.) So we have here an argument for the co-intensionality of the notion of justice as fairness, and the notion of practical reason as maximization on individual preferences, where all preferences must be actionable and satisfiable; for the two notions unify at the point of rational choice of first values. Rationality proves normatively thick, like a moral system. But its "content" derives from the structure of all valuing and choosing, not from some arbitrary conception of what is truly valuable. It does by deduction from the nature of value and the principles of instrumental rationality what moral systems do by deduction from specific value premises.
Acknowledgments
For helpful comments, my thanks to my colleagues and students at Dalhousie University, and to my co-participants in the conference of which this volume is the proceedings. They have made me aware of many possible objections, ones to which I cannot here reply, given the limitations of space and of my ingenuity. But I hope I say enough to make the project seem worth exploring. I enlarge on and defend it in my (1994). The writing of this essay was supported by a research grant from the Social Sciences and Humanities Research Council of Canada.
Notes
1 Actually, it is only rational to co-operate in PDs where one has had a pre-interactive opportunity to undergo alterations in one's values, and only when facing agents whom one knows will likely be caused by one's altered values to reciprocate co-operation. See my (1991a, 1991b, and 1991c).
2 These preferences need a rather complicated structure in order to avoid making the agent vulnerable to exploitation from those who have not revised their preferences, and in order for his preferences not to be circularly defined relative to the preferences of those agents with whom the new preferences incline him to co-operate. See my (1991a, 1991b, and 1991c).
3 For arguments to this effect, see my (1991a). And for references to other philosophers with similar reservations, see my (1991d and 1988).
4 For a good interpolative exposition of Kant's views, see Solomon (1993), pp. 693-709.
References
Gauthier, David (1986). Morals By Agreement. Oxford: Clarendon Press.
Macintosh, Duncan (1988). Libertarian agency and rational morality: Action-theoretic objections to Gauthier's dispositional solution of the compliance problem. The Southern Journal of Philosophy, 26: 499-525.
(1991a). Preference's progress: Rational self-alteration and the rationality of morality. Dialogue: Canadian Philosophical Review, 30: 3-32.
(1991b). McClennen's early co-operative solution to the Prisoner's Dilemma. The Southern Journal of Philosophy, 29: 341-58.
(1991c). Co-operative solutions to the Prisoner's Dilemma. Philosophical Studies, 64: 309-21.
(1991d). Retaliation rationalized: Gauthier's solution to the deterrence dilemma. Pacific Philosophical Quarterly, 72: 9-32.
(1992). Preference-revision and the paradoxes of instrumental rationality. Canadian Journal of Philosophy, 22: 503-30.
(1993). Persons and the satisfaction of preferences: Problems in the rational kinematics of values. The Journal of Philosophy, 90: 163-80.
(1994). Rational first values and the reduction of morality to rationality. Unpublished manuscript. Halifax, NS: Dalhousie University.
Solomon, Robert C. (1993). Introducing Philosophy: A Text With Integrated Readings. 5th Edition. Toronto: Harcourt Brace Jovanovich.
15
Why We Need a Moral Equilibrium Theory
William J. Talbott
1. Introduction
If Kant's derivation of the Categorical Imperative had been successful, he would have completed two overlapping projects in moral philosophy: (1) the deduction project (moral deductionism): the project of deriving the fundamental principle or principles of morality from the most general constraints on rational agency, in order to show that, whatever ends or goals one may have, it cannot be rational to act immorally in pursuing them; (2) the reduction project (moral reductionism): the project of formulating principles that would determine the moral permissibility or impermissibility of an act purely as a function of its non-moral features.1 It is generally agreed that Kant did not succeed at either project. But he has very many intellectual heirs, including many of the participants at the conference on "Modeling Rational and Moral Agents," who use tools and examples - most prominently the Prisoner's Dilemma - from formal theories of rational choice to construct models of rational and moral agents. Anyone who attempts to model rational and moral agents very quickly becomes aware of a tension between the two projects of deduction and of reduction. Kant's discussion of his Categorical Imperative nicely illustrates the tension. When his focus was on the deduction project, Kant treated his principle as merely formal, with minimal content (1785, pp. 38-39). How else could he expect to show that it was a constraint on all rational action? But when his focus was on the reduction project - that is, when he was intent on showing that his principle correctly distinguished morally right from morally wrong acts - then Kant interpreted his principle in a way that provided it with much more substantial content (1785, pp. 40-41). This tension in Kant's account is mirrored in current attempts to model rational and moral agents. If the attempt is motivated by a preoccupation with the deduction project, the agents and rules that are called "moral" seem to be but pale shadows of agents or rules that one would ordinarily or pre-theoretically think of as moral.
In this paper, I propose to take over some of the apparatus employed in modeling rational agents to solve some problems that arise in attempts to model moral agents. Because I borrow from the equilibrium analysis employed in non-co-operative game theory, it is almost irresistible to suppose that I am setting the stage for the derivation of a principle of morality from non-co-operative game theory. This would be a mistake. I intend to be contributing to the reductionist project, not the deductionist project. My goal is to take the first steps toward the formulation of a moral theory based on an equilibrium analysis - which I refer to as a moral equilibrium theory. My impression is that preoccupation with the deductionist project has significantly retarded progress on the admittedly less glamorous reductionist project. Therefore, I hope that this paper will provide an opportunity to think about moral reductionism without any deductionist inhibitions. The moral reductionist project is not trivial, as it would be if there were no constraints on the content of potential rules. "Do the right thing!" (or, more generally, "Don't do anything wrong!") and "Do what a virtuous person would do!" are rules that seem unobjectionable as moral advice, but they are not significant contributions to the reductionist project, because they distinguish between moral and immoral acts in terms of their moral properties. Moral reductionism requires that the distinction be made in purely descriptive (non-moral) terms. To even suggest that there might be a successful completion of the reductionist project will seem to some readers to be a sign of serious philosophical confusion on my part. If there were general agreement on moral judgments, there would at least be a clear criterion of success - moral reductionism would seek to find a purely descriptive (non-moral) basis for the generally agreed upon moral judgments. But given the variety of moral views that have been espoused, and the extent of disagreement even among people who seem to share a moral point of view or moral tradition, it is not at all clear that it even makes sense to speak of "success" or "progress" in the reductionist project. Whose moral judgments do I propose to "reduce"? This is too large a question for me to try to answer here. In this paper I focus on the One-Shot, Two-Person Prisoner's Dilemma (PD), because there is near universal agreement that, at least under favorable circumstances, morality would require at least some Conditional Co-operation in such cases. In my teaching, I have found that students - even students who are inclined not to co-operate in a PD - have no difficulty in recognizing that there is a conception of morality as Conditional Co-operation that, to put it intuitively, requires moral agents to be willing to Co-operate with other Co-operators in PD situations. On this basis,
I simply assume that there is a shared conception of morality as Conditional Co-operation. I do not claim that this is the only possible conception of morality - only that it is one important conception. My modest goal in this paper is to formulate purely descriptive principles that agree with the generally shared intuitions concerning the requirements of the conception of morality as Conditional Co-operation. I will say that principles that pass this test are extensionally adequate. It may seem that formulating an extensionally adequate principle for the PD is trivial. One obvious suggestion is the principle: In a PD, Co-operate with Co-operators; Defect with Defectors. This is very close to Gauthier's (1986, p. 167; 1988, p. 399) principle of Constrained Maximization, which is introduced in Section 2 below. The main subject of this paper is the problems of indeterminacy and even incoherence that such principles generate. Because the main focus of this paper is on problems of indeterminacy in moral reductionist principles, I should say what makes indeterminacy in a moral reductionist principle a problem. From the point of view of the moral reductionist project, it is not indeterminacy per se that is a problem. For example, in most decision situations our considered moral intuitions do not pick out a uniquely right act. Rather, they merely rule out some alternatives as morally impermissible, leaving more than one alternative as morally permissible. Where our moral intuitions do not single out a uniquely right act, any extensionally adequate reductionist principle should reflect this indeterminacy of our moral intuitions. Therefore, where the indeterminacy of a reductionist principle matches the indeterminacy of our moral intuitions, the indeterminacy of the principle is not a problem. Indeterminacy is only a problem when it is an indication of extensional inadequacy - as it would be if the relevant reductionist principle were indeterminate among several acts, some of which were ruled out by our shared moral intuitions as morally impermissible. For example, it seems intuitively clear that in a PD, the conception of morality as Conditional Co-operation (and, indeed, any other plausible conception of morality) requires that two moral agents Co-operate with each other, when both know that both are moral. Thus, any extensionally adequate rule must require Co-operation in such a situation. As I discuss in Section 2 below, Smith (1991) has criticized Gauthier's principle of Constrained Maximization because it does not require Co-operation in this case. Smith argues that Gauthier's principle is indeterminate between joint Co-operation and joint Defection. Because the conception of morality as Conditional Co-operation would not permit joint Defection in this sort of case, from the point of
view of the reductionist project, this sort of indeterminacy is a problem for Gauthier's principle. It is an indication of extensional inadequacy.

Even if Gauthier's statement of his Constrained Maximization principle is indeterminate in a way that makes it extensionally inadequate, one might hope that a simple fix could resolve the problem. Danielson (1991, 1992) has made the most concerted effort to solve the indeterminacy problem raised by Smith. In Section 3 below I review Danielson's replies to Smith and show that, though they may provide the resources for solving what I refer to as the special indeterminacy problem, they cannot solve the general indeterminacy problem.

Although my discussion focuses on Gauthier's Constrained Maximization principle, I believe that the indeterminacy problem raised by Smith is quite general. For example, Regan (1980) has identified a parallel problem for utilitarian moral theory. In Section 4, I explain the source of the problem, which I take to be that, as formulated by Gauthier, the Constrained Maximization principle can generate input-output loops. I show how rules that can generate input-output loops can lead not only to problems of indeterminacy, but also to problems of incoherence. Thus, the challenge for the moral reductionist is to formulate extensionally adequate reductionist principles that do not permit input-output loops. To do so, I borrow from and generalize the equilibrium analysis of non-co-operative game theory. I begin by dividing decision rules into two parts: ranking rules and joint act selection principles. In Section 5, I develop an idea from Sen (1974) and construct an inductive framework for ranking rules. Then, in Section 6, I show how the ranking rules combine with the joint act selection principles of a generalized equilibrium analysis to resolve the special and general indeterminacy problems, as well as to avoid potential problems of incoherence.

This is a conceptual paper, not a technical one. All of the technical results are trivial. At every opportunity I have side-stepped technical complications to promote conceptual clarity. In order to make the exposition in the text as clear as possible, I use footnotes for important technical details and assumptions.

2. Two Indeterminacy Problems for Gauthier's Principle of Constrained Maximization

I begin with a brief review of the One-Shot, Two-Person Prisoner's Dilemma (PD). In this paper I confine the discussion to a quite hypothetical form of the PD - one involving full information - which is to say, that all factors relevant to either agent's choice are common knowledge, including that fact itself.2 Thus, the following information is assumed to be common knowledge.
Two agents, X and Y, are isolated from each other, so that neither agent's choice has any causal influence on the other agent's choice. Neither agent has any way of finding out the other's choice before making her own choice (though, of course, each can use the information she has about the other agent to try to predict how the other will choose). Each must decide between two actions, Co-operate (C) and Defect (D), as illustrated in Figure 1. In Figure 1, each row of the matrix corresponds to one of X's alternative choices, and each column to one of Y's alternative choices.3 Each cell of the matrix represents a combination of choices by X and Y, and contains the ordered pair of the utility to X of the relevant choice combination, and the utility to Y of the same choice combination.4

In a PD, for each agent Defection dominates Co-operation - that is, the two agents' actions are causally independent of each other and each agent would maximize her own utility by Defecting, regardless of whether the other agent Co-operates or Defects. But the choice combination [D,D] (i.e., the combination in which they both choose their dominant alternative) is jointly sub-optimal, because each agent's utility would be higher if they both Co-operated than if they both Defected.

In most discussions of the PD, each of the agents is assumed to be an egoist. In the following, I assume that there is some generally agreed upon measure of individual well-being that can be used to generate egoist utilities. To simplify the exposition, in the examples that I discuss, I will assume that, at least initially, all the agents' utility assignments are egoist. In a PD, a Straightforward Maximizer (SM) always chooses her dominant act, and thus always Defects, because, no matter what the other agent chooses, she will be better off if she Defects than if she Co-operates. Thus, two SM agents would both Defect in a PD with full information. The result of joint Defection by two SM agents is jointly sub-optimal, because they would both prefer joint Co-operation.

                          Agent Y
                        C        D
    Agent X   C       (3,3)    (1,4)
              D       (4,1)    (2,2)

Figure 1: One-shot, two-person Prisoner's Dilemma matrix.
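The structure just summarized can be checked mechanically. The following minimal Python sketch is an editor's illustration rather than anything in the text (the dictionary and function names are hypothetical); it encodes the Figure 1 pay-offs and verifies that Defection dominates Co-operation for each agent while joint Defection remains jointly sub-optimal.

    # Editor's sketch: the Figure 1 pay-offs, with X's utility listed first.
    payoffs = {
        ("C", "C"): (3, 3),
        ("C", "D"): (1, 4),
        ("D", "C"): (4, 1),
        ("D", "D"): (2, 2),
    }

    def dominates(agent, act_a, act_b):
        """True if act_a gives `agent` ('X' or 'Y') strictly more utility than
        act_b against every possible act of the other agent."""
        idx = 0 if agent == "X" else 1
        def combo(own, other):
            return (own, other) if agent == "X" else (other, own)
        return all(payoffs[combo(act_a, o)][idx] > payoffs[combo(act_b, o)][idx]
                   for o in ("C", "D"))

    assert dominates("X", "D", "C")   # Defection dominates Co-operation for X
    assert dominates("Y", "D", "C")   # ... and for Y
    # Yet [D,D] is jointly sub-optimal: both agents do better under [C,C].
    assert all(c > d for c, d in zip(payoffs[("C", "C")], payoffs[("D", "D")]))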
PDs have generated an enormous literature on the ways that two agents might avoid the jointly sub-optimal result. If the sole goal is simply to avoid the jointly sub-optimal result [D,D], there is a rule that, when adopted by either agent, guarantees that result:

    Unconditional Co-operation (UC) Rule for an agent X: In a PD, X (always) Co-operates.

In a PD, an Unconditional Co-operator X never achieves the jointly sub-optimal outcome [D,D], but she may well achieve the outcome [C,D], which is worse for her than the jointly sub-optimal outcome [D,D]. Most people would regard the UC agent's willingness to Co-operate with Defectors to be, at best, morally supererogatory, and certainly not morally required. Even those who have a conception of morality that requires Unconditional Co-operation usually have no trouble recognizing that there is a conception of morality as Conditional Co-operation that would require Co-operation with other Co-operators, but would at least permit, if not require, Defection with other Defectors in a PD. One of the main goals of this paper is to state a principle that tracks these common intuitions of a morality of Conditional Co-operation.

Gauthier (1986, 1988) has come close to formulating such a principle. As applied to the PD, Gauthier's principle of Constrained Maximization would require:

    Gauthier Constrained Maximization (GCM) Rule for an agent X: In a PD, X Co-operates if X expects the other agent to Co-operate; X Defects if X expects the other agent to Defect.5

Gauthier's rule yields the desired result (Co-operate with Co-operators; Defect with Defectors) in any PD in which the GCM agent has enough information to form an expectation about how the other agent will act. For example, if a GCM agent X is interacting with an Unconditional Co-operator (UC) Y in a PD with full information, X will realize that Y will Co-operate regardless of what Y expects X to do; and on the basis of this expectation that Y will Co-operate, the GCM rule will require X to Co-operate also. Similarly, if a GCM agent X is interacting with a Straightforward Maximizer (SM) agent Y in a PD with full information, X will realize that Y will Defect, regardless of what Y expects X to do; and on the basis of the expectation that Y will Defect, the GCM rule will require X to Defect also.

The problem for the GCM rule is that it is indeterminate in any case in which an agent applying it has no rational basis for forming an expectation about what the other agent will do. For example, Gauthier assumes that two GCM agents in a PD with full information will
Co-operate with each other. But Smith (1991) argues persuasively that it is indeterminate whether two GCM agents will Co-operate with each other, even in cases of full information.6 The logic of the problem is quite simple. Let X and Y be two GCM agents in a PD with full information. Consider, for example, how the GCM rule would apply to X's choice (Y's situation is exactly symmetrical): The GCM rule is compatible with X's Co-operating (if she expects Y to Co-operate). But it is also compatible with X's Defecting (if she expects Y to Defect). In order for the GCM rule to determine X's choice, X must be able to form an expectation about what Y will do. X knows that Y is a GCM agent. Applying the GCM rule to Y's choice, X can conclude only that Y will Co-operate, if she expects X to Cooperate; and that Y will Defect, if she expects X to Defect. Neither agent has enough information to form an expectation about what choice the other will make, and thus, even in a case of full information, the GCM rule is indeterminate! It is compatible with both agents' Cooperating and with both agents' Defecting. I express this result by saying that the GCM rule is indeterminate when matched with itself in a PD with full information. I refer to this as the special indeterminacy problem for the GCM rule. After showing that Gauthier's GCM rule is indeterminate when matched with itself in a PD with full information, Smith (1991) considers a variety of alternative Constrained Maximization rules to attempt to formulate an adequate rule of Conditional Co-operation for the PD. Ultimately, Smith ends her discussion doubting whether there is an adequate formulation of such a rule (p. 242). If the special indeterminacy problem were the only indeterminacy problem, the GCM rule could be easily fixed, as I illustrate below. However, a more serious indeterminacy problem, which I illustrate below, arises from the fact that there are a potential infinity of nonequivalent rules of Conditional Co-operation that, intuitively, would Co-operate with the GCM rule if it would Co-operate with them. The general indeterminacy problem is the problem of formulating the potential infinity of different Conditionally Co-operative rules in such a way that they determinately Co-operate with each other in PDs with full information. Danielson (1991, 1992) has made the most concerted attempt to solve the special and general indeterminacy problems. I take up his proposals in the next section.
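The looping structure of this deliberation can be made vivid with a toy simulation. The sketch below is an editor's illustration, not Smith's or Gauthier's formalism (the function name and the recursion cap are invented for the example); it shows that a naive attempt to compute a GCM agent's choice by predicting the other GCM agent's choice never bottoms out.

    # Editor's sketch of the special indeterminacy problem: each GCM agent's
    # choice is defined by an expectation about the other's choice, so
    # prediction calls prediction without end.

    def gcm_choice(agent, other, depth=0, max_depth=100):
        """GCM: Co-operate if the other agent is expected to Co-operate,
        Defect if the other agent is expected to Defect."""
        if depth > max_depth:
            # No expectation is ever formed; the deliberation has looped.
            raise RuntimeError("input-output loop: no expectation can be formed")
        expected = gcm_choice(other, agent, depth + 1, max_depth)  # predict the other
        return "C" if expected == "C" else "D"

    try:
        gcm_choice("X", "Y")
    except RuntimeError as err:
        print("GCM matched with GCM is indeterminate:", err)

The same exercise run against an Unconditional Co-operator or a Straightforward Maximizer terminates immediately, which is just the point made above: the GCM rule only fails when its input depends on another rule whose output depends in turn on the GCM rule's own output.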
3. Danielson's Attempts to Solve the Special and General Indeterminacy Problems

Danielson (1991) refers to the indeterminacy problem as the "coherence problem," and recognizes that it is "no minor procedural prob-
lem" (p. 309). Danielson claims to have solved the problem. However, Danielson does not try to solve it directly. Rather, he tries to solve what he takes to be a parallel problem - how to define "artificial" CM agents and other significant types of agent in a computer simulation. Danielson's (1991, 1992) "solution" to the problem of defining artificial CM agents exploits the fact that artificial agents are defined by a decision rule written in a programming language, and such a decision rule can include a clause that, in a PD, tests for whether its own quotation matches the quotation of the other agent's rule (or, in Danielson's test, the relevant part of the other agent's rule). Thus, it is possible for Danielson to define what I refer to as a Quotational Constrained Maximization (QCM) rule - a rule that uses a quotational test to determine whether the other agent uses the UC rule, and also to determine whether the quotation of the other agent's rule matches the quotation of the QCM agent's own rule. If either test is positive, QCM Co-operates; otherwise, it Defects. This is a neat programming trick.7 It provides a model of one solution to the special indeterminacy problem for a CM rule for non-artificial agents - that is, the problem of formulating a CM rule which guarantees that CM agents will Co-operate with each other in a PD with full information. Consider the following analogue of a QCM rule for non-artificial, human agents: Danielson Constrained Maximization (DCM) Rule for an agent X: In a PD, X Co-operates, if: (1) X expects the other agent to Co-operate or (2) the other agent employs this rule; X Defects, if X expects the other agent to Defect. It is easy to see that in a PD with full information, a DCM agent would not only Co-operate with all agents that she expected to Cooperate (e.g., a UC agent), but she would also Co-operate with any agent she believed to use the DCM rule. Thus, the DCM rule solves the special indeterminacy problem. In a PD with full information (including information about the other agent's decision rule), DCM agents Cooperate with each other. The main problem for Danielson's account is that it does not seem that it could solve the general indeterminacy problem - the problem that arises from the potentially infinite number of non-equivalent Conditionally Co-operative rules, of which the DCM rule is only one.8 Intuitively, a moral agent in a PD should Co-operate with any other Conditional Co-operator who is willing to Co-operate with her, even if the other agent employs a different rule of Conditional Co-operation. For example, consider the following rule, based on one discussed by Danielson (1992, p. 135):
    Danielson Unconditional Co-operator Protector (DUCP) Rule for an agent X: In a PD, X Co-operates if: (1) the other agent is nice to Unconditional Co-operators (UC) - that is, the other agent Co-operates with a UC agent in a PD with full information - and either (2a) X expects the other agent to Co-operate or (2b) the other agent employs this rule; X Defects if: (1) the other agent is not nice to UC agents; or (2) X expects the other agent to Defect.

In a PD, the DUCP rule is intended to "punish" (i.e., Defect with) agents who are not nice to UC agents and to try to "reward" (i.e., try to mutually Co-operate with) agents who are nice to UC agents. Clearly, in a PD with full information, a DUCP agent will Defect with an SM agent, because SM agents are not nice to UC agents. Also, a DUCP agent will Co-operate with a UC agent, because UC agents are nice to each other, and UC agents Co-operate with anyone in a PD, including DUCP agents. Also, DUCP agents will Co-operate with each other, because DUCP agents are nice to UC agents, and clause (2b) is satisfied (they both employ the DUCP rule).

But what happens when a DCM agent interacts with a DUCP agent in a PD with full information? Intuitively, a DCM and a DUCP agent should Co-operate with each other, because they are both nice to UC agents, and they will both Co-operate if they expect the other to Co-operate. But, as the DCM and DUCP rules are stated, it is indeterminate whether a DCM agent will Co-operate with a DUCP agent in a PD, even if there is full information, because neither rule has any basis for forming the expectation that the other will Co-operate. As stated, the rules are compatible with both agents Co-operating and with both agents Defecting. This is an example of the general indeterminacy problem for Conditionally Co-operative rules.9

Danielson is aware of this problem. He calls it a "co-ordination problem" (1992, pp. 132-33). He discusses various possible solutions to it, and finally suggests a new kind of rule - a rule of Indirect Co-operation (IC). IC is an adaptive, second-level rule that, intuitively, tests to see if it could improve the outcome by adopting (relevant parts of) the other agent's rule, and if so, it adopts (relevant parts of) the other agent's rule (1992, pp. 140-43). Danielson's discussion of indirect rules is quite complex, but it is not necessary to review the detailed discussion to see that with the introduction of second-level rules, Danielson surrenders any hope of solving the general indeterminacy problem. Note that in order to decide whether to modify itself to incorporate the relevant parts of the other agent's rule, the IC rule (and other similar second-level rules) must be able to form two expectations about the other rule: (1) What the other rule would do in a PD interaction with an unmodified IC rule; and (2) What the other rule would do in a PD interaction
with an IC rule that had modified itself to incorporate the relevant parts of the other rule. Thus, in an interaction between two second-order rules of this kind, R1 and R2, neither R1 nor R2 would be able to make a decision about whether or not to modify itself until it had formed an expectation about what the other rule would do in interactions with its modified and unmodified self. But this immediately generates the logic of the indeterminacy problems. For example, R1 could not determine what R2 would do in an interaction with its unmodified self, until it determined whether R2 would modify itself. But R2's decision about whether or not to modify itself depends on whether R1 would modify itself! Each rule's decision whether to modify itself depends on the other rule's decision of whether to modify itself. Thus, neither rule can acquire the information necessary to determine whether to modify itself.

As before, the second-level special indeterminacy problem - the problem of specifying a rule that determinately Co-operates with itself - could be solved by simply including in the rule a clause requiring Co-operation with other agents using the same rule. But this technique cannot solve the general indeterminacy problem for second-level rules, because there remain a potential infinity of different, non-equivalent, second-level rules that are what might be termed second-level Conditionally Co-operative rules, and, intuitively, these second-level Conditionally Co-operative rules define agents who would be willing to Co-operate with an Indirect Co-operator if she were willing to Co-operate with them. Danielson is aware of the potential for such problems, because he himself refers to the possibility of "second-level co-ordination problems" (1992, p. 143). He does not offer any advice on how to solve such problems, and, for the reasons discussed above, it seems to me that there is no way to solve them within his framework.

4. Indeterminacy or Incoherence

It may seem that there is a simple logical solution to the problems of indeterminacy discussed above. For example, as stated above, the GCM rule invites potential indeterminacy, because it only specifies what to do in two cases: (1) the case in which the other agent is expected to Co-operate (in which case the GCM rule requires Co-operation); and (2) the case in which the other agent is expected to Defect (in which case the GCM rule requires Defection). To make the rule determinate in all cases, it would seem that it would only be necessary to add a third clause, instructing the agent how to choose in the situation in which the other agent's choice cannot be determined (i.e., when there is no basis for expecting the other agent to Co-operate and no basis for expecting the other agent to Defect).10 This suggests two
new rules, both variations on the GCM rule, which differ only in whether they require Co-operation or Defection in the case in which the other agent's choice cannot be determined: GCM-C Rule for an agent X: In a PD, X Defects just in case X expects the other agent to Defect (therefore, X Co-operates if X expects the other agent to Co-operate and if X cannot determine the other agent's choice). GCM-D Rule for an agent X: In a PD, X Co-operates just in case X expects the other agent to Co-operate (therefore, X Defects if X expects the other agent to Defect and if X cannot determine the other agent's choice). What would happen if two GCM-C agents X and Y were to interact in a PD with full information? It would seem that each would begin by trying to form an expectation about what the other would do. However, as the above discussion of the GCM rule illustrated, neither agent can directly determine the other agent's choice. For example, if X were to attempt to determine Y's choice, X would realize that Y's choice would depend on her expectation about what X would do; and similarly, for Y. This produces the same sort of endless looping of deliberation discussed above. Thus, at this initial stage of deliberation, neither agent will be able to form an expectation about what the other will do. At first glance, it seems that the GCM-C rule covers just this eventuality, because in a situation in which a GCM-C agent cannot determine what the other agent will do, the GCM-C rule requires Co-operation. Thus, it would seem that where X and Y are both GCM-C agents, in a PD with full information, the GCM-C rule would require each agent to Co-operate, thus solving the special indeterminacy problem. But this analysis leads to a puzzle. On the assumption that X and Y cannot determine what the other will do, the GCM-C rule requires that both of them Co-operate, and thus enables both of them to determine what the other will do! This is not exactly incoherent, because, logically, it might only be a reductio of the assumption that they cannot determine what the other will do. But the logical problem goes deeper, because it was only on the supposition that they could not determine what the other would do, that they were able to determine what the other would do! If the argument is a reductio of the assumption that they cannot determine what the other will do, then they have a proof that they can determine what the other will do. But how could they possibly determine this? On the assumption that they can determine
what the other will do, it seems that they simply do not have enough information to determine what the other will do! If this is not incoherent, it is at least something of a pragmatic paradox. X and Y must accept that they can determine what the other will do while at the same time having not the slightest idea of how to determine what the other will do. Moreover, their decision rules are so simple that it requires very little reflection on them to come to the conclusion that there is no way for either of them to determine what the other will do, unless they are unable to determine what the other will do! However, the feeling of paradox is somewhat allayed by the fact that, once they have reasoned from the assumption that they cannot determine what the other will do to the conclusion that both will Cooperate, this result survives the apparently contradictory information that they have determined what the other will do. The information that the other agent will Co-operate only tends to reinforce each agent's resolve to Co-operate, because the GCM-C rule requires that they Cooperate both in cases in which they cannot determine what the other agent will do and in cases in which they expect the other agent to Cooperate. Thus, even if the reasoning has a paradoxical feel, it leads to a result that is consistent with the GCM-C rule. I believe that this reasoning is not just paradoxical, it is incoherent. But the incoherence is easier to appreciate if one considers a PD interaction between a GCM-C agent X and a GCM-D agent Y. Recall that the GCM-C and GCM-D rules both mimic the original GCM rule when the agent employing them is able to determine what the other agent will do. The difference between the two rules is that in a case in which an agent employing one of them is not able to determine what the other agent will do, the GCM-C rule requires Co-operation and the GCM-D rule requires Defection. What will be the result of a GCM-C/GCM-D interaction in a PD with full information? Suppose, as before, that initially neither X nor Y can determine what the other will do. Then the GCM-C rule will require that (al) X Cooperate; the GCM-D rule will require that (a2) Y Defect. But given (al): (bl) Y will expect X to Co-operate; and given (a2): (b2) X will expect Y to Defect. But given (b2): (cl) X's GCM-C rule would require X to Defect, which is different from the result that X obtained on the supposition that she could not determine what Y would do; and given (bl): (c2) Y's GCM-D rule would require Y to Co-operate, which is different from the result that Y obtained on the supposition that she could not determine what X would do. Obviously, this reasoning can continue endlessly. At each stage of the reasoning, the two agents are out of phase with each other, as they alternate between reasons for Defecting
and reasons for Co-operating. At no point would X and Y be able to form mutually reinforcing expectations about each other's decision. As in the previous case, it is possible to regard this as a reductio of the assumption that neither agent can determine what the other will do. But in this case, there is no reasoning, not even paradoxical reasoning, available to produce the mutually reinforcing expectations that are necessary for them to have any possibility of determining what the other will do. The incoherence is due to its seeming equally clear on reflection both that the only coherent possibility is that each can determine what the other will do, and that in fact neither has any way of determining what the other will do! Notice that if they had some way of forming the expectation that both of them would Co-operate, those expectations would be mutually reinforcing (both decision rules, GCM-C and GCM-D, require Co-operation with another Co-operator); and if they had some way of both forming the expectation that both of them would Defect, those expectations would be mutually reinforcing also (both rules require Defection with another Defector). But though mutual Co-operation and mutual Defection would satisfy both agents' decision rules, because there is no rational way to exclude either choice combination, there is no rational way for the agents to form the expectations that would produce either result. Once the possibility of incoherence is raised, it is easy to devise rules that exhibit it in a more virulent form. For example, Danielson suggests the possibility of a rule that tests other rules for whether they Co-operate with themselves in a PD, and then uses this information to determine its choice (1992, pp. 136-137). This suggests the following rule: Anti-Selfsame Co-operation (ASC) Rule for an agent X: In a PD, Defect just in case the other agent's rule Co-operates with itself (i.e., requires Co-operation in an interaction with another agent employing the same rule) in a PD with full information. What will two ASC agents X and Y do in a PD with full information? There are two alternatives, either the ASC rule Co-operates with itself in a PD with full information, or it does not. If it does Co-operate with itself, then it requires that X and Y Defect, and thus it does not Cooperate with itself. If it does not Co-operate with itself, then it requires that X and Y Co-operate, and thus it does Co-operate with itself! The rule is incoherent. To say that the rule is incoherent is to say that there can be no such rule. Reflection on these and related examples convinces me that rules of the kind that have been considered thus far must be indeterminate or
incoherent. I believe that the indeterminacy and incoherence of such rules is symptomatic of an underlying pathology in these rules. In the remainder of this section, I explain the pathology, and in the next two sections I show how to construct decision rules that avoid the pathology. Each of the rules discussed above functions as a kind of input-output device. For example, in a PD involving a GCM agent X and another agent Y, X's GCM rule produces an act for X as output, given as input various information including what act Y will choose. Notice that to determine an act for X, the GCM rule requires information about Y's act. In my terms, this shows that in a PD, the GCM rule can generate an input-output loop with the other agent's decision rule. Intuitively, a decision rule has a potential to generate input-output loops in strategic interactions with other agents when its output depends on input information about the output of one or more of the other agents' decision rules.11 Rules that can generate input-output loops do not cause trouble when they interact with other rules that do not generate inputoutput loops - for example, the GCM rule is determinate in a PD interaction with the SM (Straightforward Maximizer) rule. But when two (or more) rules that can generate input-output loops interact with each other, there is a potential for indeterminacy or incoherence. When each agent's rule makes its output depend on information about the output of the other agent's rule, there are two unwelcome possibilities: (1) if the output of the rules is undefined for the situation in which neither agent is able to determine the output of the other agent's rule, there is a potential for indeterminacy; or (2) if the output of the rules is defined for the situation in which neither agent is able to determine the output of the other agent's rule, there is a potential for incoherence. The potential for indeterminacy is illustrated by the special indeterminacy problem for Gauthier's Constrained Maximization (GCM) rule. In a PD, if both agents X and Y employ the GCM rule, their rules generate an input-output loop. Neither agent can determine what the other will do. Because the output of the GCM rule is undefined when a GCM agent in a PD cannot determine what the other agent will do, the result is indeterminacy. Note that Danielson's idea solves the special indeterminacy problem, because, in a PD interaction with itself, the Danielson Constrained Maximization (DCM) rule does not generate an input-output loop. In an interaction with itself, the DCM rule generates a choice (Co-operate) given only the information that the other agent employs the DCM rule. So long as this information can be obtained without the agent's having to determine what the other agent will do, the DCM rule does not generate an input-output loop. Therefore, in a PD interaction with itself, the DCM rule avoids indeterminacy. But the DCM rule is indeterminate
in interactions with other Conditionally Co-operative rules, such as DUCP, because in a PD interaction, the DCM and DUCP rules do generate an input-output loop. There is a potential for incoherence when rules that permit inputoutput loops are formulated so as to logically preclude indeterminacy, because there may be no rationally attainable information state consistent with the output of the agents' rules, as illustrated by the PD interaction of the GCM-C and GCM-D rules above. In the extreme case illustrated by the PD interaction of the ASC rule with itself, there may be no possible information state consistent with the agents' rules. Thus, I conclude that any adequate decision rules must be formulated so as not to permit input-output loops. It is not obvious that it is possible to formulate extensionally adequate rules of Conditional Cooperation that are immune to the generation of input-output loops. I have already mentioned that Smith (1991) is pessimistic about the possibilities. When Regan (1980) attempted to solve a parallel indeterminacy problem for utilitarian theory, he also felt forced to conclude that it is not possible to formulate a rule that resolves all such problems.12 But Smith and Regan's pessimism is mistaken. Very much the same problem must be solved by any theory of strategic interaction. When von Neumann and Morgenstern (1944) faced the parallel problem in the theory of strategic rational choice (i.e., game theory), they solved it with an equilibrium analysis, which Nash (1951) extended to provide the foundations for what is referred to as non-cooperative game theory.13 In the next two sections, I show how the equilibrium analysis of non-co-operative game theory can be used to define decision rules that do not permit input-output loops, and thus, ultimately, to define a rule of Constrained Maximization as Conditional Co-operation that avoid the problems of indeterminacy and incoherence discussed above. 5. An Inductive Framework for Ranking Rules My goal is to formulate an extensionally adequate rule of Conditional Co-operation for PDs with full information that does not permit inputoutput loops. Translated into non-technical terms, this means that the rules that determine an agent X's acts cannot require as input any kind of information about other agents that X could not provide to the other agents about herself as input to their rules. My solution to this problem will be a generalization of the equilibrium analysis of non-co-operative game theory. On my proposed generalization, the equilibrium analysis attributes to each agent two kinds of decision rules, ranking rules, used to rank or to determine utility assignments to the relevant choice combinations
(for example, by the desirability of their outcomes) and joint act selection principles, which jointly determine or constrain each agent's act on the basis of the various agents' final rankings of the possible choice combinations (as determined by their ranking rules). Ranking rules can vary from agent to agent. At least some of the joint act selection principles are assumed to be universal, as I explain below. I discuss ranking rules in this section and joint act selection principles in the next section.

Ranking rules are the rules that determine an agent's rankings of (or preferences over) the relevant choice combinations, as represented by her utility assignments. In order to formulate extensionally adequate rules of Conditional Co-operation, I follow up on a suggestion of Sen (1974) and allow for agents to modify their rankings in some situations. For example, suppose an agent X with an egoist ranking of the possible choice combinations finds that she is in a PD with the pay-off structure of Figure 1 above. Sen's suggestion is that in such a situation, a Conditional Co-operator X would rank mutual Co-operation [C,C] above all other outcomes, even the outcome ([D,C]) in which she Defects and the other agent Co-operates.

At first glance, this suggestion seems quite puzzling. One of the defining conditions of a PD is that the agent X rank the outcome [D,C] above all other outcomes. How can the preferences of a Conditional Co-operator be coherently stated? I suggest that we distinguish different levels of preferences, or as I will say, different levels of rankings of the possible choice combinations.14 An agent is assumed to have a ranking rule that determines a level-0 or default ranking of the possible choice combinations, but it is not assumed that the default ranking is the agent's final ranking. Thus, for example, two agents X and Y, whose default rankings are egoist, might find that their default rankings satisfy the conditions for a PD, as illustrated in Figure 1. I refer to such a situation as a level-0 PD. The reason for allowing more than one level of rankings is to make it possible to ask the question whether, given that the level-0 rankings produce a PD, an agent is inclined to change her ranking of the possible choice combinations. Intuitively, a Conditional Co-operator X would change her ranking of the possible choice combinations, so that, at the next higher level (i.e., level-1), [C,C] would be the highest-ranked alternative - ranked even above [D,C]. There is no incoherence in this claim, because there is no incoherence in supposing that at level-0, [D,C] is ranked above [C,C], but that their relative ranking is reversed at level-1.

Following McClennen (1988), I refer to rules which have the potential to alter the default ranking on the basis of information about the default rankings of other agents (as well as their own) as context-sensitive or context-dependent rules. Rules that never alter their default rankings are context-independent.15 I illustrate both sorts of rules below.
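The two-level idea can be sketched concretely. The following Python fragment is an editor's illustration only; the dictionaries and the tie-breaking order for the combinations other than [C,C] are assumptions made for the example, not part of the formal framework developed below.

    # Editor's sketch: agent X's level-0 (default, egoist) ranking in the
    # Figure 1 PD, and a level-1 re-ranking in the spirit of Sen's suggestion,
    # which lifts mutual Co-operation [C,C] to the top once X learns that the
    # level-0 rankings form a PD.  Higher number = more preferred.
    level0_X = {("D", "C"): 4, ("C", "C"): 3, ("D", "D"): 2, ("C", "D"): 1}

    def level1_conditional_cooperator(level0):
        """Move [C,C] to the top, keeping the rest in their level-0 order.
        (How the remaining combinations are ordered is settled differently
        by different Conditionally Co-operative rules.)"""
        rest = sorted((c for c in level0 if c != ("C", "C")), key=level0.get)
        order = rest + [("C", "C")]          # worst ... best
        return {combo: rank for rank, combo in enumerate(order, start=1)}

    level1_X = level1_conditional_cooperator(level0_X)
    print(level1_X[("C", "C")] > level1_X[("D", "C")])   # True: [C,C] now tops [D,C]

There is no incoherence in holding both dictionaries at once, because they sit at different levels; that is exactly the separation the inductive framework below is designed to enforce.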
I began this section with the goal of formulating Conditionally Co-operative decision rules that do not permit input-output loops. However, I have just acknowledged that any satisfactory rule will have to be context-sensitive - that is, it will have to be able to modify its rankings of the relevant choice combinations on the basis of information about the other agents' rankings! Now it seems undeniable that a rule that determines an agent's rankings on the basis of information about the other agents' rankings must be liable to input-output loops! Is there any way to formulate context-sensitive ranking rules so that they do not permit input-output loops?

The intuitive idea behind my solution to this problem is to allow rules that generate higher-level rankings (where the number of higher levels is assumed to be finite), but to impose an inductive structure on the rules that generate the rankings at the higher levels, so that at any level n, though an agent's ranking at level n may depend on information about the other agents' rankings below level n, it cannot depend on any information about the other agents' rankings at level n or above. If this restriction is satisfied, then none of the rules will permit input-output loops, because the rule determining each agent's level-n ranking will not require information about any other agent's level-n ranking, or any information that itself depends on determining any other agent's level-n ranking.

The full inductive framework for ranking rules is as follows:

(1) Default ranking. Each ranking rule must contain a level-0 clause that determines a level-0 (default) context-independent ranking of outcomes. The level-0 ranking becomes the final ranking, unless altered by a higher-level clause in the rule.

(2) Higher-level rankings. A rule need not contain any clause above level-0, but if it does contain a clause for level n, then it must contain a clause for every level less than n. Any or all of the clauses from level-1 to level-(n-1) may simply reassert the ranking at the previous level.

(3) Level number of a rule. Each rule is assigned a level number corresponding to the number of its highest-level clause. No rule has more than a finite number of clauses.

(4) Preventing input-output loops. For all n > 0, a level-n clause must determine a ranking of the possible choice combinations based solely on the level number of the other rules (i.e., the number of each other rule's highest-level clause, which is also the number of their highest-level ranking) and on information about their
rankings at level (n-1) and below. In other words, no information about the other rules' rankings at level n or above (except perhaps the mere information that they have clauses generating such rankings) can be employed by a rule in the determination of its level-n ranking of the outcomes. Thus, for example, a level-1 ranking cannot be based on any information about the other agents' rankings, except information about the level numbers of the other agents' rules (the number of their highest level) and information about the other agents' level-0 rankings.16

Rules formulated in this inductive framework will generate determinate rankings at every level; and they will avoid the potential for incoherence illustrated by the Anti-Selfsame Co-operation (ASC) Rule discussed above. Recall that the ASC Rule was stated intuitively as the rule that required Defection in a (level-0) PD with full information, just in case the other agent's rule Co-operated with itself in a (level-0) PD with full information. I have already argued that no such rule exists, because the requirements are incoherent. I now show that no such ranking rule can be formulated within the inductive framework described above.

No such rule can be formulated within this inductive framework, because any attempt to formulate it (call it proto-ASC) would have to have some highest-level clause, let it be n, and its level-n clause would have to determine the proto-ASC rule's final ranking on the basis of the other agents' rankings at level-(n-1) or below (and perhaps the level numbers of the other agents' rules). But in a level-0 PD interaction with a rule of level m (m ≥ n), including an interaction with another proto-ASC agent (where m = n), there is no way for proto-ASC to base its level-n ranking on whether the other rule Co-operates with itself in a level-0 PD with full information, because whether the other rule Co-operates with itself in such a situation depends on the other rule's level-m ranking (m ≥ n) (in ways that I have yet to describe), and within the inductive framework described above, the ASC rule's level-n ranking cannot be based on any information about rankings at level-n or above. Thus, it is impossible to formulate the ASC rule stated above in this inductive framework.17

What remains to be shown is that a suitable rule of Conditional Co-operation that avoids the special and general indeterminacy problems can be formulated within this inductive framework. To do this, I proceed to specify some sample ranking rules and then to show how, in combination with the relevant joint act selection principles, they produce choices, or constraints on choices.
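The restriction in clause (4) is what makes the rankings computable from the bottom up. The schematic sketch below is an editor's illustration under that restriction (the function and variable names are hypothetical, and only an egoist level-0 clause is shown); each agent's level-n ranking is computed using nothing beyond the other agents' level numbers and their rankings strictly below level n, so no input-output loop can arise.

    # Editor's sketch of the inductive framework: a ranking rule is a list of
    # level clauses (index = level), and rankings are filled in level by level.

    PD = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
          ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

    def egoist_level0(agent, combos, _others_lower, _others_levels):
        """Level-0 clause: rank each combination by the agent's own utility."""
        idx = 0 if agent == "X" else 1
        return {c: combos[c][idx] for c in combos}

    def compute_final_rankings(rules, combos):
        levels = {a: len(r) for a, r in rules.items()}
        rankings = {a: [] for a in rules}          # rankings[a][n] = level-n ranking
        for n in range(max(levels.values())):
            for agent, clauses in rules.items():
                if n >= len(clauses):              # no level-n clause: the previous
                    rankings[agent].append(rankings[agent][-1])   # ranking stands
                    continue
                others_lower = {b: rankings[b][:n] for b in rules if b != agent}
                others_levels = {b: levels[b] for b in rules if b != agent}
                rankings[agent].append(
                    clauses[n](agent, combos, others_lower, others_levels))
        return {a: r[-1] for a, r in rankings.items()}   # highest level is final

    # Two Straightforward Maximizers: level-0 rules, so the default is final.
    print(compute_final_rankings({"X": [egoist_level0], "Y": [egoist_level0]}, PD))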
6. Joint Act Selection Principles In the following discussion, for simplicity, I assume that all the ranking rules I discuss have egoist level-0 rankings - that is, at level-0 they all base their ranking of alternative choice combinations on the agent's judgment of the extent to which the relevant choice combinations would further her own well-being or the satisfaction of her own selfdirected desires. This is merely a simplification that enables me to neatly connect my discussion to the literature on the Prisoner's Dilemma. I myself do not believe that most people have egoist level-0 rankings. Generalizing the account to non-egoist level-0 rankings is straightforward, as I explain in the Conclusion. The Straightforward Maximizer of egoist utility has the simplest egoist rule: SM Ranking Rule (a level-0 rule) for an agent X: Level-0 clause: Rank the possible choice combinations on the basis of egoist utility to X. An SM agent X's final ranking of choice combinations is just her level-0 egoist ranking. The SM rule is not context-sensitive, because no information about the rankings of the other agents can influence an SM agent to alter her ranking. In a two-person interaction in which the two agents' level-0 rankings have the structure of a PD, as illustrated in Figure 1, an SM agent's final rankings are the same as her level-0 rankings, and thus, for an SM agent, Defection dominates Co-operation in the final as well as the level-0 ranking. As I have already mentioned, an agent's ranking rule determines her final ranking of the alternative choice combinations, but rules of a different kind, joint act selection principles, are required to determine an agent's choices on the basis of her final ranking. Just as the ranking rules had to be formulated in a framework that prevented input-output loops, joint act selection principles must also be formulated so as to avoid generating input-output loops. To avoid input-output loops, joint act selection principles do not determine individual acts in isolation, but rather as parts of equilibrium choice combinations. Because joint act selection principles determine the choices of all relevant agents simultaneously on the basis of their final rankings of the possible choice combinations, input-output loops are avoided. The concept of an equilibrium choice combination is easily illustrated by the final rankings of two SM agents X and Y illustrated in Figure 1. In Figure 1, the combination [D,C] is individually stable for agent X, because there is no alternative act open to X which would, if substituted for X's act in the combination [D,C] produce a higher-ranked choice combination. In fact, the only alternative act open to X is to Co-operate,
which would produce the combination [C,C] (with utility of 3), a combination that X ranks below the combination [D,C] (with utility of 4). Similarly, [D,D] is also individually stable for X, because the only alternative open to X is again to Co-operate, which would produce the combination [C,D] (with utility of 1), a combination that X ranks below the combination [D,D] (with utility of 2). Thus, for X, there are two individually stable choice combinations, the two combinations in which X Defects - that is, [D,C] and [D,D]. Of course, this is simply another way of saying that given X's ranking in Figure 1, Defection dominates Co-operation for X. Parallel reasoning shows that there are two individually stable choice combinations for Y, the two combinations in which Y Defects, fC,D] and [D,D]. An equilibrium choice combination is a combination that is individually stable for each agent. In this case, there is only one equilibrium choice combination, the combination, [D,D], because [D,D] is the only choice combination that is individually stable for both X and Y. This situation is the simplest case for an equilibrium analysis. In an interaction with full information, if there is a unique equilibrium combination of choices, each agent is required to choose the act of hers that is part of the unique equilibrium combination. This is surely the most uncontroversial joint act selection principle in equilibrium analysis. I give it a name for future reference: Uniqueness Principle. In a strategic interaction with full information in which there is only one equilibrium choice combination, each agent must choose an act compatible with the realization of the unique equilibrium combination. Because the rankings of the two SM agents in Figure 1 generate the unique equilibrium combination [D,D], the Uniqueness Principle requires each agent to choose an act compatible with the realization of [D,D]. Thus, the Uniqueness Principle requires each agent to Defect. This is hardly a surprising result. But it is useful to notice how the result was obtained. Just as the inductive framework for the ranking rules prevents input-output loops, joint act selection principles avoid input-output loops by determining all agents' acts simultaneously as a function of their final rankings of the possible choice combinations. Thus, the indeterminacy and incoherence problems of rules that can generate input-output loops are avoided. Although the Uniqueness Principle is the most modest equilibrium act selection principle imaginable, it suffices to resolve many decision situations when there is full information. For example, it not only resolves PD interactions between two SM agents, but also similar inter-
actions between two Unconditional Co-operators (UC). To see how it does so, it is necessary to formulate a UC ranking rule. Intuitively, in a level-0 PD, an Unconditional Co-operator (UC) prefers to Co-operate regardless of what the other agent might do. Translating this condition into the inductive framework for ranking rules described above yields the following level-1 rule.

    UC Ranking Rule (a level-1 rule) for an agent X:
    Level-0 Clause: Rank the possible choice combinations on the basis of egoist utility to X [level-0 clause is same as in the SM rule].
    Level-1 Clause: In a level-0 PD, rank the possible choice combinations so that both choice combinations in which X Co-operates are ranked above both combinations in which X Defects; otherwise, reassert the level-0 ranking.18

What happens when two UC agents X and Y are in a level-0 PD with full information? Information about their level-0 rankings triggers the level-1 clause of the UC ranking rule to give each of them a new level-1 ranking of the possible choice combinations, as illustrated in Figure 2.

                             UC Agent Y
                           C        D
    UC Agent X   C       (4,4)    (3,3)
                 D       (2,2)    (1,1)

Figure 2: The final (level-1) rankings of two Unconditional Co-operators in a level-0 PD (as illustrated in Figure 1) with full information.

Figure 2 represents the final rankings of the possible choice combinations for both X and Y. Note that in Figure 2, for both X and Y, Co-operation dominates Defection. Given the rankings in Figure 2, the choice combinations [C,C] and [C,D] are both individually stable for X; the choice combinations [C,C] and [D,C] are both individually stable for Y. Thus, [C,C] is the unique equilibrium combination (i.e., the unique choice combination that is individually stable for both X and Y). The Uniqueness Principle applies to this case to determine that X and Y should act so as to bring about [C,C] - that is, they should each Co-operate.

Thus far, the equilibrium analysis has only provided a roundabout way of obtaining the same results that were obtained on a simpler, non-equilibrium analysis. From the point of view of the equilibrium analysis, that is because the SM/SM interaction and the UC/UC interaction in a level-0 PD with full information are interactions of the simplest
kind - where there is a unique equilibrium choice combination. Level-0 PDs involving two Constrained Maximizers are theoretically more interesting, because they do not produce a unique equilibrium choice combination.

The intuitive idea behind the CM ranking rule is that in a level-0 PD, a Constrained Maximizer is a Conditional Co-operator who is not inclined to take advantage of the other agent - and thus ranks [C,C] (mutual Co-operation) above all other choice combinations, including [D,C] (taking advantage of the other agent's Co-operation) - but neither is she inclined to be taken advantage of by the other agent - and thus ranks [C,D] (having her Co-operation taken advantage of by the other agent) below all other choice combinations.19 This leads to the following level-1 ranking rule:

    CM Ranking Rule (a level-1 rule) for an agent X:
    Level-0 Clause: Rank the possible choice combinations on the basis of egoist utility to X [level-0 clause is same as in the SM rule].
    Level-1 Clause: In a level-0 PD, rank the possible choice combinations so that mutual Co-operation ([C,C]) is ranked above all other possible choice combinations and the combination in which X Co-operates and the other agent Defects ([C,D]) is ranked below all other possible choice combinations.20

Figure 3 illustrates the final rankings of two CM agents in a situation with full information, when their level-0 rankings generate a level-0 PD (as illustrated in Figure 1).

                             CM Agent Y
                           C        D
    CM Agent X   C       (4,4)    (1,2)
                 D       (2,1)    (3,3)

Figure 3: The final (level-1) rankings of two Constrained Maximizers in a level-0 PD (as illustrated in Figure 1) with full information.

Based on these rankings, the two choice combinations [C,C] and [D,D] are individually stable for X; and the same two choice combinations are individually stable for Y. Thus, in this case, [C,C] and [D,D] are both equilibrium choice combinations. Although there are no generally agreed upon principles of equilibrium analysis for resolving all decision situations with multiple equilibria, this case is as uncontroversial as such a situation can be. The idea behind the solution is a simple one: If either X or Y individually could choose between the two combinations [C,C] and [D,D], they would both
choose [C,C]. Moreover, if either X or Y could transfer her choice to the other, so that one person could choose for both, they would both be willing to let the other agent choose for both of them. Because both X and Y have full information, they both realize this about each other. This information enables them to rely on each other to do their part to bring about the equilibrium combination favored by both, fC,C], by each choosing to Co-operate. The principle involved can be stated as follows: Joint Pareto Principle. In a two-person strategic interaction with full information in which there is more than one equilibrium combination of choices, if there is one equilibrium combination that both agents rank above all the others, then each agent must choose an act compatible with the realization of that equilibrium combination.21 The Joint Pareto Principle explains why two agents employing CM ranking rules would Co-operate in a level-0 PD with full information. There are two equilibrium combinations, [C,C] and [D,D]; and both agents rank [C,C] above [D,D]. Thus, the Joint Pareto Principle requires that they both act so as to produce the Pareto-superior equilibrium combination [C,C]. And therefore, they must both Co-operate. Thus, the equilibrium analysis solves the special indeterminacy problem for the Constrained Maximization rule. If Constrained Maximizers use the CM ranking rule, they will Co-operate with each other in a level-0 PD with full information. Before showing how the equilibrium analysis solves the general indeterminacy problem, it is useful to summarize the results to this point. In a level-0 PD - that is, in a two-person interaction where the two agents' default rankings produce a PD, as illustrated in Figure 1 - for the context-independent SM rule, the default rankings are the final rankings, but the context-sensitive UC and CM rules alter their default rankings to produce different final (level-1) rankings, as illustrated in Figures 2 and 3. Each agent's final rankings determine which choice combinations are individually stable for that agent. Figure 4 shows which of the possible choice combinations in a level-0 PD with full information are individually stable for agent X and which are individually stable for agent Y, as a function of the ranking rules, SM, UC, or CM. Because, in each case, there are two individually stable choice combinations for each agent, Figure 4 also shows their relative ranking (where the relevant ranking rule determines it). I have also described above how the agents' ranking rules determine which choice combinations are equilibrium combinations. The equilibrium combinations are simply the choice combinations that are individually stable for both agents. Using the information summarized in
    Ranking Rule                      Individually Stable            Individually Stable
                                      Choice Combinations for X      Choice Combinations for Y
    SM (Straightforward Maximizer)    [D,C] > [D,D]                  [C,D] > [D,D]
    UC (Unconditional Co-operator)    [C,C], [C,D]                   [C,C], [D,C]
    CM (Constrained Maximizer)        [C,C] > [D,D]                  [C,C] > [D,D]

Figure 4: Individually stable choice combinations, and their relative rankings (if determined), for agents X and Y in a level-0 PD with full information as a function of the ranking rules SM, UC, and CM.
Figure 4, it is possible to construct a table of equilibrium choice combinations in a level-0 PD with full information, as a function of the agents' ranking rules. See Figure 5.

                                      Agent Y's Ranking Rule
                                  SM          UC          CM
    Agent X's       SM          [D,D]       [D,C]       [D,D]
    Ranking Rule    UC          [C,D]       [C,C]       [C,C]
                    CM          [D,D]       [C,C]       [C,C] and [D,D]
                                                        (both agents rank
                                                         [C,C] > [D,D])

Figure 5: Equilibrium Choice Combinations in a PD with Full Information as a Function of the Ranking Rules of the Two Agents X and Y.

Of the nine combinations of ranking rules for agents X and Y in a PD that are illustrated in Figure 5, eight of them are of the simplest type, because there is only one equilibrium choice combination. In those eight cases, the Uniqueness Principle jointly determines both agents' acts. Note that, as expected, when an SM agent X interacts with a UC agent Y, the sole equilibrium combination is the combination [D,C], in which X takes advantage of Y's Co-operativeness. But when an SM agent X interacts with a CM agent Y, the sole equilibrium combination is the combination [D,D], in which neither takes advantage of the other, but they forego the benefits of mutual Co-operation. And when a CM
agent X interacts with a UC agent Y, the sole equilibrium combination is [C,C], mutual Co-operation, even though X could take advantage of Y if she were so inclined. A CM agent is a Conditional Co-operator, because she Co-operates with other agents rather than take advantage of them, but Defects with other agents rather than be taken advantage of by them. The only interaction represented in Figure 5 in which there is more than one equilibrium choice combination is the case in which a CM agent interacts with another CM agent. In that case, there are two equilibrium choice combinations, [C,C] and [D,D]. However, because both agents rank [C,C] above [D,D], the Joint Pareto Principle selects the equilibrium [C,C], and both agents will Co-operate.22 Having shown how the generalized equilibrium analysis solves the special indeterminacy problem, I turn now to the problem of general indeterminacy - that is, the problem of guaranteeing that the CM rule Co-operates with the potential infinity of other non-equivalent Conditionally Co-operative rules. The solution is simple. In a level-0 PD with full information - that is, in any two-person interaction with full information in which the level0 rankings generate a PD - the CM rule will generate a final ranking with two individually stable choice combinations, [C,C] and [D,D], with [C,C] ranked above [D,D]. Thus, mutual Co-operation in a PD with full information is guaranteed whenever: (1) a CM agent X interacts with an agent Y whose ranking rule generates a final ranking in which the outcomes [C,C] and [D,C] are individually stable (i.e., a final ranking that mimics the UC level-1 ranking). In such a case, [C,C] is the unique equilibrium choice combination. Thus, CM Co-operates with any of the potentially infinite number of non-equivalent rules that are Unconditionally Cooperative with CM, because at least in an interaction with CM (though perhaps not in interactions with all other rules), they produce a final ranking that mimics the level-1 UC ranking. And, whenever: (2) a CM agent X interacts with an agent Y whose ranking rule generates a final ranking in which [C,C] and [D,D] are both individually stable and [C,C] is ranked above [D,D] (i.e., generates a final ranking that mimics the level-1 CM ranking). In such a case, the equilibrium combination [C,C] is picked out by the Joint Pareto Principle. Thus, CM Co-operates with any of the potentially infinite number of non-equivalent rules that are Conditionally Co-operative with
CM, because at least in an interaction with CM (though perhaps not in interactions with all other rules), they produce a final ranking that mimics the level-1 CM ranking. Because the CM rule Co-operates with every one of the potentially infinite number of non-equivalent rules of these two kinds, the equilibrium analysis solves the general indeterminacy problem. As an illustration, consider Danielson's Unconditional Co-operator Protector (DUCP) rule. Because, as stated above, the DUCP rule must be liable to input-output loops, there is no way to translate Danielson's rule directly into the inductive framework for ranking rules. However, it is possible to define a level-2 ranking rule that behaves as Danielson intended the DUCP rule to behave with the SM, UC, and CM rules. I refer to it as the UCP ranking rule. Like the other rules discussed above, the UCP rule's default ranking is an egoist ranking. The UCP's level-1 ranking can be any ranking that would Co-operate with UC (i.e., it can either mimic the level-1 ranking of UC or it can mimic the level-1 ranking of CM). It does not matter which, because at level-1, the UCP rule would simply be waiting to "see" how the other agent's rule would respond to the level-0 PD. If the other rule's level-1 ranking is one that would produce Co-operation with the UC rule, then at level-2, the UCP rule would mimic the level-1 ranking of the CM rule and generate a final ranking in which [C,C] and [D,D] are both individually stable and [C,C] is ranked above [D,D]. However, if the other rule's level-1 ranking is one that would not produce Co-operation with UC, then at level-2, the UCP rule would mimic the level-0 ranking of the SM rule - that is, revert to its default ranking (in which Defecting dominates Co-operation). Thus, the UCP rule would Co-operate with the UC rule, the CM rule, and with the UCP rule itself, because all three rules generate level-1 rankings that would lead UCP to mimic the level-1 CM ranking in its final level-2 ranking. But the UCP rule would not Co-operate with any rule that did not generate a level-1 ranking that would produce Co-operation with UC.23 Given that so many intuitively statable rules seem to be liable to input-output loops, and thus to lead to problems of incoherence or indeterminacy, it is somewhat surprising that it is possible to state a rule of Constrained Maximization (or Conditional Co-operation) that does not permit input-output loops, and thus does not generate any of the above-discussed problems of incoherence or indeterminacy. The structure of its level-1 ranking guarantees that in level-0 PDs with full information, the CM rule will Co-operate with any rule willing to Co-operate with it, and Defect with any rule that Defects with it.24
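The generalized equilibrium analysis just described can be made concrete with a short computational sketch. This is not part of the paper's formal apparatus: the numeric rankings below are merely illustrative members of the SM, UC, and CM families of final rankings, and the two selection principles are implemented only for the two-person, pure-act case discussed in the text.

```python
from itertools import product

ACTS = ("C", "D")

# Illustrative final rankings, one member of each family; rule[(own, other)]
# is the agent's ranking of the combination in which she plays `own` and the
# other agent plays `other`.  The numbers are ordinal stand-ins only.
SM = {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 4, ("D", "D"): 2}  # egoist default
UC = {("C", "C"): 4, ("C", "D"): 3, ("D", "C"): 2, ("D", "D"): 1}  # own C a best reply to anything
CM = {("C", "C"): 4, ("C", "D"): 1, ("D", "C"): 3, ("D", "D"): 2}  # [C,C] and [D,D] stable, [C,C] on top

def individually_stable(rule, own, other):
    # A combination is individually stable for an agent iff her act is a best
    # reply to the other's act under her own final ranking.
    return rule[(own, other)] == max(rule[(a, other)] for a in ACTS)

def equilibria(rule_x, rule_y):
    # Equilibrium choice combinations [X's act, Y's act]: stable for both agents.
    return [(x, y) for x, y in product(ACTS, ACTS)
            if individually_stable(rule_x, x, y)
            and individually_stable(rule_y, y, x)]

def joint_choice(rule_x, rule_y):
    eqs = equilibria(rule_x, rule_y)
    if len(eqs) == 1:          # Uniqueness Principle
        return eqs[0]
    for e in eqs:              # Joint Pareto Principle
        if all(rule_x[e] > rule_x[f] and rule_y[(e[1], e[0])] > rule_y[(f[1], f[0])]
               for f in eqs if f != e):
            return e
    return None                # residual indeterminacy, not resolved by these two principles

rules = {"SM": SM, "UC": UC, "CM": CM}
for (nx, rx), (ny, ry) in product(rules.items(), rules.items()):
    print(f"X={nx}, Y={ny}: {joint_choice(rx, ry)}")
```

Running the loop reproduces the pattern of Figure 5, including the CM-CM case, where the Joint Pareto Principle selects [C,C] from the two equilibrium combinations.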
6. Conclusion

The results reached in this paper are based on numerous simplifying assumptions, and thus are primarily of theoretical rather than practical interest. One simplifying assumption that can easily be relaxed is my assumption that all the ranking rules of interest generate egoist default rankings. Because the CM ranking rule that I discuss above incorporates an egoist default ranking, it would be more exact to describe that rule as a ranking rule for Constrained Maximization of Egoist Utility (CMEU). Other level-0 clauses can be combined with the level-1 clause of the CM ranking rule stated above to produce other members of the CM family of ranking rules. For example, if a level-0 clause that ranks choice combinations on the basis of their egoistic utility to others (their altruistic utility) is combined with the level-1 clause of the CM rule above, the result is a ranking rule for the Constrained Maximization of Altruistic Utility (CMAU). It seems to me that any level-0 clause could be combined with the level-1 clause of the CM rule stated above to produce a member of the family of Constrained Maximization ranking rules. Thus, it seems to me that what Constrained Maximizers have in common is that they have the same level-1 clause. I have also simplified the discussion by considering only a small number of ranking rules and only a small number of well-behaved interactions involving those rules. A complete theory would have to cover all possible ranking rules and all possible decision situations. Such a theory would have to include consideration of mixed acts (i.e., probabilistic mixtures of pure acts), which I have been able to neglect due to a judicious selection of ranking rules and decision situations. A complete theory would also have to face what is undoubtedly the major unsolved problem for the equilibrium analysis in non-co-operative game theory - that is, the fact that in most interactions there are multiple equilibrium choice combinations, and the equilibrium combinations are rarely so well-behaved as to be resolvable by the Joint Pareto Principle or other agreed upon principles of joint act selection for cases with multiple equilibria. However, I do not believe that these cases of multiple equilibria are cause for pessimism for a moral equilibrium theory. In the first place, as I mentioned in the Introduction, indeterminacy per se is not a problem for the moral reductionist project. In most decision situations, our considered moral judgments are indeterminate in the sense that they allow that more than one act is morally permissible. An extensionally adequate rule of Constrained Maximization as Conditional Co-operation must be similarly indeterminate.
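To illustrate the claim above that members of the CM family differ only in their level-0 clause while sharing a level-1 clause, here is a minimal, assumption-laden sketch. The PD test, the particular numeric rankings, and the names cm_rule, CMEU, and CMAU are illustrative choices, not the paper's formal definitions.

```python
ACTS = ("C", "D")

def is_level0_pd(u_x, u_y):
    # Taylor-style ordinal test on the two agents' level-0 rankings:
    # for each agent D strictly dominates C, yet each ranks [C,C] above [D,D].
    def d_dominates(u):
        return all(u[("D", a)] > u[("C", a)] for a in ACTS)
    return (d_dominates(u_x) and d_dominates(u_y)
            and u_x[("C", "C")] > u_x[("D", "D")]
            and u_y[("C", "C")] > u_y[("D", "D")])

def cm_rule(level0):
    # A member of the CM family: the supplied level-0 clause plus the shared
    # level-1 clause, which re-ranks only when the level-0 rankings generate a PD.
    def final_ranking(other_level0):
        if not is_level0_pd(level0, other_level0):
            return dict(level0)  # the default (level-0) ranking stands
        # Shared level-1 clause: [C,C] and [D,D] individually stable,
        # with [C,C] ranked above [D,D] (one of many admissible rankings).
        return {("C", "C"): 4, ("D", "C"): 3, ("D", "D"): 2, ("C", "D"): 1}
    return final_ranking

egoist = {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 4, ("D", "D"): 2}
altruist = {(own, oth): egoist[(oth, own)] for (own, oth) in egoist}  # ranks by utility to the other
CMEU = cm_rule(egoist)    # Constrained Maximization of Egoist Utility
CMAU = cm_rule(altruist)  # Constrained Maximization of Altruistic Utility

print(CMEU(egoist))  # two egoist defaults generate a PD, so the shared level-1 clause takes over
```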
Second, because moral agents employ higher-level ranking rules, I believe that many decision situations in which the agents' default rankings generate multiple equilibrium combinations are resolved by triggering higher-level rankings that are better behaved. In this paper, I have focused on one case in which dissatisfaction with the results of choices based on the agents' default ranking can lead them to favor a higher-level rule that alters the default ranking. It is precisely this sort of consideration that might motivate an agent to adopt the CM ranking rule. Adopting the CM ranking rule leads an agent to alter her rankings of the alternative choice combinations in an interaction in which the agents' default rankings generate a PD, with the result that the combination [C,C], which was not even an equilibrium combination under the default rankings, can be an equilibrium in the final rankings. My impression is that many decision situations with multiple equilibria are resolved in very much this way. The inductive framework for ranking rules introduced above provides the basis for a new unification of the two branches of game theory: co-operative game theory and non-co-operative game theory. From the point of view of non-co-operative game theory, co-operative game theory is often interpreted as the theory of rational contract, under conditions where it is common knowledge that any contract that is mutually agreed to will be enforced by sufficiently strong sanctions. I suggest rather that co-operative game theory can be interpreted as the theory of a certain class of higher-level ranking rules, or that it can at least be extended to be a theory of these higher-level ranking rules, which include the moral ranking rules.25 If human beings were all Straightforward Maximizers of Egoistic Utility, then these higher-level rules would only be needed to explain contracts and other agreements backed up by sanctions. But I believe that they explain much more than that. For example, there have been very many experiments done to test human behaviour in various types of Prisoner's Dilemma situations. These experiments consistently report a substantial amount of Co-operation, even in cases in which it is quite clear that there will be no sanctioning of Defectors.26 I believe that there is an interesting higher-level ranking rule of Constrained Maximization as Conditional Co-operation that explains much of the reported Co-operation. The main goal of a moral equilibrium theory is to attempt to formulate the higher-level ranking rules that explain such behaviour. The most significant simplifying assumption that I have employed in this paper is the full information assumption, because no human being will ever be in a strategic interaction with full information. To relax the full information assumption would require joint act selection
principles that could be applied in situations of uncertainty, including uncertainty about the other agents' rankings of the relevant choice combinations. I believe that here a Bayesian approach holds promise.27 But such situations are much too complex to attempt to analyze here. Of course, in the absence of a complete theory, I cannot guarantee that the generalized equilibrium analysis that I propose will not give rise to indeterminacies - for example, in situations of multiple equilibria or in situations of uncertainty - that do compromise extensional adequacy. All that I can do is to recommend that we try it and find out. At the very least, there is one family of indeterminacy problems - including both the special and general indeterminacy problems discussed above - that can be solved by a generalized equilibrium analysis, because it avoids decision rules that permit input-output loops. I believe that this is the first step toward the development of an extensionally adequate reductionist moral equilibrium theory.28
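One possible reading of the Bayesian direction gestured at here, and spelled out a little further in note 27, is that under uncertainty about the other agent's rule a CM agent would Co-operate just in case the expected utility of Co-operating is at least the mutual-Defection payoff. The sketch below encodes that reading; the reading itself, the function name, and the payoff numbers are assumptions, not a worked-out theory.

```python
def cm_choice_under_uncertainty(p_other_cooperates, u):
    # One reading of note 27: Co-operate when the expected utility of C is at
    # least the mutual-Defection payoff.  Both the reading and the numbers
    # used below are illustrative assumptions.
    threshold = u[("D", "D")]
    eu_c = (p_other_cooperates * u[("C", "C")]
            + (1 - p_other_cooperates) * u[("C", "D")])
    return "C" if eu_c >= threshold else "D"

u = {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 4, ("D", "D"): 2}  # illustrative PD payoffs
print(cm_choice_under_uncertainty(0.8, u))  # C  (expected utility 2.6 >= 2)
print(cm_choice_under_uncertainty(0.3, u))  # D  (expected utility 1.6 < 2)
```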
Acknowledgments

This is a substantially revised version of the paper I presented at the Conference on Modeling Moral and Rational Agents at Simon Fraser University, Vancouver, B.C. on February 11-12, 1994. Ironically, it was not until after this paper had reached the copy-editor that I became aware of Peter Kollock's paper, "Transforming Social Dilemmas: Group Identity and Co-operation," which also appears in this volume. Unfortunately, I can do little more than mention the connection between Kollock's paper and my own in this brief note. Upon reading the two papers, it will be evident that the distinction that Kollock employs between a "given" decision matrix and an "effective" decision matrix is precisely the distinction that I have tried to clarify with my inductive framework for higher-level decision rules. I believe it will also be clear that unless Kollock employs an inductive framework for higher-level decision rules of the kind that I develop in this paper, the distinction between "given" and "effective" decision matrices will lead to the sorts of indeterminacy and incoherence that I illustrate in this paper. The paper has benefited from discussions with many people, especially Elijah Millgram and Michael Taylor. I also received helpful comments on an earlier version from two anonymous referees. Work on this paper was supported in part by a grant from the Graduate School Fund of the University of Washington and in part by a mentoring grant from the College of Arts and Sciences of the University of Washington.
Notes

1 It is important to distinguish moral reductionism from other types of reductionism in philosophy. Acceptable moral reductionist principles must
distinguish morally permissible from morally impermissible acts purely on the basis of their factual, non-normative properties. But acceptable factual properties are not themselves limited by a further requirement of reducibility to some favored level of scientific description (e.g., physical theory). Thus, for example, I assume that information about the beliefs, preferences, and intentions of the agent performing an act is information that a reductionist moral principle can use to distinguish morally permissible from morally impermissible acts. 2 The original Prisoner's Dilemma is from Luce and Raiffa (1957, p. 95). To say that all of the relevant facts about the decision situation are common knowledge is to say that each agent knows all the relevant facts; each agent knows that each agent knows all the relevant facts; each agent knows that each agent knows that each agent knows all the relevant facts; etc. In the rational choice literature, perfect information usually includes the common knowledge that all agents are rational. Full information is my generalization of the perfect information assumption, because it includes common knowledge of each agent's decision rule, with no assumption that their rules are "rational." This corresponds to Gauthier's assumption that the agents are transparent, although it is not exactly the same thing. As Gauthier (1986) defines the term, if an agent is "transparent" then her dispositions are common knowledge (pp. 173-74). Many authors have discussed the problematic relation between dispositions and actions on Gauthier's account (e.g., McClennen (1988) and Smith (1991)). I avoid these difficulties by replacing common knowledge of dispositions with common knowledge of decision rules. 3 It should be noted that this is a simplified representation of the actual decision situation. A complete representation would include all the acts that the agents regard as available alternatives. Thus, even if C and D are the only pure (non-randomized) acts available to the two agents, a complete matrix for the PD would also require rows and columns for all available mixed acts - that is, all available randomizations over the available pure acts. To simplify the exposition, in this paper I have confined my attention to situations in which all the relevant decision rules yield pure act solutions. But, in most interesting situations, mixed acts cannot be ignored. 4 Following the usual practice, I use numerical utility assignments to represent the agent's preferences over or rankings of the relevant choice combinations. I assume a cardinal assignment of utilities (unique up to a transformation that preserves ratios of utility intervals) to make calculations of expected utility meaningful. It should be noted that the Figure 1 matrix is only one of a potentially infinite number of non-equivalent matrices with a pay-off structure characteristic of a Prisoner's Dilemma. There is disagreement in the rational choice literature over the exact structure of pay-offs (i.e., of utility assignments) that is necessary for a PD. Taylor (1987, p. 13)
requires only the ordinal relations among the pay-offs illustrated in Figure 1; Axelrod (1984, p. 10) includes an additional constraint - that for each agent the average of the highest and lowest pay-offs not exceed the second-highest pay-off. Though I favor Taylor's more liberal condition as a defining condition of PDs, in this paper, to simplify the exposition, I restrict my attention to PDs that satisfy Axelrod's stricter condition, also. (In cases in which the Axelrod condition is not satisfied, it would be necessary to consider mixed (i.e., randomizing) acts, because two agents who decided to randomize and choose C with probability 1/2 might be able to secure a higher expected utility to each than the utility of mutual Co-operation [C,C]. Although a full moral equilibrium theory would need to include consideration of mixed acts, to keep the exposition simple, I have focused on cases in which they can be ignored.) In order to make possible a distinction between default and higher-level utility assignments (or preferences), which I introduce in Section 5 below, here I simply mention that the utility assignments in Figure 1 are assumed to reflect the agent's default utility assignments (representing their default preferences), and leave it open whether those default utility assignments (or preferences) might be modified by deliberation. Finally, I assume that no agent has any extrinsic information about the other - that is, no information about the other except information about their current situation and motivations. By adding extrinsic information, one can overcome what I take to be the prima facie moral intuition in favor of mutual Co-operation in a PD. For example, if the other agent were known to be a murderer, there might be moral reasons to punish the other agent by not Co-operating with her, even if the other agent were willing to reciprocate Co-operation. I wish to avoid all such complicating factors here. 5 Gauthier (1986) actually defines Constrained Maximization in the context of what he takes to be a complete theory of morality. Gauthier's complete theory would place additional constraints on the kinds of PD situations in which it is appropriate to act on the CM rule (namely, where the utilities of each agent at least approach what they would expect from Gauthier's principle of minimax relative concession). In this paper, I abstract away from the further details of Gauthier's theory, because Gauthier claims that in at least some such situations, his account would recommend that Constrained Maximizers Co-operate with each other in a One-Shot, Two-Person Prisoner's Dilemma. That claim can be discussed independently of the other details of his theory. Also, Gauthier states the Constrained Maximization rule in a general form that would apply not only to PDs involving two agents, but to situations analogous to PDs involving more than two agents - that is, n-person PDs (1986, p. 167). It is in its application to n-person PDs that Gauthier's Constrained Maximization principle shows most clearly the difference
between the morality of Conditional Co-operation and individual rationality. I had intended to include a discussion of n-person PDs in this paper, but the discussion of Two-Person PDs crowded it out. For the Two-Person PD, Gauthier has simplified the statement of his Constrained Maximization rule: A Constrained Maximizer is "a conditional co-operator; she co-operates with those whom she expects to co-operate with her" (1988, p. 399). The statement of the GCM rule that I adopt in the text is based on this simplified statement of Gauthier's principle. An interesting question, which I do not pursue here, is whether the conception of morality as Conditional Co-operation would require or merely permit but not require Defecting with those whom one expects to Defect. I do not mean to be settling that issue with the GCM rule stated in the text, but merely setting it aside. It simplifies the exposition to formulate the GCM rule so that it requires Defecting with someone whom one expects to Defect. The problems of indeterminacy to be discussed below would also arise if the GCM rule were formulated so that it permitted but did not require Defecting with someone whom one expected to Defect. 6 Even before Smith, Campbell (1988) had raised this as a problem for Gauthier's account of Constrained Maximization. I believe Smith has given the most forceful statement of the problem, including showing that Campbell's proposed solution to the problem is not satisfactory. 7 This sort of solution has also been proposed by Howard (1988). 8 Because Danielson's rule uses a quotational test, there is also a problem in guaranteeing that his rule will Co-operate with the potential infinity of rules logically equivalent but not identical (quotationally) to the DCM rule. Danielson (1992, pp. 132-33) is aware of this problem. I ignore it, because the problem of the potential infinity of non-equivalent Conditionally Co-operative rules is of much greater significance for the moral reductionist project. 9 I should also mention that although I discuss potential indeterminacy problems - and later, potential incoherence problems - for English-language analogues to Danielson's quotational rules, the decision rules that Danielson formulates in programming language are neither indeterminate nor incoherent. They fail the test of extensional adequacy because, though they determinately Co-operate with the other rules that pass their quotational test, they determinately Defect with the potentially infinite number of Conditionally Co-operative rules that do not pass their quotational test. For example, a Quotational CM (QCM) rule would Defect with a Quotational UCP (QUCP) rule, because the QUCP rule would fail the QCM rule's quotational test for a UC rule (QUCP is not an Unconditional Co-operator), and it would fail the QCM rule's quotational test for identifying the QCM rule itself. The only way that Danielson could solve this problem with a quotational rule would be, for example, to incorporate into a Quotational CM
rule a different quotational test for each different type of Conditional Co-operator, including, for example, a quotational test for the QUCP rule. But because there are a potentially infinite number of different, non-equivalent Conditionally Co-operative rules that a QCM agent could potentially Co-operate with - and, intuitively, should Co-operate with - it is in principle impossible for Danielson's quotational method of defining Conditionally Co-operative strategies to be extensionally adequate. Even artificial agents are limited to decision rules of finite length. 10 I assume that the two alternatives, Co-operate and Defect, logically exhaust the acts open to each agent, as they would, if, for example, any failure to Co-operate constituted Defection (so that there was no way to avoid either Co-operating or Defecting). Similarly, I assume that the following three alternatives logically exhaust X's possible cognitive attitudes toward the other agent's choice: (1) X expects the other agent to Co-operate; (2) X expects the other agent to Defect; and (3) X cannot determine the other agent's choice. 11 A precise definition of rules that permit input-output loops can be given (though, for present purposes, it seems prudent to avoid the additional complexity that would be required to carefully distinguish use from mention): For an agent X, let R be a rule that determines whether or not X has some property F (e.g., the property of choosing to Co-operate) on the basis of information possessed by X. (The relevant information is the input to the rule and the output is either F(X) or -F(X), which I abbreviate as ± F(X).) Rule R can generate input-output loops just in case in order to determine ± F(X), it requires information ± I, which X cannot determine without determining ± F(Y) (for some agent Y); or without determining some information that logically implies ± F(Y). 12 In order to solve an analogous problem of indeterminacy for act utilitarianism, Regan (1980) develops a theory which he refers to as co-operative utilitarianism. Regan's theory stands as a monument to the inevitable complexity of thought that is generated by strategic interaction, and to the problems generated by rules that permit input-output loops in strategic interactions. The complexity of Regan's theory is due to his quite clear understanding of the difficulties, and to his admirable unwillingness to try to cover them up. Thus, after what can only be described as a Herculean effort to solve the indeterminacy phenomenon with a decision rule that permits input-output loops (his procedure P), Regan himself admits that the result is not completely successful (pp. 161-62). The reason is that in certain situations Regan's decision rule has no determinate stopping point. Regan himself acknowledges that the reasoning of agents employing his decision rule has the potential to loop back on itself endlessly and that the decision rule "does not give definite instructions to the group as a whole about when to stop" (p. 161). He downplays this indeterminacy problem because he
believes that it will be a feature of "any plausible consequentialist theory" (p. 162). In this he is mistaken. The equilibrium analysis that I employ below would also solve Regan's indeterminacy problem, as I explain briefly in note 28. 13 For a good critical introduction to equilibrium analysis in non-co-operative game theory that is not mathematically demanding, see Kreps (1990). For a mathematically more rigorous introduction to non-co-operative game theory, see Fudenberg and Tirole (1991). Non-co-operative game theory is by no means complete, and there is much disagreement on the details of the theory, but I believe that it is possible to abstract away from the disagreements to show how, in principle, an equilibrium analysis has the potential to solve indeterminacy problems for theories of strategic interactions generally, including moral theories. I should note that one of the ways that I would propose to generalize the equilibrium analysis of non-co-operative game theory is that I would not assume that mixed acts are ranked according to their expected utility. For example, someone who had a moral objection to gambling would not be expected to rank the acquisition of a lottery ticket according to its expected utility. 14 Sen (1974, pp. 58-59) credits Harsanyi (1955) with the idea that one's "ethical preferences" might be derived from one's initial or "subjective" preferences. 15 I should mention that I adopt only the terminology from McClennen. His use of context-sensitive rules is quite different from mine. 16 When I say that a rule's level-n ranking may be based on information about the other rules' rankings below level-n, I mean to include not only information about what those rankings are in the current situation, but also information about what they would be in alternative, hypothetical situations. This information does not make it possible to generate input-output loops. 17 I should note that the inductive structure of the ranking rules places constraints on the information that can be used in the generation of higher-level rankings, but does not constrain the availability of information itself. Thus, an agent with a level-n ranking rule might very well be able to figure out the final ranking of an agent with a higher-level ranking rule - for example, a level-m ranking rule (m ≥ n). The inductive structure of ranking rules simply prevents her from using information about the other agent's level-m ranking (m ≥ n) in determining her own final (level-n) ranking. 18 Because the level-1 clause of the UC ranking rule stated in the text does not completely determine the level-1 ranking, but only constrains it, it is useful to think of the UC rule not as a single rule, but as a family of ranking rules that satisfy the conditions stated in the text. The ranking of choice combinations in Figure 2 then is an example of one of the many possible rankings consistent with the level-1 clause of the UC ranking rule. 19 It would be possible to relax this constraint slightly. It does not seem to me that a CM agent X must rank being taken advantage of [C,D] below taking
advantage of the other agent [D,C]. I ignore this complication in the text in order to simplify the statement of the CM ranking rule. 20 As was true of the UC ranking rule, clause-1 of the CM ranking rule does not determine a unique ranking but a family of rankings consistent with it. Figure 3 then shows only one of many possible rankings of the choice combinations that are consistent with clause-1 of the CM ranking rule. 21 The interaction of two CM agents illustrated in Figure 3 has the structure of an Assurance Game (Sen 1974, pp. 59-60). I follow Taylor in holding that there is a determinate solution to an Assurance Game when there is full information (1987, p. 19). When there is not full information, things get more complicated, especially if there are more than two agents involved, in which case there is some reason to think that the preferred equilibrium should be coalition-proof. See Fudenberg and Tirole for examples and discussion (1991, pp. 20-22). Also, as Brian Skyrms has mentioned to me in conversation, when there is any non-zero probability that an agent will not choose as prescribed by the Joint Pareto Principle, then considerations of risk dominance become important. See, for example, Harsanyi and Selten (1988, ch. 5). 22 Not all combinations of ranking rules are as well behaved as these. For example, consider a level-1 Sado-Masochistic (S&M) ranking rule for an agent X: In a level-0 PD, the only two individually stable outcomes in the S&M rule's level-1 ranking are the outcomes in which either X is taken advantage of by the other agent Y or X takes advantage of Y (i.e., [C,D] and [D,C]). In a level-0 PD interaction between a CM agent and an S&M agent, there would be no equilibrium combinations of pure acts. Mixed strategies would have to be considered. It is also possible to define a Conditional Defection (CD) ranking rule for an agent X: In a level-0 PD, both [C,C] and [D,D] would be individually stable choice combinations in the CD rule's level-1 ranking, but [D,D] would be ranked above [C,C]. In a level-0 PD interaction between a CM and a CD agent, both [C,C] and [D,D] would be equilibrium choice combinations, but the Joint Pareto Principle would not apply, because the two agents would not agree on the relative ranking of the two equilibrium combinations. Again, mixed strategies would have to be considered. I avoid such complications here. 23 Of course, the UCP ranking rule defined in the text can be "deceived" into Conditional Co-operation by suitably "devious" rules - for example, by a UCP Deceiver (UCPD) Rule of level-3 that in a level-0 PD generates rankings at level-1 and level-2 that would Co-operate with UC, but at level-3 is Conditionally Co-operative (i.e., it mimics the CM level-1 rankings) only if the other rule is a lower-level rule whose final ranking is itself Conditionally Co-operative; otherwise, at level-3 it reverts to the level-0 SM ranking. In a level-0 PD with full information, the UCP rule in the text would Co-operate
with this UCPD rule, even though the UCPD rule would not Co-operate with the UC rule (it only "pretends" that it will). This is just a reminder that, for the reasons discussed above, there is no way to determinately formulate an "infallible" Unconditional Co-operator Protector rule, for such a rule would generate input-output loops. 24 Note that, by contrast, in cases of full information, no rule, no matter how high its level, can "deceive" the CM rule into Co-operating with it while it Defects. In cases of full information, the CM rule Co-operates only with rules that Co-operate with it. Note also that Danielson's (1992, p. 89) favored rule of rational morality, his rule of Reciprocal Co-operation (RC) (Co-operate just in case one's own Co-operation is both necessary and sufficient for the other agent's Co-operation) unavoidably generates input-output loops. It would be possible to formulate a level-2 ranking rule that approximated Danielson's RC Rule. But the approximation would not be a very good one, because the kind of test required by RC (that its Co-operation be both necessary and sufficient for the other agent's Co-operation) is an open invitation to input-output loops. I confine my discussion to level-0 PDs because the CM rule cannot generate PDs at any level above level-0, unless the other agent has a rule that unilaterally transforms a non-PD at level-0 into a PD at a higher level. The CM rule will not Conditionally Co-operate in such higher-level PDs, but, from the point of view of the moral reduction project, this does not seem to be a problem for the CM rule. I find myself unable to imagine a plausible decision rule that would unilaterally transform a non-PD ranking into a PD ranking, but even if there were such a rule, it seems to me that a rule that Conditionally Co-operated with such rules would invite being taken advantage of by them. As I conceive of it, a morality of Conditional Co-operation requires only that one not take advantage of others; it does not require that one leave oneself open to being taken advantage of by them. 25 For a good introduction to co-operative game theory, see Shubik (1982). Shubik also discusses non-co-operative game theory. Although it seems to me that one way to make sense of co-operative game theory is as a theory of higher-level ranking rules for situations in which there is a possibility of producing joint benefits, I do not mean to imply that any existing co-operative game theory is extensionally adequate for moral theory. There is still much work to be done. 26 For examples of some of these experiments and references to others, see Frank (1988, ch. 7). Frank also discusses some of the large number of other experiments that report co-operative or unselfish behaviour in situations in which there is no sanctioning of non-co-operative or selfish behaviour. I hope that a model of morality as Conditional Co-operation can be extended to explain many of these findings also. 27 In cases of less than full information, I believe that a CM agent would base her choice on what might be termed a calculation of joint Expected Utility (to
avoid input-output loops). Intuitively, a CM agent in a situation of less than full information would choose C if its joint Expected Utility were at least 2 (the utility to each Defector when both agents Defect). I do not have a fully worked out theory of joint Expected Utility. The Bayesian approaches of Harsanyi and Selten (1988) and of Skyrms (1992) are interesting examples of how to incorporate Expected Utility calculations into theories of strategic interaction, but I believe that their accounts would only be appropriate for situations in which it was common knowledge that all agents were Straightforward Maximizers, which, I believe, are the kind of situations that they intend to be modeling. 28 In an earlier draft of this paper, I argued that the generalized equilibrium analysis that I advocate here would solve the indeterminacy problem that Regan (1980) hoped to solve with his co-operative utilitarianism. I have already mentioned that Regan's own attempted solution was unsuccessful, because he used a decision rule that permitted input-output loops. See note 12. Space considerations led me to delete the discussion of Regan's indeterminacy problem from this paper. The main idea of the solution is simply to formulate Act Utilitarianism as a ranking rule that ranks choice combinations on the basis of their total utility, and then to employ the joint act selection rules of the generalized equilibrium analysis. Also in my oral presentation at the conference on Modeling Rational and Moral Agents, I briefly discussed the n-person Prisoner's Dilemma. The present analysis generalizes naturally to provide an account of Conditional Co-operation in the n-person PD with full information, and thus to provide a generalized CM ranking rule that corresponds to Gauthier's general characterization of Constrained Maximization (1986, p. 167), but again reasons of space prevent me from carrying out the generalization here.
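As a minimal illustration of the suggestion in note 28 (an assumption-laden sketch, not Regan's or Talbott's formal machinery), an Act Utilitarian ranking rule can be encoded by ranking each choice combination by its total utility and then applying the same individual-stability test used for the other ranking rules. With illustrative PD payoffs, two such agents have [C,C] as their unique equilibrium.

```python
from itertools import product

ACTS = ("C", "D")
# Illustrative level-0 PD payoffs, u[(own, other)]; the numbers are assumptions.
PD = {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 4, ("D", "D"): 2}

# Note 28's suggestion: rank each choice combination by its total utility.
AU = {(x, y): PD[(x, y)] + PD[(y, x)] for x, y in product(ACTS, ACTS)}

def stable(rank, own, other):
    # Individually stable: own act is a best reply to the other's act.
    return rank[(own, other)] == max(rank[(a, other)] for a in ACTS)

equilibria = [(x, y) for x, y in product(ACTS, ACTS)
              if stable(AU, x, y) and stable(AU, y, x)]
print(equilibria)  # [('C', 'C')] - the unique equilibrium in this PD instance
```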
References

Axelrod, Robert (1984). The Evolution of Cooperation. New York: Basic Books.
Campbell, Richmond (1988). Gauthier's theory of morals by agreement. The Philosophical Quarterly, 38: 342-64.
Danielson, Peter (1991). Closing the compliance dilemma: How it's rational to be moral in a Lamarckian world. In Peter Vallentyne (ed.), Contractarianism and Rational Choice (Cambridge: Cambridge University Press), pp. 291-322.
Danielson, Peter (1992). Artificial Morality. London: Routledge.
Frank, Robert H. (1988). Passions Within Reason. New York: W. W. Norton.
Fudenberg, Drew, and Jean Tirole (1991). Game Theory. Cambridge, MA: MIT Press.
Gauthier, David (1986). Morals By Agreement. Oxford: Clarendon Press.
Gauthier, David (1988). Moral artifice. Canadian Journal of Philosophy, 18: 385-418.
Harsanyi, John C. (1955). Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility. Journal of Political Economy, 63: 309-321.
Harsanyi, John C., and Reinhard Selten (1988). A General Theory of Equilibrium Selection in Games. Cambridge, MA: MIT Press.
Howard, J. V. (1988). Cooperation in the prisoner's dilemma. Theory and Decision, 24: 203-213.
Kant, Immanuel (1785). The Foundations of the Metaphysics of Morals. Tr. by Lewis White Beck (New York: Bobbs Merrill, 1959).
Kreps, David M. (1990). Game Theory and Economic Modelling. Oxford: Clarendon Press.
Luce, R. Duncan, and Howard Raiffa (1957). Games and Decisions. New York: John Wiley.
McClennen, Edward F. (1988). Constrained maximization and resolute choice. Social Philosophy and Policy, 5: 95-118.
Nash, J. F., Jr. (1951). Non-cooperative games. Annals of Mathematics, 54: 289-95.
Regan, Donald (1980). Utilitarianism and Co-operation. Oxford: Clarendon Press.
Sen, Amartya (1974). Choice, orderings, and morality. In Stephan Korner (ed.), Practical Reason (New Haven: Yale University Press), pp. 54-82.
Shubik, Martin (1982). Game Theory in the Social Sciences. Cambridge, MA: MIT Press.
Skyrms, Brian (1990). The Dynamics of Rational Deliberation. Cambridge, MA: Harvard University Press.
Smith, Holly (1991). Deriving morality from rationality. In Peter Vallentyne (ed.), Contractarianism and Rational Choice (Cambridge: Cambridge University Press), pp. 229-53.
Taylor, Michael (1987). The Possibility of Cooperation. Cambridge: Cambridge University Press.
Vallentyne, Peter (ed.) (1991). Contractarianism and Rational Choice. Cambridge: Cambridge University Press.
von Neumann, John, and Oskar Morgenstern (1944). Theory of Games and Economic Behavior. New York: Wiley.
16
Morality's Last Chance Chantale LaCasse and Don Ross
What distinguishes a moral agent from a non-moral agent? A minimum condition is that a moral agent's behaviour must conform at least closely enough for probabilistic prediction to a set of strictures whose contents would be recognized in reflective equilibrium as moral, where, furthermore, at least part of the best explanation for the conformity in question appeals to the fact that the strictures are moral. This condition is necessary: if there are no agents that satisfy it, then there are no moral agents. Now, most readers will likely be of the view that there must be agents that satisfy the condition, precisely because it is so minimal. It does not require that a moral agent stick to her own moral code much more than fifty percent of the time, and it does not require that the moral code of a moral agent be even reasonably complete. Thus, someone who refrained from no offence against interpersonal decency except the torture of infants, but who avoided torturing infants at least partly because it is immoral to do so, would fit the condition. However, our view, its prima facie implausibility notwithstanding, is that the condition is unsatisfiable, and that there are, therefore, no moral agents. Our position thus encourages a version of moral scepticism.1 What follows will not amount to a full argument for moral scepticism; however, we hope to move some distance in the direction of that conclusion by showing that leading members of the most sophisticated recent family of attempts at applying the concept of moral agency to explain behaviour and social stability fail for a common and general reason. The basis for our moral scepticism is our belief that appeals to morality are ultimately redundant in explanations of behavioural and social regularities. It may be objected that this belief cannot be grounds for moral scepticism unless the main justifying purpose of moral concepts is to help in furnishing explanations; and this, it would be argued, is false. We accept the latter point. However, our argument does not depend on supposing that moral concepts exist to serve explanatory purposes. The second clause in the minimum condition above makes reference to best explanations not because assisting these is the point
of morality, but because if moral concepts were not invoked by the best explanations of regularities in the behaviour of any agents, then this would provide grounds for doubting that there are moral agents. Agents who satisfied only the first clause of the minimum condition would, according to a non-sceptic about morality, merely seem to be moral. Now, we do see the claimed unsatisfiability of the minimum condition as arising mainly from the unsatisfiability of the second clause. Therefore, we will be focusing our attention on the gratuitousness of moral concepts to explanations; but we wish to be clear that this does not rest on our imagining that explanatory virtues are the only ones that matter. We have said that we will not provide a full defence of moral scepticism in this paper. One sort of opponent who will be untouched by our arguments is the person who believes that there are non-constructed moral facts whose nature is independent of actual human preferences but which can, at least in principle, be discovered through enquiry or revelation. If this kind of moral realism is true, then the minimum condition is clearly satisfiable just in case there are agents who can both discover the true contents of morality and who can, as a matter of psychological fact, be motivated to guide their behaviour by the mere fact that the discovered contents are moral contents. While we are aware that many serious philosophers have defended non-constructivist realism, we find the metaphysical views on which it relies to be too bizarre to comment upon. Addressing ourselves, then, to constructivists, we note that constructivists of (for example) the Rawlsian sort are moral realists too. So why cannot the constructivist argue for the satisfiability of the minimum condition in exactly the same way as the other sort of realist whose position we have refused to engage? Our answer is that the constructivist is constrained, as the Platonist is not, by the need to identify the function of the constructed morality, and this, in turn, potentially conflicts with the requirement that a moral agent be capable of acting on moral motivations. The constructivist must find something both useful and empirically plausible for morality to do. Among constructivists, those who have tried most explicitly to satisfy this demand have been contractarians. Unlike Rawlsians, who presuppose a particular meta-ethic and who thus do not engage the concerns of the moral sceptic, recent contractarian work has sought - to use the phrase of Danielson (1992) - a "fundamental justification" for morality. We have engaged in this brisk and somewhat idiosyncratic march through elementary issues in meta-ethics in order to indicate why we think that a certain family of contractarian theories represents "morality's last chance." We refer to "a certain family" of contractarian theories because many of the more venerable theories in the contractarian
tradition are widely, and rightly, regarded as not providing an adequate foundation for morality. The basic Hobbesian insight, that agents in groups may improve their individual well-being by arriving at binding agreements that restrain their pursuit of self-interest, does not establish a function for morality, since Hobbesian agents are constrained following their agreement by force rather than by principle. However, once we recognize Hobbes' insight that utility maximization by individual agents can be inefficient in the absence of co-ordination, it is then both possible and natural to ask whether morality's function could be to serve as the co-ordinating mechanism in certain sorts of social situations. This question has been the focus of a vigorous recent literature, whose highlights, on which we will focus, are Gauthier (1986), and Danielson (1992). A common distinguishing feature of this literature is that it has sought to use resources and conceptual insights from the economic theory of practical rationality and from game theory in order to identify moral dispositions to which rational agents should be disposed. The leading advantage of this attempt to find (as we shall say) "economic" foundations for morality is that the concepts to which it appeals have been very precisely refined. As usual, however, this sword is double-edged: it permits us to show, in a way that was not previously possible, that the concept of morality is ill-suited to the task of understanding social co-ordination. A first crucial task is to properly distinguish the various sorts of games which may be used to model social situations. Based on the uses to which game theory has been put by moral philosophers, we will distinguish four types of games. Each type will be identified by reference to the following generic matrix where u and v designate payoffs to players A and B, respectively. Player B
                         b1                 b2
Player A       a1      u1, v1             u2, v2
               a2      u3, v3             u4, v4
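Before the four game types are defined, it may help to fix the generic matrix computationally. The sketch below is only an illustration (the example payoff numbers and the helper names are assumptions): it represents the matrix as a mapping from strategy pairs to payoff pairs, finds the pure-strategy Nash equilibria, and checks Pareto dominance, the two notions the classification that follows relies on; the final lines verify the social-dilemma condition of type (4) below on an assumed one-shot Prisoner's Dilemma.

```python
from itertools import product

ACTS_A, ACTS_B = ("a1", "a2"), ("b1", "b2")

def pure_nash_equilibria(payoff):
    # Strategy pairs from which neither player can gain by a unilateral deviation.
    eqs = []
    for a, b in product(ACTS_A, ACTS_B):
        u, v = payoff[(a, b)]
        if (all(u >= payoff[(a2, b)][0] for a2 in ACTS_A) and
                all(v >= payoff[(a, b2)][1] for b2 in ACTS_B)):
            eqs.append((a, b))
    return eqs

def pareto_dominates(payoff, s, t):
    # Outcome s gives both players strictly more than outcome t.
    return payoff[s][0] > payoff[t][0] and payoff[s][1] > payoff[t][1]

# An assumed one-shot Prisoner's Dilemma instance of the generic matrix
# (u1..u4, v1..v4 chosen only for illustration).
pd = {("a1", "b1"): (3, 3), ("a1", "b2"): (1, 4),
      ("a2", "b1"): (4, 1), ("a2", "b2"): (2, 2)}

eqs = pure_nash_equilibria(pd)
print(eqs)                                     # [('a2', 'b2')]: a unique equilibrium ...
print(len(eqs) == 1 and
      any(pareto_dominates(pd, s, eqs[0])
          for s in pd if s != eqs[0]))         # ... Pareto-dominated by a non-equilibrium outcome
```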
(1) Games of pure conflict. These arise, in the two-agent case, when player A's ranking of outcomes is precisely opposite to player B's (ui > uj iff vj > vi, i ≠ j, i, j = 1, 2, 3, 4). Danielson (1992, pp. 31-34) argues that because games of this type admit of no co-operative outcomes, "the rules of instrumental morality would presumably ignore such conflicts; nothing can be done to civilize this type of interaction"
(p. 32). We are persuaded by this claim; therefore nothing further will be said about games of pure conflict.

(2) Co-ordination games. Consider a case where the ranking of outcomes for player A is identical to the ranking for player B: (ui > uj iff vi > vj, i ≠ j, i, j = 1, 2, 3, 4). The interesting sub-class of such games consists of those which have multiple equilibria. These are co-ordination games. Their equilibria can always be Pareto-ranked. Thus, with reference to the matrix, if (a1,b1) and (a2,b2) are two equilibria, then either u1 > u4 and v1 > v4 or u1 < u4 and v1 < v4. Co-ordination games, according to the philosophers we will be criticizing, may be played without reference to moral considerations, since they do not involve conflicts of interest. Note, however, that there is no guarantee that players in a co-ordination game will in fact co-ordinate on the optimal outcome. This is important because, even if economic contractarians were to succeed in turning the one-shot Prisoner's Dilemma into a co-ordination game, it would be a mistake to suppose that Pareto-efficiency would then be guaranteed.

(3) Social choice games. These are games with multiple equilibria which cannot be Pareto-ranked. In two-agent games, this means that the preferences of the players are opposed on the equilibrium outcomes, but not necessarily on the outcomes of other strategy combinations. So, with reference to the matrix, if (a1,b1) and (a2,b2) are two equilibria, then either u1 > u4 and v1 < v4 or u1 < u4 and v1 > v4. Moral and political philosophers have devoted considerable attention to games of this sort. However, we will argue below that although these games constitute a useful setting for considering issues of justice, their discussion need not make reference to moral concepts.

(4) Social dilemmas. For reasons which will be given later, we believe that this name for the class of games in question is misleading. However, for the moment we follow usage that has become widespread. Social dilemmas (henceforth "SDs") are one-shot games in which the unique Nash equilibrium is Pareto-dominated by a non-equilibrium state. Thus, we might have u1 > u4 and v1 > v4, where (a2,b2) is the unique Nash equilibrium. By far the most famous example of an SD is the one-shot Prisoner's Dilemma. Most of the attention in the economic contractarian literature has been focused on SDs, since it is in these cases that agents who are rational in the received sense of decision theory and game theory (henceforth "economically rational agents," or "ERAs") cannot achieve the Pareto-superior co-operative outcome. The leading idea of the economic contractarians is that, in SDs, morality can
assist rationality to accomplish what rationality alone cannot; they aim to show that agents who are rational and moral (henceforth "RAMAs") can reach co-operation. Note that it is crucial, for these purposes, that the class of SDs be restricted to one-shot games. In a repeated game, even where the constituent game being repeated is an SD, ERAs can achieve co-operation and so morality is redundant. We claimed above that morality has no work to do in games of pure conflict or in co-ordination games. We will shortly follow Gauthier (1986) and Danielson (1992) in turning our attention exclusively to SDs. However, since the claim we seek to support is that morality has no work to do anywhere, we must first consider the question of whether it has a role to play in social choice games, since a number of philosophers, particularly those in the broadly utilitarian tradition, suppose that it does. Since we do not have the space to review the very large literature relevant to this issue, we will hoist the matter briskly onto the table by considering the most explicit attempt to bring social choice games (in the strict and technical sense) to bear on problems in moral philosophy - that of Binmore (1994). Binmore agrees with our own eventual conclusion (though for reasons we find inadequate2) that morality has no role to play in SDs. However, for him, morality is relevant to choosing the fairest among a set of feasible equilibria. We have no quarrel - at least in the present context - with Binmore's project, which is to develop a theory of just mechanisms for mediating the social distribution of utility that does not make appeal, as does Rawls' similar project, to loosely Kantian moral foundations. However, we contend that Binmore is wrong to speak of his theory as a theory of social morality. Whereas Rawls presupposes a moral theory and uses it as a foundation for a theory of justice, Binmore's goal should properly be seen as that of building a theory of justice that does not depend on considerations of moral theory. Binmore first asks us to construe all of life as a bargaining game played among rational agents. We are then to imagine a second game, which Binmore calls "the game of morals," in which agents can retreat behind a veil of ignorance ("thinner" than the veil used by Rawls3) in case any player feels unfairly disadvantaged. The game of morals is supposed to serve as a device for selecting fair equilibria in the Game of Life. Binmore summarizes the idea as follows:

    I suggest that a "fair social contract" be defined as an equilibrium in the game of life that calls for the use of strategies which, if used in the game of morals, would leave no player in the game of morals with an incentive to appeal to the device of the original position. (p. 335)
Motivational issues are crucial to the question of whether this idea finds a role for morality in social choice games, or does just the opposite. There are two possible points at which moral motivations could be relevant to agents engaged in the exercise which Binmore recommends: (1) the point at which they decide to resort to the game of morals, and / or (2) in the course of their actual bargaining behind the veil. With respect to point (2), it is clear that moral motivations are irrelevant. Players of the game of morals are rational economic bargainers, who will cheat wherever this is in their interest. However, "they have no incentive to do so, because playing the game of morals as though its rules were binding leads to an equilibrium in the game of life. No player can therefore gain by deviating unless some other player acts against his own self-interest by deviating first" (Binmore 1994, p. 42). Once Binmore's device gets his agents to the co-ordinated outcome, they then go on co-operating, because they have, in the end, no reason not to. One might indeed gloss Binmore's general project as attempting to show that, in social choice games, morality is unnecessary for agents who have a disposition to imagine playing the game of morals. This then directs us to point (1). It is tempting, especially in light of Binmore's choice of terminology, to say that the disposition to imagine playing the game of morals amounts to the disposition to be moral. However, two considerations render this idea for the role of morality uninteresting to the moral sceptic. First, the only justifying argument for the equilibrium that would be chosen by the players of the game of morals appeals to considerations of social stability, rather than to any antecedent moral concerns. Resort to the game of morals enables us to answer those who believe that the existing equilibrium in the game of life is inferior.4 The game of morals is itself a device for permitting evaluation of Pareto non-comparable outcomes. Agents behind Binmore's veil take account both of personal preferences and of "empathetic" preferences, that is, rankings which they discover by simulating one another. The idea of empathetic preferences may suggest moral motivations. However, they are more properly thought of as sophisticated refinements of Humean sympathetic attachments (Binmore 1994, pp. 54-61 and 285-96). As in Hume's account, our sympathies enable us to imagine the game of morals but do not motivate us to do so. One can convince an ERA to imagine playing the game of morals, by appeal to her interest in social stability. She will be able to engage in the imaginary game, if she agrees to it, because evolution has equipped her with the ability to simulate other players. But all of this is compatible with her being an implacable maximizer who has never been moved by a moral motivation in her life. We have no argument with the basic Humean idea that human cooperation is possible because we are naturally disposed to sympathize
with one another. That evolution should have endowed us with such dispositions is unsurprising. Indeed, facts about our species' evolutionary ecology are just what need to be invoked in explaining why people arrived at the idea of morality in the first place. However, we join the economic contractarians in being persuaded that morality cannot be identified with evolutionary altruism. Discovering the mechanism that gives rise to certain sentiments gives a rational agent no reason not to try to overcome the influence of those sentiments, or to learn to use them exploitively, if a little game theory convinces her that she could maximize her utility by doing so. To suppose otherwise is to commit the elementary fallacy of pop sociobiology: assuming that one should side with one's genes against oneself. We are not accusing Binmore of committing this fallacy; our only claim against him is that he misidentifies the concept of morality in the course of tracking his real target, the concept of justice. Our claim that attention to social choice games need not make use of first-order moral concepts5 implicitly depends on another aspect of the economic contractarian view. The concept of morality, we agree, necessarily incorporates the idea of a motivational structure that constrains behaviour; and such structures are explicitly absent from the players of Binmore's games. As previously advertised, we depart from the economic contractarians in our belief that morality, so conceived, is incompatible with rationality. But accepting their concept of morality draws our attention to the fourth among the classes of games described above: SDs. For it is only in the case of SDs that "unconstrained maximizers" cannot achieve co-operation;6 thus, it is only here that constraint looks as if it might have some work to do. Let us begin by reviewing the familiar reasons for viewing SDs as dilemmas. Since the Nash equilibrium in an SD is Pareto-dominated, each player realizes that if each of them acts so as to maximize her individual utility, the social outcome will be inefficient. Each agent thus faces a choice between (1) behaving rationally, despite the fact that the outcome thus generated will fail to maximize group welfare, and (2) changing her behaviour in pursuit of group welfare, hoping that everybody else, knowing just what she does, will follow suit. This is supposed to be a dilemma for a rational agent because the agent knows that she herself benefits if everyone behaves so as to maximize group welfare; her own utility is, by definition, higher under that outcome than under the equilibrium outcome. As noted, the paradigm case of an SD is the one-shot Prisoner's Dilemma. If morality could be shown to be capable of generating co-operation in the one-shot PD, then this would constitute a function for morality that would be both empirically plau-
sible and that could convince a rational person that it is better to be a RAMA than to be an ERA. The fountainhead of this enterprise is Gauthier (1986). We will not concentrate much attention on Gauthier's theory, however, for two reasons. First, it has been subjected to several lines of criticism, to be described below, which we believe to be devastating. Second, for reasons to be discussed, rejection of Gauthier's theory does not in itself suggest moral scepticism. Our main focus will therefore be the project of Danielson (1992), which, by calling into view deep philosophical questions (as opposed to mainly technical questions) about the relationship between morality and rationality, more directly invites the attention of the moral sceptic. To understand the basis for Danielson's project, however, one must know why and how it grows out of Gauthier's. We will therefore briefly review the leading difficulties with Gauthier's theory. The theory itself is now well known. Rather than try to summarize it, we will just list its key moves and assumptions. First, Gauthier accepts the received economic concept of rationality, in the sense that he does not quarrel with any of the axioms of rational choice theory, or demand redefinition of any of the fundamental concepts invoked by that theory. What he proposes instead is that rational choice theory be applied to an additional set of objects. In contrast to the theory's traditional focus on choices themselves, Gauthier urges that its domain be enriched to include meta-choices, that is, dispositions to choose. He then argues that an agent maximizes expected utility in this problem by choosing a disposition to constrain her maximization in choices over actions. One of the most important dispositions of the constrained maximizer is to adopt the disposition of "conditional co-operation" in SDs; this tells her to co-operate with anyone who is disposed to co-operate with her. In SDs, she thus sometimes declines to maximize her utility. How can this be rational? Gauthier's answer is that because it is rational to adopt the disposition to conditionally co-operate, it must be rational to act on the basis of that disposition. Thus, he concludes, there are circumstances in which it is rational - in the received sense of the term - not to act so as to maximize utility. Where the ERA's very rationality would subvert her interests, morality comes to the rescue and renders the RAMA better off. Critics have drawn attention to several problematic aspects of this story. (For most of them, see the papers in Vallentyne (1991).) The line of attack which we believe to be fundamental, and unanswerable, focuses on the fact that Gauthier's constrained maximizer chooses a strategy for the one-shot PD (and, implicitly, other SDs) which is ruled
out by the very definition of the game. As Smith (1991) points out most explicitly, a pair of conditional co-operators will converge on the Pareto-superior outcome in the one-shot PD only if each can be sure that the other will obey her chosen disposition to conditionally cooperate. This appears to turn the problem into a co-ordination game. However, one cannot "turn" the one-shot PD into a co-ordination game without having changed the game itself. Suppose one grants to Gauthier, for the sake of argument, that a rational agent playing the meta-game of disposition selection should choose the strategy of constrained maximization. This cannot possibly imply what Gauthier urges, namely, that it will therefore be rational - in the received sense of "rationality," which Gauthier claims to accept - for her to act on this disposition when she finds herself in an SD. The preferences over outcomes and the strategies of the other agents that structure her situation as an SD tell us that, given any strategy chosen by the other players, the temptation to defect will trump the appeal of co-operation. If, for whatever reason, this does not hold in a given case, then that case is not an instance of the one-shot PD, or any other SD.7 Procedurally, this refutation by contradiction expresses itself as the problem of commitment. Since, by the definition of a one-shot game, succumbing to the temptation to defect can have no negative repercussions for the agent that are not already incorporated into her preference ranking, her choice of disposition cannot constitute a reason for her to resist the temptation. Therefore, no signal that she sends to the other player indicating her disposition to co-operate can constitute convincing commitment. Where both players are constrained maximizers, this inability to commit will be reciprocal, so both will defect. Given the deductively demonstrable impossibility of rationally co-operating in the one-shot PD, Binmore (1994, p. 180) is being suitably charitable when he refuses to take Gauthier at his word in the latter's insistence that he accepts the received economic conception of rationality. However, that conception is built into the definition of the PD. One may be inclined to join Gauthier (1993) in being sceptical about the force of this argument precisely because it is so direct and simple. The argument seems to commit the familiar philosopher's mistake of trying to generate an impossibility result by treating a particular conceptual definition as if it were an extrinsically motivated necessity. This is a mistake because we should prefer making progress to blocking it; and honouring this priority requires willingness to fashion concepts so as to do justice to our pre-regimented intuitions, rather than the other way around. This is why we have said that the argument against Gauthier does not in itself refute economic contractarianism or suggest moral scepticism. It engages Gauthier's claim to have a polished, tech-
nically adequate version of economic contractarianism, but it fails to engage the intuitions that make the project seem interesting and worthwhile in the first place. Obviously, however, the technical criticism cannot just be ignored. If the defender of Gauthier wishes to claim that the technical argument against him is merely technical, then she must find a class of game-like objects - let us call them "*games" which incorporate the feature of SDs that makes SDs morally relevant. That is, it must be the case that in these *games, a rational agent will be motivated to constrain her pursuit of maximization. Now, if we are not to be technical fussbudgets, we must allow the economic contractarian some scope to fiddle with the received conception of rationality; otherwise, we will beg the question by hopelessly stacking the odds against her. We must not give away too much, however. The revised concept of rationality and the concept of a *game must both be clear enough, at the end of the analysis, that it is possible to clearly state the content of the theory. It will likely be necessary to do this partly in terms of critical comparison with the more traditional concepts, lest we have no firm grip on what has been accomplished. During this enterprise, the sceptic will be alert for the re-emergence of the tension in Gauthier's theory that she believes to be deep and fundamental, rather than merely technical. The tension in question arises in the search for a way in which constraints on rational maximizing could be enforced by their very rationality, rather than by an external mechanism (as in the case of Hobbes' sovereign). The sceptic expects that attempts to reform Gauthier's theory will at best succeed in shuffling this tension around. We therefore now turn our attention to the project of Danielson (1992), who seeks to rescue Gauthier's theory by doing just what the technical criticism suggests one must: relaxing definitions and axioms. This is the part of our discussion that is intended to have the strongest inductive thrust. In pursuit of Danielson, we will have to follow the relationship between morality and rationality to a deeper level; and it is at this point, we contend, that the incoherence of the concept of morality manifests itself. Danielson's understanding of the relationship between game theory and morality is as follows. First, he takes game theory to be an extension of the received theory of decision under uncertainty, whose job "is to provide more determinate grounds for choice" (Danielson 1992, p. 62). Game theory is thereby taken to incorporate the recommendation that agents pursue "straightforward maximization"; this is glossed as "defending the thesis that maximizing at each choice point is the best way to maximize" (1992, p. 63). Thus, interpreted as a normative theory, and as the application of decision theory to interactive, strategic contexts, game theory seems to argue directly against the possibility of
a moral theory, since the straightforward maximizer appears to be a paradigmatically amoral agent. However, Danielson then occupies the bulk of his book trying to show, through computer tournaments in which agents built out of PROLOG code engage in series of pair-wise, one-shot *games that "indirect, morally constrained choice does better" (p. 63). Some of his players are straightforward maximizers, but most use other, reactively sensitive strategies, some of them quite complicated and some involving dispositions of the agents to open themselves for inspection - that is, to reveal their decision rules - to other agents under particular circumstances. A result that is among the less interesting in its own right, but most relevant in the context of the present discussion, is that straightforward maximizers do not fare very well in environments which include more sophisticated decision rules. This is interpreted by Danielson as evidence that "the received theory's" claim that straightforward maximization is superior, is false. Room is thus opened for agents who use decision rules that our intuitions might recognize as moral - in particular, agents whose dispositions call for co-operative behaviour, under some circumstances, in the *game-theoretic counterparts of SDs. Before discussing Danielson's deliberate modifications to standard game theory, let us note an aspect of his treatment of game theory which he does not acknowledge as controversial. We have argued elsewhere (LaCasse and Ross 1994) that game theory should not properly be understood as an extension of the normative theory of decision. It shares concepts with decision theory, to be sure; however, in order to see game theory as sharing the normative ambitions of decision theory one must treat the strategies of agents other than i as elements in the lotteries over which i maximizes expected utility, and this is contrary to the standard practice of both decision theorists and game theorists. It is at least as consistent with game theorists' practice to regard game theory as a positive science whose aim is simply to identify equilibria for specified games. In that case, it cannot conflict with the procedural advice given by any moral theory. The point of this in the present context is that, in our view, game theory does not cause trouble for moral theory because it recommends amoral conduct. Instead, game theory threatens moral theory because, as we have seen, it exposes the fact that the empirical function for morality on which theorists following Gauthier have pinned their hopes - bringing about mutual co-operation in SDs - is impossible. The defender of Danielson might be inclined to declare a standoff at this point. After all, Danielson recognizes that game theory is incompatible with prescribing non-equilibrium outcomes in SDs. So even if he were persuaded to abandon his view of standard game theory as a
rival normative theory, he would still be motivated to relax some of its assumptions. We are not convinced by this response; but we shall set the point aside for now in order to focus on the specific ways in which Danielson weakens game theory in order to escape the impossibility result. Doing this involves some difficulties. We wish, ultimately, to compare what game theory has to say about Danielson's tournament results with Danielson's own interpretation of his results in the context of *game theory. However, this is complicated by the fact that we are in some disagreement with Danielson's understanding of game theory. We must therefore distinguish three classes of objects: (1) games as Danielson sees them, henceforth, "D-games"; (2) *games, that is, what Danielson's agents play once he has relaxed the axioms and definitions of D-game theory; and (3) games, as these are understood by game theorists. We will begin by explicitly describing D-games and *games. Aspects of standard game theory will then be introduced as and when they are relevant to our subsequent argument. This procedure is designed to minimize the temptation to fall back into technical arguments, which would beg the question against Danielson. We start, then, with D-games. A D-game is a strategic interaction among agents,8 in a situation defined by the agents' preferences over the possible outcomes. It is assumed that the agents have common knowledge of the situation. As noted above, D-game theory is normative; it prescribes that an agent should make the choice which leads to her most highly preferred outcome. Following Gauthier, this is referred to as "straightforward," or "direct," maximization, and is characterized by Danielson as follows: "What is crucial to straightforward maximization is not the look - or any procedural suggestion - of maximizing, but the inability to choose against one's preferences" (Danielson 1992, p. 47). D-game theory views all agents as identical, in the sense that all are rational in the same way, and all know that all are. Since there is also common knowledge of the situation, each agent can predict what all others will do (1992, p. 48). This is the point of calling straightforward maximizing "direct"; there is no point to an agent's taking intermediate information-gathering steps before acting. There is, as Danielson (1992, p. 108) puts it, no "gap between preferences and behaviour." In discussing the problems with Gauthier's theory, we indicated the motivations that lead Danielson to seek to replace D-games by *games. There is no place for morality in the life of a direct maximizer because the direct maximizer cannot, by definition, be constrained. Therefore, morality - if it is to be compatible with rationality - requires indirect maximization. Conceptually, this requires breaking the direct link between the agent's most preferred outcome and her choice. However, since indirect maximization is still maximization, indirect maximizers
must turn out to receive higher payoffs, on average, than direct maximizers. The indirect maximizer must thus be cunning as well as morally wise. She must be able not only to choose to be constrained, but she must also be able to choose how to be constrained, under what circumstances (i.e., with whom) to be constrained, and how to ensure others that she can be constrained.9 Thus, to develop the concept of the indirect maximizer is to provide a decision procedure to replace direct maximization. The indirect maximizer, then, must be an agent who (sometimes) chooses against her preferences - by choosing moves that do not yield the highest payoff - but who also chooses a decision rule according to her preferences, that is, by choosing a rule that yields the highest average payoff. This would not appear paradoxical if "highest average payoff" were defined over a repeated game. However, the goal does seem initially contradictory in the context of one-shot games or one-shot Dgames. Danielson tries to resolve this apparent paradox by distinguishing between preferences and interests. The payoffs in a *game represent the actual stakes in the interaction, which are identified with the "interests" of the agents. Interests are not a function of preferences, but of selection pressures. In Danielson's simulation, they are determined exogenously by the design of the tournament structure. To take what may be the best analogy from the actual world, we may think of firms as having an interest in maximizing average profits, where this interest is a function of the selection pressures operating in the market.10 Agents also have preferences over the outcomes of the *game. These need not track the actual stakes in the *game; an agent's utility function might not be increasing in the units by which interests are counted. To the extent that agents are designed mainly by selection pressures, however, preferences will tend to track interests. If they did not, no variant of game theory would be relevant to questions about the proliferation of behavioural strategies in populations. However, the conceptual gap between interests and preferences permits us to make conceptual sense of the idea that a "substantively rational" agent - an agent whose choice behaviour over decision rules tends to maximize her interests could rationally choose against her preferences in *games. We now begin our criticism of Danielson's project. We will focus on what is plausibly his central result, at least in the context of the rest of the literature. This is that in a tournament involving four basic decision procedures for the sequential one-shot PD, the type of agent who fares best, in terms of average payoffs, is a certain sort of player that Danielson takes to be constrained. The types in question are: (1) the unconditional co-operator (UC); (2) the unconditional defector (UD) (identified by Danielson with the straightforward maximizer, SM); (3) Gauthier's
conditional co-operator (CC), who co-operates with all and only those who co-operate; and (4) the reciprocal co-operator (RC), who co-operates only where co-operation is necessary to avoid being defected on. RC wins this tournament. Since RC does not always choose defection, RC appears to be constrained. If this interpretation of the tournament result can be sustained, then a role for morality is established, since we would then have at least one case of a *game where a particular sort of RAMA fares better than an ERA. We will argue that Danielson's interpretation of the result cannot be sustained. This will be done in two parts. First, we will show that *games can be interpreted as games, and that therefore, contrary to Danielson's own expectation, standard game theory can fully represent the richness of the decision procedures - which we will reinterpret as strategies - of responsive agents such as CC and RC. Danielson misses this, we suggest, because he correctly sees that *games cannot be represented as D-games. D-games are a straw target, however. In the second part of our argument, we show that when the tournament *game is properly represented as a game, RC turns out to be the rational strategy according to the received definition of rationality. Thus, RC is an ERA, and is unconstrained according to Danielson's own concept of constraint. The invocation of moral concepts in describing RC and explaining its success thus turns out to be gratuitous, just as the moral sceptic would predict. We will then add inductive punch to the argument by giving reasons for thinking that a strategy of the sort exemplified by our argument should be expected to work against any attempt to buttress rationality by morality. We begin, then, with the demonstration of how to interpret "'games as games. Our example will be based on the *game that Danielson's agents play in most of his tournaments, the so-called "Extended Prisoner's Dilemma" ("XPD"). (Call this game the "*XPD"). Our procedure will be as follows. We will first present the XPD as a proper game. Next, we will describe the *XPD. The *XPD, as we shall see, is not equivalent to the XPD. However, we will then show that the *XPD is equivalent to another game, which we call the XPD2. This will permit us to discuss the results of Danielson's tournaments among players of the *XPD in game-theoretic terms. The extensive form of the XPD is presented in Figure 1. Players choose their actions sequentially rather than simultaneously, as in the PD. This means that player 2 (henceforth "P2") observes the choice of the first player (PI) before taking an action. The difference between the PD and the XPD is thus one of information: in the PD, P2 chooses his action without knowing what PI is doing, while in the XPD, P2 chooses an action knowing what PI has already done. In the XPD, there are two
Figure 1: The extensive form of the XPD.
information states: P2 could, when called upon to move, have observed either C or D. The set of decision points (represented as nodes in the game tree) at which a player is in the same information state is called an information set. P2 has two information sets, each containing one node; each information set is associated with one of Pi's two possible moves. In the PD, by contrast, P2 knows nothing of the move chosen by PI when he himself must choose an action. Therefore, he has one information set which contains two nodes; at both these nodes, his information state is that he knows nothing. In standard game theory, one has a complete description of a game given a set of players, a set of strategies for each player, and an assignment of payoffs. As in the PD, the set of players in the XPD is (P1,P2) and the structure of the payoffs reflects the fact that each player prefers the action of defecting, regardless of the other's action. However, the set of strategies available to P2 in the XPD will not be the same as in the PD, because a strategy for an agent is defined as a function of the information available to that agent, and, as described above, the information available to P2 in the PD is not the same as in the XPD. This is important for the argument to follow, because one of the game-theoretic concepts that is not well defined within D-game theory is that of a strategy, and this, in turn, creates much of the difficulty in interpreting *games as Dgames. Strictly speaking, a player's strategy is a complete plan of
action, which specifies everything that the player can do at each of his nodes in the tree. In the PD, where P2 has one information set, a strategy is equivalent to an action (C or D) and thus his set of strategies is just {C,D}. In the XPD, in contrast, P2 has two information sets; a complete plan of action must specify what he would do if PI played C and what he would do if PI played D. Therefore, his strategy is a function f2: {C,D} i-» {C,D} which maps an information set (identified by the action of PI to which it corresponds) onto an action; the subscript indicates that this is a strategy for the second mover. There are 4 possible strategies for P2 in the XPD. His strategy set is thus where each element is defined as follows:
F' = {f_2^1, f_2^2, f_2^3, f_2^4}, with f_2^1(C) = C and f_2^1(D) = C; f_2^2(C) = D and f_2^2(D) = D; f_2^3(C) = C and f_2^3(D) = D; f_2^4(C) = D and f_2^4(D) = C.
The interpretation of these strategies is straightforward: f_2^h(X) = Y means that the hth strategy for the second mover calls for him to choose action Y when P1 chooses action X. Thus, the first strategy, f_2^1, is UC's strategy: P2 plans to co-operate regardless of what he observes P1 actually choosing. According to the second strategy, P2 plans to defect regardless of the moves that he observes P1 choosing, so this is the strategy identified with UD. The third strategy is that of CC, who copies P1's move. The fourth strategy is one of mutual exploitation: the agent plans to exploit P1 by defecting if P1 chooses to co-operate, but to let himself be exploited if P1 chooses to defect. (Henceforth, an agent using this strategy will be called "MX".) So much for P2's strategy set in the XPD. No similar difficulties are involved in specifying the strategy set for P1. It remains {C,D}, as in the PD, since in both cases, P1 has one information set. We now turn to the question of equilibrium in the XPD. Given the various sorts of game-like objects on the table, it is worth reminding the reader of the strict, game-theoretic equilibrium concept, which can be expressed in terms of the concept of a best reply. A strategy for agent i is a best reply to j's strategy if it maximizes i's payoff given j's strategy. Then a Nash equilibrium (NE) for a two-player game is a pair of strategies, one for each player, such that i maximizes his payoff given the strategy chosen by j (i ≠ j; i, j = 1,2). That is, the NE is a strategy pair where i uses a best reply to j's strategy (i ≠ j; i, j = 1,2). Following this definition, in the XPD, (D, f_2^2) is the unique subgame perfect
equilibrium (SPE). This means that the strategy pair is a Nash equilibrium: given that PI defects, P2 maximizes his payoff by defecting as well (f22(D) = D); and given that P2 plans to defect regardless of what PI does, PI maximizes her payoff by defecting. Because this game is one of perfect information, saying that the equilibrium is subgame perfect means that it is the backward induction solution. That is, P2's strategy is a best reply to any action taken by PI, and not just a best reply to his equilibrium action. Thus, the outcome of the game is that both players defect. As Danielson recognizes (1992, pp. 21-22), an ERA does not co-operate in this game, or in its counterpart game, the D-XPD. Danielson attributes the sub-optimality of the equilibrium outcome of the D-XPD to the way in which agents choose their actions. Therefore, in the *XPD, players' actions (C or D) are outputs of well-defined, sometimes constrained / moral, decision procedures, as described during our discussion of *games. Consider, then, an *XPD which could pair any two of the following three types of agents: UC, UD, and CC. We restrict ourselves to the case where all agents are transparent - that is, have full access to one another's decision rules - and in which there is no learning." What happens when two agents are matched in such a tournament? Following Danielson's design, PI can, if she 'wishes, examine P2's decision rule (by executing P2's PROLOG code). This allows her to find out how P2 will respond to her move. With this information in hand, she takes an action, dictated by her own decision procedure. Given Pi's move, P2 then selects his action by following his decision procedure. We claim that the *XPD can be expressed as a game, which we call the XPD2. Its extensive form is shown in Figure 2. The XPD2 is distinct from the XPD, for exactly the same reason that the PD and the XPD are different games: the information available to the players is different, and therefore their strategy sets are different. In the XPD, PI has no access to 2's decision procedure before choosing her move. A representation of Danielson's game must take into account that in the *XPD, PI, when she chooses to execute P2's code, knows P2's strategy before deciding whether to co-operate or defect. Standard game theory requires that in such circumstances, P2 be modeled as moving first, choosing a contingent plan of action, and PI modeled as moving second, choosing C or D on the basis of P2's plan. So, in the XPD2, P2 first chooses a strategy from the set and announces his strategy to PI. Given the strategy announced by P2, PI chooses whether to co-operate or to defect. Each strategy in the set F - giving P2's response to any move by PI - corresponds to an available decision rule in the *XPD tournament. As argued above, the decision procedures of UC, UD, and CC can be recognized as , and respectively.12 (The MX agent is not present
Figure 2: The extensive form of the XPD2.
in the *XPD and we therefore choose to exclude him from the XPD2. This is equivalent to restricting the strategy set to F rather than considering F'. The MX agent, however, is included in the specification of the XPD because his possibility is given by the structure of the game.) The actions specified in the game are sufficient to determine an "induced XPD outcome." That is, they are sufficient for determining whether the agents have co-operated or not, and for calculating their payoffs. For instance, if P2 chooses to be an unconditional co-operator (i.e., he chooses ) and PI chooses C, then there is joint co-operation , and the payoffs are (2,2).13 If we are to be faithful to (since the way in which Danielson allows his agents to commit to decision procedures, then no other actions can be included in the description of the game. In particular, the XPD2 does not include P2's action (C or D). Saying that P2 is committed to his decision procedure, and that he is unable to cheat or lie, is equivalent to saying that he is not subsequently given the choice of whether to co-operate or defect; he is bound by his announced strategy. We can imagine several objections being raised to our claim that the XPD2 is an accurate representation of the *XPD. The first of these is that in the XPD2, P2's choice of a decision procedure is an explicit move in the game, whereas Danielson's agents in the *XPD tournament are rigid, coming endowed with decision procedures that they do not
choose. However, if Danielson's construction of tournaments among artificial agents is to contribute to the project of finding a fundamental justification for morality, then one must construe tournament results as giving us information as to the "best" decision procedure to follow in an XPD, and by extension, in SDs in general. Therefore, it must be the case that we are to evaluate Danielson's agents as if they had chosen their own decision procedures. (For more on this point, see note 9.) Since a game theoretic representation requires that all choices made by agents be incorporated into the defining structure of the game, we are forced to decide this matter one way or the other. Trying to set this aside during the early stages of modeling, as Danielson does, leaves the agents' strategic situation crucially under-specified. In that context, it is no surprise that a game theoretic representation looks unconstructable; but this cannot be taken as evidence against the adequacy of such a representation without begging the question. A second possible objection is that if Danielson's agents in their game theoretical incarnation can be construed as choosing a decision procedure, then both PI and P2 should be modeled as doing so; that is, PI should not be modeled as merely choosing whether to co-operate or defect. It is important to recognize, however, that in the XPD2, PI chooses whether to co-operate or defect for any given strategy chosen by P2. {C,D} is Pi's set of actions at each of her information sets. However, a strategy for PI will specify whether she co-operates or defects/or each decision procedure that P2 could choose. Formally, a strategy for PI is a function, F H-> {C,D}, which maps an information set (identified by the action of P2 to which it corresponds) onto an action for PI; here, the action for P2 is a choice of disposition from the set F, and an action for PI is a choice of co-operating or defecting. Therefore, Pi's strategy specifies what she does against every type of agent that she can meet (a UC, a UD, or a CC); it is a full-fledged decision procedure, in Danielson's sense. As a final objection, it might be pointed out that not all PI agents in Danielson's tournament need to check P2's code before choosing moves; the decision procedures for UC and UD specify that their first moves are C and D, respectively, without regard for P2's decision procedure in move 2. However, there is no difficulty in representing UC and UD as particular strategies in the XPD2:
g_1^1(f_2^h) = C and g_1^2(f_2^h) = D, for every f_2^h in F. This is to be interpreted in the usual way: g_1^k(f_2^h) = Z means that P1's strategy k specifies an action Z when P2 chooses his hth disposition.
As required, g_1^1 specifies that P1 always plays C, while g_1^2 specifies that she always plays D. Having claimed that the *XPD can be represented as a game, we owe an explanation of why Danielson mistakenly asserts the inadequacy of game theory to the problems modeled in his tournaments. The explanation resides in the fact that the objects he permutes into *games, namely, D-games, are not equivalent to games. The crucial step in the argument which establishes that the *XPD is representable as a game is the possibility of identifying Danielson's decision rules, or "conditional strategies," with standard strategies in the received game theoretic framework. Danielson does not believe such identification is possible, as the following remarks attest:
We need to give more content to the alternatives that games present only abstractly if we are to develop moral solutions. We need to know if it is possible to employ conditional strategies. (1992, p. 35)
The structures needed for rational antagonistic play, even in a game as profoundly difficult as chess, are different from those needed to solve a moral problem by enforcing a conditional rule of responsive play. (1992, p. 36)
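To make the point concrete, a conditional decision rule can be written down as an ordinary strategy in the game-theoretic sense: a function from what the player has observed to an action. The following Python sketch is ours, not Danielson's PROLOG, and the names are our own; it encodes P2's four XPD response functions and the payoff numbers that appear in the chapter's examples (mutual co-operation 2, mutual defection 1, unilateral defection 3 for the defector and 0 for the victim).

# XPD payoffs used in the chapter's examples: (P1 payoff, P2 payoff).
PAYOFFS = {
    ("C", "C"): (2, 2),
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),
}

# P2's strategies in the XPD are functions from P1's observed move to an action.
# A "conditional" rule such as CC is the same kind of object as UC or UD.
f2 = {
    "UC": lambda p1_move: "C",                             # f_2^1: co-operate regardless
    "UD": lambda p1_move: "D",                             # f_2^2: defect regardless
    "CC": lambda p1_move: p1_move,                         # f_2^3: copy P1's move
    "MX": lambda p1_move: "D" if p1_move == "C" else "C",  # f_2^4: mutual exploitation
}

def play_xpd(p1_move, p2_rule):
    """Induce an XPD outcome from P1's move and P2's response function."""
    p2_move = p2_rule(p1_move)
    return (p1_move, p2_move), PAYOFFS[(p1_move, p2_move)]

print(play_xpd("D", f2["CC"]))   # (('D', 'D'), (1, 1))

Nothing in this representation distinguishes the "responsive" rules CC and MX from the "unresponsive" UC and UD; all four are simply functions defined on P2's information sets.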
Danielson's denial that decision rules can be cast as strategies has its roots in his reading of game theory as normative, and in the rationality concept which follows from that reading. As stated previously, Danielson believes that game theory advises agents that direct maximization constitutes rational behaviour in strategic situations. According to Dgame theory, the agent should choose her action based solely on the structure of the game and the knowledge that other agents are similarly rational, since this is held to be sufficient for predicting their behaviour. On this reading, the direct maximizer does not need to be responsive to other players' behaviour and does not need to condition her strategy on the choices of other agents. Since D-game theory is prescriptive, an ERA should not hold responsive or conditional strategies. Therefore, D-game theory, which is supposed to model the behaviour of such agents, is not equipped to handle such objects. D-game theory is an inaccurate rendering of game theory, on several counts. First, as argued above, game theory does not prescribe behaviours. It is used to identify equilibria, that is, situations that are stable by virtue of the fact that each agent, simultaneously using his best reply, has no cause to wish to change his strategic choice. Second, in game theory, a strategy is inherently responsive and conditional to begin with. As previously discussed, a strategy for a player assigns an action to each of his information sets. Any given information set contains all
decision points at which his knowledge of the previous moves taken in the game is the same. Therefore, a strategy is a mapping from what has already occurred to what the agent now chooses to do; it is by definition a response to others which is conditional on what they have already done. More concretely, one can think of a strategy as a set of statements of the form "if I know that you have done p (i.e., if I reach information set X) then I will choose action A" (action A being assigned to information set X through the strategy function), where there is one such statement for each information set.14 This mistake about the nature of strategies infects the D-game theoretic conception of rationality. In game theory, a strategy is a responsive and conditional object because the rationality of an agent's choice is ascertained only with reference to the strategies chosen by the other players. Calling an agent "rational" in this context just means that she uses her best strategy in response to the strategies chosen by the other players. It is not possible to determine whether an agent is acting rationally without reference to the strategies chosen by the other agents, since an agent's payoff is a function of the strategic choice of all agents.15 It follows that an agent's behaviour cannot necessarily be predicted merely on the basis of the structure of the game. In fact, this is only possible for games with unique equilibria. When there are multiple equilibria, game theory does not tell us how one equilibrium is selected and thus it does not offer unequivocal predictions regarding the behaviour of the agents. Danielson's identification of the ERA with the UD provides a concrete example of the way in which his conception of economic rationality leads him to draw false conclusions about the limitations of game theory. When discussing the public announcements that players can make about their decision rules, he states that the received theory really has little to complain about, since straightforward maximizers presumably have no use for information of the sort we are revealing. That is, according to the received theory, rational (i.e. SM) players in a PD have nothing to say (worth taking seriously) to each other, so adding an information dimension shouldn't make any difference. (1992, p. 77)
However, as we have seen, adding "an information dimension" changes the strategic context. Since an ERA maximizes her payoff given the strategies that others choose, a UD is not necessarily an ERA in the tournament which Danielson sets up; as we are about to prove, in the XPD2, RC is the ERA.16 This completes our demonstration that there exists a game, the XPD2, which faithfully represents the *XPD. The point of this work, as
declared earlier, is to permit comparison between analysis of the XPD2 and the outcome of Danielson's *XPD tournament. We thus turn to the first task of the analysis, which is to find the subgame perfect equilibrium in the XPD2. We proceed by backwards induction. That is, we first find the strategy for PI which maximizes her payoff at each of her information sets; then, we find P2's best reply to this strategy. As previously noted, in the context of this game, each information set for PI is identified with a disposition adopted by P2; therefore, backwards induction allows us to find the best action against each of P2's three possible dispositions. Taking the vector of these actions identifies a strategy or a decision procedure for PI; this decision procedure maximizes her payoff against a mixed population of UCs, UDs, and CCs. Let us walk carefully through the induction, then, with the aim of finding the best reply for PI. Referring to the extensive form of the game, above, and starting at the left-most information set, we find the case where P2 has chosen UC. In that case, if PI defects, the induced outcome will be (D,C) and she will receive a payoff of 3; if she co-operates, the induced outcome will be (C,C) and she will receive a payoff of 2. Therefore, she chooses to defect against UC. When P2 chooses UD, then the possible outcomes are (C,D) if PI chooses to co-operate and (D,D) if she chooses to defect. To maximize her payoff, she chooses to defect (since 1 > 0). Proceeding in the same way, we can see that PI chooses to co-operate with CC (2 > 1). The strategy for PI which constitutes a best reply at each of her information sets is thus:
g_1^4, which defects against f_2^1 and f_2^2 and co-operates against f_2^3.
We now find the best reply for P2. P1 chooses to defect at her 1st, 3rd and 4th information sets and to co-operate at her second. Given this strategy, P2 maximizes his payoff by choosing f_2^3 and getting a payoff of 2, this being larger than the payoff of 1 which he would get by choosing f_2^2 and the payoff of 0 which he would get by choosing f_2^1. Therefore, the unique SPE in the XPD2 is for P2 to choose f_2^3 and for P1 to choose g_1^4. The induced outcome is joint co-operation.17
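The backward induction just performed can be checked mechanically. The following sketch is ours (the function names, and the omission of the MX branch in line with the text's restriction to F, are our choices); it recovers P1's best reply at each of her information sets and P2's best announcement.

# Backward induction in the XPD2: P2 announces a response function from
# F = {UC, UD, CC}; P1 observes the announcement and then moves.
PAYOFFS = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
           ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

F = {
    "UC": lambda m: "C",   # f_2^1: co-operate regardless of P1's move
    "UD": lambda m: "D",   # f_2^2: defect regardless of P1's move
    "CC": lambda m: m,     # f_2^3: copy P1's move
}

def p1_best_reply(announced):
    """P1's payoff-maximizing move once P2's announced response function is known."""
    return max("CD", key=lambda m: PAYOFFS[(m, F[announced](m))][0])

# P1's best reply at each information set: defect on UC and UD, co-operate with CC.
print({name: p1_best_reply(name) for name in F})    # {'UC': 'D', 'UD': 'D', 'CC': 'C'}

def p2_value(announced):
    """P2's payoff from an announcement, anticipating P1's best reply to it."""
    m1 = p1_best_reply(announced)
    return PAYOFFS[(m1, F[announced](m1))][1]

print(max(F, key=p2_value))                          # 'CC': the SPE announcement f_2^3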
This result is highly significant in the present context. P1's best reply strategy, g_1^4, amounts to the following decision rule: defect against UC and UD, and co-operate with CC. Not surprisingly, the ERA in this game exploits the agent that can be exploited, UC, who will not respond to defection with defection. She avoids being exploited by defecting against UD. She co-operates when P2 chooses a disposition which will not allow him to be exploited but which allows co-operation, namely CC. If this policy sounds familiar, it should. This decision procedure is precisely that of the favoured agent in Danielson's *XPD
tournament, RC. RC in move 1 co-operates when the agent in move 2 responds to defection with defection and responds to co-operation with co-operation; otherwise, RC defects. This is precisely the decision rule embodied in 4 .herefore, the ERA for the role of PI in the XPD2 is the agent who adopts RC's decision procedure. As argued above, the correct interpretation of this statement is that RC's decision procedure is a best reply to each and every one of the dispositions UC, UD, and CC (or, equivalently, to each of the strategies , and ). It is therefore less than surprising that the RC agent does best in a tournament where his opponents precisely have these dispositions. According to the analysis just given, if the player in the role of P2 were able to commit to not cheating or lying, and if the player in the role of PI knew this, then P2 would end up using a strategy which corresponds to the disposition attributed to RC, PI would adopt a strategy which can be associated with CC, and the players would end up cooperating. However, one should most emphatically not conclude from this that it is rational to co-operate in the XPD. The reason is simply that the XPD2 and XPD are not the same game. In the XPD2, P2 can commit to not cheating or lying, and it is precisely this ability which makes co-operation possible; conversely, it is the absence of such abilities among the players of the XPD that implies that co-operation is impossible there. These differences in the commitment possibilities between the two games lead directly to the differences in their structures as presented earlier. (That is, the ability to commit on the part of P2 implies that different information is available to PI in the XPD2 than in the XPD, which is in turn responsible for the differences in the strategic possibilities embedded in the two games.18) This implies that Danielson's tournament results emphatically do not suggest that it is rational to co-operate in SDs. What the tournament result, as analyzed by way of the XPD2, does suggest is that RC is not a moral agent. Danielson, in discussing the putative morality of RC, is mostly concerned with the question of whether RC can be considered as moral as CC, rather than with establishing RC's morality per se. However, following the other economic contractarians, Danielson identifies a moral agent as someone who respects the requirement that "an agent sometimes act contrary to her own interests in favour of mutual advantage" (1992, p. 3). RC, he believes, meets this criterion: "RC is a constrained agent. It sometimes chooses C while it prefers (D,C) to (C,C) and (D,D) to (C,D)" (1992, p. 114).19 We contend, however, that we have shown this claim to be false. That RC "sometimes chooses C while it prefers (D,C) to (C,C) and (D,D) to (C,D)" is vacuously true. It is true that RC sometimes chooses C. It is also true that she prefers (D,C) to (C,C) and (D,D) to (C,D). However, it is not true that she chooses C despite the fact that she prefers
(D,C) to (C,C) and (D,D) to (C,D). RC chooses to co-operate with exactly one type of P2 agent, namely, the conditional co-operator (f23). When she plays against the conditional co-operator, there are only two possible induced outcomes in the XPD game: (C,C) and (D,D). This is because if RC chooses C/D, then the conditional co-operator's decision rule dictates that she should opt for C/D as well. Therefore, the (D,C) and (C,D) outcomes are not possible. RC chooses co-operation because it is the best she can do (2 > 1) given that she cannot exploit CC; she does not choose co-operation because it is the mutually beneficial outcome. RC is therefore not constrained in the sense of having a decision rule that prevents her from choosing an action which would give her a higher payoff. Is RC constrained in any other way? Well, she is not constrained in the sense that the concept of constrained maximization is necessary to account for her behaviour. We have seen that RC can be construed as maximizing her payoff in the usual way in the XPD2 game. Nor is she constrained in the sense that there are some elements of her set of feasible strategies to which she does not have access. At each information set in the XPD2 game, RC freely chooses C or D. We therefore conclude that RC is not moral, and that morality is a superfluous concept in modeling her behaviour. The defender of Danielson is still not quite out of objections. Considering a last one will be helpful, because it will permit us to generalize our critical conclusion. Suppose that one accepts (1) that XPD2 is a representation of *XPD when the population of agents is constituted of UCs, UDs, and CCs; and (2) that in the XPD2, RC in the role of PI is an ERA. Doubts might persist, however, based on worries about what happens when RC is in the role of P2. It could be argued that, in the role of PI, even a direct maximizer can agree to co-operate, if he is sensitive to the dispositions taken by the other agents. In Danielson's view, this is a big if: the received theory, as he understands it, does not allow sensitivity to other agents' dispositions. However, in trying to take account of a defence of the standard theory, he does construct a direct maximizer capable of taking others' decision rules into account: Player I needs to form an expectation of the other player (player II)'s choice. On what should a straightforward maximizer base this expectation? Here, we face a choice point; the answer is not determined by the motivational definition of the SM agent as never self-denying. The answer may seem obvious: a straightforward maximizer should find out what the other player will in fact do, if possible ... The problem is the weight of received tradition. SM is found in a theory that tends to a prioristic epistemology, assuming whenever possible that the other agent is similar to oneself ... It turns out to be easy and instructive to implement both epistemic flavours of the received theory's champion. (1992, p. 150)
Running this implementation, Danielson finds that a "sensitive" straightforward maximizer can co-operate in the PI role, but not in the P2 role. He explains it thus: A second moving SM (of either flavour) would not co-operate. The reason is that the (C,D) outcome maintains its motivational pull, so a second moving SM would always choose D, and thus prove untrustworthy. (1992, pp. 152-53)
Therefore, Danielson might continue, the crucial test of whether an agent is moral is whether she resists the temptation to defect when she is the second mover. The XPD2 only represents RC in the role of PI and not in the role of P2. Therefore, the argument that RC is not moral (and for that matter, that RC is the ERA) is incomplete. Our hypothetical defender of Danielson actually has two distinct objections: (1) our account of RC's supposed lack of morality is restricted to RC in the role of first mover; and (2) our claim that the *XPD can be represented as a game is incomplete because it considers only a subset of Danielson's agents (namely, UC, UD, and CC).20 We answer the second objection by constructing a new game, the XPD3, that represents the *XPD when the population of agents in the role of P2 includes RCs. Examining the equilibria of this game leads us to conclude that a new agent is the ERA in the role of P2. This new agent acts exactly like RC in our population of players, except that he succeeds in co-operating with one of RC's demons, the CUC. (A CUC is an agent originally proposed by Smith (1991), and named by Danielson (1991), who co-operates only with UC, thereby exploiting precisely the opportunities for co-operation forsaken by RC.) This result implies that, in any population of agents excluding CUC (like the one considered in Danielson's tournaments), RC is the ERA. Using the XPD3 game, we also answer the first objection with a quick argument to the effect that RC still cannot be construed as moral when she is in the role of P2. So we now consider an *XPD which includes agents like RC. Again, we restrict our attention to the case where all agents are transparent and in which there is no learning. We will show that RCs cannot be accommodated in the role of P2 in the XPD2; hence the need to construct the new game. The XPD3 will be distinct from the XPD2 for the same reason that the XPD2 differs from the XPD: they incorporate different information structures, and therefore different strategic possibilities. To see why this is the case, we examine the decision procedure adopted by RCs in the role of P2 and we show that it requires information which is not available to P2 in the XPD2. RC's decision procedure can be described as follows. RC, in the role of P2, co-operates with PI if and only if: (1) PI has a decision rule which would dictate that she play D were RC to use a response func-
tion (i.e., a function f_2^h: {C,D} → {C,D}) that returned D for her move; and (2) P1 plays C.21 This means that RC first finds out whether P1 would choose to defect were he (RC) to withhold assurance of co-operation. If P1 would defect under these circumstances, then RC reciprocates co-operation with co-operation and defection with defection. If P1 were to co-operate even though RC had not given her any assurance that he would co-operate in return, then RC chooses to defect regardless of P1's move. Note that this interpretation makes RC's decision procedure equivalent to the following: (1) against a P1 who would play D were RC to use a response function that returned D for P1's move, RC behaves like a conditional co-operator; (2) against a P1 who would play C were RC to use a response function that returned D for P1's move, RC behaves like an unconditional defector. Therefore, RC can be seen as choosing a response function, which assigns a move X to P1's move Y, on the basis of P1's decision procedure.22 Game theory requires us to say that if P2 accesses P1's decision procedure in order to make his decision, then P2 knows P1's chosen decision procedure before selecting his own. Note that in the XPD2, P2 has one information set at which he knows nothing. Therefore, the addition of RCs to our list of agents engenders a new strategic situation, in which P1 must be modeled as moving first. She chooses a contingent plan which reveals the action that she will take (C or D) as a function of the response function adopted by P2. P2 is modeled as moving second, choosing a response function on the basis of P1's decision rule. Figure 3 shows the extensive form of the XPD3. P1 plays first. She chooses a strategy from the set G and then announces her strategy to P2. Given the strategy announced by P1, P2 chooses a response function from F. This representation can be motivated as follows. The set of strategies G, with typical element g_1^k: F → {C,D}, contains P1's possible decision procedures. These strategies are precisely the ones which were available to P1 in the XPD2. G contains the eight possible functions from F = {f_2^1, f_2^2, f_2^3} to {C,D}; each g_1^k specifies the move that P1 makes against each of P2's three possible response functions.
The first, second, and fourth of these strategies have already been identified above as the decision procedures that a UC, a UD, and an RC would adopt, respectively. Of the remaining five strategies, notice that g_1^3 corresponds to the CC (since the agent plans to co-operate with the unconditional co-operator (f_2^1) and the conditional co-operator (f_2^3)) and g_1^6 corresponds to CUC, who only co-operates with the unconditional co-operator.
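For concreteness, the eight decision procedures in G can be enumerated mechanically. The sketch below is ours; it lists the eight functions from F to {C,D} and tags the five that the text names, identifying them by their behaviour rather than by the chapter's numbering, which we do not reproduce for the three unnamed rules.

from itertools import product

# P2's three response functions, in the order (UC, UD, CC), i.e., (f_2^1, f_2^2, f_2^3).
RESPONSES = ("UC", "UD", "CC")

# G: every function from the three response functions to a move in {C, D}; 2**3 = 8 in all.
G = [dict(zip(RESPONSES, moves)) for moves in product("CD", repeat=3)]

# The five decision procedures named in the text, identified by behaviour.
NAMED = {
    ("C", "C", "C"): "UC",    # co-operate with everyone
    ("D", "D", "D"): "UD",    # defect against everyone
    ("C", "D", "C"): "CC",    # co-operate with UC and CC, defect against UD
    ("D", "D", "C"): "RC",    # co-operate only with CC
    ("C", "D", "D"): "CUC",   # co-operate only with UC
}

for g in G:
    profile = tuple(g[r] for r in RESPONSES)
    print(profile, NAMED.get(profile, "(unnamed)"))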
Figure 3: The extensive form of the XPD3.
At each of his information sets, P2, knowing the decision procedure of P1, chooses an action which is an element of the set of response functions F. His strategy is a function, h_2: G → F, which maps P2's information sets (identified by a choice of decision procedure by P1) onto an action for P2 (a response function). This strategy is a decision procedure in the usual sense; it completely specifies how P2 will behave against each possible P1 agent.23 For example, RC's decision procedure can be translated into the notation just developed in the following way: (1) if g_1^i(f_2^2) = D, then h_2^RC(g_1^i) = f_2^3; (2) otherwise, h_2^RC(g_1^i) = f_2^2. That is, if a P1, holding strategy g_1^i, plans to defect against UD (f_2^2), then RC plans to behave with this P1 after the fashion of a conditional co-operator (f_2^3). Otherwise, RC plans to behave like an unconditional defector. The move that each g_1^i assigns against UD (f_2^2) can then be used to construct RC's strategy. This yields: h_2^RC(g_1^i) = f_2^3 for i = 2, 3, 4 and 6, and h_2^RC(g_1^i) = f_2^2 for i = 1, 5, 7 and 8. Representing the decision procedures of the other agents in Danielson's tournament is straightforward. These agents' decision procedures as P2 do not depend on the decision procedure chosen by P1, so that the function which assigns a response function to P1's decision procedure will be constant. Therefore, we have h_2^UC(g_1^i) = f_2^1 for all i, h_2^UD(g_1^i) = f_2^2 for all i, and h_2^CC(g_1^i) = f_2^3 for all i, where the superscripts identify the agents. Just as in the XPD2, specifying actions in this game is sufficient to "induce" an XPD outcome. For instance, suppose that P2 chooses to be an RC (h_2^RC) and P1 chooses to be a CUC (g_1^6). Then, h_2^RC(g_1^6) = f_2^3 says that RC acts like a conditional co-operator with CUC, g_1^6(f_2^3) = D says that faced with a conditional co-operator the CUC chooses D as her move, and f_2^3(D) = D says that RC responds to P1's move by defecting as well. The outcome is thus joint defection and the payoffs are (1,1). We now analyze the XPD3 in search of equilibria. A pair of SPE strategies in the XPD3 game provides a strategy for P2 which is a best reply to each and every P1 agent in the population. The population in question is identified with the possible decision procedures {g_1^1, ..., g_1^8} for P1. Again, we find these SPE by backwards induction on the game tree. One notices immediately that there will be no unique subgame perfect equilibrium strategy for P2. Indeed, when P1 acts like an unconditional defector by choosing g_1^2, then P2 maximizes his payoff (at the second left-most information set) by choosing either the response function associated with UD (f_2^2) or the response function associated with CC (f_2^3). Similarly, against a P1 who chooses CC (g_1^3), both the f_2^1 and the f_2^3 response functions, which correspond, respectively, to those of UC and CC, maximize P2's payoff. At all other information sets for P2,
exactly one reply function maximizes his payoff. The specification of an SPE strategy for P2 must therefore include h_2^*(g_1^1) = f_2^2, h_2^*(g_1^4) = f_2^3, h_2^*(g_1^5) = f_2^2, h_2^*(g_1^6) = f_2^1, h_2^*(g_1^7) = f_2^2, and h_2^*(g_1^8) = f_2^2, where the '*' designates an equilibrium strategy. This analysis implies immediately that there are four subgame perfect equilibrium strategies for P2. For any one of these four strategies for P2, there are three strategies which are best replies for P1: g_1^3, corresponding to CC; g_1^4, corresponding to RC; and g_1^6, corresponding to CUC. There are therefore twelve SPEs in this game, and each of them induces the joint co-operation outcome in the XPD game. Analyzing each of these equilibria specifically would require a long detour through a number of issues which do not directly bear on our main argument. Instead, we now use the preceding analysis to assess the relationship between rationality and morality as concepts relevant to consideration of the circumstances of RC in the role of P2. The P2s who do best against the population of agents included in the XPD3 are those who use one of the four strategies h_2^*. Comparing the requirements for the SPE strategies h_2^* with the strategy associated with an RC agent, h_2^RC, reveals that the strategy for RC specifies a payoff maximizing action against every type of P1 agent except against CUC (g_1^6). As mentioned earlier, the meeting of a CUC and an RC results in mutual defection, because RC behaves like a conditional co-operator with CUC while CUC only co-operates with unconditional co-operators. The strategy h_2^*, in contrast, specifies that if P2 recognizes that P1 has adopted the disposition of a CUC, then P2 commits to acting like an unconditional co-operator with her. In this way, the players succeed in co-operating and thus obtain higher payoffs. This analysis implies immediately that, in a population of P1s who can choose any of the dispositions in G except g_1^6, RC is an ERA. Indeed, in that case, RC chooses a payoff maximizing action at each of his information sets, and thus uses a best reply against each possible type of P1 agent. In particular, note that the set of dispositions of the agents in Danielson's tournaments where RC is declared a winner is a subset of G which does not include g_1^6. In light of the fact that RC is then an ERA, this is, again, not surprising.24 Having shown that RC is an ERA for a large class of populations in the XPD3, we now turn to assessing the morality of such an agent. As noted previously, to be moral, RC's decision procedure must be such that, given that P1 chooses a disposition which leads her to co-operate, RC chooses a response function which leads him to co-operate when he could have defected. To ascertain whether this condition holds, it suffices to check whether, when RC reaches the joint co-operative outcome with P1, there exists a choice of response function which would
have led to unilateral defection on the part of RC. Examining the game tree for the XPD3, we find that the answer is no. RC in the role of P2 cooperates only with CC (gj3) and RC (g!4), precisely the agents with whom exploitive behaviour is impossible. Therefore, RC never chooses to defect unilaterally against them because he cannot. We conclude that RC, in the role of P2, still is not moral. This is unsurprising. Given the criterion for moral agency applied by the economic contractarians, an agent who is an ERA relative to a given population is, by definition, not moral. An ERA maximizes his payoff against each possible disposition. This means that if an ERA chooses a disposition which leads to joint co-operation, with a payoff of 2, then a disposition which allows unilateral defection, and a payoff of 3, must not have been available. Our general conclusion beckons, at last. The project of the economic contractarians seeks to provide a fundamental justification for morality. In particular, it seeks to show that agents who choose to constrain their behaviour can achieve co-operative outcomes in SDs and hence can make themselves better off. An agent who behaves in this way is moral, because he chooses to co-operate with others when he could have profited from unilateral deviation; he is simultaneously rational, because he improves his ultimate lot by accepting the constraint. There exist, in a given situation, different ways in which an agent can choose to constrain his behaviour. In choosing a disposition, an agent simply chooses the precise nature of his constraint. We have shown that, far from being a "new way to maximize," or a new way to behave in a strategic context, a disposition is best interpreted as a commitment to a particular strategy in a well-specified game. Now, is an agent who chooses a disposition by committing to a particular strategy constrained? Is he, in particular, constrained when compared to an agent who does not commit to a plan of action? It is natural, at first glance, to suppose so. After all, the agent who commits to a particular strategy is constrained to obey it, while the uncommitted agent can always change his mind. This intuition is unreliable, however. A constraint, by definition, effects a reduction in a set of alternatives, of choices, or of the feasible actions to be considered. It is immediate that the addition of a constraint cannot improve the result of a maximization procedure. An agent who, in a strategic situation, chooses an action with the aim of maximizing his payoff, cannot benefit from a constraint that restricts his possible actions. To offer a crude example, if a firm were constrained to hire only workers named Zeke, it would experience no inconvenience if all its workers were Zekes to begin with, but its efficiency and profits would have to suffer otherwise; in no possible world could the constraint make the firm better off.
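The claim that a constraint cannot improve the result of a maximization can be illustrated on the games already constructed. The sketch below is ours and is only an illustration under the chapter's payoff numbers, not a proof: it checks that restricting the set of response functions available to P2 in the XPD3 never raises his best payoff against a fixed P1 decision procedure (RC's is used as the example).

from itertools import chain, combinations

# Payoffs and P2's response functions, as in the earlier sketches.
PAYOFFS = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
           ("D", "C"): (3, 0), ("D", "D"): (1, 1)}
F = {"UC": lambda m: "C", "UD": lambda m: "D", "CC": lambda m: m}

def p2_best_payoff(g, available):
    """P2's maximal payoff against P1's decision procedure g when P2 may choose
    a response function only from the collection 'available'."""
    def value(name):
        m1 = g[name]                     # P1's move against that response function
        return PAYOFFS[(m1, F[name](m1))][1]
    return max(value(name) for name in available)

def nonempty_subsets(names):
    names = list(names)
    return chain.from_iterable(combinations(names, k)
                               for k in range(1, len(names) + 1))

g_rc = {"UC": "D", "UD": "D", "CC": "C"}     # RC's decision procedure as P1
assert all(p2_best_payoff(g_rc, subset) <= p2_best_payoff(g_rc, F)
           for subset in nonempty_subsets(F))

Enlarging P2's menu of response functions can only weakly raise his maximal payoff, which is the sense in which the XPD2 and XPD3 expand, rather than constrain, the agents' strategic possibilities.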
Given that agents who choose dispositions such as RC and CC do achieve co-operative outcomes and do make themselves better off in some games, it follows that such agents cannot be construed as acting under constraint, relative to their competitors. We have already argued that agents who choose such dispositions in an *SD like the *XPD are in fact playing a new game, which the proper XPD no longer represents. The difference in the information that such players must hold before choosing their actions engenders a new strategic situation. This new situation involves a larger set of possible strategies, not a smaller one; it expands the set of feasible actions available to the agents, rather than constraining it. To illustrate the point, compare again the XPD, XPD2, and XPD3 games. Agents without dispositions play the XPD; in that case, the P1 agent can choose either C or D, and the P2 agent chooses a response function from the set F. Agents who commit to dispositions play either the XPD2 or the XPD3, depending on the information structure of the interaction. In the case of the XPD3, the set of feasible actions is larger: the agents can do exactly what the agents in the XPD can, and they have available actions which the agents in the XPD lack. Can a P1 agent in the XPD3 "simply" choose C or choose D? Certainly. This would just mean that, regardless of P2's choice of response function, P1 would choose C (g1(f2k) = C ∀k) or would choose D (g1(f2k) = D ∀k). But P1 has, in addition, other strategies available, because she can condition her action on P2's behaviour. It should by now be clear that analogous reasoning holds for P2. The point of this argument is that, when players achieve co-operation through strategic commitment, it is not the presence of moral constraint that does the work. What is crucial is simply the ability of agents to commit. Gauthier's difficulties result from the fact that this is missing in genuine SDs. However, the possibility of payoff-improving commitment in games other than SDs has been widely recognized in economics. For example, a seller who faces a group of buyers, and who does not know how much the individual buyers are willing to pay for the object of sale, cannot do better than to commit himself to selling the object by auction with a reserve price (that is, a minimum price below which the seller is committed not to sell).25 Various possible mechanisms make such commitment by the seller possible. He can, for example, simply leave suitable instructions with his auctioneer; by removing himself from the selling process, he prevents himself from giving in to the temptation of reneging on his announced strategy. This temptation is real: if our seller were to hold the auction himself and find that there were no takers for his object because buyers found his reserve price too high for their tastes, he would want to reconsider. However, the fact that, through his agent, the seller can signal to the buyers that
his reserve price is binding does improve his expected return. The auctioneer functions as an agent for the seller - an enforcement mechanism for commitment - in exactly the same sense as Hobbes' sovereign acts as an agent for the players in the state of nature. Genuine agency, in the everyday sense, can thus be rationally effective. Moral agency, however, is perfectly useless. Morality is a deeply peculiar sort of constraint, quite unlike the Hobbesian sovereign or the seller's auctioneer. The moral agent - if appeal to his morality is to serve any explanatory purpose not achieved by appeal to his rationality alone - must be free to escape his self-constraint, that is, must be capable of abandoning the dictates of his moral disposition if he so chooses. But in that case he has not bound himself at all; if he "does the right thing," from the Pareto point of view, he has simply chosen to act co-operatively, and has thereby revealed that doing so is perceived by him to be in his own best interest. The contractarian seems to conceive of morality as a set of non-binding bonds. Put this way, the conception strikes us as simply absurd. This point has not escaped other critics of the contractarian moral concept (see, for example, Copp 1991). Most of these critics, however, have intended to show that the contractarians are wrong about morality. We suggest something else: that the contractarians have successfully regimented the folk concept of morality, and, in doing so, have exposed its ultimate incoherence. We find it entirely plausible to suppose that the idea of morality is, like the medieval economists' idea of a just price, a notion that makes sense only on the presupposition of objectively given values. Economists abandoned this presupposition many years ago, but the continuing existence of moral theory suggests that philosophers have not yet followed suit. We hope that their recent, but now widespread, contact with game theory may provide a useful push in the right direction.
Acknowledgments We would like to thank Ig Horstmann and Wayne Norman for their suggestions and comments. Ross is also grateful to the Social Sciences and Humanities Research Council of Canada, for financially facilitating this research.
Notes 1 We say "a version of" moral scepticism because there are several actual and possible positions that go by that name. One might be a moral sceptic in a strictly epistemic sense, denying that people can know what is and what is not moral. We are sceptics in this sense, but trivially, since we deny that there is anything moral for people to know. In the contractarian literature that we will be discussing, a sceptic who more commonly serves as the foil
is someone who denies that moral conduct can be rationally justified. We are also sceptics in this sense, for reasons that will be given. Indeed, we may be moral sceptics in every possible sense, since the variety of moral scepticism with which we sympathize is perhaps the logically strongest kind: we believe that the concept of morality is ultimately incoherent. We will not be providing an argument for this very ambitious claim in the present paper, however.
2 Binmore is content with technical arguments against the economic contractarians. We endorse his technical arguments. However, we think that proper engagement of the basic economic contractarian idea requires attention to deeper philosophical issues.
3 Binmore's agents, behind the veil, know which sets of preferences exist; but they do not know whose preferences they have.
4 This should not be read as implying that Binmore defends the actual social status quo. He makes clear that if we imagined ourselves playing the game of morals as systematically as his theory suggests, we would discover that our arrangement in the game of life is inferior in many ways.
5 We say "first-order" moral concepts because in considering the psychology of players of social choice games one might have to take account of their beliefs about moral concepts. Obviously, however, the fact that agents may have beliefs about moral concepts does not imply that those concepts are coherent.
6 As noted previously, "unconstrained maximizers" do not always achieve the Pareto-optimal outcome in co-ordination games. But this is not because they are unconstrained. Our point, in any case, is that it is only in SDs that the co-operative outcome is impossible, on the strict game-theoretic understanding.
7 This point is related to our reasons for being unhappy with the phrase "social dilemma." For an ERA - call him I - SDs do not, by definition, present dilemmas of any kind. In such cases, if I's fellow players act without regard for the greater social good, then I would be pointlessly sacrificing himself if he failed to follow their example. If, however, the others do behave in such a way as to bring about the best for society, then I also maximizes his utility by defecting unilaterally. So I should act to maximize his utility regardless of what the others do. Where is the dilemma? Perhaps I will be rendered gloomy by his recognition that he wants what game theory tells him is impossible, namely, the Pareto-superior outcome. But this constitutes a dilemma for him only in the sense that being wingless constitutes a dilemma for the person who wishes he could fly like a bird.
8 Agents are merely takers of actions; no constituent psychology is presupposed. Elsewhere (LaCasse and Ross 1994), we have defended the use of the phrase "Dennettian agents," after Dennett (1987), to distinguish this class of entities.
9 Most of the agents in Danielson's tournaments do not make these sorts of choices; their decision rules are hard-wired. However, the reason for this is mainly procedural. The point of the tournaments is to demonstrate the "substantive rationality" (see text, below) of indirect maximization, and for this purpose it does not matter that the agents are not autonomous RAMAs. However, Danielson's understanding of morality - like Gauthier's and unlike Binmore's - requires that morality be choosable as rational. Thus, he says that his agents, in being "rigidly fixed ... fall short of the ideal of morality: autonomous moral choice that somehow combines freedom and constraint" (1992, p. 15). This is why, late in the book, he experiments with rudimentary learning capacities in some of his agents.
10 Contrary to what one might suppose, biological evolution is not the best analogy. Following Dawkins (1976), Danielson (1992, pp. 40-45) holds that the players in evolutionary games are genes rather than organisms, and that genes are straightforward maximizers. There thus need be no gap between their preferences and their behaviour.
11 It might be objected that UD will have no incentive for making her decision rule public, and that it is therefore unnatural to set up the *game this way. However, since there are only three types of agents, and since UC and CC make their decision rules public in Danielson's tournament, an agent who refuses to reveal her decision procedure must be UD. In general, any agent who co-operates will wish to reveal herself so as to assure the other player of her co-operation; therefore, any agent who does not act in this way must be presumed to want to defect.
12 Confusion is threatened by the fact that Danielson sometimes uses "strategy" and "decision procedure" interchangeably. In the XPD2, strategies are the objects over which agents maximize.
13 In general, given that P2 chooses a strategy f2h and that P1 chooses a strategy g1k : F → {C,D}, then g1k(f2h) = move 1 is P1's C or D choice and f2h[g1k(f2h)] = f2h[move 1] = move 2 is P2's C or D choice.
14 In games like the PD, where each agent has exactly one move and where these moves are simultaneous, each player has only one information set, at which she has no information. An agent's strategy in this context apparently has nothing to respond to or to condition on. However, the strategy of an agent in such a context could still itself be a conditional statement of action. This would be true, for instance, in a simultaneous-move game where both agents announced their decision rule in the PD (that is, a strategy in F').
15 Trivially, when an agent has a unique dominant strategy, as in the PD, then, by definition, this strategy is a best reply to any strategy chosen by the other agents, and there is no need to refer to their choices. Perhaps the distortions incorporated into D-game theory are a manifestation of over-concentration on the PD in the philosophical literature connected with game theory.
16 To give Danielson his due, UD is still a best reply to UD in the XPD2: it is an NE. This means that if I know that the other player will defect always, then so should I. But it does not mean that if I were to learn that the other agent is a CC then I would still wish to be a UD.
17 This is the unique SPE, but there are quite a few other NE. That is, there are other strategy vectors for which P1 chooses a best reply to the disposition actually chosen by P2 and for which the disposition chosen by P2 is a best reply to P1's strategy. However, there are no other strategy vectors for which P1 chooses a best reply to the disposition actually chosen by P2 and to all other dispositions that P2 could have chosen. We return to this point shortly.
18 Binmore (1994, pp. 175-77) makes a similar point in connection with a criticism of Gauthier.
19 In this and the following quotation, we have slightly altered Danielson's notation to make it consistent with ours.
20 In what follows, we continue to restrict the set of possible actions for P2 to the set F rather than F'. This changes the equilibrium of the game that we consider below, but it does not alter the substantive conclusions that we draw from the analysis.
21 By "response function" we mean a function which gives P2's response to any move by P1. We are not using the term "decision rule" here, since this term is reserved for players' strategies. As will become clear shortly, a player's response function is not her strategy.
22 In Danielson's tournaments, RC should be executing the code for his adversary's move 1. However, there are difficulties in implementing this. Danielson (1992, p. 91) avoids them by assuming that players are "unified" so that RC actually executes the code for his adversary's move 2.
23 It seems apparent, from the procedure we used to construct XPD2 from XPD, and again to construct XPD3 from XPD2, that *XPD can be represented as a game regardless of the population of agents that one wishes to represent. More precisely, new agents can always be manufactured by having their choice of decision procedures be conditional once again on that of their opponents.
24 Danielson does not obtain the result that the straightforward maximizer, even if he is sensitive, co-operates when he is second mover. This is because Danielson does not endow his "sensitive direct maximizer" with the ability to commit. We return to this point shortly.
25 This is strictly true only in the "independent private values model." See Myerson (1981).
References
Binmore, K. (1994). Game Theory and the Social Contract, Volume 1: Playing Fair. Cambridge, MA: MIT Press.
Copp, D. (1991). Contractarianism and moral scepticism. In P. Vallentyne (ed.), Contractarianism and Rational Choice. New York: Cambridge University Press.
Danielson, P. (1991). Closing the compliance dilemma: How it's rational to be moral in a Lamarckian world. In P. Vallentyne (ed.), Contractarianism and Rational Choice. New York: Cambridge University Press.
Danielson, P. (1992). Artificial Morality. London: Routledge.
Dawkins, R. (1976). The Selfish Gene. New York: Oxford University Press.
Dennett, D. (1987). The Intentional Stance. Cambridge, MA: MIT Press/Bradford.
Gauthier, D. (1986). Morals By Agreement. Oxford: Oxford University Press.
Gauthier, D. (1993). Uniting separate persons. In D. Gauthier and R. Sugden (eds.), Rationality, Justice and the Social Contract. Hemel Hempstead, UK: Harvester Wheatsheaf.
LaCasse, C., and D. Ross (1994). The microeconomic interpretation of games. In R. Burian, M. Forbes and D. Hull (eds.), PSA 1994, vol. 1. East Lansing, MI: Philosophy of Science Association, pp. 379-387.
Myerson, R. B. (1981). Optimal auction design. Mathematics of Operations Research, 6 (1): 58-73.
Smith, H. (1991). Deriving morality from rationality. In P. Vallentyne (ed.), Contractarianism and Rational Choice. New York: Cambridge University Press.
Vallentyne, P. (ed.) (1991). Contractarianism and Rational Choice. New York: Cambridge University Press.
Evolution
17 Mutual Aid: Darwin Meets The Logic of Decision Brian Skyrms
1. Mutual Aid On June 18, 1862, Karl Marx wrote to Frederick Engels, "It is remarkable how Darwin has discerned anew among beasts and plants his English society ... It is Hobbes' bellum omnium contra omnes." Marx is being somewhat unfair to Darwin. But in 1888 "Darwin's Bulldog," Thomas Henry Huxley, published an essay entitled "The Struggle for Existence and its Bearing Upon Man," which was close to Marx's caricature: The weakest and the stupidest went to the wall, while the toughest and the shrewdest, those who were best fitted to cope with their circumstances, but not the best in any other way survived. Life was a continuous free fight, and beyond the limited and temporary relations of the family, the Hobbesian war of each against all was the normal state of existence. (Huxley 1888, p. 165)
Huxley's portrayal of "nature red in tooth and claw" had a great popular impact, and contributed to paving the way for the social Darwinism that he himself detested. The great anarchist, Prince Petr Kropotkin, was moved to publish an extended rebuttal in the same periodical, Nineteenth Century, which had carried the piece by Huxley. Kropotkin's articles, which appeared over a period from 1890 to 1896, were collected in a book entitled Mutual Aid: A Factor of Evolution. The introduction begins: Two aspects of animal life impressed me most during my youth in Eastern Siberia and Northern Manchuria. One of them was the extreme severity of the struggle which most species of animals have to carry on against an inclement Nature ... And the other was that even in those few spots where animal life teemed in abundance, I failed to find - although I was eagerly looking for it - that bitter struggle for the means of existence, among animals belonging to the same species, which was considered by most Darwinists (though not always by Darwin himself) as the dominant characteristic of the struggle for life, and the main factor of evolution ...
... In all these scenes of animal life which passed before my eyes, I saw Mutual Aid and Mutual Support carried on to an extent which made me suspect in it a feature of the greatest importance for the maintenance of life, the preservation of each species, and its further evolution.
Kropotkin believes that mutual aid plays as important a part in evolution as mutual struggle, and he goes on to document instances of mutual aid among animals and men. The correctness of Kropotkin's main conclusion is indisputable. Both mutual aid and pure altruistic behaviour are widespread in nature. Worker bees defend the hive against predators at the cost of their own lives. Ground squirrels, prairie dogs and various birds and monkeys give alarm calls in the presence of predators to alert the group, when they might serve their own individual interests best by keeping silent and immediately escaping. Vampire bats who fail to find a blood meal during the night are given regurgitated blood by roost mates, and return the favour when the previous donor is in need. Many more examples can be found in the biological literature (e.g., see Krebs and Davies 1993, chs. 11-13). Darwin was quite aware of co-operation in nature. He discussed it at length in The Descent of Man. But his attempts to give an explanation did not succeed in terms of his own evolutionary principles. In The Descent of Man Darwin pointed out the benefit to the group of co-operation, but his principles required explanation in terms of the reproductive success of the individual. We are left with the question: How can the evolutionary dynamics, which is driven by differential reproduction, lead to the fixation of co-operative and altruistic behaviour?
2. The Logic of Decision In The Logic of Decision (1965) Richard Jeffrey introduced a new framework for decision theory, which was meant to modify and generalize the classic treatment of Savage (1954). Savage sharply distinguishes acts, states of the world, and consequences. All utility resides in consequences. Acts together with states jointly determine consequences. In the special case in which there are only a finite number of states, we can write the Savage expected utility of an act as a probability-weighted average of the utilities of the consequence resulting from that act in that state:
Savage: Utility(Act) = Σi Probability(Statei) Utility(Act, Statei)
It is important here that the probabilities of the states are unconditional. They are simply your degrees of belief that the world is in that state. The same probabilities are used in computing the expected utility for each act.
Jeffrey wanted to allow for the possibility that the act chosen might influence the probability of the states. Jeffrey makes no formal distinction between acts, states, and consequences. There is just a Boolean algebra, whose elements are to be thought of as propositions. Each proposition has a probability and each proposition with positive probability has a utility. In the application of the theory the decision-maker can identify a partition of propositions which represent the alternative possible acts of her decision problem, and a partition representing alternative states of the world. Jeffrey takes the expected utility of an act to be a weighted average of the utilities of act-state conjunctions, with the weighting of the average being the probability of the state conditional on the act instead of the unconditional probability used in Savage:
Jeffrey: Utility(Act) = Σi Probability[Statei | Act] Utility[Act & Statei]
Here the probabilities of the states are conditional on the acts, so the weighting of the states may be different in computing the expected utilities of different acts. The Jeffrey expected utility makes sense for any element, A, of the probability space, relative to any finite partition {Si}, whether the former is intuitively an act and the latter a partition of states or not. Furthermore, for fixed A, the expected utility of A comes out the same when calculated relative to any finite partition. This is the reason that it is possible for Jeffrey to dispense with a formal distinction between acts and states, and to endow all (non-null) elements of the basic probability algebra with an expected utility as well as a probability. The expected utility of the whole space is of special interest. This is the expected utility of the status quo:
Jeffrey Utility of the Status Quo: USQ = Σi Probability(Acti) Utility(Acti)
There is, however, a difficulty when Jeffrey's system is interpreted as a system for rational decision. The probabilities in question are just the agent's degrees of belief. But then probabilistic dependence between act and state may arise for reasons other than the one that Jeffrey had in mind - that the agent takes the act as tending to bring about the state. The dependence in degrees of belief might rather reflect that an act is evidence for a state obtaining; for instance, because the act and state are symptoms of a common cause. This raises the prospect of "voodoo decision theory," that is, of basing decisions on spurious correlation (see Gibbard and Harper 1981; Lewis 1981; Nozick 1969; Skyrms 1980, 1984; Stalnaker 1981). Prisoner's Dilemma with a clone -
or a near clone - is a well-known kind of illustration of the difficulty (Lewis 1979; Gibbard and Harper 1981). Max and Moritz are apprehended by the authorities and are forced to play the Prisoner's Dilemma (for biographical data, see Busch 1865). Each is given the choice to remain silent (= co-operate) or turn state's evidence (= defect). We discuss the decision problem from the point of view of Max, but Moritz's situation is taken to be symmetrical. Max's payoffs depend both on what he does and what Moritz does, and he takes his utilities to be as given below:

                     Moritz Co-operates   Moritz Defects
Max Co-operates             0.9                0.0
Max Defects                 1.0                0.6
Max also believes that Moritz and he are much alike, and although he is not sure what he will do, he thinks it likely that Moritz and he will end up deciding the same way. His beliefs do not make his act probabilistically independent of that of Moritz even though we assume that they are sequestered so that one act cannot influence the other. We have evidential relevance with causal independence. For definiteness, we assume that Max has the following probabilities for joint outcomes:

                     Moritz Co-operates   Moritz Defects
Max Co-operates            0.45                0.05
Max Defects                0.05                0.45
(Thus, for example, Max's probability that he and Moritz both co-operate is .45, and his conditional probability of Moritz co-operating given that he does is .9.) If Max applies Savage's theory and takes Moritz's acts as constituting his own states, he will take the states as equiprobable and calculate the Savage expected utility of his co-operating as (.5)(.9) + (.5)(0) = .45 and the Savage expected utility of his defecting as (.5)(1) + (.5)(.6) = .8. He will maximize Savage expected utility by defecting. This conclusion is independent of the probabilities assumed, since Savage expected utility uses the same weights in calculating the expected utility of both acts, and defection strictly dominates co-operation in the payoff matrix - that is to say, whatever Moritz does, Max is better off defecting. But if Max applies Jeffrey expected utility, using conditional probabilities as weights, he will calculate the Jeffrey expected utility of
co-operating as (.9)(.9) = .81 and the Jeffrey expected utility of defecting as (.1)(1) + (.9)(.6) = .64. He will maximize Jeffrey expected utility by co-operating. This is because Jeffrey expected utility uses conditional probabilities as weights, and the probabilities conditional on the two acts are different. This is because of the spurious correlation - the probabilistic dependence which does not reflect a causal dependence. Here, maximization of Jeffrey expected utility selects a strictly dominated act. In response to these difficulties, Jeffrey introduced a new concept in the second edition of The Logic of Decision: that of ratifiability (for related ideas, see Eells 1982, 1984). Jeffrey's idea was that during the process of deliberation, the probabilities conditional on the acts might not stay constant, but instead evolve in such a way that the spurious correlation was washed out. In other words, it is assumed that at the end of deliberation the states will be probabilistically independent of the acts. Under these conditions, the Jeffrey expected utility will be equal to the Savage expected utility. Thus, in the previous example expected utility at the end of deliberation would respect dominance, and defection would then maximize Jeffrey expected utility. Consider the conditional probabilities that an agent would have on the brink of doing act A, and let UA be the Jeffrey expected utility calculated according to these probabilities. An act A is said to be ratifiable just in case:
UA(A) > UA(B) for all B different from A
Jeffrey suggested that a choiceworthy act should be a ratifiable one. The reason for talking about "the brink" is that when the probability of an act is equal to one, the probabilities conditional on the alternative acts have no natural definition. The idea of ratifiability, so expressed, is ambiguous according to how "the brink" is construed. Thus, the conditional probabilities that one would have "on the brink" of doing act A might be construed as limits taken along some trajectory in probability space converging to probability one of doing act A. The limiting conditional probabilities depend on the trajectory along which the limit is taken, and for some trajectories the spurious correlation is not washed out. The requirement of Ratifiability does not, in itself, eliminate the sensitivity of Jeffrey decision theory to spurious correlations, but it will prove to be of crucial importance in another setting. The behaviour of the co-operators of section one is something like that of decision-makers using the Jeffrey expected utility model in the Max and Moritz situation. Are ground squirrels and vampire bats using voodoo decision theory?
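To make the contrast concrete, here is a minimal Python sketch (our own illustration, not part of Jeffrey's or Savage's formal apparatus) that recomputes Max's expected utilities from the two tables above; the function names are ours.

# Savage vs. Jeffrey expected utility for Max, using the tables above.
payoff = {            # Max's utilities: payoff[max_act][moritz_act]
    "C": {"C": 0.9, "D": 0.0},
    "D": {"C": 1.0, "D": 0.6},
}
joint = {             # Max's degrees of belief over joint outcomes
    ("C", "C"): 0.45, ("C", "D"): 0.05,
    ("D", "C"): 0.05, ("D", "D"): 0.45,
}

def savage_eu(act):
    """Weight each state (Moritz's act) by its unconditional probability."""
    p_state = {s: sum(p for (_, t), p in joint.items() if t == s) for s in "CD"}
    return sum(p_state[s] * payoff[act][s] for s in "CD")

def jeffrey_eu(act):
    """Weight each state by its probability conditional on Max's act."""
    p_act = sum(p for (a, _), p in joint.items() if a == act)
    return sum((joint[(act, s)] / p_act) * payoff[act][s] for s in "CD")

print(savage_eu("C"), savage_eu("D"))    # 0.45, 0.8  -> defection maximizes
print(jeffrey_eu("C"), jeffrey_eu("D"))  # 0.81, 0.64 -> co-operation maximizes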
3. Differential Reproduction: Replicator Dynamics and Evolutionarily Stable Strategies The basic logic of differential reproduction is largely captured by a simplified dynamical model, known as the replicator dynamics. Here are the simplifying assumptions. Individuals can have various alternative "strategies" or dispositions to act in certain ways in pairwise encounters. These strategies are genetically determined. Reproduction is asexual and individuals breed true. Each individual engages in one contest per generation, and plays its strategy. Payoffs are in terms of evolutionary fitness (expected number of offspring). The payoff for an individual playing strategy Ai against one playing strategy Aj will be written as U(Ai | Aj). The population is very large (effectively infinite). Individuals are paired at random. Let us write p(Ai) for the proportion of the population playing strategy Ai. This is also the probability that an individual playing Ai is selected in a random selection from the population. Then, under the foregoing assumptions, the expected fitness for an individual playing Ai is gotten by averaging over all the strategies that Ai may be played against: U(Ai) = Σj p(Aj) U(Ai | Aj). The average fitness of the population Ū is obtained by averaging over all strategies: Ū = Σi p(Ai) U(Ai). If the population is large enough, then the expected number of offspring to individuals playing strategy Ai, U(Ai), is with high probability close to the actual number of offspring. It is assumed that the population is large enough that a useful approximation can be obtained by studying the deterministic map which identifies the expected number of offspring to individuals playing a strategy with the actual number of offspring (for a careful discussion of this reasoning see Boylan 1992). Under this assumption the proportion of the population playing a strategy in the next generation, p', is equal to:
p'(Ai) = p(Ai) U(Ai) / Ū
That is to say, considered as a dynamical system with discrete time, the population evolves according to the difference equation:
p'(Ai) - p(Ai) = p(Ai) [U(Ai) - Ū] / Ū
If the time between generations is small, this may be approximated by a continuous dynamical system governed by the differential equation:
dp(Ai)/dt = p(Ai) [U(Ai) - Ū] / Ū
Providing the average fitness of the population is positive, the orbits of this differential equation on the simplex of population proportions for various strategies are the same as those of the simpler differential equation:
dp(Ai)/dt = p(Ai) [U(Ai) - Ū]
although the velocity along the orbits may differ.1 This latter equation was introduced by Taylor and Jonker (1978). It was later studied by Zeeman (1980), Bomze (1986), Hofbauer and Sigmund (1988), and Nachbar (1990). Schuster and Sigmund (1983) find it at various levels of biological dynamics and call it the replicator dynamics. A dynamic equilibrium is a fixed point of the dynamics under consideration. In the case of discrete time, it is a point, x, of the state space which the dynamics maps onto itself. For continuous time, it is a state, x = (x1, ..., xi, ...), such that dxi/dt = 0 for all i. An equilibrium x is stable if points near to it remain near to it. More precisely, x is stable if for every neighbourhood V of x, there is a neighbourhood V' of x such that if the state y is in V' at time t = 0, it remains in V for all time t > 0. An equilibrium, x, is strongly stable (or asymptotically stable) if nearby points tend towards it. That is, to the definition of stability we add the clause that the limit as t goes to infinity of y(t) is x. The states of interest to us are vectors of population proportions. We treat these formally as probabilities. Since these must add to one, the state space is a probability simplex. We will say that an equilibrium is globally stable in the replicator dynamics if it is the dynamical limit as time goes to infinity of every point in the interior of the state space. Taylor and Jonker introduced the replicator dynamics to provide a dynamical foundation for an equilibrium notion introduced by Maynard Smith and Price (1973), that of an evolutionarily stable strategy. The informal idea is that if all members of the population adopt an evolutionarily stable strategy then no mutant can invade. Maynard Smith and Parker (1976) propose the following formal realization of this idea: Strategy x is evolutionarily stable just in case U(x | x) > U(y | x), or U(x | x) = U(y | x) and U(x | y) > U(y | y), for all y different from x. Equivalently, x is evolutionarily stable if: (1) U(x | x) ≥ U(y | x)
(2) If U(x | x) = U(y | x), then U(x | y) > U(y | y)
Maynard Smith and Parker had in mind a set of strategies including all randomized strategies that can be made from members of the set. In
the Taylor-Jonker framework individuals play pure (non-random) strategies, and the place of randomized strategies is taken by mixed or polymorphic states of the population, where different proportions of the population play different pure strategies. The mathematics remains the same, but the interpretation changes - we must think of mixed states of the population that satisfy the definition as evolutionarily stable states. When this is done, the notion of an evolutionarily stable state is stronger than that of a strongly stable equilibrium point in the replicator dynamics. Taylor and Jonker show that every evolutionarily stable state is a strongly stable equilibrium point in the replicator dynamics, but give an example where a mixed state is a strongly stable equilibrium point, but not an evolutionarily stable state. This dynamics exhibits close connections with individual rational decision theory and with the theory of games. I will give a sketch of the most important parts of the picture - but must refer the reader to the literature for a full exposition: Taylor and Jonker, Zeeman, Nachbar, van Damme, Bomze, Friedman, Hofbauer and Sigmund. First, notice that if we think of evolutionary fitness as utility and population proportion as probability, then the formula for expected fitness of an individual playing a strategy is the same as that for the Savage expected utility of an act. In a two-person, finite, non-co-operative, normal form game there are a finite number of players and each player has a finite number of possible strategies. Each possible combination of strategies determines the payoffs for each of the players. (The games are to be thought of as non-co-operative. There is no communication or precommitment before the players make their choices.) A specification of the number of strategies for each of the two players and the payoff function determines the game. A Nash equilibrium of the game is a strategy combination such that no player does better on any unilateral deviation. We extend players' possible acts to include randomized choices at specified probabilities over the originally available acts. The new randomized acts are called mixed strategies, and the original acts are called pure strategies. The payoffs for mixed strategies are defined as their expected values using the probabilities in the mixed acts to define the expectation. We will assume that mixed acts are always available. Then every finite non-co-operative normal form game has a Nash equilibrium. For any evolutionary game given by a fitness matrix, there is a corresponding, symmetric, two-person, non-co-operative game. (It is symmetric because the payoff for one strategy played against another is the same if row plays the first and column plays the second, or conversely. The identity of the players doesn't matter.) If x is an evolutionarily stable state, then (x, x) is - by condition 1 above - a symmetric
Nash equilibrium of that two-person, non-co-operative game. Condition 2 adds a kind of stability requirement. If (x, x) is a Nash equilibrium of the two-person game, then x is a dynamic equilibrium of the replicator dynamics (but not conversely). If x is a stable dynamic equilibrium of the replicator dynamics, then (x, x) is a Nash equilibrium of the two-person game (but not conversely.) (For details and proofs see van Damme (1987).) The foregoing model motivating the replicator dynamics relies on many simplifying assumptions and idealizations which might profitably be questioned. Here, however, we will focus on the assumption of random pairing. There is no mechanism for random pairing in nature and ample reason to believe that pairing is often not random (Hamilton 1964). Random pairing gets one a certain mathematical simplicity and striking connections with the Nash equilibrium concept of the von Neumann and Morgenstern theory of games, but a theory which can accommodate all kinds of non-random pairing would be a more adequate framework for realistic models. How should we formulate such a general theory?
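Before turning to correlation, the discrete dynamics of this section can be summarized in a short sketch. The following Python fragment is our own illustration (not Skyrms' code); it iterates the update p'(Ai) = p(Ai)U(Ai)/Ū for the one-shot Prisoner's Dilemma fitnesses used earlier and exhibits the convergence to all-defectors under random pairing.

import numpy as np

# Discrete replicator update under random pairing (a sketch of the model above).
# fitness[i][j] = U(A_i | A_j), the payoff to strategy i against strategy j.
def replicator_step(p, fitness):
    fit = fitness @ p                # U(A_i) = sum_j p(A_j) U(A_i | A_j)
    avg = p @ fit                    # average fitness of the population
    return p * fit / avg             # p'(A_i) = p(A_i) U(A_i) / average

pd = np.array([[0.9, 0.0],           # one-shot PD fitnesses: C vs C, C vs D
               [1.0, 0.6]])          #                        D vs C, D vs D

p = np.array([0.99, 0.01])           # start with 99% co-operators
for _ in range(500):
    p = replicator_step(p, pd)
print(p)                             # approaches [0, 1]: defection goes to fixation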
4. Darwin Meets The Logic of Decision Let us retain the model of the previous section with the single modification that pairing is not random. Non-random pairing might occur because individuals using the same strategies tend to live together, or because individuals using different strategies present some sensory cue that affects pairing, or for other reasons. We would like to have a framework general enough to accommodate all kinds of non-random pairing. Then the characterization of a state of the biological system must specify conditional proportions, p(Aj | Ai), consistent with the population proportions - which give the proportion of individuals using strategy Ai which will interact with individuals using strategy Aj. Now the expected fitness for an individual playing Ai is obtained by averaging over all the strategies that Ai may be played against, using the conditional proportions rather than the unconditional proportions as weights of the average: U(Ai) = Σj p(Aj | Ai) U(Ai | Aj). Formally, this is just Jeffrey's move from Savage expected utility to Jeffrey expected utility. The average fitness of the population is obtained by averaging over the strategies using the proportions of the population playing them as weights: Ū = Σi p(Ai) U(Ai). This is just the Jeffrey expected utility of the status quo. The replicator dynamics then goes exactly as before with the proviso that utility be read as Jeffrey expected utility calculated according to the conditional pairing proportions. There is more to the dynamics if the conditional pairing proportions are not fixed, but are themselves subject to dynamical evolution. This
will often be the case in realistic models, and in certain cases may be forced upon us by the requirement that the pairing proportions be consistent with the population proportions. To take an extreme case, suppose that there are two strategies initially represented in equal proportions in the population and suppose that there is a strong tendency for each strategy to be paired with the other. If the fitnesses are such that strategy one flourishes and strategy two is driven towards extinction, the strong anticorrelation cannot be maintained because there are not enough strategy two individuals to pair with all the strategy one players at a given point in time. There are no such consistency problems in maintaining strong, positive correlations between strategies in two-strategy games. In the case just described each strategy could almost always be paired with itself. However, the specific biological motivation for correlation could very well motivate a dynamical evolution of the conditional pairing proportions in this case as well. What are the relevant notions of equilibrium and stable equilibrium for pure strategies in correlated evolutionary game theory? Every pure strategy is a dynamical equilibrium in the replicator dynamics because its potential competitors have zero population proportion. The formal definition of an evolutionarily stable strategy introduced by Maynard Smith and Parker, and discussed in the previous section, only makes sense in the context of the random pairing assumption. It does not take correlation into account. For example, according to the definition of Maynard Smith and Price, Defect is the unique evolutionarily stable strategy in the Prisoner's Dilemma. But with sufficiently high correlation co-operators could invade a population of defectors. We want a stability concept that gives correlation its due weight, and which applies in the general case when the conditional pairing proportions are not fixed during the dynamical evolution of the population. For such a notion we return to Richard Jeffrey's concept of ratifiability. Transposing Jeffrey's idea to this context, we want to say that a pure strategy is ratifiable if it maximizes expected fitness when it is on the brink of fixation. (The population is at a state of fixation of strategy A when 100% of the population uses strategy A.) This would be to say that there is some neighbourhood of the state of fixation of the strategy such that the strategy maximizes expected utility in that state (where the state of the system is specified in the model so as to determine both the population proportions and the conditional pairing proportions). Let us restrict our attention for the moment to models where the conditional pairing proportions are functions of the population proportions, so that the population proportions specify the state of the system and the replicator dynamics specifies a complete dynamics for the system.
Since we are interested in strong stability, the natural concept to consider is that of strict ratifiability. Let x be a vector of population proportions specifying the state of the system; let a be the state of the system which gives pure strategy, A, probability one; let Ux(B) be the expected fitness of B when the system is in state x, and Ūx be the average fitness of the population in state x. Then a pure strategy, A, is strictly ratifiable if for all pure strategies, B, different from A:
Ux(A) > Ux(B)
for all x ≠ a in some neighbourhood2 of a (the point of fixation of A). There is, however, reason to explore a weaker variation on the general theme of ratifiability. Here we ask only that the expected fitness of A is higher than the average fitness of the population throughout a neighbourhood of the point of fixation of A. What is required to hold throughout the neighbourhood is not that A is optimal but only that A is adaptive, that A is better than the status quo. I will call this concept adaptive ratifiability. A pure strategy, A, is adaptive-ratifiable if:
Ux(A) > Ūx
for all x ≠ a in some neighbourhood of a (the point of fixation of A). It is obvious that strict ratifiability entails adaptive ratifiability since the average population fitness, Ūx, is an average of the fitnesses of the pure strategies, Ux(Bj). For an example that shows that adaptive ratifiability does not entail strict ratifiability, consider the following fitness matrix together with the assumption of random pairing:

                 Strategy 1   Strategy 2   Strategy 3
Strategy 1            3            3            3
Strategy 2            3            0            4
Strategy 3            3            4            0
Then strategy 1 is not strictly ratifiable, because wherever p(S2) / p(S3) > 3, strategy 3 has higher fitness than strategy 1, and wherever p(S3) / p(S2) > 3, strategy 2 has greater fitness than strategy 1. Strategies 2 and 3 each prosper when rare relative to the other, but the rare strategy cannot make as great an impact on the average fitness of the population as the other strategy which cannot prosper. In fact, the average fitness of the population is at its unique maximum at the point of fixation of strategy 1. Strategy 1 is therefore adaptive-ratifiable.
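A quick numerical check of this example can be run as follows; the sketch is our own illustration, with random pairing as assumed above.

import numpy as np

# Near fixation of strategy 1 it always beats the population average (adaptive
# ratifiability), even though it is not always optimal (no strict ratifiability).
fitness = np.array([[3, 3, 3],
                    [3, 0, 4],
                    [3, 4, 0]], dtype=float)

rng = np.random.default_rng(0)
for _ in range(10_000):
    eps = rng.uniform(1e-6, 0.01, size=2)       # small shares of strategies 2 and 3
    x = np.array([1 - eps.sum(), eps[0], eps[1]])
    fit = fitness @ x                           # U_x(.) under random pairing
    assert fit[0] > x @ fit                     # strategy 1 beats the average

# But strict ratifiability fails: where p(S2)/p(S3) > 3, strategy 3 does better.
x = np.array([0.98, 0.019, 0.001])
fit = fitness @ x
print(fit[0], fit[2])                           # 3.0 versus 3.016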
We can extend the concept of adaptive-ratifiability from pure strategies to mixed states of the population. If p is the vector of population proportions, then Ūp = Σi p(Ai) U(Ai) is the average fitness of the population in mixed state p. Let x be another vector of population proportions, and consider Ux(p) = Σi p(Ai) Ux(Ai). This quantity is what the average population fitness would be if the expected fitness of each pure strategy, Ai, were determined by vector x, but the average fitness were determined by vector p. (Alternatively, it could be thought of as the payoff to a mutant playing a true mixed strategy, p, in a population in state x.) Then we can say that p is an adaptively ratifiable state if:
Ux(p) > Ux(x)
for all x ≠ p in some neighbourhood of p. Two facts already known from the analysis of conventional evolutionary game theory show that adaptive ratifiability has a central role to play in correlated evolutionary game theory. The first is that adaptive ratifiability generalizes the evolutionarily stable strategies of Maynard Smith and Parker: In evolutionary game theory with random pairing, a state is Evolutionarily Stable if and only if it is Adaptive-Ratifiable (van Damme 1987, Th. 9.2.8). The second is that Adaptive Ratifiability guarantees strong stability in the replicator dynamics: If a pure strategy is Adaptive-Ratifiable, then it is an attracting equilibrium in the replicator dynamics (van Damme 1987, Th. 9.4.8). Thus, adaptive ratifiability captures the leading idea of Maynard Smith and Parker in a correlated setting. We have seen that three characteristic features of Jeffrey's discussion of rational decision - Jeffrey Expected Utility, Expected Utility of the Status Quo, and Ratifiability - all have essential parts to play in correlated evolutionary game theory.
5. Some Simple Examples Example 1 Suppose that the fitnesses for pairwise encounters are given by the payoff matrix for the Prisoner's Dilemma played by Max and Moritz. (I should emphasize that these are one-shot Prisoner's Dilemma games - not the indefinitely repeated Prisoner's Dilemma games
widely discussed in the literature, and defection is the unique evolutionarily stable strategy, as defined by Maynard Smith.) Starting from any mixed population, the replicator dynamics with random pairing converges to a population of 100% defectors. Now consider the extreme case of Prisoner's Dilemma with a clone; individuals are paired with like-minded individuals with perfect correlation. The conditional proportions are p(C | C) = p(D | D) = 1 and p(C | D) = p(D | C) = 0 and remain fixed at these values during the evolution of the system. With perfect correlation the expected fitness for a co-operator is 0.9 and that of a defector is 0.6. The pure strategy of co-operation is a strongly stable equilibrium in the replicator dynamics, and that dynamics carries any initial population with some positive proportion of co-operators to a population with 100% co-operators. With this kind of correlation Co-operation rather than Defection is adaptively ratifiable. This example shows in the simplest way how difficulties for Jeffrey expected utility in the theory of rational choice become strengths in the context of correlated evolutionary game theory.
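The correlated version of the dynamics is easy to simulate. The following Python fragment is a minimal sketch of our own, in which fitnesses are Jeffrey-style expectations taken with the conditional pairing proportions; with the perfect correlation of Example 1, co-operation goes to fixation.

import numpy as np

# cond[i][j] = p(A_j | A_i), the conditional pairing proportions; here they are
# held fixed, as in Example 1.
def correlated_replicator_step(p, fitness, cond):
    fit = (cond * fitness).sum(axis=1)   # U(A_i) = sum_j p(A_j | A_i) U(A_i | A_j)
    avg = p @ fit                        # Jeffrey utility of the status quo
    return p * fit / avg

pd = np.array([[0.9, 0.0],               # one-shot PD fitnesses from the text
               [1.0, 0.6]])
perfect = np.eye(2)                      # clones: p(C | C) = p(D | D) = 1

p = np.array([0.01, 0.99])               # just 1% co-operators to start
for _ in range(500):
    p = correlated_replicator_step(p, pd, perfect)
print(p)                                 # approaches [1, 0]: co-operation takes over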
Example 2 Correlation will usually not be perfect and the relevant conditional probabilities may depend on population proportions. The specifics depend on how correlation is supposed to arise. Correlation may be established by some sort of sensory detection. For instance, co-operators and defectors might emit different chemical markers. Suppose correlation arises as follows. At each moment there is a two-stage process. First, individuals are randomly paired from the population. If a co-operator detects another co-operator they interact. If not, there is no interaction, for we assume here that defectors wish to avoid each other as much as co-operators wish to avoid them. Then the members of the population that did not pair on the first try are paired at random among themselves; they give up on detection and interact with whomever they are paired. We assume here that detection accuracy is perfect, so that imperfect correlation among co-operators is due entirely to the possibility of initial failure to meet with a like-minded individual. (This assumption would obviously be relaxed in a more realistic model, as would the assumption that individuals would simply give up on detection after just one try.) The conditional probabilities that arise from this two-stage process then depend on population frequencies as follows (writing p(C) for the proportion of co-operators):
p(C | C) = 2p(C)/(1 + p(C))    p(D | C) = (1 - p(C))/(1 + p(C))
p(C | D) = p(C)/(1 + p(C))     p(D | D) = 1/(1 + p(C))
Using the payoffs for Prisoner's Dilemma of Section 2 we get expected fitnesses (= Jeffrey Utilities) of:
U(C) = (0.9) 2p(C)/(1 + p(C)) = 1.8p(C)/(1 + p(C))
U(D) = (1) p(C)/(1 + p(C)) + (0.6) 1/(1 + p(C)) = (p(C) + 0.6)/(1 + p(C))
Figure 1: Expected fitnesses of co-operation and defection.
In Figure 1 the expected fitnesses of co-operation and defection are graphed as a function of the proportion of co-operators in the population. In a population composed of almost all defectors, hardly anyone pairs on the first stage and almost all co-operators end up pairing with defectors, as do almost all defectors. The limiting expected fitnesses as defection goes to fixation are just those in the right column of the fitness matrix: U(D) = 0.6 and U(C) = 0.0. Defection is strictly ratifiable; a population composed entirely of defectors is strongly stable in the replicator dynamics. However, defection is not the only strictly ratifiable pure strategy. Co-operation qualifies as well. As the population approaches 100% co-operators, co-operators almost always pair with co-operators at the first
stage. Defectors can pair randomly with those left at the second stage, but there are not many co-operators left. The result is that the expected fitness of co-operation exceeds that of defection. There is an unstable mixed equilibrium where the fitness curves cross at p(C) = 0.75. This example illustrates a general technique of obtaining correlated pairing by superimposing some kind of a "filter" on a random pairing model. It also shows that there is nothing especially pathological about multiple strictly ratifiable strategies in evolutionary game theory.
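The crossing point reported above can be checked directly; the following sketch (our own, using the two-stage pairing probabilities given earlier) evaluates the two fitness curves.

# The fitness curves of Example 2: they cross at p(C) = 0.75.
def fitnesses(pc):
    p_cc = 2 * pc / (1 + pc)              # p(C | C)
    p_cd = pc / (1 + pc)                  # p(C | D)
    u_c = 0.9 * p_cc                      # co-operator: 0.9 against C, 0 against D
    u_d = 1.0 * p_cd + 0.6 * (1 - p_cd)   # defector: 1 against C, 0.6 against D
    return u_c, u_d

for pc in (0.1, 0.5, 0.75, 0.9):
    print(pc, fitnesses(pc))
# Below p(C) = 0.75 defection does better; above it co-operation does better;
# at 0.75 both equal 1.35/1.75, roughly 0.77.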
Example 3 For an example of a game with no adaptively ratifiable pure strategies in essentially the same framework, consider the fitness matrix shown below and the same model of frequency-dependent correlation except that individuals try to pair with individuals of the other type at the first stage of the pairing process. Here, strategy 1 does better in a population composed mostly of individuals following strategy 2, and strategy 2 does better in a population composed of individuals following mostly strategy 1. The replicator dynamics carries the system to a stable state where half the population plays strategy 1 and half the population plays strategy 2. This is the same polymorphism that one would get in the absence of correlation, but here both strategies derive a greater payoff in the correlated polymorphic equilibrium (U(S1) = U(S2) = ¾) than in the uncorrelated one (U(S1) = U(S2) = ½).

Fitness          Strategy 1   Strategy 2
Strategy 1            0            1
Strategy 2            1            0
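The payoff comparison at the 50/50 state can be worked out directly; this small sketch is our own illustration of the two-stage anti-correlated pairing just described, and it shows the correlated payoff of ¾ against ½ without correlation.

# At the 50/50 polymorphism a type-1 individual meets the other type at the
# first stage with probability 1/2; otherwise it is re-paired at random among
# the leftovers, where by symmetry half the pool is of the other type.
p_other = 0.5 + 0.5 * 0.5                         # = 0.75
u_correlated = p_other * 1 + (1 - p_other) * 0    # = 0.75, the correlated payoff
u_random = 0.5 * 1 + 0.5 * 0                      # = 0.5 without correlation
print(u_correlated, u_random)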
Example 4 This example takes us somewhat outside the framework of the previous examples. The population is finite, the dynamics is discrete, and the population proportions are not sufficient to specify the state of the system. As Hamilton (1964) emphasizes, correlated interactions may take place in the absence of detection or signals when like individuals cluster together spatially. Hamilton discusses non-dispersive or "viscous" populations where individuals living together are more likely to be related. In replicator models relatedness is an all-or-nothing affair and the effects of viscosity can be striking. For the simplest possible spatial example, we let space be one dimensional. A large, fixed, finite number of individuals are arranged
in a row. Each, except those on the ends, has two neighbours. Suppose that in each time period each individual plays a Prisoner's Dilemma with each of its neighbours and receives the average of the payoffs of these games. We assume that like individuals cluster, so that a group expands or contracts around the periphery. The population proportions will be governed by the discrete replicator dynamics (with roundoff), and the expansion or contraction of a connected group of like individuals will be determined by the fitnesses of members of that group. The state of the system here depends not only on the population frequency but also on the spatial configuration of individuals playing various strategies. If we introduce a single co-operator in a space otherwise populated by defectors, the co-operator interacts only with defectors and is eliminated. Scattered isolated co-operators or groups of two are also eliminated. Defection is strongly stable in a sense appropriate for this discrete system. However, if a colony of four contiguous co-operators is introduced in the middle of the space (or three at an end of the space) co-operators will have a higher average fitness than defectors and will increase. Co-operation, however, will not go to fixation. The hypothetical last defector would interact only with co-operators and so will have fitness higher than the average fitness of the co-operators. Defectors cannot be completely eliminated. They will persist as predators on the periphery of the community of co-operators. Co-operation fails to be stable. Even though defection is the unique stable pure strategy in this example, many possible initial states of the system will be carried to states that include both co-operators and defectors. These simple models should give some indication of the importance of correlation in evolutionary settings and of the striking differences in outcomes which it is capable of producing. A variety of other models which incorporate correlation in one way or another, and which can be accommodated within the framework of correlated evolutionary game theory, can be found in the biological, economic and philosophical literature. Some pointers to this literature will be given in Section 8. 6. Correlation in Evolutionary and Economic Game Theory In the absence of correlation there is almost coincidence between the Nash equilibrium of the rational players of classical economic game theory and the equilibria of the unconscious adaptive processes of evolutionary game theory. For every evolutionary game, there is a corresponding, two-player, non-zero-sum von Neumann-Morgenstern game. We cannot quite say that p is an equilibrium of the replicator dynamics for the evolutionary game if and only if (p, p) is a Nash equilibrium
of the von Neumann-Morgenstern game, because of the fact already mentioned that any unmixed population (pure strategy) is an equilibrium of the replicator dynamics.3 But we can say that if (p, p) is a Nash equilibrium of the corresponding two-person game, then p is an equilibrium of the replicator dynamics. And if p is a stable equilibrium of the replicator dynamics then (p, p) is a Nash equilibrium of the two-person game. For more information on the relation of refinements of the equilibrium concepts in the two settings, see Bomze (1986), Friedman (1991), Nachbar (1990), and van Damme (1987). On the other hand, replicator dynamics need not even converge to an equilibrium or a cycle. For a discussion of chaotic dynamics in four-strategy evolutionary games, see Skyrms (1992) and (1993). In both evolutionary and economic game theory the independence assumptions of the classical theory are an unrealistic technical convenience. However, the introduction of correlation leads the two theories in quite different directions. In the game theory of von Neumann and Morgenstern and Nash, the choice of a mixed strategy is thought of as turning the choice of one's pure act over to some objective randomizing device. The player's choice is then just the choice of the probabilities of the randomizing device; for example, the choice of the bias of a coin to flip. The randomizing devices of different players are assumed to be statistically independent. The introduction of mixed strategies has the pleasant mathematical consequence of making a player's space of strategies convex and assuring the existence of equilibria in finite games. From a strategic point of view, the importance of the coin flip is that it pegs the degrees of belief of other players who know the mixed act chosen. If all players know the mixed acts chosen by other players, use these probabilities together with independence to generate degrees of belief about what all the others will do, and each player's mixed act maximizes (Savage) expected utility by these lights, then the players are at a Nash equilibrium. This picture may seem unduly restrictive. Why could there not be some commonly known correlation between the individual players' randomizing devices? Players, in fact, might all benefit from using such a joint randomizing device. Or, to take a more radical line, if the only strategic importance of the randomizing devices is to peg other players' degrees of belief, why not dispense with the metaphor of flipping a coin and define equilibrium directly at the level of belief? From this perspective, the assumption of independence appears even more artificial. These lines of thought were introduced and explored in a seminal paper by Aumann (1974). Aumann introduced the notion of a correlated equilibrium. To picture this, think of a joint randomizing device which sends each player a
signal as to which of her pure acts will be performed. This gives probabilities over each player's pure acts, but these probabilities may be correlated. Such a device represents a joint correlated strategy. Let us assume that all players know the joint probabilities generated by the device, but that when the signal goes out each player observes only her own signal and bases her degrees of belief about what the other players' pure acts will be on the probabilities conditional on this signal pegged by the joint randomizing device. If, under these assumptions, players have no regrets - that is to say, each player maximizes (Savage 1954) expected utility - then the joint correlated strategy is a correlated equilibrium. (Aumann (1987) showed how the notion can be subjectivised and viewed as a consequence of common knowledge of Bayesian rationality together with a common prior, where Bayesian rationality is taken as ex post maximization of Savage expected utility.)

Notice that the definition of a correlated equilibrium involves a kind of weak ratifiability concept. If players are at a correlated equilibrium, then each player's act will maximize expected utility for that player after the player is given the information that the act was selected by the joint randomizing device. In this sense, players only play ratifiable strategies. There is, however, a crucial difference between this ratifiability concept and the evolutionary one. That is, in an Aumann correlated equilibrium the relevant ratifiability concept is defined relative to Savage expected utility, while in the context of correlated evolutionary game theory the relevant ratifiability concept is defined relative to Jeffrey expected utility.

Two examples will serve to illustrate what can and cannot be a correlated equilibrium. Consider the following two-person game, where row's payoffs are listed first and column's second:
                 Strategy 1    Strategy 2
    Strategy 1   5,1           0,0
    Strategy 2   4,4           1,5
If we consider only uncorrelated Nash equilibria of the game, there are three. There are the pure equilibria where both players play strategy 1, and where both players play strategy 2. There is a mixed equilibrium where each player plays each strategy with equal probability. Given the assumption of independence, each pair of strategies is played with probability 1/4, and each player has an expected payoff of 2.5. Both players can do better than they do under this mixed strategy if they can play a joint correlated strategy. For example, they might flip a coin and both play strategy 1 if heads comes up, otherwise both play strategy 2.
This is a correlated equilibrium, which gives each player an expected payoff of 3. There is an even better correlated equilibrium where the joint correlated strategy chooses the strategy combinations (2,2), (1,1) and (2,1) with equal probability. Since each player is only informed of his own pure act, there is no incentive to deviate. For instance, if row is informed that he does strategy 2, he assigns equal probabilities to column doing strategies 1 and 2, and thus strategy 2 maximizes expected utility for him. In this correlated equilibrium, each player gets an expected payoff of 3 1/3. Correlated equilibrium does not help, however, with Prisoner's Dilemma:
                       Moritz Co-operates    Moritz Defects
    Max Co-operates    .9,.9                 0,1
    Max Defects        1,0                   .6,.6
Whatever the probability distribution of the joint correlated strategy, if Max is told to co-operate, co-operation will not maximize expected utility for him. This is a consequence of two facts: (1) defection strongly dominates co-operation - no matter whether Moritz co-operates or defects, it is better for Max to defect; and (2) the relevant expected utility is Savage expected utility rather than Jeffrey expected utility. There is only one correlated equilibrium in Prisoner's Dilemma, and that is the pure strategy combination (Defect, Defect). However, as we saw in example 1 of Section 4, co-operation can be a strictly ratifiable and dynamically strongly stable strategy in correlated evolutionary game theory, providing that the correlation of interactions is favourable enough. This example shows how wide the gap is between the effects of correlation in evolutionary game theory and in economic game theory. This is not to say that Aumann's sort of correlated equilibrium may not also have a part to play in evolutionary game theory, but only that the kind of correlation introduced by non-random pairing is quite different.
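By way of illustration, here is a minimal computational sketch (in Python; not part of the original text) of Aumann's no-regret test applied to the two games above. The payoff matrices are taken from the tables; everything else, including the function name, is illustrative.

```python
import numpy as np

# Payoffs for the first game above (row's payoff listed first, column's second).
U_row = np.array([[5.0, 0.0],
                  [4.0, 1.0]])
U_col = np.array([[1.0, 0.0],
                  [4.0, 5.0]])

def is_correlated_equilibrium(p, U_row, U_col, tol=1e-9):
    """Aumann's test: after seeing only its own signal, neither player
    can gain by deviating to a different pure act."""
    for i in range(p.shape[0]):                    # row player's signals
        if p[i].sum() > tol:
            belief = p[i] / p[i].sum()             # conditional belief about column
            if belief @ U_row[i] < max(belief @ U_row[k] for k in range(p.shape[0])) - tol:
                return False
    for j in range(p.shape[1]):                    # column player's signals
        if p[:, j].sum() > tol:
            belief = p[:, j] / p[:, j].sum()       # conditional belief about row
            if belief @ U_col[:, j] < max(belief @ U_col[:, k] for k in range(p.shape[1])) - tol:
                return False
    return True

# Joint device from the text: (1,1), (2,1), (2,2) each with probability 1/3.
device = np.array([[1/3, 0.0],
                   [1/3, 1/3]])
print(is_correlated_equilibrium(device, U_row, U_col))     # True
print((device * U_row).sum(), (device * U_col).sum())      # 10/3 for each player

# The Prisoner's Dilemma above: any device that ever tells Max to co-operate
# fails the test, so (Defect, Defect) is the only correlated equilibrium.
PD = np.array([[0.9, 0.0],
               [1.0, 0.6]])
print(is_correlated_equilibrium(np.array([[0.25, 0.25],
                                          [0.25, 0.25]]), PD, PD.T))  # False
print(is_correlated_equilibrium(np.array([[0.0, 0.0],
                                          [0.0, 1.0]]), PD, PD.T))    # True
```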
7. Efficiency in Evolutionary Games

The example of the last section generalizes. The Prisoner's Dilemma has captured the imaginations of philosophers and political theorists because it is a simple prototype of a general problem. Interacting individuals attempting to maximize their own payoffs may both end up worse off because of the nature of the interaction. Everyone would prefer being a co-operator in a society of co-operators to being a defector in a society of defectors. Universal co-operation makes everyone better off than
universal defection, but co-operation is neither an evolutionarily stable strategy of the Maynard Smith evolutionary game nor a Nash equilibrium of the associated two-person, non-co-operative game.

Let us consider an arbitrary evolutionary game, given by a fitness matrix, and say that a strategy, Si, is strictly efficient if in interaction with itself it has a higher fitness than any other strategy, Sj, in self-interaction: Uii > Ujj. Thus, if a strategy Si is strictly efficient, a population composed of individuals all playing Si will have greater average fitness than a population of individuals all playing another strategy, Sj. One version of the general problem of social philosophy in this setting is that the adaptive process of evolution may prevent the fixation of strictly efficient strategies, and indeed drive them to extinction.

One route to efficiency in evolutionary games that has attracted wide interest involves the consideration of repeated games. One can consider either an infinitely repeated series of games with discounted payoffs or, equivalently, an indefinitely repeated series of games with some constant probability of one more play as one moves along the series. In an evolutionary setting, each encounter between two individuals is assumed to consist of just such a series of repeated games. This approach has become widely known through the work of Axelrod and Hamilton on indefinitely repeated Prisoner's Dilemma. If the probability of one more play is high enough, Axelrod shows that the repeated game strategy of Tit-for-Tat, that is, initially co-operating and then doing what the other did the last time, is a Nash equilibrium. It has been shown quite generally by Fudenberg and Maskin (1986) that efficient outcomes of one-shot games are sustainable as Nash equilibria of repeated games. Tit-for-Tat is not, as is sometimes claimed, an evolutionarily stable strategy in the sense of Maynard Smith and Parker, since the strategy Always Co-operate does as well against Tit-for-Tat as Tit-for-Tat does against itself, and as well against itself as Tit-for-Tat does. The point generalizes to other repeated games. See Farrell and Ware (1988).

There are, however, two major difficulties with the repeated game approach to efficiency. One is that a wide variety of repeated game strategies - some quite inefficient - can be sustained in this way as equilibria in indefinitely repeated games. The second, more serious, difficulty is that the assumptions of the theorem never really apply. Individuals have some finite upper bound to their lifetimes and certainly a finite upper bound to the number of repetitions of a game with a given other individual. Under these conditions the relevant theorems fail. Tit-for-Tat, for example, is no longer even a Nash equilibrium.

The discussion of this paper suggests that there is another way to sustain efficiency. That is through correlation. Under the most favourable conditions of correlation, gratifying results follow immediately:
If there is a strictly efficient strategy and conditional pairing proportions are constant at p(Si|Si) = 1 for all i, then the strictly efficient strategy is strictly ratifiable and is globally stable in the replicator dynamics.4

Things are even slightly better than stated, since one will not quite need perfect correlation if the strategy in question is strictly efficient. The situation is not quite so simple and straightforward with respect to the efficiency of mixed or polymorphic populations. It is clear that correlation can enhance efficiency here in interesting ways. Consider a system with the following fitnesses:
                  Strategy 1    Strategy 2    Strategy 3
    Strategy 1    10            20            0
    Strategy 2    20            10            0
    Strategy 3    17            17            10
If the interactions between population members are uncorrelated, then a population consisting of equal proportions of strategy 1 and strategy 2 individuals has an average fitness of 15 and can be invaded by strategy 3 individuals, which have an average fitness of 17. Then the uncorrelated replicator dynamics will carry strategy 3 to fixation for an average fitness of 10. However, if we allow for correlated encounters, there is the possibility of an anticorrelated population equally divided between strategy 1 individuals and strategy 2 individuals with p(S1|S2) = p(S2|S1) = 1. This population has a fitness of 20, and cannot be invaded by strategy 3 individuals no matter what pairing proportions are specified conditional on being a strategy 3 individual. If we consider a small perturbation of the population in the direction of strategy 2, (0.5 - ε strategy 1, 0.5 + ε strategy 2), then there will not be enough strategy 1 players to maintain perfect anticorrelation. Assuming it is maintained insofar as is consistent with the population proportions, all of the strategy 1 players will interact with strategy 2 players, but a few of the strategy 2 players will have to interact with each other. This lowers the expected fitness of strategy 2 below that of strategy 1. In like manner, an excess of strategy 1 players lowers the expected fitness of strategy 1 below that of strategy 2. Thus, under the assumption that anticorrelation is maintained consistent with the population proportions, this efficient polymorphic population is strongly dynamically stable in the correlated replicator dynamics.

Efficiency in polymorphic populations is, however, not always so straightforward. An efficient polymorphic population may fail to be an
equilibrium in the correlated replicator dynamics, even assuming the most favourable correlation consistent with population proportions. We modify the foregoing example by enhancing the fitness of S2 played against S1:
                  Strategy 1    Strategy 2    Strategy 3
    Strategy 1    10            20            0
    Strategy 2    30            10            0
    Strategy 3    17            17            10
Now at a population equally divided between S1 and S2 with perfect anticorrelated interactions, the fitness of S2 is 30, that of S1 is 20, and the average fitness of the population is 25. But since the fitness of S2 is higher than that of S1, the correlated replicator dynamics causes the proportion of S2 individuals to increase. This means that there are not enough S1 individuals to pair with all S2s, so some S2s must pair with each other, and the expected fitness of S2 goes down, as before. These effects come into equilibrium in a population of 1/3 S1 and 2/3 S2. This polymorphic population is strongly stable in the correlated replicator dynamics, but its average fitness is only 20, whereas at the (1/2, 1/2) polymorphism the average fitness of the population is 25. Moreover, the (1/2, 1/2) polymorphic state Pareto dominates the (1/3, 2/3) state in the sense that S2 individuals have higher fitness in the former, while S1 individuals have equal fitness in both.

In summary, correlation completely transforms the question of efficiency in evolutionary game theory. With perfect self-correlation the replicator dynamics inexorably drives a strictly efficient strategy to fixation - even if that strategy is strongly dominated. With other types of correlation, efficient polymorphisms are possible which are not possible without correlation. However, the mere fact that correlation must be consistent with population proportions already circumscribes the situations in which the most favourable correlation can support efficient mixed populations. In more realistic cases, correlation will fall short of extreme values. (Why this is so raises the important question of the evolution of correlation mechanisms.) Nevertheless, the novel phenomena which stand out starkly in the extreme examples may also be found in more realistic ones.
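To make the (1/3, 2/3) equilibrium calculation concrete, here is a minimal numerical sketch (Python; not from the original text) of the correlated replicator dynamics for strategies 1 and 2 of the modified fitness matrix, with anticorrelation maintained as far as the population proportions allow. Strategy 3 is omitted since, as noted above, it cannot invade; the variable names are illustrative.

```python
U = [[10.0, 20.0],   # fitness of S1 against S1, S2 (modified matrix above)
     [30.0, 10.0]]   # fitness of S2 against S1, S2

def fitnesses(p1):
    """Expected fitnesses when anticorrelation is maximal given the
    proportions: every member of the rarer strategy meets the other type."""
    p2 = 1.0 - p1
    if p1 <= p2:
        w1 = U[0][1]                                          # all S1 meet S2
        w2 = (p1 / p2) * U[1][0] + ((p2 - p1) / p2) * U[1][1]
    else:
        w1 = (p2 / p1) * U[0][1] + ((p1 - p2) / p1) * U[0][0]
        w2 = U[1][0]                                          # all S2 meet S1
    return w1, w2

p1 = 0.5                                  # start at the (1/2, 1/2) polymorphism
for _ in range(2000):                     # discrete correlated replicator dynamics
    w1, w2 = fitnesses(p1)
    wbar = p1 * w1 + (1.0 - p1) * w2
    p1 = p1 * w1 / wbar
print(round(p1, 3), fitnesses(p1))        # ~0.333, with both fitnesses ~20
```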
8. Related Literature

There is a rich biological literature dealing with non-random interactions, largely initiated by the important work of Hamilton (1963, 1964, 1971) but going back at least to Wright (1921). Hamilton (1964) discusses both detection and location as factors which lead to correlated interactions. He already notes here and in (1963) that positive correlation is favourable to the evolution of altruism. This point is restated in Axelrod (1981) and (1984), and Axelrod and Hamilton (1981), where a scenario with high probability of interaction with relatives is advanced as a possible way for Tit-for-Tat to gain a foothold in a population of Always Defect. Fagen (1980) makes the point in a one-shot rather than a repeated game context. Hamilton (1971) develops models of assortative pairing (and dissortative pairing) in analogy to Wright's assortative mating. Eshel and Cavalli-Sforza (1982) further develop this theme with explicit calculation of expected fitnesses using conditional pairing probabilities. Michod and Sanderson (1985) and Sober (1992) point out that repeated game strategies in uncorrelated evolutionary game theory may be thought of as correlating devices with respect to the strategies in the constituent one-shot games. Extensive form games other than conventional repeated games could also play the role of correlating devices. Feldman and Thomas (1987) and Kitcher (1993) discuss various kinds of modified repeated games where the choice of whether to play again with the same partner - or, more generally, the probability of another repetition - depends on the last play. The basic idea is already in Hamilton: "Rather than continue in the jangling partnership, the disillusioned co-operator can part quietly from the selfish companion at the first clear sign of unfairness and try his luck in another union. The result would be some degree of assortative pairing" (Hamilton 1971, p. 65). Gauthier (1986) and Hirshleifer and Martinez Coll (1988) discuss perfect detection models. Robson (1990) considers selection of an efficient evolutionarily stable strategy in a repeated game context by introduction of a mutant who can send costless signals. This is done within the context of uncorrelated evolutionary game theory, with the signals inducing correlation in plays of the initial game embedded in the signalling game. The evolutionary selection of efficient equilibria in repeated games is also treated in Fudenberg and Maskin (1990) and Binmore and Samuelson (1992). Wilson (1980) discusses models where individuals interact within isolated subpopulations. Even if the subpopulations were generated by random sampling from the population as a whole and individuals pair at random within their subpopulations, the subpopulation structure can create correlation. (The basic idea is already in Wright 1945, p. 417.) Pollock (1989) explores consequences of correlation generated by Hamilton's population viscosity for the evolution of reciprocity, where players are located on a spatial lattice. Myerson, Pollock, and Swinkels (1991) develop a solution concept for evolutionary games based on taking a limit as Hamilton's population viscosity goes to zero. Nowak and May (1992, 1993) and Grim (1993) explore the effects of space in cellular automata models.
9. Conclusion

Correlated interactions are the norm in many biological situations. These may be a consequence of a tendency to interact with relatives (Hamilton's kin selection), of identification, discrimination and communication, of spatial location, or of strategies established in repeated game situations (the reciprocal altruism of Trivers 1971 and Axelrod and Hamilton 1981). The crucial step in modifying evolutionary game theory to take account of correlations is just to calculate expected fitness according to The Logic of Decision rather than The Foundations of Statistics. This means that strategies such as co-operation in one-shot Prisoner's Dilemma with a clone are converted to legitimate possibilities in correlated evolutionary game theory.

It is not in general true that evolutionary adaptive processes will lead the population to behave in accordance with the principles of economic game theory. The consonance of evolutionary and economic game theory only holds in the special case of independence. When correlation enters, the two theories part ways. Correlated evolution can even lead to fixation of a strongly dominated strategy. Correlation of interactions should continue to play a part, perhaps an even more important part, in the theory of cultural evolution (Boyd and Richerson 1985; Cavalli-Sforza and Feldman 1981; and Lumsden and Wilson 1981). If so, then the special characteristics of correlation in evolutionary game theory may be important for understanding the evolution of social norms and social institutions. Contexts which involve both social institutions and strategic rational choice may call for the interaction of correlated evolutionary game theory with correlated economic game theory.

Positive correlation of strategies with themselves is favourable to the development of co-operation and efficiency. In the limiting model of perfect autocorrelation, evolutionary dynamics enforces a Darwinian version of Kant's categorical imperative: "Act only so that if others act likewise fitness is maximized." Strategies which violate this imperative are driven to extinction. If there is a unique strategy which obeys it, a strictly efficient strategy, then that strategy goes to fixation. In the real world correlation is never perfect, but positive correlation is not uncommon. The categorical imperative is weakened to a tendency, a very interesting tendency, for the evolution of strategies which violate principles of individual rational choice in pursuit of the common good. We can understand how Kropotkin was right: "... besides the law of Mutual Struggle there is in nature the law of Mutual Aid."5
Acknowledgments

There is a large amount of overlap between this paper and "Darwin meets The Logic of Decision: Correlation in Evolutionary Game Theory"
(Philosophy of Science 61[1994]: 503-28). Earlier versions of this paper were read at colloquia at the University of California at Berkeley, Stanford University, the Center for Advanced Study in the Behavioral Sciences, the University of Western Ontario Conference on Game Theory and the Evolution of Norms, the Vancouver Conference on Modeling Rational and Moral Agents, and the 1994 meetings of the Central Division of the American Philosophical Association. I would like to thank Francisco Ayala, Peter Danielson, Dan Dennett, John Dupre, Robert Frank, Alan Gibbard, John Harsanyi, Tamara Horowitz, Branden Fitelson, Bas van Fraassen, Peter Godfrey-Smith, Patrick Grim, Richard Jeffrey, Jim Joyce, Philip Kitcher, Paul Milgrom, Elliott Sober, Patrick Suppes, Peter Vanderschraaf, and audiences at the colloquia mentioned for discussion and suggestions. Remaining defects are the sole responsibility of the author. This paper was completed at the Center for Advanced Study in the Behavioral Sciences. I am grateful for financial support provided by the National Science Foundation, the Andrew Mellon Foundation, and the University of California President's Fellowship in the Humanities.
Notes

1 See van Damme (1987), Sec. 9.4. The equivalence would fail if we considered evolutionary games played between two different populations, because of differences in the average fitnesses of the two populations. The "Battle of the Sexes" game provides an example. See Maynard Smith (1982), Appendix J, and Hofbauer and Sigmund (1988), Part VII.

2 The neighborhood is in the topology determined by the Euclidean distance in the simplex of population proportions. If, contrary to our assumptions here, the conditional pairing proportions were not a function of the population proportions, we would have to consider two spaces rather than one.

3 This is because mutation is not explicitly part of the replicator dynamics, and if the initial population is unmixed there are no other strategies around to replicate. The desirable step of incorporating mutation into the model leads from the simple deterministic dynamics discussed here to a stochastic process model. See Foster and Young (1990). The framework for correlation used in this paper can also be applied to stochastic replicator dynamics.

4 If S is strictly efficient and the conditional pairing proportions give perfect self-correlation, then U(S) and U(S') are constant, with U(S) > U(S') for any S' different from S, throughout the space. Then, by definition, U(S) > Ū everywhere except at the point of fixation of S, and S is strictly ratifiable. Considering the replicator dynamics, since both [U(S) - Ū] and p(S) are positive throughout the interior of the space, the replicator dynamics makes dp(S)/dt positive throughout the interior. p(S) itself is a global Liapounov function. It assumes its unique maximum at the point of fixation of S and it is increasing along all orbits. It follows that the point of fixation of S is a globally
stable attractor in the replicator dynamics. (See Boyce and DiPrima 1977 or Hirsch and Smale 1974). 5 Kropotkin attributes the idea to Professor Kessler, Dean of St. Petersburg University who delivered a lecture entitled "On the Law of Mutual Aid" to the Russian Congress of Naturalists in January 1880. Cf. Kropotkin (1908), p. x.
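As a numerical check of the claim in note 4 (this sketch is mine, not part of the text): with the Prisoner's Dilemma payoffs of Section 6 and perfect self-correlation, the strongly dominated but strictly efficient strategy of co-operation goes to fixation under the replicator dynamics.

```python
U = [[0.9, 0.0],   # co-operate against (co-operate, defect), from the Section 6 table
     [1.0, 0.6]]   # defect against (co-operate, defect)

p = 0.001                                   # initial proportion of co-operators
for _ in range(500):
    wC, wD = U[0][0], U[1][1]               # with p(S_i|S_i) = 1, each type meets its own kind
    wbar = p * wC + (1.0 - p) * wD
    p = p * wC / wbar                       # discrete replicator dynamics
print(round(p, 6))                          # -> 1.0: co-operation goes to fixation
```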
References

Aumann, R. J. (1974). Subjectivity and correlation in randomized strategies. Journal of Mathematical Economics, 1: 67-96.
(1987). Correlated equilibrium as an expression of Bayesian rationality. Econometrica, 55: 1-18.
Axelrod, R. (1981). The emergence of cooperation among egoists. American Political Science Review, 75: 306-18.
(1984). The Evolution of Cooperation. New York: Basic Books.
Axelrod, R., and W. D. Hamilton (1981). The evolution of cooperation. Science, 211: 1390-96.
Binmore, K., and L. Samuelson (1992). Evolutionary stability in repeated games played by finite automata. Journal of Economic Theory, 57: 278-305.
Bomze, I. (1986). Non-cooperative two-person games in biology: A classification. International Journal of Game Theory, 15: 31-57.
Boyce, W. E., and R. C. DiPrima (1977). Elementary Differential Equations. 3rd ed. New York: Wiley.
Boyd, R., and J. P. Lorberbaum (1987). No pure strategy is evolutionarily stable in the repeated Prisoner's Dilemma game. Nature, 327: 58-59.
Boyd, R., and P. Richerson (1985). Culture and the Evolutionary Process. Chicago: University of Chicago Press.
Boylan, R. T. (1992). Laws of large numbers for dynamical systems with randomly matched individuals. Journal of Economic Theory, 57: 473-504.
Busch, Wilhelm (1865). Max und Moritz: Eine Bubengeschichte in sieben Streichen. München: Braun und Schneider.
Cavalli-Sforza, L. L., and M. Feldman (1981). Cultural Transmission and Evolution: A Quantitative Approach. Princeton: Princeton University Press.
Darwin, C. (1859). On the Origin of Species. London: Murray.
(1871). The Descent of Man and Selection in Relation to Sex. London: Murray.
Eells, E. (1982). Rational Decision and Causality. Cambridge: Cambridge University Press.
(1984). Metatickles and the dynamics of deliberation. Theory and Decision, 17: 71-95.
Eshel, I., and L. L. Cavalli-Sforza (1982). Assortment of encounters and the evolution of cooperativeness. Proceedings of the National Academy of Sciences, USA, 79: 1331-35.
Fagen, R. M. (1980). When doves conspire: Evolution of nondamaging fighting tactics in a nonrandom-encounter animal conflict model. American Naturalist, 115: 858-69.
Farrell, J., and R. Ware (1988). Evolutionary stability in the repeated Prisoner's Dilemma game. Theoretical Population Biology, 36: 161-66.
Feldman, M., and E. Thomas (1987). Behavior-dependent contexts for repeated plays of the Prisoner's Dilemma II: Dynamical aspects of the evolution of cooperation. Journal of Theoretical Biology, 128: 297-315.
Fisher, R. A. (1930). The Genetical Theory of Natural Selection. Oxford: Oxford University Press.
Foster, D., and P. Young (1990). Stochastic evolutionary game dynamics. Theoretical Population Biology, 38: 219-32.
Friedman, D. (1991). Evolutionary games in economics. Econometrica, 59: 637-66.
Fudenberg, D., and E. Maskin (1986). The folk theorem in repeated games with discounting and with complete information. Econometrica, 54: 533-54.
(1990). Evolution and cooperation in noisy repeated games. American Economic Review, 80: 274-79.
Gauthier, D. (1986). Morals by Agreement. Oxford: Oxford University Press.
Gibbard, A., and W. Harper (1981). Counterfactuals and two kinds of expected utility. In W. Harper, R. Stalnaker, and G. Pearce (eds.), Ifs (Dordrecht: Reidel), pp. 153-90.
Grim, P. (1993). Greater generosity favored in a spatialized Prisoner's Dilemma. Working paper, Department of Philosophy. Stony Brook: State University of New York.
Hamilton, W. D. (1963). The evolution of altruistic behavior. American Naturalist, 97: 354-56.
(1964). The genetical evolution of social behavior. Journal of Theoretical Biology, 7: 1-52.
(1971). Selection of selfish and altruistic behavior in some extreme models. In J. F. Eisenberg and W. S. Dillon (eds.), Man and Beast (Washington: Smithsonian Institution Press), pp. 59-91.
Harper, W., R. Stalnaker, and G. Pearce (1981). Ifs. Dordrecht: Reidel.
Hirsch, M. W., and S. Smale (1974). Differential Equations, Dynamical Systems and Linear Algebra. New York: Academic Press.
Hirshleifer, J., and J. C. Martinez Coll (1988). What strategies can support the evolutionary emergence of cooperation? Journal of Conflict Resolution, 32: 367-98.
Hofbauer, J., and K. Sigmund (1988). The Theory of Evolution and Dynamical Systems. Cambridge: Cambridge University Press.
Huxley, T. H. (1888). The struggle for existence and its bearing upon Man. Nineteenth Century, February: 161-80.
Jeffrey, R. (1965). The Logic of Decision. New York: McGraw-Hill; 2nd rev. ed., 1983, Chicago: University of Chicago Press.
Kitcher, P. (1993). The evolution of human altruism. The Journal of Philosophy, 90: 497-516.
Krebs, J. R., and N. B. Davies (1993). An Introduction to Behavioral Ecology. 3rd ed. London: Blackwell.
Kropotkin, P. (1908). Mutual Aid: A Factor of Evolution. London: Heinemann. The chapters were originally published in Nineteenth Century, September and November 1890, April 1891, January 1892, August and September 1894, and January and June 1896.
Lewis, D. (1979). Prisoner's Dilemma is a Newcomb problem. Philosophy and Public Affairs, 8: 235-40.
(1981). Causal Decision Theory. Australasian Journal of Philosophy, 58: 5-30.
Lumsden, C., and E. O. Wilson (1981). Genes, Mind and Culture. Cambridge, MA: Harvard University Press.
Marx, K. In Saul K. Padover (ed.), The Letters of Karl Marx (Englewood Cliffs, NJ: Prentice Hall, 1979).
Maynard Smith, J. (1982). Evolution and the Theory of Games. Cambridge: Cambridge University Press.
Maynard Smith, J., and G. R. Price (1973). The logic of animal conflict. Nature, 246: 15-18.
Maynard Smith, J., and G. A. Parker (1976). The logic of asymmetric contests. Animal Behaviour, 24: 159-75.
Michod, R., and M. Sanderson (1985). Behavioral structure and the evolution of cooperation. In P. J. Greenwood, P. H. Harvey, and M. Slatkin (eds.), Evolution: Essays in Honor of John Maynard Smith (Cambridge: Cambridge University Press), pp. 95-104.
Myerson, R. B., G. B. Pollock, and J. M. Swinkels (1991). Viscous population equilibria. Games and Economic Behavior, 3: 101-09.
Nachbar, J. (1990). "Evolutionary" selection dynamics in games: Convergence and limit properties. International Journal of Game Theory, 19: 59-89.
Nozick, R. (1969). Newcomb's problem and two principles of choice. In N. Rescher (ed.), Essays in Honor of C. G. Hempel (Dordrecht: Reidel), pp. 114-46.
Nowak, M. A., and R. M. May (1992). Evolutionary games and spatial chaos. Nature, 359: 826-29.
(1993). The spatial dilemmas of evolution. International Journal of Bifurcation and Chaos, 3: 35-78.
Pollock, G. B. (1989). Evolutionary stability in a viscous lattice. Social Networks, 11: 175-212.
Robson, A. (1990). Efficiency in evolutionary games: Darwin, Nash and the secret handshake. Journal of Theoretical Biology, 144: 379-96.
Savage, L. J. (1954). The Foundations of Statistics. New York: Wiley.
Schuster, P., and K. Sigmund (1983). Replicator dynamics. Journal of Theoretical Biology, 100: 535-38.
Skyrms, B. (1980). Causal Necessity. New Haven, CT: Yale University Press.
(1984). Pragmatics and Empiricism. New Haven, CT: Yale University Press.
(1990). The Dynamics of Rational Deliberation. Cambridge, MA: Harvard University Press.
(1990). Ratifiability and the logic of decision. In P. A. French et al. (eds.), Midwest Studies in Philosophy XV: The Philosophy of the Human Sciences (Notre Dame: University of Notre Dame Press), pp. 44-56.
(1992). Chaos in game dynamics. Journal of Logic, Language and Information, 1: 111-30.
(1993). Chaos and the explanatory significance of equilibrium: Strange attractors in evolutionary game dynamics. In D. Hull, M. Forbes, and K. Okruhlik (eds.), PSA 1992, vol. 2 (East Lansing, MI: Philosophy of Science Association), pp. 374-94.
Sober, E. (1992). The evolution of altruism: Correlation, cost and benefit. Biology and Philosophy, 7: 177-87.
Stalnaker, R. (1981). Letter to David Lewis. In W. Harper, R. Stalnaker, and G. Pearce (eds.), Ifs (Dordrecht: Reidel), pp. 151-52.
Taylor, P., and L. Jonker (1978). Evolutionarily stable strategies and game dynamics. Mathematical Biosciences, 40: 145-56.
Trivers, R. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46: 35-57.
van Damme, E. (1987). Stability and Perfection of Nash Equilibria. Berlin: Springer.
von Neumann, J., and O. Morgenstern (1947). Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.
Wilson, D. S. (1980). The Natural Selection of Populations and Communities. Menlo Park: Benjamin/Cummings.
Wright, S. (1921). Systems of mating, III: Assortative mating based on somatic resemblance. Genetics, 6: 144-61.
(1945). Tempo and mode in evolution: A critical review. Ecology, 26: 415-19.
Zeeman, E. C. (1980). Population dynamics from game theory. In Z. Nitecki and C. Robinson (eds.), Global Theory of Dynamical Systems, Lecture Notes in Mathematics 819 (Berlin: Springer Verlag), pp. 471-97.
18
Three Differences between Deliberation and Evolution
Elliott Sober
1. Introduction

The title of this paper may seem absurd. Why bother to write about differences between two processes that are so obviously different? Next thing you know, somebody will write a paper about the distinction between square roots and albatrosses. Deliberation is something done by an organism that has a mind. The organism considers a range of alternative actions and chooses the one that seems to best advance the organism's goals. Evolution, on the other hand, involves a population of organisms, who may or may not have minds. When the population evolves by natural selection, the organisms display different characteristics; the trait that evolves is the one that best advances the organism's chance of surviving and reproducing.1 Deliberation involves a change that occurs in an individual; evolution effects a change in the composition of a population, the individual members of which need never change their traits at all.

The outcome of evolution by natural selection is determined by the fitnesses of the traits that are present in the population. The outcome of rational deliberation is determined by the expected utilities of the actions that the agent considers performing. In natural selection, the fittest trait evolves; in rational deliberation, the act with the highest expected utility is the one that the agent chooses to perform. Of course, both these criteria involve simplifications; rightly or wrongly, we often decide to ignore factors, additional to the ones just cited, that influence the two processes.2 But the fact remains that in considering the process of natural selection and the process of rational deliberation, we use similar rules of thumb.

This is not to deny that fitness and utility are different. Fitness is an objective property of an organism; it has nothing to do with what the organism thinks. Utility, on the other hand, is a subjective quantity, which reflects how much the agent likes or dislikes a possible outcome. Mindless organisms do not have (subjective) utilities, but it remains
true that some traits are better than others as far as survival and reproduction are concerned. And for organisms such as ourselves who do have preferences, utility can be and often is orthogonal to survival and reproductive success.

Yet, in spite of these manifest differences, there seems to be an important isomorphism between the two processes. Selection and deliberation, understood in terms of the usual idealizations, are optimizing processes. Just as the (objectively) fittest trait evolves, so the (subjectively) best action gets performed.3 This isomorphism plays an important heuristic role in the way biologists think about the evolutionary process. When biologists consider which of an array of traits will evolve, they often ask themselves: If I were an organism in this population and I wanted to maximize fitness, which of these traits would I want to have? This way of thinking about evolution deploys what I will call the heuristic of personification: If natural selection controls which of traits T, A1, A2, ..., An evolves in a given population, then T will evolve, rather than the alternatives listed, if and only if a rational agent who wanted to maximize fitness would choose T over A1, A2, ..., An.

Often this heuristic is harmless. When running speed evolves in a population of zebras, we may ask ourselves whether we would want to be fast or slow, if we were zebras who wanted to survive and reproduce. We get the right answer by this line of questioning; since we would want to be fast, it follows that fast is a fitter trait than slow, which means that selection will lead the population to a configuration in which all the organisms are fast and none are slow.

In this paper, I'll explore three contexts in which this heuristic yields the wrong answer. They all come from game-theoretic discussions of altruism and the Prisoner's Dilemma. Whether it is applied to evolution or to rational deliberation, game theory models situations that involve frequency dependence. In the evolutionary case, how fit a trait is, and whether it is more or less fit than the alternatives, depends on the composition of the population (Maynard Smith 1982). In the case of rational deliberation, which act is best for the agent depends on what other actors are likely to do. As we now will see, frequency dependence can throw a monkey wrench into the convenient relationship between deliberation and evolution posited by the heuristic of personification.

2. Which Trait is Best for Me versus Which Trait Does Best on Average

Game theorists who discuss the Prisoner's Dilemma label the two actions "co-operate" and "defect." Evolutionists use the terms "altruism" and
"selfishness" instead. In a one-shot Prisoner's Dilemma, co-operating is bad for the actor, though it benefits the other player. This is the essence of evolutionary altruism, which is usually defined as an action that reduces the actor's fitness but benefits the other individual(s) in the group. For the sake of convenience, I'll use the evolutionary terminology throughout. But let us be clear that we are here describing an action's consequences for fitness or utility, not the psychological motives that produce it. Whether game theory is applied to evolution or to rational deliberation, behaviour and its attendant payoffs are ultimately what matter, not the proximate mechanisms (psychological or otherwise) that happen to produce the behaviour (Sober 1985). The payoffs to row in a one-shot Prisoner's Dilemma may be represented as follows: Altruist
                Altruist     Selfish
    Altruist    x + b - c    x - c
    Selfish     x + b        x
If you are paired with someone who is an altruist, you receive the benefit b from this person's actions. If you yourself are an altruist, you pay a cost of c when you help the other person with whom you are paired.4 What should a rational deliberator do in this circumstance? Given the payoffs displayed, a simple dominance argument shows that the selfish behaviour is better. No matter what the other person does, you are better off acting selfishly rather than altruistically:

(1) A rational deliberator in the one-shot Prisoner's Dilemma should be selfish if and only if c > 0.

Now let us discuss the evolutionary case. When will selfishness have the higher average fitness in a population in which pairs of individuals are each playing a one-shot Prisoner's Dilemma? Let Pr(A|S) represent the probability that one individual in a pair is an altruist, conditional on the other individual's being selfish. This, and the other conditional probabilities Pr(S|A), Pr(A|A), and Pr(S|S), allow one to describe whether individuals pair at random or tend to seek out individuals like (or unlike) themselves.5 We now may represent the fitnesses of the two behaviours in this population of pairs of individuals as follows:

w(Altruism) = (x + b - c)Pr(A|A) + (x - c)Pr(S|A)
w(Selfish) = (x + b)Pr(A|S) + (x)Pr(S|S).
This simplifies to the following criterion for the evolution of altruism:

(2) In a population of pairs of individuals playing one-shot Prisoner's Dilemma, selfishness is the fitter trait if and only if b[Pr(A|A) - Pr(A|S)] < c.

Notice that (1) and (2) state different quantitative criteria. In particular, c > 0 is not sufficient for selfishness to be the fitter trait in (2). For example, suppose that altruists tend to pair with altruists and selfish individuals tend to pair with selfish individuals. If like interacts with like, then Pr(A|A) - Pr(A|S) > 0. In this case, altruism can be the fitter trait even when the cost c is positive.6 So it is quite possible for altruists to be fitter than selfish individuals, even though each individual would do better by being selfish than by being altruistic. The advice you would give to an individual, based on (1), is to be selfish. However, this does not accurately predict which trait will be fitter when you average over the entire population. The simple rule of thumb we saw before in the zebra example does not apply. We get the wrong answer if we use the heuristic of personification. I would be better off being selfish in a one-shot Prisoner's Dilemma, but it does not follow that selfish individuals do better than altruists in a population of pairs of individuals playing a one-shot Prisoner's Dilemma.7

There is a special evolutionary circumstance in which the heuristic of personification must deliver the right advice. This occurs when players are not correlated. If Pr(A|A) - Pr(A|S) = 0, then criterion (2) and criterion (1) are equivalent (Eells 1982; Sober 1993; Skyrms 1994). However, with positive correlation between interacting individuals, natural selection and rational deliberation can part ways.

One half of this conclusion is more controversial than the other. It is not controversial that the fitness of a trait is an average over all the individuals who have the trait. The fact that altruists are less fit than selfish individuals in every pair in which both traits are present does not tell you which trait is fitter overall. The reason is that this fact about mixed pairs fails to take into account what is true in pairs that are homogeneous. Rather more controversial is what I have said in (1) about the decision problem. If the dominance principle is correct, selfishness is the rational act. But why buy the dominance principle? It is in conflict with some formulations of decision theory, as aficionados of the Newcomb problem well realize. I will not try to track this argument back to first principles, so perhaps my conclusion should be more conditional: if the dominance principle is a correct rule for rational deliberation, then the one-shot Prisoner's Dilemma provides a counter-example to the heuristic of personification.
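Here is a small numerical sketch of criteria (1) and (2) (Python; the particular numbers are invented for illustration): with positive correlation, altruism comes out fitter on the population average even though the dominance comparison still tells each individual to be selfish.

```python
x, b, c = 10.0, 3.0, 1.0          # baseline fitness, benefit, cost (hypothetical values)
PrAA, PrSA = 0.8, 0.2             # pairing probabilities conditional on being an altruist
PrAS, PrSS = 0.2, 0.8             # pairing probabilities conditional on being selfish

w_altruism = (x + b - c) * PrAA + (x - c) * PrSA
w_selfish  = (x + b) * PrAS + x * PrSS
print(w_altruism, w_selfish)      # about 11.4 versus 10.6: altruism is fitter

print(b * (PrAA - PrAS) < c)      # False: by criterion (2), selfishness is NOT the fitter trait
print(c > 0)                      # True: by criterion (1), each deliberator should still be selfish
```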
3. Backwards Inductions in Iterated Prisoner's Dilemmas of Known Finite Length

What strategy should a player choose when playing an iterated Prisoner's Dilemma of known finite length? Luce and Raiffa (1957, pp. 94-102) proposed a backwards induction - an unraveling argument - to show that both players should choose to defect (to be selfish) on every move: On the last move, it makes sense for both players to defect. On the next-to-last move, there is no reason for them to co-operate, so both choose to defect then too. By working from the end to the beginning, the conclusion is drawn that the players should defect on every move. Game theorists have mostly accepted the correctness of this argument, and have moved on to consider games in which the game's length is not fixed beforehand, but is a matter of chance (cf. e.g., Axelrod 1984). However, I want to argue that the backwards induction argument is invalid when formulated in a certain unconditional form. Without some set of qualifying assumptions, it reaches a conclusion about correct action that cannot be justified by strictly adhering to the policy of maximizing expected utility.

Let us begin by recalling an elementary fact about utility maximization. Consider a game in which there are two moves, X and Y. The payoffs to row are as follows:
         X    Y
    X    9    2
    Y    7    3
No dominance argument can establish whether X or Y is the better move. However, if one can assign a probability to what the column player will do, one then will be able to say whether X or Y is better. And even if no point value for this probability can be assigned, one could reach a decision provided one knew whether or not the probability that the column player will perform action X exceeds 1/3. However, with no information about this probability, no solution can be defended. After all, decision theory's criterion for action is maximizing expected utility; if neither action dominates the other, this theory cannot deliver a verdict when one is wholly ignorant of the probabilities involved. Of course, in this circumstance, one could adopt a maximin strategy, and choose Y, since the worst-case outcome then is that one receives 3. Alternatively one could adopt a maximax strategy, and choose X, since the best-case outcome then is that one receives 9. But no uncontroversial rational principle dictates whether one should maximin or maximax, so game theory has no definite recommendation to make here.
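A quick check of the 1/3 threshold mentioned above (an illustrative Python snippet, not part of the text):

```python
def expected_utilities(p):
    """Row's expected utilities of X and Y when the column player plays X with probability p."""
    return 9 * p + 2 * (1 - p), 7 * p + 3 * (1 - p)

for p in (0.2, 1/3, 0.5):
    print(p, expected_utilities(p))
# X beats Y exactly when 9p + 2(1-p) > 7p + 3(1-p), that is, when p > 1/3.
```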
With that preamble, let us consider a three-round Prisoner's Dilemma, in which the pay-offs to row on each move are as follows:
                Altruist    Selfish
    Altruist    3           0
    Selfish     5           1
I want to consider four possible strategies that might be used in this three-round game. They are TFT3, TFT2, TFT1, and TFT0. TFT3 plays Tit-For-Tat on each of the three moves.8 TFT2 plays Tit-For-Tat on the first two moves and then defects on the last one. TFT1 plays Tit-For-Tat on the first move only and then defects on the last two. TFT0 defects on all moves; it is ALLD by another name. The payoffs to row in a three-round game are as follows:
            TFT3    TFT2    TFT1    TFT0
    TFT3    9       6       4       2
    TFT2    11      7       4       2
    TFT1    9       9       5       2
    TFT0    7       7       7       3
What should a rational player do in this game? If the other player plays TFT3, the best response is TFT2. If the other player plays TFT2, the best response is TFT1. And if the other player plays TFT1, the best response is TFT0. All this is true, but does this show that the rational strategy is TFT0? I would say no. There is no way to establish which strategy will maximize expected utility, for reasons captured by the simple game involving strategies X and Y. To be sure, if we consider just TFT3 and TFT2, TFT2 is the dominant strategy. And if we consider just TFT2 and TFT1, TFT1 is the dominant strategy. And if we consider just TFT1 and TFT0, TFT0 is the dominant strategy. All this is true and irrelevant to the game before us in which all four strategies need to be considered:

(3) In a three-round Prisoner's Dilemma in which each player may play TFT3, TFT2, TFT1, or TFT0, a rational deliberator cannot say which strategy is best without information concerning which strategy the other player will use.

In contrast, we do get a solution to the parallel evolutionary problem. If the population begins with TFT3 in the majority, that configuration
is unstable. It will evolve to a configuration in which TFT2 is the most common strategy.9 But this configuration, in turn, is also unstable; it will give way to a configuration in which TFT1 is in the majority, and this will, finally, be replaced by TFT0. Of the four strategies listed, TFT0 is the only evolutionarily stable strategy (Maynard Smith 1982). Evolution will lead TFT3 to be replaced, in step-wise fashion, until TFT0 comes to predominate:

(4) In a population of pairs of individuals playing a three-round Prisoner's Dilemma in which the strategies used are TFT3, TFT2, TFT1, and TFT0, what will evolve is a population in which everyone plays TFT0.

The backwards induction makes good sense in this evolutionary context: It shows that 100% TFT0 is the only stable population configuration. However, the backwards induction is not valid when it comes to rational deliberation; no principle of rationality singles out TFT0 as the best strategy to use in this problem (Sober 1992).

I said at the beginning of this section that the principle of maximizing expected utility does not, by itself, endorse the conclusion that the backwards induction argument generates. However, if we adopt the assumption that all players in the game are rational and that the players know this (and know that they know this, etc.), then the argument can be defended (Bicchieri 1993). It is important to recognize that this assumption is an extremely demanding one. Something like it may be defensible in specific contexts, but human beings are ignorant enough and irrational enough that it can hardly be accepted uncritically.10 Game theory is supposed to describe what rational players ought to do in strategic situations, but it is not inevitable that rational players always find themselves playing with people who are themselves perfectly rational and fully informed. In evolutionary game theory, it is certainly not an assumption that organisms interact solely with optimally adapted conspecifics. Indeed, the whole point of the subject is to describe the result of interactions that occur in polymorphic populations. Just as rational deliberators may have to interact with agents who are different, so well-adapted organisms may have to interact with organisms unlike themselves. But this similarity between the theory of strategic deliberation and evolutionary game theory belies this difference: In the example under discussion, if we assume that agents may differ in their rationality, the backwards induction does not work, but if we assume that organisms differ in their fitness, the conclusion drawn by backwards induction is perfectly correct.
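A minimal simulation sketch of proposition (4) (Python, not from the paper), using the payoff table above: starting with TFT3 in the majority, the discrete replicator dynamics reproduces the step-wise replacement ending in a population of TFT0 players.

```python
import numpy as np

# Three-round payoffs from the table above (row strategy against column strategy),
# in the order TFT3, TFT2, TFT1, TFT0.
U = np.array([[ 9.0,  6.0, 4.0, 2.0],
              [11.0,  7.0, 4.0, 2.0],
              [ 9.0,  9.0, 5.0, 2.0],
              [ 7.0,  7.0, 7.0, 3.0]])

p = np.array([0.97, 0.01, 0.01, 0.01])   # TFT3 starts in the majority
for _ in range(1000):
    w = U @ p                            # expected payoff of each strategy under random pairing
    p = p * w / (p @ w)                  # discrete replicator update
print(np.round(p, 3))                    # approaches [0, 0, 0, 1]: everyone plays TFT0
```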
4. Actual Fitnesses and Counterfactual Preferences

The two-person Prisoner's Dilemma is a special case. If there are n individuals in the group, of whom a are altruists, which trait will have the higher fitness? If altruists donate a benefit of b to each of the other individuals in the group, and incur a cost of c in doing so, the fitnesses are:

w(Selfishness) = x + ab
w(Altruism) = x - c + (a - 1)b

In this circumstance, selfishness has the higher fitness precisely when b + c > 0. The conclusion is slightly different if we imagine that altruists create public goods - meaning that their donations benefit everyone in the group, themselves included. In this case, the fitnesses are

w(Selfishness) = x + ab
w(Altruism) = x - c + ab,

which entails that selfishness has the higher fitness precisely when c > 0.

Just to be clear on the difference between these two cases, let us consider two somewhat fictional examples. If a crow issues a sentinel cry to warn other individuals in the group that a predator is approaching, the crow receives no benefit from doing this, but incurs an energetic cost and places itself at greater risk by becoming more salient to the predator.11 The sentinel's warning is an altruistic behaviour of the first type. Contrast this with a group of human beings who build a stockade for common defence. Building the stockade creates a public good, because builders and non-builders alike share in the benefits. This is altruism of the second type. Although these situations are different, the evolutionary outcome is the same, if the process is determined by individual selection; in both cases, the selfish trait (of not issuing a sentinel cry, of not helping to build the stockade) will evolve, as long as the cost of donation and the benefit to the recipient are both positive.

Although I shall return to the difference between other-directed donations and public goods altruism in a moment, let us now focus on what they have in common. The basic relationship between selfishness and altruism in the n-person case is depicted in Figure 1. Notice that selfish individuals do better than altruists, regardless of what the composition of the group is. However, everyone suffers the more the group is saturated with selfishness. This is illustrated by the fact that w, the average fitness of individuals in the group, has a downhill slope.
Figure 1
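The two fitness schedules above can be checked directly; here is a short sketch (Python; the numerical values are invented for illustration).

```python
def other_directed(x, b, c, a):
    """Altruists donate b to each of the other group members and pay cost c;
    a is the number of altruists in the group.  Returns (w_selfish, w_altruism)."""
    return x + a * b, x - c + (a - 1) * b

def public_goods(x, b, c, a):
    """Altruists' donations benefit everyone in the group, themselves included."""
    return x + a * b, x - c + a * b

x, b, c, a = 10, 3, 2, 4                   # hypothetical values
wS, wA = other_directed(x, b, c, a)
print(wS - wA == b + c)                    # True: selfishness is fitter exactly when b + c > 0
wS, wA = public_goods(x, b, c, a)
print(wS - wA == c)                        # True: selfishness is fitter exactly when c > 0
```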
The upshot of natural selection in this case is straightforward, if the selection process takes place within the confines of a single group.12 Selfishness is fitter than altruism at every population configuration, so the population evolves to a configuration of 100% selfishness. Note that this endpoint is the minimum value of w. In this case, what evolves is not good for the group. Nor is it especially good for the individuals in the population, who would have been better off if everyone had been altruistic. We have here a "pessimistic" picture of natural selection; it reduces fitness, rather than increasing it: (5) If selfishness is fitter than altruism at every frequency, then individual selection will drive the population to 100% selfishness. Regardless of whether the altruism under discussion involves benefits donated exclusively to others or involves the creation of public goods, altruism cannot evolve within the confines of a single group as long as b and c are both positive. What happens when we shift this problem into the context of rational deliberation? We imagine the agent is trying to maximize expected utility. What should the agent do? Although the evolutionary problem depicted in Figure 1 has an obvious solution, the present problem is more subtle. Figure 2 shows two ways in which the relationship between selfishness and altruism in Figure 1 might be "magnified," with utility substituted for fitness. Rather than imagining that the frequency of selfishness in the population can take any value between 0 and 1, we are thinking of the population as containing n individuals. The agent must choose between a selfish and an altruistic act. There are already i (0 < i < n) selfish individuals in the population. If the
Figure 2
agent chooses selfishness, this number is augmented to i + 1; if the agent chooses altruism, the number of selfish individuals remains at i. Which choice is better for the agent depends on whether the payoffs are as displayed in Figure (2a) or (2b). In (2a), point y has a higher utility than point x, so the agent should choose selfishness. However, in (2b), point y has a lower payoff than point x, so the agent should choose altruism:

(6) In an n-person Prisoner's Dilemma, a rational deliberator should behave selfishly if and only if the payoff to selfish individuals when there are i + 1 selfish individuals exceeds the payoff to altruists when there are i selfish individuals.

The contrast between propositions (5) and (6) reflects a difference between the process of natural selection and the process of rational deliberation. To see what happens in natural selection, you compare the fitnesses that alternative traits in the population actually have. However, to deliberate, you must think counterfactually. You must compare what would happen in one circumstance with what would happen in another. An evolutionist who looks at Figure (2a) or Figure (2b) and wishes to predict which trait will evolve will compare points x and z1 or points y and z2, depending on which correctly describes the population at hand. In contrast, a rational deliberator who wishes to choose an action will compare points x and y.

I began this section by contrasting altruistic donations that exclusively benefit others with altruistic donations that create public goods. Individual natural selection favours neither of these. But rational deliberators who aim to maximize their own fitness will sometimes choose
to create public goods even though they will never make altruistic donations that exclusively benefit others. They will decline to issue sentinel cries, but they may build stockades. It is in the context of public goods altruism that the heuristic of personification is found wanting.

In an evolutionary context, "selfishness" names the trait that evolves if individual selection is the only evolutionary force at work, while "altruism" is the name of the trait that evolves if group selection is the only cause of evolutionary change. When both types of selection act simultaneously, the outcome depends on which force is stronger. As noted earlier, the biological concepts of selfishness and altruism do not require that the organisms so labelled have minds. But now let us suppose that they do. Let us suppose that a selfish or an altruistic behaviour has evolved, and ask whether rational deliberation could be the psychological mechanism that causes individuals to produce these behaviours.

We shall begin by considering the simple case of agents who want only to maximize their fitness. If selfishness evolves and the fitnesses are those displayed in Figure (2a), this behaviour could be produced by individuals who are rational deliberators. However, if selfishness evolves and the fitnesses are those displayed in Figure (2b), matters are different. Rational deliberators will not choose to be selfish in this case. A mirror-image pair of claims applies if altruism evolves. If the fitness relations are as depicted in Figure (2a), then the altruists who are the product of natural selection cannot be rational deliberators. However, if the fitness relations are as shown in Figure (2b), they can be rational deliberators. These conclusions are described in the following table. The cell entries answer the question: Could the trait that evolves be produced by a rational agent whose goal is to maximize fitness?
                       What Trait Evolves?
                       Selfishness    Altruism
    Fitnesses  (2a)    YES            NO
               (2b)    NO             YES
This set of conclusions does not mean that rational deliberation cannot evolve in two out of the four cases described. It means that if rational deliberation is to evolve in the upper-right- or lower-left-hand cases, then the agents' preferences must not be perfectly correlated with maximizing fitness. It is commonly said as a criticism of sociobiology that human beings care about more than just staying alive and having babies. Usually, this
is taken to be a testimony to the influence of culture, which is supposed to displace biological imperatives from centre-stage. The present argument, however, shows that caring about things other than fitness can be a direct consequence of evolutionary processes. Rational deliberation may confer a biological benefit, but sometimes its evolution depends on having utilities that do not directly correspond to the fitness consequences of actions. Some differences between the psychological concepts of egoism and altruism and the evolutionary concepts that go by the same names are obvious. The psychological concepts involve motives, whereas the evolutionary ones do not; the evolutionary concept concerns the fitness consequences of a behaviour, whereas the psychological categories make use of a much more general notion of welfare (Wilson 1991; Sober 1994b). However, the present discussion identifies a lack of correspondence between the two sets of categories that I think goes beyond the usual clarifying remarks. It is often supposed that a trait is evolutionarily selfish if a psychological egoist interested only in maximizing his own fitness would choose to have it. This application of the heuristic of personification is as natural as it is mistaken.13
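For the public-goods case, the comparison required by (6) can be made concrete with the linear fitness schedule of this section. In the following sketch (Python; the numbers are invented, and the linear payoffs are only one way the curves in Figure 2 might look), the deliberator builds the stockade when her own share of the public good exceeds the cost, even though an evolutionist comparing actual fitnesses at any fixed composition still finds selfishness fitter.

```python
# Public-goods altruism with the linear fitnesses of this section:
# a group of n individuals, i of them selfish; each altruist pays c and
# contributes a benefit b enjoyed by every group member.
n, x, b, c = 10, 5, 3, 1        # hypothetical values; here b > c (the Figure 2b pattern)

def payoff_selfish(i):          # payoff to a selfish individual when i are selfish
    return x + (n - i) * b      # benefits from the n - i altruists

def payoff_altruist(i):         # payoff to an altruist when i are selfish
    return x - c + (n - i) * b  # includes her own contribution

i = 4
# The deliberator's (counterfactual) comparison, as in (6):
print(payoff_selfish(i + 1) > payoff_altruist(i))    # False: she should build the stockade
# The evolutionist's comparison of actual payoffs at a fixed composition:
print(payoff_selfish(i) > payoff_altruist(i))        # True: selfishness is fitter at every i
```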
5. Concluding Remarks Darwin realized that his term "natural selection" was a metaphor drawn from the literal notion of rational deliberation. Nature is not a conscious agent, but in most instances it is harmless to think of natural selection as if a conscious agent were choosing traits on the basis of a fitness criterion. The imperfections of this analogy did not prevent the development of a literal theory of how natural selection works. In fact, only with this theory in hand can we return to the metaphor from which it derived and assess that metaphor's scope and limits. There is much to be said for the heuristic of personification. My goal here has been to suggest that we not overestimate its power. I began this paper by asking why there was any point in writing it. I now can offer something by way of an answer. Because it is so natural and intuitive to use the heuristic of personification to think about the process of natural selection, it is well to have clearly in focus the fact that the heuristic can be misleading. So one reason it is worth detailing the differences between deliberation and natural selection is to hone one's understanding of the latter process. But there is a second type of illumination that this enquiry may be able to provide. To understand how the ability to deliberate evolved, it is important first to have a clear appreciation of what that ability involves. Popper (1972) is hardly the only person to have suggested that deliberation is a selection process in which our "theories die in our stead."14 To understand what
deliberation is, it is important to see that it does not simply replicate the structure of the process of natural selection. Engaging in rational deliberation is a phenotypic trait for which an evolutionary story needs to be told. The phenotype is indeed a curious one. Why did evolution lead it to exhibit its present contours?
Acknowledgments I am grateful to Ellery Eells, Larry Samuelson, Brian Skyrms, and David S. Wilson for comments on an earlier draft of this paper.
Notes 1 In putting matters this way, I am ignoring selection processes that can produce traits that are not good for the organism - namely, group selection on the one hand and true genic selection on the other. For an introduction to the issues involved here, see the chapter on the units of selection controversy in Sober (1993). 2 Adaptationists ignore non-selective processes because they think that they have a negligible effect on evolutionary outcomes. See Sober (1993) for discussion of the debate about adaptationism. 3 This does not mean that the processes must lead to an outcome that is optimal, for reasons that will become clear later on in the paper. I use "optimizing" to describe processes whose instantaneous laws of motion involve change in the direction of some "best" state; such processes need not, in the end, come to rest at some globally optimal state. 4 This table represents the "additive" case in which (1) altruism imposes the same cost on self, regardless of what the other player does, and (2) altruism confers the same benefit on the recipient, regardless of whether the recipient is altruistic or selfish. Non-additive payoffs can certainly be considered; the lessons I shall draw from the additive case would apply to them as well. 5 I assume here that organisms do not choose their phenotypes. They are either altruistic or selfish, and then the biology of their situation determines what the rules of pair formation are. For example, if individuals are reared in sib-groups, this will mean that altruists interact with altruists more than selfish individuals do. See Sober (1993), p. 114 for discussion. A more complicated model would allow organisms to "choose" their phenotypes in the sense that an organism's phenotype would be conditional on some detectable environmental cue. For discussion of the issue of phenotypic plasticity, see Sober (1994a). 6 Nor is c > 0 necessary for selfishness to be the fitter trait. If c and b are both negative, the inequality stated in (2) may be satisfied. Of course, when b < 0, it is odd to use the term "altruism." Rather, a more general format is then being explored, in which both positive and negative interventions of one organism in the affairs of another are considered.
7 See the discussion of evidential and causal decision theories (as represented by the distinction between type-A and type-B beliefs) in Eells (1982). 8 Tit-for-Tat means that the player acts altruistically on the first move and then does on the next move whatever the partner did on the previous one. 9 As is usual in evolutionary game theory, one assumes the input of a few mutations in order to "test" the stability of monomorphic configurations. 10 In my opinion, the assumption that others are rational is nothing more than a useful heuristic that is often approximately correct. Davidson (1984) argues that the assumption of rationality is an a priori requirement if the beliefs and behaviours of others are to be interpreted; the empirical character of claims of rationality (and irrationality) is defended in Kahneman, Tversky, and Slovic (1982). 11 For the purposes of this example, I assume that the sentinel crow does not gain protection from the predator by sending the rest of the flock into a flurry of activity. 12 Not so if group selection occurs, but I am ignoring that possibility here. 13 This defect in the heuristic of personification has helped make hypotheses of group selection seem less worth considering than they really are. Suppose hunters must share their kill equally with everyone else in the group. If the hunter's share exceeds the cost of hunting, many biologists would conclude that it is in the self-interest of an individual to hunt and that the trait will therefore evolve by individual selection. This ignores the fact that free-riders do better than hunters in the same group. It takes a group selection process for this type of "self-interest" to evolve. See Wilson and Sober (1994) for further discussion. 14 Bradie (1994) provides a useful review of work in evolutionary epistemology that elaborates the idea that change in opinion can be modeled as a selection process.
References Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books. Bicchieri, C. (1993). Rationality and Coordination. Cambridge: Cambridge University Press. Bradie, M. (1994). Epistemology from an evolutionary point of view. In E. Sober (ed.), Conceptual Issues in Evolutionary Biology (Cambridge, MA: MIT Press). Davidson, D. (1984). Inquiries into Truth and Interpretation. Oxford: Oxford University Press. Eells, E. (1982). Rational Decision and Causation. Cambridge: Cambridge University Press. Kahneman, D., A. Tversky, and P. Slovic (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press. Luce, D., and H. Raiffa (1957). Games and Decisions. New York: Wiley.
Maynard Smith, J. (1982). Evolution and the Theory of Games. Cambridge: Cambridge University Press. Popper, K. (1972). Objective Knowledge. Oxford: Clarendon Press. Skyrms, B. (1994). Darwin meets The Logic of Decision: Correlation in evolutionary game theory. Philosophy of Science, 61: 503-28. Sober, E. (1985). Methodological behaviorism, evolution, and game theory. In J. Fetzer (ed.), Sociobiology and Epistemology (Dordrecht: Reidel). (1992). Stable cooperation in iterated prisoners' dilemmas. Economics and Philosophy, 8: 127-39. (1993). Philosophy of Biology. Boulder, CO: Westview Press. (1994a). The adaptive advantage of learning and a priori prejudice. In From a Biological Point of View: Essays in Evolutionary Philosophy (Cambridge: Cambridge University Press). (1994b). Did evolution make us psychological egoists? In From a Biological Point of View: Essays in Evolutionary Philosophy (Cambridge: Cambridge University Press). Wilson, D. (1991). On the relationship between evolutionary and psychological definitions of altruism and egoism. Biology and Philosophy, 7: 61-68. Wilson, D., and Sober, E. (1994). Reintroducing group selection to the human behavioral sciences. Behavioral and Brain Sciences, 17: 585-654.
19 Evolutionary Models of Co-operative Mechanisms: Artificial Morality and Genetic Programming Peter A. Danielson
1. Introduction Social dilemmas, modeled by the Prisoner's Dilemma shown below, contrast rationality and morality. The moral appeal of C is obvious, as joint co-operation is mutually beneficial. But D (defection) is the rational choice because each does better by defecting.

         C       D
C        2,2     0,3
D        3,0     1,1
The contrast is not complete because agents that safely achieve joint cooperation do better than rational agents, and so steal some of the intuitive pragmatic appeal that the theory of rational choice attempts to formalize. On the other hand, the contrast is not sharp because while we know quite precisely what rational agents should do, the recommendations of instrumental moral theory are less well worked out. Both points lead us in the same direction; we would like to know if there are agents that can capture the benefits of mutual co-operation in these difficult social situations. Artificial Morality (Danielson 1992) argued for the existence of instrumentally robust moral agents by designing some. This Introduction sketches that argument, indicates some of its problems, and suggests why an evolutionary elaboration of the model might solve these problems. The remainder of the paper introduces Evolutionary Artificial Morality.
Artificial Morality The simplest case in which a morally constrained agent might do better than a rational agent is the Extended version of the Prisoner's
Dilemma (XPD). Player X chooses between C and D at t1 and then player O, knowing what X has done, chooses between C and D at t2; see Figure 1.1 A simple, look-ahead algorithm selects rational moves in this sequential situation: begin at the end of the interaction, where each move is associated with an outcome for the chooser. Here, player O's choice is clear; in each case (whatever X does) D is better for O. Therefore, player X, looking ahead, is faced with outcomes of 0 for C and 1 for D and chooses D. This suggests how moral agents might work differently. Consider a conditional co-operator (CC), who, in position O, commits or promises to return C for C and D for D. O's commitment changes X's situation. Now X faces outcomes of 2 for C and 1 for D and chooses C. Not only will CC co-operate with other co-operators, but CC's constraint makes C the rational choice for X. In both cases, CC does better than had it chosen the locally best D. These indirect benefits of constraint are the basis of Gauthier's (1986) pragmatic argument for his proposed moral principle of constrained maximization. This argument raises further questions. Procedurally, how do constrained agents communicate their constraint and how do agents assure themselves of others' constraint?
Figure 1: The Extended Prisoner's Dilemma.
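To make the look-ahead reasoning concrete, here is a minimal sketch in Python (chosen only for illustration; Danielson's own players were written in Prolog). It backs up O's known response rule against the XPD payoffs above; the function names are invented for this sketch.

PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
          ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}   # (X's score, O's score)

def best_x_move(o_response):
    # back up O's (known) response rule to find X's best first move
    scores = {x: PAYOFF[(x, o_response(x))][0] for x in ('C', 'D')}
    return max(scores, key=scores.get)

def rational_o(x_move):
    return 'D'                 # D is better for O whatever X has done

def cc_o(x_move):
    return x_move              # CC's commitment: C for C, D for D

print(best_x_move(rational_o))   # -> 'D': X faces 0 for C, 1 for D
print(best_x_move(cc_o))         # -> 'C': X faces 2 for C, 1 for D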
Substantively, how morally constrained can an agent be and still resist exploitation? For example, unconditional co-operators (UC) model extreme impartiality, but they will be exploited; CC discriminates against defectors but tolerates UC. Artificial Morality2 addressed these questions by constructing models of rational and co-operative agents and testing them in interaction. By introducing various agents, some of whom do not respond directly to payoffs, we have moved away from modeling situations as games. AM goes one level deeper, modeling the cognitive equipment used by co-operative and rational players. On the procedural side, constructing moral and rational players in a computer programming language tests them for coherence. The results are reassuring: discriminating agents, such as CC, are possible. But the pragmatic problem is more difficult, because other discriminating constrained agents can also be constructed and not all of them are so morally attractive. For example, we built the nastier reciprocal co-operator (RC), which co-operates when but only when it is necessary, and so exploits unconditional co-operators. Incidentally, this result helps us defend AM against the charge that it is soft-headed. True, AM is willing to entertain moral conceptions to generate co-operative players which use non-behavioural information and extra-rational constraint. But AM continues to use the cynical techniques of game theory to test these proposed players. Indeed, it takes cynicism a step further. While game theory pits every proposal against the rational agent, AM constructs exploitive agents who use the new techniques introduced to further moralized co-operation. Thus, RC exploits the very techniques that enable CC to discriminate in favour of the co-operative, using them to discriminate as well against the naively co-operative, and in the game of Chicken, bullies can successfully threaten more responsive agents. However, allowing new agents introduces new problems. We will consider two of them. Arbitrary Populations In AM I chose to test CC against RC, but perhaps there are yet other players, built from these same mechanisms, that do better than either. For example, my RC player is similar to testing players known to do well against Tit-for-Tat (TFT) - CC's analogue - in small, Iterated PD tournaments. But testers tend to do less well in larger tournaments.3 AM's results are limited by the arbitrary small populations tested, in two ways. First, we may have neglected players that would do better than RC, and second, the populations may lack crucial king-makers or king-breakers.4 Arbitrary small populations are the first problem for Artificial Morality.
We need a way more generally to construct appropriate populations of players. Following Axelrod, AM used evolutionary game theory to reduce the arbitrariness of the tournament populations. By replicating successful strategies one reduces the influence of those who do well only with the unsuccessful. However, in these simplified models where "like begets like" (Maynard Smith 1982), no new players are created; this is "evolution" only in a weak sense. Axelrod (1984) appropriately labels this replicator approach "ecological." Truly evolutionary testing would allow new players to be constructed from parts of the successful players. How can we do this? Unfortunately, leaving game theory's mathematically tractable domain of strategies, we have no general procedure for constructing these more complex agents. As a starting point, I propose that we focus on the mechanisms used by a proposed principle, like CC, and ask whether these mechanisms will allow other players to exploit or otherwise undermine the proposed principle. Thus, we need to look beneath players to the mechanisms from which they are constructed. These mechanisms will be the building blocks for generating non-arbitrary populations by evolutionary techniques.5
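For concreteness, the "ecological" (replicator) testing just described can be sketched in a few lines of Python. This is only an illustration of the bookkeeping, under the assumption that strategies meet in proportion to their current shares; the payoff function is a placeholder, not Axelrod's or AM's code.

def ecological_step(shares, payoff):
    # shares: strategy -> frequency; payoff(s, t): s's score against t
    avg = {s: sum(shares[t] * payoff(s, t) for t in shares) for s in shares}
    mean = sum(shares[s] * avg[s] for s in shares)
    return {s: shares[s] * avg[s] / mean for s in shares}   # no new strategies arise

PD = {('C', 'C'): 2, ('C', 'D'): 0, ('D', 'C'): 3, ('D', 'D'): 1}
print(ecological_step({'C': 0.5, 'D': 0.5}, lambda s, t: PD[(s, t)]))
# -> C falls to 1/3, D rises to 2/3: replication without novelty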
A Surfeit of Mechanisms When we ask which mechanisms we should model, we confront a second problem with Artificial Morality: a surfeit of mechanisms. AM stresses the importance of discrimination. Successful co-operators need to know something about the other player in the XPD, to avoid being exploited. But it is not clear how much they need to know, nor how they should go about finding it out. AM focused on CC and RC, who use the same procedural strategy. They test other (transparent) players by executing their programs: will you return C given C? However, this testing approach is complex and risky: programs that test each other may loop. An alternative approach to discrimination does not execute the other player's program, but instead matches the quoted programs of the two players. In Artificial Morality I introduced a Selfsame Co-operator (SC) that co-operates just in case of a match. Since only identical players will match, and they both will be disposed to co-operate in this case, SC players will co-operate with each other. SC is safer than either CC or RC, because matching cannot loop, but will co-operate only with the most narrow clique of its identical clones. In contrast, testing is a more general method that allows non-identical pairs like {CC, UC} and {RC, CC} to co-operate. It is obviously better (other things equal) to be safer, but it is also better (other things equal) to co-operate more broadly.6 For example, RC gains by tolerating CC; SC misses similar opportunities. On the other hand, matching is much simpler than testing, applies to the more difficult
one-shot PD as well as the XPD (testing only works in the latter) and defaults, as we have seen in the case of SC, to something close to RC, which we suspect to be a successful principle. So it is far from clear which of these mechanisms will turn out to produce better players. Again, we need a method to generate and test the players that these various mechanisms allow to be constructed.
Evolutionary Artificial Morality Evolutionary Artificial Morality (EAM) addresses these two problems. EAM uses techniques developed by Artificial Life researchers, the genetic algorithm and genetic programming, by which they automatically generate players constructed from basic mechanisms for rational and moral play. "Evolutionary models of co-operative mechanisms" is an introduction to EAM techniques and shows how they address the two problems posed by the AM research program. Section 2 shows how to generate relevant new players from representations of basic mechanisms, using the technique of genetic programming. Section 3 discusses some interesting results of running the Artificial Morality/Genetic Programming (AMGP) model. Section 4 considers one way to extend the evolutionary model. We conclude by reconsidering our two problems in the light of the evolutionary elaboration of AM. 2. An Evolutionary Framework EAM will take a conservative approach, leaving something very close to AM's mechanisms and tournaments in place and building an evolutionary apparatus around them. Evolution is a generate and test procedure. Round-robin XPD tournaments provide the means of testing players; we need to add a way automatically to generate players. The problem is that our tournaments require players implemented as computer programs. Computer programs are fragile; it is not clear how to mutate programs into working programs. "Mating or mutating the text of a FORTRAN program, for example, would in most cases not produce a better or worse FORTRAN program but rather no program at all" (Holland 1992, p. 66). My players were written in Prolog instead of FORTRAN, but they would fare no better when mutated. To build an evolutionary model we need to change the way players are represented. Holland's solution to this problem is radically to simplify and constrain the executable representations to be tested, in order to meet the demands of mutation. Inspired by nature's solution to this problem, he applies crossover to fixed-length bitstring "chromosomes." For example, Axelrod (1987) represents Iterated PD strategies as decision tables coded as binary strings. But the expressive capability of this representation is limited. In particular, it is difficult - perhaps impossible - to
represent the testing strategies that AM uses. Nor is it obvious how to combine matching and testing methods in one tournament test. Therefore, in this paper we explore another approach which is able directly to express AM's matching and testing mechanisms.7 Genetic Programming John Koza's remarkable method of genetic programming (1992) allows us to mutate a general-purpose computing language. This makes the job of applying evolution to AM's players much easier; we avoid the radical translation into an entirely new representation that the genetic algorithm demands. None the less, we must rethink the way AM represented players for two reasons. First, AM focused on players rather than mechanisms. So we need to design functions for three basic mechanisms: moving and responding to moves, matching programs, and testing the other player. Second, Koza's method greatly relieves but does not entirely remove the constraining influence of mutation. To be suitable for genetic manipulation, the set of functions must be closed under composition. Every function must be able to accept, as an argument, any possible output of any other function. To meet this constraint, we need to design a set of mechanisms with composition in mind.8 (Readers uninterested in the design issues might skip ahead to Section 3.) We begin with the basic game-playing function, which reads the XPD game tree, generating legal moves for players in roles X and O. This function is not specific to AM; any XPD player needs to be able to choose legitimate moves whether he moves first or second in the game. The game tree has three states which we represent by a variable, XM, recording player X's move. XM is either undefined (U) when X is still to move, or set to C or D once he has moved. Our basic function CXM - for Case XM - selects on the basis of XM among its three argument slots: (CXM U C D).9 In the simplest instance, these slots are filled with the terminals, C and D, giving us eight elementary players such as these:
;UC ;Tit-for-Tat ;UD
Tit-for-Tat shows how the CXM function works, selecting the first C when playing role X, and the second C if the O-player chooses C, and the third slot, D, if the O-player chooses D. This may seem like a lot of apparatus to play a simple game. It is. The point of this representation is to allow crossover to combine functions to generate new players. Notice that our CXM function can only return
CC:  (CXM (TEST-C d c d) c d)
RC:  (CXM (TEST-C d (TEST-D d d c) d) (TEST-D d d c) d)
Figure 2: CC and RC Contrasted.
C or D, therefore a CXM function could be used in any of its slots, and so on for the slots of these embedded functions. So (CXM (CXM C D C) C D) is well formed. However, this function is strategically no different from Tit-for-Tat.10 Testing and Matching Now we add the testing mechanism used by CC and RC. The testing functions are three-way switches like CXM, but instead of branching on X's move, they determine an O-move for the other player, by calling the other's program with XM (temporarily) set to C or D. The (TEST-C U C D) function executes the other with XM = C. There are three possibilities; since the first is the most complicated, consider them in reverse order: (3) if the other is UD the result will be D; (2) if the other is, say, UC or TFT, the result of the test will be C; (1) if the other player is (TEST-C C C D) what will happen? This second program will test the first, which will test the second, and so on. The testing functions will loop. By design, at an arbitrary level of nesting, the test returns Undefined and TEST-C selects its first argument. While this design decision is kind to testers, it does not trivialize the evolution of conditional co-operators.11 The simple LOOPY player, (TEST-C C C D), is not a successful conditional co-operator. While it can distinguish UC, Tit-for-Tat and itself from UD, it can be readily exploited. The reader might try to design a player that will exploit LOOPY. The answer is hidden in a note.12 But testers need not loop nor be so simple.
Testing and Matching Now we add the testing mechanism used by CC and RC. The testing functions are three-way switches like CXM, but instead of branching on X's move, they determine an O-move for the other player, by calling the other's program with XM (temporarily) set to C or D. The (TEST-C U C D) function executes the other with XM = C. There are three possibilities; since the first is the most complicated, consider them in reverse order: (3) if the other is UD the result will be D; (2) if the other is, say, UC or TFT, the result of the test will be C; (1) if the other player is (TEST-C C C D) what will happen? This second program will test the first, which will test the second, and so on. The testing functions will loop. By design, at an arbitrary level of nesting, the test returns Undefined and TEST-C selects its first argument. While this design decision is kind to testers, it does not trivialize the evolution of conditional co-operators.11 The simple LOOPY player, (TEST-C C C D) is not a successful conditional co-operator. While it can distinguish UC, Tit-for-Tat and itself from UD, it can be readily exploited. The reader might try to design a player that will exploit LOOPY. The answer is hidden in a note. But testers need not loop nor be so simple. What saves them from looping, even in the case both players use testing, is the possibility of functional composition. The testing mechanism executes the O-part of a player, so using it in one's own X-part is safe. Figure 2 exhibits a nonlooping CC on the left side. The Reciprocal Co-operator needs additional tests to determine whether the other player will respond to D with D. So wherever CC has a 'C', RC adds the function (TEST-D D D C): test the other with XM = D and respond to C with D and D with C. Figure 2 contrasts an RC player with the simpler Conditional Co-operator.
Matching is much easier to implement; the two players' programs are compared as lists of literals and the YES and NO options are selected in case the programs match or not, respectively. Here is the simple code to implement matching as well as two distinct SC players who will co-operate with themselves but not with each other:13 (match YES NO) (match c d) (match (match c d) d)
; (if (equal SELF OTHER) ,YES ,NO)) ; Simple matcher ; More complex matcher
The set of functions (CXM, TEST-C, TEST-D, and MATCH), while not expressively complete, allows us to implement the main players discussed in AM.14 More important, by representing the mechanisms around which these players are constructed, in a form suitable to genetic manipulation, this function set should allow new variations to be generated, thus meeting our first goal. Second, these functions allow us to test in a unified tournament environment the matching and testing approaches. Indeed, the extremely open-ended GP framework goes further. It allows players combining our three mechanisms to evolve in the tournament. For example, here are two ways to combine matching and Tit-for-Tat, to produce match-based conditional co-operators: (cxm (match c d) c d) (match c (cxm d c d)) Crossover The flexibility of genetic programming comes from allowing functions to replace functions. Instead of merely replacing fixed length of binary string with another fixed length, Kosa's central innovation makes crossover smarter, capable of parsing functions. The process is best shown by an example. Each of the parents in Figure 4 has four nodes; let the second and first nodes be selected (randomly); these are marked with (*). Crossover replaces the selected nodes in each parent with the selected node in the other parent. The lower part of Figure 3 shows the results in this case.15 We end up with two offspring - one quite complex and the other very simple. (This shows that length and complexity need not be fixed.) The first offspring is a fully functional conditional co-operator; the second a simple unconditional co-operator. Notice as well that the players' programs are open-ended; they may grow more complex (or simpler) via crossover. Finally, this example of cross-over producing CC may mislead, by suggesting that evolution comes too close to directed design. Not so. Most products of crossover are much messier, go nowhere, and make less strategic sense than CC.
Figure 3: Parents and offspring
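Subtree crossover of this kind is easy to sketch. The fragment below (Python, with programs as nested tuples) is an illustration of the idea behind Figure 3, not Koza's or Danielson's implementation; it selects crossover points uniformly, omitting the 2:1 weighting of functions over terminals used in the experiments reported next.

import random

def nodes(prog, path=()):
    # paths of all subtrees, the root included
    result = [path]
    if isinstance(prog, tuple):
        for i, arg in enumerate(prog[1:], start=1):
            result += nodes(arg, path + (i,))
    return result

def get(prog, path):
    for i in path:
        prog = prog[i]
    return prog

def put(prog, path, subtree):
    if not path:
        return subtree
    i = path[0]
    return prog[:i] + (put(prog[i], path[1:], subtree),) + prog[i + 1:]

def crossover(mum, dad):
    # swap a randomly chosen subtree of each parent into the other
    p1, p2 = random.choice(nodes(mum)), random.choice(nodes(dad))
    return put(mum, p1, get(dad, p2)), put(dad, p2, get(mum, p1))

print(crossover(('cxm', 'c', 'c', 'd'), ('cxm', 'd', 'd', 'd')))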
3. Results We are ready to run some experiments with AMGP. In this section I survey a sampling of results, beginning with populations assembled from the simplest functions and then adding additional mechanisms. Set-up and Calibration All the tests use a population of sixty players in a round-robin tournament. The first generation of players was generated by randomly constructing well-formed functions from the fixed terminal set {C, D} and a set of functors characterizing that test. Each player was paired with each of the others (but not itself) in position X and then independently in position O.16 Therefore, the maximum possible score was 2 (roles) X 3 (points) X 59 (co-players) = 354; joint co-operation with the whole population would yield 236, and joint defection 118. Using the total score to a player as the fitness measure, the best thirty players were retained - that is, copied unchanged - into the next generation to provide continuity for the co-evolutionary model. We want to know how each does with those who do well, not only with the (likely mutated) offspring of those that do well. In addition, thirty parents were selected from the whole population; one's chance of selection is proportional to fitness. Crossover was applied to these parents in pairs to spawn thirty offspring; no (other) mutation was used. In selection for crossover, functions were given a weight of 2:1 over terminals. The standard test is twenty runs of forty generations each. Most runs were stable after forty generations. The AMGP model is a complex software mechanism which should be tested on a problem with a known outcome before we use it to probe what we understand less well. We can confidently predict that the CXM functor by itself will not generate any strategy more successful than unconditional defection, so this is a good calibrating test.
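A schematic version of one generation of this set-up, in the same illustrative Python, is given below. The scoring and crossover details are abstracted into arguments (score and crossover are placeholders, not the AMGP code); the numbers follow the text above: sixty players, the best thirty copied unchanged, thirty offspring bred from fitness-proportionally chosen parents, no other mutation.

import random

def next_generation(population, score, crossover, n_elite=30, n_offspring=30):
    # score(a, b): a's points from meeting b once as X and once as O (0..6)
    # crossover(a, b): returns two offspring programs
    fitness = [sum(score(p, q) for j, q in enumerate(population) if j != i)
               for i, p in enumerate(population)]          # at most 2 x 3 x 59 = 354
    ranked = sorted(range(len(population)), key=lambda i: fitness[i], reverse=True)
    survivors = [population[i] for i in ranked[:n_elite]]  # copied unchanged
    parents = random.choices(population, weights=fitness, k=n_offspring)
    children = []
    for a, b in zip(parents[0::2], parents[1::2]):
        children.extend(crossover(a, b))                   # no other mutation
    return survivors + children[:n_offspring]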
Figure 4: Best and average scores for CXM players.
Our prediction is borne out. In every generation, unconditional defection (UD) does best, and by generation 7 all but six players are UD, a pattern that continues through generation 40. Saturation with UD is shown in Figure 4 by the convergence of the upper line - the score of the best player - with the lower - the average of all the players. The steep initial slope records the exploitation of the rapidly disappearing stock of co-operative players. Many of these players count as morally constrained - in that they choose C rather than the individually dominant D - but none of these morally constrained agents is discriminating enough to persist. In particular, while Tit-for-Tat, (CXM C C D), will discriminate in favour of those that choose C rather than D in move X, there is no means for any player to discriminate in favour of Tit-for-Tat's responsiveness. Therefore, UD does better than Tit-for-Tat. (Tit-for-Tat comes in 59th in generation 40.) It is important to note this in order to stress what AMGP cannot do. It cannot produce successful moralized co-operation with the strategically impoverished players generated by the function set {CXM, C, D} alone. Put the other way around, were AMGP to produce such a miracle, it would be an unreliable method.
Conditional Co-operation Our initial results broadly support the main thrust of Artificial Morality: if we add functions to allow moral commitment to be perceived and thus communicated, various responsive agents evolve to solve the PD co-operatively. The difference between runs with and without these techniques is striking. Contrast Figure 4, with only CXM players, with the run in Figure 5, which includes the testing functions, TEST-C and TEST-D as well. Conditional co-operators evolve rapidly after the initial crash.
Figure 5: Conditional co-operators evolve.
Moreover, the results confirm the more cynical predictions of AM: nastier narrow co-operators do better than more generous CC. In seventeen of twenty runs matching SC predominated. However, there are two surprises. First, the agents that evolve under AMGP are not those featured in Danielson (1992): matchers are much more successful than testers, for example, and RC rarely appears spontaneously. These results are probably related: like RC, matchers defect on unconditional co-operators, so they do better than CC in the initial random population. But matchers are much simpler than testing RC, so they evolve faster and find partners more quickly, in spite of the fact that they demand more of their co-operative partners (that is, exact matching). Second, where AM had predicted that CC would be indirectly invaded by UC and then RC, this did not always happen. Some variations on CC proved robust (in environments that excluded matchers). These CC players also tended to grow remarkably long programs. One might think that length was an artifact of the evolutionary process but this is not so. Consider Figure 6, which contrasts the change in length of programs over time under three selection regimes: no selection (dotted), match-based SC (lower solid), and testing CC (upper solid). AMGP is length neutral; the no-selection regime walks randomly. But strategies can select for length. MATCH selects for short clones (since (MATCH C D) does not match the strategically equivalent but longer (MATCH (MATCH C D) D)). CC grows very long. (Recall from Figure 3 that CC need only be 7 atoms long.) Why does CC grow? CC is defined strategically as co-operating with UC. But CCs may have many representations and some of these may have the further genetic property of spawning UC or not. Those that genetically produce UCs will be invaded by players that can exploit UC, and become extinct. (This was AM's criticism of Gauthier mentioned in note 4.) So there is selection pressure towards
Figure 6: Average length of players.
what we might call genetic strategies that block UC. One that reappeared several times I call the BS-Tail: programs which concentrate their strategic essence (typically in the head of the program), with a long (strategically) irrelevant tail which can be sliced by crossover with little effect. (Genetic strategies are discussed again below.)
Looser Matching Match-based SC agents were a surprising success, in spite of their evident strategic liabilities, namely that they cannot co-operate with lexically distinct but strategically identical players. This led me to investigate looser matchers that would tolerate irrelevant differences. The most successful of these are built from a function (GB, for Green Beard) that cares only about the positive results of matching; they ignore differences in the code selected by a mismatch.17 GB agents are more tolerant than matchers; (GB C D) co-operates with its non-identical offspring (GB C (GB C D)), in contrast with (MATCH C D), which does not co-operate with (MATCH C (MATCH C D)).18 GBs are very successful. They do better than less tolerant strict matchers on both pragmatic and moral grounds due to their greater toleration. They find co-operative partners more quickly and co-operate with more of the population. Morally, they make more relevant distinctions and allow a higher population average score. Finally, like CC, GBs tend to grow longer, allowing the non-matched third component to increase, thereby decreasing the possibility of a functionally disruptive crossover.
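The difference between strict and partial matching can be sketched as follows (Python, same tuple representation as the interpreter sketch above). The reading of GB - compare only the functor and the first argument, as note 18 suggests - is an assumption of this illustration.

def strict_match(me, other):
    return me == other                       # whole programs must be identical

def gb_match(me, other):
    # Green Beard: functor and first (YES) argument must match
    return isinstance(me, tuple) and isinstance(other, tuple) and me[:2] == other[:2]

parent = ('gb', 'c', 'd')
child = ('gb', 'c', ('gb', 'c', 'd'))
print(strict_match(parent, child))   # False: a strict matcher defects on its offspring
print(gb_match(parent, child))       # True: GB tolerates the longer, irrelevant tail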
Genetic Strategies We have noted the way both CC and GB-based players use length as a form of what we have called a genetic strategy. A genetic strategy is
Figure 7: GB best and average.
one that influences outcomes indirectly, through genetic transmission, rather than directly, via interaction in a game with the members of its own generation. We should try to get clearer on this aspect of our model. Genetic programming allows strategically equivalent programs to take quite different forms. Consider these two versions of UD:
(cxm d d d)                                   ; Short-UD
(cxm (cxm d c c) (cxm c d c) (cxm c c d))     ; Long-UD
Long-UD is designed to output D just when Short-UD does, so the two forms are equivalent in the strategic context of the XPD game. But Long-UD will have very different consequences in the context of reproduction. Since more than half of its nodes are C, Long-UDs will spawn many more co-operators. Therefore, a population of Long-UD will do better than a population of Short-UD, since the former will generate more exploitive opportunities. However, there is no way for Long-UD to capture this opportunity that it spawns. It must share the prey it creates with all UD, short or long, as well as SC, RC, and the like.19 So Long-UD is not an evolutionarily stable strategy. None the less, it indicates a possibility - strategies at the genetic level - that we should explore further. The easiest way to do this is to extend partial matchers to provide a striking illustration of the power of genetic strategies. We introduce a new function, (LONGER YES NO), which selects on the basis of comparing the length of SELF and OTHER. Now we can construct an agent designed to exploit a genetic strategy. I call it Sacrificing GB (SAC-GB):
(gb (longer d c) d)               ; short SAC-GB
(gb (longer d c) (cxm d d d))     ; longer SAC-GB
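The SAC-GB rule can be paraphrased by hand as follows (Python, illustrative only; this is a restatement of how (gb (longer d c) d) behaves, not the evaluator). The GB comparison and the way length is counted are assumptions of the sketch.

def length(prog):
    return 1 + sum(length(a) for a in prog[1:]) if isinstance(prog, tuple) else 1

def sac_gb_move(me, other):
    if not (isinstance(other, tuple) and me[:2] == other[:2]):   # GB-style match
        return 'd'
    return 'd' if length(me) > length(other) else 'c'            # longer exploits shorter

short_sac = ('gb', ('longer', 'd', 'c'), 'd')
longer_sac = ('gb', ('longer', 'd', 'c'), ('cxm', 'd', 'd', 'd'))
print(sac_gb_move(longer_sac, short_sac))   # -> 'd': the longer exploits the shorter
print(sac_gb_move(short_sac, longer_sac))   # -> 'c': the shorter co-operates upward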
SAC-GB will co-operate only with SAC-GB as long or longer than itself. So the longer example above will co-operate with itself but exploit shorter versions. As we saw above, GB players will tend to spawn longer offspring because they tolerate them. But in this case GB players will respond to these differences in length, not by tolerating them, but by allowing the longer to exploit the shorter. Further, unlike Long-UD, SAC-GB can control the exploitive opportunities that it creates, because the sacrifice is limited to fellow SAC-GB. The result of seeding a population with SAC-GB will eventually be a hierarchy, with the longest player exploiting all, and each shorter player exploiting those yet shorter still. The longest players do extraordinarily well, and this speeds the spread of the SAC-GB program, which is selected for how well it does at best, not on average. To get a SAC-GB to evolve from a wild population is not easy; GB co-operators tend to form first. But if we make the task easier, by truncating (LONGER D C) into a special-purpose function, we can get the predicted result. SAC-GB produce dominance hierarchies as shown in Figure 8. Note the extraordinarily long programs compared with Figure 6 and how scores and length are neatly ordered. These results are cause for several concerns. First, they indicate an unexpected possibility in matching strategies. While our example exploits GB's tolerance, even strict matchers can end up trusting a mechanism that produces different outcomes if there are any differences at all between players. (The point of the LONGER function is to make an irrelevant difference - length - relevant.) This greatly restricts the trustworthiness of matching mechanisms to the limiting case of completely identical players. However, note that the possibility of exploitive strategies does not undermine the pragmatic appeal of partial matching. SAC-GB are attractive because some of them do so extraordinarily well; the longest achieves the highest possible score by successfully exploiting the entire remainder of the population, and they do this stably by building a population structured to sustain SAC-GB. Second, therefore, our concern with the result is partially moral: the average outcome in the exploitive hierarchy is a suboptimal 3, worse than
Figure 8: SAC-GB length and score in a hierarchy.
the co-operative 4. Third and finally, SAC-GB works by manipulating the reproductive aspect of our evolutionary framework. We need to know more about how this aspect of the model influences outcomes.
4. Extending the Model Genetic programming allowed us to extend Artificial Morality in the evolutionary dimension by building on its programming approach to modeling mechanisms. As we have seen, this approach leads to unexpected results and new complexities. We would like to know which results flow from the underlying problem of unstable social co-operation and which are artefacts of our modeling methodology. For example, is our finding that matchers do better than testers a deep result or an artefact of modeling matchers as simpler than testers? Or, to take another example, how is the possibility of an exploitive hierarchy related to reproduction? In this section I shall sketch one way we might modify the AMGP model to address questions about its assumptions. The strategy of Evolutionary AM is to bring features that were exogenously fixed by design in AM into testable parameters of a more general evolutionary model. How might we extend this strategy? The most obvious external feature of EAM is reproduction. Evolutionary models make reproduction crucial; the interests in play are reproductive interests. Our players must succeed in two situations to get ahead into the next generation: the Prisoner's Dilemma and the reproduction processes. Since we use sexual reproduction (i.e., crossover), both are interactions, but only the first is explicitly modeled as a game. Reproduction just happens; players get to make no reproductive choices. It would be interesting to allow them to do so.20 We can do this by adding a second set of functions which are invoked for reproduction.21 Consider two simple extensions of our models in this direction. First, some models are biased slightly in favour of co-operative players by allowing players to play themselves (Danielson 1986). Similarly, if we produce offspring in pairs of twins, we would favour co-operative players. However, it is not clear that we have any reason not to allow twins; after all, we are not modeling any naturalistic biology here. Fortunately, I think there is a way around this problem. These questions need not be begged (one way or the other); we can internalize this external assumption by making twinning an option - at the reproductive level - and subjecting it to selective pressure. Players would have the choice: two (most likely distinct) offspring produced by crossover from both parents, or, instead, one pair of twins produced the same way. It might seem obvious that successful, evolved co-operative players will incorporate the twinning reproductive strategy. But this prediction ignores our general methodology: what other alternatives
can be cobbled together with the new mechanism, in this case, reproductive choice? A second obvious alternative is cloning. We could allow players to choose cloning themselves over mutating via crossover. (We need to be careful to keep the reproductive system fair; we cannot give each cloning parent two clones.) Here, the tempting reproductive strategy is to form a co-operative clan with one's identical offspring. Allowing twinning or cloning permits us to address fairly general questions about whether there are distinctly co-operative reproductive strategies: which of these is more successful for which types of PD strategies is an open question.22 A more focused use for reproductive choice is to deal with the challenge of SAC-GB. Recall (from Section 3) that this exploitive strategy depends on the fact that parents are chosen for individual success. Fitness-proportional selection ignores SAC-GB's low average score. But we can construct reproductive mechanisms that take this feature into account. Perhaps such moralized reproductive mechanisms - for example, one that considers how well a player type does on average - will be able to avoid the reproductive parasite responsible for SAC-GB.23 This short sketch of reproductive alternatives suggests that they will make evolutionary Artificial Morality even more complex. (And we have not considered interaction between the two levels; do matchers require that reproductive strategies also match?) While complexity is daunting, it is less so for evolutionary methods, and reminds us of one of their deep methodological attractions. When evolution repeatedly selects stable configurations of attributes out of extremely large search spaces, we gain confidence that these configurations are more than mere assumptions. Finally, our extension builds on genetic programming's ability to generate variations on anything programmable. Thus, we are able to bring selective pressure to bear on what were previously fixed external assumptions. Conclusions In the Introduction, I noted two problems faced by Artificial Morality. Let us consider the results of our evolutionary model in the light of these problems. The first problem was arbitrariness: our results should not depend on a particular set of hand-crafted players. A robust tournament test requires a varied generator of players. On this problem we have been quite successful. We have been able to capture AM's main mechanisms in a form that allows new players to be generated automatically. Moreover, these new agents, while varied, are not arbitrary. They represent an evolutionary search of variations on the most successful agents generated so far. The second problem was whether matching or testing was the more effective test of other players. Here, the results are unequivocal. By
allowing us to test together the two strategies, AMGP shows the great strengths of matching. Even in a game - the XPD - selected to be kind to testers, and even with the complexity of the testing process largely hidden away, simple matchers still prevailed in most runs. In addition, the main problem with the matching strategy, intolerance of irrelevant differences, is largely overcome by the introduction of a partial matching function, GB. Partial matchers are even more successful than simple matchers, and morally more appealing as well. However, partial matching allows the introduction of a genetic parasite that undermines co-operation in favour of an exploitive hierarchy. Finally, we should ask what new problems our new methods might create. Here, we face the most unexpected results: the role of genetic strategies. Increased length is a genetic strategy that protects CC from UC and GB from intolerance; both are benign. But the possibility of genetic strategies is disturbing as well, since the genetic arrangements are part of the evolutionary framework, the apparatus, not the core model of the XPD game itself. We are led to ask how much of our results are artefacts, due to particularities of our implementation of an evolutionary framework. We suggested a way to bring reproductive strategies into the model in order to address questions like this.
Acknowledgments I have benefited from presenting related material to the ALife Group at U.C.L.A., the Tri-University Philosophy Colloquium at Simon Fraser University, and the Computer Science Colloquium, U.B.C. Special thanks to the Centre for Applied Ethics for providing an ideal research environment. Some of the software discussed in this paper is available from .
Notes 1 This game has the same structure as the sequential game embedded in Figures 1 and 2 in McClennen (1997), when one drops his initial node 1. 2 Danielson (1992); abbreviated to AM. 3 Contrast Axelrod (1978) and (1984). 4 Indeed, AM's criticism of Gauthier (1986) takes precisely this form: his defence of CC depends on neglecting the effect of the king-breaker UC on CC and failing to consider nastier variations on CC. 5 I stress that the focus on mechanisms is a starting point to signal that while players will evolve in what follows, the mechanisms will not; they are designed inputs. See also note 8 below. 6 This is not to say that it is better to co-operate with UC; that would beg a central question of the relation of morality to interest. 7 I substantiate these claims and further contrast Holland's and Koza's approaches in Danielson (1994).
8 I stress the role of design at the mechanism level to avoid any appearance of claiming that we are evolving players from raw LISP. We are evolving players from functions designed to represent special-purpose AM and game theoretic mechanisms in a form suited to genetic manipulation. 9 This design comes from Koza (1992), chs. 15 and 16. 10 Indeed, no composition limited to the CXM function can introduce strategic variety, as any embedded function (CXM arg1 arg2 arg3) in argument slot n of another CXM function can be replaced with argn. However, we shall see below that genetic strategies are another source of variety. 11 The decision to catch looping within the testing function needs justification, since it is friendly to the testing approach. The evolutionary method, by permitting looping players to be constructed, requires that we trap looping behaviour, else the simulation will be brought to a halt. An alternative to trapping loops within the TEST function would be to make looping players output undefined at the game level. But this greatly complicates the game. First, it no longer is a 2 X 2 game, but a 3 X 3, with each player having a third move, Undefined. Second, if A tests B and they loop, who is responsible? Since we force players to accept testing, it seems unfair to penalize the tested player. Third and finally, this choice opens up the strategic possibility of getting a tester to loop. I do not suppose that these are conclusive reasons to treat the risk of looping so gently, but they seem strong enough to justify proceeding as I have. 12 (CXM D C D) will exploit LOOPY when LOOPY is in role O. 13 This is a reminder that we have not exhibited the more complex underlying code for the testing mechanism. 14 For example, we have not provided a mechanism for implementing look-ahead rational agents, such as ESM discussed in Danielson (1992), section 8.1. While we will add additional functions in the next section, EAM players remain cruder than those in AM in several respects. The evolutionary players are all transparent, for example, and cannot learn. 15 Compare Dosi et al. (1997), Figure 1 in this volume, p. . 16 That is, O knows what X just did but not what X did as O in their previous pairing. This information would make the game iterated, not merely extended. 17 For the Green Beard Effect, cf. Dawkins (1989), p. 89. 18 But the tolerance of GB is limited; (GB C D) will not match with (GB (GB C D) D) because GB requires that the functor and first argument match. 19 There is an ironic parallel here with indiscriminate moral agents like UC that cannot protect the co-operative opportunity that they create. 20 One way to deal with this problem is to make the PD game directly into a reproductive game: co-operators get to produce 2 joint offspring, exploiters 3, and suckers none. However, this moves too quickly by identifying the two levels. For example, we may not want to limit exploiters to offspring they can build from themselves and suckers. So I favour, initially, keeping the two games separate.
21 The extension complicates GP models because reproductive functions, which take programs into programs, will not be composable with XPD functions, which take moves into moves. We solve this problem with David Montana's (1993) device of strong typing. 22 And to answer it, we need to develop the model more explicitly, answering questions like: what happens when two parents choose incompatible reproductive strategies? 23 This assumes that parents want to avoid infecting their offspring with this parasite. In the case of a parasite that favours parents over offspring (e.g., replace LONGER with OLDER) this would not be the case. So a more fully moralized reproductive model would give offspring, rather than parents, the choice. Here, we may be approaching a deep constraint imposed by evolutionary models on the study of morality.
References Axelrod, Robert (1978). Artificial intelligence and the iterated prisoner's dilemma. Discussion Paper 120, Institute of Public Policy Studies. Ann Arbor, MI: University of Michigan. (1984). The Evolution of Cooperation. New York: Basic Books. (1987). The evolution of strategies in the iterated Prisoner's Dilemma. In L. Davis (ed.), Genetic Algorithms and Simulated Annealing (Los Angeles, CA: Morgan Kaufmann), pp. 32-41. Danielson, Peter (1986). The moral and ethical significance of Tit-for-Tat. Dialogue, 25: 449-470. (1992). Artificial Morality. London: Routledge. (1994). Artificial morality and genetic algorithms. Paper presented at the Philosophy Department. Edmonton, AB: University of Alberta. (1995). Evolving artificial moralities: Genetic strategies, spontaneous orders, and moral catastrophe. In Alain Albert (ed.), Chaos and Society (Amsterdam, NL: IOS Press). Dawkins, Richard (1989). The Selfish Gene. New Edition. New York: Oxford University Press. Dosi, Giovanni, Luigi Marengo, Andrea Bassanini, and Marco Valente (1997). Norms as emergent properties of adaptive learning: The case of economic routines (this volume). Gauthier, David (1986). Morals by Agreement. Oxford: Oxford University Press. Holland, John (1992). Genetic algorithms. Scientific American, 267(1, July): 66-72. Koza, John (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. Cambridge, MA: MIT Press. Maynard Smith, John (1982). Evolution and the Theory of Games. Cambridge: Cambridge University Press. McClennen, Edward (1997). Rationality and rules. This volume. Montana, David (1993). Strongly typed genetic programming. BBN Technical Report 7866. Cambridge, MA: Bolt, Beranek and Newman.
20 Norms as Emergent Properties of Adaptive Learning: The Case of Economic Routines Giovanni Dosi, Luigi Marengo, Andrea Bassanini, and Marco Valente
1. Introduction As Kenneth Arrow - himself one of the major contributors to rational decision theory - puts it, a system of literally maximizing norm-free agents "... would be the end of organized society as we know it" (Arrow 1987, p. 233). And indeed one only rarely observes behaviours and decision processes which closely resemble the canonical view from decision theory as formalized by von Neumann, Morgenstern, Savage, and Arrow. What then are the characteristics of norm-guided behaviours? And where do norms come from? Can they be assumed to derive from some higher-level rational choice? Or can one show different kinds of processes accounting for their emergence? In this work we shall discuss these issues and present an evolutionary view of the emergence of norm-guided behaviours (i.e., routines) in economics. We shall call rules all the procedures linking actions and some representation of the environment. In turn, representations are likely to involve relations between environmental states and variables and require the fulfilment of certain conditions (IF-THEN rules). It is a familiar definition in Artificial Intelligence and cognitive psychology (see Newell and Simon 1972 and Holland et al. 1986). Of course, representations may encompass both environmental states and internal states of the actor; and the action part may equally be a behaviour in the environment or an internal state, such as a cognitive act.1 Further, we shall call norms that subset of rules which pertain to socially interactive behaviours and, in addition, have the following characteristics:
(1) they are context-dependent (in ways that we shall specify below), and (2) given the context, they are, to varying degrees, event-independent, in the sense that, within the boundaries of a recognized context, they yield patterns of behaviour which are not contingent on particular states of the world. This definition of norms is extremely broad in scope and also encompasses behavioural routines, social conventions and morally constrained behaviours.2 Thus, our definition includes the norm of not robbing banks, but excludes robbing or not robbing banks according to such criteria as expected utility maximization; it includes the "rules of the game" in game theoretical set-ups, but excludes the highly contingent behaviours which rational players are supposed by that theory to engage in thereafter. Our argument is divided into two parts. First, we ask what is the link between norms, so defined, and the "rational" decision model familiar in the economic literature. In particular we shall address the question whether, whenever one observes those types of norm-guided behaviours, they can be referred back to some kind of higher-level rational act of choice among alternative patterns of action. We shall claim that this is not generally the case. The empirical evidence, even in simple contexts, of systematic departures of judgments and actions from the predictions of the rationality model is now overwhelming.3 Here, however, we are not going to discuss such evidence; rather, we shall pursue a complementary line of enquiry and show that, with respect to an extremely broad set of problems, a "rational" choice procedure cannot even be theoretically constructed, let alone adopted by empirical agents. Drawing from computation theory and from the results of Lewis (1985a) and (1985b), it can be shown that many choice set-ups involve algorithmically unsolvable problems: in other words, there is not and there cannot be a universal rational procedure of choice. An optimization procedure cannot be devised even in principle: this is the negative part of the argument. But what do people do, then? We shall suggest precisely that agents employ problem-solving rules and interactive norms, which: (1) cannot be derived from any general optimization criterion; and (2) are "robust," in the sense that they apply to entire classes of events and problems (Dosi and Egidi 1991). The second part of this work considers the origin and nature of these rules. The cases we shall consider concern the emergence of corporate routines applied to the most familiar control variables in economics, i.e., prices and quantities. However, there appears to be no a priori reason
to restrict the applicability of the argument to economic behaviours. In fact, a similar analytical approach could be applied to several other forms of patterned behaviour in social interactions.

Concerning the origin of behavioural norms, we develop a model broadly in the perspective outlined by Holland (1975) and Holland et al. (1986): various forms of inductive procedure generate, via adaptive learning and discovery, representations or "mental models" and, together with them, patterns of behaviour: "the study of induction, then, is the study of how knowledge is modified through its use" (Holland et al. 1986, p. 5). In our model, artificial computer-simulated agents progressively develop behavioural rules by building cognitive structures and patterns of action on the grounds of initially randomly generated and progressively improved symbolic building blocks, with no knowledge of the environment in which they are going to operate. The implementation technique is a modified version of Genetic Programming (cf. Koza 1992 and 1993), in which agents (firms) are modeled by sets of symbolically represented decision procedures which undergo structural modifications in order to improve adaptation to the environment. Learning takes place in an evolutionary fashion and is driven by a selection dynamics whereby markets reward or penalize agents according to their revealed performances.

A major point in the analysis which follows is that the representations of the world in which agents operate and their behavioural patterns co-evolve through the interaction with the environment and the inductive exploratory efforts of agents to make sense of it (actually, in our model, they cannot be explicitly distinguished).4 Indeed, we show that, despite the complexity of the search space (technically, the space of λ-functions), relatively coherent behavioural procedures emerge. Of course, none of us would claim that empirical agents learn and adapt in a way which is anything like Genetic Programming or, for that matter, any other artificially implementable formalism (but, similarly, we trust that no supporter of more rationalist views of behaviour would claim that human beings choose their course of action by using fixed-point theorems, Bellman equations, etc.). We do, however, conjecture that there might be a sort of "weak isomorphism" between artificial procedures of induction and the ways actual agents adapt to their environment.

The final question that we address concerns the nature of the behavioural patterns that emerge through our process of learning and market selection. In particular, in the economic settings that we consider, are these patterns an algorithmic approximation to the purported rational behaviours which the theory simply assumes? Or do they have the features of relatively invariant and context-specific norms (or
routines) as defined earlier? It turns out that, in general, the latter appears to be the case: surviving agents display routines, like mark-up pricing or simple imitative behaviour (of the "follow-the-leader" type), in all the environments with which we experimented, except the simplest and most stationary ones. Only in the latter do we see the emergence of behaviours not far from what supposedly rational agents would do (and, even then, co-operative behaviours are more likely to come out than what simple Nash equilibria would predict5). The context dependence of emerging routines can be given a rather rigorous meaning: the degrees of complexity of the environment and of the problem-solving tasks can be mapped into the characteristics of the emerging routines. Interestingly enough, it appears that the higher the complexity, the simpler behavioural norms tend to be and the more potentially relevant information tends to be neglected. In that sense, social norms seem to be the typical and most robust form of evolutionary adaptation to uncertainty and change.

In Section 2 we shall show that, in general, it is theoretically impossible to assume that the rationality of behaviours could be founded on some kind of general algorithmic ability of the agents to get the right representation of the environment and choose the right course of action. Section 3 presents a model of inductive learning where representations and actions co-evolve. Finally, in Section 4 we present some results showing the evolutionary emergence of behavioural routines, such as mark-up pricing.

2. Rational vs. Norm-Guided Behaviour

Let us start from the familiar view of rational behaviour grounded on some sort of linear sequence leading from (1) representations to (2) judgment, (3) choice and, finally, (4) action. Clearly, that ideal sequence can apply to pure problem-solving (for example, proving a theorem, discovering a new chemical compound with certain characteristics, etc.), as well as to interactive situations (how to deal with competitors, what to do if someone tries to mug you, etc.). At least two assumptions are crucial to this "rationalist" view: first, that the linearity of the sequence strictly holds (for example, one must rule out circumstances in which people act and then adapt their preferences and representations to what they have already done) and, second, that at each step of the process the agents are able to build the appropriate algorithm in order to tackle the task at hand.

Regarding the first issue, the literature in sociology and social psychology is rich in empirical counter-examples and alternative theories.6 Indeed, in the next section of this work, we shall present a model whereby representations and actions co-evolve.
The second issue is even more at the heart of the "constructivist" idea of rationality so widespread in economics, which claims that agents are at the very least procedurally rational.7 In turn, this implies that they could algorithmically solve every problem they had to face, if they were provided with the necessary information about the environment and the degrees of rationality of their possible opponents or partners. Conversely, the very notion of rational behaviour would turn out to be rather ambiguous if one could show that, even in principle, the appropriate algorithms cannot be constructed. In fact, computability theory provides quite a few impossibility theorems, i.e., theorems showing examples of algorithmically unsolvable problems. Many of them bear direct implications also for the micro assumptions of economic theory and, particularly, for the possibility of "naturally" assuming the algorithmic solvability of social and strategic interaction problems.8

We can distinguish between two kinds of impossibility results. First, it is possible to show the existence of classes of problems which are not solvable by means of a general recursive procedure (cf. Lewis 1985a and 1985b). This implies that economic agents who look for efficient procedures for the solution of specific problems cannot draw on general rules for the construction of algorithms, because such general rules do not and cannot exist (cf. also Dosi and Egidi 1991). Broadly speaking, we can say that nobody can be endowed with the meta-algorithm for the generation of every necessary algorithm. Second, it is possible to prove the existence of single problems whose optimal solution cannot be computed by any specific algorithm. Hence, one faces truly algorithmically unsolvable problems: economic agents cannot have readily available algorithms designing optimal strategies to tackle them. Therefore, unless they have been told what the optimal solutions are by an omniscient entity, they actually have to find other criteria and procedures to solve them in a "satisfactory" way. In fact, they need novel criteria to define what a satisfactory solution is and must inductively discover new procedures to accomplish their tasks (see, e.g., Dosi and Egidi 1991).

Let us briefly examine these two kinds of impossibility results. Lewis (1985a) and (1985b) proves a general result about the uncomputability of rational choice functions (on computable functions see also Cutland 1980 and Cohen 1987). Let P(X) be the set of all subsets of a space of alternatives X on which an asymmetric and transitive preference relation has been defined. We can roughly define a rational choice function as a set function C: P(X) → P(X) such that, for every A ∈ P(X), C(A) ⊆ A is the set of acceptable alternatives.9
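Over a finite, explicitly listed set of coded alternatives such a choice function is, of course, trivially computable; the sketch below (ours, and purely illustrative) only makes the definition concrete. What Lewis's theorems deny is precisely that any single procedure of this kind can work uniformly over all recursive subsets of the coded alternatives discussed in what follows.

```python
def choice_set(alternatives, prefers):
    """Acceptable alternatives: those not strictly dominated under `prefers`.

    This works only because `alternatives` is a finite, explicitly given set;
    no analogous procedure exists uniformly over all recursive subsets of a
    coded continuum of alternatives (Lewis 1985a, 1985b).
    """
    return {x for x in alternatives
            if not any(prefers(y, x) for y in alternatives)}

# Illustrative use: alternatives coded as integers, "larger is preferred".
print(choice_set({1, 4, 2, 7}, lambda y, x: y > x))   # {7}
```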
Lewis considers some compact, convex subset of R^n\{0} as the space X of alternatives. Among these alternatives he takes into account only the set of recursive real numbers in the sense of Kleene and Post, i.e., the set of real numbers which can be codified as natural numbers by means of a particular Gödel numbering (for more details see Lewis 1985a). Moreover, one operates directly on the codified values (which are called R-indices). Given a preference relation defined only on the space of R-indices and numerically representable by a computable function, and given some non-triviality conditions, Lewis shows not only that the related rational choice function is uncomputable, but also that so is its restriction to the sole decidable subsets.10 The result concerning the restriction to the decidable subsets of R^n is even more important than the proposition on undecidable sets (since in the latter case it may seem that the uncomputability of the function necessarily derives from the undecidability of the subsets), and it is quite powerful: it means that the functions are uncomputable even when their domains are computable.

Obviously this result does not imply that the optimal solution cannot be algorithmically determined for every A ∈ P(X). Lewis's theorems actually prove only that no automatic procedure can generate uniformly optimal solutions over the whole family of optimization problems identified by the set of all recursive subsets of R-indices of elements of X. This would be true even if there existed some specific solution algorithm for every single problem of this family (see Lewis 1985a, p. 67). The result shows, in other words, that there exist classes of well-structured choice problems (not so broad as to be meaningless from a decision-theoretic point of view) whose solution cannot be obtained by means of a general recursive procedure.

In economic theory, environmental or social interactions are usually represented by using subsets of R^n as spaces of alternative strategies. Thus, Lewis's results can be naturally extended to prove the generic uncomputability of the class of general economic equilibria and, consequently, of the class of Nash equilibria for games (see Lewis 1987). Indeed, every definition of General Equilibrium requires that agents be able to solve optimally some decision problems. Although Lewis's result is only a general impossibility one, which does not imply the uncomputability of single equilibria, its significance should not be underestimated.

Concerning game theory, it is possible to find even stronger results about the computability of Nash equilibria for specific games. Rabin's theorem (see Rabin 1957 and Lewis 1985a) shows that there is at least one infinite-stage, two-person, zero-sum game with perfect information whose optimal strategies are uncomputable. This is a particular Gale-Stewart game, which can be described as follows: let g: N → N be
a predefined total function; player A moves first and chooses an integer i ∈ N; then player B, knowing A's choice, chooses j ∈ N; finally A, who knows both i and j, chooses k ∈ N. If g(k) = i + j, A wins the game; otherwise B does. Gale-Stewart games always admit a winning strategy: if N\range(g) is infinite, then whatever i has been chosen by A, B has at least one reply that lets him win the game; otherwise A does. Consequently, it is easy to show that every Gale-Stewart game has infinitely many Nash equilibria, at least one of which is subgame perfect. Now, let g be computable and let range(g) be a recursively enumerable set S such that N\S is infinite and does not have any infinite recursively enumerable subset.11 In such a case Rabin's theorem states that the game has no computable winning strategies. Moreover, it is important to notice that the existence of such "simple" sets has been proved (see Cutland 1980), so that Rabin's theorem provides a strong result about the existence of games in which there exist Nash equilibria which are not algorithmically realizable.

Another example of the second group of results mentioned above can be found in the properties of Post systems. A Post system (Post 1943) is a formal logical system defined by a set of transformation rules, which operate on symbolic strings, and by a set of initial strings (for more details see Cutland 1980). The problem of establishing whether a string can be generated from the initial set of a fixed Post system is called a "word problem." Through computability theory it is possible to show that there is an infinite number of Post systems whose word problems are unsolvable even by specific algorithms (see, e.g., Trakhtenbrot 1963). This result has a wide significance in the economic domain. Consider, for example, production theory: it is possible to show that there is no guarantee that optimal productive processes can be algorithmically identified, even under exogenous technical progress. Therefore, it is impossible to assume that economic agents always make use of optimal processes without giving a context-specific proof.

It is worth emphasizing that these impossibility results entail quite disruptive implications not only for the "constructivist" concept of rationality, but also for the so-called as-if hypothesis (see Friedman 1953 and the discussion in Winter 1986). In order to assume that agents behave as if they were rational maximizers, one needs to represent a thoroughly autonomous selection process which converges to an optimal-strategy equilibrium, i.e., one must be able to formalize something like an automatic procedure which ends up with the elimination of every non-optimizing agent (or behaviour). However, the first group of results mentioned above implies that, for some classes of problems, we are not allowed to assume the existence
of a general and algorithmically implementable selection mechanism leading in finite time to the exclusive survival of optimal behaviours. In addition, the second group of results provides examples where one can definitely rule out the existence of any such selection mechanism. Moreover, the minimal prerequisite one needs for a selection-based as-if hypothesis on behavioural rationality is the existence of some agents who use the optimal strategy in the first place (cf. Winter 1971). But, if the set of optimal strategies is undecidable, how can we be sure of having endowed some agent with an optimal strategy? An easy approximate answer could be that if we consider a sufficiently large population of differentiated agents, we can safely suppose that some of them play optimal strategies and will eventually be selected. But how big should our population be, given that we cannot have any idea about the size of the set of possible strategies?

Finally, there is also a problem of complexity which arises in connection with rational behaviour (both under a "constructivist" view and under the as-if hypothesis). Broadly speaking, we can define the complexity of a problem in terms of the speed of the best computational procedure we could theoretically use to solve it (cf. Cutland 1980). But then the speed of environmental change becomes a crucial issue: as Winter (1986) and Arthur (1992) pointed out, the as-if view is primarily connected with a situation without change. In fact, even when the only kind of change we allow is an exogenous one, a necessary, albeit by no means sufficient, condition for the hypothesis to hold is that the speed of convergence be higher than the pace of change. However, it is easy to find many examples of games whose optimal strategies, while existing and being computable, require too much time to be effectively pursued even by a modern computer.12 Even more so, all these results on uncomputability apply to non-stationary environments, wherein the "fundamentals" of the economy are allowed to change and, in particular, various types of innovation always appear.

Hence, in all such circumstances, which plausibly are the general case with respect to problem-solving and social interactions, agents cannot be assumed to "naturally" possess the appropriate rational algorithm for the true representation of their environment (whatever that means) and for the computation of the correct action procedures (note that, of course, these impossibility theorems establish only the upper bound of computability for empirical agents). A fundamental consequence of these negative results is that one is then required to analyze explicitly the processes of formation of representations and behavioural rules. This is what we shall do in the next section, by considering the emergence of rules of cognition / action in some familiar economic examples of decision and interaction.
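To make the structure of Rabin's example above concrete, the following sketch (ours) implements the Gale-Stewart game for a deliberately easy choice of g, namely g(k) = 2k: its range (the even numbers) has an infinite but decidable complement, so B's winning reply is trivially computable here. Rabin's construction replaces this range with a simple set, so that B still has a winning strategy but no computable one.

```python
def g(k):
    """A computable total function; its range (the even numbers) has an
    infinite complement, so player B has a winning strategy."""
    return 2 * k

def play(i, b_reply, a_reply):
    """One play of the Gale-Stewart game: A picks i, B replies with j,
    A picks k; A wins if and only if g(k) == i + j."""
    j = b_reply(i)
    k = a_reply(i, j)
    return "A wins" if g(k) == i + j else "B wins"

# B's winning reply for this easy g: make i + j odd, hence outside range(g).
b_reply = lambda i: 1 if i % 2 == 0 else 2
# A's best attempt: try to hit i + j with some g(k).
a_reply = lambda i, j: (i + j) // 2

print(play(10, b_reply, a_reply))   # "B wins": 10 + 1 = 11 is odd, never equal to 2k
```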
3. Genetic Programming as a Model of Procedural Learning

Genetic Programming (cf. Koza 1992 and 1993) is a computational model which simulates learning and adaptation through a search in the space of representations / procedures. Similarly to John Holland's Genetic Algorithms (cf. Holland 1975), Genetic Programming (henceforth GP) pursues learning and adaptation by processing in an evolutionary fashion a population of structures, which are represented by fixed-length binary strings in the case of Genetic Algorithms and by symbolic functions in the case of GP. In GP the learning system (an artificial learning agent) is endowed with a set of basic "primitive" operations (such as the four arithmetic operations, Boolean operators, and if-then operators) and combines them in order to build complex procedures (functions) which map environmental variables into actions. Each artificial agent is represented by a set of such procedures and learns to adapt to the environment through an evolutionary process which involves both fitness-driven selection among existing procedures and generation of new ones through mutation and genetic recombination (crossover) of the old ones. General features of this model are the following:

1. Representations and rule-guided behaviour. A feature common to many computational models of learning, including the one presented here, is that the learning process is modeled not just as acquisition of information and probability updating, but as modification of representations and models of the world. Contrary to other similar models, however (such as genetic algorithms and classifier systems), GP models learning and adaptation as an explicit search in the space of procedures, i.e., functions in their symbolic representation, which link perceived environmental states to actions.13

2. Adaptive selection. Each artificial agent stores in its memory a set of alternative procedures of representation / action and selects at each moment of time a preferred one according to its fitness, i.e., the payoff cumulated by each procedure in the past.

3. Generation of new rules. Learning does not involve only adaptive selection of the most effective decision rules among the existing ones, but also generation of new ones. Learning and adaptation require a calibration of the complicated trade-off between exploitation and refinement of the available knowledge, on the one hand, and exploration of new possibilities, on the other. GP uses genetic recombination to create new sequences of functions: sub-procedures of the most successful existing ones are re-combined
with the crossover operator in order to generate new and possibly more effective combinations.

In GP, symbolic functions are represented by trees whose nodes contain either operators or variables. Operators have connections (as many as the number of operands they need) to other operators and / or variables; variables have no further connections and therefore constitute the leaves of the tree. Thus, every node is chosen from a set of basic functions (e.g., the arithmetic, Boolean, relational, and if-then operators) plus some variables and constants.
Basic functions can be freely defined depending on the kind of problem being faced (see Koza 1993 for a wide range of examples of applications in different problem domains). The execution cycle of a GP system proceeds along the following steps:

(0) An initial set of functions / trees is randomly generated. Each tree is created by randomly selecting a basic function; if the latter needs parameters, other basic functions are randomly selected for each connection. The operation continues until variables (which can be considered as zero-parameter functions) close every branch of the tree.

(1) Once a population of trees is so created, the relative strength of each function is determined by calculating its fitness in the given environment.

(2) A new generation of functions / trees is generated. Two mechanisms serve this purpose: selection and genetic operators. Selection consists in preserving the fittest rules and discarding the less fit ones. Genetic operators instead generate new rules by modifying and recombining the fittest among the existing ones. The generation of new (possibly better) functions / trees in GP is similar to the genetic operators proposed by Holland for Genetic Algorithms and is mainly based on the crossover operator.14 Crossover operates by randomly selecting two nodes in the parents' trees and swapping the sub-trees which have those nodes as roots. Consider, for example, two parent functions
which are depicted below in their tree representation (Figure 1). Suppose that node 4 in the first function and node 7 in the second one are randomly
selected: crossover will then generate two new "offspring" trees, corresponding to two new functions in which the selected sub-trees have been swapped (see Figure 1).
Such offspring replace the weakest existing rules, so that the number of rules stored at every moment in time is kept constant.

(3) Go back to (1).
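The following sketch (ours, with an arbitrary and much-reduced primitive set limited to the four arithmetic operations) illustrates the tree representation and the crossover step on randomly generated parents; it is not the implementation actually used in the simulations reported below.

```python
import random

# A GP tree is either a terminal (a variable name or a constant) or a tuple
# (operator, left_subtree, right_subtree). Primitives here: the four arithmetic ops.
def random_tree(depth, terminals=("x", "c", 1, 2), ops=("+", "-", "*", "/")):
    if depth == 0 or random.random() < 0.3:
        return random.choice(terminals)
    return (random.choice(ops), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    """Interpret a tree as a function of the variables bound in env."""
    if not isinstance(tree, tuple):
        return env.get(tree, tree)          # variable lookup, or the literal constant
    op, left, right = tree
    a, b = evaluate(left, env), evaluate(right, env)
    if op == "+": return a + b
    if op == "-": return a - b
    if op == "*": return a * b
    return a / b if b != 0 else 1.0         # "protected" division, a common GP convention

def nodes(tree, path=()):
    """Enumerate every node position as a path of child indices."""
    yield path
    if isinstance(tree, tuple):
        yield from nodes(tree[1], path + (1,))
        yield from nodes(tree[2], path + (2,))

def get_subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def with_subtree(tree, path, sub):
    """Return a copy of tree in which the node at `path` is replaced by `sub`."""
    if not path:
        return sub
    children = list(tree)
    children[path[0]] = with_subtree(tree[path[0]], path[1:], sub)
    return tuple(children)

def crossover(parent1, parent2):
    """Swap two randomly chosen sub-trees between the two parents."""
    p1 = random.choice(list(nodes(parent1)))
    p2 = random.choice(list(nodes(parent2)))
    return (with_subtree(parent1, p1, get_subtree(parent2, p2)),
            with_subtree(parent2, p2, get_subtree(parent1, p1)))

random.seed(0)
mother, father = random_tree(3), random_tree(3)
child1, child2 = crossover(mother, father)
print(child1, "->", evaluate(child1, {"x": 3.0, "c": 10.0}))
```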
4. Learning Pricing Procedures in Oligopolistic Markets

Let us consider one of the most typical problems of economic interaction, namely an oligopolistic market. A small group of firms face a downward-sloping and unknown demand curve, and have to set their prices simultaneously at discrete time intervals. To do so they can observe both the past values taken by the relevant market variables (quantities and prices) and the current value of such firm-specific variables as costs.
Figure 1: Example of crossover
However, they do not know either the parameters of the demand function or the prices competitors are about to set. Once all prices have been simultaneously set, the corresponding aggregate demand can be determined and individual market shares are updated according to relative prices. This interactive set-up, and the substantive uncertainty about both the exogenous environment (i.e., the demand function) and the competitors' behaviour, require agents to perform a joint search in the space of representations and in the space of decision functions: GP seems, therefore, a natural way of modeling it.

Let us examine more precisely the structure of the market we analyze in our simulations. There exists an exogenous linear demand function

(1)   p = a − b q,   with a, b > 0,

and n firms which compete in this market by choosing a price p_i. Firms are supposed to start up all with the same market share s_i:

s_i(0) = 1/n.

Price decisions are taken independently (no communication is possible between firms) and simultaneously at regular time intervals (t = 1, 2, ...). Each firm is supposed to incur a constant unitary cost c_i for each unit of production. Once all decisions have been taken, the aggregate market price can be computed as the average of individual prices,

(2)   p(t) = (1/n) Σ_i p_i(t),

and the corresponding demanded quantity q(t) is thus determined from (1). Such a quantity is divided up into individual shares which evolve according to a sort of replicator dynamics equation in discrete time,

(3)   s_i(t) = s_i(t−1) [1 + η (p(t) − p_i(t)) / p(t)],

where η is the reciprocal of the degree of inertia of the market.15 Finally, individual profits are given by

π_i(t) = [p_i(t) − c_i] s_i(t) q(t) − F_i,

where F_i are fixed costs, independent of the scale of production, but small enough to allow the firms to break even with an excess of prices over variable costs, were they to pursue Bertrand-type competition.
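Under the forms of equations (1)-(3) given above, a bare-bones rendering of the market side of the model is sketched below. The mark-up rules, the parameter values, and the renormalization of shares are all our own illustrative choices, standing in for the GP-learned pricing procedures and for bookkeeping details the text does not spell out.

```python
import random

random.seed(1)
a, b = 10000.0, 10.0            # linear demand p = a - b*q, as in equation (1)
n, eta = 5, 0.5                 # number of firms and inverse market inertia
fixed_cost = 50.0

shares = [1.0 / n] * n                                       # s_i(0) = 1/n
markups = [random.uniform(0.05, 0.40) for _ in range(n)]     # stand-ins for learned rules

for t in range(1, 51):
    cost = random.uniform(200.0, 400.0)                      # common unitary cost this period
    prices = [(1.0 + m) * cost for m in markups]             # simple mark-up pricing rules
    p_bar = sum(prices) / n                                  # equation (2): average price
    q = max(0.0, (a - p_bar) / b)                            # quantity implied by equation (1)
    # Equation (3): replicator-type share dynamics, then renormalization.
    shares = [s * (1.0 + eta * (p_bar - p) / p_bar) for s, p in zip(shares, prices)]
    total = sum(shares)
    shares = [s / total for s in shares]
    profits = [(p - cost) * s * q - fixed_cost for p, s in zip(prices, shares)]

print([round(s, 3) for s in shares])   # lower mark-ups end up with the larger shares
```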
Figure 2: Monopoly in a stationary environment.
We model these firms as artificial agents, each represented by an autonomous GP system which, at each time step t, must select one pricing rule among those it currently stores. Each artificial agent can observe at each moment of time t the following past variables (i.e., the values taken at time t−1): (1) the average industry price p(t−1); (2) the aggregate demanded quantity q(t−1); (3) the individual prices of each agent, p_j(t−1) for j = 1, 2, ..., n; (4) its own unitary cost c_i(t−1); (5) its own market share s_i(t−1). Moreover, it can observe its current unitary cost c_i(t). Each agent is then endowed with a few basic "elementary" operations, i.e., the four arithmetic operations, if-then operators, Boolean operators, and equality / inequality operators; in addition, a few integers are given as constants to each GP system. Each agent's decision rules are randomly generated at the outset, and a preferred one is chosen for action in a random way, with probabilities proportional to the payoffs cumulated by each rule in the previous iterations. Periodically, new rules are generated through crossover and replace the weaker ones.

In order to test the learning capabilities of the model, we started with the simplest case: a single agent in a monopolistic market. As shown in Figure 2, in this case, with constant costs and stable demand, the price converges rapidly to the optimal one. Figure 3 presents the behaviour16 of our artificial monopolist in more complex situations in which both the costs and the parameters of the demand curve randomly shift. It can be noticed that, in spite of the complexity of the task, our artificial monopolist "learns" a pricing rule which behaves approximately like the optimal one.17
Figure 3: Monopoly: random costs and demand.
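The rule-selection step described above - a stored rule is chosen with probability proportional to the payoff it has cumulated so far - can be sketched as follows. This is our own minimal rendering; in particular, the shift applied to negative payoffs is an assumption of ours, since the text does not say how they are handled.

```python
import random

def select_rule(rules, cumulated_payoffs):
    """Pick one stored rule with probability proportional to its cumulated payoff."""
    floor = min(cumulated_payoffs)
    weights = [p - floor + 1e-9 for p in cumulated_payoffs]   # shift so weights are positive
    return random.choices(rules, weights=weights, k=1)[0]

# Hypothetical rule labels and payoff records, for illustration only.
rules = ["markup_low", "markup_high", "follow_leader"]
print(select_rule(rules, [120.0, 40.0, 75.0]))
```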
Let us now consider an oligopolistic market. We explore two different environmental and learning scenarios. In the first one we suppose that the demand function is fixed and equal to

p = 10000 − 10q.

Moreover, unitary costs, identical for every agent, are a random variable uniformly distributed on a finite support. Finally, on the representation / action side, our artificial agents are allowed to experiment with each set of rules for, on average, 100 iterations.

In Figure 4 we report the results of a simulation which concerns an oligopolistic market with 9 firms. The average price is plotted against costs, while in Figures 5a and 5b we report some price series for individual firms. It appears that many firms - as, for example, firm 4 in the particular simulation that we show - follow a pricing strategy which closely tracks cost variations. Although the emerging rules are usually quite complex,18 they behave "as if" they were simple mark-up rules.
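To illustrate what behaving "as if" the rule were a mark-up rule can look like at the level of the evolved expressions, consider the following hypothetical example of ours (see also note 18): no mark-up coefficient belongs to the primitive set, yet a constant is assembled out of the cost variable itself.

```python
# Hypothetical evolved expression: price(t) = c(t) * ((c(t) + c(t)) / c(t)).
# The sub-expression (c + c) / c always evaluates to 2, so the whole rule
# behaves exactly like a fixed 100% mark-up on current unit cost, p = 2c,
# even though the constant 2 is nowhere among the primitives.
rule = ("*", "c", ("/", ("+", "c", "c"), "c"))

def price(c):
    return c * ((c + c) / c)

print(price(350.0))   # 700.0
```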
Figure 4: Oligopoly: inertial learning case, average price and average unit costs.
Figure 5a: Oligopoly: inertial learning case, costs and prices of agent 4.
Another typical behaviour that we observe is a follow-the-leader type of pricing rule - in 2 out of 9 firms (e.g., firm 0). Following the agent with the lowest mark-up level is an extremely simple pricing rule (as other agents' prices at t−1 are directly observable), which nonetheless allows a positive average rate of profit. Moreover, with a higher number of firms the complexity of the co-ordination task increases, and this, in turn, favours the emergence of simple imitative behaviour.19

Under the second scenario, the intercept of the demand function randomly fluctuates, drawing from a uniform distribution on the support [8000, 12000]. In addition, the individual unitary costs are given by the ratio between two variables: a component which is common to the entire industry, represented by a random variable uniformly distributed over the interval [0, 8000], and an individual productivity component, different for each firm, which follows a random walk with a drift. Finally, in this scenario we allow agents to change stochastically their sequences of rules at each period, i.e., to switch among the procedures of representation / action which they store. In this way, one forces behavioural variability (and, of course, this decreases the predictability of each and every competitor).
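Under the assumptions just listed, the stochastic environment of this second scenario can be sketched as follows. The drift and noise parameters, and the multiplicative form of the productivity random walk (chosen here only to keep productivity positive), are arbitrary details of ours.

```python
import random

random.seed(2)
T, n = 200, 9
drift, noise = 0.01, 0.02

# Individual productivity: a random walk with drift for each firm
# (implemented multiplicatively here so that productivity stays positive).
productivity = [[1.0] for _ in range(n)]
for t in range(1, T):
    for firm in productivity:
        firm.append(firm[-1] * (1.0 + drift + random.gauss(0.0, noise)))

# Unitary costs: a common component, uniform on [0, 8000] and identical for
# all firms in a given period, divided by each firm's own productivity.
common = [random.uniform(0.0, 8000.0) for _ in range(T)]
costs = [[common[t] / productivity[i][t] for t in range(T)] for i in range(n)]

# Demand intercept fluctuating uniformly on the support [8000, 12000].
intercepts = [random.uniform(8000.0, 12000.0) for _ in range(T)]

print(round(costs[0][-1], 1), round(intercepts[-1], 1))
```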
Figure 5b: Prices of agent 0 and agent 2.
Figure 6: Oligopoly: continuous adjustment case, average costs and prices.
This extreme learning set-up prevents any rule from settling down and proving its value in the long term while facing rather stable behaviours of the competitors. Despite all this, the main conclusions reached under the former scenario hold: mark-up-type policies still turn out to be the most frequent and most efficient response to environmental uncertainty.20 Figures 6 and 7 illustrate the cost and price dynamics for the industry.

In other exercises, not shown here, we consider similar artificial agents whose control variables are quantities rather than prices. Again, as in the examples presented above, a monopolist facing a stationary environment does discover the optimal quantity rule. However, under strategic interaction the agents do not appear to converge to the underlying Cournot-Nash equilibrium; rather, co-operative behaviours emerge. In particular, in the duopoly case, the decision rule has "Tit-for-Tat" features (cf. Axelrod 1984) and displays a pattern of the type "do at time t what your opponent did at time t−1."
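The "do at time t what your opponent did at time t−1" pattern can be spelled out as a one-line quantity rule; the sketch below (ours) does so for the duopoly case, with an arbitrary opening quantity.

```python
def tit_for_tat_quantity(opponent_quantities, opening_quantity=100.0):
    """Quantity rule with Tit-for-Tat features: produce at t whatever the
    opponent produced at t-1; the opening quantity is an arbitrary seed."""
    if not opponent_quantities:            # first period: no history to imitate yet
        return opening_quantity
    return opponent_quantities[-1]

print(tit_for_tat_quantity([100.0, 120.0, 110.0]))   # 110.0
```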
Figure 7: Oligopoly: continuous adjustment case, costs and prices of market leader.
It has already been mentioned that a straightforward "semantic" interpretation of the procedures which emerge is often impossible. However, their inspection - in the simplest cases - together with the examination of the behavioural patterns that they entail, allows an assessment of their nature. Some remarkable patterns appear. First, procedures which "look like" optimization rules emerge only in rather simple and stationary environments. Second, as the complexity of the representation / decision problem increases, rules evolve towards simpler ones, involving the neglect of notionally useful information and very little contingent behaviour. More precisely, the procedures which the evolutionary dynamics appear to select either neglect the strategic nature of the interactive set-up - thus transforming the decision problem into a game "against nature" - or develop very simple imitative behaviours. In all these circumstances the resulting collective outcomes of the interaction depart significantly from the equilibria prescribed by a theory of behaviour grounded on standard rationality assumptions (this applies both to the Cournot-Nash and to the Bertrand set-ups, corresponding to quantity-based and price-based decision rules respectively).

5. Conclusions

In this work we have begun to explore the properties of the procedures of representation / decision which emerge in an evolutionary fashion via adaptive learning and stochastic exploration in a space of elementary functions. Following a negative argument on the general impossibility of endowing agents with some generic and natural optimization algorithm, we presented some preliminary exercises on the co-evolution of cognition and action rules. The results highlight the evolutionary robustness of procedures which - except in the simplest environments - have the characteristics of norms or routines, as defined earlier.

Of course, one can easily object that real agents base their understanding of the world on a pre-existing cognitive structure much more sophisticated than the elementary functions we have assumed here, and that, therefore, our results might not bear any implication for the understanding of the actual evolution of norms. On the other hand, the problem-solving tasks that empirical agents (and, even more so, real organizations) face are several orders of magnitude more complex than those depicted in this work. There is no claim of realism in the model we have presented; however, we suggest that some basic features of the evolution of the rules for cognition and action presented here might well hold in all those circumstances where a "representation gap" exists between the ability that agents pre-possess in interpreting their environment and the "true" structure of the latter. This is obviously a field of analysis where stylized modeling exercises on
evolutionary learning can only complement more inductive inquiries from, for example, social psychology and organizational sciences.
Acknowledgments

Support for the research at different stages has been provided by the International Institute for Applied Systems Analysis (IIASA), Laxenburg, Austria; the Italian Ministry of University and Research (MURST 40%); the Italian Research Council (CNR, Progetto Strategico "Cambiamento Tecnologico e Sviluppo Economico"); and the Center for Research in Management, University of California, Berkeley.
Notes

1 Clearly, this very general definition of rules includes as particular cases also the procedures for decision and action postulated by "rational" theories.

2 These finer categorizations are quite familiar in political science: see, for example, the discussion in Koford and Miller (1991). By contrast, the broader notion of norms adopted here includes both moral constraints and positive behavioural prescriptions (i.e., both "morality" and "ethicality" in the sense of Hegel).

3 Cf., for instance, Kahneman, Slovic, and Tversky (1982), Kahneman and Tversky (1979), and Herrnstein and Prelec (1991).

4 On the evolution of representations, see also Margolis (1987). In economics, such a co-evolutionary perspective is held by a growing minority of practitioners. More on it can be found in Nelson and Winter (1982), Dosi et al. (1988), March (1988), Marengo (1992), Dosi and Marengo (1994), and Arthur (1992).

5 This is, of course, in line with the findings of Axelrod (1984) and Miller (1988).

6 See for discussions, among others, Elster (1986), Luhmann (1979), and, with respect to economics, also Dosi and Metcalfe (1991).

7 The central reference on the distinction between "substantive" and "procedural" rationality is, of course, Herbert Simon: see especially Simon (1976), (1981), (1986).

8 See Lewis (1985a) and Rustem and Velupillai (1990). Note that, loosely speaking, algorithmic solvability means that one is able to define a recursive procedure that will get you, say, to a Nash equilibrium. This turns out to be a question quite independent of proving a theorem which shows the existence of such an equilibrium.

9 Given a preference relation ≻ on a set of objects X and a non-empty set A belonging to X, the set of acceptable alternatives is defined as: C(A, ≻) = {x ∈ A: there is no y ∈ A such that y ≻ x}.

10 Broadly speaking, we call a set decidable if there exists an algorithm which is always able to completely identify its elements, i.e., if the membership function which characterizes the set is computable.
11 A recursively enumerable set can be defined as a set whose partial characteristic function is computable or, equivalently, as the range of some computable function (see, e.g., Cutland 1980 or Cohen 1987).

12 Think, for instance, of chess or Rubik's Cube.

13 A more general formal tool in the same spirit, which we intend to apply in the near future, is presented in Fontana (1992) and Fontana and Buss (1994), where it is applied in the domain of biology to self-reproducing systems.

14 For a discussion of the power of crossover as a device for boosting adaptation, see Holland (1975) and Goldberg (1989).

15 Note that this replicator-type dynamics is consistent with the assumption of a homogeneous-good output whenever one allows for imperfect information, or search costs, or inertial behaviour by consumers. The latter are not explicitly modeled here, but they implicitly underlie the system-level mechanisms of formation of industry demand and its distribution across firms as defined by equations (1) to (3). A possible metaphor for these mechanisms is the following sequence: (1) each firm sets its price; (2) a "public statistical office" collects all of them and announces the "price index" of the period (as from equation [2]); (3) on the grounds of that index, consumers decide the quantity they want to buy; (4) as a function of the difference between the announced average price and the price charged by their previous-period suppliers, consumers decide whether to stick to them or go to a lower-priced one. Clearly, a stochastic reformulation of equation (3) would be more adequate to describe the mechanism, but, for our purposes, the main property that we want to capture - namely, the inertial adjustment of the market to price differentials - is retained also by the simpler deterministic dynamics. Were agents to behave as in conventional Bertrand models, equation (3) would still converge, in the limit, to canonic Bertrand equilibria. It must also be pointed out that our model is not concerned with the population dynamics of the industry but primarily with the evolution of pricing rules. Therefore, we artificially set a minimum market share (1%) below which firms cannot shrink. According to their past performance record, firms may die, in which case they are replaced by a new agent which stochastically recombines some of the behavioural rules of the incumbents.

16 For an easier interpretation, we plot in these figures only the last 100 iterations of the best-emerging rule.

17 In this case and in the following ones the pricing rules which are actually learned by our artificial agents are usually long and difficult to interpret semantically, but they behave "as if" they were nearly optimal pricing rules.

18 The complexity of the rules is at least partly due to the fact that our agents have to produce constants (such as mark-up coefficients) that they do not possess in their set of primitive operations; constants have therefore to be built out of operations on variables that yield constant values (e.g., (X+X)/X = 2).
19 Econometric estimates of the form

ln p_t = β_0 + Σ_{k≥0} β_k ln c_{t−k} + Σ_{k≥1} γ_k ln p_{t−k} + ε_t,

for the industry as a whole, always yield an R² above 0.90, with significant coefficients for current costs and the first lag on prices only, and always insignificant lagged costs. Conversely, for the majority of the firms, no lagged variable significantly adds to the explanation: firms appear to follow a stationary rule of the simplest mark-up type, p_i(t) = m_i c_i(t). However, for some firms (the "imitators") current prices seem to be set as a log-linear combination of costs and the lagged average price of the industry, or the lagged price of one of the competitors (as in the example presented in Figure 5).

20 As may be expected, estimates of the form presented in note 19 yield a somewhat lower R² as compared to the previous case - both for the industry aggregate and for the individual firms - but still most often in the range between 0.6 and 0.8. Also, the other properties of individual pricing procedures stand and, in particular, simple stationary rules characterize the most successful players, as assessed in terms of cumulated profits or average market shares. Finally, in analogy with the previous learning scenario, the adjustment dynamics in aggregate prices - where the first lag on prices themselves turns out to be significant - appear to be due primarily to an aggregation effect over the mostly stationary individual rules. For a general theoretical point on this issue, cf. Lippi (1988).
References

Arrow, K. (1987). Oral history: An interview. In G. R. Feiwel (ed.), Arrow and the Ascent of Modern Economic Theory. London: Macmillan.

Arthur, W. B. (1992). On Learning and Adaptation in the Economy. Working paper 92-07-038, Santa Fe Institute. Santa Fe, NM: Santa Fe Institute.

Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books.

Cohen, D. (1987). Computability and Logic. Chichester: Ellis Horwood.

Cutland, N. J. (1980). Computability: An Introduction to Recursive Function Theory. Cambridge: Cambridge University Press.

Dosi, G., and M. Egidi (1991). Substantive and procedural uncertainty. An exploration of economic behaviours in complex and changing environments. Journal of Evolutionary Economics, 1: 145-68.

Dosi, G., Ch. Freeman, R. Nelson, G. Silverberg, and L. Soete (eds.) (1988). Technical Change and Economic Theory. London: Francis Pinter.

Dosi, G., and L. Marengo (1994). Some elements of an evolutionary theory of organizational competences. In R. W. England (ed.), Evolutionary Concepts in Contemporary Economics (Ann Arbor, MI: University of Michigan Press).

Dosi, G., and J. S. Metcalfe (1991). On some notions of irreversibility in economics. In P. P. Saviotti and J. S. Metcalfe (eds.), Evolutionary Theories of Economic and Technological Change (Chur: Harwood Academic).
Elster, J. (1986). The Multiple Self. Cambridge: Cambridge University Press.

Fontana, W. (1992). Algorithmic chemistry. In C. Langton, J. D. Farmer, and S. Rasmussen (eds.), Artificial Life (Redwood City, CA: Addison Wesley).

Fontana, W., and L. W. Buss (1994). What would be conserved if "the tape were played twice"? Proceedings of the National Academy of Sciences USA, 91: 757-61.

Friedman, M. (1953). Essays in Positive Economics. Chicago: University of Chicago Press.

Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison Wesley.

Herrnstein, R. J., and D. Prelec (1991). Melioration: A theory of distributed choice. Journal of Economic Perspectives, 5: 137-56.

Hirschman, A. (1977). The Passions and the Interests. Princeton, NJ: Princeton University Press.

Hodgson, G. (1988). Economics and Institutions. London: Polity Press.

Hogarth, R. M., and M. W. Reder (eds.) (1986). Rational Choice. Chicago: Chicago University Press.

Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press.

Holland, J. H., K. J. Holyoak, R. E. Nisbett, and P. R. Thagard (1986). Induction: Processes of Inference, Learning and Discovery. Cambridge, MA: MIT Press.

Kahneman, D., P. Slovic, and A. Tversky (eds.) (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.

Kahneman, D., and A. Tversky (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47: 263-91.

Koford, K. J., and J. B. Miller (eds.) (1991). Social Norms and Economic Institutions. Ann Arbor, MI: University of Michigan Press.

Koza, J. R. (1992). The genetic programming paradigm: Genetically breeding populations of computer programs to solve problems. In B. Soucek (ed.), Dynamic, Genetic and Chaotic Programming (New York: John Wiley).

Koza, J. R. (1993). Genetic Programming. Cambridge, MA: MIT Press.

Lewis, A. (1985a). On effectively computable realization of choice functions. Mathematical Social Sciences, 10: 43-80.

Lewis, A. (1985b). The minimum degree of recursively representable choice functions. Mathematical Social Sciences, 10: 179-88.

Lewis, A. (1986). Structure and Complexity. The Use of Recursion Theory in the Foundations of Neoclassical Mathematical Economics and the Theory of Games. Unpublished paper, Department of Mathematics. Ithaca, NY: Cornell University.

Lewis, A. (1987). On Turing Degrees of Walrasian Models and a General Impossibility Result in the Theory of Decision-Making. Technical report no. 512, Institute for Mathematical Studies in the Social Sciences. Stanford, CA: Stanford University.

Lippi, M. (1988). On the dynamics of aggregate macro equations: From simple micro behaviours to complex macro relationships. In G. Dosi, Ch. Freeman,
R. Nelson, G. Silverberg, and L. Soete (eds.), Technical Change and Economic Theory (London: Francis Pinter, 1988).

Luhmann, N. (1979). Trust and Power. Chichester: Wiley.

March, J. G. (1988). Decisions and Organizations. Oxford: Basil Blackwell.

Marengo, L. (1992). Coordination and organizational learning in the firm. Journal of Evolutionary Economics, 2: 313-26.

Margolis, H. (1987). Patterns, Thinking and Cognition: A Theory of Judgement. Chicago: Chicago University Press.

Miller, J. H. (1988). The Evolution of Automata in the Repeated Prisoner's Dilemma. Working paper 89-003, Santa Fe Institute. Santa Fe, NM: Santa Fe Institute.

Nelson, R. R., and S. G. Winter (1982). An Evolutionary Theory of Economic Change. Cambridge, MA: Harvard University Press.

Newell, A., and H. Simon (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.

Post, E. (1943). Formal reductions of the general combinatorial decision problem. American Journal of Mathematics, 65: 197-215.

Rabin, M. O. (1957). Effective computability of winning strategies. In Contributions to the Theory of Games, Vol. III. Annals of Mathematics Studies, 39: 147-57.

Rustem, B., and K. Velupillai (1990). Rationality, computability and complexity. Journal of Economic Dynamics and Control, 14: 419-32.

Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3: 211-29.

Simon, H. A. (1976). From substantive to procedural rationality. In S. J. Latsis (ed.), Method and Appraisal in Economics (Cambridge: Cambridge University Press).

Simon, H. A. (1981). The Sciences of the Artificial. Cambridge, MA: MIT Press.

Simon, H. A. (1986). Rationality in psychology and economics. Journal of Business, 59, supplement: 209-24.

Trakhtenbrot, B. A. (1963). Algorithms and Automatic Computing Machines. Boston, MA: D. C. Heath.

Winter, S. G. (1971). Satisficing, selection and the innovating remnant. Quarterly Journal of Economics, 85: 237-61.

Winter, S. G. (1986). Adaptive behaviour and economic rationality: Comments on Arrow and Lucas. Journal of Business, 59, supplement: 427-34.