Collective Rationality: Equilibrium in Cooperative Games
Paul Weirich
2009

Oxford University Press, Inc., publishes works that further Oxford University’s objective of excellence in research, scholarship, and education.

Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi Kuala Lumpur Madrid Melbourne Mexico City Nairobi New Delhi Shanghai Taipei Toronto

With offices in
Argentina Austria Brazil Chile Czech Republic France Greece Guatemala Hungary Italy Japan Poland Portugal Singapore South Korea Switzerland Thailand Turkey Ukraine Vietnam

Copyright © 2009 by Oxford University Press, Inc.

Published by Oxford University Press, Inc.
198 Madison Avenue, New York, New York 10016
www.oup.com

Oxford is a registered trademark of Oxford University Press

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of Oxford University Press.

Library of Congress Cataloging-in-Publication Data
Weirich, Paul, 1946–
Collective rationality : equilibrium in cooperative games / Paul Weirich.
p. cm.
Includes bibliographical references and index.
ISBN: 978-0-19-538838-1
1. Game theory. 2. Cooperation. 3. Equilibrium (Economics) I. Title.
HB144.W46 2009
330.01’5193—dc22
2009009550

1 3 5 7 9 8 6 4 2
Printed in the United States of America on acid-free paper
For my brothers and sisters
Preface
Groups of agents perform acts. What are the standards of rationality for a group’s acts? Is a group’s act rational if it results from each member’s acting rationally? These are questions of perennial philosophical interest. This book presents standards of rationality for a group’s acts. They are generalizations of standards for individuals. I argue that the individual rationality of acts by the group’s members ensures the rationality of the group’s acts. I also argue that standards of collective rationality are attainable, in contrast with goals of collective rationality that circumstances may put out of reach. Collective rationality is a theoretical concept belonging to a general theory of rationality, and its explication enriches that theory.

Game theory treats complex interactions of individuals in social situations. It formulates for ideal cases standards of collective rationality such as efficiency and shows how the rational acts of individuals ensure their attainment. An account of collective rationality constructs philosophical foundations for game theory. In Equilibrium and Rationality (1998) I introduce an attainable type of equilibrium, strategic equilibrium, that generalizes Nash equilibrium in noncooperative games. This book extends strategic equilibrium to cooperative games.

The study of collective rationality has a rich history and invigorates contemporary social philosophy. It illuminates social institutions, such as social contracts and economic markets, and thereby contributes to the foundations of the social and behavioral sciences. This book addresses scholars investigating human interaction. Its arguments and results are accessible to college students and of interest to specialists.

I am grateful to the Mellon Foundation for a postdoctoral fellowship 1978–80 and an interdisciplinary faculty fellowship 1985–86, both at the University of Rochester. These fellowships introduced me to game theory and theories of collective rationality. I thank David Austen-Smith, Richard Niemi, William Riker, William Thomson, and David Weimer for guiding my studies during those fellowships. William Lucas, while visiting the University of Rochester in 1984, explained to me Robert Aumann and Michael Maschler’s ideas about
objections and counterobjections in coalitional games. He stimulated a train of thought that culminated in my account of strategic equilibrium in coalitional games. The University of Missouri Research Council and the University of Missouri System Research Board funded preliminary work during the academic year 2002–03, and they funded completion of my project during the academic years 2006–08.

For comments on various sections, I thank participants in conferences at Carnegie Mellon University, the University of Caen, the University of Colorado, the University of Liverpool, the University of Missouri, the University of North Carolina–Chapel Hill, the University of Oklahoma, the University of Tilburg, Washington University, the 2006 meeting of the Philosophy of Science Association, and my 2008 seminar on decision theory. I am especially grateful for comments from my manuscript’s anonymous readers and from Sara Chant, Robert Johnson, Christopher Haugen, Kirk Ludwig, Andrew Melnyk, Peter Vallentyne, and Xinghe Wang.

It was a pleasure to work with Peter Ohlin (my editor), Molly Wagener, Joseph Albert Andre, and all the other fine staff at Oxford University Press and its associates.
Contents
1. Rationality Writ Large
   1.1. Collective Acts
   1.2. Method
   1.3. Guide
2. Agents and Acts
   2.1. Agents
   2.2. Acts
   2.3. Control
   2.4. Evaluability
3. Rationality
   3.1. Metatheory
   3.2. Attainability
   3.3. Comprehensiveness
   3.4. Compositionality
4. Groups
   4.1. Extension
   4.2. Efficiency
   4.3. Collective Utility
   4.4. Compositionality
5. Games of Strategy
   5.1. Games
   5.2. Solutions
   5.3. Standards
6. Equilibrium
   6.1. Standards and Procedures
   6.2. Utility Maximization
   6.3. Self-Support
   6.4. Strategic Equilibrium
   6.5. Realization of an Equilibrium
   6.6. Appendix: Realization of a Nash Equilibrium
7. Coordination
   7.1. Strategy and Learning
   7.2. Changing the Rules
   7.3. An Efficient Equilibrium
   7.4. Preparation
   7.5. Intentions
8. Cooperative Games
   8.1. Joint Action
   8.2. Opportunities for Joint Action
   8.3. Coalitional Games
   8.4. The Core
   8.5. An Empty Core
9. Strategy for Coalitions
   9.1. A Coalition’s Incentives
   9.2. Paths of Incentives
   9.3. Strategic Equilibria in Coalitional Games
10. Illustrations and Comparisons
   10.1. The Majority-Rule Game
   10.2. Comparisons
   10.3. Conflict
   10.4. Collective Standards
11. Compositionality
   11.1. Underlying Games
   11.2. Confirmation
   11.3. Agreement Games
   11.4. The Core and Utility Maximization
   11.5. Strategic Equilibrium and Self-Support
12. Implications
   12.1. Social Institutions
   12.2. Strategic Equilibrium and Institutions
   12.3. Theoretical Unity
   12.4. Future Research
Notes
Bibliography
Index
1
Rationality Writ Large
Collective rationality is rationality for groups of people. It has a role in a host of philosophical projects such as the design of a social contract. This brief introductory chapter sets the stage for the book’s theory of collective rationality. It orients the book’s project with respect to the large literature on collective rationality.
1.1 Collective Acts

Many acts are products of several people working together. A crew sails a ship. A team wins a game. A committee adopts a resolution. The agents responsible for the sailing, the victory, and the resolution are composed of other agents. Composite agents are so commonplace that basic conventions of language recognize them. Take the sentence, “We carry the table.” The act it reports is not the same as your carrying the table and my carrying the table. The carrying is not an act each of us performs but rather an act the two of us perform together. Basic grammar provides for expression of an agent’s plurality. It yields action sentences with plural subjects.

Just as anthropomorphizing about the behavior of dogs may lead from the literal attribution of desires to the metaphorical attribution of deliberations, anthropomorphizing about groups may lead from the literal attribution of acts to the metaphorical attribution of sensations. Binmore (1994: 142) criticizes accounts of collective rationality as fruitless anthropomorphizing. A fruitful theory of collective rationality rests on literal truths about groups. Its companion account of collective agency separates the literal from the nonliteral. Groups literally perform acts. On the other hand, because they lack minds, they do not literally have desires and beliefs to guide their acts. How they manage to be agents despite lacking minds is the next chapter’s main topic. It discusses agents, acts, and free acts, and argues that to be evaluable for rationality, an act does not require a mind, just freedom.
The acts of groups of people are objects of evaluation. A partnership may run its business badly, and a couple may raise their children well, for instance. What standards of rationality apply to a group’s acts? How may a group meet those standards? This book addresses these two questions. It puts evaluations of collective acts on solid ground and uses these evaluations to fruitfully direct groups and their members.

A common principle of collective rationality is consistency. A committee should not act inconsistently. It should not resolve to award two fellowships and then select three fellowship recipients. Another common principle is efficiency. A group should not adopt an option if another option is better for all members. For example, a family should not go to a particular restaurant if all family members prefer another restaurant. Symmetry is also a common principle. If a group’s members are in equivalent situations, the group’s acts should promote their interests equally. If two people bargaining to divide a windfall have equal leverage, they should divide the windfall equally. Later chapters explain these and additional principles.

What acts by a group’s members suffice for the group’s rationality? Suppose that some acts of members constitute a certain act of the group. For example, the votes of a committee’s members constitute the committee’s passing a resolution. Chapter 4 argues that if the members’ acts are rational, then the group’s act is rational. This sufficient condition of collective rationality, the book’s main principle of collective rationality, offers a way of checking proposed standards of collective rationality. Chapters 5 through 12 use it to appraise game theory’s principles of collective rationality without appeal to collective intentions or other collective analogues of an individual’s mental states. Although the book formulates general standards of rationality applicable to groups as well as individuals, it holds that collective rationality emerges from rationality for individuals.
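A schematic gloss may help fix these two principles; the regimentation is mine, not the book’s notation, and the group $G = \{1, \ldots, n\}$, the member utility functions $u_i$, and the rationality predicate $R$ are introduced here only for illustration.

Efficiency: the group should not adopt an option $a$ when some alternative $b$ is better for all members; that is, $a$ is permissible only if
$$\neg \exists b \, \forall i \in G : u_i(b) > u_i(a).$$

Sufficiency (the book’s main principle, argued in Chapter 4): if the members’ acts $a_1, \ldots, a_n$ constitute the group’s act and each is rational, then the group’s act is rational:
$$\bigl(\forall i \in G : R(a_i)\bigr) \rightarrow R\bigl((a_1, \ldots, a_n)\bigr).$$

On this gloss, efficiency constrains which collective acts count as rational, while sufficiency says how a rational collective act arises from rational individual acts.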
1.2 Method

Groups act, and their acts are evaluable for rationality. Their evaluation is a normative enterprise. Theorists pursue various projects concerning collective rationality. I formulate standards of collective rationality treating, for example, coordination and cooperation. I do not explain human behavior that meets those standards. That project falls within the social and behavioral sciences. My project is strictly philosophical.

The literature on collective rationality approaches it from many directions. Philosophical game theory is my approach. Game theory is a source of principles of collective rationality. I formulate principles such as realization of an equilibrium in a game. A theory of collective rationality in return refines ideas of game theory. Solutions to games are collective acts, and a theory of collective rationality elucidates them. This book’s theory of collective rationality incorporates principles of game theory and contributes to game theory. It generalizes proposals about equilibrium in games.

The tree in Figure 1.1 shows my topic’s relation to other similar topics. The double lines indicate my topic’s lineage in this tree of topics concerning behavior. This book formulates normative principles concerning rational behavior. It treats principles of rationality for a group of individuals in a strategic situation, that is, a game of strategy, because such principles are more general than are principles for a group of individuals not interacting strategically. It treats principles for single-stage games because they are simpler than are principles for multistage games. Principles of rationality for collective acts in single-stage games form the book’s main topic.

FIGURE 1.1 Research topics. [The tree divides behavior into descriptive and normative principles; normative principles into strategic and nonstrategic situations; strategic situations into single-stage and multistage games; and single-stage games into individual acts and collective acts. Double lines mark the lineage from behavior through normative principles, strategic situations, and single-stage games to collective acts.]

A unified theory of rationality gains plausibility from coherence and gains explanatory power from organization. For theoretical unity, I embed the book’s account of collective rationality in a general theory of rationality. A systematic approach to rationality for individuals and for groups adjusts standards of individual and collective rationality to obtain a consistent, organized set of standards. Principles of rationality restricted to individuals are not general. Introducing general principles applying to individuals and groups promotes unity. Organizing the principles so that realization of principles for individuals entails realization of principles for groups also promotes unity. Because individual rationality suffices for collective rationality, principles of collective rationality are consistent with principles of individual rationality.

This book’s general theory of rationality applies to collective acts in games of strategy. Equilibrium is a standard of rationality for such collective acts. So the theory’s application yields a unified account of equilibrium in cooperative and noncooperative games of strategy. The same type of equilibrium governs both types of game, taking account of opportunities for joint action that individuals enjoy in cooperative games. The general theory reconciles diverse ways of evaluating collective acts in cooperative and noncooperative games.
The general theory also derives principles of collective rationality from principles of individual rationality. It shows that equilibrium in a game emerges from the rationality of individual players. These results unify decision theory and game theory.1

1.3 Guide

Chapters 2 through 4 explain collective rationality. Chapter 2 analyzes collective acts, and Chapter 3 characterizes rationality. Chapter 4 combines their conclusions to obtain an account of rational collective acts. A lengthy presentation of collective rationality is necessary because no short, technical definition captures its richness.

Chapters 5 through 12 formulate standards of collective rationality, show that meeting them follows from individual rationality, and apply them. Chapters 5 through 7 treat standards of collective rationality in noncooperative games, and Chapters 8 through 11 treat such standards in cooperative games. Chapter 12, the final chapter, applies standards of collective rationality to arenas of collective action such as the law and markets.

Solutions to games are paradigm examples of collective rationality, and a theory of collective rationality has the job of justifying them. The account of collective rationality formulated in Chapters 2 through 4 grounds a systematic assessment of solutions to noncooperative and cooperative games.
2
Agents and Acts
Principles of collective rationality apply to acts a group performs. Rationality does not evaluate all acts a group performs, however. So this chapter specifies the collective acts that standards of rationality govern. It settles points about collective agents and collective acts that ground rationality’s evaluation of collective acts.

For theoretical unity, identification of collective agents and collective acts should apply a general account of agents and acts covering both individuals and groups. Which collective acts are evaluable for rationality? The answer should rest on a general account of the features of acts that make them evaluable for rationality. How does rationality evaluate collective acts? The answer should stem from a general treatment of composite acts, including acts composed of a single individual’s acts as well as acts composed of several individuals’ acts. For unity, this chapter treats individual and collective agents together.

The chapter’s main topic is free agents and free acts, because rationality focuses on them. However, for orientation, it begins with a general account of agents and acts. It motivates its position but, to move expeditiously toward collective rationality, does not argue against alternative positions. First, the chapter identifies the simplest agents that rationality evaluates. Then, for those agents, it identifies the simplest acts that rationality evaluates. Finally, it explains how rationality’s evaluation of composite acts, including collective acts, depends on its evaluation of simple acts.
2.1 Agents

An agent is an entity that may perform an act. An agent’s performing an act is its causing an event. Hence, agents are causes of events. Not all agents are free. Because rationality treats free agents, this section attends especially to them. I use the term free in its ordinary sense. A free agent is capable of performing free acts, and an agent autonomously controls its free acts. This section states its assumptions about freedom but does not present a philosophical definition of
freedom. It has another task. Rationality evaluates differently individuals and groups because individuals are simple agents and groups are composite agents. This section’s task is explaining simple agency and then composite agency.1

There are many ways of dividing agents into simple and nonsimple agents. Some divisions classify an agent as nonsimple if other agents temporally or physically constitute it. A theory of rationality attends to simplicity with respect to autonomy. A simple agent is a source of autonomous control. It is a source of autonomous control because it has a unified mind. That is, it has a mind with psychological integration. Consequently, if it has beliefs, desires, and intentions, then the beliefs and desires influence the intentions. Other agents may influence a simple agent through persuasion, but, having a unified mind, it originates free action.

A nonsimple agent is not a source of autonomous control. It acts through other agents. A nonsimple agent composed of other agents acts through its members’ acts. Its act has its members’ acts as parts, whereas a simple agent’s act does not have other agents’ acts as parts.

The book’s principles of rationality evaluate acts of ordinary people and acts of ordinary groups of people, not acts of every conceivable agent. For instance, they do not evaluate acts of nonsimple, noncomposite agents (assuming that these are possible). This section explains the common view that people are simple agents, whereas groups of people are composite agents. Its brief explanation bypasses metaphysical issues a theory of rationality need not settle and puts aside exceptional cases that the book’s principles do not address.

A person is a simple agent not composed of other free agents. Attending to the relevant type of simplicity and composition handles objections to this view. A person is composed of interacting molecules. That composition does not refute simple agency, because the molecules are not autonomous agents. Suppose that the brain is composed of modules that operate independently. Reflex, for example, operates independently of a person’s will so that a person cannot commit suicide by holding his breath. Nonetheless, reflex is not free; independence is not the same as autonomy. Reflex is not an autonomous agent forming a component of a person. Suppose that a person’s left brain and right brain are autonomous agents. Still, a person has a unified mind and is a simple agent. She is a source of autonomous control. She does not act through the agents that realize her. Her control does not divide into the control of each of her brain’s hemispheres. Their acts realize but do not compose her acts. Only if the left and right brains do not form a unified mind does she act as a group does. Only in that case is she a composite agent. I assume that this case does not arise.2

A person has temporal stages. Do those stages make a person a composite agent? Persons deliberate over time, and their deliberations involve standing desires and beliefs concerning the long-term consequences for themselves of their options. They adopt for themselves plans for a period of time. Person-instants, or person-slices, are fleeting entities. They do not deliberate over time,
they have no lasting desires or beliefs, they do not entertain an option’s long-term consequences for themselves, and they do not adopt for themselves plans for the future. Forming the beliefs and desires that ground free acts takes time. Therefore, a free act generally requires a temporally extended agent, even if the act is just a finger movement performed on a whim for its own sake.3 Although a momentary, God-like perfect agent may have beliefs and desires and act autonomously, all in an instant, human psychology requires time for acquisition of beliefs and desires and their generation of decisions. To put aside the objection that person-stages are too brief to be agents, I take the candidate person-stages to be person-intervals rather than person-instants.

Does a person-stage perform an act? Perhaps a person at a time performs an act, and person-stages are just means by which a person acts. If so, a person-stage is not an agent. I also put aside this objection and grant that a person-stage acts. Does a person-stage act freely? Perhaps a person-stage merely serves a person who acts freely through it. Its agency depends on a person’s agency. A person-stage inherits beliefs and desires from prior stages, for instance. These are grounds for doubting its autonomy. To put aside this objection, too, I grant that a person-stage is a free agent. It may even have a unified mind, and so meet a prerequisite of simple agency.

Nonetheless, temporal composition by person-stages does not make a person a nonsimple agent. Although person-stages realize a person, persons and not person-stages are simple agents. Person-stages are dependent agents. They act while a person acts. A person controls a person-stage; a person-stage does not control a person. A person-stage’s control of an act derives from a person’s control of the act. A person, not a person-stage, is a source of control. Using source of control as the criterion, a person, not a person-stage, is a simple agent. An unusual person with multiple personalities operating serially may be a nonsimple agent, but I put aside such cases.

Treating person-stages as agents forming a group that constitutes a person fits some facts. Nonetheless, there are many disanalogies between groups and persons taken as sets of person-stages. The members of a group may be members of other groups, but a person-stage belongs to one person only. It cannot be a stage of another person. If a person-stage performs an act, so does the person. If a group’s member performs an act, it does not follow that the group performs the act. Stages of a person form distinct agents, but a person acts through each of his stages, so he also acts at each stage of an extended act. A person’s extended act is a composite of person-stages’ acts, but it is also a composite of the person’s acts.

Take, for example, a person’s walking to the store. Suppose that a series of person-intervals act, each taking a step, and the collection of their acts forms the person’s walk. The person-intervals do not form a composite agent that performs the extended act as the clapping of individuals constitutes an audience’s applause. The intervals do not act simultaneously. Next, consider two overlapping intervals of the same person. They are not distinct agents who simultaneously perform
distinct acts that together constitute a composite act of a composite agent. When the overlapping person-intervals act simultaneously, a person-interval forming the overlap acts for both. The act realized during the overlap constitutes the act the two overlapping person-intervals perform simultaneously.

Rationality’s basic evaluations treat simple agents’ acts. Rationality evaluates a person-stage’s act by evaluating the person’s act during the stage. It does not evaluate an act of a group’s member by evaluating a contemporaneous act of the group. A person-stage acts rationally by serving the person’s interest and not the stage’s interest. A group’s member acts rationally by serving his own interest and not the group’s interest, except in cases that warrant altruism. A club has interests, perhaps to maintain the treasury, different from the interests of its members at a time, which may be to exhaust the treasury. A club-stage does not serve the club as a person-stage serves the person. The person’s interest settles the person-stage’s interest, but the club’s interest does not settle the club-stage’s interest. Rationality requires the person-stage to care about the interest of the person but does not require the club-stage to care about the interest of the club, except in cases that warrant altruism.4

According to the traditional dualistic view of a person, a person is composed of body and soul. Consequently, a person is neither a simple nor a composite agent. A person is not a simple agent because the soul, not the whole person, is the source of control. A person is not a composite agent because the body is not an agent. Consider a shortest person-stage that counts as a free agent. It is neither a simple nor a composite agent. It is not a source of control and is not composed of other free agents. This book’s principles of rationality apply only to simple agents and agents composed of simple agents. They treat people and groups of people, assuming that people are simple agents.

Groups of people are agents but not persons. They are not alive and lack bodies, minds, consciousness, sentience, and moral rights. It is generally permissible to terminate a group by disbanding its members, for instance. The law treats corporations as legal persons. It accords corporations legal powers, rights, and duties similar to the legal powers, rights, and duties of humans. Nonetheless, corporations are persons only in a technical, legal sense.5

How do people combine to form collective agents? Sober and Wilson (1998: 92–98) define a group as a collection of interacting agents. Interaction is part of a common conception of a group, but I recognize groups without interacting members. Suppose that people in Scotland put on their woolies in September. They do this in response to a change in the weather and not in response to each other. I count the group as a collective agent and their combined acts as a collective act. An area’s population forms a group in my sense and performs a collective act when the area’s people act. Space and time may separate a group’s members. Because they need not causally interact, they need not be proximate.

Why make the requirements of collective agency so easy? Combined acts of arbitrary collections of people are evaluable for rationality. Standards of collective
rationality are more useful the greater is their reach. Greater scope improves the theory of rationality.

What makes a group an agent able to perform collective acts? People push a boulder. They act as one and think as one, but do not form one body or one mind. Suppose that they push, but none is aware that the others push. They still act together. If each person in a group contributes to a charity, they cooperate in supporting the charity even if they do not coordinate and are unaware of the others’ contributions. Groups are agents in virtue of their autonomy. Their members have freedom, and so groups have freedom.

An agent causes events and if free is an agent in the sense the theory of rationality adopts. An amoeba and a school of fish move, but their acts are not free and so they are not agents in rationality’s sense. Control of events and goal-directed behavior are not sufficient for agency in rationality’s sense because even thermostats control events and behave in goal-directed ways. Does the agency of free agents depend on traits besides their freedom? Ludwig (2004: 347–48) says that agency requires coherent beliefs and reasonableness in acting. However, autonomy, not reflection, separates humans and other animals, and makes humans agents. Groups without beliefs and a faculty of reason may be free and so agents. A group performs a collective act freely, even without the members’ awareness that they participate in its performance, provided that each member performs his part freely. An agent’s autonomous control of its acts does not require a free will or a mind in cases where control does not arise from possession of a will or a mind.

Some collective agents such as crowds may last only a short time. Other collective agents endure and may have a temporal analysis into agent-stages. A club may undergo membership changes and may be temporally composed of collective agent-stages. It may perform an extended act through the momentary acts of many temporal stages. Are groups simple agents despite being spatially and temporally composed, just as people are simple agents despite being temporally composed? They are not, because they lack unified minds. A person is a simple agent because a person has a unified mind in which beliefs, desires, and intentions may form and interact. Having a unified mind is a necessary condition for being a simple agent. Spatiotemporal composition does not disqualify a group as a simple agent, but lacking a unified mind does. Only agents with unified minds are sources of autonomous action.

A combination of objects may realize multiple objects. Molecules of clay may realize both a lump of clay and a statue, for instance. If a group of people were somehow organized psychologically so that it had a unified mind, then it would realize a simple agent besides realizing a collective agent. Every combination of agents forms a collective agent, but in science-fiction cases it may also form an individual agent. Those cases are outside the scope of the book’s theory of rationality.6

Hurley (1989: 145–48) compares person-stages, persons, and groups. She claims that the unit of agency is a matter of choice. Metaphysics is not that accommodating, however. Although a person may decide to act as a team player, that decision does not change the character of her agency. A person is a simple agent. Persons are the source of acts of person-stages and of groups. A simple agent has a unified mind and may act freely on its own. A composite agent may act freely because of the acts of others. Neither a person-stage nor a group is a simple agent although both act freely. A person-stage acts because a person acts, and a group acts because its members act.

2.2 Acts

Rationality evaluates acts. This section characterizes acts both individual and collective. It also reviews the role that freedom, intentions, and reasons play in an act’s production.

An act’s role in explanations reveals its nature. Why did the class laugh? Not because the teacher spoke, but because the teacher told a joke. Although the teacher’s speaking and the teacher’s telling a joke have the same realization, their propositional characterizations distinguish these events. Explanations relate events, and events are individuated propositionally. That is, a proposition characterizes an event and distinguishes it from other events. Acts are propositional, as are other events figuring in explanations. An act’s expression indicates the proposition that individuates the act.7

Because acts are propositional, they are fine-grained. Raising an arm differs from signaling the speaker, although by doing the first act one does the second. Evaluation uses acts’ fine-grained individuation to prevent inconsistencies. Suppose that raising an arm is good because it is a way of stretching, but signaling the speaker is bad because it creates a distraction. These evaluations are inconsistent if raising an arm is the same as signaling the speaker. Acts’ fine-grained individuation prevents inconsistency. Evaluation does not target a coarse-grained act that is simultaneously an arm raising and a signaling. It targets a fine-grained act a proposition represents. Two fine-grained acts with the same realization may receive different evaluations.8

A referee’s contribution to the collective act of officiating a football game may be raising two arms to signal a touchdown. His raising his left arm is not a component of that collective act. His raising his left arm is a component of a different fine-grained collective act such as the movement of the referees during the game. An individual’s act may be a component of his contribution to a collective act and yet not be a component of the collective act, as the nose of a sergeant is a part of him but not part of a regiment to which he belongs. Transitivity of parthood may fail when the relevant sense of parthood varies.

Agents perform acts. Specifying an act requires specifying its agent. Caesar’s murder is an incompletely specified act. Brutus’s stabbing Caesar is a completely
Hurley (1989: 145–48) compares person-stages, persons, and groups. She claims that the unit of agency is matter of choice. Metaphysics is not that accommodating, however. Although a person may decide to act as a team player, that decision does not change the character of her agency. A person is a simple agent. Persons are the source of acts of person-stages and of groups. A simple agent has a unified mind and may act freely on its own. A composite agent may act freely because of the acts of others. Neither a person-stage nor a group is a simple agent although both act freely. A person-stage acts because a person acts, and a group acts because its members act. 2.2 A CTS Rationality evaluates acts. This section characterizes acts both individual and collective. It also reviews the role that freedom, intentions, and reasons play in an act’s production. An act’s role in explanations reveals its nature. Why did the class laugh? Not because the teacher spoke, but because the teacher told a joke. Although the teacher’s speaking and the teacher’s telling a joke have the same realization, their propositional characterizations distinguish these events. Explanations relate events, and events are individuated propositionally. That is, a proposition characterizes an event and distinguishes it from other events. Acts are propositional as are other events figuring in explanations. An act’s expression indicates the proposition that individuates the act.7 Because acts are propositional, they are fine-grained. Raising an arm differs from signaling the speaker, although by doing the first act one does the second. Evaluation uses acts’ fine-grained individuation to prevent inconsistencies. Suppose that raising an arm is good because it is a way of stretching, but signaling the speaker is bad because it creates a distraction. These evaluations are inconsistent if raising an arm is the same as signaling the speaker. Acts’ fine-grained individuation prevents inconsistency. Evaluation does not target a coarse-grained act that is simultaneously an arm raising and a signaling. It targets a fine-grained act a proposition represents. Two fine-grained acts with the same realization may receive different evaluations.8 A referee’s contribution to the collective act of officiating a football game may be raising two arms to signal a touchdown. His raising his left arm is not a component of that collective act. His raising his left arm is a component of a different fine-grained collective act such as the movement of the referees during the game. An individual’s act may be a component of his contribution to a collective act and yet not be a component of the collective act, as the nose of a sergeant is a part of him but not part of a regiment to which he belongs. Transitivity of parthood may fail when the relevant sense of parthood varies. Agents perform acts. Specifying an act requires specifying its agent. Caesar’s murder is an incompletely specified act. Brutus’s stabbing Caesar is a completely
specified act. Rationality’s evaluation of an act begins by specifying who performs the act. A group performs a collective act such as building a house. No member of the group performs that act. May an act have two or more agents? Suppose that according to club rules, the president votes as a club member, and her vote is a tiebreaker in case the members’ votes are evenly split. Is her vote both a club member’s vote and the president’s vote? No, if the agents finely individuated differ, then the acts also differ because they are finely individuated. An agent is free if and only if capable of performing free acts. A free act is an exercise of an agent’s autonomous control of its acts. A free act need not be a product of deliberation. A basketball player may spontaneously catch a ball thrown her way. A tennis player may react to her opponent’s volley before becoming conscious of the ball’s direction. The catching and reacting are unreflective but free. Rationality evaluates only free acts. Are free acts the same as intentional acts? An answer starts with a characterization of intentional acts. The simple view says that a person’s act is intentional if and only if it is intended. However, not every intentional act is intended. A person walking with a friend may step over a puddle without being aware of doing that because conversation absorbs him. He does not intend to step over the puddle but his stepping over it is intentional. An agent may intend to take a walk but not intend to take every step. Every step is intentional nonetheless. A baby may cry without an intention to cry. The intention requires a concept of the act. An intentional act does not require a concept of the act. It may occur without an intention to perform the act. So the baby’s crying may be intentional. Moreover, an intentional act need not be performed intentionally and need not be performed on purpose. A person in a fright may scream neither intentionally nor on purpose. The scream may nonetheless be an intentional act. A person’s decision is an intentional act even if the person does not decide intentionally or on purpose. Mele (1992: 5–9, 105–6, 112–15, 123, 133) espouses Davidson’s and Goldman’s view that to be intentional an act must be done for a reason. This view allows for intentional acts performed without an intention to perform them. Some theorists object to it, however. Millar (2002) presents objections drawn from Anscombe and Hursthouse. Anscombe notices that doodling may be intentional although not done for a reason. Hursthouse describes a case in which someone out of frustration kicks a car that won’t start. She observes that the act is intentional but not done for a reason. In defense of Mele’s view, Millar (2002: 122) suggests that the intentional acts in these cases may be done for reasons, such as satisfying an urge. Whether or not intentional acts are acts done for a reason, intentional acts differ from free acts. Some addictive acts, although intentional and done for a reason, are not free. Also, some free acts are neither intentional nor done for a reason. For example, a person closing a door in a drafty room may slam the door and do it freely without intending to do it and without doing it for a reason.
Moreover, a free act need not arise from any sort of intention or reason. I freely blink now although I do not intend to blink or blink for a reason. A free act may even run contrary to intention and reason. Because of habit, a typist may freely press the shift key to type a capital letter when typing his name although he intends and has reason on this occasion to type exclusively in lower case. His slip is free even if behind it is no reason, intention, intentional act, or act intentionally done. Theorists adopt various accounts of collective acts for various purposes. How should a theory of collective rationality characterize collective acts? As with collective agents, I adopt a broad account to make rationality’s evaluations have broad scope. A group’s act is a composite act whose components are acts of the group’s members. Concerted action is not necessary. Jackson (1987: 93) takes any combination of acts of a group’s members as an act of the group. I follow this usage of the term collective act. Some collective acts, as some extended acts, do not require universal participation. A person may obtain an education even if he plays hooky some days. He does not need the cooperation of every person-stage. Similarly, a crowd may raise a ruckus even if not all members are rowdy. I count such collective acts as combinations of acts of all the group’s members but recognize that some members’ contributions are not significant. The acts of a proper subset of a group constitute an act of the group in special cases, such as cases of delegation, in which the subset acts for the whole group, for example, cases in which a quorum acts for a whole committee.9 To simplify, I put aside acts performed on behalf of a group by another agent such as a subgroup. I put aside action by proxy and treat only cases in which a group acts for itself. When a group acts in the sense I adopt, every member contributes an act, although it may be a null act. For example, a group’s meeting includes a contribution by every member, even those who do not attend the meeting. If a member of a group walks and other members are idle, the combination is a collective act. A collective act not done by proxy requires an act of each member. In an election, the majority of voters forms a proxy for the electorate. It settles the election for the electorate. The acts of all of a group’s members taken together make a collective act of the group, but its type may differ from the type of a subset’s act. One percent of the audience’s applauding does not constitute the audience’s applauding. Similarly, a group does not scratch if some member scratches. The member’s scratching and the others’ null acts form an act of the group, but not a collective scratching. Conventions define some collective acts. A convention may settle whether a committee’s voting profile constitutes its passing a resolution. However, convention does not pronounce on the percentage of a crowd whose moving constitutes the crowd’s moving. Conventions do not always settle what counts as a collective act.
Some authors restrict collective acts more than I do. Some require a mutually beneficial plan of joint action, or require that a group have a collective intention. For example, McMahon (2001: 39–40) requires that a group’s act follow a cooperative scheme. I do not adopt these restrictions on collective action. Rationality evaluates collective acts that do not involve coordination or collective intentions. Voters may elect a candidate without coordinating their votes or forming a collective intention to elect the candidate. Rationality nonetheless evaluates the election. Searle (1995: 23–26), Tuomela (1995: Chap. 2), Gilbert (1996: 1–2; 2000: 2–3; 2001: 114–16), Bratman (1999: Part 2, 105–6), and Pettit (2003) take collective acts to arise from collective intentions. This restriction requires an account of collective intentions. Even if an intention is a functional state, it seems to be so complex a functional state that only agents with minds realize the state. So the restriction needs a technical account of collective intentions. I do not adopt the restriction because free acts do not require intentions. Groups may act freely despite lacking intentions. That a bad collective act was unintentional may excuse it according to certain principles of evaluation. However, I do not investigate such principles. I join theorists who do not require a collective intention for a collective act. Ludwig (2007) argues that joint action does not require a collective intention. He notes that we may, for example, pollute the environment together although we do not intend to do that either individually or collectively. Chant (2006) argues that a collective act does not require a collective intention. For example, two people each flip a switch. The switches together sound an alarm. So the two people sound the alarm. Their sounding it is a collective act even if the act does not result from a collective intention. Schelling (1971) and Young (1998: Chap. 1) point out that a group may create segregation without any individual or the group having an intention to create it. Segregation may arise from individual decisions to be near members of one’s race. Some theorists may require that a group realize an event for a reason before counting the realization of the event as a collective act. This differs from requiring a collection intention, if acting for a reason has an externalist interpretation so that reasons are states of affairs in the external world. Bittner (2001: Chap. 12) takes rational agency as the capacity to act for a reason. He says that to act for a reason is to respond to one’s environment, to a feature one is aware of, a state of affairs in one’s ken. Ducks act for reasons and so have rational agency. May groups act for a reason? Suppose that an agent may respond to external reasons without being aware of them and without acting intentionally. May such responsiveness to reasons characterize a group’s act? In contrast to the response of ducks, a collective act is free, so the appropriate responsiveness to reasons must be free. May acts, including collective acts, be events an agent freely realizes for a reason? Reasons for a group may be its interests, such as efficiency, survival, and other promotions of the interests of
the group’s members and the group’s function. A group without thinking may advance its interests as a colony of bacteria without thinking advances its interests. Taking acts to be events freely realized for a reason is too restrictive, however. Sometimes a group acts freely even when it does not promote its interest. Its collective interest may not even exist when members’ interests conflict. Suppose, however, that a group’s interest, its good, exists whenever the group acts. Still, not all its acts arise from its interest. Some are contrary to its interest. A corporation’s function of maximizing profits may yield interests contrary to its shareholders’ interests in protecting the environment. Through its shareholders’ acts, the corporation may protect the environment against its interest. A collective act need not promote any collective interest and need not be done for any collective reason in any plausible sense. For breadth of treatment, it is better not to limit a group’s acts to those its reasons prompt. The literature on collective acts asks whether they reduce to acts of individuals in some sense so that some form of methodological individualism holds. Are collective agents anything over and above the individuals that compose them? Do the acts of individuals explain the acts of groups? Kincaid (1990), Gilbert (1996, 2000), Vogler (2001), Graham (2002: 2, 80–84), and Yi (2002) argue that collective acts are in some sense irreducible to individual acts. Pettit (1993: 111–12, 148–51; 2001) blends a type of individualism with a type of holism. Bratman (1999: 122–23, 129), Moulin (2003: 3–4), and Ludwig (2007) endorse forms of individualism. I do not enter this debate, which turns on empirical matters. I do not assume reducibility of all collective acts to individual acts. I claim only that a collective act’s realization requires a combination of individual acts. This claim is not controversial. Both sides of the current debate about individualism agree that individual acts constitute collective acts.10 2.3 C ONTROL A theory of rationality treats free acts. A free act is an exercise of control. Rationality evaluates a free agent’s exercise of control of his acts. What type of control does it assess? This section distinguishes types of control. The next section identifies the type of control standards of rationality govern. Inanimate objects, events, animals, and groups act in addition to people. The rock broke the window. The storm ripped off the shingles. The dog licked the bone. The committee imposed a new graduation requirement. Only the last act is clearly autonomous. May an animal act freely? Animals may have some autonomy. A dog’s owner may train it to heel. That response to training indicates a degree of autonomy. Although autonomy comes in degrees, I assume that among common agents only people and groups composed of them have enough autonomy to be classified as free.11 Not every event an agent causes is his act. A pedestrian enters a crosswalk and causes a driver to stop. The driver’s stopping is not the pedestrian’s act. However,
an agent’s complete control of an event he causes makes it an act of his. Control, even when sufficient for an act’s attribution to an agent, is not enough to make an act free. Not all acts of an agent are exercises of free control. Snoring during sleep, for example, is not a free act. The sleeper controls his snoring, but not freely. According to the compatibilist tradition, which I follow, a person’s act is free only if it is caused by her mind. Although I do not offer an account of mental causation, a topic of Mele (1992: Chap. 1) and Pettit (1993), I assume that mental states such as beliefs and desires have a role in causing acts. Mele (2003: Chap. 2) presents a congenial causal account of free or human action. He says (p. 38), “Human actions are, essentially, events that are suitably caused by appropriate mental items, or neural realizations of those items.” Beliefs and desires are among the appropriate mental items.12 Freedom does not require awareness. An infant freely acts without awareness. A free act may not be intended. A typical decision is free but not necessarily intended. An act’s being voluntary entails awareness of its performance. A person may scratch himself without being aware of the act. The act may be free without being voluntary. An act may fail to be voluntary without being involuntary and so not free. A free act is not compelled and results from but does not require a voluntary act. An infant acts freely but not voluntarily because not cognitively developed enough to have a will. Groups act freely but not voluntarily because they lack the mental faculty of volition. If we say that a group of friends goes bowling voluntarily, we mean that each member goes voluntarily. A person’s acting freely is not the same as her doing what she wants to do. In cases of weak will, a person acts freely but does not do what she wants to do. The act is a product of her beliefs and desires and so free although contrary to her preferences all things considered. A person’s acting freely at a time does not require her action’s independence from acts of other agents. In science-fiction cases, other agents realize her unified mind. As long as she has a unified mind and her acts proceed from her mental states, she acts freely. It does not matter if acts of other free agents realize her acts. What features of a collective act make it a free act? Groups of people perform acts but lack many features of individuals who are agents. A collective agent has interests but lacks desires because it lacks a mind. Although a group’s members have minds, the group itself does not have a mind. Nonetheless, in typical cases factors outside a group do not settle its act, and the group could have done otherwise. It acts freely. A committee freely passes a motion to adjourn if its members freely vote to adjourn. A group’s act is free if free acts of the group’s members constitute it. Desires of the group need not cause it. Its freedom is derivative. Attention to collective acts prompts expansion of accounts of free acts to allow for derivatively free acts. A characterization of free collective acts does not use mental states except indirectly in an account of individuals’ freedom. Groups of people may act freely without intentions, just as infants may act freely without intentions. A group’s free
acts do not depend on collective mental states such as collective intentions, beliefs, desires, preferences, and decisions. Attribution of a mental state to a group is common, especially when every member of the group has the mental state, but it is nonliteral because groups lack minds. Although we speak of groups deciding, this manner of speech rests on analogy between people and groups of people. Deciding is forming an intention. A group does not decide because it does not form an intention. A group, we say, decides to have a party. This locution is not literal and typically describes the group’s agreement to have a party. An agreement is a collective act but occurs without a collective mind. A committee decides in a standard sense to table a motion, but that sense is not literal. Pace Copp (1995: 119), Searle (1995), and Gilbert (1996, 2000), groups do not decide, strictly speaking. Person-stages and a group’s members have autonomous control over acts. Groups have distributed and decentralized but still autonomous control. Groups act through members, and persons act through person-stages. The person and the group act freely. They control their acts through their stages and their members, respectively. Plans may coordinate person-stages and group members to perform composite acts with a purpose, but a combination of acts of personstages or group members may be free without coordination. A free collective act resembles a free sequence of an individual’s acts. The sequence is free if its steps are free even if the sequence is unplanned. Does a group control its members, or do the members control the group? Control may proceed in both directions, but I put aside an agent’s control of other agents and treat only an agent’s control of its acts. A group’s control over its acts arises from its members’ control over their acts. It realizes acts by its members’ realizing acts. A group’s act depends on its members’ acts, so a group controls its acts through its members’ control of their acts. For a group to have autonomous control over an act, its control must be independent of outside control. However, a group’s act is autonomous despite its dependency on members’ acts because the dependency is internal. A person’s extended act is autonomous although it requires act-stages realized by person-stages. An act’s autonomy may survive dependency on other acts and features of the act’s agent and its constituents. For example, a group of hikers controls its pace by each member’s controlling her pace. The members’ control of their paces is compatible with the group’s control of its pace. The members control acts that constitute the group’s act. The group and a member do not both control the same act. I do not define control analytically but introduce it by description. Control need not be free. A thermostat exercises control. Free control originates in a unified mind. Free control need not be intentional. A group of investors may control the stock market’s collapse even if they do not intentionally cause its collapse. Also, control may be partial. A single investor may exercise partial control over the stock market.
Direct control is relative to a time. An agent’s signature may suffice for completion of a sale. Then he has direct control of the sale. However, because the sale once depended on others’ acts, he did not always have direct control of it.

For convenience, one may spatially and temporally expand background conditions to extend direct control if the extension does not affect evaluation. For instance, one may attribute to a person direct control of taking a deep breath (which takes time) by expanding background conditions. In some cases one may grant that a person has direct control of flipping a switch or taking a step although these acts extend beyond the person and the current moment. One may say that a driver directly controls a car’s path, although, strictly speaking, she exercises control through the steering mechanism. An evaluation of an act such as taking a walk may assume that the agent lives during its period and may count the act as directly controlled. Extensions of direct control that include an agent’s proximate spatial and temporal environment among background conditions often do not skew an act’s evaluation.

Does direct control apply only to basic acts? According to action theory’s standard terminology, a basic act is an act not performed by performing other acts. A person’s moving his finger is not done by his doing other things. That act is basic. Basic acts are directly controlled, momentary acts.14 They yield nonbasic acts. For example, waving a hand may yield greeting a friend. Similarly, moving a finger may yield turning on the light. These are examples of nonbasic acts that are consequences of basic acts. A basic act causes nonbasic acts but may also constitute rather than cause nonbasic acts. In a case of raising both arms by raising each arm, constitution yields a nonbasic act.

A decision is the formation of an intention to act. It is an immediate, momentary, basic mental act. An agent directly controls it. A reflective agent is certain of his ability to realize it. He realizes it freely, knowing that he does. An agent need not perform an act he directly controls by trying directly to perform it. He need not first form the intention to perform it. Spontaneous directly controlled action is possible. A basic act’s realization may involve neurons’ firings. These firings are not acts of a person by which he performs the basic act, however. A person’s decision is a basic act of his because it is not achieved by other acts of his even if, say, it is realized by independent components of his mind.

Extended acts are performed by performing momentary acts. Hence extended acts are not basic. Only momentary acts are basic. Not all momentary acts are basic, however. A momentary act may be composite and so nonbasic. A driver may at the same moment accelerate and change lanes. Because such momentary composite acts are performed by performing their components, they are not basic. Moreover, even a momentary act’s being noncomposite does not ensure basicness. A person may perform a noncomposite act by performing another noncomposite act. She may move her hand by flexing her muscles.15
He need not first form the intention to perform it. Spontaneous directly controlled action is possible. A basic act’s realization may involve neurons’ firings. These firings are not acts of a person by which he performs the basic act, however. A person’s decision is a basic act of his because it is not achieved by other acts of his even if, say, it is realized by independent components of his mind.

Extended acts are performed by performing momentary acts. Hence extended acts are not basic. Only momentary acts are basic. Not all momentary acts are basic, however. A momentary act may be composite and so nonbasic. A driver may at the same moment accelerate and change lanes. Because such momentary composite acts are performed by performing their components, they are not basic. Moreover, even a momentary act’s being noncomposite does not ensure basicness. A person may perform a noncomposite act by performing another noncomposite act. She may move her hand by flexing her muscles.15

A person directly controls basic acts, such as decisions, but also some nonbasic acts, such as raising two arms, that are composites of basic acts. A referee raising two arms to signal a touchdown directly controls each arm and so the two together. An agent does not directly control a nonbasic act that is an external causal consequence of a basic act, for example, turning on a light by moving a finger. Of course, spatial and temporal extensions of strict direct control may count acts as directly controlled even if they are not basic acts or composites of basic acts. A sentence’s utterance may count as directly controlled according to an extension although it is performed by moving the mouth and tongue and is composed of utterances of words.

Temporally immediate control contrasts with extended control. Direct control generates an act immediately. No time passes between the exercise of control and the act’s performance. If performance required time, obstacles blocking it might arise, and, given that possibility, control is not direct. Direct control entails immediate control, but the reverse entailment does not hold. A person directly controls waving his hand, whereby he simultaneously greets his friend. He immediately controls greeting his friend but does not directly control greeting him. Direct control dispenses with external intermediaries even if they operate instantaneously. Immediate control operates without time’s passage but may involve auxiliary acts and so may not be direct. An agent directly and immediately controls his basic acts, such as his decisions.

The theory of rationality treats control and so takes acts directly controlled to be simple acts. These simple acts may be composite, as in the case of raising two arms to signal a touchdown. Hence, not all simple acts are basic acts, although all basic acts of individuals are simple acts. Classification of acts directs their evaluation. Fundamental standards treat simple acts. Are any collective acts simple? Perhaps there are basic collective acts, and those basic acts are simple. Are any collective acts basic? A group’s act is performed by its members performing their parts. This dependency does not make all collective acts nonbasic, however.
The agent of a collective act differs from the agents of acts constituting the collective act. An agent’s act is basic for the agent if the agent does not perform it by performing other acts. A group passes a resolution not by performing another act but because its members vote a certain way. Passing the resolution is a basic act of the group. Although some collective acts are basic, a group does not directly control any of its acts. A group acts through its members’ acts even when it performs a basic act. So basic collective acts are not simple.

Besides direct control, another less demanding type of control is full control. A person may move a collection of books from one house to another house by moving each book from the first house to the second. The move is under her full control. A group elects a president by each member’s voting for that candidate. The election is under the group’s full control. An agent fully controls an act if the agent controls it, and nothing outside the agent also controls it. If the agent were to realize the act, no outside agent of any sort would exercise any control over it. The absence of outside control excludes even outside nonfree control of the type that thermostats exemplify. An agent’s control of an act may be partial because the act depends on features of the environment that nature or other agents control.

Direct control is immediate, whereas full control may not be immediate. Future extended acts are matters of indirect control because they are composed of acts not in the agent’s immediate control. An agent does not directly control a sequence of future basic acts. Performing the sequence requires continuing to live, which is outside his direct control. However, an agent fully controls a future extended act if at each moment in the extended act’s period, he directly controls the extended act’s stage at that moment. A group fully controls an act if each member directly controls his component of that act. Full control depends on direct control with respect to relevant times, members, or times and members.

An act is in an agent’s full control if it is composed of basic acts. An act may be composed of multiple basic acts done at the same time, such as raising two arms, or composed of multiple acts done in a sequence, say, the steps of a walk. A current act in an agent’s full control is in his immediate and direct control. Raising two arms is in an agent’s full, immediate, and direct control. Walking to the store is in an agent’s full control but not in his direct or immediate control.

Direct control implies full control, but the converse entailment does not hold. A person exercises full control over a sequence of basic acts because she exercises direct control over each step at the time for it. Nothing outside of her exercises control over any step. She does not exercise direct control over the sequence because her direct control of stages occurs step by step and is not immediate. A group does not have direct control over its act because it works through its members. It has full control over its act if nothing outside the group has control over any part of the act. A group fully controls its act in the sense that its act is exclusively the product of acts its members fully control, one by one. That collective control is not direct, however.
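A schematic summary may fix ideas. The notation is introduced here only for illustration and is not the book’s: write D(a, x) for agent a’s direct control of act x, F(a, x) for full control, x_t for x’s stage at time t, and x_m for member m’s component of x. Then, roughly,

    F(i, x) \iff \forall t \in T(x):\ D(i, x_t)    (an individual i and an extended act x)
    F(g, x) \iff \forall m \in g:\ D(m, x_m)       (a group g and a collective act x)

On this sketch, direct control is the limiting case in which the agent itself realizes the act in a single stage, so D(a, x) yields F(a, x) but not conversely, in line with the entailment just claimed. The biconditionals suppress the further requirement that nothing outside the agent controls any part of x.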
The boundaries of direct and full control are indefinite. Direct control takes an agent’s internal state and in extensions the agent’s proximate environment as fixed background. Full control takes additional extensions of the agent’s temporal environment as fixed background. Context clarifies the reach of direct control and full control. I do not elaborate context’s clarification of their reach because my points about principles of rationality assume only that acts fully controlled are acts directly controlled or composites of acts directly controlled. A person has full control over a sequence of basic acts given that he lives during their period. An agent with incomplete empirical information may not know that an extended act is in his full control. An ideal agent knows at least that his current decisions are in his full, immediate, and direct control. He has that empirical knowledge of his abilities. Full control may be uncertain even for an ideal agent because it depends on whether some outside agent intervenes.

An option of an agent is a possible act of the agent that is free and that the agent fully controls, for example, a possible basic act. An option at a time is composed of basic acts at the time. Does a group have options? At a time it may freely realize various acts. The free acts it fully controls are its options. At the time a committee votes on a resolution, passing it is in the committee’s full and immediate control. Its passing the resolution is an option the group has at the time.

A group’s options arise from its members’ options. An option at a time is either a basic act or a combination of basic acts. An individual directly controls realization of an option at a time. An option an agent controls not directly but just fully has components. Extended acts have temporal components. Free, fully controlled, indirectly controlled, momentary acts have components. Signaling the speaker by raising a hand does not have components but is not a counterexample because it is not fully controlled. In the case of a group’s option, the components are members’ free acts rather than the group’s acts.

An agent’s options at a time are acts it fully and immediately controls at the time. Full control of a composite act requires direct control of each of the composite’s components at the component’s location. A person fully controls his extended act if he directly controls each temporal component at the time for it. A group fully controls an act at a time if the group at each member (i.e., each member) directly controls the member’s component of the collective act.

An agent’s control over an option entails his ability to realize it freely. It does not entail knowledge of control. An agent does not accidentally make decisions, but may accidentally perform basic acts besides decisions, such as finger movements. Direct control does not entail awareness of control. An agent may be ignorant of acts in his full control, also. He may be excused for failing to realize an option because he did not realize it was in his full control. Ideal agents know the acts in their immediate direct control, but humans may lack that knowledge. When they lack that knowledge without irrationality, their lack of knowledge may excuse defective exercises of control.
2.4 EVALUABILITY

The next chapter examines rationality thoroughly. This section just identifies the acts that rationality evaluates. It assumes that evaluating an act for rationality entails holding the act’s agent responsible for the act and briefly examines the relevant type of responsibility.

Character traits such as irascibility and emotions such as anger may be irrational despite lack of control over them. Also, beliefs and desires may be irrational despite lack of control. Although rationality evaluates states and traits an agent does not control, it evaluates an act only if its agent controls it. An agent’s control of the act is a prerequisite of responsibility for it.

Acts are events and have causal consequences. One may evaluate an act by evaluating its consequences. Evaluating an act for either objective or subjective utility differs from evaluating it for rationality, however. An act’s utility depends on the utilities of its possible outcomes. Those outcomes are not evaluable for rationality because they are not free acts. To be evaluated for rationality, an act must be free. Evaluation for rationality is more selective than evaluation for utility. It targets acts an agent autonomously controls. Not all of a person’s acts are evaluable for rationality. Suppose that a person drops a vase when startled by a loud noise. His dropping the vase is an act but is not evaluable for rationality because not free.

The literature on free will and responsibility distinguishes many types of responsibility. I select the type that explains evaluability for rationality and call that type normative responsibility. I introduce it by description. An agent’s responsibility for an act may be causal or normative. A free agent is causally but not normatively responsible for an act that is the product of a reflex and so is not free. A free agent has causal responsibility for all his acts but normative responsibility only for his free acts. Is a night watchman responsible for sleeping on the job although sleeping is not a free act? No, he is responsible and to blame for not taking steps to prevent sleeping while on duty.

Normative responsibility for an act does not coincide with the act’s being open to reactive attitudes. An agent may be normatively responsible for a bad free act but excused from blame because the act was not performed intentionally. A hiker may kill a bug unintentionally as she takes a step freely. She is normatively responsible for the killing although her ignorance exculpates her. Similarly, an infant may be normatively responsible but not open to praise or blame for free acts performed without cognizance of their nature. Normative responsibility extends to free acts that circumstances exempt from reactive attitudes.

Both causal and normative responsibility for acts may generate moral obligations. An agent may have causal responsibility for a nonfree act such as tripping and squashing some flowers. As a result, he may have a moral obligation to make restitution for damages.
The obligation does not show that he has normative responsibility for the act. I treat normative evaluation of acts, and henceforth by responsibility I mean normative responsibility.16

Responsibility does not require deep-going freedom. Suppose that an alien controls a person’s beliefs and desires, but his beliefs and desires still cause his acts so that they are free. For example, he autonomously controls wiggling his finger although the alien controls the beliefs and desires prompting the wiggling. Then the person is responsible for his act, and it is evaluable for rationality taking account of his circumstances.17

Evaluation for rationality requires responsibility but not susceptibility to praise or blame. An evaluation may conclude that a free act is neither praiseworthy nor blameworthy. Rationality’s evaluation of an act yields a judgment that it is rational or irrational. A judgment that it is irrational implies blame, but a judgment that it is rational implies only that it escapes blame and not that it deserves praise. Being rational is not the same as being clever. If an agent’s options are all equally good, then picking any is rational but not praiseworthy. An infant acting freely performs acts evaluable for rationality, and they may be fully rational despite the infant’s unawareness of her acts and despite lack of merit for them.

Free agents are subject to standards of rationality. A free agent has autonomous control over the world’s future concerning some matter. The agent is responsible for bringing about a future meeting appropriate standards. Groups are agents because of their autonomous control of events, and the standards of rationality for agents apply to them. Groups do not have minds and awareness, and that may excuse performance errors because circumstances affect rationality’s evaluation of acts.

An agent is responsible only for its free acts. Is an agent responsible for all its free acts? Pettit (2001: 5) holds that freedom is fitness for responsibility. However, in special cases freedom and responsibility do not coincide. Rationality does not evaluate all free acts. Suppose that a homeowner alerts a prowler to her presence by turning on a light. Her turning on the light is not the same act as her alerting the prowler (according to the account of acts in Section 2.2). Her turning on the light is intentional and rational. Her alerting the prowler is not intentional, being done in ignorance of the prowler. Both flipping the switch and alerting the prowler are free acts. The alerting is a consequence of a basic act for which the homeowner is responsible. The alerting is evaluable for utility, but it is not evaluable for rationality because she is not responsible for it.

Suppose that one evaluates for rationality the act of alerting the prowler. If one uses comparison with alternatives, the evaluation declares the act to be irrational. This harsh appraisal is a mistake. Should one judge the act rational in virtue of its origin in a rational basic act expected to bring only good consequences? This generous appraisal does not ring true. Alerting the prowler is neither rational nor irrational. Neither judgment fits the case. It is not the case that alerting the prowler is evaluable for rationality but exonerated by ignorance. If it were exonerated, then it would be rational. Because it is neither rational nor irrational,
it is not evaluable for rationality. Being free and being a consequence of basic acts that are evaluable for rationality are not enough to make it evaluable for rationality.

What type of free control does rationality’s evaluation require? To be evaluable for rationality, an act need not be in the agent’s direct control. Besides basic acts such as decisions, rationality evaluates spatially and temporally extended acts of individuals such as climbing into the car and going to the store. Full control is sufficient for rationality’s evaluation of free acts. A sequence of basic acts is in an agent’s full but not direct control. Still, it is evaluable for rationality because its stages are moment by moment in the agent’s direct control. Rationality evaluates acts fully controlled, hence acts directly controlled and acts composed of acts directly controlled. The homeowner’s alerting the prowler is not in her full control because it depends on factors outside her immediate spatial environment. Therefore rationality does not evaluate it.

An agent’s options are evaluable for rationality because the agent fully controls them. Rationality’s evaluation of an option examines its consequences, which may be other acts not in the agent’s full control. Evaluating the other acts is part of an evaluation of the option’s consequences. That evaluation considers their utilities rather than their rationality. A person may be unaware of the nonbasic acts she performs by performing basic acts. The homeowner’s alerting the prowler is a consequence of basic hand movements. Because of her ignorance, the utility she assigns to alerting the prowler does not affect the utility she assigns to her basic acts or their evaluation.

Suppose that an agent fully controls a free act without being aware of exercising that control. For example, suppose that a baby has full control of a leg movement without awareness of that control. The agent is responsible for the leg movement. However, the agent’s ignorance of having full control may affect its evaluation. Not only ignorance of consequences but also ignorance of control may affect an act’s evaluation.

Both rationality and morality evaluate acts. Their evaluations treat different acts, however. Morality evaluates a wider class of acts. Zimmerman (1996: 40–45) considers the sort of control one has over an act that may be a moral obligation. He discusses direct and indirect control, and also hybrid control and full control. He concludes that any sort of control suffices for evaluating an act as a moral obligation. For example, consider the homeowner’s alerting the prowler. Suppose that the act has consequences better than alternative acts. Then, according to utilitarian tradition, performing the act is a moral obligation although it is not evaluable for rationality.

Section 2.3 argues that some acts are free although not intentional or done for a reason. Does rationality evaluate free acts that are intentional or done for a reason rather than acts in an agent’s full control?18

Some free acts are intentional but not evaluable for rationality. An archer’s hitting the target is intentional but not evaluable for rationality because not in the
archer’s full control. Sometimes he misses. Rationality evaluates the archer’s trying to hit the target. His hitting it is a consequence of his trying. His hitting it is evaluable for utility only. Similarly, some acts done for a reason are not evaluable for rationality. The archer’s hitting the target may be done to win a prize but is not evaluable for rationality.

Some free acts are evaluable for rationality but are not intentional. An archer’s trying to hit a target is evaluable for rationality but is not intentional. The archer does not intend to try or intentionally try to hit the target. Similarly, some acts are evaluable for rationality but are not done for a reason. An archer may on a whim try to hit a target without any reason such as attempting to win a prize. His act may be evaluable for rationality nonetheless.

To reinforce these points, consider acts of habit. Suppose that someone leaves his house to take out the trash but absent-mindedly and out of habit unintentionally but freely locks the door despite his plan to reenter momentarily. His locking the door is evaluable for rationality because fully controlled even if it is not intentional or done for a reason. Its being unintentional and absent-minded may mitigate blame but does not remove responsibility. Also, a sequence of basic acts is evaluable for rationality even if not intentional. Suppose that a person first says p and then says not-p and thereby contradicts himself unintentionally. His inconsistency is evaluable for rationality despite not being intentional.

An evaluable act does not require any intention at all. A perfect agent, such as God, acts directly without the intermediary of decisions and intentions. A perfect agent’s acts are evaluable for rationality although not the product of intentions.19

For the reasons just reviewed, it is best not to revise the criterion of evaluability for rationality so that an act’s being intentional or done for a reason replaces its being free and fully controlled. An act’s being evaluable for rationality does not require that it be the product of an intention or reason but only that it be a free act in the agent’s full control.

Which collective acts are evaluable for rationality? As in the case of an individual, acts over which a group has full autonomous control are evaluable for rationality. A group acts freely when its members do. It has full control over acts composed of acts its members fully control. It need not be aware of an act for the act to be evaluable for rationality. In that respect a group resembles an infant with full autonomous control of her limbs but no awareness of moving them. However, in contrast with an infant’s act, an evaluation may judge a group’s act to be irrational despite the group’s lack of awareness. Its members’ awareness of its act may be enough to warrant blame.

Must a collective act meet conditions beyond full autonomous control to be evaluable for rationality? Does an evaluation for rationality require subjective goals that an act may serve? Groups do not have goals. However, evaluation for rationality does not require having goals. An individual may have no goals, and then rationality permits any act. Rationality may evaluate a group’s act by examining its members’ acts instead of the group’s goals.
Because free acts and reason-motivated acts are distinct, a group’s acts may be subject to evaluation for rationality even if not done for a reason. A committee may pass a resolution without being moved by a collective reason. Its members may support the resolution for various conflicting reasons. Still, its act is free and in its full control. So its act is evaluable for rationality despite not being done for a reason. In that respect its act resembles a free basic act that an individual performs without a reason.

Rescher (2003: Chap. 11) expresses a common view about responsibility and so evaluability for rationality. He claims that a group is responsible only for acts it intends to perform, and it intends to perform only acts its members coordinate to perform. This claim about responsibility ignores excuses. A person may be responsible for an act he did not intend. He may open a window and let in a fly. He is responsible for his letting in a fly. He is not to blame for it because he did not intend it. The act’s not being intentional is an excuse. A person responsible for a bad free act may have an excuse for it. According to the account of responsibility that Rescher assumes, a person is not responsible for a bad act if not to blame for it. Making room for excuses requires separating being responsible for a bad act and being to blame for it.

The distinction applies to collective acts, too. Groups are responsible only for their free acts. These are acts their members’ free acts constitute. Excuses may deflect blame. Groups do not have intentions. The absence of a collective intention may deflect blame for a bad collective act. For example, suppose that two people in a canoe both lean over the side to pick a bottle out of the water. As a result, their canoe capsizes. According to Rescher, they are not responsible for tipping the canoe because they did not coordinate their acts and so did not form a collective intention to act. However, the pair acted freely, fully controlled the canoe, and is responsible for tipping it. Assuming that their failure to coordinate is excused, it excuses their tipping the canoe. It exonerates them. Suppose that the canoeists are to blame for failing to coordinate. The canoeist in the stern, say, should not have leaned over without telling the canoeist in the bow. Then, all things considered, the pair acts irrationally and is to blame for its act despite the absence of collective intentions. Rationality does not evaluate only collective acts resulting from collective intentions or coordination.20

Chapters 3 and 4 advance principles for rationality’s evaluation of individual and collective acts. This section concludes with some general points about methods of evaluation. These points rest on distinctions between types of control. Rationality’s evaluation of acts attends to an agent’s type of control over them. Its evaluation of an act directly controlled is comparative. It judges the act with respect to other options directly controlled. Rationality evaluates an option that an agent controls indirectly by evaluating the directly controlled acts that yield it. Acts evaluable for rationality are options, that is, acts an agent fully controls. Some, but not all, options are directly controlled. Decisions are in an agent’s direct control. They are evaluable for rationality by comparison with rivals. An agent fully but not directly controls a sequence of basic acts.
Rationality assesses the sequence by assessing its components. Universal rationality of a sequence’s components suffices for its rationality.21

Options have times of performance and times of availability. Becoming a physician may be available now although it cannot be performed now. Perhaps one can now perform only the first step, say, requesting an application to medical school. One may classify options available at a time according to their times of performance. Focusing on times of performance, an agent’s options at a time are acts in his direct control at the time, such as acts composed of basic acts at the time. Options during a period are acts in his full control during the period. Rationality evaluates an option at a time by comparisons but evaluates an option during a period by components.

An act at a time is in an agent’s direct control if it is a basic act at the time, or a composite of basic acts at the time. Raising two arms at a time is composite but is still in an agent’s direct control because its components are basic acts. Rationality evaluates it by comparing it with other options at the time. It may maximize utility even if raising the left arm does not and raising the right arm does not. A compound option at a time, such as raising two arms to signal a touchdown, may be better than either component by itself because its components complement each other.

Fine graining of acts makes consistent a judgment that raising the left arm is irrational, whereas raising both arms is rational. The act of raising the left arm alone differs from the act of raising it while raising the right arm. An act alone may be irrational although it is rational as part of a rational composite act. Similarly, a composite option directly controlled may be irrational even if its components taken alone are all rational. For example, driving and talking on the cell phone (extending direct control to reach these acts) may each be rational taken alone but may be irrational taken together. Of course, instead of evaluating a composite’s components in isolation, one may evaluate each in the context of the others. Then each component’s evaluation amounts to evaluation of their combination. For example, given that the referee raises his right arm, raising his left arm amounts to raising both arms. However, one need not evaluate by components this way. One may evaluate the composite by comparison with other acts directly controlled.

In ideal cases, at a time of action, an autonomous agent, if rational, maximizes utility among options at the time, which may include nonbasic acts in the agent’s direct control. For it to be rational to raise both arms, it does not have to be rational both to raise the left arm and to raise the right arm. Because the composite momentary act is in the agent’s direct control, it is an option at the time. Rationality evaluates it with respect to alternatives although it is nonbasic. In some action problems, one may treat a walk as directly controlled and evaluate it using comparisons with alternatives because evaluating it as fully but not directly controlled adds nothing significant. One need not dissect the walk into steps and the steps into leg movements. One may neglect its composition by momentary basic acts. However, a reliable, canonical evaluation appeals to components in an agent’s direct control.
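A toy computation, with utilities invented purely for illustration, shows how a composite option can maximize utility although neither component does when each is taken alone, as in the referee example above:

    # Hypothetical utilities for a referee's momentary options; the numbers
    # are assumptions for illustration, not drawn from the text.
    utilities = {
        ("neither",): 0,         # raise neither arm
        ("left",): -1,           # raise the left arm alone (a confusing signal)
        ("right",): -1,          # raise the right arm alone (a confusing signal)
        ("left", "right"): 5,    # raise both arms (signal a touchdown)
    }

    # Comparative evaluation over directly controlled options: pick a maximizer.
    best = max(utilities, key=utilities.get)
    print(best)  # ('left', 'right')

Because the components complement each other, the comparative evaluation selects the composite even though each single-arm option, evaluated alone, loses to doing nothing.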
Rationality’s principle of utility maximization, a comparative principle, applies to the set of acts over which an agent has direct control. Decision theorists assume direct control of decisions. In a decision problem, one has a compelling reason to make a decision, and tries to identify a decision that maximizes utility. A decision, although it is in one’s direct control, may have as object or content an act only in one’s indirect control. For example, one may decide to drive through a flooded section of highway. One directly controls the decision but not driving through the water.

How does rationality evaluate a group’s act assuming that the group exercises full control of it? Rationality evaluates a collective act by components because a group does not directly control its act. Rationality relies on evaluations of the individual acts that constitute the collective act. Evaluation by components does not require identifying options to which an act is compared. This is an advantage.

A general principle governing rationality’s evaluation of options follows from rationality’s attention to types of control.

Compositionality: Rationality evaluates by comparison only options that an agent controls directly and evaluates by components options that an agent controls indirectly.

According to this principle’s second part, the components of an option indirectly controlled explain rationality’s evaluation of the option. Their rationality is consistent with and entails the option’s rationality. Subsequent chapters defend compositionality and use its implications to refine principles of rationality.

Figure 2.1 displays this chapter’s classification of an individual’s or a group’s free act. Chapter 3 supports evaluation of extended acts by components, and Chapter 4 supports evaluation of collective acts by components.

A free act is either:
  Fully controlled: an option, evaluable for rationality.
    Subject to immediate control: a momentary act, an option at a time; evaluable according to further classification.
      Directly controlled: basic or composed of basic acts; evaluable by comparisons.
      Not directly controlled: composed of other agents’ acts; evaluable by components.
    Subject to extended control: an extended act, a composite act; evaluable by components.
  Not fully controlled: not an option, but a consequence; evaluable for utility.

FIGURE 2.1 Classification of a free act.
The book’s theory of collective rationality relies on this chapter’s conclusion that a group may act freely, through the free acts of its members, despite lacking a mind. When a group fully controls its act, the act is evaluable for rationality. Principles of collective rationality, honoring compositionality, govern it. Chapter 7 also uses this chapter’s points about agency to refute some proposals concerning rationality’s regulation of coordination.
3
Rationality
Collective rationality extends rationality from individual agents to collective agents. Introducing collective rationality requires, besides an account of agents, an account of rationality. This chapter introduces rationality by describing it rather than by attempting to define it analytically. Because rationality is complex, a thorough description cannot be brief. For economy, this chapter draws on the reader’s familiarity with the concept of rationality from its frequent expression in ordinary language. The chapter reviews only key features of rationality, especially features important for principles of collective rationality.1

The first section briefly examines the philosophical underpinnings of a theory of rationality. The next two sections assert rationality’s attainability and explain a type of comprehensive rationality that demands the right act for the right reason. The last section supports the principle of compositionality, concerning rationality’s attention to types of control, that is discussed in Section 2.4.
3.1 METATHEORY

Rationality is a normative concept. Being rational is being reasonable or sensible. An act’s being rational is its being justified or warranted. This is a rough characterization.2 Rationality is a theoretical concept implicitly defined by its role in principles governing behavior and mental states. Rationality’s principles require, for instance, consistency in beliefs, preferences, decisions, and acts. This section sketches a metatheory of rationality drawn from the normative literature. It treats the meaning of rationality, the metaphysical grounds of rationality, the means of knowing facts concerning rationality, and the reasons for being rational.

A common view connects rationality and reasons. Copp (1995: 176) asserts that rationality is a matter of being responsive to reasons. Similarly, Sen (2002: 4) takes rationality as being subject to reason. Rationality requires more than responsiveness to reasons, however. It requires certain types of response, for instance, utility maximization in certain cases. Reasoning, because it may be erratic, does not ensure a rational act. Good reasoning reviews and weighs relevant considerations because it aims at an appropriate response to reasons.
Graham (2002: 1–2) observes that support by good reasons characterizes rationality. A version of the view connecting rationality and reasons holds that a person acts rationally if and only if he does what he has most reason to do and does it for those reasons. This view specifies that the relevant reasons are internal and so accessible. A commuter driving home has most reason to take the indirect route if the direct route is blocked by traffic, but it is irrational for her to take the indirect route if she does not know about the traffic jam ahead. Conflict may arise between rationality and external reasons but not between rationality and internal reasons such as beliefs and desires.3

What one has most reason to do is vague.4 How are reasons compared? What is the strength of a combination of reasons? Suppose that your reasons are quantitative beliefs and desires, and they combine to yield an act’s expected utility. Also, suppose that you have more reason to perform one act than to perform another act just in case the first act has greater expected utility than the second act. Then the principle to do at a time an act you have most reason to do yields the principle to perform an act at the top of your preference ranking of your options at the time, assuming that your preferences go by expected utilities. The first principle yields the second principle given cognitive resources, limited types of reasons, and rational preferences. The first principle is broader than the second principle, however. Some methods of processing reasons do not weigh and add them but, for example, make them side constraints or give them priority over other reasons.

Explaining what one has most reason to do generates a theory of rational acts. A theory of rationality progresses by elaborating reasons and handling them with precision. Instead of exhaustively studying reasons, this book formulates standards of rationality that use reasons of a certain type such as preferences. Bittner (2001) and Mele and Rawling (2004) have more general accounts of reasons, having reasons to act, and acting for a reason.

Bittner (2001: 17) objects to decision theory. He attributes to it mistakes about explanation of action by beliefs and desires. Some branches of decision theory and game theory explain behavior. Kahneman and Tversky (1979), Sugden (1986), Skyrms (1996, 2004), Gintis (2000), and Colman (2003), for example, explain behavior and bypass evaluation of behavior. However, standard normative decision principles do not purport to explain behavior. They are principles of rationality. Their role in explanation of behavior is an open question. People may not follow the principles. The principles’ success depends on their ability to justify behavior rather than to explain it empirically.

Some theorists put aside philosophical questions about rationality to expedite mathematical results. They stipulate a precise definition of rationality to formulate theorems about rationality in a technical sense. For example, a theorist may stipulate that rationality is utility maximization.5 According to the ordinary sense of rationality, however, it is a normative issue whether a rational decision maximizes utility.
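To make the quantitative case described above concrete, one may write the combination of reasons explicitly. The notation is added here for illustration and is not the book’s: let P be the agent’s probability function over states, U her utility function over outcomes, and o(a, s) act a’s outcome in state s. Then a’s expected utility is

    EU(a) = \sum_{s} P(s)\, U(o(a, s)),

and, on the quantitative reading, having more reason to perform a than b amounts to EU(a) > EU(b), so doing what one has most reason to do amounts to maximizing expected utility.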
The standard of utility maximization is not a logical consequence of a definition of rationality in the ordinary sense. Weirich (2001) supports the standard with principles concerning the separability of basic reasons. Whether the standard is correct depends on its following from such basic normative principles.6

Myerson (1991: 2, 11) and Binmore (1994) technically define rationality as a type of consistency among choices, more precisely, their being representable as maximizing utility. Rationality in its ordinary sense requires more than that choices be “as if” they maximize utility. In ideal conditions utility maximization, not just compatibility with it, is necessary for rationality.

Taylor (1987: 178) presents a conception of rationality common in the social sciences. According to it, rationality is egoistic instrumental rationality. Hardin (1982: 10) takes rationality to be efficiency in securing self-interest. He does this to simplify explanations of behavior (p. 14). In some empirical studies the simplification may be warranted, but in a normative theory of rationality, it is a distortion. Rationality is compatible with altruism and does not require exclusive promotion of self-interest, as Gintis (2000: 243–44) notes.

An instrumental account of rationality may permit goals of any type. Osborne and Rubinstein (1994: 1) take rationality as pursuit of well-defined exogenous objectives. A rational agent “is aware of his alternatives, forms expectations about any unknowns, has clear preferences, and chooses his action deliberately after some process of optimization” (p. 4). An instrumental account of rationality is too narrow, however. Rationality demands more than adopting means appropriate to one’s ends. Noninstrumental principles of rationality governing basic preferences, for example, prohibit pure time-preference and require reasonable attitudes toward risk.7 Because a general theory of rationality takes instrumental rationality as only a necessary condition of rationality, it complies with Searle’s demand (2001: 128–31, 167) to account for prudence. It makes room for a principle requiring concern for future well-being.

Searle criticizes decision theory for not explaining preferences among options (2001: 125–26). However, utility analysis, in particular, intrinsic utility analysis as presented in Weirich (2001: Chap. 2), explains those preferences. An explanation of the rationality of preferences is part of a general theory of rationality. Pettit (1993: 246–48) attributes to austere decision theory a nihilistic subjectivism, that is, the view that any probability and utility functions are as good as any others as long as they are coherent. However, decision theory easily accommodates constraints on probability and utility functions besides coherence. A general theory of rationality includes such constraints.

Principles of rationality are wide ranging. Rationality provides an evaluation of beliefs, desires, decisions, acts, and people, among other things. The diversity of objects of evaluation generates a division of labor in the study of rationality. Different fields attend to different aspects of rationality. Logic studies rationality in inference. Epistemology focuses on rationality in belief.
Statistics treats rationality in probability assignments. Philosophy studies the rationality of basic goals. Decision and game theory treat rationality in action. Economics treats rationality in exchanging goods. The disciplines use the same concept of rationality but apply it to different subjects. A single concept of rationality unifies branches of the theory of rationality.

The principles of rationality the various disciplines advance treat controversial matters and so sometimes diverge. The principles use the same concept of rationality although they disagree about the requirements that rationality imposes. It is tempting to resolve controversies by distinguishing different kinds of rationality, each obeying its own characteristic principles. However, a theory of rationality is more fruitful if it settles controversies through argumentation rather than fragmentation. Theoretical and practical rationality, for example, are not different types of rationality but just rationality in the service of different goals. Theoretical rationality serves cognitive goals such as those pertaining to belief, whereas practical rationality serves practical goals such as those pertaining to desire and especially action. Theoretical and practical goals form distinct categories, although some goals within the two categories are similar. Cognitive goals include consistency among preferences as well as consistency among beliefs. For some agents, practical goals include predictions and explanations. Even if vague, rationality in its ordinary sense is a single concept, not a collection of concepts. A single concept of rationality issues standards for individuals and also standards for groups. Its applications are multifold as they must be to handle nuances of evaluation. As Section 3.3 explains, rationality’s evaluations may adopt conditions and may have more or less comprehensive scope. A theory of rationality uses a single concept of rationality applied with respect to various conditions and with adjustable scope to obtain multiple standards of rationality.

Theorists differ about the relation between being a free agent and compliance with principles of rationality. I follow the common view that free agents may act irrationally, even most of the time. An agent’s irrationality does not undermine his freedom, although it diminishes evidence of freedom. Intransitivity does not subvert preferences, although comparisons’ intransitivity weakens evidence that they are preferences. Interpreting the acts of free agents may require attributing rationality to the agents. That epistemological requirement, however, does not establish that free agents necessarily act rationally.

Rationality and morality are both normative, but rationality has narrower scope than morality does. Rationality imposes on an agent requirements and goals arising from considerations concerning the agent, whereas morality uses a broader set of considerations concerning all agents and more. Rationality handles reasons concerning the agent’s goals, which may, but need not, include being moral. Morality handles reasons of all sorts including other agents’ goals. Rationality attends to internal reasons only, whereas morality attends to external reasons also.
Because it attends to all reasons, morality’s evaluations of a person’s act rest on facts and intrinsic values, whereas rationality’s evaluations rest on the person’s beliefs and desires. Nonculpable ignorance excuses a person’s failure to meet her moral obligations, but it reduces rationality’s obligations rather than excuses failing to meet them.8

Sometimes a rational agent does not care about morality. Then morality may override rationality. Acting morally does not entail frustration of goals of rationality, however. Every agent may have had moral goals. If an agent has moral goals, then a moral act rationally serves those goals. Rationality explains what an agent should do putting aside the possibility that his basic goals neglect morality. In a typical decision problem morality permits any decision, assuming that maximizing intrinsic value is supererogatory. Rationality resolves the decision problem. It settles what ought to be done in an unqualified sense of ought. I treat only decision problems in which all relevant options are moral.

Rationality’s evaluation of a person’s act depends on the person’s beliefs and desires. That does not make its standards subjective. A person need not want to be rational to have a reason to be rational. Agents have a reason to be rational in action whether or not they endorse that reason.9 As agents, they control realization of their options. In an ideal decision problem, they have a reason to realize a top-ranked option whether or not they recognize that reason. Reasons for beliefs and desires also arise whether or not an agent acknowledges them. This independence makes reasons and the standards they support objective.

Some principles of rationality I argue for, but others I adopt as basic without argumentation. The basic principles are plausible starting points for justification of other principles. For example, I assume that an ideal agent’s rational degrees of belief obey the standard laws of probability. Principles of rationality have an a priori justification. My method of justifying principles is, first, intuitive support for the principles and, second, reflective equilibrium among judgments about cases and judgments about principles. The equilibrium is intersubjective because it arises from shared judgments of people who think about the issues. I do not specify the type of coherence that obtains between judgments in equilibrium but presume that it demands logical consistency and that it absorbs arguments from basic principles to nonbasic principles. The method of reflective equilibrium incorporates any reasons for principles that other methods invoke. It is at least as comprehensive in acknowledgment of relevant reasons for principles as is any rival method. The method leaves open the possibility of multiple reflective equilibria, as Bates (1999) observes. Each equilibrium takes account of all relevant considerations and so leaves none for equilibrium selection. Should multiple reflective equilibria arise, their common elements yield rationality’s principles.10

Rationality’s principles are immutable although judgments about them may change. The method of reflective equilibrium does not adjust those principles to achieve consistency. It adjusts judgments about cases and judgments about principles. Adjustment in judgments is an epistemic process. Some evidence
supporting judgments is independent of coherence with other judgments. That independent evidence gives an equilibrium a foundation. A typical argument for a principle of rationality uses only coherence with other principles, however.

The method of reflective equilibrium is a priori in the way that mathematics is a priori. Of course, applications of mathematics may rest on a posteriori matters. For example, if there are two cows in the barn and two cows in the pasture, and the farm comprises just the barn and the pasture, then there are four cows on the farm. The addition is a priori even if learning the number of cows in the barn and in the pasture and the extent of the farm is a posteriori. Similarly, an application of the method of reflective equilibrium, although a priori, may reach an equilibrium among a posteriori judgments.

Principles of rationality are a priori. They are evident to ideal agents in ideal circumstances for understanding the principles. Their a priori status is independent of the classification of human methods of discovering them. Humans may use a posteriori methods to discover a priori truths. Many people learn some a priori truths by the testimony of experts rather than by a priori methods.

Given principles of rationality, one may ask why be rational? This question requests a bedrock reason to perform an act the principles support. A good brief answer is that rationality aims at success, such as true beliefs and attainment of personal goals. Schmidtz (1995: Chap. 1) answers the question more thoroughly. I assume that one should be rational and offer only brief comments on the value of being rational.

Contrary to a popular view, rationality is not cool emotionless calculation. It is compatible with love and spontaneity. Rational goals include everything desirable and nothing undesirable. Being rational is itself desirable because it is good for a person and good impersonally, too. Practical rationality grounds theoretical rationality. True beliefs are useful. A rational person also intrinsically values theoretical rationality. Consistency is a cognitive goal as well as a necessary condition of true beliefs prompting successful acts.

Although rationality aims at success, it is not the same as success. A rational act may be unsuccessful because the agent is ignorant of its consequences. An act’s rationality depends on the agent’s beliefs and goals, whereas an act’s success depends on the agent’s goals and not on the agent’s beliefs. Epistemologists point out that an irrational pattern of reasoning, such as optimism in the face of repeated setbacks, may evolve because of its survival value. Irrational rage may make an agent strong enough to push his car out of a snowbank. Although irrationality may be rewarded in special cases, being rational promotes success in the subjectively best way, that is, the way best in light of an agent’s limited information and cognitive abilities.

Given a principle of rationality, one may ask what makes it true? Convention, culture, and human psychology are possible grounds. Because this brief section cannot do justice to the issue, I just mention that Gibbard (2002; 2003: Chaps. 5, 6) and Papineau (2003: Chaps. 1–3) make strong cases that normativity has a naturalistic foundation.
3.2 ATTAINABILITY

Standards of rationality comply with the principle that “ought” implies “can.” When applied to requirements of rationality, the principle attributes to “ought” a subjective sense that considers an agent’s beliefs and desires. A failure to do what one ought in that sense is blameworthy. The principle’s sense of “can” adjusts to the type of obligation. If “ought” has an objective sense as in moral principles, physical ability may be the relevant ability. If “ought” has a subjective sense, the principle may substitute a more demanding type of ability, such as an ability to fulfill an obligation by choice. Applied to obligations of rationality, the principle’s sense of “can” is vague. This section’s arguments about rationality’s attainability assume only the physical ability to meet a requirement of rationality. The characterization of evaluability for rationality discussed in Section 2.4, however, authorizes a more demanding type of ability. For requirements of rationality, it permits taking “can” to indicate an ability to control fully and autonomously. Accordingly, rationality requires an act only if it is an option in an action problem. It does not require character traits because they are not in a person’s full control. Saying that one ought to be perspicacious expresses an ideal and not an obligation because one cannot summon perspicuity at will.

The word “ought” has a comprehensive sense and a noncomprehensive sense. The principle that “ought” implies “can” assumes the noncomprehensive sense. It grants current conditions as they are, even if they are the result of mistakes. Suppose that a person puts himself out of position to perform an obligatory act. The act may still be obligatory in a comprehensive sense although currently the person cannot perform it. However, it is not obligatory in the noncomprehensive sense that grants current conditions as they are. Hence, the case does not refute the principle that “ought” implies “can.”

The requirement of full control disqualifies as obligations acts that an agent can perform only by luck. Suppose that an infant presses the garage door opener as the car enters the driveway. By luck, she opens the garage door for the driver. Suppose that a child writes next to a complex mathematical formula the number that is the formula’s solution. By luck, she writes the correct number. Suppose that a person fiddling with the dial on a combination safe enters the combination. By luck, she opens the safe. Suppose that a gambler picks a card from a downturned, shuffled deck of cards. By luck, she picks the ace of spades. In these cases an act the agent controls realizes another act that the agent performs by luck. The agent does not fully control the second act. Its performance requires environmental factors beyond the agent’s control. It is not an obligation.
The principle that “ought” implies “can” is ambiguous as well as vague. According to one reading, if rationality imposes an obligation on an agent, then in some world he fulfills rationality’s obligation. According to another reading, if rationality imposes on an agent an obligation to do x, then in some world he does x. To bring out the difference, suppose that there are just two worlds w1 and w2. In w1 an agent ought to do b but does a, and in w2 he ought to do a but does b. In each world he fails to meet his obligation, but in each world he can do the act he ought to do, assuming that in each world both a and b are in his full control. Rationality’s attainability adopts the principle’s first reading. An agent can be rational. Accordingly, in every action problem, the agent rationally resolves the problem in some world. The agent’s act affects his information and so his estimation of his act’s value. A world in which it occurs includes his information in that world.

Irrationality is blameworthy, and blameworthiness considers an agent’s abilities and circumstances. Rationality adjusts its demands to an agent’s abilities. An agent’s circumstances may create various impediments to reaching goals of rationality and thereby good excuses for failing to reach them. Cognitive limits, for instance, lower requirements of rationality. Standards of rationality are also sensitive to social context, as Gigerenzer (2000: Part 4) and Gigerenzer and Selten (2000) observe. Rationality’s demands on an individual depend on factors such as membership in a group.

Vallentyne (1999: 686) reports that some theorists reject the attainability of rationality. Those that do may have in mind a technical definition of rationality, such as utility maximization, and hold that rationality so defined is not always attainable. The ordinary concept of rationality imposes only satisfiable demands. Failure to meet those demands makes an agent blameworthy. An agent escapes blame only by meeting rationality’s requirements. An agent can escape blame and attain rationality.11

Irrationality implies blame. Blame entails the absence of excuses. Hence excused irrationality does not exist. Excuses for failing to maximize utility arise. When they arise, rationality does not require utility maximization. It substitutes an attainable standard. The absence of excused irrationality is a sign of rationality’s attainability. Suppose that a student in kindergarten acquires from her teacher an irrational belief that 2 + 2 = 5. May the irrational belief be excused? In this case the belief is irrational because for most people believing its content is irrational. It is not irrational for the student to hold the belief, however. The student’s belief is not held irrationally but excusably. Suppose that fear of the dark causes a child to believe that a monster lurks under the bed. Is the belief irrationally but excusably held? The belief is held without sufficient epistemic reasons. But epistemic reasons are not the only factors affecting a belief’s evaluation. If the belief is held excusably, then it is not held irrationally all things considered.
or is irrational. There is no middle ground, nor any intermediate grade of rationality, although some departures from rationality are graver than are others. Principles of rationality direct choice. They cannot direct choice unless they recommend a choice. So in every decision problem they recommend at least one option. In every decision problem, taking for granted an agent’s circumstances, some choice is rational. In a dilemma of rationality an agent cannot meet all standards of comprehensive rationality. Weirich (2004: Chap. 7) argues that in some cases an agent cannot make the right decision for the right reasons. A dilemma may arise from mistakes made elsewhere. Then the agent is to blame for the dilemma. However, he can still decide rationally given his dilemma. This type of conditional rationality is attainable. Granting the agent’s circumstances, some way of proceeding is rational. Rationality yields requirements and goals. For example, it requires consistency in beliefs insofar as one is reasonably able to achieve it and advances the goal of complete consistency in beliefs. Standards of rationality express requirements or necessary conditions of rationality and are attainable. Goals motivate rational agents but may not be attainable. Standards, but not goals, are sensitive to circumstances. A goal expresses a requirement for ideal agents in ideal circumstances, that is, when conditions are ideal for meeting the goal, and meeting it does not conflict with other goals of rationality. For ideal agents in ideal circumstances, meeting goals of rationality is necessary for rationality. Given the idealizations, the goals also express standards of rationality. For simplicity, many principles of rationality express goals of rationality and yield requirements of rationality in favorable conditions. For instance, consistency in beliefs is a goal of rationality and is a requirement of rationality in ideal conditions. In ideal conditions an agent is aware of an inconsistency and can correct it costlessly. However, in nonideal conditions, given the costs of detecting and correcting inconsistencies, an agent may hold inconsistent beliefs without irrationality. Consistency among acts is a common goal of rationality. In favorable circumstances agents should act consistently. For example, they should not reverse previous acts without justification. That inconsistency is wasteful of time and energy, and is cognitively deficient even if not wasteful. However, sometimes agents are excused for acting inconsistently. An administrator may excusably adopt inconsistent policies if the inconsistency is so subtle that it escapes detection despite diligence. In many cases an agent falls short of a goal of rationality but nonetheless acts rationally. An agent’s limited resources and the difficulty of an action problem may furnish good excuses for falling short of goals of rationality. According to a common principle of rationality, an individual’s act at a time is rational only if it maximizes utility among her options at the time. Utility depends on basic desires and on information. It is often called expected utility when information influences it. This principle assumes an ideal agent in an ideal action
problem. In a problem it addresses, options’ utilities rest on stable comparisons of options, and some option has maximum utility. According to the principle, a possible act’s rationality, if it is not performed, depends on its consequences if it were performed. Its rationality depends on results in a hypothetical situation and hence is a dispositional property. Suppose that a worker entertains quitting her job. She considers the consequences of quitting. If she were to quit, would she quickly find another job? Her beliefs about such hypothetical conditionals settle the utility of her quitting, and its comparison with the utilities of alternative possible acts settles quitting’s rationality.12 Rationality’s evaluation of an agent’s act depends on background assumptions about the agent’s action problem and resources for addressing it. Standard idealizations govern the cognitive states, normative status, and powers of agents and also the difficulty of their action problems. They assume that agents have quantitative probability and utility assignments, know all logical and mathematical truths, have unlimited cognitive power to apply those truths, and outside the current action problem do not run afoul of any standard of rationality. An ideal agent knows his probability and utility assignments, knows he knows them, and so on.13 The idealizations together make conditions perfect for meeting the standard of utility maximization. Then rationality requires meeting the standard. In ideal cases agents are comprehensively rational except perhaps in the current action problem. Comprehensive rationality requires utility maximization with respect to rational probability and utility assignments and may impose standards besides utility maximization. In nonideal cases where utility maximization is impossible, rational action is still possible. Standards of rationality generalizing utility maximization are attainable. For example, the standard of following preferences applies when options lack quantitative utilities. Later chapters treat mainly idealized principles for acts that an agent controls fully, namely, options. Principles for evaluating options form the core but not the whole of a theory of rational action. Although humans are only approximately rational, an idealized theory is useful. It may apply approximately to real cases. Also, an idealized theory builds a foundation for a more general, realistic theory that dispenses with idealizations. The idealizations control for explanatory factors and thereby yield partial explanations of an act’s rationality. Principles assuming the idealizations set the stage for future developments of the theory of rationality.14 A theory of rationality advances standards of evaluation for acts and also rules and procedures for performance of acts. Evaluation for rationality may not yield practical direction. An evaluation may arrive after the fact and so have a judgmental rather than an advisory role. I treat utility maximization as a standard of rationality, not a procedure. As a standard of evaluation, it issues only an oblique directive. An agent should take rational measures to ensure a rational act, and so to meet the standard. This directive does not articulate a procedure for meeting the standard. Just as the directive, “Perform a rational act,” does not
dictate a procedure, the directive “Maximize utility,” does not dictate a procedure. An agent may comply with it without calculating the utility of each act and selecting an act of maximum utility.

Standards of rationality govern resolution of an action problem and procedures for its resolution. The rationality of the act in which a procedure culminates does not ensure the procedure’s rationality. Identifying an option of maximum utility is not the rational course for an agent with limited cognitive resources. The calculation has high costs for a limited agent and may not be worth the candle, as Morton (2004) explains. Also, requiring calculation for every act prompts an infinite regress of calculations because each step of a calculation is an act that demands a calculation according to that requirement. One proposal recommends a procedure that has maximum utility among procedures. This proposal is mistaken. A typical procedure has multiple steps. A procedure that has steps should have utility-maximizing steps. A typical procedure is an extended act and so should be evaluated by evaluating its components. Another proposal recommends a procedure that maximizes prospects of a decision that maximizes utility. This proposal ignores the costs of following a procedure. A procedure’s utility depends on the procedure’s costs and benefits. A decision may maximize utility even if not generated by a utility-maximizing decision procedure.15

Acts are steps in procedures. Later chapters treat standards for evaluating acts rather than procedures for selecting acts. They entertain substantive standards for acts and not procedural standards that say an act is rational only if it is the culmination of such and such a procedure. The substantive standards say that an act is rational only if it has certain nongenetic properties, such as maximizing utility. Distinguishing between procedures and standards deflects some objections to principles of decision theory. Pettit (1993: 239–48) objects to using expected utility maximization as a decision procedure. First, we do not always have access to our probability and utility assignments. Second, we cannot always calculate and compare expected utilities. Similarly, Searle (2001: 127) denies that a rational decision requires calculations and maximization of expected utility. This procedure is too demanding for humans in most realistic cases. Pettit’s and Searle’s points about procedures for humans do not apply to maximization of expected utility taken as an evaluative standard, especially a standard for ideal agents in ideal cases.

An agent’s options depend on the agent’s abilities. An individual’s psychology establishes the basic parameters of action. This chapter adopts common psychological assumptions about humans, which, idealized, yield its assumptions about ideal agents. The assumptions about humans are, of course, open to revision in light of new observations.16 People, besides being free, often act according to preferences. Exceptions arise for various reasons. A person may act without first forming a preference. She may
do this when she breaks ties, for instance. Also, contrary to the theory of revealed preference, a person may act contrary to all-things-considered preferences, for example, in cases of weakness of will. Conditional preferences or fleeting preferences may prompt a weak-willed act. Also, preferences resting on a narrow range of considerations may prompt acts contrary to all-things-considered preferences. Distractions and temptations may keep out of mind or weaken appreciation of considerations that put an act at the top of an all-things-considered preference ranking. In contrast with humans, ideal agents do not act contrary to all-things-considered preferences. I usually treat all-things-considered preferences but often for brevity just call them preferences. Current preferences, which direct current acts, depend on current beliefs and desires. Past and future beliefs and desires influence current action only through present beliefs and desires. For instance, foreseeing a desire to speak French when in Paris next year, a traveler may desire now to learn French and so begin lessons. Likewise, an intention formed in the past influences present acts only through a present desire to fulfill the intention. For instance, having formed the intention to go to Paris, one may now have the desire to purchase a plane ticket. Beliefs and desires must be current to influence causally current acts. Features of the past and future may furnish reasons for current preferences, but the past and the future are not immediate causes of the formation or maintenance of current preferences. If a past event or state causes a current preference, it operates through a chain of causes connecting the past to the present. For rational free agents, current beliefs and desires are decisive. They cause current acts. A future desire may be a reason for, but not a cause of, a current act. A future desire is a reason given a current belief that it will obtain and a current desire that it be satisfied. If current beliefs and desires do not direct current acts, then the agent does not have full autonomous control of those acts. Full autonomous control requires the potential decisiveness of current beliefs and desires. A rational agent confused about his identity and the present time’s identity serves the self now without regard for nonindexical identification of the self and the present. He flees a fire now even if he is unsure of who he is and what time it is. Candidates for generation of acts include, besides preferences, emotions such as fear. Current emotions may cause current action. In humans, they may operate independently of preference. Fear may prompt flight before the mind has time to form a preference for flight. A soldier out of fear may cower in his foxhole despite a preference for joining his comrades in battle. Nonetheless, in an ideal agent emotions do not generate rational acts contrary to preferences. It is irrational for an ideal agent to act contrary to preferences in an ideal action problem. If future psychological research shows that an intention formed in the past and persisting into the present exerts an influence on a current act independently of current preferences, then it is an open question whether that influence operates rationally. Under my assumptions, in ideal individuals, intentions rest on current preferences. So the issue of their rationally operating independently of current
preferences does not arise. Granting that current preferences ground an individual’s acts, a theory of her acts’ rationality need treat only current preferences, their origin, and their upshot.
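To make the utility-maximization standard concrete, here is a minimal sketch in Python of the worker’s problem discussed above. The probabilities and utilities are invented for illustration; they stand in for her beliefs about hypothetical conditionals, such as whether she would quickly find another job if she were to quit.

```python
# A minimal sketch of expected-utility maximization for the worker's
# problem. All numbers are hypothetical stand-ins for her probability
# and utility assignments; the standard is evaluative, not a procedure
# she must actually run.

def expected_utility(prospects):
    """Probability-weighted average utility of an act's possible outcomes."""
    return sum(p * u for p, u in prospects)

acts = {
    "quit": [(0.6, 80),   # quickly finds a better job
             (0.4, 20)],  # endures a long search
    "stay": [(1.0, 50)],  # keeps the current job for certain
}

for act in sorted(acts, key=lambda a: expected_utility(acts[a]), reverse=True):
    print(act, expected_utility(acts[act]))
# quit 56.0, stay 50.0: only an act of maximal expected utility passes
# the standard, so here quitting passes and staying fails.
```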
3.3 COMPREHENSIVENESS

Rationality’s evaluations have variable scope. For instance, a decision may be evaluated taking for granted the agent’s beliefs and desires, or it may be evaluated taking account of the beliefs and desires on which the decision rests. Taking beliefs and desires for granted yields an instrumental evaluation of the decision. Appraising the beliefs and desires grounding the decision increases the evaluation’s comprehensiveness. A person who forgoes air travel because of an irrational fear of flying may decide in accordance with preferences but may have defective preferences resting on irrational beliefs and desires. Despite his decision’s instrumental rationality, his decision may be irrational. It does not follow preferences after hypothetical correction for inexcusable, corrigible errors, that is, preferences revised to accord with reasonable beliefs and desires. His choice is rational taking his preferences for granted, yet it is irrational all things considered.

An evaluation’s scope affects its verdict. A comprehensive evaluation does not take goals for granted and may declare a decision irrational because it serves irrational goals. Evaluation of a decision may be more or less comprehensive depending on whether it includes an evaluation of the decision’s grounds. An act’s evaluation for rationality may separate the act and the reasons that prompt it. A noncomprehensive evaluation may evaluate an act without evaluating the reasons for it. It may conclude that an act is rational although its origin is defective. Comprehensive rationality has wide scope. Standards of comprehensive rationality for an act consider the act and its origin in beliefs, desires, motives, intentions, and character. Noncomprehensive rationality has narrower scope. Its standards take for granted some features of the act’s context. An act may be noncomprehensively rational putting aside various flaws in its origin.17

An act’s evaluation with respect to the agent’s goals may be more comprehensive than its evaluation for instrumental rationality only, that is, as a means of attaining the goals. The evaluation may not take for granted the act’s circumstances. It may consider whether the agent took advantage of opportunities to improve prospects of attaining the goals. A goal motivates strategic acts such as putting oneself in position to reach the goal. An act’s evaluation may examine the agent’s control over circumstances as well as the act’s responsiveness to circumstances. An act may be irrational because of the agent’s bad character and missed opportunities to improve it. A comprehensive decision principle, which evaluates preparations for decisions, may examine the conduct of a whole life, that is, a maximally extended act. It may recommend an equilibrium, if attainable, among the elements of a life, including pursuit of incentives and reduction of regret.
An extended act may have a few irrational components and yet be comprehensively rational. Comprehensive rationality does not ensure full rationality because it forgives inconsequential irrational components, as Weirich (2004: Chaps. 6, 7) argues. For example, suppose that a shopper irrationally forms a preference for a red bike instead of a blue bike. Then he rationally forms a preference for a yellow bike instead of a blue bike and also instead of a red bike. Finally, he rationally buys a yellow bike. The irrational initial step is inconsequential and so forgiven. The sequence is comprehensively rational although not fully rational. An extended act’s comprehensive rationality entails the rationality of significant steps only. A sequence of acts need not be composed exclusively of rational steps to be comprehensively rational. Standards of rationality have various assumptions. Some standards apply to ideal agents in ideal circumstances. Other standards apply to humans and recognize cognitive limits and obstacles to goals of rationality. For example, a decision may be rational given time pressure although not rational in the absence of time pressure. Some evaluations are conditional rather than limited in scope. Conditional rationality is similar to but distinct from noncomprehensive rationality. Conditional rationality is rationality with respect to an assumption, whereas noncomprehensive rationality is rationality given limited evaluative scope. The two agree when both take for granted an agent’s probability and utility assignments. Assuming those assignments is equivalent to narrowing evaluative scope. However, the two diverge when a lottery ticket holder fancifully considers comprehensively rational action given the counterfactual supposition that she has won the lottery. Assuming a winning ticket is not equivalent to narrowing evaluative scope. Comprehensive rationality may be conditional. It may introduce hypothetical conditions without ignoring any mistakes. Conditional rationality comes in two forms. One grants mistakes that an assumption covers. Another does not grant those mistakes. The first arises in evaluation of an agent’s options granting his preferences as they are. The second arises in evaluation of an agent’s options if his preference ranking of options were reversed, or if his set of options were larger. Conditional rationality not granting mistakes obeys the rule of detachment. If an act is rational given some hypothetical circumstances, then if the circumstances were to obtain, the act would be rational. Conditional rationality granting mistakes has an evaluative, noninferential purpose. It does not obey the rule of detachment. An agent may choose rationally given the options that he considers but inexcusably fail to consider a top option and so choose irrationally. An option rational granting past mistakes may not be rational nonconditionally because past mistakes influence current options.18 Conditional rationality granting mistakes restricts evaluative scope. It puts aside mistakes in background conditions when it assumes those background conditions. The conditions it entertains may simultaneously introduce hypothetical circumstances and also put aside evaluation of circumstances. For example,
it may evaluate an option given the agent’s consideration of a set of options larger than those actually considered but still omitting the best option. It puts aside considerations but not as noncomprehensive rationality does. Noncomprehensive rationality puts aside some areas of evaluation and does not let their evaluation affect its evaluation. Conditional rationality granting mistakes puts aside all and only mistakes in the circumstances it assumes. For instance, take the comprehensive rationality of an option given the options the agent considers. The evaluation assesses the rationality of preferences among options considered but not the rationality of the agent’s consideration of options. It is as comprehensive as the condition allows because on its own it does not put aside any area of evaluation.

Rationality’s sensitivity to circumstances does not make it invariably conditional. Permissible goals an agent adopts ground the nonconditional rationality of acts that serve those goals. Even if rationality does not require those goals, the acts are nonconditionally rational and not merely rational given the goals. Conditional rationality and noncomprehensive rationality use the same concept of rationality with respect to different conditions and with varying evaluative scope to obtain a variety of standards of evaluation.

Subsequent chapters treat mainly ideal cases. In ideal cases changing evaluative scope does not alter evaluations. Comprehensive and noncomprehensive rationality agree because agents are error-free. When the chapters treat nonideal cases, they usually treat comprehensive, nonconditional rationality. If they treat conditional rationality, it is usually the mistake-granting kind.

3.4 COMPOSITIONALITY

An option is a free act in an agent’s full control. According to the principle of compositionality that is discussed in Section 2.4, basic evaluation by comparison applies to options directly controlled and nonbasic evaluation by components applies to options indirectly controlled. Consequently, the rationality of acts directly controlled entails and explains the rationality of acts indirectly controlled. For example, the rationality of an extended act’s components explains the rationality of the extended act. This section argues for compositionality’s application to an individual’s acts.

To illustrate compositionality, consider a person’s maximal extended act, a way of leading his life. A rational person seeks a good life. This goal influences rationality’s evaluation of momentary acts. It evaluates them by their contribution to a good life. Nonetheless, their rationality explains the rationality of the person’s conduct of his life. He conducts his life rationally if at each moment he acts rationally. If acting rationally moment to moment he fails to maximize over ways of conducting his life, he is nonetheless rational. The standard of rationality for a life is compositional.

All indirectly controlled acts evaluable for rationality, and so fully controlled, have components. Their components are acts directly controlled through which
the agent exercises indirect control. To guide control of a sequence of acts, rationality guides control of its momentary components. The components are the objects of direct control. When rationality tells an individual to change a sequence of steps, it tells him to change a step. Evaluation of an agent’s extended act by its steps uses the agent’s information at each step. That information changes as the extended act progresses. An evaluation of a past extended act and an evaluation of a future extended act draw on different information. An agent knows she has lived through a past act’s period but does not know she will live through a future act’s period. Compositionality prevents inconsistency. Evaluating all options by comparison generates conflicting evaluations. In some cases a maximizing sequence has a nonmaximizing step. One cannot perform the sequence without performing the step. Advice to perform the sequence but not the step is inconsistent. Suppose, for example, that a person receives $5 today if tomorrow he refuses $1. If given $5 today, he may keep that money whatever he does tomorrow. A reliable psychologist predicts his future behavior and gives him $5 today if and only if she predicts his refusing $1 tomorrow. Before seeing the psychologist, he forms the intention to turn down tomorrow’s offer of $1. She predicts his refusal, so he gains $5 today. He abides by his intention tomorrow. His acts during the two-day period yield $5, whereas rival act-sequences yield at most $1. Declining $1 tomorrow, although not maximizing, is part of a maximizing sequence. A rational person in this situation does not decline tomorrow’s offer because doing that is contrary to his all-things-considered preferences. Because he knows that he will accept tomorrow’s offer, he does not today form an intention to decline tomorrow’s offer. An intention today to decline tomorrow is ineffective and so unrewarding. Because he cannot form an effective intention to decline tomorrow’s offer, the $5 prize is out of reach. Maximizing utility today, with respect to his information about tomorrow, does not yield that prize. A rational person maximizes utility today and tomorrow by first forming no intention to decline tomorrow’s offer and then accepting the offer. This sequence does not maximize utility with respect to rival act-sequences because it precludes winning $5 today. A rational person can decline tomorrow’s offer and given that act can today form an effective and rewarding intention to decline. So a sequence with those components is a superior option.19 How may principles of utility maximization avoid inconsistency in such cases? Compositionality achieves consistency by limiting utility maximization to acts directly controlled and evaluating indirectly but fully controlled acts by components. It reserves the basic form of evaluation for basic objects of evaluation. Because an agent directly controls momentary acts but not extended acts, evaluation by comparison applies to momentary acts but not to extended acts. Evaluation of momentary acts directs rationality’s evaluation of extended acts. In the example, restricted utility maximization recommends accepting tomorrow’s offer of $1 although that act is not part of a maximizing sequence. Given
compositionality’s restrictions, principles of utility maximization do not recommend both the sequence of intending to decline and then declining and also the step of not declining. Applying basic evaluation to extended acts is a rival method of achieving consistency. Is it better than compositionality? Compositionality applies basic evaluation to momentary acts because it takes their rationality to explain the rationality of extended acts and not vice versa. How may one show that it correctly identifies the direction of explanation? Suppose that an extended act and the directly controlled momentary acts composing it are rational. How may one show that the rationality of the momentary acts explains the rationality of the extended act, and that the rationality of the extended act does not explain the rationality of the momentary acts? In science, basic explanatory principles are general. Principles concerning constituents are more general than principles concerning wholes because the constituents may exist when the wholes do not. Atoms constitute balls. The laws of atoms are more general than the laws of balls. They govern atoms even when atoms are not parts of balls. A ball’s motion is constituted by its atoms’ motion. The motion of the atoms explains the motion of the ball. The laws of atoms are more basic than the laws of balls. Are those laws more basic because they are more general than the laws of balls? Generality is necessary but not sufficient for basicness. That water is H2O is a general but not basic law. The principles of rationality for acts directly controlled are general. They govern acts directly controlled whether or not they constitute extended acts. Their generality does not establish their basicness, however. For normative principles explaining the rationality of acts, generality is necessary but not sufficient for basicness. Suppose that an extended act’s rationality explains the rationality of the momentary acts composing it. Then in cases where all components are significant, no extended act is composed of a mixture of rational and irrational momentary acts. Each component is rational if and only if the extended act is rational. Mixtures are possible, however. Also, in some cases the same rational momentary act is part of two extended acts, one rational and one irrational. If evaluative status moves from an extended act to its momentary components, the momentary act is both rational and irrational. Furthermore, a momentary act may be rational even if it does not contribute to any rational extended act. For these reasons, an extended act’s rationality does not explain its components’ rationality. Rationality flows from momentary acts to extended acts. Multiple realizability also indicates the direction of explanation. Various micro features yield hardness. Each explains a type of hardness, but hardness does not explain each. Nor does each explain hardness in general. Similarly, different subsets of rational components generate a rational extended act if its rationality tolerates a few irrational components. So the extended act’s rationality does not explain each rational component’s rationality.
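The two-day case above can be put schematically. The sketch below is a stylized model, assuming a perfectly reliable predictor and the dollar amounts in the text; it shows how the maximizing sequence contains a step that is not itself maximizing, the inconsistency that compositionality avoids.

```python
# A stylized model of the two-day case: by assumption the psychologist
# predicts perfectly, so the agent receives $5 today exactly when he
# will refuse $1 tomorrow.

def sequence_total(tomorrows_act):
    today = 5 if tomorrows_act == "decline" else 0   # payment for predicted refusal
    tomorrow = 1 if tomorrows_act == "accept" else 0
    return today + tomorrow

for act in ("decline", "accept"):
    print(act, sequence_total(act))   # decline -> 5, accept -> 1

# Evaluated as a step, though, declining does not maximize: the $5, once
# received, is kept whatever he does tomorrow, and accepting adds $1.
kept_today = 5
print("decline at the step:", kept_today)       # 5
print("accept at the step:", kept_today + 1)    # 6
```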
These points make a case that evaluation of directly controlled acts is basic but do not explain its basicness. An account of its basicness identifies the relation between directly controlled acts and indirectly controlled acts that makes evaluation of directly controlled acts basic. Consider, for instance, a fully controlled extended act that has directly controlled momentary acts as components. The momentary acts entail the extended act. However, the entailment does not account for the direction of explanation. Entailment is not an asymmetric relation. The extended act supervenes on the momentary acts. However, supervenience is not an asymmetric relation either. For example, triangularity supervenes on trilaterality, and vice versa. The momentary acts constitute the extended act. Constitution is an asymmetric relation. However, the rationality of a composite act directly controlled, such as raising two arms to signal a touchdown, may explain the rationality of its components. So constitution does not account for the direction of explanation either.

Although constitution in general does not settle the direction of explanation, constitution comes in various forms. For example, the integers are simple mathematical objects that may form composites such as the set {1, 2}. According to another type of constitution, 2 and 1 compose 3, so 3 is not simple. What counts as simple and without constituents depends on the relevant form of constitution. Constitution, not by basic acts but of another sort, may settle the direction of explanation. A theory’s fundamental entities depend on its topic. If a theory’s topic is triangularity, then having three angles explains having three sides. If a theory’s topic is trilaterality, then having three sides explains having three angles. The form of constitution the theory of rationality adopts settles the theory’s account of simple acts. The theory stops analysis when it reaches acts directly controlled. Its analyses go no deeper because its topic is control, and direct control is the basic type of control.

Rationality takes acts an individual directly controls as simple acts. Its basic principles address exercises of direct control. They target the elements of control. Rationality is sensitive to responsibility and accountability. The purpose of a theory of rational acts is to guide exercises of control. Its first principles monitor acts that an agent controls directly. A theory of rational acts treats direct control as basic and the rationality of an exercise of direct control as basic. Because of its topic, it puts aside an all-purpose metaphysical sense of constitution and adopts a specialized sense of constitution that divides simple and composite acts according to type of control. Directly controlled acts are simple, and these simple acts constitute composite acts.20

Suppose that objects of type x and objects of type y are both evaluable for rationality. If objects of type x constitute objects of type y, then evaluation of objects of type x is more basic than evaluation of objects of type y and evaluations of x’s constituting a y explain the evaluation of that y. This principle schema asserts that the rationality of a whole depends on the rationality of its
parts. From the many accounts of constitution available, the schema selects an account that serves theoretical purposes. Among acts evaluable for rationality and so fully controlled, it takes acts directly controlled as simple and acts not directly controlled as composite. When applied to extended acts that directly controlled acts constitute, the resulting principle yields their evaluation by components. Because the theory of rationality uses direct control to settle an act’s simplicity, the principle that explanation flows from parts to whole supports the principle of compositionality. Explanation of the rationality of a nonbasic act directly controlled, such as raising two arms to signal a touchdown, appears not to go from parts to whole. However, the nonbasic act is a simple act if acts directly controlled count as simple. It is composite and not simple only if basic acts count as simple. Rationality, because of its interest in control, counts directly controlled acts as simple and uncomposed. It does not count a combination of directly controlled acts as a composite act if the combination is also directly controlled. Because direct control is the appropriate criterion of simplicity, rationality evaluates nonbasic but directly controlled acts by comparison. Because such acts have no relevant parts, their evaluation is not contrary to the principle that evaluation proceeds from parts to whole. What may explain the rationality of a simple act? May it be the rationality of another simple act? Explanation of simple by simple is possible for relational properties. For example, atom A’s being to the left of atom B may explain atom B’s being to the right of atom A. The rationality of a simple act depends, not on its components’ rationality, but on its relation to its environment, for example, its being utility maximizing in its environment. Rationality evaluates raising both arms with respect to that simple act’s context. It evaluates raising the left arm with respect to that simple act’s context, which may include raising the right arm. The rationality of some directly controlled acts may explain the rationality of other directly controlled acts because the acts’ rationality is relational. The relation of raising both arms to its environment explains its rationality and may also explain the relations of raising the left arm and of raising the right arm to their environments and so may explain their rationality.21 According to compositionality, the rationality of an act partly explains the rationality of following a rule that requires the act. However, the rationality of following a rule seems to explain the rationality of an act that falls under the rule. Rachlin (2002) gives priority to the rationality of following rules. Following good rules is better than maximizing utility among acts, he argues. Can compositionality accommodate the value of rules? A decision’s comprehensive evaluation considers the rationality of factors that influence the decision. A comprehensively rational decision not only maximizes utility among other possible decisions but also has a rational foundation. If an agent has rationally adopted a rule, then its adoption rationally influences the agent’s utility assignment, and following the rule maximizes utility with
respect to the resultant rational utility assignment. This influence is compatible with utility maximization among simple acts. A comprehensively rational agent may follow rules that she rationally adopts and also maximize utility among simple acts. The comprehensive rationality of the acts falling under the rule explains the comprehensive rationality of repeatedly following the rule. Compositionality accommodates the value of rules.22 Elster (1985: 150) notes that rule-following aids cooperation and so may explain cooperation.23 The rationality of following a rule depends on circumstances, in particular, whether an agent prefers following the rule to departing from it, all things considered. Suppose that the agent does not value following the rule for its own sake. Also, suppose that constantly observing the rule maximizes utility with respect to constant observance of any rival rule, but at some point the rule requires the agent to perform an act that he directly controls and that does not maximize utility. Performing an alternative act that maximizes utility is rational. Following the rule at that point is irrational despite the rule’s benefits. It requires acting contrary to all-things-considered preferences. Following a plan is closely related to following a rule. Suppose that a person decides to go to the store. As a result, he climbs into his car. Is not the act of climbing into the car rational because it is a means of going to the store? It seems so, yet the act of climbing into the car is a component of the act of going to the store. According to compositionality, the rationality of climbing into the car helps explain the rationality of going to the store. Deciding to perform an extended act, such as going to the store, provides reasons to perform its components. The plan’s rationality may make plans for its steps rational. Nonetheless, the rationality of all the steps makes the extended act rational. Explanation distinguishes the plan to perform the extended act from the extended act. An agent does not directly control an extended act. In contrast, the plan and subplans to perform its steps are all simple, directly controlled intentions. An agent intends A and B directly and not by intending A and intending B. Although the plan makes the extended act’s components rational to perform, rational components make the extended act rational. The plan motivates its steps’ executions, but an evaluation of the plan’s execution depends on an evaluation of its steps’ executions.24 A desire to realize an extended act may generate a desire to realize momentary acts that constitute the extended act. Still, the momentary acts explain the extended act, not the reverse. The extended act is not a cause of the momentary acts even if the desire for the extended act is a cause of the momentary acts. A desire to go to the store may be a reason for climbing into the car. That reason, not going to the store, is a cause of climbing into the car. Momentary acts may be justified by reasons for an extended act they realize. Nonetheless, a justification that appeals to the extended act is not an explanation of the momentary acts. The reasons for plans may have generational primacy among reasons, but the occurrence of momentary acts has generational primacy among occurrences of
acts. Basic evaluation for rationality follows primacy among occurrences of acts. The rationality of acts directly controlled explains the rationality of extended acts even if desires to perform extended acts generate desires to perform acts directly controlled. Direction of explanation differs for desire and for rationality. A person may want the whole and so want the parts, but the parts’ rationality explains the whole’s rationality. Suppose that the rationality of executing a plan explains the plan’s rationality. The plan’s rationality explains the rationality of desires to perform its components, and the rationality of those desires explains the rationality of performing the components. Then the rationality of executing the plan explains the rationality of performing its components. Does this line of reasoning refute compositionality? No, the suppositions are not all true. In a typical case, a plan’s adoption generates desires to perform its components, but the rationality of the components explains the rationality of the plan’s execution. A plan to go to the store explains the desire to climb into the car, but the desire just partly explains the act’s rationality.25 Suppose that in a variant of the case, when the person opens the car door, he remembers that the car is out of gas. The plan gives him a reason to climb into the car, but it does not alter his belief that the act is pointless. So he does not climb into the car. The reasons plans furnish are effective in rational people only if they generate current beliefs and desires, which, as Section 3.2 observes, direct current action.26 All plans with rational steps are rational. Suppose that some achieve superior forms of coordination. Does rationality require one of them and so more than rational steps? No, the value of the superior plans accrues to their steps. The rationality of their steps makes their execution rational. Gauthier (1997) and McClennen (2000: 23–26) hold that a plan is rational if and only if its steps are rational but say that the plan’s rationality rules the roost. According to their view, a plan is rational if it maximizes utility, and a plan’s maximizing utility makes its steps rational. If a utility-maximizing plan has a nonmaximizing part, they say that the nonmaximizing part is rational.27 An act may be part of two plans, one rational and one irrational. The statuses of the plans do not settle the status of the act. For example, a shopper may plan to go to the store and buy bread and may also plan to go to the store and buy bread and withdraw cash from the ATM. Suppose that the first plan is rational and the second is not because the shopper recently closed his bank account. Going to the store is not both rational and irrational, although it is part of a rational plan and also part of an irrational plan. According to McClennen (2004), a rational person adopts a maximizing plan and resolutely executes it. An act is rational if it is part of a maximizing plan adopted. As I understand his view, relevant information and goals are constant so that resoluteness is not obstinacy. Background idealizations ensure that any plan adopted has consistent steps. Spontaneous acts count as belonging to one-stage plans so that every act performed belongs to a plan adopted. Also, a plan adopted
has a starting time and duration, and these parameters establish a set of rival plans with which the plan adopted may be compared. If an act is part of several possible plans in the set, some maximizing and some not maximizing, the act is rational just in case the plan adopted is maximizing. The other plans do not matter. Belonging to a utility-maximizing plan nonetheless does not ensure an act’s rationality. In this section’s example of irrationality rewarded, a plan to gain $5 today by refusing $1 tomorrow is utility maximizing but has an irrational step. A rational individual does not adopt a plan that asks him at a time to forgo maximizing utility even if following the whole plan maximizes utility. At each time in a plan’s execution he freely acts according to his assignment of utility to options at the time. In ideal conditions that assignment weighs every relevant consideration.28 Rationality’s first principles evaluate options that are extended acts by examining their components. Derivative, shortcut principles may evaluate extended acts by evaluating plans to execute them. This shortcut evaluation works in many cases, but is not reliable when plans have relevant consequences in addition to the extended acts they generate. In routine cases, a plan, an intention to perform an extended act, and the extended act stand or fall together. But in unusual cases a plan’s evaluation may differ from the extended act’s evaluation. An objection to compositionality arises from Feldman’s case (1986: 52–57) of the two medicines. To treat an illness, it is best to take medicine A today and tomorrow. It is second best to take medicine B today and tomorrow. Mixing medicines is bad. Taking medicine B today is best given that one takes B tomorrow, and taking B tomorrow is best given that one takes B today. Yet two wrongs do not make a right. The sequence of acts, B today and B tomorrow, is not optimal. The patient should coordinate his acts at different times and should realize the superior form of coordination. He ought to realize the best sequence, not just a sequence in which each strategy is best given the other. Does utility maximization therefore apply to act-sequences rather than to steps in act-sequences? No, compositionality ensures consistency among evaluations with the same scope and conditions. Suppose that with respect to the condition that tomorrow the patient takes B, one evaluates the sequence of B today and B tomorrow and also evaluates B today. Both the sequence and today’s act are conditionally rational with respect to tomorrow’s act. Suppose that one evaluates acts and sequences for comprehensive rationality. The best sequence is comprehensively rational. Hence, the acts B today and B tomorrow are not comprehensively rational. Comprehensive rationality prevents mutually justifying mistakes. Their prevention does not require utility maximization among extended acts. Compositionality stands. The explanatory dependence of acts indirectly controlled on acts directly controlled guides the formulation of standards of rationality for directly and indirectly controlled acts. Initial versions of standards for acts indirectly
controlled may conflict with initial versions of standards for acts directly controlled because acts directly controlled constitute acts indirectly controlled. If the standards are inconsistent, then revisions to achieve consistency favor the standards for acts directly controlled because they are explanatorily basic. Other things being equal, one should achieve consistency by revising the standards for acts indirectly controlled. Chapter 7 relies on compositionality’s claims about direction of explanation to settle points concerning coordination, but other chapters rely only on compositionality’s weaker claims about entailment. They infer the rationality of an indirectly controlled act from the rationality of directly controlled acts that constitute it. This inference is the book’s main use of compositionality. This chapter draws three conclusions about rationality. First, rationality is attainable because its demands adjust to an agent’s cognitive limits and to limits decision problems impose. Chapter 7 revises common standards of rationality to make them attainable. Second, rationality may generate comprehensive evaluations of acts. These evaluations identify solutions to decision problems, including those that arise in games of strategy, the topic of Chapters 5 through 12. Third, rationality is compositional in its evaluation of extended acts. The next chapter shows that rationality is similarly compositional in its evaluation of collective acts, such as solutions to games of strategy.
4

Groups

This chapter extends to groups the account of rationality presented in Chapter 3. The extension advances the book’s general theory of rationality. The first section explains collective rationality. The second examines efficiency, the most commonly proposed standard of collective rationality. The third weighs another familiar proposal about collective rationality, namely, maximization of collective utility. The fourth argues for collective rationality’s compositionality. It contends that individual rationality explains collective rationality.
4.1 EXTENSION

This book takes collective rationality to be rationality applied to groups, in particular, their acts. It does not technically define collective rationality as efficiency, but instead treats efficiency’s collective rationality as a normative matter that basic principles of rationality settle. No common technical definition of collective rationality exactly expresses rationality’s extension to groups. Most technical definitions are best taken as normative principles of collective rationality. To introduce collective rationality, this chapter does not analytically define it by presenting necessary and sufficient conditions for it, but instead explicates collective rationality by describing its role in a general theory of rationality. The chapter’s characterization of collective rationality, although an application of the ordinary concept of rationality to groups, presents a theoretical construct. The general theory of rationality guides its construction. The general theory shapes an understanding of collective rationality to yield the best overall set of principles for individuals and groups.

Some points about collective rationality immediately follow from its roots in ordinary rationality. Rationality is prescriptive, and so collective rationality directs a group. Its chief directive is to meet the standards of evaluation for collective acts. Because rationality in the ordinary sense differs from morality, collective rationality also differs from morality. In particular, rational collective
action differs from fair collective action. Also, because an agent’s rationality yields rational acts, a group’s rationality yields rational acts. Moreover, a collective act is evaluable for rationality just in case it is free and in the agent’s full control so that it is exclusively the agent’s act and not also another agent’s act. Standards of individual and collective rationality are consistent within a general theory of rationality. Therefore, studies of individual and collective rationality are epistemically symbiotic. The demands of collective rationality establish points about requirements of individual rationality. In conflicts between intuitions about individual and collective rationality, intuitions about collective rationality usually yield because they are less firm. Nonetheless, the epistemic interplay between the two sets of standards is bidirectional. One may learn about individual rationality by considering the demands collective rationality places on individuals. New knowledge of standards of collective rationality may prompt reformulation of standards of individual rationality. An epistemic perspective anticipates mutual adjustment of formulations of standards for individuals and for groups. Extending rationality to groups reveals its general features. Those features are constant across rationality’s applications to individuals and to groups. Principles of rationality that seem general may apply only to individuals and not to groups. Then the principles are restricted, not general. Collective rationality is rationality writ large. Studying it displays rationality’s essence.1 Rationality evaluates acts. The essence of an act’s evaluability is the same for individuals and for groups. It is autonomous, full control. If a group’s act is constituted by acts its members control fully, then the group’s act is in the group’s full control. Standards of collective rationality arise from a group’s being a free agent, just as standards of individual rationality arise from an individual’s being a free agent. Both sets of standards arise from the same general standards of rationality. Any differences in applications of the general standards stem from differences in the agents to which they apply. The differences between collective rationality and individual rationality in particular cases stem from differences in the abilities and circumstances of the agents that are groups and the agents that are individuals. The differences in agents yield different derivative, special principles for individuals and for groups. The principles for individuals use mental states, for example. Rationality’s nonessential features may vary in applications to individuals and to groups because of variations in agents’ traits. Rationality’s general principles are for all agents, individual and collective alike, but distinguish between evaluation of acts directly and indirectly controlled. They evaluate acts directly controlled according to alternatives, and acts not directly controlled according to components. An evaluation of an individual’s simple act compares it with alternatives. An evaluation of an individual’s sequence of acts examines its steps. An evaluation of a group’s act examines the members’ acts constituting it.
Two familiar methods extend principles of rationality from individuals to groups. The first method proceeds by analogy. It starts with a principle of individual rationality, and then constructs a collective analogue. For example, it may start with the principle that an individual should pick an option at the top of her preference ranking of options. Then it may define a collective preference ranking of options and advance the principle that a group should pick an option at the top of its preference ranking of options. Similarly, to extend the principle of utility maximization, it may define collective utility, perhaps by summing individual utilities. Extending familiar principles for individuals requires defining a group’s options, beliefs, and desires. The definitions may require special conditions that restrict a principle’s application. To illustrate the need for restrictions, consider the analogy between coordination of a group’s members and coordination of an individual’s goals. Extension by analogy may direct a group to coordinate its members just as an individual coordinates pursuit of his goals. As a rational individual resolves conflict between his goals, a rational group resolves conflict between its members.2 The analogy between coordination of a group’s members and coordination of an individual’s goals does not yield analogous principles for evaluation of individuals’ and groups’ acts. An individual may rationally decide to put aside a goal in order to pursue other goals. He may abandon his dream of becoming a rock star in order to pursue a career in medicine, for example. A group, however, may not rationally discount a member’s interests. If a member acquiesces in a group’s neglect of his interests, he acts irrationally and the group does not act rationally in a comprehensive sense. Although in some cases one member’s irrationality may be inconsequential and not entail collective irrationality, a member’s irrational self-sacrifice is not an acceptable mistake and so entails a lack of comprehensive collective rationality. Another analogy holds between a group’s coordination of its members and an individual’s coordination of her life’s stages. A principle of collective rationality may claim that just as an individual acting rationally achieves coordination among her present and future stages, a group acting rationally achieves coordination among its members.3 Because persons are simple and not composite agents, different standards of rationality apply to persons and to groups. Standards of rationality obligate an individual to coordinate stages of her life. They may require forgoing happiness in one stage to achieve enduring happiness later. They do not obligate the members of a group to coordinate in similar ways. In particular, they do not obligate a member of a group to sacrifice her interest for the group’s interest. Instrumental rationality requires a person-stage to care about the person’s whole life, but does not require a group’s member to care about the whole group. A person’s stages are united by personal identity. They share a common consciousness. A group’s members lack that unification. They are, of course, members of the same group, but they have no common consciousness. For an
application of standards of rationality, a person’s stages are not analogous to a group’s members. Because of their metaphysical differences, an application of standards of rationality to a collection of stages of the same person diverges from an application of the standards to a collection of people. The second method of extension starts with general principles of rationality for all agents. Then it applies those principles to individuals to obtain principles of individual rationality, and also applies those principles to groups to obtain principles of collective rationality. For example, it applies the principle of consistency among acts to all agents in ideal circumstances. The method’s general principles respond to differences in agents. Take the general principle that a composite act is rational if its components are rational. Applying this principle to individuals, it says that an extended act is rational if the momentary acts that constitute it are rational. Applying this principle to groups, it says that a collective act is rational if the individual acts that constitute it are rational. Because applying the method requires an analysis of agents, I say that it works by analysis. Although both methods of extension are fruitful, Section 4.4 argues that the method of analysis is basic, and the method of analogy is derivative. Compositionality supports a basic principle of collective rationality asserting that the individual rationality of a group’s members suffices for the group’s collective rationality. This principle does not require attributing to groups preferences, beliefs, or anything mental at all. Evaluation by components is independent of technical definitions of collective analogues of an individual’s mental states. Because those definitions are stipulative, they may yield misguided standards of collective rationality at odds with individuals’ rationality. Individual rationality’s entailment of collective rationality refines the definitions so that they contribute to a general theory of rationality. Besides having general principles of rationality, a general theory of rationality should be unified. Its general principles should be applicable consistently. If its principles instruct a group to perform a certain act, its principles should not instruct the group’s members to perform incompatible acts. Section 4.4 unifies the book’s general theory of rationality by arguing that standards of collective rationality are compositional. A group’s act is rational if the acts of members constituting it are rational. The general theory of rationality makes collective and individual rationality consistent by making individual rationality foundational, so that it explains collective rationality. Analogy and intuition may suggest principles of collective rationality, but principles of individual rationality ground principles of collective rationality. The theory of collective rationality explains how to design a society to make it behave rationally, and the theory reconciles its design with the theory of individual rationality, which governs each member of the society. Individuals create and utilize opportunities for joint action if they are rational. Membership in a group influences appraisals of an individual’s rationality.
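As a toy rendering of the method of analogy described above, the sketch below stipulates a collective utility for each option by summing members’ utilities and directs the group to the top of the resulting ranking. The options, numbers, member names, and summation rule are illustrative assumptions, not the book’s endorsed standard of collective rationality; as the text notes, such stipulative definitions may need refinement to cohere with individual rationality.

```python
# A toy collective analogue of utility maximization: stipulate collective
# utility as the sum of members' utilities (one definition the method of
# analogy might adopt) and pick a top-ranked option. Purely illustrative.

member_utilities = {        # option -> (Ann's utility, Bob's utility)
    "option_x": (4, 1),
    "option_y": (2, 2),
    "option_z": (1, 5),
}

def collective_utility(option):
    return sum(member_utilities[option])

best = max(member_utilities, key=collective_utility)
print(best, collective_utility(best))   # option_z 6
```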
The theory similarly suggests plans for artificial societies such as teams of interacting robots. Each robot obeys its program, whose objectives yield its preferences, but its program may contain instructions for sensibly using opportunities for coordination. The instructions to each robot should yield a team of robots that act well collectively.4

A theory of collective rationality directs groups. Principles of rationality direct a group by directing its members. The principles tell the members to take advantage of opportunities for coordination. A group achieves collective rationality if its members are rational individually. In favorable circumstances the members' individual rationality achieves a goal of collective rationality, such as efficiency. Being in a group alters personal rankings of options, and favors options that contribute to profitable joint acts. Rational individuals take reasonable steps to put themselves in a position to realize goals of collective rationality. Comprehensive standards of collective rationality require that individuals prepare for collective action problems, and for opportunities to perform mutually beneficial joint acts. Being a member of a group may influence options, beliefs, and desires so that individual rationality leads the group's members to realize a goal of collective rationality. Standards of collective rationality are sensitive to circumstances and vary according to background assumptions. Individuals who rationally prepare for collective action problems and then rationally resolve those problems attain the standards of collective rationality that apply to them. They need not neglect their own rationality to achieve collective rationality.

4.2 Efficiency

Sen (2002: 290) mentions (without endorsing) a purely procedural conception of collective rationality. It holds that collective rationality is whatever emerges from sensible social institutions. This view is too tolerant. Sensible institutions in the hands of irrational individuals may fail to generate collectively rational acts.

Standards of collective rationality come in various forms. Besides procedural standards, substantive standards of collective rationality govern groups. Consistency among multiple acts is a common substantive standard or necessary condition of collective rationality. A paradigm case of irrational collective action is inconsistent collective action, say, a committee's first imposing a regulation, next repealing it, and then imposing it again without any relevant change in circumstances. The committee's members should alter their votes or voting procedures to prevent the cycle of committee acts, at least if conditions are ideal for joint action. Evaluating multiple acts by a group for consistency does not require attributing a mind to the group. Collective consistency is a straightforward extension of individual consistency.5

Efficiency is another common standard of collective rationality. It requires a collective act such that no rival has everyone's favor. A group's achieving
efficiency is analogous to an individual’s picking an act not bested by another act according to each of his goals. This section examines the standard’s grounding. It asks when collective rationality requires efficiency. Efficiency taken as a product of collective rationality is Pareto optimality. An outcome is Pareto optimal if and only if no feasible alternative is Pareto superior. Theorists such as Myerson (1991: 378) distinguish ordinary (or strong) and weak Pareto optimality. The distinction rests on the definition of Pareto superiority. Ordinary Pareto superiority requires improvement for some without worsening for any. Strict Pareto superiority requires improvement for all. It makes superior outcomes rarer, optimal outcomes more common, and Pareto optimality weaker. An outcome is (strongly) Pareto optimal if and only if no feasible alternative is better for some and at least as good for all. An outcome is weakly Pareto optimal if and only if no feasible alternative is better for all. Because justifying the weaker requirement is easier than justifying the stronger requirement, I take efficiency as weak Pareto optimality (except occasionally when I specify otherwise). Efficiency may be defined in terms of individuals’ preferences instead of improvements for individuals. Then it has a subjective instead of an objective interpretation. When individuals prefer what is better for them, the two interpretations agree. To accommodate cases in which individuals do not know what is better for them, I adopt the subjective interpretation of efficiency. Accordingly, in a group’s collective action problem, for any inefficient collective act possible, there is a collective act possible such that every member of the group prefers it. Efficiency does not require collective preferences. That is a source of its popularity. To ground efficiency in a group’s pursuit of its preferences, however, one may stipulate that a collective preference arises only from unanimous individual preferences. Then a group prefers no alternative act to an efficient act, for no alternative act is such that all members prefer it to an efficient act. Because efficiency is comparative, its application requires specifying collective options. Groups lack minds, and so a group’s options are not possible decisions. Because a group does not make decisions, it does not incur decision costs. Its members incur them. Similarly, a group, lacking doubts, is not subject to uncertainty about execution of its acts. However, its members may be uncertain that doing their parts in a collective act will yield that act. A group’s options should be specified so that principles of collective rationality accommodate members’ costs of realizing an option and their uncertainty about realizing it. This chapter accommodates costs and uncertainty by assuming that a group’s options are combinations of its members’ options. Moulin (1995: 6; 2003: 24) takes efficiency as the sole noncontroversial normative standard in economic theory and states that collective rationality is efficiency. The Prisoner’s Dilemma, however, raises doubts about using efficiency as a standard of collective rationality. Section 5.1 presents this two-person game in detail. This section just previews its basic features. Each prisoner does better by not cooperating than by cooperating, no matter what the other prisoner does.
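Before returning to the Dilemma, the grades of efficiency distinguished above can be stated algorithmically. The following sketch is an illustration rather than part of the book's apparatus; it represents feasible collective acts as utility profiles, one utility per member (the profiles are invented and echo the Dilemma's payoffs), and tests weak and strong Pareto optimality.

    from typing import List, Tuple

    Profile = Tuple[float, ...]  # one utility per group member

    def strictly_superior(a: Profile, b: Profile) -> bool:
        """a is strictly Pareto superior to b: better for every member."""
        return all(x > y for x, y in zip(a, b))

    def ordinarily_superior(a: Profile, b: Profile) -> bool:
        """a is ordinarily superior to b: better for some member, worse for none."""
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def weakly_pareto_optimal(p: Profile, feasible: List[Profile]) -> bool:
        return not any(strictly_superior(q, p) for q in feasible)

    def strongly_pareto_optimal(p: Profile, feasible: List[Profile]) -> bool:
        return not any(ordinarily_superior(q, p) for q in feasible)

    feasible = [(2, 2), (3, 0), (0, 3), (1, 1)]
    print(weakly_pareto_optimal((1, 1), feasible))    # False: (2, 2) is better for both
    print(strongly_pareto_optimal((2, 2), feasible))  # True: no ordinarily superior rival

Because strict superiority is rarer than ordinary superiority, every strongly Pareto optimal profile is also weakly Pareto optimal, but not conversely.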
Utility maximization recommends not cooperating, because the prisoners cannot communicate, and their acts are causally independent. If both prisoners follow this recommendation, each is worse off than if both had cooperated. Sen (2002: 212) expresses the common view that collective rationality and individual rationality conflict in the Prisoner’s Dilemma.6 The two prisoners form a group (according to Chapter 2) despite their lack of interaction. Standards of collective rationality apply to their combination of acts. However, collective rationality does not conflict with individual rationality. Reconciliation sets the standards of collective rationality at a modest level. The prisoners cannot communicate to reach a binding agreement to cooperate. Although the prisoners understand their situation, because their conditions are not ideal for joint action, collective rationality does not demand cooperation. Adverse circumstances excuse a failure to cooperate. The demands of collective rationality adjust to the prisoners’ circumstances, and do not insist on efficiency. In general, efficiency is just a goal of collective rationality. Rational individuals in nonideal circumstances may fall short of a goal of collective rationality and yet produce a combination of acts that is collectively rational. Efficiency’s being a goal rather than a requirement resolves the conflict between individual rationality and collective rationality.7 Collective rationality is not a burden imposed on a group’s members. It arises from the members’ rationally exercising their membership in the group. Collective rationality is good, and a group’s members do not have to act contrary to their preferences to achieve this good. Their preferences, if rational, lead to collective rationality. Collective rationality, understood as an extension of rationality to groups, is not at odds with individual rationality. It is attainable, sensitive to circumstances, and unified with individual rationality. Although a pair in the Prisoner’s Dilemma may achieve collective rationality without achieving efficiency, it should take advantage of opportunities to transform its situation so that achieving collective rationality in the new situation yields efficiency. For instance, suppose that the prisoners may arrange opportunities for binding contracts. That arrangement advances goals of collective rationality. It removes obstacles to efficiency. Expanded and unified, the theory of rationality recommends that the prisoners make the arrangement, if possible. Collective rationality is extrapolation, and intuitions about it need confirmation by first principles. Confirmation derives a requirement for a group’s rationality from requirements for each member’s rationality. Efficiency lacks that confirmation in nonideal cases such as the Prisoner’s Dilemma. The same holds for nonideal cases in which a group’s members lack relevant information. For example, collective rationality does not demand a group’s efficiency if its members are not aware of other members’ options and desires. Standards are necessary conditions of collective rationality. Goals are standards in ideal conditions. A group’s failure to meet a goal of collective rationality may be excused. In the Prisoner’s Dilemma, isolation excuses the prisoners’ failure to
meet the goal of efficiency. Goals of collective rationality, because their attainment benefits each, motivate rational ideal agents even when they face obstacles. They may motivate agents to design social institutions that remove obstacles. Each individual favors an institution that promotes efficiency because each profits from efficiency. As Chapter 12 explains, a state should design social institutions so that citizens pursuing personal utility achieve efficiency. Efficiency is a goal of collective rationality because it is universally beneficial. In the Prisoner’s Dilemma, each agent desires an outcome better for each than joint defection. Together the agents have the means of achieving it, and yet if rational they do not achieve it. The answer to this puzzle is that the means involve irrational acts. Rational agents need means rational to use. In the Prisoner’s Dilemma, agents have only irrational means of realizing efficiency. If they realized that goal, they nonetheless would be collectively irrational from a comprehensive perspective. A group’s goal is technically defined as a goal of each member of the group. Effective motivation for a group requires effective motivation for its members. A goal of collective rationality must be supported by individual rationality. Efficiency is a goal of collective rationality because attaining it offers a benefit to each member. A rational individual need not prefer efficiency of any sort to inefficiency of any sort. An agent may rationally prefer a collective act that is not efficient to one that is efficient because the former benefits him more than the latter does. However, given an inefficient collective act, there is an efficient collective act that every member prefers, if informed. Members can bring it about by joint action. A feature of a collective act, such as efficiency, is a goal of collective rationality if in ideal conditions for joint action rational ideal agents generate a collective act with that feature. Each is motivated to do his part. It is not necessary that each is motivated to achieve the collective goal. Adam Smith ([1776] 1976) imagined a society achieving the common good as if an invisible hand directed it, while in fact its achievement results from acts of individuals promoting their own interests. A group may realize a goal of collective rationality as if an invisible hand directs it. Individuals need not endorse the goal. Its realization need only be a product of acts they endorse. They need not intend to go where the invisible hand leads. The goal’s realization requires only individual rationality in favorable circumstances. Outsiders may create those circumstances. Efficiency need not be a goal for each agent in a collective action problem. Efficiency is nonetheless a goal of collective rationality. Social institutions may promote realization of a goal of collective rationality, such as efficiency, without counting on individuals to have that goal. An invisible hand, the effect of a well-designed social institution, may guide unapprised individuals to the goal’s realization. A goal of collective rationality is such that in ideal conditions, a fully rational, informed ideal individual desires to do his part. For example, each individual prefers doing his part in a move from an inefficient to an efficient outcome. In
ideal conditions, fully rational ideal individuals want efficiency for the benefit it brings each. This account of goals of collective rationality does not attribute goals to a group, but only to its members. Does a group’s failure to achieve efficiency entail an individual’s failure to meet a goal of rationality? The individuals in the Prisoner’s Dilemma want the benefits of cooperation and wish binding agreements were possible. Circumstances frustrate their aspirations. They wish conditions were ideal for joint action. Despite nonideal conditions, they may still maximize informed utility. They may meet goals of individual rationality. A failure to meet goals of collective rationality does not entail a failure to meet goals of individual rationality. Ideal conditions for attaining goals of individual rationality are less demanding than ideal conditions for attaining goals of collective rationality. What are the ideal conditions for joint action? If there are several goals of collective rationality, conflicts among them, if any, are resolved in ideal conditions. Individuals are comprehensively rational and so have rational beliefs and desires. Cost-free communication and binding contracts are available. Also among ideal conditions are mechanisms for distributing the risk of a beneficial collective act. To illustrate, suppose that an army patrol benefits from reconnaissance of hostile territory. While exploring, the soldier at the patrol’s front runs a higher risk of injury than do the others. The patrol can distribute risk by having soldiers take turns at the point. A group often benefits from a risky joint act, but it is not rational for each member to participate in the joint act unless the group can distribute the risk among its members so that each may expect to gain from participation. In ideal conditions for joint action, a group may distribute risk, perhaps through a binding contract. In ideal conditions, rational individuals attain goals of collective rationality as they use opportunities to communicate, coordinate, and perform joint acts. Individuals are motivated to act jointly because of coordination’s benefits. For example, they see the benefit of a contract in the Prisoner’s Dilemma. Efficiency is not a standard of collective rationality because individual rationality need not yield efficiency in nonideal conditions. It is a goal of collective rationality because in ideal conditions it emerges from individual rationality. Idealizations launch the theory of collective rationality. I treat some nonideal collective action problems, but even those problems retain many idealizations. Rolling back idealizations and generalizing principles to cover realistic collective action problems is a profitable direction for future research. Pettit (1993: 293–95) and Dutta (1999: 123) observe that symmetry is a principle of collective rationality that conflicts with efficiency. The principle says that if a game is symmetric, then its outcome should be symmetric, too. Chapter 5 examines games, and this section just reviews points about them essential for comparing symmetry and efficiency. Symmetry is a requirement of consistency demanding like treatment of like cases. It demands consistency in treatment of individuals. Pettit applies the
principle to a symmetric game with the payoff matrix in Table 4.1.

Table 4.1 Symmetry versus Efficiency

                 Left      Right
    Up           3, 2      0, 0
    Down         0, 0      2, 3

The rows represent a player Row's choices, and the columns represent another player Column's choices. The pair of numbers in a cell represents first the payoff for Row and second the payoff for Column, given a combination of the players' choices. In this game the players make their choices without communication. Two rational players make a pair of choices such that each choice is a best response to the other. Their choices thus form a Nash equilibrium. Players may select probability mixtures of the pure strategies the matrix represents. For example, Row may select a probability mixture of Up and Down, say, a mixture in which the probability of Up is 0.6 and the probability of Down is 0.4. Using abbreviations of pure strategies, the mixture's representation is (0.6 U, 0.4 D). In addition to the two Nash equilibria in pure strategies, (U, L) and (D, R), a Nash equilibrium obtains if Row adopts the mixed strategy (0.6 U, 0.4 D), and Column adopts the mixed strategy (0.4 L, 0.6 R). This Nash equilibrium yields the symmetric payoff profile (1.2, 1.2). The Nash equilibrium in mixed strategies is, however, worse for each player than the Nash equilibria in pure strategies. Realizing the symmetric equilibrium outcome conflicts with the principle of efficiency.

Because the players cannot communicate, conditions are not ideal for joint action. Even so, the principle of symmetry does not overturn the case for efficiency. Departure from symmetry does not entail irrationality by any individual. If the players realize (3, 2) instead of (1.2, 1.2), neither player's irrationality follows. Symmetry lacks confirmation by individual rationality.

To reconcile symmetry with efficiency, one may make the principle of symmetry more perspicacious. It should respond to symmetry in all relevant matters, not just the payoff matrix. It should survey the players' psychologies in a realization of the game in Table 4.1. Perhaps both players know that Column will follow Row's lead. That asymmetry makes the game's outcome (3, 2), but it is an asymmetry that the payoff matrix does not represent. Requiring pervasive symmetry preserves the principle of symmetry. Also, a reasonable principle of symmetry constrains rationality's requirements and not its permissions. It permits an asymmetric outcome and rejects only requiring an asymmetric outcome. In the game shown in Table 4.1 rationality permits (3, 2) but does not require it. It also permits (2, 3). Collective rationality may require realizing either (3, 2) or (2, 3) because these profiles are efficient and the mixed equilibrium is inefficient. The disjunctive requirement does not violate symmetry because it is a symmetric requirement. It treats players the same way.
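A quick computation confirms the mixed equilibrium just cited. In a mixed-strategy Nash equilibrium, each player's mixture leaves the other indifferent between her pure strategies, so neither gains by deviating. The following sketch, offered as an illustration only, computes the relevant expected payoffs for the game of Table 4.1.

    # Payoffs for (Row, Column) in the game of Table 4.1.
    payoffs = {
        ("U", "L"): (3, 2), ("U", "R"): (0, 0),
        ("D", "L"): (0, 0), ("D", "R"): (2, 3),
    }
    row_mix = {"U": 0.6, "D": 0.4}
    col_mix = {"L": 0.4, "R": 0.6}

    def row_value(r):
        # Row's expected payoff from pure strategy r against Column's mixture.
        return sum(col_mix[c] * payoffs[(r, c)][0] for c in ("L", "R"))

    def col_value(c):
        # Column's expected payoff from pure strategy c against Row's mixture.
        return sum(row_mix[r] * payoffs[(r, c)][1] for r in ("U", "D"))

    print(row_value("U"), row_value("D"))  # 1.2 1.2 -- Row is indifferent
    print(col_value("L"), col_value("R"))  # 1.2 1.2 -- Column is indifferent

Each player's pure strategies earn the same expected payoff against the other's mixture, so the mixed profile is a Nash equilibrium, and its payoff profile is the symmetric but inefficient (1.2, 1.2).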
4.3 Collective Utility

Because collective rationality originates in individual rationality, one may evaluate a group by evaluating its members. However, evaluating a group collectively may be easier than evaluating its members individually. If a group's act falls short of a standard of collective rationality, then some member's contribution is irrational, but criticizing the group's act does not require identifying a faulty contribution. Principles of collective rationality, in particular, those obtained by analogy, may simplify a group's evaluation. They may assess a collective act more easily than standards of individual rationality assess it. In ideal conditions rational individuals act so that their collective acts meet standards of collective rationality such as efficiency. Individual rationality yields collective acts that are as if the product of a collective mind. Because individual rationality yields collective rationality, one may sometimes deduce the individual acts that are rational from the collective acts that are rational. This is a useful inferential shortcut.

All principles of collective rationality rest on factors that motivate rational individuals. Principles that pursue analogies are auxiliaries to analytic principles. To simplify, they evaluate collective acts using technical definitions of collective options, preferences, and so on. Analytic principles of evaluation not resting on stipulative, technical definitions justify the auxiliary principles. Suppose that one defines collective preference as unanimity. Then, following collective preferences amounts to achieving efficiency. The case for rational individuals achieving efficiency in ideal cases supports the derivative principle of following collective preferences in those cases. Suppose that one defines collective utility for applications of the standard of collective utility maximization in ideal cases. Then the standard's agreement with combinations of rational individuals' acts in those cases supports the standard and the accompanying definition.

Definitions of collective preferences, collective utility, and so on suggest standards of collective rationality. The definitions, although stipulative, must be apt if the standards are to succeed. Subjecting a collective act to belief-desire standards of evaluation requires defining collective beliefs and desires in a way that justifies application of those standards. The standards have to ground evaluation of a group's act in appropriate features of the group. Using the technical definitions to derive an analogical standard from basic compositional standards of collective rationality, and so from standards of individual rationality, confirms it. The derivation shows that a group meets the standard if each member acts rationally. If collective analogues of an individual's mental states, say, collective preferences, are defined in terms of features of individuals and then are used to calculate
a rational collective act, the analogues are dispensable. The analogues serve only as a convenient summary of features of individuals. They yield shortcut methods of calculating a rational collective act. Calculations may move directly from individuals’ features the definition mentions to a rational collective act. For example, the standard of following collective preferences, with a collective preference defined as a unanimous preference, may dispense with collective preferences and go directly from unanimity to a collective act. This section reviews some common analogical principles of collective rationality, namely, pursuing collective interests, following collective preferences, and maximizing collective utility. Given plausible accounts of collective interests, preferences, and utility, the principles may require an individual to sacrifice himself for a group and so may exceed rationality’s demands. To accommodate differences between groups and individuals, these principles need restrictions. They are not general standards of collective rationality. Consider the principle that a group should promote its interests. The principle compares a group’s act with alternatives and declares it collectively rational only if, considering the alternatives, it promotes the group’s interests. The principle presumes an account of a group’s interests. A plausible account makes a group’s interests depend on its members’ interests and the group’s function. Because a group may endure as its membership changes, and because a group’s function may conflict with its current membership’s interests, pursuing the group’s interests may require its current members to sacrifice their interests. A philosophy department outlasts its current members and has functions that its current members may not endorse. Its interests extend beyond the interests of its current members. When it fills a faculty position, its interests may conflict with some members’ interests. Suppose that hiring in metaphysics best serves the department’s interests, but hiring in ethics best serves some members’ interests. Those members may have to vote contrary to their interests if hiring is to promote the department’s interests.8 A group acts rationally at a time if its members at the time act rationally. Collective rationality does not require that a group promote its interests. A club may rationally disband, although that is contrary to the club’s interests. Rationality does not require a group’s members to care about the group’s interests. Its current members may neglect the group’s interests because of indifference to them. To make the principle of collective interest compatible with compositionality, one may restrict it to groups constituted exclusively by their current membership and having no function other than serving the interests of the current membership. Moreover, an account of a group’s interests must aggregate its members’ conflicting interests, and that aggregation may succeed only in special cases. Then one must restrict the principle to those cases. In addition, the principle requires common idealizations about members’ rationality and extra idealizations about members’ knowledge of the group’s interests.
Next, consider the principle that collective rationality requires following collective preferences. This principle evaluates a collective act by comparing it with alternatives. It needs a technical definition of collective preferences. Arrow’s Theorem (1951) establishes the incompatibility of certain plausible conditions for a group’s preference ranking of options. It casts doubt on the existence of collective preferences in cases where individuals’ preferences diverge. To put aside those doubts, the definition and principle may treat cases in which a group’s members are unanimous. Accordingly, if a group’s members unanimously prefer one course of action to another, the group has a preference for the first course of action and, if rational, acts according to that preference in ideal conditions for joint action. The principle to follow collective preferences does not go beyond efficiency, however, if only unanimity generates a collective preference.9 To make the principle agree with compositionality, collective preferences should be defined so that in cases where they exist, if a group acts contrary to its collective preferences, then some members act irrationally. Under some conditions, collective preferences going beyond unanimity may be plausible. If individual preferences are single-peaked, then majority rule yields a collective preference ranking of collective options, as Black (1948) observes. In these circumstances, individuals voting according to their preferences select a collective act at the top of their collective preference ranking. A problem remains. If collective preferences resolve conflicting preferences among members, then following collective preferences is contrary to some members’ preferences. Suppose that a collective benefit is at the top of the collective preference ranking, but obtaining it requires a member’s self-sacrifice. He rationally declines to sacrifice himself. Then the group does not exercise collective rationality by achieving the benefit. Following collective preferences goes wrong because evaluation by comparison presumes direct control of options at a time, and a group does not directly control options at a time. It works through its members. The principle needs restriction to agree with compositional standards of collective rationality. It may restrict options. Instead of taking options as all collective acts that a group fully controls, it may take options as all such acts that involve no irrational act by any member. Another common analogical, comparative standard for a collective act is social optimality, that is, maximization of collective utility. According to the usual definition of collective utility, it is a sum of individuals’ utilities. The principle to maximize total utility assumes interpersonal comparison of individuals’ utilities. This assumption limits the principle’s range of application.10 A group may fail to maximize collective utility although each member maximizes utility individually. This happens in the Prisoner’s Dilemma. The group has an excuse for failing to maximize collective utility. The players are unable to communicate and to reach a binding agreement. Collective rationality differs from collective utility’s maximization. It is sensitive to agents’ limitations. Context provides excuses for failure to maximize collective utility but not for collective
irrationality. Collective rationality is attainable despite obstacles to collective utility’s maximization. A group may be collectively rational even if it fails to maximize collective utility. A plausible standard of social optimality treats cases ideal for joint action. To apply a utilitarian standard of rationality to groups, one must define a group’s options and utilities. The technical definitions have as their objective an assessment of a group’s act that agrees with an assessment of members’ acts constituting the group’s act. The definitions, if useful, make the members’ rationality entail the group’s rationality according to the utilitarian standard. That is, rational acts by the members yield a collective act that is rational according to the utilitarian standard applied using the group’s options and utilities as technically defined. Utility maximization’s extension to groups defines collective options and utilities so that maximizing collective utility yields a result of rational individual actions in ideal conditions. Application of the utilitarian standard to a group is a shortcut for evaluation of its members’ acts. Individuals in ideal conditions should act so that they realize an act rational according to the utilitarian standard for the group.11 When information is incomplete, the principle to maximize collective utility requires an account of a group’s probability assignments to outcomes. Collective probability requires pooling information and applying inductive logic to reach a common probability assignment. Otherwise, incoherent decisions may arise from maximization of collective utility. Suppose that a group’s members have unanimous preferences, and these preferences generate their collective acts. Their collective acts although nonconditionally rational may not be comprehensively rational because the members’ preferences arise from conflicting beliefs. For illustration, suppose that two politicians, Hawk and Dove, each favor a summit meeting. Hawk wants it because he believes it will exacerbate tensions and provoke war. Dove wants it because she believes it will oil the waters and promote peace. They unite to arrange the meeting. Their joint act is nonconditionally rational but not comprehensively rational. Comprehensive collective rationality, assuming ideal conditions, requires joint acts justified by a shared consistent body of information. In nonideal conditions, individuals may have asymmetric information and no reason to share information. A requirement to maximize collective utility should assume ideal conditions in which comprehensive rationality yields collective probabilities.12 An analogue of Good’s generalization of utility maximization (1952) applies to cases in which individuals’ attitudes do not ground precise collective probability and utility assignments. It says to maximize collective utility under a quantization of collective beliefs and desires, that is, a collective probability and utility assignment compatible with collective beliefs and desires. Assuming unanimity about the top option, that option is rational according to the principle of quantization. All admissible probability and utility assignments put it at the top. The principle of efficiency follows from maximization of collective utility under a quantization.
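A small finite example may make maximization under a quantization concrete. Suppose the group's imprecise beliefs and desires settle only that its collective utility assignment lies within a set of admissible assignments; the assignments below are invented for illustration. An option that every admissible assignment puts on top, such as an option unanimously ranked first, is rational according to the principle of quantization.

    # Each admissible assignment maps collective options to collective utilities.
    # Imprecision: the group's attitudes settle only that the operative assignment
    # lies somewhere in this set -- a quantization of its beliefs and desires.
    admissible = [
        {"a": 5.0, "b": 3.0, "c": 1.0},
        {"a": 4.0, "b": 2.5, "c": 3.0},
        {"a": 6.0, "b": 5.9, "c": 2.0},
    ]

    def tops(assignment):
        best = max(assignment.values())
        return {option for option, u in assignment.items() if u == best}

    # Options that maximize under every admissible assignment:
    under_all = set.intersection(*(tops(w) for w in admissible))
    print(under_all)  # {'a'} -- the top option under every admissible assignment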
Revealed preference theory applied to groups stipulates that a group (weakly) prefers a collective act it performs to alternative collective acts. This technical definition of collective preference does not require any aggregation of individual preferences. Maximization of collective utility may treat collective probability and utility as mathematical representations of collective preferences revealed in rational collective acts. The standard then says that a group should act so that the acts it performs are utility maximizing under some probability and utility assignment. This standard makes collective utility maximization a requirement of coherence for multiple collective acts. Such maximization ensures conformity with a coherent collective preference ranking of options. As background for a set of collective acts, the standard assumes, besides the usual idealizations, shared probability-settling information about possible outcomes and individuals' constant unanimous preferences concerning possible outcomes.

Suppose that collective utility is defined as a sum of individuals' utilities, rather than in terms of collective acts, and that individuals' utilities exist. Then maximization of collective utility directs a group's act even when many options are efficient, as in bargaining problems, and even when individuals are uncertain of the efficient options. These are practical features. However, maximization of collective utility is not a general standard of collective rationality. The requirements for collective utility assignments limit its scope. Moreover, it is not a general goal of collective rationality, and so it is not a requirement even in ideal cases with conditions perfect for its attainment. Maximization of collective utility ignores distribution of collective utility and may demand irrational self-sacrifice. Suppose that a rich man without reason gives money to two poor associates. They gain more utility from the transfer than he loses. Collective utility increases. Suppose that the donation maximizes collective utility. Still, if it is unacceptably contrary to reason, collective rationality does not require the transfer.

Evaluation of a group's act may have broad or narrow scope. Collective rationality in a comprehensive sense requires rational input, process, and result. A single member's irrationality destroys comprehensive rationality unless the mistake is acceptable. Even in ideal conditions, if a group does not use force, and its members act freely, the group can control its members' acts only by offering incentives to its members. A collectively rational joint act has everyone's support. Maximization of collective utility is a standard of collective rationality only if restricted to collective options composed of individuals' rational acts. Utility maximization for a group requires only as much collective utility as can be achieved by the individual rationality of the group's members. It should compare only collective acts that yield for each individual at least as much utility as he can obtain on his own. A group should maximize utility among options that do not require any member to act irrationally, just as an individual should maximize utility among plans that do not require him to act irrationally at any time. The principle to maximize
collective utility must restrict options, as must versions using quantizations and revealed preferences.13

4.4 Compositionality

This section argues that individual rationality entails and moreover explains collective rationality. It applies to collective rationality the points that Chapter 3 makes about rationality's compositionality. According to compositionality, rationality evaluates acts directly controlled by comparisons and acts indirectly controlled by components. A group controls combinations of acts its members control. However, it does not directly control any acts. It controls a collective act only through its members' control of their acts. Its acts depend on theirs. Rationality evaluates a group's act by examining the directly controlled acts of members that constitute the group's act. Those acts are the group's means of performing its collective act. Autonomous, fully controlled collective acts are composite acts not directly controlled, and first principles evaluate them according to their components, namely, the acts of individuals that yield the collective acts. A collective act is rational if the individual acts that constitute it are rational.

Establishing collective rationality's compositionality requires showing that individual rationality entails collective rationality. The argument observes that members' acts constitute a group's act. Hence if the group should have acted differently, some member should have acted differently. Suppose that a group acts irrationally. It should have acted differently. It acts differently only if its members act differently. So they should have acted differently. At least one should have acted differently. At least one acts irrationally. Therefore, a group's act is irrational only if some member's act is irrational.

At a committee meeting it may be rational for each committee member to speak, but irrational for all to speak together. Details preserve compositionality. It is rational for each to speak while others listen. When all speak at once, not each performs the act rational for him. If someone says that a state's economic policy is irrational but denies that any of its citizens is irrational, the assertion typically assumes ideal conditions for policy adoption and nonideal conditions for citizens' acts. Individual rationality's entailment of collective rationality holds for standards of the same scope granting the same conditions for a group and each member.

Is collective rationality too easy if individual rationality entails collective rationality? Rationality requires individuals to use opportunities to promote their goals. Membership in a group creates opportunities for joint action. The members of a committee, if rational, take steps to make the committee's acts consistent. In typical cases, no member gains from wasting time in counterproductive collective acts. Collective rationality, although a consequence of individual rationality, may be demanding.
Suppose that a colonel orders a platoon to climb a hill, but orders each member of the platoon to stay in the valley. The orders are inconsistent. The platoon and its members cannot obey them. Suppose that the master of ceremonies directs a crowd to stand but directs each individual to remain seated. The directives are inconsistent. The crowd and its members cannot obey them. A general theory of rationality has principles treating both individuals and groups. It does not direct groups to act contrary to its directives to the group's members. Its principles are consistent. The theory does not condemn a group's act and condone its members' parts in that act. The members' acts constitute the group's act. Constitution creates consistency constraints. Rationality cannot direct the group without directing the members, and it cannot guide people if it tells them to do the impossible. If it issues directives that cannot all be fulfilled, it fails as a guide. Individual and collective rationality issue practically consistent directives, that is, directives that can all be fulfilled together.

A standard of collective rationality is incompatible with standards of individual rationality if the standards for individuals demand acts that do not realize the collective act that the standards for collectives demand. For standards of individual and collective rationality to be consistent, meeting a standard for a group must not entail violations of a standard for a member. Individual rationality's entailment of collective rationality (a consequence of collective rationality's compositionality) ensures consistency of individual rationality and collective rationality. The principle of compositionality achieves consistency among evaluations of collective acts and the individual acts that constitute them. It establishes consistency among directives to groups and to individuals.

It may seem that rationality tells different agents to do incompatible acts. Doesn't it tell each player in a chess game to win? No, because rationality is attainable, it tells each player to try to win. Each can play rationally, although both cannot win. The loser may also have played rationally.14

Some theorists claim that standards of individual and collective rationality need not be consistent. In the Prisoner's Dilemma, they claim, individual rationality pushes a group's members to acts that entail their collective irrationality. However, a theory of rationality fails catastrophically if it sends a group and its members in opposite directions. Its standards direct individual and collective acts, and so it resolves conflicts. All agents, individual and collective alike, may simultaneously meet rationality's demands. No agent's rationality requires another agent's irrationality. A requirement R's entailing a requirement S does not establish the consistency of R and S unless R is consistent. Because individuals may all be rational without inconsistency, their individual rationality's entailing their collective rationality establishes the consistency of individual rationality and collective rationality.

According to compositionality, if the members of a group each act rationally, the collective act they produce is rational. However, the converse entailment does not hold. A group's collective rationality may not require each member's
rationality. Its collective rationality may be realized in various ways, not only by every member’s rationality. For example, a group may elect rationally although one member votes irrationally.15 The full rationality of a group’s act, in contrast with its rationality, requires the rationality of all constitutive acts. Full rationality also entails comprehensive rationality. However, the reverse entailment does not hold. Comprehensive rationality permits inconsequential mistakes, whereas full rationality does not. Hence, a group’s act may attain comprehensive rationality without each member’s rationality. This happens when its members’ departures from rationality are inconsequential, and so are acceptable. Only in special cases is a group’s act rational just in case each member’s part is rational. For example, a group’s agreement requires every member’s consent and is not collectively rational unless each member’s consent is rational. Also, although a group’s passage of a resolution may be rational despite a few irrational votes, the members’ voting-profile, a more fine-grained collective act, is not rational unless every member’s vote is rational. A group profits from its members’ universal rationality because its members profit from their own rationality. Universal rationality is a goal of collective rationality because in ideal conditions collective rationality requires universal rationality. It may not require universal rationality to resolve a coarse-grained action problem, but requires universal rationality to resolve all action problems, including fine-grained action problems. Universal rationality furnishes a definition of a collective act’s rationality in a noncomprehensive sense that puts aside evaluation of its production. A group’s act is collectively rational if and only if it is pragmatically equivalent to an act that issues from universal rationality, that is, an act the group may perform if all its members were to act rationally. A group’s performing a collectively rational act may not require all members to act rationally. An act that issues from universal rationality may also issue from nonuniversal rationality. When a group elects the best candidate despite some irrational votes, its act is rational because it duplicates an act issuing from universal rationality. Not everything entailed by universal rationality is necessary for collective rationality. Universal rationality entails itself, and yet universal rationality is not necessary for collective rationality. It demands more than collective rationality requires. Consequently, the definition of noncomprehensive collective rationality in terms of universal rationality requires only acts’ pragmatic equivalence and not their identity. If all members of a group are rational, then its act has a certain outcome. Any act with an outcome the same in all relevant respects is also collectively rational. The relevant respects depend on what matters to the group’s members in the context of their act. Compositionality claims that rational acts by individuals create and do not just entail a rational collective act. According to it, the rationality of a group’s members explains the group’s rationality. Hence rationality’s standards for
individuals explain its standards for groups. For example, rationality requires a group’s act to be efficient in ideal cases because if its members are rational, efficiency results. The priority of either individual rationality or collective rationality achieves their consistency. Although neither individual nor collective rationality has epistemic priority, individual rationality has metaphysical priority. Principles of individual and collective rationality reach consistency by honoring their metaphysical relationship. Principles of collective rationality adjust to principles of individual rationality, and not the other way around. How may one support compositionality’s application to collective rationality? Intuitive judgments about rational collective action are heavily theory-laden and may not be independent of compositionality. Metaphysical principles of explanation are more reliable than are intuitive judgments. From a collective act’s supervenience on individual acts, it follows that any change in the collective act entails some change in the individual acts on which it supervenes. Supervenience does not establish direction of explanation, however, because it is not asymmetric. Constitution is an asymmetric relation grounding direction of explanation. The form of constitution relevant to rationality, given rationality’s attention to control, makes acts directly controlled constituents of acts indirectly controlled. An individual’s directly controlled act is simple according to that type of constitution, so it is the basic unit of evaluation. Because explanation springs from constitution, standards of individual rationality explain standards of collective rationality. A group’s act is rational if the members’ acts composing it are rational. Moreover, the members’ acts explain the rationality of the group’s act. Some theorists reject compositionality because of cases such as the Prisoner’s Dilemma in which individual rationality fails to yield efficiency. A pair in a Prisoner’s Dilemma cannot achieve efficiency except via members’ acts. The pair has no means of inducing members to act contrary to their interests. Efficiency is not an option the pair controls directly. It controls efficiency fully but not directly, because although nothing outside the pair controls efficiency it acts through its members. For these reasons, each prisoner’s rationality suffices for the prisoners’ collective rationality. Their individual rationality explains their uncooperativeness’s collective rationality. Theorists such as French (1998: 24–25) and Copp (2007) maintain that a group’s blame may arise without any member’s blame. Compositionality accommodates this possibility, if genuine. Suppose that voters are collectively to blame for failing to pass an initiative, although no individual voter is to blame for the initiative’s failure. Compositionality stands because it does not entail that the blame for an act passes from a group to its members but only that the blame of irrationality does. Suppose that the voters were collectively irrational. Then some voters were irrational, as compositionality claims. Although the group, not any individual, is to blame for the initiative’s defeat, some individuals are to blame for not acting differently. Suppose that one changed vote would have resulted in
the initiative’s passage. In the absence of coordination to enlist an additional supporter, all voters, given their information, should have voted for the initiative. Each voter should do his part in passing it unless others’ acts release him from his obligation. Consider a person’s obligation to remove an inconsistency in belief. Suppose that he believes that p and believes that not-p. He ought not to have both beliefs, but it need not be the case that for one belief he ought not to have it. Perhaps the two beliefs are equally supported and rationality permits either. Then he may resolve the inconsistency by removing either belief. An obligation that someone act differently has two interpretations. It may entail that someone should act differently, or it may entail that it should be that someone acts differently. The second proposition attaches the obligation operator to an existential generalization, and the first proposition attaches it to the formula generalized. Suppose that it ought to be that someone acts differently. Perhaps, nonetheless, it is not the case that someone ought to act differently. Perhaps anyone may act differently, and none bears the burden of change. Obligation does not in general move inside an existential quantifier. However, irrationality in action flows from a group to some members. The reason is practical. Attributing collective irrationality has no practical value unless it prompts an individual to change behavior. Collective rationality directs groups only by directing individuals. Suppose that an individual should realize a sequence of acts directly controlled. The sequence puts constraints on acts directly controlled. Similarly, standards of collective rationality put constraints on individuals’ acts. Does this show that standards of collective rationality explain standards of individual rationality, contrary to compositionality? No, the dependence of standards of collective rationality on standards of individual rationality does not preclude standards of individual rationality being sensitive to circumstances such as opportunities for attainment of goals of collective rationality. Rational members of a group have reason to act jointly to achieve goals of collective rationality. Nonetheless, a collective act’s rationality depends on its components’ rationality. Zimmerman (1996: Chap. 9) proposes a principle of cooperation arising from moral considerations concerning groups. Do requirements of rationality concerning groups yield a requirement for an individual to have a disposition to cooperative behavior? Just as when playing a Chopin piano sonata, a pattern for an individual’s sequence of acts imposes requirements on the sequence’s steps, so when performing a Beethoven symphony, a pattern for an orchestra’s act imposes requirements on its members’ acts. Requirements for composite acts may impose requirements for component acts. This may happen although a composite act’s rationality depends on the rationality of the acts that constitute it. Motivation may flow from composite to components although rationality flows from components to composite. Another line of objection to compositionality targets its support. Hardin (1982: 2) maintains that there is no useful sense in which a group is rational.
He holds that accounts of collective rationality commit a fallacy of composition. They attribute the rationality of members to the group they constitute. Does the compositional account of collective rationality commit a fallacy of composition? The principle of compositionality is not an argument, and so commits no fallacy. It entails a generalized conditional asserting that a group's act is rational if the members' acts that constitute it are rational. The argument for this generalization does not commit a fallacy of composition. The argument for compositionality legitimately projects rationality from individuals to groups. In general, an argument that fits an invalid argument form need not be fallacious. The argument may also fit a good deductive or inductive argument form. The argument from the premiss, "Every part of this object is material," to the conclusion, "This object is material," does not commit the fallacy of composition. Materiality is projectible from the parts to the whole. A rule of inference or an implicit premiss may license its projectibility.

The points in Chapter 2 about evaluability for rationality and in Chapter 3 about rationality's evaluations yield this chapter's conclusions about collective rationality. If a group freely and fully controls its act through its members' acts, then its act is evaluable for rationality (even if unprompted by a collective goal or intention), and if its members' acts are rational, then the group's act is rational. The defense of collective rationality's compositionality completes this book's account of rationality. The remaining chapters apply its account to individual and collective rationality in games of strategy.
5
Games of Strategy
Chapter 1 asks what collective rationality's standards are and how groups attain them. Game theory addresses these questions. It examines individuals in interactive situations. In a game, each player's meeting standards for him leads to the group of players' meeting standards for it. A solution to a game is collectively rational for the players. Game theory is a good vehicle for exploring collective rationality.

Game theorists treat a multitude of topics in several disciplines. Textbooks such as Dixit and Skeath (2004), and Osborne (2004) introduce game theory, but no single volume portrays the subject's richness. This book treats parts of game theory that without much technical apparatus philosophically illuminate collective rationality. In ideal games, attaining an equilibrium is a requirement of collective rationality. This chapter begins a systematic treatment of equilibrium in both noncooperative and cooperative games. This chapter and the next two chapters examine noncooperative games. Cooperative games are the topic of subsequent chapters.
5.1 Games

A game is a decision-making situation for multiple agents in which an agent's outcome depends not only on his act, but also on the other agents' acts. Games are cooperative or noncooperative according to whether joint strategies are available. In cooperative games players may work together, and in noncooperative games players must work independently. This chapter introduces noncooperative games. The next chapter shows how rationality's attainability revises accounts of equilibrium for these games. Chapter 7 shows how comprehensive rationality leads to coordination in them.

A familiar example of a two-agent noncooperative game is the Prisoner's Dilemma. Section 4.2 presents its basic features. This section analyzes it more fully.

Table 5.1 The Prisoner's Dilemma

                            Cooperate      Do not cooperate
    Cooperate               2, 2           0, 3
    Do not cooperate        3, 0           1, 1

In the Dilemma, each of the two agents decides whether to cooperate
with the other. The outcome of an agent’s act depends on the other agent’s act. Although cooperation is an option, the game is noncooperative because the agents act independently. They cannot act jointly because they cannot communicate. If both agents act cooperatively, they each do better than if both act noncooperatively. If one acts cooperatively while the other does not, then the first suffers the outcome worst for him, whereas the second secures the outcome best for her. Table 5.1 displays the structure of the game’s possible outcomes. The rows represent strategies for one agent, Row, and the columns represent strategies for the other agent, Column. A combination of a row and a column yields an outcome. A matrix cell represents it. The numbers in a matrix cell are utilities indicating the agents’ evaluations of possible outcomes. The first number is the outcome’s utility for Row, and the second number is the outcome’s utility for Column. A solution is a profile of strategies, a set of strategies with exactly one strategy for each agent. Nash equilibrium ([1950] 1997a) is the most widespread standard for a solution to a noncooperative game. A profile is a Nash equilibrium if and only if each agent’s strategy in the profile is a best reply to the others in the sense that given the profile it maximizes the agent’s payoff. Consider the profile assigning noncooperation to each agent in the Prisoner’s Dilemma. It is a Nash equilibrium because if neither agent cooperates, neither does better by unilaterally switching to cooperation. The Prisoner’s Dilemma shows that rational agents may fail to achieve efficiency in adverse conditions. The Nash equilibrium they realize is inferior to the strategy profile in which each cooperates. Without introducing opportunities for communication and binding contracts, a way to promote cooperation is to place the agents in a series of repetitions of the Prisoner’s Dilemma. In a series, play in one round may send a signal about play in later rounds. The Folk Theorem, so-called because it is not attributed to anyone in particular but rather to game theorists in general, entails that any combination of strategies in the Prisoner’s Dilemma may rationally occur during indefinite repetitions of the Dilemma. The indefinite repetition thwarts backward induction going from noncooperation in the last round to noncooperation in all rounds up to and including the first.1 Suppose that recurrence of the Prisoner’s Dilemma allows current play to influence subsequent play. The global dynamics add a local incentive to cooperate in a round if that act prompts a partner’s cooperation later. Thus, a current round of the game is not a standard Prisoner’s Dilemma. Repetition alters its payoff
The Prisoner’s Dilemma shows that rational agents may fail to achieve efficiency in adverse conditions. The Nash equilibrium they realize is inferior to the strategy profile in which each cooperates. Without introducing opportunities for communication and binding contracts, a way to promote cooperation is to place the agents in a series of repetitions of the Prisoner’s Dilemma. In a series, play in one round may send a signal about play in later rounds. The Folk Theorem, so-called because it is not attributed to anyone in particular but rather to game theorists in general, entails that any combination of strategies in the Prisoner’s Dilemma may rationally occur during indefinite repetitions of the Dilemma. The indefinite repetition thwarts backward induction going from noncooperation in the last round to noncooperation in all rounds up to and including the first.1 Suppose that recurrence of the Prisoner’s Dilemma allows current play to influence subsequent play. The global dynamics add a local incentive to cooperate in a round if that act prompts a partner’s cooperation later. Thus, a current round of the game is not a standard Prisoner’s Dilemma. Repetition alters its payoff matrix. Its payoff matrix, strictly speaking, changes to take account of current play’s influence on future play. In repetitions, the game repeated resembles a Prisoner’s Dilemma, but the Dilemma’s payoff matrix lists only utilities of immediate outcomes and ignores effects of current moves on future moves. It lists payoffs in the current round putting aside current expectations about payoffs in future rounds. An accurate matrix factors in those expectations.
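How repetition rewrites the payoffs admits a standard arithmetic illustration, added here as my sketch, not the book’s example. With Table 5.1’s payoffs and a discount factor delta on future rounds, cooperating against a grim-trigger partner (one who cooperates until crossed, then never again) beats a one-round defection exactly when delta is at least 1/2.

# Grim trigger in the indefinitely repeated Prisoner's Dilemma
# (payoffs from Table 5.1, future rounds discounted by delta).
def cooperate_forever(delta):
    return 2 / (1 - delta)          # 2 + 2*delta + 2*delta**2 + ...

def defect_once(delta):
    return 3 + delta / (1 - delta)  # 3 now, then 1 per round forever

for delta in (0.3, 0.5, 0.7):
    print(delta, cooperate_forever(delta) >= defect_once(delta))
# Prints False, True, True: the one-round gain from defecting is
# outweighed by the future losses once delta reaches 1/2.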
A Nash equilibrium’s inefficiency in a canonical Prisoner’s Dilemma does not show that realizing a Nash equilibrium is not a standard of collective rationality. The absence of opportunities for joint action excuses inefficiency. According to a common view, strategy profiles that are not Nash equilibria are not collectively rational. At least one agent adopts a strategy inferior to an alternative, given the strategies other agents adopt. This standard of collective rationality assumes certain ideal conditions, for example, that agents know the structure of their game. It is also restricted to noncooperative games, where agents act independently. Suppose that, contrary to the restriction, agents may act jointly. Then an agent may have an incentive to deviate from his strategy in a Nash equilibrium. Although he has no incentive to deviate unilaterally, his deviation may trigger a multilateral deviation from which he profits. This may happen in a cooperative version of the Prisoner’s Dilemma. One agent may initiate a joint move to cooperation. Section 5.3 asks whether standards of rationality for individuals confirm the collective standard of Nash equilibrium in ideal noncooperative games. Normative game theory is the branch that bears on collective rationality. It proposes solutions to games and constraints on solutions such as the standard of Nash equilibrium. Instead of presenting standards of rationality, experimental game theory describes and explains behavior in games. A typical study may specify the percentage of people who act cooperatively in a Prisoner’s Dilemma. Some games have a single stage and others have multiple stages. The Prisoner’s Dilemma is a single-stage game. Chess is a multistage game. Two-person, single-stage games have matrix representations. Two-person, multistage games have tree representations. Each node represents a player’s choice among moves available at a stage. A path through the tree represents a sequence of moves in the game. Single-stage games are also called normal-form or strategic-form games. Multistage games are also called extensive-form games. Fudenberg and Tirole (1991) call the two types static games and dynamic games. Dixit and Skeath (2004) call them simultaneous-move games and sequential games. In a multistage noncooperative game, players’ strategies for the whole game are causally independent; however, a player’s complete strategy may specify conditional acts. It may include making a certain move at a stage if opponents make certain moves at prior stages. Consequently, opponents’ moves before a stage may causally influence a player’s move at that stage. Each player may independently adopt at the outset a strategy for playing the whole multistage game, but execution of moves in a player’s overall strategy may causally influence other players’ subsequent moves.
Figure 5.1 Classification of games. (The figure’s tree divides games into cooperative and noncooperative, and each branch divides into multistage and single-stage.)
Figure 5.1 summarizes this chapter’s method of classifying games. Games, as I understand them, are concrete, unlike their abstract representations. Their abstract representations fall into analogous types, however. For example, a cooperative game has a cooperative representation that lists joint strategies available in the game. So, the classification of games in Figure 5.1 applies analogously to their representations. A game’s representation lists players’ strategies, and has the fine grain of the propositions expressing those strategies. As an event is a proposition’s realization, a game is a game-representation’s realization. A game inherits the fine grain of the representation whose realization it is. That representation individuates the game. That is, it characterizes the game and distinguishes it from other games. A game may constitute another game. For example, a game a tree individuates may constitute a game a matrix individuates. A cell of the matrix may depict the culmination of strategies in a branch of the tree. Also, a game a noncooperative representation individuates may constitute a game a cooperative representation individuates. The noncooperative representation may depict combinations of individuals’ strategies that yield their joint strategies. Further, a game may have a representation besides the one that individuates it. For example, a matrix may represent a game a tree individuates. A game that constitutes another also has the representation that individuates the other game. Section 5.3 illustrates these points. Opportunities for joint strategies make a cooperative representation possible, and possibilities for causal interaction among moves make a multistage representation possible. Consequently, a game is cooperative if and only if it has a cooperative representation and is multistage if and only if it has a multistage representation. A multistage representation may individuate a game constituting a game that a single-stage representation individuates. Both games are multistage. A multistage noncooperative representation may individuate a game constituting a game that a single-stage cooperative representation individuates. Both games are multistage and cooperative. Chapter 11 illustrates these points. In games with complete information, agents know the game and the types of players. This includes knowledge of the payoff matrix and the players’ rationality.
In games with perfect information, each player has full information about others’ moves up to the current stage. Agents choosing at a stage know the choices made at previous stages. I generally treat games of complete and perfect information.2 Rationality’s compositionality implies principles of individual rationality for sequential games. The rationality of a sequence of simple acts depends on the rationality of its components. This dependency motivates the standards of dynamic consistency and subgame perfection for games. These standards apply to sequential decision problems and games during which agents’ experiences do not justify changing preferences among possible final outcomes. The decision principle of dynamic consistency requires deciding at each time according to the same preference ordering of final outcomes. It rules out myopic choice arising from a pure time preference. For example, it rules out abandoning one’s plan to save money each month for Christmas presents because, when the time to deposit a month’s money arrives, one wants to spend it then more than one did when one adopted the plan to deposit it. Hammond (1988, 1998: 189, 1999) reviews dynamic consistency. For a sequential game, he characterizes it this way: behavior at any decision node is the same in a subtree as in a full tree (1999: 39). Subgame perfection, introduced by Selten ([1975] 1997), strengthens dynamic consistency by adding that a rational player’s constant preferences are optimizing. It requires strategies such that each player’s strategy is optimal after any history. That is, it requires strategies in which each step is optimal. Osborne and Rubinstein (1994: 221) extend the principle of subgame perfection to extensive games with imperfect information. They call compliance sequential rationality: for each information set of each player i the strategy of player i is a best response to the other players’ strategies, given player i’s beliefs at that information set. Sequential rationality is utility maximization at every step. Games may be embedded within larger games. The maximal game is life. A player’s strategy in that game governs her conduct in the rest of her life. A player’s decision in a small game partially settles her decision in the game of life. A rational decision in the small game is consistent with a rational decision in that maximal game. Analysis of a small game is reliable if its representation covers all relevant features. Sometimes players directly control background conditions for a future small game. Then they currently play a larger game that contains that small game. Only background conditions that a player does not directly control, such as character traits, are not parts of strategies in the larger game. I treat games of strategy. Their solutions depend on strategic reasoning. The best examples are single-stage games. In multistage games strategic reasoning mixes with learning from experience. Learning as well as strategic reasoning influences moves at a stage. Problems of induction complicate rationality’s recommendations. To bypass strategic reasoning, Fudenberg and Tirole (1991: Chap. 1) assume that even in a single-stage game, a Nash equilibrium arises from learning or evolution. However, I do not make that assumption.
I investigate the strategic reasoning that grounds Nash equilibrium in single-stage games, following the classical tradition that, for example, Bacharach (2006) also follows. Support for standards of equilibrium in games requires some idealizations. In ideal games, players are rational and have complete information. In particular, they know their payoff matrix and that they are rational. Moreover, they have common knowledge of these facts about the game and themselves. That is, all know that all know them, all know that all know that all know them, and so on. The idealization of common knowledge is strong but standard. Chapters 6 and 7 strengthen the idealization about players’ information. They add information about players’ psychologies, including some information about their choice dispositions such as their rules for breaking ties. Evolutionary game theory treats adaptation in games and the emergence of successful strategies. Success and rationality are not the same, but rationality aims at success. Evolutionary game theory may reveal strategies that rational agents adopt. It may show how a population moves toward a Nash equilibrium, for instance. The evolution of strategies in colonies of bacteria may reveal rational strategies, just as the evolution of color vision may reveal optimal designs that engineers may imitate to make color television sets.3 Gintis (2000: xxv–vii, xxxii, 90–91) extols the evolutionary approach to game theory. He claims that evolution’s diffusion of success is a better explanatory tool than methods attributing rationality to agents. Rationality does not explain behavior, he says. The important explanatory mechanisms apply to insects and not just to rational agents. Beliefs, for example, are just by-products of the explanatory, evolutionary factors. Although evolutionary game theory sheds light on the realization of solutions to games, I put it aside because it treats repeated games and mechanisms for production of equilibrium besides strategic reasoning. Also, as Spohn (2000: 75) observes, evolutionary game theory does not answer questions about individual rationality. It shows how equilibrium behavior may evolve in repeated play without assuming that agents are rational and exercise strategic reasoning. I investigate strategic reasoning, which, following Osborne and Rubinstein (1994: 1), I take as reasoning responding to expectations of other agents’ acts. A key ingredient is an agent’s anticipation of other agents’ acts that are themselves in part based on anticipation of his act. In ideal cases the expectations of all agents are concordant.4 This book examines standards of collective rationality and their attainment. Its objective is not explanation of human behavior, but formulation of principles of rationality for ideal agents. Ideal agents are strategic reasoners and not mindless organisms responding to evolutionary pressures. Common evolutionary models of the realization of solutions omit steps available to agents who can think ahead. They do not ensure realization of an efficient equilibrium in a single-stage game, although strategic reasoners in ideal cases jointly select an efficient equilibrium. For noncooperative games, the main questions are these two.
First, which profiles of strategies in noncooperative games are collectively rational? Second, how does the rationality of individuals generate a collectively rational strategy profile? Sections 5.2 and 5.3 consider these issues in turn.
5.2 Solutions

This section characterizes solutions to noncooperative games of strategy, presents a necessary condition for being a solution, and describes the relation between realizing a solution and achieving collective rationality. Being a solution to a game has an ordinary if vague meaning that philosophical analysis may elucidate. In addition, game theory may refine that meaning to make it more precise and more fruitful. Carnap (1962: Chap. 1) calls the process explication. This section explicates the concept of a solution. Aumann (1987b: 471) presents a variety of solution concepts, but says that each has its shortcomings. Perfection is unattainable. Shubik (1982: 2–3) holds that many solution concepts are serviceable because none has all desirable features, and each has some desirable features. A unified account of solutions is possible, however. A general characterization may apply differently to dissimilar games and may yield the features desirable in each game. In a game, a profile of strategies contains free acts that the players fully control. Their strategies are evaluable for rationality. I define a solution as a profile of jointly rational strategies. Joint rationality is a type of conditional rationality. The strategies in a profile are jointly rational when each strategy is rational given the whole profile. This account of a solution applies differently to dissimilar games because joint rationality varies from game to game. The literature’s divergent characterizations of solutions arise from a single concept of joint rationality. Theorists agree that a solution includes a strategy that is rational for each player, given the solution’s realization. They disagree about a solution’s characterization because they disagree about principles of rationality. Myerson (1991: 88, 105–8, 215–16, 240–42, 430) reviews criteria for solutions. He holds that a solution set should include all and only profiles of rational strategies. Acknowledging rationality’s attainability, he advocates characterizing solutions so that the set of solutions to a game is nonempty. Similarly, Osborne and Rubinstein (1994: 1–2) say that a solution is a systematic description of the outcomes that may emerge reasonably in a family of games. These general characterizations of a solution are compatible with taking a solution as a profile of jointly rational strategies. Assessing the strategies in a profile for joint rationality involves supposing the profile’s realization, and assessing each strategy under that supposition. A profile’s realization is supposed evidentially. Such suppositions are typically expressed in the indicative mood. In ideal conditions supposition of a profile carries certainty of the profile. In nonideal conditions, supposition of the profile may not carry certainty of it.
A profile may be realized without an agent’s being certain of its realization.5 Von Neumann and Morgenstern ([1944] 1953: 146–48) hold that a solution comes from a theory that all may know and so is a profile that all may foresee in ideal games. Rationality issues to players directions reasonable to follow even if compliance with them is common knowledge. In ideal games it must recommend to a player a strategy that is reasonable if others know about it. Suppose that a game has a unique solution. Each player knows the profile realized by the theory of rationality’s application to the game. In an ideal game, all players know the theory of rationality and use it to discover their game’s solution. Each knows that the others participate in the solution. The solution is not undermined by players’ knowledge of it. The players achieve rationality given knowledge of the profile realized. In the characterization of a solution, joint rationality is more precisely joint comprehensive rationality. Comprehensive rationality does not just evaluate players’ behavior taking their circumstances for granted. It requires more than following preferences. It also requires taking advantage of opportunities for coordination. A profile’s being a solution presumes that players rationally prepare for their game. Their preparation may include formation of dispositions to break ties, to cooperate and coordinate, and to respond to signals, conventions, and agreements. To simplify, I generally assume that players have a history of full rationality so that their comprehensive rationality is the same as their rationality given the circumstances of their game. A solution, being a strategy profile, generates an outcome but is distinct from an outcome. Suppose that a few players may ensure an outcome optimal for all regardless of other players’ strategies. Is a solution a profile with an outcome equivalent to an outcome achieved if the players are jointly rational? This new definition causes trouble. In some cases a profile of jointly rational strategies has an outcome equivalent to the outcome of a profile of irrational strategies. This happens in a coordination problem governed by a convention if players’ universal irrational deviation from the convention yields the same payoff profile as does their adherence to the convention. Hence, it is best to retain the original definition of a solution and acknowledge that a solution and a nonsolution may have equivalent outcomes.6 Some definitions of a solution use only objective facts about strategies and their payoffs. They put aside players’ beliefs and desires. According to one definition, an objective solution is a profile of strategies such that each agent’s strategy maximizes his payoff given the other agents’ strategies. Joint rationality yields a subjective rather than an objective characterization of a solution. Competitive games with just one winner distinguish objective and subjective solutions. Consider the game Matching Pennies. Two agents play the game by simultaneously displaying a penny either Heads up or Tails up. One agent seeks a match and the other a mismatch. The winner takes both pennies. Table 5.2 displays the game’s payoff matrix.
Table 5.2 Matching Pennies

            Heads   Tails
Heads        2, 0    0, 2
Tails        0, 2    2, 0
In this game, not all can win. The game lacks an objective solution. It may have a subjective solution, however. Each player may rationally believe that his strategy wins, although in fact one player’s strategy loses. If so, then their profile of strategies may be jointly rational. For another example, take Lewis’s case (1969: 5, 11–12) of the interrupted phone call. Interruption of service ends a phone conversation between two people A and B. Suppose that A calls back thinking B will wait, and B calls back thinking A will wait. Then each acts rationally, but their strategies are not an objective solution to their coordination problem. In a two-player game, if each player is ignorant of the other’s choice, then each may act rationally although their choices do not form an objective solution to their game. A subjective solution’s demands adjust to agents’ beliefs and desires. It demands more success as agents possess more relevant information. Taking solutions as profiles of jointly rational strategies yields objective solutions when agents are fully informed about their game, and their desires agree with outcomes’ objective values. I usually assume these common idealizations and put aside the distinction between objective and subjective solutions. Section 5.1 takes a game to be a concrete interactive decision situation and not just an abstract representation of such a situation. A game has more features than an abstract representation depicts. Solving a game requires an apt representation. Its representation should omit nothing that bears on its solution. An adequate representation generally provides more than a payoff matrix depicting players’ utilities. Because the agents’ information bears on a solution, the representation specifies their information, unless background assumptions imply it. A solution identifies a profile in a concrete game. The strategies in the profile are jointly rational given the strategies available to each player. A game’s solution selects a player’s strategy from her set of strategies in a concrete game. A solution is independent of the game’s representation. A solution that selects strategies from one representation is equivalent to a solution that selects strategies from another representation if both solutions yield the same utility profile, that is, assignment of utilities to players. The representation that individuates a game may not have resources sufficient to furnish a solution. A solution may come from the game’s other representations. Some game theorists take a solution to apply to an abstract representation of a concrete game. A solution to an abstract representation is a profile of strategies that are jointly rational with respect to the representation’s specification of a set of strategies for each player.
A representation with multiple solutions may have a concrete realization with just one solution because the representation omits details that reduce the set of solutions. It may omit some strategies, for example. An adequate representation displays all salient strategies. Each solution is pragmatically equivalent to a solution of the concrete game it represents. The two solutions yield equivalent outcomes, that is, outcomes having the same profile of players’ utilities. A solution to a representation is a solution to a concrete game for which the abstract representation is adequate. So, an account of solutions to concrete games yields an account of solutions to games’ abstract representations, too. Applying to diverse games the general characterization of a solution requires detailed work. To simplify, I examine only a necessary condition for being a solution, namely, being an equilibrium. Additional rationality requirements, such as efficiency, may narrow the set of equilibria that are solutions.7 A Nash equilibrium is a profile of strategies such that each strategy is a best response to the others. It is defined objectively with respect to a payoff matrix and independently of an agent’s beliefs. Some theorists interpret Nash equilibrium subjectively as equilibrium-in-beliefs. Equilibrium-in-beliefs rests on subjective probabilities of payoffs, taken as subjective utilities. An equilibrium-in-beliefs is a profile of strategies such that each agent maximizes (expected) utility given other agents’ common assignment of probabilities to her strategies. Randomization of an agent’s choice of strategy is the inspiration for equilibrium-in-beliefs. An agent’s randomization produces in other agents uncertainty about her strategy. Other agents attribute to her a probability mixture of her strategies. Even without randomization, another agent may be uncertain of her strategy and attribute to her a probability mixture of strategies. One may define an equilibrium in terms of such probability mixtures regardless of their origin. Although inspired by randomization, an equilibrium-in-beliefs does not require randomization. In Matching Pennies an equilibrium-in-beliefs arises if Row assigns probability 1/2 to each of Column’s strategies and if Column assigns probability 1/2 to each of Row’s strategies.8 I define another analogue of a Nash equilibrium that takes account of an agent’s beliefs and desires. A subjective Nash equilibrium is a profile in which each strategy maximizes utility, given the profile.9 This definition uses utility supposing a profile’s realization as a fact and not necessarily as knowledge. In ideal games, the Nash equilibria and the subjective Nash equilibria correspond because agents are fully informed about the game and anticipate responses to their strategies. A subjective Nash equilibrium resembles an equilibrium-in-beliefs, but does not require a common probability assignment governing all agents. In an equilibrium-in-beliefs, an agent’s strategy has the same probability according to all other agents, and the agent knows the probabilities they assign to her strategies. This need not happen in a subjective Nash equilibrium, although when it happens, an equilibrium-in-beliefs yields a subjective Nash equilibrium.
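A quick computation, my own check rather than the book’s, confirms the 1/2–1/2 equilibrium-in-beliefs in Matching Pennies: given those conjectures, each of a player’s pure strategies has the same expected utility, so no deviation raises expected utility.

# Expected utilities in Matching Pennies (Table 5.2) when each player
# assigns probability 1/2 to each of the opponent's strategies.
payoffs = {
    ("H", "H"): (2, 0), ("H", "T"): (0, 2),
    ("T", "H"): (0, 2), ("T", "T"): (2, 0),
}
belief = {"H": 0.5, "T": 0.5}

for mine in ("H", "T"):
    eu_row = sum(p * payoffs[(mine, yours)][0] for yours, p in belief.items())
    eu_col = sum(p * payoffs[(yours, mine)][1] for yours, p in belief.items())
    print(mine, eu_row, eu_col)
# Every strategy yields expected utility 1 for each player, so the
# matched conjectures leave no incentive to deviate.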
A further difference is that equilibrium-in-beliefs uses nonconditional maximization of utility. An agent calculates the utility of each of her strategies with respect to constant information. In a subjective Nash equilibrium, each strategy in a profile maximizes utility, given the profile. Supposition of the profile may carry information that affects utilities of an agent’s strategies. Consequently, at most one equilibrium-in-beliefs exists in a concrete game, whereas multiple subjective Nash equilibria may exist. A Nash equilibrium in the objective sense is a necessary condition for an objective solution. A Nash equilibrium in the subjective sense is a necessary condition for a subjective solution. An objective solution may be a game’s unique objective Nash equilibrium. In nonideal cases, agents may fail to achieve it without irrationality. Some agent may rationally believe that other agents will not adopt their Nash strategies, that is, their parts in the Nash equilibrium, and, given his beliefs, a non-Nash strategy may maximize utility for him. In contrast, objective and subjective Nash equilibria agree in ideal games. Because I treat mainly ideal games, I generally do not attend to the difference between these two types of equilibria. Deriving a standard of joint rationality from standards of individual rationality verifies it. To illustrate the procedure, consider rationalizable profiles. A profile is rationalizable just in case it is compatible with common knowledge of the payoff matrix and players’ (expected) utility maximization. That is, some assignment of beliefs to players compatible with these assumptions rationalizes the profile. Each agent’s part in the profile maximizes utility, given assigned beliefs about the other agents’ strategies. In Matching Pennies, without randomization no Nash equilibrium exists, but every profile is rationalizable. Each player’s picking Heads is rationalized if the matcher is sure his opponent will pick Heads, and the mismatcher is sure her opponent will pick Tails. Given these beliefs, each player’s strategy maximizes utility. Bernheim (1984) and Pearce (1984) present the standard of rationalizability. It is a standard of joint rationality and so a standard for a solution.10 In a simultaneous-move, noncooperative finite game in which agents have common knowledge of the payoff matrix and their utility maximization, iterated elimination of strictly dominated strategies yields the rationalizable profiles. The assumptions entail that dominated strategies have no chance of realization. During deliberations, each player iteratively applies the standard of nondomination to eliminate players’ strategies. When players maximize utility, they ignore strategies eliminated because those strategies are certainly not realized. They realize a rationalizable profile. Hence standards of individual rationality entail realization of this standard of joint rationality.11
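Iterated elimination of strictly dominated strategies is a simple algorithm, and a sketch may help; the two-player version below is my illustration, applied to the Prisoner’s Dilemma, where cooperation is strictly dominated for each player.

# Iterated elimination of strictly dominated pure strategies in a
# two-player game, using the Prisoner's Dilemma of Table 5.1.
payoffs = {
    ("C", "C"): (2, 2), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}

def row_u(r, c):
    return payoffs[(r, c)][0]

def col_u(r, c):
    return payoffs[(r, c)][1]

def dominated(s, own, others, utility):
    # s is strictly dominated if some remaining strategy t does
    # strictly better against every remaining opposing strategy.
    return any(all(utility(t, o) > utility(s, o) for o in others)
               for t in own if t != s)

rows, cols = {"C", "D"}, {"C", "D"}
changed = True
while changed:
    changed = False
    for s in list(rows):
        if dominated(s, rows, cols, row_u):
            rows.remove(s)
            changed = True
    for s in list(cols):
        if dominated(s, cols, rows, lambda c, r: col_u(r, c)):
            cols.remove(s)
            changed = True
print(rows, cols)   # {'D'} {'D'}: only the Nash equilibrium survives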
A derivation of the standard of Nash equilibrium from standards of individual rationality verifies it, too. The traditional argument assumes that a game has a solution, an outcome that rational players achieve, and then shows that it has to be a Nash equilibrium. This line of argument makes strong assumptions about games and players, as Bacharach (1987: 35) observes. However, granting these assumptions, it succeeds. The players in an ideal game complete their game. They realize some profile of strategies. Given suitable assumptions about the players and their game, the profile they realize must be a Nash equilibrium. The derivation relies on many idealizations. A standard idealization is players’ common knowledge of various facts about a game and the players. To simplify, instead of assuming such common knowledge, I assume its intended upshot, prescience. Prescience is an agent’s knowledge of the response to each of his strategies. It entails foreknowledge of the strategy profile realized, given that an agent has foreknowledge of his own strategy. He may acquire foreknowledge of his own strategy by inferring his strategy from his reasons for it. In ideal conditions involving prescience, a profile’s assumption as fact entails its assumption as knowledge, too. Given prescience, not realizing a Nash equilibrium entails some player’s failure to maximize utility. So maximizing utility entails a Nash equilibrium. Each agent’s following decision principles entails that agents together meet a standard for a solution. In these cases individual rationality entails realization of a Nash equilibrium. How does joint rationality compare with universal rationality? Universal rationality may be defined in various ways. A strategy profile may be universally rational if and only if (1) all its strategies are rational in actual conditions, (2) the profile would be realized if all players acted rationally, or (3) all its strategies would be rational if the profile were realized. In the first sense, universal rationality differs from joint rationality. Take a set of strategies, one for each player, such that each strategy is rational in actual circumstances. Put the strategies together, hypothetically, and they may not each be rational in these hypothetical circumstances. A set of strategies, not all realized, may be such that each is rational if realized given the other strategies actually realized, although the realization of all the strategies makes at least one irrational. The second sense permits just one profile’s achieving universal rationality, but several profiles may be jointly rational. So universal rationality in that sense also differs from joint rationality. Giving universal rationality the third sense brings it close to joint rationality but still fails to achieve an exact match. According to both types of rationality, the strategies in a profile are rational given the profile’s realization. However, supposition of the profile is evidential in the case of joint rationality and causal in the case of universal rationality.12 In an ideal noncooperative game without causal interaction, causal supposition of an agent’s change in strategy does not alter supposition of other agents’ strategies. In contrast, evidential supposition of an agent’s change in strategy may alter supposition of other agents’ strategies. For example, take the payoff matrix for the case of the interrupted phone call in Table 5.3. If the game is ideal and players anticipate each other, then (Wait, Call) is jointly rational. Each component is rational given the profile. Evidential supposition of the profile yields certainty of the profile in ideal conditions.
Table 5.3 The Interrupted Phone Call

            Call    Wait
Call         0, 0    1, 1
Wait         1, 1    0, 0
However, suppose that each player expects the other to participate in (Call, Wait) because of a convention. In that case, if (Wait, Call) were realized, it would not be universally rational. If it were realized, then, because of causal independence, neither player would anticipate the other’s departure from convention. Each component would be an irrational departure from convention. Universal rationality is not always attainable. In some games, no profile is such that if it were realized all players would be rational. Take a sequential version of Matching Pennies in which players may change choices after learning their choices. If players are ideal, this game never ends. Also, consider a perfectly symmetrical version of Matching Pennies with just one stage and without randomization. No psychological difference between the players favors any profile. If the players are ideal and prescient, this game is not playable. These versions of Matching Pennies are not ideal because ideal players cannot settle them. Ideal players do not realize a profile because universal rationality in these games requires all to maximize utility and no profile has exclusively utility-maximizing strategies. Nonideal games may have features that make one player’s rationality prevent another player’s rationality, but universal rationality is attainable in ideal games. Suppose that universal rationality is impossible. Does it follow that some agent cannot be rational given rational acts of other agents? No, each agent can be rational in any circumstances. However, if the agent were rational, then some other agent would not be rational. The impossibility of all being rational need not make some agent’s being rational impossible. Moreover, the impossibility of universal rationality arises only given fixed background conditions, and does not entail the absence of a possible world in which conditions differ and all are rational. Joint rationality also is not always attainable. It may be impossible for each player to act rationally given others’ acts. Weirich (1998: Chaps. 2, 7) presents a game with an infinite number of players that lacks a jointly rational strategy profile and a game with a finite number of players in nonideal conditions that lacks a jointly rational strategy profile. In contrast with universal and joint rationality, collective rationality is always attainable. It is achieved if each agent acts rationally. As conditions become ideal for joint action, collective rationality becomes universal rationality, then joint rationality, and finally a cooperative game’s solution.13
Kadane and Larkey (1982) argue that rational agents achieve universal rationality but not necessarily joint rationality. This point is correct. Nonetheless, under the idealizations that game theorists typically assume, agents anticipate other agents’ choices and their being universally rational entails their joint rationality. Although some games lack profiles of jointly rational strategies, Weirich (1998) maintains that ideal games with a finite number of players have such profiles. Is joint rationality possible in an ideal version of Matching Pennies? Joint rationality, because it involves evidential supposition of a profile, changes information with respect to which strategies are assessed. No profile has strategies maximizing utility with respect to the agents’ beliefs given its realization. Not each agent maximizes utility given the profile realized. Does rationality set standards that both players cannot meet? Must one agent be irrational? Each may maximize (expected) utility among his options. Moreover, they may simultaneously maximize utility if, contrary to the idealizations, they do not learn the profile realized. A theorist may define rationality as utility maximization, and then show that in a version of Matching Pennies where agents have prescience they do not achieve joint rationality. However, rationality in the ordinary sense leaves the door open to attainment of joint rationality. Chapter 6 explains how to modify the standard of utility maximization so that joint rationality is possible in games such as Matching Pennies even if agents know the profile realized. In some games, players do not have knowledge of the profile they realize. Informed utility maximization is a goal of rationality for each individual, and its result for a group is a goal of collective rationality. Joint rationality is a goal of collective rationality. In ideal games, joint rationality is attainable and (assuming that every player’s part is significant) is a standard of collective rationality. Joint rationality entails universal rationality. Realization of a profile of jointly rational strategies entails that every agent’s strategy is rational. The rationality of all agents entails their collective rationality. So realization of a profile of jointly rational strategies entails realization of a collectively rational profile. In ideal games realizing a solution entails collective rationality.
5.3 Standards

Game theory introduces standards of collective rationality. Nash equilibrium is a standard of collective rationality for players in an ideal noncooperative game in which individual rationality requires utility maximization. Verifying Nash equilibrium as a standard of collective rationality requires deriving it from standards of individual rationality. Section 5.2 shows that individuals’ utility maximization generates a Nash equilibrium given prescience, but the derivation does not exhibit the players’ reasoning. What reasoning leads players to an equilibrium’s realization? An answer not only explains the realization of some Nash equilibrium or other, but also explains the realization of a particular Nash equilibrium when multiple Nash equilibria exist.
Table 5.4 Single-Stage Representation of a Sequential Game

            Left    Right
Up           0, 0    2, 2
Down         1, 4    1, 3
In a game represented with a single stage, players reason strategically. If the game has multiple equilibria, their reasoning yields their coordination to achieve a particular equilibrium. This section presents the problem of justifying participation in an equilibrium of a game with a single-stage representation. Chapters 6 and 7 present a partial answer. They support realization of an efficient equilibrium in an ideal game. The argument uses players’ rationality and their psychologies in a concrete game. A concrete sequential game has multiple representations. In some representations, players in a single stage settle their moves throughout the game. Suppose that a single-stage representation of a sequential game has multiple Nash equilibria. In some cases players have a compelling rationale for realizing a particular Nash equilibrium. Suppose the payoff matrix in Table 5.4 represents a sequential game without mixed strategies. The payoff matrix has two Nash equilibria, namely (U, R) and (D, L). Backward induction concerning hypothetical moves in the underlying sequential game may support a particular Nash equilibrium. This strategic reasoning assumes the absence of stages with multiple, equally good moves. Suppose that the tree in Figure 5.2 depicts the matrix’s sequential realization. Column’s strategies in the underlying sequential game include a response to each of Row’s moves. Column has two possible responses to Row’s choice of Up, namely, (U → L) and (U → R). She also has two possible responses to Row’s choice of Down, namely, (D → L) and (D → R).
Figure 5.2 A rollback Nash equilibrium. (Tree: Row first chooses Up or Down; Column then chooses Left or Right. Payoffs for Row and Column: Up then Left, 0, 0; Up then Right, 2, 2; Down then Left, 1, 4; Down then Right, 1, 3. Double lines mark Row’s choice of Up, Column’s reply Right after Up, and her reply Left after Down.)
Hence, one complete strategy is (U → L) & (D → L); that is, Left in response to any move Row makes. A rollback equilibrium is a profile of strategies such that each strategy (a sequence of moves accommodating all contingencies) is rational given the others. The figure’s double lines indicate the choices that backward induction predicts. Backward induction reveals that Column will pick Right if Row picks Up, and will pick Left if Row picks Down. Hence, Row will pick Up. Therefore the rollback equilibrium is (U, (U → R) & (D → L)). Simplifying, this yields (U, R). The rollback equilibrium selects a Nash equilibrium of the payoff matrix. A concrete game realizing the payoff matrix has additional features that explain a particular Nash equilibrium’s realization. A unique Nash equilibrium is the concrete game’s solution. The payoff matrix is too austere a representation of the concrete game to reveal its solution. It inadequately represents the concrete game. The profile (D, L) forms a Nash equilibrium of the matrix, but the Nash strategy Down is an irrational move given that Right is the response to Up. Row’s deviation from (D, L) initiates a joint act yielding (U, R). The sequential game’s tree discloses Down’s irrationality in the concrete game that the tree represents. It reveals an order of moves and Column’s conditional strategies. Because a solution of a concrete game attends to such factors, the concrete game has the unique solution the tree displays.14 The strategic reasoning leading to the rollback equilibrium assumes ideal agents without cognitive limits. It also assumes perfect information. That is, Column learns Row’s move before making her move. Moreover, it assumes common knowledge of the game’s tree and the players’ utility maximization. Consequently, Row knows that Column will pick Right if he picks Up.15 First principles evaluate a sequential strategy in terms of its components rather than by comparison with alternative strategies. They favor the underlying sequential game’s rollback equilibrium over the payoff matrix’s other Nash equilibrium. The rollback equilibrium is a subgame perfect Nash equilibrium. That is, its components yield a Nash equilibrium in every subgame any node of the tree starts, as Osborne (2004: 169–73) explains. Individuals may realize it without violating the principle of dynamic consistency. An agent’s rollback strategy is a sequence of utility-maximizing moves.
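The backward induction just described is easy to mechanize. The sketch below is my rendering, not the book’s; it encodes Figure 5.2’s tree and recovers the rollback equilibrium.

# Backward induction on Figure 5.2's tree: Column moves last, so
# compute her best reply at each node, then let Row choose the move
# whose predicted continuation maximizes his own payoff.
tree = {
    "Up":   {"Left": (0, 0), "Right": (2, 2)},
    "Down": {"Left": (1, 4), "Right": (1, 3)},
}

column_reply = {
    row_move: max(replies, key=lambda m: replies[m][1])
    for row_move, replies in tree.items()
}
row_move = max(tree, key=lambda r: tree[r][column_reply[r]][0])

print(column_reply)                      # {'Up': 'Right', 'Down': 'Left'}
print(row_move, column_reply[row_move])  # Up Right
# The rollback equilibrium (U, (U -> R) & (D -> L)) yields (U, R),
# singling out one of Table 5.4's two Nash equilibria.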
The justification of realization of a particular Nash equilibrium relies on common knowledge. Common knowledge that p may arise from a public announcement that p. Common knowledge of a game may arise from ideal agents’ knowledge that all read the same instructions for playing the game. Assumptions of common knowledge combine with other assumptions to ground foreknowledge of agents’ choices. Common knowledge of an agent’s participation in a particular equilibrium’s realization does not entail replication of her reasoning, although in ideal cases it may arise that way.16 Common knowledge has various formal explications. For example, suppose that all know that all know that p. This may mean either that (x)(y)KxKyp or that (x)Kx(y)Kyp. According to the first interpretation, each knows that each knows that p. This is backward induction’s assumption. Lewis (1969: 64–68) calls it common knowledge in sensu diviso. A treatment of common knowledge in game theory characterizes it, considers its generation, explains its generation of foreknowledge and prescience, shows how it supports Nash equilibrium, and finally distinguishes its role in strategic and sequential games.17 Gintis (2000: 13) says that if it is common knowledge that all players are rational, then under appropriate conditions they realize a Nash equilibrium. He has in mind the rollback equilibrium in a sequential game, which is a Nash equilibrium that backward induction supports. It is easier to ground a rollback equilibrium of a sequential game than a Nash equilibrium of a simultaneous-move game. Simultaneous-move games pose special problems for the justification of a particular Nash equilibrium’s realization. Backward induction does not apply. Can common knowledge of the game and the players’ rationality nonetheless yield a Nash equilibrium? Success is possible in some cases. One may use iterated elimination of strictly dominated strategies to explain realization of a Nash equilibrium. This explanation works in the Prisoner’s Dilemma, for instance. Appeal to such common knowledge does not work in general, however. As Bicchieri (2004: 190) observes, in a noncooperative game the players’ common knowledge of the payoff matrix and their utility maximization yields realization of a rationalizable profile but not realization of a Nash equilibrium. Some profiles remaining after iterated elimination of strictly dominated strategies are not Nash equilibria.18 Aumann (1974, 1987a) constructs a decision-theoretic foundation for equilibrium. He introduces a generalization of Nash equilibrium he calls correlated equilibrium. The generalization allows correlation between agents’ strategies arising from shared evidence, such as an arbitrator’s instruction to coordinate strategies a certain way. He shows that if it is common knowledge among the agents in a game that they are utility maximizers, then they will achieve a correlated equilibrium. In a three-person, simultaneous-move game, for a player, an opponent’s strategy may be evidence of the other opponent’s strategy. Then the opponents’ strategies are correlated. If the player imagines his opponents’ responses to his strategy, he may imagine a correlation in their responses rather than independence (of the sort randomization of their choices produces). Aumann’s proof of realization of a correlated equilibrium assumes that the agents have a common prior distribution of probability over strategies. If in addition each agent’s information about strategies is probabilistically independent of the information other agents receive, then the agents achieve a Nash equilibrium.19
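A correlated equilibrium’s defining test is obedience: given the device’s recommendation, no player gains by deviating. As an illustration of mine, not Aumann’s, suppose an arbitrator in the interrupted phone call of Table 5.3 draws (Call, Wait) or (Wait, Call) with probability 1/2 each and privately tells each player her part; the sketch verifies that obeying is a best reply.

# Obedience check for a correlated device in the interrupted phone
# call (Table 5.3): the device recommends (Call, Wait) or (Wait, Call),
# each with probability 1/2, telling each player only her own part.
payoffs = {
    ("Call", "Call"): (0, 0), ("Call", "Wait"): (1, 1),
    ("Wait", "Call"): (1, 1), ("Wait", "Wait"): (0, 0),
}
device = {("Call", "Wait"): 0.5, ("Wait", "Call"): 0.5}
acts = ("Call", "Wait")

for i in (0, 1):                # player 0 is Row, player 1 is Column
    for rec in acts:            # the recommendation player i receives
        cond = {p: q for p, q in device.items() if p[i] == rec}
        total = sum(cond.values())
        for act in acts:        # play act instead of the recommendation
            eu = sum((q / total) *
                     payoffs[(act, p[1]) if i == 0 else (p[0], act)][i]
                     for p, q in cond.items())
            print(i, rec, act, eu)
# Obeying yields expected utility 1 and deviating yields 0, for each
# player and recommendation, so the device is a correlated equilibrium;
# here it randomizes between the game's two Nash equilibria.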
Correlated equilibrium generalizes Nash equilibrium taken as equilibrium-in-beliefs. The correlation between a player’s strategy and opponents’ strategies is evidential, just as it is between the strategies of two opponents. Such correlation may hold in a simultaneous-move noncooperative game, where no player’s strategy causally influences another player’s strategy. A correlated equilibrium yielding an equilibrium-in-beliefs also yields an objective Nash equilibrium, when beliefs are accurate because each agent is prescient and so knows others’ responses to each strategy, knows a Nash equilibrium is realized, and knows which Nash equilibrium is realized.20 Brandenburger and Dekel (1989) expand the scope of common knowledge to obtain first rationalizability, then correlated equilibrium, and finally Nash equilibrium. They start with common knowledge of the payoff matrix and agents’ utility maximization. That yields rationalizability. Then they add common knowledge of the prior assignment of probabilities to each agent’s strategies. That yields a correlated equilibrium. Next, they add common knowledge of each agent’s strategy. That yields a Nash equilibrium, in the sense of an equilibrium-in-beliefs. The last result explains the realization of a Nash equilibrium but not the reasoning that leads to it, in particular, when there are multiple Nash equilibria. Aumann and Brandenburger (1995) identify epistemic conditions sufficient for realization of a Nash equilibrium taken as an equilibrium-in-beliefs. An agent’s mixed strategies are conjectures on the part of other agents about the agent’s pure strategy. Their first theorem observes that foreknowledge of the profile realized and players’ utility maximization yields a Nash equilibrium. More precisely, in two-person games, mutual knowledge of the payoff functions, of utility maximization, and of the conjectures implies that the conjectures form a Nash equilibrium. In games with more than two players, a common prior probability assignment, mutual knowledge of payoff functions and utility maximization, and common knowledge of the conjectures imply that the conjectures form a Nash equilibrium (p. 1161). Aumann and Brandenburger show that Nash equilibrium is a constraint that rational agents in ideal conditions meet. They demonstrate realization of a Nash equilibrium but not realization of a particular Nash equilibrium. They do not explain reasoning leading players to a Nash equilibrium. The players’ knowledge is not a player’s premiss in reasoning leading to a Nash strategy. It is an outsider’s premiss in reasoning about the game’s outcome.21 The remainder of this section describes the approach Chapters 6 and 7 take to the explanation of an equilibrium’s realization. To simplify, they treat cases in which only a single equilibrium qualifies as a solution. Consider, for example, the Stag Hunt, which Table 5.5 presents. This game has two Nash equilibria, (U, L) and (D, R). (U, L) is the efficient Nash equilibrium. Its efficiency does not ensure its realization. Two rational players participate only if each is confident that the other participates. Row is rational, but wonders whether Column is rational and will do L. Column is rational, but wonders whether Row knows that she is and so will do U in response to her doing L. Any missing level of knowledge creates doubts about the wisdom of participating in (U, L). In an ideal game, however, the players’ common knowledge of their rationality banishes doubts of this sort. A modest but promising project is showing that in certain ideal games exactly one equilibrium is a solution, and the players’ universal rationality leads to its realization.22
Table 5.5 The Stag Hunt

            Left    Right
Up           2, 2    0, 1
Down         1, 0    1, 1
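As a check of the claims above, and as my sketch rather than the book’s, the following code enumerates the Stag Hunt’s Nash equilibria and then removes any equilibrium Pareto-dominated by another, leaving only (U, L).

# Nash equilibria of the Stag Hunt (Table 5.5), filtered by efficiency:
# drop an equilibrium if another equilibrium makes both players weakly
# better off and at least one strictly better off.
payoffs = {
    ("U", "L"): (2, 2), ("U", "R"): (0, 1),
    ("D", "L"): (1, 0), ("D", "R"): (1, 1),
}
rows, cols = ("U", "D"), ("L", "R")

def is_nash(r, c):
    return (all(payoffs[(r2, c)][0] <= payoffs[(r, c)][0] for r2 in rows) and
            all(payoffs[(r, c2)][1] <= payoffs[(r, c)][1] for c2 in cols))

equilibria = [(r, c) for r in rows for c in cols if is_nash(r, c)]

def dominates(f, e):
    return (all(payoffs[f][i] >= payoffs[e][i] for i in (0, 1)) and
            any(payoffs[f][i] > payoffs[e][i] for i in (0, 1)))

efficient = [e for e in equilibria
             if not any(dominates(f, e) for f in equilibria)]
print(equilibria)   # [('U', 'L'), ('D', 'R')]
print(efficient)    # [('U', 'L')]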
Game theorists call narrowing a game’s set of Nash equilibria to obtain its set of solutions the refinement program.23 In special cases, the refinement program looks for a particular Nash equilibrium that rationality supports. Only it counts as a solution. An explanation of its realization justifies each player’s participation in it. Chapters 6 and 7 use efficiency to select an equilibrium. Efficiency does not yield a unique equilibrium in every case, so those chapters treat special cases in which it does yield a unique equilibrium. Efficiency is not a principle of individual rationality, so the chapters use principles of individual comprehensive rationality to explain realization of the efficient Nash equilibrium. The explanation relies on features of concrete games absent from payoff matrices. For example, players’ knowledge of their psychologies explains their coordination to realize the equilibrium.24 Myerson (1991: 241–42) distinguishes refinement of Nash equilibrium from selection of a Nash equilibrium. Refinement applies to all rational, intelligent, and informed agents. Selection may depend on culturally established focal points.25 The difference between an efficient Nash equilibrium and a coordination convention illustrates the distinction. The former may involve a structural focal point, and the latter may involve a cultural focal point. Kohlberg and Mertens (1986) and Harsanyi and Selten (1988: Chap. 4) use stability as a selection criterion for equilibria in noncooperative games. Stability provides a refinement of Nash equilibrium. Background factors (such as arbitrators) that payoff matrices omit may narrow the field of Nash equilibria that qualify as solutions or that emerge as coordination points. Rationality selects a Nash equilibrium as a solution. Although common knowledge may support participation in a unique solution, some symmetric games have multiple solutions, and then common knowledge is not sufficient support for a solution, as Sugden (2000b) observes. Principles of rationality may narrow the set of equilibria that count as solutions, but if a game has multiple solutions, an explanation of why players realize a particular equilibrium must appeal to their psychologies. An explanation’s resources may include, besides the players’ rationality, their beliefs and their tie-breaking methods, for instance. A theory of rationality need not explain the players’ psychologies. It need not explain every feature of an equilibrium’s realization. It may take players’ psychologies for granted, and show how rational agents with those psychologies realize a particular equilibrium. The theory may, for instance, assume players’ knowledge of the profile realized, as it may assume an agent’s probability and utility assignments to compute a rational choice.
To simplify explanation of an equilibrium’s realization, I treat only ideal games. In ideal games, agents are ideal and so rational and are informed about each other. They know the standards of individual rationality that they meet. They also know for each strategy how others respond. More precisely, for each strategy, they know a conditional having that strategy’s realization as antecedent and others’ strategies as consequent. They are prescient. A prescient agent knows other agents’ strategies given each of his strategies. An agent’s prescience may originate in broad common knowledge of the game and the players’ rationality, including common direct knowledge of agents’ choice dispositions and common indirect knowledge of their choices. Prescience yields and explains foreknowledge of the profile realized. Moulin (1995: 32 note 24, 33) considers the common knowledge assumption needed to support the efficient Nash equilibrium in the Stag Hunt. He objects that common knowledge, especially common knowledge of all players’ preferences, is unrealistic. These doubts about common knowledge concern empirical explanations of human behavior. Idealizations are appropriate in a normative theory of rationality. Generalization of principles eventually removes idealizations to achieve realism, but meanwhile idealizations reveal basic facts about strategic rationality. Common knowledge is a strong but useful idealization for a theory of rationality. The beliefs and desires of agents that ground participation in a Nash equilibrium may arise in various ways. They may be the result of evolution, learning, convention, character, or dispositions. Deep explanations of Nash equilibrium are various. A general, but not deep, explanation of a Nash strategy’s rationality may use an agent’s beliefs and desires without explaining the beliefs and desires themselves. An explanation of an event is relative to assumptions and need not explain its assumptions to succeed. Representations of reasoning that leads to a Nash equilibrium vary in depth. One representation may move from a single premiss to participation in the equilibrium. Another representation may support that premiss with subsidiary premisses. Every representation may gain depth by adding support for its premisses. Representations need not and cannot achieve maximal depth. A representation justifying Nash strategies, that is, explaining their rationality, suffices for progress. Given common knowledge of players’ psychologies, a game may have a unique solution. In such cases, assuming this common knowledge amounts to assuming knowledge of the profile realized. Common knowledge of psychologies yields foreknowledge of strategies that generates the solution. In this case, multiple solutions reduce to one. An explanation of its realization may explain how knowledge of its realization emerged from knowledge of agents’ psychologies. Prescient agents’ foreknowledge of the profile realized may arise during a communication period that precedes their game. They may talk about their intentions or receive public instructions during that period.
If pregame communication does not occur, they may apply strategic reasoning or a common theory of rationality to discover others’ intentions. A justification of Nash equilibrium using prescience need not explain prescience’s origin. One may deepen a justification using prescience by deriving that knowledge from more basic knowledge. A deeper justification derives prescience from common knowledge of the game and players’ psychologies. An explanation may, for example, attribute to agents direct knowledge of their game’s payoff matrix and indirect, strategically inferred knowledge of a profile’s realization. Section 6.5 adopts this method in one type of case. The relevant features of players’ psychologies include their pursuit of incentives, their tie-breaking mechanisms, and their full, comprehensive rationality. Common knowledge of these features may come from a public announcement. However, explaining prescience does not deepen its justification of participation in a Nash equilibrium. The only way to deepen the justification is to justify prescience. An explanation of prescience does not bear on participation’s rationality but only on participation itself. Sugden (2001) asks for the origin of prescience because he has in mind an explanation of equilibrium. This section does not present a detailed account of the origin of prescience, because it addresses justification of ideal agents’ behavior given prescience and not explanation of real people’s behavior. Such justification is conditional and claims that participation in an equilibrium is rational given prescience. Its success does not require an explanation of prescience’s origin. That explanation advances another project, namely, deepening an explanation of participation in an equilibrium. Although this section does not offer a general account of prescience’s origin, it assumes that prescience is metaphysically possible in ideal games. How can it arise in an ideal version of the interrupted phone call, which Table 5.6 depicts? Although nothing in the payoff matrix explains how Row may know that if he chooses Call then Column chooses Wait, features of a concrete realization of the matrix may ground knowledge of this conditional. The concrete game need not be symmetrical. An expert psychologist may tell each agent the response to each of his strategies. Prescience is metaphysically possible. That type of possibility suffices for an idealized justification of participation in an equilibrium. Sugden (2001: 427) argues against the assumption of prescience as follows. Suppose that Column’s rational choice is strategy b. Then Row’s choice of strategy a is not evidence of Column’s strategy. Row knows that Column is rational and will do b.
Table 5.6 The Interrupted Phone Call

         Call    Wait
Call     0, 0    1, 1
Wait     1, 1    0, 0
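The conditionals that ground prescience in this game are recoverable from the payoff matrix together with the assumption that Column replies rationally. The following minimal sketch in Python (an illustration, not part of the book’s apparatus; the data layout and function names are stipulations) computes, for each of Row’s strategies, Column’s best reply in Table 5.6.

# Payoffs from Table 5.6; each cell is (Row's payoff, Column's payoff).
payoffs = {
    ("Call", "Call"): (0, 0), ("Call", "Wait"): (1, 1),
    ("Wait", "Call"): (1, 1), ("Wait", "Wait"): (0, 0),
}

def best_reply(row_strategy):
    """Column's utility-maximizing response to a strategy of Row's."""
    return max(["Call", "Wait"], key=lambda c: payoffs[(row_strategy, c)][1])

for r in ["Call", "Wait"]:
    print(f"If Row chooses {r}, then Column chooses {best_reply(r)}.")
# Prints: If Row chooses Call, then Column chooses Wait.
#         If Row chooses Wait, then Column chooses Call.

The sketch recovers only the conditionals; as the text observes, features of a concrete realization of the game, not the matrix alone, ground Row’s knowledge of them.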
This argument is invalid. Although Column will do b and Row knows this, Row’s doing a may still be evidence that Column will do b. Column’s doing b may be a best reply to Row’s doing a. That is why b is Column’s rational choice. If Row does not pick a, then Column does not pick b because b is not rational in that case. Consider how Row knows that Column will do b, that is, how Row knows that b is a rational strategy for Column. Row knows because he knows he will do a. His doing a is evidence of Column’s doing b because her doing b is rational if he does a, and she is rational and prescient. Sugden’s argument fails to distinguish causal and evidential independence. In a single-stage noncooperative game, Column’s act is causally, but not evidentially, independent of Row’s given prescience. Knowledge of Column’s choice depends on Row’s knowledge of his own choice. Take Matching Pennies without mixed strategies (see Table 5.2), and suppose that the agents realize (H, T). Column chooses rationally by hypothesis. Because she is prescient, she responds to Row’s choice of H. Row’s choice of H is evidence of Column’s choice. Row knows that if he chooses T instead, then Column chooses H instead. His knowledge that he chooses H is part of his evidence that Column chooses T. This section presented two questions concerning single-stage noncooperative games of strategy. First, how do rational agents realize an equilibrium? Second, when multiple equilibria exist, how do rational agents realize an efficient equilibrium? Chapter 6 addresses the first question, and Chapter 7 addresses the second. Also, a solution to a game of strategy is both an equilibrium and a profile of strategies that is collectively rational in ideal conditions for playing the game. The following chapters use this link between equilibrium and collective rationality to refine accounts of equilibrium in games of strategy.
6
Equilibrium
Because solutions are collectively rational in ideal conditions, illuminating solutions sheds light on collective rationality. This chapter treats solutions by revising accounts of equilibrium, a necessary condition for a solution. Equilibrium is a necessary condition of collective rationality, but only if equilibrium is attainable. Explaining how rational agents reach an equilibrium requires generalizing decision principles and equilibrium principles. This chapter presents a generalization of Nash equilibrium I call strategic equilibrium. Strategic equilibrium is attainable in cases where Nash equilibrium is out of reach. A generalization of the decision principle of utility maximization supports strategic equilibrium. It establishes a strategic equilibrium’s realization in every ideal noncooperative game.
6.1 Standards and Procedures
Principles of rationality fall into two types. One type expresses standards that a bystander may apply to an act after it has been performed. Another type expresses a procedure that an agent should apply to generate an act. The principle to perform an act of maximum utility expresses a standard of evaluation. Simon’s principle (1982: 424–43) to satisfice, namely, to adopt the first satisfactory option one discovers, expresses a procedure. Following tradition, I take the normative to include both evaluation and direction. However, assuming that norms direct rather than evaluate, only the procedural principles are normative, strictly speaking. This book’s principles of collective and individual rationality express standards of evaluation, not procedures. The standards of evaluation apply to all acts, including acts performed during a decision procedure. If a decision procedure has multiple steps, the standard of utility maximization applies to each step. If each step maximizes utility, the standard approves the procedure.
The procedure’s step-by-step evaluation takes account of each step’s cost and the likelihood that it leads to a utility-maximizing decision. An agent may fail to make a rational decision for various reasons. He may use the wrong principles, or misapply the right principles, or decide without reflection. A rational procedure appropriately invests in obtaining a rational decision. The investment, if reasonable, considers other competing goals, selects rewarding steps in deliberations, and has an expected return that justifies it. It considers the importance of a decision, and whether reflection to discover the right governing principle will improve the decision more than reflection to apply a handy principle that is approximately correct. Sensible decision procedures attend to the costs of deliberation. For example, a diner may reasonably decide between two similar restaurants by flipping a coin if deliberation is not cost-effective. The cost of discovering a utility-maximizing decision may excuse a failure to reach such a decision. Cognitive psychology may assist evaluation of decision procedures by measuring the cost and expected value of deliberation. The imperative, “Maximize utility,” has three functions. It expresses a standard of rationality, a goal of rationality, and a decision procedure. Its multiple roles may obscure distinctions among standards, goals, and procedures. The decision procedure requires calculation and comparison. In contrast, a spontaneous, nondeliberative decision may meet the standard. According to the standard, a decision is rational only if it maximizes utility, provided that both the agent and the decision problem are ideal. The standard for ideal cases yields a goal of rationality for nonideal cases. Rational pursuit of the goal is sensitive to an agent’s abilities and circumstances. An agent should cultivate in a reasonable way habits and rules of thumb that promote the goal of utility maximization. I treat the standard of utility maximization to put aside problems with utility maximization’s use as a decision procedure. An oft-cited problem is a looming infinite regress of decisions. The regress proceeds from an initial decision problem to a decision about how to decide in that initial problem, from that second decision problem to a decision about how to decide in that second problem, and so on. However, cost-effectiveness halts the regress. A decision procedure maximizing utility at every step eventually calls for a decision without another decision about how to decide. A more serious threat to the decision procedure is the possibility of self-defeat. This is the possibility that the procedure frustrates its goal. Sometimes the best method of attaining a goal is to pursue it indirectly. For example, an anxious person maximizes his chances of falling asleep not by aiming to fall asleep but by counting sheep. A recently divorced man best forgets his ex-wife not by trying to forget but by concentrating on his job. An artist may become famous only if she does not pursue fame directly. An athlete may perform better if she relaxes and does not deliberate about her performance as she performs. Pursuing a goal indirectly still counts as an attempt to attain the goal, but does not count as a direct attempt in typical cases.1
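To make the claim that cost-effectiveness halts the regress concrete, consider a toy model, given here as a Python sketch (an illustration only; every number in it is a stipulated assumption): deliberation about how to decide continues only while a further step’s expected improvement exceeds its cost, so the regress ends after finitely many steps.

# Toy model: each further round of deliberation about how to decide
# improves the expected value of the eventual decision but costs effort.
def deliberate(expected_value, improvement, cost, decay=0.5):
    """Deliberate while a step's expected gain exceeds its cost."""
    steps = 0
    while improvement > cost:           # a further step pays its way
        expected_value += improvement - cost
        improvement *= decay            # later steps help less and less
        steps += 1
    return steps, expected_value        # the regress halts; the agent acts

print(deliberate(expected_value=10.0, improvement=2.0, cost=0.5))
# (2, 12.0): after two cost-effective rounds, another decision about
# how to decide is not worth its cost, so the agent simply decides.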
The procedure of maximizing utility faces the risk of self-defeat, as Parfit (1984: Part 1) observes. It is possible that one maximizes chances of a utility maximizing act, not by seeking a utility maximizing act, but by other steps. Perhaps in strange circumstances a diner maximizes utility by unintentionally dropping his glass. Of course, he cannot drop his glass unintentionally by intending to drop it. He may unintentionally drop it by, say, taking a pill that causes hand tremors. Dropping his glass unintentionally is not a good procedure for the diner because it is self-defeating. It is followed only if it is intentionally adopted, and then it thwarts its objective. In some cases “Maximize utility” is not a procedure rational to follow because that procedure is self-defeating. Utility maximization works better as a standard of evaluation than as a procedure, and similarly for generalizations of the principle of utility maximization. A standard of evaluation does not give advice. If an act falls short of the standard, the agent should have done something else. That does not mean that he should have been advised to do something else, or that he should have tried to do something else. The advice and the trying may worsen matters. They may rattle the agent so that he not only fails to maximize but also bungles miserably. Moreover, failing to maximize does not entail that the agent should have deliberated more carefully. To maximize utility now, an agent has to act now. Taking time for careful deliberation may not maximize utility now.2 Gigerenzer (2002) criticizes utility maximization as a decision procedure because it is self-defeating. He believes that sometimes if you try directly to maximize utility, you will fail. Take catching a fly ball. If a fielder pursues utility maximization directly by calculating where the ball will land and then running to that spot, he will fail to arrive in time to catch the ball. Calculating will not maximize utility given the shortage of time for reaching the ball’s landing spot. A utility-maximizing decision procedure need not instruct an agent to calculate utilities for all options. Perspicacious utility maximization may proceed to the goal indirectly. Good advice is to run toward the ball, keeping constant the angle of sight to the ball. This aim is compatible with the background aim of utility maximization. Chasing the ball this way maximizes without calculating the ball’s trajectory. An engineer who wants to build a robot that maximizes utility need not build a robot that in every case internally calculates the act that maximizes utility. She may construct the robot to respond to its environment so that the net result is maximization without internal calculation. To follow a wall, a robot may bump against the wall and move at an angle away from the wall a few feet. Then it may move toward the wall until it bumps against it again. It may repeat its retreat and return indefinitely. As a result, it moves along the wall without calculating the wall’s path. A direct aim may yield utility maximization indirectly. A person may maximize utility by picking his favorite ice-cream flavor even if he picks spontaneously without deliberation. If he selects vanilla and that maximizes utility, he indirectly
achieves utility maximization although his direct aim was a flavor. He indirectly reaches the abstract and general objective of maximization.3 Does the prospect of self-defeat create dilemmas of rationality? Suppose that a speaker wants to impress the audience, but to succeed must impress it unintentionally. A dilemma arises. If he tries to impress, he will not impress. If he does not try to impress, he will not impress, for he will not succeed without effort. Either way, he will not impress and so will fail to maximize utility. The speaker seems doomed to irrationality. A response to the dilemma is to disqualify impressing as an option. Impressing is not in the speaker’s full control because it must arise without trying, and without trying it is unlikely to arise. Hence impressing is not evaluable for rationality. A suitable evaluation assesses its utility rather than its rationality. May a dilemma arise for a genuine option, an act an agent fully controls? Suppose that a woman cannot laugh by trying directly to do it. She has to think of a funny scene to prompt a laugh. Although she does not directly control laughing, she fully controls it. Hence the act is evaluable for rationality. Also suppose that the woman, without blame, does not realize that she has full control of laughing by indirect means. Then she is not likely to laugh unless she tries, and if she tries, she will fail. Given that laughing maximizes utility, it appears that a dilemma of rationality arises. The case does not refute the standard of utility maximization. Strictly speaking, the standard does not apply to indirectly controlled laughing. Rationality evaluates the act by its components, namely, the thought and the laugh it causes. However, waiving that point, the example still offers no objection. It presents an excused failure to maximize utility. The woman’s failure to laugh is excused by the case’s background assumptions. Success without trying is unlikely to arise on its own from her beliefs and desires. That unlikeliness, together with the impossibility of success by trying, excuses failure. The case does not undermine the standard of utility maximization but only reveals a source of excuses for failing to maximize utility.

6.2 Utility Maximization

Utility maximization is a standard for acts. It applies to acts that an agent directly controls, such as decisions. For simplicity, this section introduces the standard for decisions only. In a decision problem, options are possible decisions, including for convenience the null decision, that is, no decision. Propositions represent both decisions and their contents. A decision’s content typically concerns another act, perhaps an extended act. In a decision problem, the standard of utility maximization requires adopting an option that maximizes utility among all options. Utility is rational degree of desire, as Weirich (2001: Chap. 3) explains. It depends on information. An option’s utility equals its expected utility. Consequently, utility maximization does not presume knowledge of every option’s
outcome. An option’s utility depends on the probabilities and utilities of its possible outcomes. Any set of mutually exclusive and jointly exhaustive states generates an option’s possible outcomes, one possible outcome for each state. Propositions represent the possible outcomes. The most fine-grained outcomes are possible worlds, taken as consistent propositions that are conatively maximal. That is, they are maximal in the sense of entailing for each of the agent’s desires its satisfaction or frustration. Because coarse-grained outcomes comprehend fine-grained outcomes, all outcomes have comprehensive scope.4 Just as utilities depend on an agent’s desires, probabilities depend on an agent’s beliefs. Both probabilities and utilities are subjective because they depend on an agent’s psychological states. Principles of rationality governing beliefs and desires generate the usual structural features of probabilities and utilities. When obtaining an option’s utility from the probabilities and utilities of its possible outcomes, the formula for evaluating an option o’s possible outcome in a state s is P(s given o)U(o given (s if o)). Causal decision theory generates an interpretation of the formula. That theory makes an option’s utility depend on an appraisal of the option’s efficacy in realizing an agent’s goals. Special circumstances warrant simplification of the formula. For instance, if options have no influence on states, the probability-utility product reduces to P(s)U(o given s), as Weirich (2001: Chap. 4) explains. How should a theory of rationality represent beliefs and desires? Standard quantitative representations are useful but limited. In some cases they are too rich. Beliefs and desires are not quantitative. In other cases the representations are not rich enough. The literature on Pascal’s Wager, for instance, observes that two chances for infinite bliss have the same infinite utility even if the first yields infinite bliss with greater probability. This comparison runs contrary to the continuity axiom behind standard quantitative representations of utility. Some cases demand lexicographic orderings of options, infinitesimal probabilities, and the like. This book puts aside these richer representations and works only with standard, Archimedean representations. Its account of probability and utility treats only ideal cases where standard representations are adequate. In the ideal cases that standard representations fit, economists say that a rational agent acts as if he maximizes utility. This section advances a stronger norm: a rational agent maximizes utility. This norm requires probability and utility to be propositional attitudes and not just functions representing preferences. An inference to the best explanation supports the stronger norm. A rational agent acts as if he is maximizing because he is maximizing. Principles of rational preference insufficiently explain rational action in cases where an agent’s preference structure is sparse. Take the principle that preferences should be embeddable in a preference structure with a unique (given a choice of scale) maximizing representation. This principle has no justification except that genuine utility-maximization requires compliance. Also, it is too weak to be an exhaustive account of the structure
rationality imposes. It does not rule out, for instance, utility-minimizing preferences. Standards for rational decisions, although evaluative and not procedural, are sensitive to an agent’s situation. The standards become more demanding as an agent comes closer to having perfect decision-making tools, and to being perfectly situated for making her decision. They become less demanding as she acquires limitations and new obstacles arise. The standard of utility maximization, although not a decision procedure that an agent must apply as she makes her decision, assumes that the input for its application is accessible to the agent. In the ideal conditions that the standard assumes, the agent has access to the options, probabilities, and utilities that the standard’s application requires. A proposition’s representation affects access to its utility. An agent may not know an outcome’s utility given some way of naming the outcome. So the principle of utility maximization requires that an outcome be named in a canonical way that transparently designates the outcome, and thereby makes its utility accessible.5 Utility maximization is a requirement of rationality in ideal cases in which agents are cognitively perfect and decision problems are routine. When agents and decision problems are not ideal, obstacles to utility maximization excuse failures to maximize. A decision’s rationality depends on its meeting less demanding standards. The absence of probabilities and utilities for options’ possible outcomes is a familiar obstacle to utility maximization. Without those quantities, options’ utilities may not exist so that options are not comparable with respect to utility, and no option has maximum utility. I adopt an interpretation of probability and utility according to which an agent’s having probability and utility assignments does not require her having quantitative beliefs and desires. Probabilities and utilities are just quantitative representations of beliefs and desires. Nonetheless, in many cases agents are not in a position to assign probabilities and utilities to an option’s possible outcomes and so to assign a utility to the option. Because of human limits, agents often do not have the probability and utility assignments that utility maximization assumes. Even after a reasonable amount of reflection, the necessary assignments may not arise. The standard of utility maximization, suitably restricted for attainability, advances a necessary condition of a decision’s rationality in ideal cases with an option of maximum utility. It is silent about nonideal cases without adequate probability and utility assignments. Agents in those cases do not run afoul of it. Taking its application to a case as a truth-functional conditional with the existence of the requisite probabilities and utilities as antecedent, an agent without those probabilities and utilities meets the standard by default, whatever she does. The conditional is true because its antecedent is false.6 Suspending judgment about probabilities and utilities does not mandate suspension of standards of rationality for options. If conditions are ideal except
for the absence of probability and utility assignments, an option is rational if not contrary to comparative probability and utility judgments. Good (1952: 114) expresses this tolerant view more precisely in the form of a decision principle. It endorses an option that maximizes utility with respect to some quantization of beliefs and desires, that is, some probability and utility assignments compatible with beliefs and desires. Good’s principle handles not only cases in which probability and utility assignments are missing, but also cases in which those assignments are unattainable because the agent’s relevant goals are incommensurable. It offers an attractive generalization of utility maximization. To illustrate Good’s standard, suppose that someone does not assign any precise probability to increasing his cholesterol level if he eats foods containing trans fat. However, given any probability assignment compatible with his beliefs, avoiding trans fat maximizes utility. Then he should avoid trans fat. Under no quantization of beliefs and desires does a diet with trans fat maximize utility. Another obstacle to utility maximization is the cost of following decision procedures aimed at utility maximization. The most famous amendment for agents with cognitive limits is Simon’s principle to satisfice. Simon advances satisficing as a decision procedure. Because this chapter treats standards of evaluation, not decision procedures, and because standards of evaluation also accommodate cognitive limits, I reformulate Simon’s principle to obtain a standard of evaluation. The reformulation makes his principle a generalization of utility maximization. The generalization says that a rational option comes from the highest classification of options into which options fit. The relevant classifications depend on the agent’s beliefs, desires, and aspiration levels. In routine decision problems in which probability and utility assignments are available and some option maximizes utility, satisficing reduces to utility maximization. Aspiration levels rise to the top of the preference ranking of options. The highest classification is the set of utility maximizing options. Only a utility maximizing option is satisfactory. In nonquantitative cases and other cases in which no option maximizes utility, adopting a satisfactory option, according to suitable aspiration levels, goes beyond utility maximization. In such cases it requires only classification of options according to some scheme of classification into ranks. A classification of options as satisfactory or unsatisfactory suffices. The principle sanctions a satisfactory option in cases with an infinite number of options, better and better without end. Satisficing, as I formulate it, accommodates decision costs by factoring them into the classification of options. Options are possible decisions and their consequences include decision costs. Given suitable methods of setting aspiration levels, the principle agrees with Good’s principle for nonquantitative cases. It agrees if the highest classification of options contains exactly the options that are utility maximizing under some quantization of beliefs and desires. For example, limiting consumption of trans fat to a gram per day may satisfice. A consumer may deem that quantity and lesser quantities safe and may be indifferent
between all safe quantities. Several diets low in trans fat then maximize utility under a quantization of beliefs and desires. They form the highest class of options, although none maximizes utility. Attempts to make satisficing compatible with utility maximization and generalizations such as Good’s are sometimes criticized for ignoring the procedural side of decision making. The principle of satisficing, as I formulate it, does not make this mistake. It advances a standard of evaluation to be applied to decision procedures as well as to the decisions they produce. A rational decision procedure should itself satisfice stepwise, taking account of the agent’s circumstances and the costs and benefits of decision procedures. It may satisfice even if just followed, and not selected using a metadecision procedure aimed at satisficing among decision procedures. The standard need not exempt decision procedures from evaluation to avoid a regress in the rational adoption of a decision procedure. Good’s and Simon’s principles are familiar generalizations of utility maximization. They illustrate how generalization of utility maximization proceeds. Section 6.3 presents another generalization of utility maximization designed to handle nonideal decision problems.7

6.3 Self-Support

Suppose that during an afternoon a person eats a Fuji apple and a Gala apple. The Fuji apple is superior to the Gala apple according to his tastes. Those tastes are typically constant during an afternoon and form a constant basis for the comparison. Next, suppose that a person has a choice between a Fuji apple and a Gala apple, and he picks the Fuji apple because of his tastes. A comparison of the choice made with the choice not made requires supposing the choice not made and considering its outcome. Normally, supposing the choice not made preserves tastes and other basic grounds for comparing options. In standard cases, desires forming the basis of comparison are the same no matter whether one supposes the option realized or an option not realized. Similarly, beliefs forming the basis of comparison are the same under supposition of any option. Rationality evaluates options by comparing them, and the comparisons involve hypothetical situations. Rationality evaluates an option realized by comparing its realization with other options’ realizations. For each alternative option, it hypothetically supposes that option’s realization. Rationality also evaluates options not realized. Its evaluation of an option not realized hypothetically supposes the option’s realization. The method of evaluation is straightforward in normal cases because the hypothetical situations an evaluation entertains do not alter the basis of comparison of options. Although an evaluation’s hypothetical suppositions typically do not affect the basis of comparison of options, they may affect it in special cases. When an evaluation hypothetically supposes an option not realized, the supposition may affect beliefs or desires forming the basis of comparison. Suppose, for example,
that a person compares her current career to an alternative career. If she were to pursue the alternative career, she would acquire new preferences favoring it in place of her current preferences favoring her current career. The basis of comparison shifts according to the career imagined. If a directly controlled option’s supposition changes beliefs and desires forming the basis of comparison with rivals, how do comparisons settle its evaluation? This section formulates a response. An option’s utility assignment depends on the agent’s beliefs and desires. It changes with assumptions touching the agent’s information. An option’s utility therefore is sensitive to assumptions about the option’s adoption. In typical decision problems, an option’s adoption does not carry information that bears on options’ utilities. In some decision problems, however, an option’s adoption carries relevant information. It is possible that for options o and o′, U(o given o) ≠ U(o given o′). An option’s evaluation should attend to U(o given o), but which utilities should guide comparison with other options? Because U(o given o) and U(o′ given o′) involve different informational assumptions, they are not adequate for comparing o and o′. In such decision problems, no option’s assumption is appropriate for comparison of all options. No single body of information is appropriate for calculating all options’ utilities. Comparisons shift with assumptions. One option may have greater utility than all others with respect to the information it carries, while another option has greater utility than all others with respect to the information it carries. Utility maximization then yields ambiguous advice. One response to the problem notices that an option’s being rational is a dispositional property. In ideal conditions, an option is rational if and only if it is utility maximizing if performed. This observation leads to Jeffrey’s principle of ratification (1983). To accommodate information an option carries, that decision principle instructs an agent to adopt an option that has maximum utility on the assumption that it is adopted. Such an option is said to be self-ratifying. Ratification does not conflict with causal decision theory. Although an option may provide evidence concerning its consequences, comparison with other options still proceeds according to its expected causal consequences. In an ideal constant-sum game an equilibrium strategy is self-ratifying. Take the game in Table 6.1. (Up, Left) is the equilibrium. Given Up, Row infers that Column picks Left. If Column picks Left, Row cannot improve by switching to Down. So Row’s equilibrium strategy is self-ratifying.
Table 6.1 Self-Ratification

         Left    Right
Up       2, 2    3, 1
Down     1, 3    4, 0
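A small Python sketch (an illustration, not the book’s; the representation of the game is a stipulation) makes the check mechanical: on the assumption that a strategy of Row’s is adopted, it infers Column’s best reply and asks whether Row could then do better by switching.

# Payoffs from Table 6.1; each cell is (Row's payoff, Column's payoff).
payoffs = {
    ("Up", "Left"): (2, 2), ("Up", "Right"): (3, 1),
    ("Down", "Left"): (1, 3), ("Down", "Right"): (4, 0),
}
rows, cols = ["Up", "Down"], ["Left", "Right"]

def column_best_reply(r):
    """Column's utility-maximizing response to Row's strategy r."""
    return max(cols, key=lambda c: payoffs[(r, c)][1])

def self_ratifying(r):
    """Does r maximize Row's utility given the reply that r evidences?"""
    c = column_best_reply(r)
    return all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in rows)

print({r: self_ratifying(r) for r in rows})  # {'Up': True, 'Down': False}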
Von Neumann and Morgenstern ([1944] 1953: 105, 146–48) implicitly use self-ratification to derive realization of a Nash equilibrium. They assume, roughly, that in an ideal two-person, zero-sum, single-stage noncooperative game, an agent may use the strategy he adopts to infer his opponent’s response. A Nash equilibrium is then a profile in which each strategy maximizes expected utility given the profile, and so maximizes expected utility on the assumption that it is adopted.8 In some decision problems no option is self-ratifying, so the advice to adopt a self-ratifying option cannot be followed. This is the case in an ideal version of Matching Pennies without mixed strategies (see Table 5.2). The players are cognitively perfect and, moreover, are prescient about each other. A player knows, for each of his strategies, the other player’s response. If a player’s opponent excels at out-maneuvering him, it may turn out that any choice the player makes carries evidence that his opponent adopts a counter-measure, and hence evidence that another choice would have been better. The player’s showing Heads has greater utility than his showing Tails, given that he shows Tails. On the other hand, his showing Tails has greater utility than his showing Heads, given that he shows Heads. For attainability, Weirich (1998: Chap. 4) introduces a standard of evaluation weaker than self-ratification. Its ground is an agent’s pursuit of incentives to switch options. It defines self-support so that every self-ratifying option is self-supporting although not every self-supporting option is self-ratifying. Specifically, an option is self-supporting if it does not start a terminating path of pursued incentives to switch options. A path of pursued incentives to switch options terminates if it is finite and not extendible. Every decision problem has a self-supporting option. Rationality imposes the standard of adopting a self-supporting option. Self-support is a necessary condition of rationality for ideal agents in ideal circumstances. The standard of self-support is an attainable generalization of the standard of utility maximization.9 An incentive to switch is a reason for an alternative, and responsiveness does not require the passage of time. The incentive is a preference conditional on an option. An option supposed as evidence is supposed indicatively. Supposed as a response to circumstances, it is supposed subjunctively. Indicative supposition gives priority to evidential relations, and subjunctive supposition gives priority to causal relations. An incentive to switch involves indicative supposition of an option and subjunctive supposition of a new option’s realization. A response to the new strategy involves indicative supposition of the new strategy. For example, an agent reasons: if I adopt o, then others adopt such and such strategies. If others adopt those strategies, I would do better if I were to adopt o′. However, if I adopt o′, then others adopt such and such new strategies. Supposing o′ indicatively initiates another round of changes. A string of conditionals expressing those changes underlies a path of incentives to switch strategy.
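The following Python sketch (mine, for illustration; it encodes only the Matching Pennies scenario the text describes) traces such a path. A prescient opponent mismatches whatever the matcher picks, so each option carries an incentive to switch, and the path of incentives never terminates.

# Matching Pennies without mixed strategies; Row is the matcher, and
# his prescient opponent mismatches whatever option Row adopts.
counter = {"H": "T", "T": "H"}

def incentive_to_switch(option):
    """Supposing his option indicatively, Row infers the opponent's
    mismatching response and would prefer to have matched it."""
    return counter[option]  # the strategy Row has an incentive to adopt

path, option = [], "H"
for _ in range(4):                     # follow a few links of the path
    path.append(option)
    option = incentive_to_switch(option)
print(" -> ".join(path))               # H -> T -> H -> T, without end
# No path of incentives here terminates; an agent who rationally forgoes
# pursuing them makes the option at which he halts self-supporting.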
Because the standard of self-support transforms some issues about rational decision into issues about rational pursuit of incentives to switch options, this section introduces standards for pursuit of incentives to switch options, too. They concern selection of incentives to pursue when there are several and also stopping pursuit of incentives. The guiding general principle requires pursuit of sufficient incentives, but acknowledges that an incentive is not sufficient if pursuing it does not lead anywhere or undermines the incentive. Take the decision problem of picking your own income. Pursuit of incentives has no end. A stopping rule allows halting at a good income even if another is preferred. For illustrative purposes, this book assumes selection of a greatest incentive, if one exists, and stopping only to avoid endless pursuit of incentives. It treats cases where these selection and stopping rules are sensible.10 Because an ideal game ends, all paths of pursued incentives terminate. An option is self-supporting if it does not start a path of pursued incentives. Any option realized is self-supporting because its realization implies that any incentive to switch options is not pursued. However, comprehensive rationality requires more than self-support. A comprehensively rational option is self-supporting with respect to pursuit of incentives that meets standards of rationality. In ideal cases where agents are comprehensively rational, not all options qualify as self-supporting. To illustrate the standard of self-support, return to the game of Matching Pennies. If the matcher’s opponent is skilled in out-maneuvering him, he has an incentive to switch options no matter which option he adopts. Suppose that he rationally forgoes endless pursuit of incentives and, as a result, showing Heads does not generate a path of pursued incentives. That option meets the standard of self-support despite not being self-ratifying.11 Suppose that an agent in a decision problem has several self-supporting options. Which should she adopt? This is an open question. It is commonly called a selection problem although it is not a second decision problem, but just a search for a necessary condition of rationality in addition to self-support. Weirich (2004: Chap. 9) resolves the problem in a few special cases. In those cases the preparations of a comprehensively rational agent ensure adoption of an optimal self-supporting strategy. However, a general resolution of the problem is a project for future research. In ideal cases all reasons to act bear on preferences concerning options so that a rational act follows all-things-considered preferences among options. In nonideal cases where relentless pursuit of incentives is unending, some reasons to act do not bear on preferences, and a rational act need not follow all-things-considered preferences. An individual with pairwise preferences that cycle has a reason to stop the cycle so that he will not be a money pump. He may form a higher-order desire not to pursue a preference. The higher-order desire may eliminate the preference so that his halting the cycle follows preferences. The futility of relentless pursuit of
incentives gives an agent a reason not to pursue an incentive. Pursuit wastes effort. The reason may modify the agent’s incentives so that paths of incentives end. Then incentives are sufficient. However, in the incentive structures I treat, incentives hold all things considered. Some paths do not end. The futility of relentless pursuit of incentives does not modify incentives but makes some incentives insufficient reasons for action. A reason not to pursue incentives relentlessly affects action but not incentives. A person who picks his own income may have a desire to halt pursuit of preferences for higher incomes. The higher-order desire may eliminate preferences for higher incomes. Then halting at a certain income accords with preferences. However, in some versions of the decision problem, preferences for higher incomes hold all things considered. The person has a reason to pick some income despite preferring a higher income all things considered. That relentless pursuit of preferences never ends generates a reason to stop with good enough. In most cases, a reason to realize an act (an internal reason that can motivate) generates a reason to prefer the act to alternatives. Nonetheless, exceptional cases arise. For example, one may have a reason to break a tie without having a reason to form a preference. In an ideal decision problem, a higher-order desire has no influence on a rational ideal agent except through his all-things-considered preferences among options. In nonideal problems, some reasons to act do not operate through all-things-considered preferences among options. Whether an option is best supported may depend on reasons besides preferences. A pattern of all-things-considered preferences may generate a reason not to pursue a preference in the pattern. All-things-considered preferences do not absorb that reason. A nonideal agent may have a sufficient reason not to pursue an all-things-considered preference. The preference may be an irrational, pure time-preference, for example. May a rational ideal agent have a sufficient reason not to pursue an all-things-considered preference? If the reason is effective, doesn’t the agent incoherently prefer all things considered not to pursue the preference? A rational ideal agent’s sufficient reason to halt pursuit of insufficient incentives is not and does not generate an all-things-considered preference not to follow an all-things-considered preference. The reason does not operate through preferences. Reasons besides preferences adjudicate cases without a stable basis of comparison.12

6.4 Strategic Equilibrium

The principle of self-support contributes to normative game theory because decisions made in games often have the evidential characteristics the principle addresses. Given the common assumptions that the players in a game are cognitively ideal, fully rational, and have extensive common knowledge of their game, a player’s choice often provides information about other players’ choices. Hence, supposing the choice may prompt a change in the player’s assignment of utilities to his strategies. So a strategy that maximizes utility on the assumption that it is
realized may not maximize utility on the assumption that another strategy is realized. As a result, it may be impossible to maximize utility with respect to information at the time of choice. Or, there may be several strategies, not all equally rational, that would each, if realized, maximize utility with respect to information at the time of choice. These problems motivate replacing the standards of self-ratification for decisions and joint self-ratification for solutions with the standards of self-support for decisions and joint self-support for solutions. The standard of Nash equilibrium takes joint rationality to require joint self-ratification. However, just as some decision problems lack a self-ratifying option, some games lack a Nash equilibrium. Every game with a finite number of strategies and agents has a Nash equilibrium in the game’s mixed extension, which permits as strategies probability-mixtures of strategies. Without mixed strategies, however, the game may lack a Nash equilibrium. Mixed strategies are not realistic in all games. Sometimes randomization is blocked. Then a Nash equilibrium may be unattainable. Moreover, suppose that mixed strategies are merely representations of one agent’s strategies in the minds of other agents, and a Nash equilibrium is an equilibrium-in-beliefs. A finite game may lack an equilibrium of this type, too. For example, consider a three-agent game in which two agents attribute different probabilities to the third agent’s strategies. The third agent lacks mixed strategies in the epistemic sense. If an equilibrium requires mixed strategies, the game lacks an equilibrium-in-beliefs. Also, foreknowledge of an opponent’s strategy eliminates mixed strategies in the epistemic sense. If agents are prescient so that each anticipates the other’s strategy, and equilibrium requires mixed strategies, then no equilibrium-in-beliefs exists, pace Bovens (2001: 291–92). Matching Pennies without randomization may lack an equilibrium-in-beliefs, for example. Suppose that the agents are prescient. They both know that Row, the matcher, does not pursue incentives, whereas Column, the mismatcher, does, and that (H, T) will be realized. Given their beliefs about each other, Row has an incentive to switch from H to T given (H, T). Every other profile also generates an incentive to switch for some agent. No profile is an equilibrium-in-beliefs. Applying the principle of self-support to ideal games yields a generalization of Nash equilibrium covering games without a Nash equilibrium. A solution to a game requires joint rationality, and joint rationality requires joint self-support. A profile of strategies is jointly self-supporting if each strategy is self-supporting given the profile. A strategic equilibrium is a profile of jointly self-supporting strategies. Weirich (1998) shows that a strategic equilibrium exists in every ideal game with a finite number of agents, with or without mixed strategies. In ideal games, every Nash equilibrium is a strategic equilibrium but not vice versa.13
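As a rough illustration, the following Python sketch (mine; it compresses the path-based definition into single-step switches and takes each agent’s disposition to pursue incentives as given) finds the jointly self-supporting profiles of Matching Pennies without randomization.

from itertools import product

other = {"H": "T", "T": "H"}

def payoff(r, c):
    """(matcher's payoff, mismatcher's payoff); Row is the matcher."""
    return (1, 0) if r == c else (0, 1)

def strategic_equilibria(row_pursues, col_pursues):
    """Profiles at which no incentive-pursuing agent gains by switching."""
    eqs = []
    for r, c in product("HT", repeat=2):
        row_gain = row_pursues and payoff(other[r], c)[0] > payoff(r, c)[0]
        col_gain = col_pursues and payoff(r, other[c])[1] > payoff(r, c)[1]
        if not (row_gain or col_gain):
            eqs.append((r, c))
    return eqs

print(strategic_equilibria(row_pursues=True, col_pursues=False))
# [('H', 'H'), ('T', 'T')]: when the mismatcher forgoes pursuit of
# incentives, the profiles favoring the matcher are jointly self-supporting.

With both agents pursuing incentives, the function returns no profile, which mirrors the absence of a Nash equilibrium in pure strategies.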
For each of an agent’s strategies, the other agents have a best response that is a strategic equilibrium of the subgame obtained by fixing the agent’s strategy. An agent’s strategy is self-supporting if given its adoption the agent lacks a sufficient reason to adopt another strategy. It is self-supporting given a profile of strategies if under the assumption that the profile is realized the agent lacks a sufficient reason to adopt another strategy. In a strategic equilibrium, no player has a sufficient reason to switch strategy. Selection and stopping rules identify sufficient incentives. Take Matching Pennies without randomization as an example. Given that the mismatcher does not pursue incentives, (H, H) is a strategic equilibrium. That profile may be a solution although it favors the matcher. The game may have an asymmetric solution despite having a symmetric payoff matrix because of asymmetries in the matrix’s concrete realization, such as asymmetries in the agents’ psychologies. The asymmetry of each possible outcome ensures an asymmetry in agents’ behavior in the concrete game. A concrete game has features its payoff matrix does not represent. Identification of strategic equilibria appeals to more facts about a game than does identification of Nash equilibria. Whereas a Nash equilibrium depends only on a game’s payoff matrix, a strategic equilibrium depends also on responses to an agent’s change of strategy. Agents’ pursuit of incentives is part of a game’s full specification. It explains a game’s outcome in cases where the payoff matrix by itself is an insufficient explanation. Although strategic equilibria in a game such as Matching Pennies may vary from one concrete realization to another, the variation is not arbitrary but dependent on the players’ psychologies. A complete theory of collective rationality explains the effect of players’ psychologies on their strategic equilibria. In games without a Nash equilibrium, incentives may lead players from one strategy profile to another without ever settling on any profile. Incentives that do not lead to a particular strategy profile are not sufficient reasons to change strategies. Hence they do not upset strategic equilibria. Rationality’s standards for a player’s decision do not require pursuing such incentives. In an ideal version of Matching Pennies, any choice is a sign that another choice is better. One agent must forgo pursuit of incentives. Nonetheless, universal rationality is possible because it does not require universal utility-maximization. Rationality’s attainability ensures the consistency of individual and collective rationality. Rationality sets for each player an attainable standard. Attainability for a player depends on others’ acts. In Matching Pennies, if a player’s opponent anticipates his choices, then he may act rationally even though he foresees losing. All players may simultaneously achieve rationality and hence may achieve collective rationality. Game theorists need an account of equilibrium in noncooperative games without Nash equilibria for several reasons. First, such games are possible, and rational behavior in them is possible, too. So a general account of rationality must
treat such games. Second, such games actually arise, and in them agents are often more or less rational. A general theory of rationality offers an approximate description of behavior in those games. Take the game of Matching Pennies with a stipulation that forbids randomization. People do not simulate random devices well and so cannot circumvent the ban on randomization by internally randomizing.14 In such a game the agents’ psychologies and their knowledge of their psychologies influence the game’s outcome. Identifying dispositions to pursue incentives to switch strategies, and thereby strategic equilibria, roughly predicts the game’s outcome.15 Pursuit of incentives registers all reasons and leaves none for equilibrium selection. Nothing but psychology and circumstance settles selection of a strategic equilibrium. Efficiency among strategic equilibria affects pursuit of incentives and not selection of an equilibrium. Rational agents observe constraints on rational pursuit of incentives. Instead of pursuing incentives leading to an inefficient strategic equilibrium, agents may pursue incentives leading to an efficient strategic equilibrium. One agent may initiate pursuit of incentives that leads to an efficient strategic equilibrium if others respond appropriately. They respond appropriately if they prepare. Preparation coordinates their pursuit of incentives. Fully rational agents coordinate their entry into deliberational dynamics so that an efficient strategic equilibrium emerges. As Section 10.4 argues, prepared pursuit of incentives yields a strategic equilibrium that is efficient among strategic equilibria.16

6.5 Realization of an Equilibrium
This section treats realization of a strategic equilibrium in single-stage noncooperative games. For simplicity, it treats only games in which the relevant strategic equilibria are also Nash equilibria. The restriction allows self-ratification to replace self-support. In the games treated, realization of a Nash equilibrium is a standard of collective rationality. How do standards of individual rationality support it? One method of deriving an equilibrium’s realization from individuals’ reasoning assumes that their reasoning about strategies is bounded and takes place in stages. Agents are understood to have cognitive limits that force them to process bits of information sequentially instead of all at once. Harsanyi and Selten’s tracing procedure (1988: Chap. 4) and Skyrms’s deliberational dynamics (1990a), for example, take this tack. However, the principles for cases of bounded rationality are not extendible to ideal cases. Decision theory needs general principles that cover ideal cases, too. Strategic reasoning reveals its basic structure in those cases. The classical treatment of single-stage noncooperative games, which I pursue, makes many idealizations. Von Neumann and Morgenstern ([1944] 1953: 146–48), for instance, assume that players have knowledge of the theory of
rationality and use it to figure out others’ strategies. Each player knows how the others will apply the theory of rationality to the game. Von Neumann and Morgenstern implicitly assume that players have common knowledge of all relevant facts about their game. They do not explicitly state their assumptions about common knowledge because when a text presents a game, a reader naturally supposes that the players have the same knowledge that the reader acquires as he learns about the game. As a background assumption, the reader supposes that all the players know what he knows about the game. Because the reader knows every relevant feature of the game, the background assumption entails that the players know every relevant feature of the game, too. What a player knows is a relevant feature of the game. The reader knows what the players know. Hence, by assumption, each player knows every relevant fact that any player knows.17 This assumption is enough to generate common knowledge of all relevant facts. More precisely, the agents have common knowledge of a relevant proposition if at least one agent knows the proposition. To see this, suppose that an agent i knows that p, that is, K_i p. Then also, every agent j knows that p. Moreover, because agent i is an ideal agent whose knowledge satisfies the standard S5 knowledge axioms, including positive introspection, the agent i knows that he knows that p. Hence every agent j knows that agent i knows that p. That is, for all i, j, if K_i p, then K_j K_i p. Consequently, for any string of agents 1 through n, not necessarily distinct, K_1 K_2 . . . K_n p. The set of strings entails the levels of mutual knowledge that constitute common knowledge of p. In an ideal game, players have common knowledge of their game and their rationality. Which principles of individual rationality support their participation in a Nash equilibrium? Deriving realization of a Nash equilibrium from individual rationality confronts the interdependence in strategic reasoning of the probabilities players assign to each other’s strategies. To apply the principle to maximize expected utility, a player needs the expected utility of each of his strategies. To obtain a strategy’s expected utility, he needs the probabilities of other players’ strategies. For every player, the probability of a strategy depends on the probabilities of the other players’ strategies. So the probabilities of players’ strategies are interdependent.18 The dynamic nature of decisions in games suggests that equilibrium rather than utility maximization directs rational agents. Should the principle to do one’s part in the best Nash equilibrium replace the principle to maximize utility? A theory of rationality gains strength by having a single set of decision principles good for all decision problems. To support Nash equilibrium as a standard of collective rationality, one should derive the principle to adopt a Nash strategy from independently motivated principles of individual rationality. A derivation showing that Nash equilibrium follows from players’ adopting self-ratifying strategies unifies decision theory and game theory. Best-response reasoning is a type of strategic reasoning. It examines agents’ pursuit of incentives. Their pursuit of incentives may lead them to a Nash
equilibrium. For example, two people bidding for a project, which costs each the same, may entertain bids higher than costs. Each wants to bid just a little lower than the other’s bid. If each expects the other to underbid a high bid, they lower tentative bids until bids equal costs. Best-response reasoning leads each to bid what the project costs, and their bids form a Nash equilibrium. In the ideal games this section treats, relentless pursuit of incentives is rational, and agents make best responses. In a single-stage game a player does not observe the other players’ moves but may infer their moves. The assumption of best responses grounds inferences about others’ choices. A player uses assumptions about his choice to infer other players’ choices. He knows other players respond rationally to his choice. He makes a self-ratifying choice. Ratification, because it uses utilities conditional on choices, accommodates agents’ interdependent probabilities.19 Ratification’s account of an agent’s choice of a Nash strategy in an ideal game starts with his knowledge about the game and the players and ends with his knowledge of his choice. So that the agent’s deliberations justify his choice, they do not begin with direct knowledge of his choice and its rationality. Each agent knows that the others are certain that he will make a best response to them. Because he knows that they are rational ideal agents, he initially considers their certainty to be justified, but nonetheless possibly mistaken. He does not know directly that others know that he will make a best reply, nor does he know directly that he will make a best reply. The agent’s knowledge of others enables him to discover their choices, given his knowledge of his choice. During deliberations, an agent gains indirect foreknowledge of his choice and its rationality and indirect foreknowledge of others’ responses. Agents who realize a Nash equilibrium coordinate, although they do not have the common goal of realizing a Nash equilibrium. Each realizes a Nash strategy because each has the goal of maximizing utility. That goal is enough for coordination because each responds to evidence about the others’ acts. Each acts because of expectations about the others. Realization of a Nash equilibrium is epistemic coordination on a strategy profile. The players respond to a common set of circumstances, namely, the game and their common knowledge of it and each other. If an ideal game has multiple Nash equilibria, deriving realization of an equilibrium from individual rationality also must solve the problem of coordinating to realize a particular equilibrium. Because all players know the profile realized, they achieve a Nash equilibrium. That they realize a Nash equilibrium does not however explain the realization of the particular Nash equilibrium they realize. They may need extensive knowledge of their situation to derive knowledge of the profile realized. Its derivation may require knowledge of their psychologies beyond knowledge of their rationality. This section and the chapter’s appendix derive players’ Nash strategies in some games with a unique Nash equilibrium. Chapter 7 extends the derivation to some games with multiple Nash equilibria, but a unique efficient Nash
equilibrium. The argument for participation in a unique Nash equilibrium uses the players’ common knowledge of their concrete game and their rationality. A payoff matrix represents a concrete game but omits relevant features, including players’ knowledge and their responses to out-of-equilibrium strategies. The argument assumes that players have common knowledge of all relevant features of their game. Players’ rationality entails their conformity with principles of rationality. Although utility maximization is a common principle of rationality, the theory of rationality advances the principle of ratification as a refinement for special cases. According to it, players adopt a strategy that maximizes utility on the assumption that it is adopted. The argument assumes that players have common knowledge of their conformity with the principle of ratification. Consider a version of Matching Pennies with the payoff matrix in Table 6.2 and with mixed strategies. This game has only the mixed-strategy Nash equilibrium (1/2, 1/2), listing first the probability that Row selects Up and second the probability that Column selects Left. What reasoning supports this equilibrium? The players have common knowledge of conditionals such as, “If Up, then Right.” In consequence, Row’s mixed strategy 1/2 is his only ratifiable strategy. This is common knowledge. Given that each agent adopts a ratifiable strategy, each participates in the equilibrium. This is common knowledge because it follows from premisses that are common knowledge. Each pure strategy has the same expected payoff as a player’s Nash strategy given that his opponent participates in the equilibrium. Why does the player adopt his Nash strategy? The reason appeals to out-of-equilibrium behavior. Adopting a pure strategy gives an agent evidence that his opponent adopts a best response and so a deviation from her equilibrium strategy. Given her deviation, his pure strategy does not maximize utility. Row’s knowledge of Column’s response to a deviation rests on the players’ common knowledge. Their common knowledge grounds Row’s knowledge of conditionals such as, “If Up, then Right.” The representation of players’ reasoning gains depth by explaining the reasoning that generates such knowledge. It comes from a player’s knowledge of his own strategy and a player’s adopting a best response to an opponent’s strategy, and all this being common knowledge. If Row adopts Up, then he knows this. If he knows his strategy, then Column knows that he knows. If Column knows that, then Column knows his strategy. Hence Column adopts a best response. Column knows her response. Hence Row knows her response. Every relevant fact that any player knows is common knowledge.
Up Down
Left
Right
2, 0 0, 2
0, 2 2, 0
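As a check on the equilibrium the text cites, here is a minimal Python sketch (the payoff arrays and function names are mine, not the book's) confirming that the mixtures p = 1/2 and q = 1/2 leave each player indifferent between pure strategies, which is what participation in the mixed-strategy Nash equilibrium requires.

```python
# Payoff matrix for the version of Matching Pennies in Table 6.2.
# Rows: Up, Down; columns: Left, Right; entry = (Row's payoff, Column's payoff).
PAYOFFS = [[(2, 0), (0, 2)],
           [(0, 2), (2, 0)]]

def expected(player, p_up, q_left):
    """Expected payoff to a player (0 = Row, 1 = Column) under mixed strategies."""
    return sum(pr * pc * PAYOFFS[r][c][player]
               for r, pr in enumerate((p_up, 1 - p_up))
               for c, pc in enumerate((q_left, 1 - q_left)))

# q = 1/2 makes Row indifferent between Up (p = 1) and Down (p = 0);
# p = 1/2 makes Column indifferent between Left (q = 1) and Right (q = 0).
assert expected(0, 1.0, 0.5) == expected(0, 0.0, 0.5)
assert expected(1, 0.5, 1.0) == expected(1, 0.5, 0.0)
print(expected(0, 0.5, 0.5), expected(1, 0.5, 0.5))  # 1.0 1.0
```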
Hence it is common knowledge that if Row adopts Up, then Column adopts Right. By such reasoning, each player attains prescience of responses. Each knows for each strategy the other's response. Common knowledge explains prescience.

The argument for participation in a unique Nash equilibrium extends to similar ideal games. The extension for two-player games is simple, but the extension for multiplayer games is complex because in them a player calculating the other players' response to his strategy must calculate their responses to each other, too. This chapter does not provide the extensions because its purpose is just to illustrate individualistic reasoning that leads to realization of a Nash equilibrium. The example shows the structure of that reasoning. The players' common knowledge of their concrete game and their rationality gives them good reasons to adopt their Nash strategies. A player's knowledge supports participation in the equilibrium by supporting knowledge of his opponents' strategies. Each player knows the others' strategies by an inference grounded in common knowledge. Common knowledge that just one profile has self-ratifying strategies gives each agent a reason to do his part in the profile.

Realizing a solution, and so an equilibrium, is collectively rational in ideal games. Because of rationality's attainability, equilibrium is attainable. Generalizing the standard of Nash equilibrium to obtain the standard of strategic equilibrium accommodates rationality's attainability. Chapter 9 makes a similarly motivated generalization of equilibrium for cooperative games.

6.6 Appendix: Realization of a Nash Equilibrium
This appendix elaborates derivation of a Nash equilibrium's realization from players' common knowledge of their game, using the method sketched in Section 6.5. It treats realization of the unique Nash equilibrium in the mixed extension of Matching Pennies. The techniques illustrated extend to similar ideal games.20

Support for participation in a Nash equilibrium entertains counterfactual worlds, which the antecedents of counterfactual conditionals may introduce. Analysis of counterfactual conditionals, both indicative and subjunctive, introduces relations between worlds in addition to the accessibility relation representing epistemic possibility. To represent strategic reasoning, a modal system representing knowledge such as S5 may be combined with a logic of counterfactual conditionals involving two distance relations between possible worlds, namely, an evidentially sensitive relation for indicative conditionals and a causally sensitive relation for subjunctive conditionals. Supposition of a conditional's antecedent does not add a proposition to one's knowledge. It entertains the proposition without the support required for knowledge. Hence, using Kp to express one's knowing that p, the conditional (if p, then Kp) is not necessarily true. A conditional having a counterfactual antecedent is true if its consequent holds in a world minimally revised to accommodate the antecedent. The nearness conditionals representing pursuit of incentives in an ideal single-stage game
preserve, first, agents' predictive power and, second, their rationality.

I use the corner > as a connective forming counterfactual conditionals. Context distinguishes the corner from the symbol for the greater-than relation. Sentences flank the corner, whereas names and variables flank the symbol for the greater-than relation. In an ideal game, all relevant knowledge is common knowledge. A single knowledge operator K stands for knowledge and common knowledge. The operator K has its standard interpretation in terms of a set of worlds and an accessibility relation for them. Accordingly, a proposition p is known at a world w if and only if p is true in every world epistemically accessible from w. The semantics for the corner > use a selection function that given a world and proposition yields the nearest world in which the proposition is true. A corner-conditional is true at a world w just in case the consequent is true in the world selected given w and the antecedent. For simplicity, this appendix uses a single selection function underwriting indicative, or evidential, conditionals because only these conditionals appear in its argument.

The argument concludes that in a classical version of Matching Pennies, where players independently pick strategies and may randomize, they realize the Nash equilibrium in mixed strategies. The argument has several premisses describing the game. There are two players, Row and Column. Row's pure strategies are Up and Down. Column's pure strategies are Left and Right. A mixed strategy for Row specifies a value for p, the probability of Up, taken as a percentage. Similarly, a mixed strategy for Column specifies a value for q, the probability of Left. Each player realizes exactly one mixed strategy. The payoff matrix in Table 6.2 states the utilities of combinations of pure strategies.

The argument includes premisses concerning players' utilities and choices. These premisses use U(o, s) to abbreviate U(o given s), that is, the utility of an option o given a state s. In accordance with the introduction of utility in Section 6.2, the premisses use propositional variables as placeholders for canonical names of propositions. First is a general principle of conditional utility, simplified for games where players select strategies independently.

Expected Utility: U(o, o′) = Σi P(si/o′) U(o, si), where {si} is a partition of states
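To make the principle concrete, here is a minimal Python sketch (the function and variable names are illustrative, not the book's): an option's conditional utility is the probability-weighted average of its utilities across a partition of states, with probabilities conditioned on the supposed option o′. The example reproduces the calculation in the next paragraph, where knowledge that p = 50% swamps the supposition that q = 0%.

```python
def expected_utility(p_given, u, states):
    """U(o, o') = sum over states s of P(s/o') * U(o, s).

    p_given(s): probability of state s conditional on the supposition o'
    u(s):       utility of option o given state s
    states:     a partition of the relevant states
    """
    return sum(p_given(s) * u(s) for s in states)

# K(p = 50%) gives P(p = 50% / q = 0%) = 1, so the term for p != 50% drops
# out and UC(q = 0%, q = 0%) reduces to UC(q = 0%, p = 50%), which equals 1
# by the payoffs of Table 6.2.
states = ["p = 50%", "p != 50%"]
p_given = {"p = 50%": 1.0, "p != 50%": 0.0}.get
uc = {"p = 50%": 1.0, "p != 50%": 0.0}.get  # UC(q = 0%, s); the value at the zero-probability state is moot
print(expected_utility(p_given, uc, states))  # 1.0
```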
In calculations of an option's expected utility, a relevant state is an opponent's strategy. If an opponent's strategy s is known, that knowledge swamps the evidence o′ provides about his strategy. Suppose that knowledge of a proposition makes the probability of the proposition equal to 1 given any supposition compatible with the proposition. Imagine that K(p = 50%). Then consider UC(q = 0%, q = 0%). By Expected Utility, it equals P(p = 50%/q = 0%) UC(q = 0%, p = 50%) + P(p ≠
50%/q = 0%) UC(q = 0%, p ≠ 50%) = UC(q = 0%, p = 50%). Because the condition p = 50% is known, it replaces the condition q = 0%. Another premiss generalizes this effect of knowing an opponent's strategy s.

Knowledge: Ks > U(o, o′) = U(o, s)

Third is the principle of ratification.

Ratification: o > ∀o′ U(o, o) ≥ U(o′, o)

An option o satisfying the consequent is self-ratifying, and the principle of ratification applies when a self-ratifying option exists. It says that only a self-ratifying option is realized. Using knowledge of an opponent's strategy yields a simplification of Ratification: Ks > (o > ∀o′ U(o, s) ≥ U(o′, s)). Fourth is a premiss asserting awareness of an option's realization.

Awareness: o > Ko

An agent knows his option directly, and others know it indirectly. A deeper-going justification of Nash equilibrium than this short appendix provides derives the common knowledge that the premiss Awareness expresses from common knowledge of other sorts concerning the game and the players.

The argument for Nash strategies uses knowledge and conditionals taken with respect to the world in which the game occurs. To simplify, it omits indexing the knowledge operator K to that world and also omits indexing the corner > to that world. In counterfactual conditionals, the semantics of conditionals implicitly settles the index for an embedded occurrence of K and an embedded occurrence of >.

The argument also has some premisses about conditionals that express the effects on conditionals of distance between worlds. One premiss asserts that given an opponent's strategy, an agent adopts a best response. This conditional holds no matter how deeply embedded it is in other conditionals.

Best Response: In all contexts Column adopts a best response to Row: p = x > (q = y > ∀z UC(q = y, p = x) ≥ UC(q = z, p = x)). Likewise, in all contexts Row adopts a best response to Column: q = y > (p = z > ∀w UR(p = z, q = y) ≥ UR(p = w, q = y)).

When the conditional about Row's best response is embedded in a supposition about Row's strategy, the result is this: p = x > (q = y > (p = z > ∀w UR(p = z, q = y) ≥ UR(p = w, q = y))). To illustrate, suppose that p = 50%. Then make the further supposition that q = 100%. Because p = 50% is not a best response to
q = 100%, adding the supposition that q = 100% overturns the initial supposition that p = 50%. Although q = 100% is a best response to p = 50%, it is not a best response to the value of p it indicates, namely, p = 100%. By Best Response, p = 50% > (q = 100% > p = 100%).

A related premiss called Nearness retains a supposition about an agent's strategy if it is a best response to a further supposition about the opponent's strategy.

Nearness: For Row, p = x > (q = y > (∀z UR(p = x, q = y) ≥ UR(p = z, q = y) > p = x)). An analogous generalization holds for Column.

To illustrate, suppose that for Row p = 50%. Because that strategy is a best response to q = 50%, supposition that q = 50% retains the original supposition. By Nearness, p = 50% > (q = 50% > p = 50%).

The following proof that agents realize the mixed-strategy Nash equilibrium presents the principal inferential steps but compresses other steps. Its main step shows that only the agents' Nash strategies are self-ratifying. Hence following Ratification, the agents realize those strategies.

First, the proof establishes that each player's Nash strategy is self-ratifying. It treats Row's Nash strategy step by step, and similar steps apply to Column's Nash strategy. To show that Row's Nash strategy p = 50% is self-ratifying, one must show that ∀x UR(p = 50%, p = 50%) ≥ UR(p = x, p = 50%). Consider, for example, the alternative strategy p = 100%. Its conditional utility UR(p = 100%, p = 50%) depends on what the condition p = 50% shows about Column's strategy. Given p = 50%, K(p = 50%) by Awareness. Hence all Column's strategies have the same expected utility. Also, p = 50% > (q = 50% > p = 50%) by Nearness. So q = 50% is self-ratifying. Moreover, only q = 50% is self-ratifying. Take the alternative q = 100%. By Best Response, p = 50% > (q = 100% > p = 100%). Column's strategy q = 100% is an inferior response to p = 100%. So it is not self-ratifying. In general, alternatives to q = 50% are not self-ratifying. Therefore by Ratification, q = 50%. Thus K(q = 50%) by Awareness. So all Row's strategies have the same expected utility. Hence UR(p = 50%, p = 50%) = UR(p = 100%, p = 50%). Similarly, the equality holds when any other alternative strategy replaces p = 100%. Therefore p = 50% is self-ratifying.

Next, the proof establishes that non-Nash strategies are not self-ratifying. Consider Row's options. They reduce to three: p > 50%, p < 50%, and p = 50%. Row realizes the coarse-grained option p > 50% just in case he realizes a fine-grained option satisfying the inequality, and similarly for p < 50%. Suppose that Row realizes the option that p > 50%. Then K(p > 50%) by Awareness. Because UC(q = 0%, p > 50%) > UC(q > 0%, p > 50%), Knowledge implies that UC(q = 0%, q = 0%) > UC(q > 0%, q = 0%). Hence, by Ratification, q = 0%, as all other options are not self-ratifying. So by Awareness, K(q = 0%). Because
UR(p > 50%, q = 0%) < UR(p = 0%, q = 0%), Knowledge implies that UR(p > 50%, p > 50%) < UR(p = 0%, p > 50%). So by Ratification, it is not the case that p > 50%. Similarly, the supposition that p < 50% leads to a contradiction. So by Ratification, p = 50%. The same steps establish that q = 50%. Realization of the two Nash strategies constitutes realization of the game's Nash equilibrium.
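The proof's case analysis can be mirrored numerically. In this minimal Python sketch (the discretized grid and the helper names are my framing, not the book's formalism), Column best-responds to whatever Row's choice indicates, and Row's choice counts as self-ratifying only if it is a best reply to that indicated response; scanning Row's options leaves p = 50% as the sole survivor, as the proof concludes.

```python
def u_row(p, q):
    """Row's expected payoff in Table 6.2 with P(Up) = p and P(Left) = q."""
    return 2 * p * q + 2 * (1 - p) * (1 - q)

GRID = [i / 100 for i in range(101)]  # coarse stand-in for the continuum of mixtures

def column_response(p):
    """Column's reply once Awareness yields K(p): a best response to p, with
    indifference at p = 50% resolved to q = 50%, as Nearness dictates."""
    if p > 0.5:
        return 0.0  # Right for sure
    if p < 0.5:
        return 1.0  # Left for sure
    return 0.5

def self_ratifying(p):
    """p is self-ratifying if it maximizes Row's utility given the response
    that supposing p indicates."""
    q = column_response(p)
    return all(u_row(p, q) >= u_row(alt, q) - 1e-9 for alt in GRID)  # tolerance guards float noise

print([p for p in GRID if self_ratifying(p)])  # [0.5]
```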
7

Coordination

In noncooperative games, players may benefit from coordination. For instance, they may benefit from coordinating to realize an efficient equilibrium when several equilibria exist. Players lack opportunities for joint action in noncooperative games, however. In a single-stage noncooperative game, coordination may seem out of reach. Do any principles of rationality generate its benefits? If principles of collective rationality yield coordination, how are those principles grounded in principles of individual rationality? This chapter evaluates novel principles of individual rationality designed to facilitate coordination. It ends with a method of initiating coordination that uses only the standard principle of utility maximization. The method shows how rational players in an ideal game attain an efficient equilibrium when just one exists. That equilibrium is a solution, and realizing it is collectively rational. To simplify, the chapter treats games in which strategic equilibrium agrees with Nash equilibrium. Also, the illustrations used in this chapter are two-player games. The reasoning exhibited extends to other games, but the chapter does not elaborate the extension because its purpose is just to show how individualistic reasoning may support efficiency's selection of an equilibrium.
7.1 Strategy and Learning

To coordinate, players need beliefs about other players' acts. Some beliefs may have the form: if I do my part, others will do theirs. Such beliefs have various grounds. Players' beliefs about other players' acts arise from sources such as authority, agreements, contracts, common values, natural dispositions, conventions, and common knowledge.

Coordination comes in two types. One type is causal. A person suggests a method of coordinating. Another agrees. The first's suggestion causes the second's agreement. The two causally settle their coordination. Another type of coordination is evidential. A person foresees a partner's act and adopts an act that complements it. She uses evidence about her partner's act to achieve coordination. In cooperative games, communication and binding
agreements create causal links between strategies that yield coordination. In single-stage noncooperative games coordination arises solely from players’ evidence and their strategic reasoning. Bicchieri (1993, 1997, 2004, 2006) studies learning and behavioral evolution to explain action that on the surface appears to be spontaneous coordination. For example, underneath coordination in a single-stage game may lie a process that produces common knowledge grounding the coordination. Young’s evolutionary theory of coordination (1998) treats agents with modest rationality. He shows that, given time, they reach an efficient equilibrium (p. 114). Skyrms (1996, 2004: 95–102, 123) uses evolution and the dynamics of players’ local interaction to explain coordination. He observes that in the Stag Hunt, stag hunters seek interaction with other stag hunters. They form an association that fares better than an association of hare hunters. Lewis (1969) presents an account of coordination that appeals to convention. As he observes, precedents make some equilibria salient. They ground a person’s anticipation of others’ behavior. Precedence furnishes inductive reasons for expecting others to adhere to convention. Their adherence is a reason for the person to follow convention also. Similarly, Gibbard ([1972] 1990: 5–9, 159–66) notes that a convention of keeping agreements may give a person inductive evidence of a counterpart’s act, and reinforce the person’s inclination to follow the convention.1 This chapter treats strategic reasoning in single-stage games. It does not use repeated games, convention, or evolutionary dynamics to explain coordination or coordination’s grounds. It explains the rationality of coordination in ideal cases and does not explain coordination among humans. Explaining the rationality of coordination justifies rather than explains coordination. Evolutionary game theory may explain but does not justify participation in an efficient equilibrium. Evolution may yield individuals disposed to coordination, but rationality may discredit a disposition evolution implants. Although justification and explanation of coordination are independent enterprises, they are complementary. Justified strategic reasoning may have an explanatory role. The dynamics of an evolutionary account of coordination in repeated games may incorporate principled strategic reasoning as a mechanism. This mechanism may replace imitation of success in a neighborhood of interaction. Conversely, principles of explanation may suggest principles of justification. For instance, explanations of coordination using focal points may suggest principles of strategic reasoning that employ focal points. Moreover, conventions, evolution, and learning generate the background for single-stage games of strategy. For example, associations may shape the background for an occurrence of the Stag Hunt and the players’ strategic reasoning. As Section 5.3 notes, justification and explanation of behavior employ different methods. Coordination’s explanation varies from case to case. Its explanation may be learning in one case and evolution in another case. A general justification of coordination starts with features common to all cases, and does not explain the
various origins of those features. This chapter examines the beliefs and desires that generate an efficient Nash equilibrium and not the various ways those attitudes may arise.

A player in a game may have a cooperative disposition that affects the utility she assigns to outcomes. The disposition may be the product of evolution or character development and training. Skyrms (1996: 28) notes that a desire to be fair may yield a fair outcome in a game, but he seeks a deeper explanation that also explains the origin of the desire to be fair. Although an explanation of a fair outcome that starts with an unexplained desire to be fair is shallow, a justification of a fair outcome that starts with the same unexplained desire is not shallow. It generates a standard instrumental justification of action.2

Skyrms (2004: 50–55) reviews Lewis's case (1969) that common knowledge and salience are decision-theoretic foundations for conventions, in particular, signaling conventions. He concludes that Lewis does not explain conventions because he does not explain the origins of the salience and common knowledge on which conformity to conventions rests. If Lewis's project is justification of coordination through conventions, it may use salience and common knowledge without explaining their origins. Explaining their origins deepens explanation of coordination but does not deepen justification of coordination. Because this chapter justifies rather than explains coordination, its success does not require explaining its assumptions' realization. A justification may adopt an idealization that grants knowledge leading to coordination without exploring ways the idealization may be realized.

7.2 Changing the Rules
Some theorists modify principles of rationality to justify cooperation in games such as the Prisoner’s Dilemma. The modified principles, through constraints on utility maximization, allow agents seeking efficiency to achieve cooperation. The modified principles generate coordination, too. This section assesses the modifications. A complex theory with multiple principles typically fine-tunes each principle in light of the others. A general theory of rationality mutually adjusts principles of individual and collective rationality. Because individual rationality suffices for collective rationality, intuitions about collective rationality may motivate revisions of principles of individual rationality. Principles of collective rationality may create epistemic reasons to reformulate principles of individual rationality. Are constraints promoting cooperation warranted? Chapter 6 modestly revises standards of individual rationality to make them attainable. A principle of individual rationality’s revision should arise from such general theoretical considerations and not just a desire to sanction cooperation within groups. Rather than modify principles of individual rationality to make them more easily support cooperation and coordination, this chapter applies
them circumspectly, following the tradition of sophisticated choice. In ideal cases, utility maximization, applied circumspectly, duplicates results of constrained utility maximization. Sections 7.4 and 7.5 show that it generates coordination to achieve an efficient equilibrium. Rationality’s attainability motivates generalizing the principle of utility maximization to obtain the principle of self-support. Modifying standards of individual rationality to facilitate attaining goals of collective rationality such as cooperation and coordination lacks support unless the modifications are independently motivated. Rather than revise principles of individual rationality to make them achieve goals of collective rationality in all cases, it is better to grant excuses for failing to attain those goals if independently supported principles of individual rationality do not overcome obstacles to their attainment. This chapter’s treatment of coordination distinguishes between comprehensive and noncomprehensive principles of rationality. As Section 3.3 notes, common forms of evaluation differ according to whether they take mistakes for granted. Principles of comprehensive rationality, which demand more than utility maximization does, arise in a theory of rationality that recognizes evaluation’s variable scope. Comprehensive principles ground coordination, but have a theoretical motivation independent of their ability to ground coordination. Gauthier (1986: Chap. 5) proposes a revised principle of individual rationality, namely, constrained utility maximization, to derive the rationality of cooperation. It requires an agent to act cooperatively provided that others are similarly disposed to act cooperatively. In favorable conditions, constrained maximizers recognize their cooperative dispositions and all act cooperatively. The cooperative behavior a constrained maximizer’s disposition generates does not maximize utility, but is nonetheless rational, Gauthier claims. To support his principle, he assumes that efficiency is a standard of collective rationality and shows that rational individuals following his principle meet the standard when conditions are favorable. He shows that constrained maximization may yield cooperation in the Prisoner’s Dilemma, for example. Constrained maximizers profit from their cooperative disposition. Their disposition has more utility than its rivals have.3 Constrained maximization is insufficiently supported, as Parfit (2001) argues. Why should an agent forgo the benefit of defection in the Prisoner’s Dilemma? That a cooperative act issues from a utility-maximizing disposition is not an adequate argument for the act. The act and disposition have different consequences. The disposition is supported by its consequences, whereas the act is not. The disposition is rational, but the act it issues is not. Constrained maximization does not fit into a general theory of rationality. It is not motivated except by its ability to generate cooperation. McMahon (2001: 21–27) also proposes a novel principle designed to yield cooperation. He presents it as a principle of collective rationality, and so names it PCR. However, it directs individuals to cooperate and so counts as a principle of individual rationality. The principle urges agents to act in response to a
reconfigured payoff matrix that supports cooperation. Individuals following the principle achieve mutually beneficial cooperation in cases such as the Prisoner's Dilemma. McMahon's argument for the principle is that it yields cooperation in cases where utility maximization does not yield an efficient outcome (pp. 13–16). Efficiency is not a plausible general standard of collective rationality, and so does not adequately support PCR. The principle does not have support independent of efficiency, and so is an unsatisfactory route to cooperation. McMahon's principle may be construed as utility maximization for agents valuing cooperation (pp. 29–30). Then it is a principle of noninstrumental rationality in effect requiring agents to be disposed to cooperate. Such a requirement is not sufficiently supported by observing that agents' compliance with the principle realizes cooperation. Support requires addressing the case for contrary principles of individual rationality that tolerate uncooperative dispositions.

The next novel principle, the principle of team reasoning, facilitates coordination in games such as Hi-Lo. This two-player game has the payoff matrix in Table 7.1. There are two Nash equilibria in pure strategies, (High, High) and (Low, Low), and a Nash equilibrium in mixed strategies. The strategy profile (High, High) is an efficient Nash equilibrium, that is, a Nash equilibrium efficient among Nash equilibria. In Hi-Lo it is also efficient among all outcomes.

Table 7.1 Hi-Lo

        High   Low
High    2, 2   0, 0
Low     0, 0   1, 1

In Hi-Lo, coordination realizes a Nash equilibrium, and the superior form of coordination yields the efficient Nash equilibrium (High, High). If each player assigns equal probabilities to the other's strategies, then High maximizes utility for each. Suppose that the players initially have no reason for such a probability assignment. Does each still have a reason to adopt High? Row may put Column in a position to achieve (High, High) by selecting High. Does this yield a reason for Row to pick High? Rationality requires Row to empower Column to realize the superior form of coordination only if Column will use that power. Similarly, empowering Row is not a sufficient reason for Column to adopt High unless Row when empowered will adopt High. No sufficient reason for Row to adopt High emerges. So the case for efficient coordination founders. As Gilbert (1996: 3–6) notes, in coordination problems such as Hi-Lo, even players' common knowledge of their payoff matrix and their utility maximization does not yield the efficient Nash equilibrium.

Colman (2003) contends that game theory needs revised principles of choice to sustain coordination and presents some principles that do the job. His novel principles of reasoning are descriptive rather than normative. One principle
advances team reasoning (Sec. 8.1). Agents following it participate in the best team plan. In Hi-Lo team reasoning yields (High, High).4 Because of the difficulty of deriving cooperation and coordination from individuals' rationality, Bacharach (1999, 2006), Sugden (2000b), Gilbert (2001), and Gold and Sugden (2007a, 2007b) advance team reasoning. A team reasoner shoulders the obligation of participating in the plan that maximizes collective utility defined as a sum of individual utilities. Individuals who each follow the principle of team reasoning realize an efficient outcome. Bacharach claims that it is not irrational to adopt team reasoning. He defends its normative status. So that team reasoning does not lead to pointless sacrifice of one's interests, Bacharach (2006: 127–35) proposes restricted and circumspect team reasoning, which make adjustments for the possibility that others are not team reasoners.5

Hurley (1989: 142–59) observes that it is rational to cultivate a cooperative disposition. She advocates having a disposition to cooperate with those prepared to cooperate with whoever else cooperates. This cooperative disposition involves identification with a group and participation in collective acts for the sake of the group's interests rather than self-interest. In other words, it leads to team reasoning, or what she calls collective agency. Hurley (2003) maintains that no reason makes the individual rather than the group the unit of agency. Sugden (2000b) similarly views a team perspective as a frame for a decision. As Hurley does, he maintains that an agent may frame a decision by taking either an individual or a team perspective. However, the standards of rationality are not matters of taste. Rationality does not let agents choose either individual or team reasoning as they please. Agents cannot rationally ignore that they are individual agents. The evidence is too overwhelming. They cannot rationally lose sight of their individuality and think only of their team.

Proponents of team reasoning hold that collective rationality entails individual rationality. They say that an individual's act is rational, although not utility maximizing, if it is part of a collectively rational team act. They offer no motivation for departures from utility maximization except that those departures yield decisions that are beneficial from a collective perspective. The argument for team reasoning just points to the benefits of working as a team. Pointing to those benefits does not justify team reasoning as a method of realizing them. The main problem with team reasoning (even if restricted and circumspect) is that it instructs an agent to act contrary to his all-things-considered preferences. Acting that way is not rational in cases with stable preferences. In the Prisoner's Dilemma, team reasoning yields cooperation but does not justify cooperation and forgoing the gain from defection. Even if adopting a team perspective supports acting contrary to preferences, the argument does not show that it is rational to adopt that perspective. It does not show that it is rational to identify with a group and act to realize the profile of strategies that best promotes the group's interests. If an expanded argument were to show the rationality of those acts, then it would
ground changing preferences to support those acts rather than performing those acts contrary to preferences.

Team reasoning is also open to the converse objection. In Hi-Lo, agents who are instrumentally rational may fail to achieve (High, High). They may achieve (Low, Low) because for each agent Low is best given the other agent's act. Each player acts rationally given his circumstances. However, each player acts contrary to team reasoning. To supplement this criticism of team reasoning's rationality, Sections 7.3–7.5 explain how individual rationality leads to coordination, in particular, realization of an efficient equilibrium. Those sections show that individuals maximizing utility achieve coordination in ideal cases, where conditions favor coordination. They focus on sufficient, not necessary, conditions for efficient coordination. Showing that individualist reasoning accomplishes the objectives of team reasoning weakens the case for team reasoning.

The last novel principle this section examines targets character rather than action. It holds that agents with rational characters cooperate and coordinate. Regan (1980: 18) treats a game similar to Hi-Lo during an appraisal of utilitarian moral theory. He notes that in the game two utilitarians may realize the inferior coordination equilibrium. The pair may fail to maximize collective utility although each member maximizes collective utility. This happens in Hi-Lo if the players realize (Low, Low). The pair fails to maximize collective utility because it may realize (High, High) instead. Each player maximizes collective utility, however, because given the other player's choice of Low, choosing Low produces the outcome (1, 1), whereas choosing High produces the outcome (0, 0). To generate the superior coordination equilibrium, Regan proposes, for individuals, supplementary principles requiring a readiness to cooperate. The principles are about dispositions rather than choices. The dispositions may be the result of evolution, convention, or training. They provide a revised context for decisions rather than revised decision principles.

Zimmerman (1996: Chap. 9) takes a similar line. He strengthens conditions for an act's being morally right by adding to collective-utility maximization conditions that promote coordination. He requires of individuals nonintrusive transigence (p. 268). Reformulating Zimmerman's principle to obtain a principle of rationality, and so taking options as acts an agent fully controls, and conflating times of requirements and times of action, the principle enjoins an agent to maximize utility and to be cooperatively disposed. It is similar to the principles McMahon and Gauthier advance. However, it requires a cooperative disposition instead of revising the principle of utility maximization. Zimmerman's principle yields coordination in Hi-Lo, but does not yield cooperation in the Prisoner's Dilemma.6

A solution to a coordination problem assumes agents' comprehensive rationality and not just their rationality given their circumstances. Utility maximization is only a necessary condition of comprehensive rationality in ideal cases.
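Regan's observation that each player best-responds at (Low, Low) can be checked mechanically. A minimal Python sketch (payoffs from Table 7.1; the helper names are mine) enumerates the pure-strategy profiles at which neither player gains by a unilateral deviation; both (High, High) and (Low, Low) qualify, which is why individual best responses alone do not select the efficient equilibrium.

```python
# Hi-Lo payoffs from Table 7.1: profile -> (Row's payoff, Column's payoff).
PAYOFFS = {
    ("High", "High"): (2, 2), ("High", "Low"): (0, 0),
    ("Low", "High"): (0, 0),  ("Low", "Low"): (1, 1),
}
ACTS = ("High", "Low")

def is_pure_nash(row, col):
    """Neither player can raise her own payoff by deviating unilaterally."""
    row_ok = all(PAYOFFS[(row, col)][0] >= PAYOFFS[(d, col)][0] for d in ACTS)
    col_ok = all(PAYOFFS[(row, col)][1] >= PAYOFFS[(row, d)][1] for d in ACTS)
    return row_ok and col_ok

print([prof for prof in PAYOFFS if is_pure_nash(*prof)])
# [('High', 'High'), ('Low', 'Low')] -- two pure equilibria, only one efficient
```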
Comprehensive rationality imposes additional requirements. If rationality requires a cooperative disposition, the agents in Hi-Lo are not comprehensively rational unless they have acquired cooperative dispositions. Then comprehensively rational agents reach the superior coordination equilibrium.7 Does rationality require a cooperative disposition? Regan and Zimmerman advocate a cooperative disposition because it yields collectively beneficial cooperation. However, rationality appraises the disposition according to its promotion of its possessor's goals. Section 7.4 advances an argument for having a cooperative disposition. It concludes that certain dispositions, roughly, holding oneself ready for reasonable cooperation, are goals of rationality. Rationality requires agents to cultivate cooperative dispositions and to have them in ideal cases.

7.3 An Efficient Equilibrium

This section and the next two explore realization of an equilibrium in a coordination problem. A coordination problem has more than one equilibrium, so justifying realization of a particular equilibrium is challenging. Still, many resources are available to circumspect application of standard decision principles. This section evaluates a simple argument involving payoff transformations. Section 7.4 entertains another argument relying on agents' preparations and their evidence about each other. Section 7.5 shows that a particular type of preparation, an intention to adopt a strategy, yields coordination. The three sections together establish that a compositional theory of rationality may justify coordination. Justifying coordination does not require novel principles of rational action.

How do rational agents realize a Nash equilibrium in a game with multiple Nash equilibria? Inspection of a payoff matrix is not always sufficient to direct agents in a game to a particular outcome of the game. Take the case of the interrupted phone call in Section 5.2. Table 7.2 repeats its payoff matrix. To resume conversation, who should call, and who should wait? The game has two equivalent Nash equilibria in pure strategies (and an inefficient Nash equilibrium in mixed strategies). Rationality's application to the matrix does not yield a particular Nash equilibrium. Even with common knowledge of the matrix and the players' rationality, the players may not coordinate on a Nash equilibrium. The profile (Call, Call), for instance, is rationalizable. In ideal conditions, prescience eliminates that profile and yields a Nash equilibrium. The equilibrium emerging depends on agents' psychologies.

Table 7.2 The Interrupted Phone Call

        Call   Wait
Call    0, 0   1, 1
Wait    1, 1   0, 0

Rationality, a
disposition of agents, does not break ties. Rational agents acquire and advertise supplementary dispositions. In one realization of the game, Row takes the initiative and Column realizes that, so together they achieve (Call, Wait). Justifying the equilibrium's realization appeals to the agents' psychologies and not just to their rationality. Rationality with respect to considerations depicted in a game's payoff matrix constrains the game's outcome. Rationality with respect to additional, undepicted considerations may further constrain its outcome. Agents' psychologies may play a role in realizing a particular equilibrium. Their psychologies may narrow the set of rationally permissible profiles. They may eliminate some profiles that a game's payoff matrix permits.

To handle coordination problems, rational people take steps to ensure mutual understanding. The easiest preparation is an agreement about a method of coordination. Common knowledge arises from the agreement. All know the agreement, know that all know it, and so on. The agreement may be self-enforcing in the sense that none has an incentive to deviate. It may be binding if not self-enforcing. The disposition to honor agreements is widespread and not irrational because it yields beneficial coordination.

Are there noncooperative games in which without agreements rationality selects a Nash equilibrium? In Hi-Lo, an efficient Nash equilibrium's emergence seems less dependent on agents' psychologies than in the phone-call game. In Hi-Lo, agents may foresee the agreement that they would reach if they were to communicate. They may comply with that agreement, because of its salience, even in the absence of communication. An agreement is just a way of revealing agents' psychologies. Coordination may dispense with it given independent knowledge of their psychologies' relevant features. Communication and binding agreement are dispensable if agents imitate the acts that they would adopt if they had opportunities for communication and binding agreement.

A Nash equilibrium's salience may ground epistemic coordination to realize it. A noncooperative game with a unique efficient Nash equilibrium such as Hi-Lo has a structural focal point. A focal point, according to Schelling (1960: 54–58), is a strategy profile that is salient. Schelling holds that each player participates in a focal point's realization because she expects the other players to participate in it.

Salience and focal points do not yield coordination, Gilbert (2001), Sugden (2000b), and Colman (2003) argue. Although the efficient Nash equilibrium stands out, rational agents cannot count on each other to participate, they claim. Only evidence that the other player will participate gives a rational player reason to participate. If all players are rational, none has reason to participate. In Hi-Lo, no player has a reason to participate in a particular Nash equilibrium unless she believes that her partner will. Hence none has a reason to participate. These arguments against the power of focal points assume that agents are hyperrational in Sobel's sense.8 Sobel (1994: Chaps. 13, 14) uses the term
hyperrational for ideal agents moved only by rationality. These agents have a minimal set of basic intrinsic desires and cannot break ties. They are limited to acts for which rationality furnishes reasons. A hyperrational agent chooses an act only if the act maximizes utility. She cannot choose High in Hi-Lo without assurance that her partner will choose High. Communication aids coordination, but hyperrational players may ignore salience, signals, and even agreements. They respond only to foreknowledge of another's participation in an equilibrium.

Hyperrationality entails rationality taking circumstances for granted. It does not entail comprehensive rationality, however. It does not entail sensible preparations for decisions. A hyperrational agent does not prepare for Newcomb's Problem by taking a one-boxing pill. The pill produces a disposition to act independently of reasons. The pill's effect is contrary to an agent's hyperrationality. Similarly, a hyperrational agent does not form an intention to perform an act unless she has sufficient reason to perform the act. In Hi-Lo, a hyperrational agent adopts High if and only if the other agent does. Neither agent takes the initiative. Hyperrational agents coordinate only when rationality suffices for coordination. They miss opportunities to coordinate. Rabinowicz (1992) and Sobel (1992, 1994) describe the difficulties hyperrational players have coordinating. Do hyperrational agents foresee their difficulties, and then have higher-order reasons to change preferences so that they profit from coordination? They have those reasons for change, but the characterization of hyperrationality prevents preference changes responding to those reasons. In Hi-Lo, an agreement to realize (High, High) is self-enforcing among rational agents, but hyperrational agents have no reason to honor the agreement. Self-enforcing agreements require a tendency to honor agreements that hyperrational agents lack. Rationality lets factors such as salience select a profile from the set of equilibria a payoff matrix generates. It is not rational to be moved only by rationality, and so being hyperrational is not being fully rational.

Humans are not hyperrational, and the ideal agents this chapter treats are not hyperrational either. They can break ties and may have a basic intrinsic desire that rationality does not demand, such as a desire for chocolate. Unlike a hyperrational agent, a rational ideal agent may adopt an act not because of its responsiveness to others but because its adoption prompts others to respond to it. The agent may form an intention to act because of the intention's good consequences. Ideal agents have the human capacity to act according to an intention and to initiate coordination. Although a hyperrational agent in a group of hyperrational agents has no reason to adhere to conventions, a rational ideal agent in a group of rational ideal agents may have reasons to adhere to them. To coordinate, rational ideal agents may use agreement, signals, convention, cultural and structural salience, and association with like-minded agents. Anything affecting an agent's expectations of other agents' behavior may work. A fully rational agent anticipates being in coordination problems and prepares for them. She may initiate coordination. She acquires a disposition to follow
conventions, agreements, and so on if it is not a part of her genetic endowment. The disposition solves coordination problems when others share it and may have safeguards to reduce costs when they do not share it. Societies encourage coordination by controlling agents’ relevant psychological features. For example, a society may generate a convention, or promote a disposition to choose a certain way in a coordination problem. Societies of rational agents use opportunities to promote coordination. Putting aside hyperrational agents, how may rational ideal agents reach the efficient equilibrium in Hi-Lo? Bacharach (1999, 2006), Colman (2003: Sec. 5.6), and Sugden (2000b) contend that individualistic reasoning does not yield the efficient Nash equilibrium. They contend that even individual desires to be cooperative fail to yield coordination. That is, even transformations of the payoff matrix to reflect team spirit fail to yield (High, High). Their argument uses Regan’s point (1980: Chap. 7) that no common, utilitarian payoff function over profiles yields (High, High). Common interests do not give an agent a reason for High unless the other agent has a reason for High. Common interests do not ensure coordination. Colman, for example, presents a bloated version of Hi-Lo in which utilities are uniformly increased in response to a cooperative disposition. Colman’s bloated version of Hi-Lo adds concern for the group but not in a way that yields (High, High). Concern for others may yield various payoff transformations, however. One type of concern for a group may raise a cooperative act’s utility in every eventuality. Team spirit may enhance all outcomes of a team-oriented act because an agent wants to participate in the best team act. An agent in Hi-Lo, for instance, may have team spirit. Seeing that (High, High) is best for the team, she may want to do her part in that profile. This desire may transform payoffs so that High strictly dominates Low for that player. Then individualistic reasoning yields High for her. The other player may reason the same way so that (High, High) emerges.9 Sugden (2000b: 191, 197) claims that team preferences are independent of members’ preferences. He does not take outcomes broadly so that they include acting out of team spirit (p. 200). However, identification with a group and acting to promote its interests may be rational. A rational individual may care about a group’s interests. Then her promoting the group’s interests may agree with her promoting personal utility. Principles of individual rationality may justify acting out of team spirit. Team players may rationally commit to collective acts. Complying with those commitments is rational if it is not contrary to preferences. It is rational when the commitments transform preferences so that acting in accord with the commitments agrees with preferences. People are social beings, and their sociality influences their preferences among acts. Team spirit influences agents’ preferences among acts. They do not act contrary to their preferences when they participate in joint acts out of team spirit. If the members of a team want to advance the team’s goals, their personal utility assignments promote cooperative action. In Hi-Lo individualistic reasoning
yields (High, High) if each player desires to do her part in the profile best for the team. For team-spirited agents, a relevant consequence of High is expression of team spirit. The payoff transformation that makes High dominant adopts a broad view of an act's consequences that recognizes such consequences. Some theorists hold that a broad view of consequences makes utility maximization trivial. Any act is utility maximizing according to some assignment of high utility to specially selected fine-grained consequences. However, broad utility-maximization is attentive to an agent's actual utility assignment. It puts aside fictitious assignments that rationalize choices. An agent has to care about team-spiritedness for it to merit a place among an act's relevant consequences. Not every choice maximizes utility with respect to the agent's actual utility assignment. Broad utility-maximization is not trivial.

Team spirit may justify (High, High), although rationality does not demand team spirit. Sophisticated team spirit's effect on payoffs may be subtler than making the team-spirited act dominant. A payoff transformation that makes High dominant has bad features. It makes an agent's act unresponsive to his counterpart. A better payoff transformation is attentive to expectations about his counterpart. Also, team spirit offers a shallow explanation of (High, High)'s realization. A deeper explanation explains an agent's team spirit and its effect. It may show that evolution as well as character development prepares agents for cooperation and coordination.

Altruism and team spirit may motivate rational individuals in games such as the Prisoner's Dilemma and Hi-Lo. These motives may affect utility assignments and convert those games into ones with efficient solutions. Although payoff transformations show that individual rationality may yield coordination, they do not yield solutions to coordination problems such as Hi-Lo. The payoff transformations change the character of the problem. After payoff transformations, Hi-Lo has a single equilibrium. Its realization does not solve the original coordination problem in which two equilibria exist. Solving the original coordination problem requires a different approach. Sections 7.4 and 7.5 take a new tack.

7.4 Preparation

Rational preparation for a game supplements rational choice within a game. It includes acquisition of dispositions that facilitate coordination. This section shows how agents may solve coordination problems by preparing for them. A rational ideal agent may acquire and advertise a disposition to initiate coordination.

A solution is a strategy profile realized if agents are jointly rational in a comprehensive sense. Because a solution assumes comprehensive and nonconditional rationality, the case for a solution does not take any mistakes for granted
or narrow rationality’s scope. An inefficient form of coordination may follow from acts that are rational granting the agents’ lack of preparation for their coordination problem. The acts producing it may be rational in a noncomprehensive sense that does not evaluate preparation for the coordination problem. However, ideal agents who are comprehensively rational are fully rational prior to their game (in all relevant matters). They enter their game with rational beliefs and desires and also rationally prepare for it. As this section argues, in an ideal coordination problem, comprehensively and so fully rational acts yield an efficient form of coordination. Rational people make commitments when that is profitable. An agent’s commitment to perform an act is a constraint on behavior that the agent imposes to induce performance of the act. A rational person may commit herself to performing an act without knowledge that the act maximizes utility. A rational commitment may require an act not supported by reasons at the time of the commitment. Utility maximizing contracts and promises sometimes issue acts that do not maximize utility. They are rational gambles. A rational commitment may require an act if some condition obtains, and the occasion for the act may never arise. A person may profit from making the commitment without the cost of performing the act. A commitment to irrational retaliation may be an effective deterrent. Its benefit may not require retaliation. In general, a rational person may form a disposition to choose a certain option because of the disposition’s consequences rather than because of the choice’s consequences. Preparation for coordination may introduce a commitment to an act that initiates coordination. Suppose that in Hi-Lo Row is committed to High. Column knows this. She then adopts High also because that maximizes utility. Row’s commitment is rational to acquire. Row knows that Column will respond to his commitment by performing High. Still, to get the ball rolling, he must commit himself to High independently of Column’s choice. He commits to doing his part in the efficient Nash equilibrium without regard for his partner’s act. The commitment is rational even if it yields an irrational act in the counterfactual circumstances in which Column fails to infer Row’s commitment and to respond appropriately. It yields a rational act in the environment it creates. Row knows that his commitment to High without regard to its maximizing utility ensures that it maximizes utility. His commitment brings benefits without ever yielding an act contrary to utility maximization.10 Moreover, his commitment’s disregard for Column’s choice need not show disregard for reasons for High. In coordination problems, an agent does not know that an act is optimal unless he knows the response to it. Given ignorance of the response to the act, the act may not have a utility assignment. However, the act may maximize utility under a quantization of beliefs and desires. Then it is rational. Row may rationally forgo High, but also rationally perform High if he is in the dark about Column’s response. A commitment to perform High may be rational despite ignorance of the act’s optimality.
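The commitment reasoning admits a simple rendering. In this minimal Python sketch (the framing and names are mine), Row evaluates each act he might commit to on the assumption that Column, inferring the commitment, best-responds to it; commitment to High is what maximizes Row's payoff, and it initiates (High, High).

```python
# Hi-Lo payoffs: profile -> (Row's payoff, Column's payoff).
PAYOFFS = {("High", "High"): (2, 2), ("High", "Low"): (0, 0),
           ("Low", "High"): (0, 0), ("Low", "Low"): (1, 1)}
ACTS = ("High", "Low")

def column_best_response(row_act):
    """Column, inferring Row's commitment, maximizes her own payoff against it."""
    return max(ACTS, key=lambda c: PAYOFFS[(row_act, c)][1])

def value_of_commitment(row_act):
    """Row's payoff when he commits to row_act and Column responds to it."""
    return PAYOFFS[(row_act, column_best_response(row_act))][0]

best = max(ACTS, key=value_of_commitment)
print(best, value_of_commitment(best))  # High 2 -- the commitment yields (High, High)
```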
A rational agent may commit to participation in an efficient equilibrium because he knows his partner will respond with participation in that equilibrium. His partner need not observe his act before acting herself. She may infer his act from knowledge of his rationality and the rationality of his adopting a commitment to that act. Row commits to his part in the efficient Nash equilibrium without independent knowledge of Column’s part. Given his commitment, he expects her part. The players’ inferences are mutually responsive although neither’s move causally influences the other’s move. The rationality of a commitment to an act depends on the commitment mechanism. Rational ideal agents utilize a rational commitment mechanism to prepare for coordination problems. Which mechanisms are rational? Suppose that agents are prescient. In a coordination problem, multiple strategies are self-ratifying. Coordination follows from an additional tie-breaking principle, namely, the principle to maximize self-conditional utility among strategies that are self-ratifying. Because one agent’s choice is good evidence of the other’s choice, in Hi-Lo the principle of ratification recommends both High and Low. The tie-breaking principle recommends High because the payoff from High given High’s adoption is greater than the payoff from Low given Low’s adoption. A comprehensively rational agent adopts a choice disposition in anticipation of a coordination problem. The disposition to comply with the tie-breaking principle maximizes utility. Hence, a comprehensively rational agent complies with that principle. In an ideal coordination problem, comprehensively rational agents know that each follows the tie-breaking principle. They therefore have a disposition to participate in the efficient Nash equilibrium. This disposition is a rational commitment mechanism.11 A comprehensively rational agent forms a disposition to participate in (High, High), and two comprehensively rational agents participate in (High, High). Each knows that the other will participate. In ideal games, agents know that they are comprehensively rational. In an ideal version of Hi-Lo, they know that each has a disposition to participate in (High, High). This knowledge gives each a reason to adopt High. The act the disposition prompts is not irrational, and knowledge that the other agent has the disposition furnishes a reason for the agent to follow the disposition. The disposition is rational to form and to maintain. It yields a rational act in ideal games with other comprehensively rational agents. Participation in Hi-Lo’s efficient Nash equilibrium is comprehensively rational for each agent. Comprehensive rationality yields coordination. Realizing an efficient Nash equilibrium is a requirement of joint rationality in ideal noncooperative games. Comprehensively rational ideal agents in ideal conditions each maximize utility with respect to dispositions they adopt in advance and together achieve an efficient Nash equilibrium. Culturally shared dispositions to participate in structural focal points provide another rational commitment mechanism. In cooperative games, players may achieve efficient coordination by agreeing on a plan for joint action. In
noncooperative games they may evidentially coordinate through strategic reasoning that yields knowledge of the agreement they would reach if they could communicate and act jointly. Suppose that in an ideal version of Hi-Lo an agent announces, “I’ll do High if you do High.” The announcement conveys no information. Prior to it, each agent knows the other will do High if he does High. Similarly, the announcement, “I’ll do High” conveys no information. Prior to it, each infers that the other will do High. Without announcements, they act as if the announcements had been made. The equilibrium the hypothetical agreement yields is a focal point. The agents may have dispositions to do their parts in that equilibrium and know that they do. The disposition is rational to have in a society that shares it. The acts it issues are rational in such a society. Comprehensively rational agents with common knowledge of their comprehensive rationality may count on each other to do their parts in the hypothetical agreement. The game Hi-Lo is embedded in a larger game with moves preparing for Hi-Lo. The payoff matrix of Hi-Lo discloses the preparations of comprehensively rational ideal agents. It contains enough information to infer any ideal realization’s solution. When a single-stage game has a unique efficient Nash equilibrium, ideal agents evidentially coordinate to realize it. They prepare for coordination by controlling their pursuit of incentives. They have a common interest in achieving an efficient Nash equilibrium and control their incentive structure so that their pursuit of incentives leads to that equilibrium. A strategy profile in a noncooperative game is a collective act evaluable for rationality. If a group realizes a Nash equilibrium but not an efficient Nash equilibrium, then it fails to realize a solution and in ideal conditions is not comprehensively rational. Consequently, some member is not comprehensively rational. Collective rationality does not demand cooperation in noncooperative games. It does not ask agents to forgo a benefit for the sake of the common good when they cannot communicate and cannot take steps to prevent exploitation. It asks agents to coordinate, however. They can do that using their information about each other. Decision preparation may solve a one-shot coordination problem by cultivating and advertising dispositions for choices that yield a unique efficient equilibrium. It is not irrational to do one’s part in the equilibrium, and an inclination to do one’s part rationally initiates coordination. Decision preparation settles equilibrium selection prior to a game. It supplants team reasoning and similar principles of coordination. Novel principles of reasoning designed to yield coordination are unnecessary. General utility maximization yields coordination when collective rationality demands it. Collective rationality’s compositionality ensures that individual rationality meets its demands without radical revision of principles of individual rationality.
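The tie-breaking principle of this section also admits a compact illustration. In this minimal Python sketch (my own rendering of the principle, with illustrative names), prescience makes a player's choice evidence that the partner matches it; both High and Low are then self-ratifying, and maximizing self-conditional utility among the self-ratifying options selects High.

```python
# One player's Hi-Lo payoffs; the game is symmetric.
PAYOFF = {("High", "High"): 2, ("High", "Low"): 0,
          ("Low", "High"): 0, ("Low", "Low"): 1}
ACTS = ("High", "Low")

def indicated_response(act):
    """With prescience, choosing an act indicates that the partner matches it."""
    return act

def self_ratifying(act):
    """An act is self-ratifying if it is a best reply to the response it indicates."""
    other = indicated_response(act)
    return all(PAYOFF[(act, other)] >= PAYOFF[(alt, other)] for alt in ACTS)

candidates = [a for a in ACTS if self_ratifying(a)]  # both High and Low qualify
choice = max(candidates, key=lambda a: PAYOFF[(a, indicated_response(a))])
print(candidates, choice)  # ['High', 'Low'] High -- tie-breaking selects High
```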
7.5 INTENTIONS

Dispositions yielding coordination may be rational because of their effects. They form one type of rational commitment mechanism. Intentions form another type of rational commitment mechanism. The standards for an intention's rationality demand more than the standards for a disposition's rationality. Unlike a rational disposition to perform an act, a rational intention to perform an act rests on reasons to perform the act. Its rationality does not rest solely on the intention's effects.12 Intentions are commitments an agent controls more easily than dispositions. Acquiring a disposition may take long training, whereas an agent may form an intention instantaneously. Because of an agent's control over intentions, obtaining coordination from them requires only the standard principle of utility maximization. Reasoning from intentions to coordination does not require the principle of ratification or the tie-breaking principle. In a coordination problem, self-conditional utility maximization among self-ratifying options reduces to utility maximization in forming an intention to initiate coordination and in carrying out the intention. To begin, this section reviews some general points about intentions and their role in deliberations. An agent who intends to perform an act commits himself to performing it. From rationality's viewpoint, his commitment is revocable, of course, because of new relevant information or revision of basic goals. So it is a conditional commitment. It is a commitment to act provided that relevant circumstances are constant. Even a nonconditional intention to act is a conditional commitment from rationality's perspective. The grounds of the nonconditional intention may change. A smoker may nonconditionally intend to quit smoking but may reasonably rescind that intention given new unexpected information that smoking is harmless. People benefit from forming intentions to perform useful acts because an intention to perform an act increases the act's objective probability, as Joyce (1999: 60) notes. Bratman (1987: 22–23) observes that if a rational agent has an intention to act, that intention gives him a reason to act. To illustrate, Bratman imagines a choice of highways tied in utility. Intending to take one breaks the tie. The intention creates a new reason to take that highway. Rabinowicz (personal communication, May 20, 1998) imagines a sleepyhead in bed who forms an intention to rise on the count of ten. The extra reason the intention provides makes a difference, and the sleepyhead rises.13 Does the extra reason an intention provides conflict with utility maximization? The examples show that an intention to perform an act may furnish an extra reason for the act, but they do not show that conforming with the intention contravenes utility maximization. The intention generates a desire to perform the
act and so increases the act’s utility. Because utility maximization responds to the intention’s effects, the extra reason is compatible with it. Forming intentions to act prevents vacillation between two acts equally good. Buridan’s ass is indifferent between two piles of hay. Forming an intention to go to a particular pile breaks the tie and ends vacillation. Forming intentions to act also prevents vacillation between two incomparable acts. Abraham does not and cannot compare obeying God and sparing his son. If he spares his son, he nonetheless loses his son’s trust if he vacillates. Vacillation is costly. Forming an intention to act prevents vacillation not by breaking a tie but by making revocation incoherent absent new reasons.14 An ideal agent, although without cognitive limits, may profit from an intention to perform an act. As for humans, the intention may increase the act’s objective probability, furnish a reason for the act, and halt vacillation. Because an ideal agent is aware of her mental states and their consequences for acts she controls, an intention boosts an act’s subjective probability when it boosts the act’s objective probability. Chapter 2 imagines perfect agents whose beliefs and desires may lead to an act without the intermediary of an intention. Even a perfect agent, who may act in a utility maximizing way without deliberating and deciding, may profit from forming an intention to act. The intention may prompt coordination among perfect agents. This section explains how an intention may instigate coordination in an ideal game. An intention to perform an act regardless of the act’s consequences may have good consequences. Nonetheless, the intention may be irrational because the act insufficiently motivates the intention. A rational intention to perform an act requires reasons for the act, not just reasons for the intention.15 Showing that comprehensively rational ideal agents may use intentions to instigate coordination requires showing that the intentions are rational to form, hold, and fulfill. A rational ideal agent in Hi-Lo may form an intention to perform High. Forming the intention has good consequences provided that the other player infers it. The intention must meet cognitive standards, however. There must be reasons for the act intended. Given the intention, there are reasons for the act intended. The act has good consequences. The other player meets it with High. The act does not promise good consequences prior to the intention, but after formation of the intention it does promise good consequences. The intention effects the other player’s act. The other player knows of it in an ideal version of the game. May it be rational to form the intention as well as to maintain and execute it? If Row’s direct knowledge does not warrant assigning probabilities to Column’s acts, Row’s performing High may nonetheless maximize utility under a quantization of his beliefs and desires. In that case performing High is rational, and forming an intention to perform High is rational, too. The intention serves to break a tie.
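A worked version of this point, in notation introduced only for illustration: let acts A and B be tied in utility, let i_A be the intention to perform A, and let δ be the positive increment in A's utility from the desire the intention generates.

```latex
u(A) = u(B) \quad \text{before the intention is formed;} \qquad
u(A \mid i_A) = u(A) + \delta > u(B) = u(B \mid i_A), \quad \delta > 0.
```

Once the intention is formed, performing A uniquely maximizes utility, so acting on the intention accords with, rather than contravenes, utility maximization.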
In addition, the intention generates positive support for itself. The intention's self-support derives from Row's anticipation of the intention's effects. The process resembles support that pragmatic reasons provide for beliefs. An athlete may inculcate confidence that she will win her match. To be rational, her belief must also be epistemically justified. Suppose that she knows that confidence in victory makes her play well enough to win. She forms the belief for pragmatic reasons, but holding it is epistemically justified because having the belief raises its prospects of being true. The prophecy that she will win is self-fulfilling because of its effects on performance. A rational person may form a belief for its pragmatic value if having the belief is rational because of its self-supporting character. She does not need supporting evidence prior to the belief's formation if the belief provides its own support, as Joyce (2007: 558–59) notes. The same is true of an intention to perform High in the game Hi-Lo. It has cognitive support once formed because of its effect on one's counterpart. A player's act does not causally influence other players' acts in a single-stage noncooperative game. Others may, however, have foreknowledge of his act. This foreknowledge may causally influence their acts. All may know that at least one player is an instigator of coordination. Row gives himself a reason to perform High by forming the intention to perform High, assuming knowledge that Column knows of his intention. The players in an ideal version of Hi-Lo may have common knowledge that at least one player is an instigator of coordination. Then they achieve (High, High). The intention to perform High has better consequences than the intention to perform Low, but the reasons for acts do not favor High over Low at the outset. The reasons favoring an intention to perform High arise from the intention's rather than the act's consequences. However, forming the intention to perform High furnishes new reasons for the act intended. Knowledge of the intention gives a player's partner a reason to perform High. Her performing High then gives him a reason to perform High. An agent's intention to perform High does not yield a decisive reason to perform High unless his opponent knows of it and so will perform High herself. This happens in an ideal game. The intention increases the probability of High, and in an ideal game an increase is all it takes to get the ball rolling. One player's increasing the probability of High grounds the other player's increasing it, too, until repeated increases lead each player to settle on High. The intention's formation arises from its consequences, but the act's performance arises from its own consequences given the intention's formation. Forming, maintaining, and executing the intention to perform High each maximize utility. A rational ideal agent in an ideal version of Hi-Lo may give herself a reason to perform High by intending to perform it. Bratman (1987: 24–27) warns against bootstrapping intentions. He observes that merely forming an intention to perform an act does not yield a reason to
perform the act. The intention requires a grounding in reason. Irrational formation of an intention to perform an irrational act does not convert the act into a rational act simply because the act then conforms with the intention. However, an intention to perform High in Hi-Lo is not an irrational intention and may provide a reason to perform High. Forming the intention changes circumstances so that performing High maximizes utility. The intention is not defective, so the act it supports is well-grounded. This chapter and Chapter 6 show that rational agents may coordinate to realize a particular equilibrium in a noncooperative game. Although rationality's attainability sanctions some revisions of common principles of rationality, coordination problems do not motivate additional revisions of those principles. Concrete coordination problems have features that payoff matrices do not represent. Comprehensively rational agents have traits conducive to efficient forms of coordination. In ideal conditions, they solve coordination problems. Chapter 8 begins a study of cooperative games. In those games, agents are better positioned than in noncooperative games to attain goals of collective rationality.
8
Cooperative Games
A joint act is a type of collective act. A theory of collective rationality evaluates joint acts that arise in cooperative games. It thoroughly covers cooperative games. This chapter explicates the difference between cooperative and noncooperative games, and introduces some convenient terminology that is summarized in Figure 8.1 and Table 8.3. Its presentation of cooperative games attends especially to a type called coalitional games. Subsequent chapters present an account of collective rationality in coalitional games, and use it to refine solutions to those games.

8.1 JOINT ACTION

A bargaining problem is a cooperative game. Two bargainers act jointly if they agree on a price for an item at a garage sale. Each benefits from the exchange they accomplish jointly. The problem of forming political coalitions is another cooperative game. A bloc of voters may agree to vote for the same candidate. Their coordination improves the chance that their votes decide the election. In cooperative games, the players may realize joint acts such as agreements. Subsequent sections elaborate this account of cooperative games. This section explains joint action, how it arises, and why opportunities for it affect solutions to games. Its characterization of joint action is close to ordinary usage, and to usage in game theory, but aims primarily to advance a theory of collective rationality. People may act together to perform a joint act. For example, two people singing a duet perform a joint act. As I use the term, a joint act requires coordination. Some collective acts are not joint acts because they do not involve coordination. Suppose that each member of a group of 1000 people donates $1, so that the group donates $1000. The group performs a collective act, but, if its members do not coordinate, it does not perform a joint act. Coordination is harmonious collective action. It is a combination of complementary acts, perhaps, but not necessarily, involving mutual adjustment of acts. Some coordination is collaborative, say, carrying a table together. Agents who
collaborate act in concert to obtain a goal of each, perhaps a common interest. Not all coordination is collaboration. Boxers coordinate to conduct a match. Their coordination resembles a dance. They are mutually responsive. One retreats and the other advances. They do not collaborate, however. They fight. Their collective movement is not a goal of each. Although both want the match, the loser does not want its outcome. Some coordination is cooperative. The members of a club paying dues to fill the treasury both coordinate and cooperate. Cooperation is collective action achieving a goal of each participant and also imposing costs on participants. Suppose that two people can achieve some benefit working alone but can achieve a greater benefit working together. If each is sure that his partner will not exploit him, and the two work together, then they cooperate. A cooperator, if others fail to join him, pays a premium above the cost of not cooperating. So cooperation may require assurances of others' participation.1 Coordination is a common means to cooperation, but cooperation may occur without it. For example, the members of a group without coordination may all give to charity. Despite lack of coordination, they cooperate to help the needy. In cooperative games, however, cooperation is joint action and so involves coordination. Collaboration does not require resisting temptation. An orchestra's members are not tempted to deviate from their collaboration to perform a symphony. In contrast, cooperators pay a premium, and members of a group may be tempted not to pay it. When neighbors gather to clean their park, one may loaf while others work. Because free riding on the cooperative acts of others has benefits, feigning cooperation, unlike feigning collaboration, may be profitable. Communication and agreements generate behavioral expectations that ground collaboration. Collaboration easily occurs without a binding contract, whereas cooperation often requires one. A binding contract attaches penalties to acts in violation of the contract. An individual conforms because of a penalty for breach, but agrees because of a gain from the agreement. The enforcement mechanism may be an internal moral code rather than an external police. A binding contract's point is just to create expectations about behavior. With a contract, one agent knows that another will act if he does. If he fulfills his part of the contract, then the other will in response fulfill her part. Two farmers may contract to help each other with the harvest. Each knows that the other will help if he does. Collaboration and cooperation yield a collective act (propositionally individuated) that participants typically want. Each may want their collective act as a means to a personal end and may want another collective act even more. Desires hold with respect to reference points. For example, a hungry diner wants each item on the menu compared to having none, but does not want his last choice compared to having his first choice. The reference point for desires is lower in collaboration than in cooperation. Collaborators want their collective act taking
a failure to collaborate as the status quo. Cooperators want their collective act taking the content of their aspirations as the status quo. Consequently, competition prevents cooperation but not collaboration. For example, bargainers collaborate to set a price but do not cooperate to set it. Each wants the price given no sale as the status quo, but each also wants a more favorable price, and not the sale price, given as the status quo the price he aspires to set. I take coordination, collaboration, and cooperation as success terms. People may desire these types of collective action and be aware of achieving them, but the desire and awareness are not essential. People may coordinate by all turning on their televisions at 8 PM for a presidential debate. In an economic market they may collaborate to achieve efficiency as if directed by an invisible hand, while in fact each pursues self-interest. Donators may cooperate to support a charity even if ignorant of their collective act, and not acting because of a desire to achieve it. Although a common goal often motivates coordination, collaboration, and cooperation, acting for a common goal is not necessary for these types of collective action. Joint action is coordination. However, there are two types of coordination, and just one type characterizes joint action. Coordination occurs when the acts of individuals are correlated. An act may be correlated with another either causally or noncausally, so coordination may be either causal or noncausal. Suppose that all cheer at midnight on New Year’s Day. This common response to a common temporal observation is a noncausal form of coordination. Suppose that two people push a car together because of their communication with each other and their observation of each other. This is a causal form of coordination. For noncausal coordination, people need not monitor the acts of others. One person’s act need not cause another’s act. A person may anticipate another’s act without causally responding to the act. Without observing others or communicating with others, each person may know enough about the others’ acts to coordinate. People may respond to a signal instead of to each other. They may know that others respond to the signal. Their acts may have a common cause. A joint act, as I define it, requires causal coordination. The causal influence among agents may be mutual or just in selected directions. Assuming that an orchestra’s conductor is not a member of the orchestra, if the orchestra coordinates by following its conductor and not through causal interaction of its members, then its members act collectively but not jointly. For example, their starting together is not a joint act.2 Culture facilitates noncausal, evidential coordination. It promotes a common way of thinking. Similarity of thought creates focal points and thereby aids anticipation. Culture also generates conventions that enable agents to anticipate others. Evidential coordination is possible in a simultaneous-move noncooperative game, although agents do not causally interact. Each agent infers the others’ acts. His anticipation of their acts guides his act. Such anticipation may lead to participation in a Nash equilibrium. In realizing a Nash equilibrium agents
coordinate. An agent's act depends on beliefs about the others' acts. However, agents do not causally coordinate. An agent's act does not cause the others' acts. Realization of the Nash equilibrium is not a joint act because the coordination it involves is evidential. Joint action involves causal coordination, but that is not enough to characterize it. In a sequential noncooperative game with perfect and complete information, such as Chess, players causally interact. A player's move at a stage responds to players' moves at earlier stages. Typically, one player wins and the other loses. The players' moves do not constitute a joint act. They do not warrant application of standards of collective rationality such as efficiency. As I use the term, joint action is causal coordination that meets a goal of each participant, for example, striking a bargain. It is collaboration and not just causal coordination. Figure 8.1 displays types of collective act. The types beneath a type do not entail the type but just indicate varieties of the type. For example, cooperation may be a type of coordination, but cooperation may also occur without coordination.

Collective act
  Not coordination
  Coordination
    Noncausal
    Causal
      Not joint action
      Joint action
        Noncooperative
        Cooperative

FIGURE 8.1 Types of collective act.

How do joint acts arise? Everything crucial for a rational agent's act boils down to beliefs and desires. The origin of an agent's participation in a joint act lies in the agent's beliefs and desires. The crucial element is an agent's beliefs about other agents' acts. An agent in a game acts according to his prediction of others' acts. Prediction is an evidential matter. It may arise in a single-stage noncooperative game from agents' knowledge of other agents' characters. Communication affects agents' knowledge about each other. An agent may communicate to others an intention to perform an act or reveal information about her preparations for interaction with others and, in particular, whether she has a cooperative disposition. Communication allows bargainers to reveal facts about themselves that others cannot predict given only the structure of their bargaining problem. This revelation influences one agent's expectations about another's behavior. Communication may involve nonlinguistic signals as well as language. In a sequential noncooperative game, agents reveal their acts in the
game’s stages. Observation yields knowledge of other agents’ acts. It affects an agent’s beliefs, and so his acts. Communication may create dependencies among acts and the causal coordination characteristic of joint action. Communication is transmission of information. It is otiose for perfect predictors. It matters only if some agents are opaque and may reveal relevant information about themselves. An agent with information to reveal anticipates responses to it. She may communicate a disposition to reciprocate cooperative acts and thereby initiate cooperation. Communication informs agents about strategies planned and yields agreements about joint strategies. The upshot of effective communication and agreements is knowledge of others’ acts so that one may adjust one’s own act to theirs and realize a joint act. Groups often have authority structures, and members follow their leaders. Joint action may arise from leadership. For example, conferees may follow a leader to a cafeteria table for lunch. Joint action may arise without leadership, however. It may arise from an agreement that makes one act depend on another. One player’s entering the agreement may convince another that he will do his part in the joint act. So the second player does his part in the joint act. The agreement itself may be a joint act. One agent’s offer elicits another’s acceptance. The acceptance is a causal response to the offer so that the agreement itself is causal coordination. Bratman (1999: 16, 52–53) observes that plans promote coordination. Using plans, a group coordinates its members’ acts. However, coordination may arise without plans. Suppose that two people reach a door at the same time. One goes through it while the other waits. That coordination may occur spontaneously. It may arise from a common understanding of the situation and not from an agreement. It need not follow a plan the pair adopts. In fact, coordination cannot always be the result of a plan. A group’s adoption of a plan is itself coordination and cannot always be the result of another plan without an infinite regress of plans. An agreement creates a network of causal influence. Because of an agreement, one agent’s adoption of a strategy may precipitate another’s adoption of a complementary strategy. For example, two players may agree that if one does his part in a joint strategy, the other will do her part. Hence, the first person’s participation causes the second’s participation.3 An agreement is typically a joint act, but acting because of an agreement need not be a joint act. One agent’s act does not causally influence another’s if both act only because of an agreement. An agreement may be a common cause of agents’ acts. The causal coordination necessary for a joint act may be missing. Agents act to cause expectations of correlations among their acts, but some correlations they create are noncausal. Whether a collective act is a joint act depends on the proposition that individuates the collective act. If the collective act includes the agreement, it may count as a joint act. If it includes only the agreement’s effects, it
may not count as a joint act. Not every collective act caused by a joint act is a joint act. Otherwise, an agreement about strategies prior to a single-stage noncooperative game makes playing the game a joint act. How do opportunities for joint action affect solutions to games? Opportunities for joint acts are opportunities to causally influence others’ acts. The additional opportunities make a dramatic difference. Opportunities for joint action affect rational behavior for individuals. Agreement changes expectations concerning others’ behavior when rationality is not enough to predict behavior. The expectations it generates may tip the balance between two equally good Nash equilibria in a coordination problem such as the interrupted phone call. Suppose that Hi-Lo is transformed by adding possibilities for communication and agreement. If able to communicate, players may not need common knowledge of their game and their rationality to achieve an efficient Nash equilibrium.4 An ability to make binding contracts transforms Hi-Lo into a tractable cooperative game. A binding agreement informs agents about other agents’ strategies. That foreknowledge makes utility maximization support participation in a coordination equilibrium. All know that all do their parts in the joint strategy. An agent’s foreknowledge of his own choice is indirect and based on probabilities of others’ choices. He assigns those probabilities in light of the agreement reached. The agreement provides evidence that swamps an agent’s evidence under assumptions concerning his own choice. An agent’s nonconditional probabilities of others’ choices replace probabilities of others’ choices conditional on his choice. Agreement breaks the circle of strategic reasoning. Suppose that opportunities for binding contracts modify the Prisoner’s Dilemma. Suppose also that the resulting cooperative game is ideal. In addition to agents’ being prescient, rational, and informed about their circumstances, conditions are perfect for communicating and forming binding agreements. For instance, contracts incur no transaction costs. These changes influence application of standards of rationality. Rationality recommends an agreement to cooperate. No agent should settle for less than he can obtain without cooperation. So none should settle for an agreement in which he receives the worst of all possible outcomes. Therefore only two agreements are viable, namely, universal cooperation and universal noncooperation. Because universal cooperation is better for both agents, they should agree on it. Some player should propose a binding agreement to cooperate, and the other player should accept.5 At the moment of the Dilemma, the agents’ cooperative acts are causally independent, and the combination is not a joint act. However, their cooperative acts have a prior common cause, namely, the agreement, which is a joint act. One may count the agreement and its result together as a joint act of cooperation. The opportunity for joint action elevates standards of individual rationality. In ideal conditions for joint action, individual rationality yields efficiency.
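The argument just given for agreeing on universal cooperation can be put in a few lines of code. A minimal sketch, assuming Python and the Prisoner's Dilemma payoffs the text elsewhere cites from Table 5.1, namely (2, 2) for joint cooperation, (1, 1) for joint noncooperation, and (3, 0) and (0, 3) for the asymmetric outcomes; the variable names are mine.

```python
# Possible agreements in the cooperative revision of the Prisoner's Dilemma.
outcomes = [(2, 2), (1, 1), (3, 0), (0, 3)]
fallback = (1, 1)  # what each agent obtains without any agreement

# No agent settles for less than he can obtain without cooperation.
viable = [o for o in outcomes if o[0] >= fallback[0] and o[1] >= fallback[1]]

# Rational agents also reject a viable agreement that another viable
# agreement betters for both of them.
agreed = [o for o in viable
          if not any(p[0] > o[0] and p[1] > o[1] for p in viable)]

print(viable)  # [(2, 2), (1, 1)]
print(agreed)  # [(2, 2)]
```

Only universal cooperation survives both filters, which is the agreement the text says rationality recommends.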
8.2 OPPORTUNITIES FOR JOINT ACTION
This section introduces cooperative games, assumptions about them, their relation to noncooperative games, and proposed solutions to them. Its selective review of cooperative games targets points advancing a philosophical theory of collective rationality. Cooperative games often have two features: opportunity for joint action and possibilities of exploitation. The challenge for rational players is to eliminate fears of exploitation so that joint action may emerge. They may not succeed in every case. In some cooperative games, rational agents may not find a way to cooperate. Players’ goals typically conflict in cooperative games, and players may not care about cooperation except as a means to personal gain. The term cooperative game may mislead. Cooperation is not the distinctive feature of cooperative games. The joint action characteristic of cooperative games may occur without cooperation. Consider the Ultimatum Game. It is a two-stage bargaining problem. Two people A and B share $10 if they agree on a division of that money. A makes a proposal. B takes it or leaves it. The game is not repeated, so B does not turn down a proposed division to discourage repetitions of a slight it manifests. A may propose a 9–1 split, and B may accept. Each aims to maximize personal gain. Their causal coordination profits each, but because of conflicting aspirations their 9–1 split does not constitute cooperation. Their agreement is a joint act but not a cooperative act. B wants the division reached compared with no division at all, but does not want it compared with other divisions she aspires to attain. Because of competition, a cooperative game may not end with the players performing a cooperative act. The definition of a cooperative game is stipulative, and various versions are serviceable. Theorems in game theory make precise the type of game they treat and are generally independent of the definition of cooperative games. A theory of collective rationality, however, investigates the relation between cooperative and noncooperative games, and for this purpose must settle the definitions of these types of game. The definitions it adopts should identify fundamental differences that affect application of general principles of rationality and so explain differences between solutions to cooperative and noncooperative games. Noncooperative games do not include opportunities for joint action. In a simultaneous-move game each player acts independently. One player’s act may causally depend on another player’s act in a noncooperative sequential game such as chess. These games do not yield joint action, however, because outcomes do not advance each player’s goals. In noncooperative games an agent acts according to expectations about others’ acts. In cooperative games an agent may generate those expectations. Agents may causally coordinate acts to obtain a gain for each. Opportunities for joint action (which emerges in favorable conditions) define cooperative games as I introduce them. Some game theorists use opportunities for
binding contracts, instead, to define cooperative games. Binding contracts are the mark of cooperative games in which the relevant joint acts are cooperative and require measures to preclude exploitation. Other cooperative games yield collaboration without cooperation, and agents’ joint acts require only communication and agreements to generate expectations about their behavior. An agreement may be self-enforcing in the sense that no individual has an incentive for unilateral departure from the agreement. In a bargaining problem agents may act jointly to restrain pursuit of incentives so that they achieve a mutually beneficial agreement. A bargainer trades a right to demand concessions for the benefit of an agreement. Honoring the agreement need not require resisting a temptation to exploit partners. The agreement may be self-enforcing. Deviation revokes the agreement and forgoes a benefit. Because a joint act may arise without a binding agreement, I use joint action rather than binding agreement to characterize cooperative games.6 Games permitting correlation of agents’ strategies illustrate my characterization of cooperative games. In such games correlated equilibria are possible, as Section 5.3 explains. Two or more agents may jointly deviate from an unattractive Nash equilibrium to reach a correlated equilibrium. This is a probability mixture of strategy profiles such that no individual profits from unilateral deviation. Mixtures of Nash equilibria are correlated equilibria. In the game in Table 8.1, a 50–50 mixture of the two pure Nash equilibria yields a correlated equilibrium with the payoff profile (3/2, 3/2). This profile is better for each player than the payoff profile of the Nash equilibrium in mixed strategies, namely, (2/3, 2/3). Unilateral departure from the correlated equilibrium profits neither player. An arbitrator may implement that equilibrium by flipping a coin to select a Nash equilibrium in pure strategies and by communicating his choice to the players. Neither player has an incentive to deviate from the selected Nash equilibrium. A correlated equilibrium does not require binding contracts. Whether a correlated equilibrium involves joint action depends on the type of correlation between agents’ strategies. Their strategies may be probabilistically correlated but causally independent. An arbitrator’s signal may be a common cause of acts that do not causally influence each other. Then realization of the equilibrium is not a joint act. However, suppose that players communicate with each other, and their strategies’ correlation has a causal basis, perhaps an agreement to follow a coin toss, as in Myerson (1991: 250). Then their selection of a
Table 8.1 Correlated Equilibrium

          Left    Right
Up        1, 2    0, 0
Down      0, 0    2, 1
Nash equilibrium is a joint act. The game in which it occurs is cooperative despite the absence of binding contracts.7 Some theorists, such as Ordeshook (1986: 303–4), Harsanyi and Selten (1988: Chap. 1), Kreps (1990: 9, 93), Myerson (1991: 244–45, 257–58, 420–22, 423–24), and Binmore (2007: Chap. 18), distinguish cooperative and noncooperative games according to the analyses they receive. Groups are units of analysis for cooperative games. Only individuals are units of analysis for noncooperative games. Simplicity motivates adopting a cooperative analysis. For example, bargaining problems receive a cooperative analysis because many bargaining protocols are too complex to analyze noncooperatively. Communication and coordination are explicit moves in a sequential, noncooperative analysis of a bargaining problem. A cooperative analysis relegates such moves to the background. It treats the effects of the suppressed moves.8 According to this view, cooperative and noncooperative games are just abstract representations of concrete games. The same concrete game may receive both a cooperative and a noncooperative analysis. Although differences in representations need not reflect differences in the concrete games represented, there is a distinction in concrete games, too. Some offer opportunities for joint action. A concrete game with these opportunities is cooperative regardless of its representation. A concrete game cannot be both cooperative and noncooperative. It cannot both have and lack opportunities for joint action. A cooperative game, as I use the term, is a type of concrete game, and not a type of abstract representation of a game. Different prospects for representation distinguish cooperative and noncooperative concrete games. A cooperative concrete game has a cooperative representation and also a noncooperative representation. A noncooperative concrete game, however, has only a noncooperative representation. It lacks a cooperative representation because it affords no opportunities for joint action. Consider a cooperative concrete game. What distinguishes its noncooperative and cooperative analyses? Acts are propositional and fine-grained. A combination of individual acts that constitute a joint act differs from the joint act even if the former realizes the latter. A noncooperative analysis represents individual acts and uses principles of individual rationality to support solutions. It does not explicitly show joint acts. The representation may display strategy profiles in a sequential game without identifying the profiles that realize joint acts. The representation may display the same features as does the representation of a noncooperative sequential game without joint acts. For example, a bargaining problem’s representation may display only sequences of individuals’ moves in a bargaining protocol and not the bargains those moves realize. Characterizing cooperative games in terms of joint acts has a normative point. Opportunities for joint action impose new standards of evaluation. A cooperative game with opportunities for joint action may be subject to standards of collective rationality such as efficiency. The game’s cooperative analysis represents joint acts rather than the combinations of individual acts that realize them. It
represents incentives for groups and not exclusively incentives for individuals. Then it uses standards of collective rationality to support solutions. Applying those standards to a group is simpler than aggregating applications of individualistic standards to every member. To study strategic reasoning without addressing learning, this book treats single-stage cooperative games. A game of this type concerns joint acts, and a sequence of moves by individuals realizes a joint act. A sequential game underlies a single-stage cooperative game. Joint acts occur during the cooperative game's single stage, but individual acts occurring at different stages of the underlying sequential game constitute a joint act. For example, two people bargaining over division of a dollar may agree on a 50–50 division in a single stage of the cooperative game. In the underlying sequential game, one person may propose a 50–50 division. Then his partner may accept that proposal. A sequence of moves realizes their joint act of agreement.9 Suppose that a multistage noncooperative game realizes a single-stage cooperative game. The cooperative game may have a single-stage cooperative analysis and also a multistage noncooperative analysis. These analyses are consistent, if accurate. Myerson (1991: xi, 455–56) holds that the noncooperative analysis is fundamental, while the cooperative analysis has an essential, complementary role. He advocates reducing cooperative analyses to noncooperative analyses and, in particular, reducing solutions cooperative analyses generate to solutions that noncooperative analyses generate.10 Contrary to this common view, Osborne and Rubinstein (1994: 3, 256) hold that noncooperative analyses are not more basic than cooperative analyses. Both types of analysis identify solutions, that is, strategy profiles compatible with rational play. A cooperative analysis need not attend to the processes realizing profiles that constitute joint acts. The solution of a cooperative game is independent of the underlying sequential game that realizes the cooperative game because various underlying sequential games may realize it. The analysis of the relation between individual and collective rationality in Chapter 4 supports Myerson's account of the relation between cooperative and noncooperative analyses. A noncooperative analysis is more basic than is a cooperative analysis. A cooperative game's abstract representation may have multiple possible realizations, but a concrete cooperative game has a unique concrete realization as a sequential game. Players' acts realize their joint acts, so a fundamental analysis of a concrete cooperative game uses a sequential rather than a cooperative analysis. A noncooperative analysis uses first principles to support a solution to a concrete cooperative game. A solution of the underlying sequential game realizes a solution of the cooperative game. Verification of a cooperative analysis shows that the solutions it identifies are equivalent to the solutions a noncooperative analysis identifies. An accurate cooperative analysis agrees with an accurate noncooperative analysis of the same concrete game.
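As a side check on the game of Table 8.1 above, the payoff comparison between its mixed-strategy Nash equilibrium and the 50–50 correlated equilibrium can be computed directly. A minimal sketch, assuming Python; the equilibrium probabilities below come from the standard indifference computation and are not stated in the text.

```python
payoff_row = {("Up", "Left"): 1, ("Up", "Right"): 0,
              ("Down", "Left"): 0, ("Down", "Right"): 2}
payoff_col = {("Up", "Left"): 2, ("Up", "Right"): 0,
              ("Down", "Left"): 0, ("Down", "Right"): 1}

# Mixed Nash equilibrium: Row plays Up with probability 1/3, which makes
# Column indifferent; Column plays Left with probability 2/3, which makes
# Row indifferent.
p, q = 1/3, 2/3
prob = {("Up", "Left"): p * q, ("Up", "Right"): p * (1 - q),
        ("Down", "Left"): (1 - p) * q, ("Down", "Right"): (1 - p) * (1 - q)}
print(sum(payoff_row[c] * prob[c] for c in prob))  # 0.666... = 2/3
print(sum(payoff_col[c] * prob[c] for c in prob))  # 0.666... = 2/3

# Correlated equilibrium: a 50-50 mixture of the two pure Nash equilibria,
# (Up, Left) and (Down, Right).
print(0.5 * payoff_row[("Up", "Left")] + 0.5 * payoff_row[("Down", "Right")])  # 1.5
print(0.5 * payoff_col[("Up", "Left")] + 0.5 * payoff_col[("Down", "Right")])  # 1.5
```

Each player's expected payoff rises from 2/3 at the mixed equilibrium to 3/2 at the correlated equilibrium, matching the figures given above.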
The same concrete bargaining game may be analyzed cooperatively using only a representation of possible agreements, or as a sequential game using a representation of individuals’ offers, counteroffers, and acceptances that result in an agreement. The difference between cooperative and noncooperative analyses concerns the game’s representation and principles for identifying its solutions. A solution’s derivation from an analysis of the underlying sequential game may be complex. Nonetheless, a derivation is possible in theory and yields the solution using only principles of individual rationality. To illustrate the two types of representation, consider the Ultimatum Game. A typical result is a sequence of players’ acts constituting coordination to achieve a division from which each player benefits. The sequence is not a joint act that a pair performs at a single time. The pair’s agreement on a division is a joint act. A sequence of players’ acts realizes it. The agreement occurs at a single time, namely, at the culmination of the agreement procedure. The components of the agreement’s realization, namely, the steps generating it, occur over a period of time. The pair acts jointly even if they do not simultaneously perform their parts in the agreement’s realization. The Ultimatum Game’s result is a joint act according to its cooperative representation. It is a combination of individual acts according to its noncooperative, sequential representation. The causal influence that characterizes joint action occurs in the underlying sequential game. The sequential game yields the agreement that resolves the bargaining problem. That agreement is a joint act in the concrete bargaining game. For another example, consider a cooperative revision of the Prisoner’s Dilemma in which joint action, in particular, binding agreement is possible. Without agreement the players, Row and Column, are in an ordinary Prisoner’s Dilemma and achieve (1, 1), as Table 5.1 indicates. With agreement, Row and Column may achieve any of the four outcomes of an ordinary Prisoner’s Dilemma. Because the outcomes (3, 0) and (0, 3) are asymmetric, Row and Column will not agree to either. Of the remaining outcomes, each prefers (2, 2) to (1, 1). If Row proposes coordinating and Column accepts the proposal, then Row and Column enter a bargaining problem in which symmetry and efficiency lead to (2, 2). The tree in Figure 8.2 represents the game sequentially. Backward induction, whose course
Row
  Propose coordination -> Column
    Accept -> 2, 2
    Don't -> 1, 1
  Don't -> 1, 1

FIGURE 8.2 A sequential analysis of a cooperative revision of the Prisoner's Dilemma.
Table 8.2 A Single-Stage Analysis of a Cooperative Revision of the Prisoner's Dilemma

                 If propose, accept    If propose, don't accept
Propose          2, 2                  1, 1
Don't propose    1, 1                  1, 1
the double lines indicate, supports the strategy profile (Propose, (If propose, accept)). It is a rollback equilibrium of the sequential game. Realization of that profile is causal coordination for mutual benefit. It is a joint act arising from Row’s sending relevant information to Column. Causal interaction in the underlying sequential game yields a joint act of the cooperative game. The payoff matrix in Table 8.2 represents the sequential game as a single-stage game. The strategies in the matrix are strategies of the sequential game. In this payoff matrix (Propose, (If propose, accept)) is the efficient Nash equilibrium. The sequential game has the same solution as the single-stage game. The sequential analysis uses principles of individual rationality that support the rollback equilibrium. The single-stage analysis uses principles of individual rationality that support an efficient Nash equilibrium. The game’s cooperative representation reduces outcomes of the players’ interaction to (1, 1) and (2, 2). The first results from a failure to act jointly, and the second results from joint action. The joint act is the upshot of a sequence of individual acts, although the cooperative representation depicts only a single stage. The principle of efficiency, a principle of collective rationality, yields (2, 2) when applied to the cooperative representation. The cooperative analysis thus yields the same outcome as the sequential and single-stage noncooperative analyses. A single-stage cooperative game allows for joint acts, such as agreements, in the single stage of its cooperative representation. A player’s agreeing may be part of a Nash equilibrium in an underlying sequential game, which may have enforcement stages. A joint act achieved in stages of the sequential game yields the outcome of the single-stage cooperative game. In the underlying sequential game, individuals have strategies such as making and accepting offers. In the cooperative game, groups have joint strategies that correspond to agreements individuals may reach in the underlying sequential game. The joint strategies are defined in terms of the strategies available to agents in the cooperative game, not in terms of the strategies available to agents in the underlying sequential game. Table 8.3 summarizes terminology. I use standard representations of cooperative games and assume that they display all factors that bear on solutions of the games they represent, given idealizations and supplementary assumptions about the games. The representations display payoffs, and idealizations specify players’ information, among other
Table 8.3 Classification of Acts, Games, and Methods of Analysis

collective act, the combined acts of a group's members.
coordination, a collective act in which each member's act rests on information about the others, for example, a Nash equilibrium.
joint act, an act of coordination, meeting a goal of each participant, and produced by causally interdependent acts of participants, for example, an agreement. Same as collaboration.
cooperation, a collective act meeting a goal of each participant with respect to her aspirations and requiring a premium for participation, for example, mutual aid achieved by a binding agreement.
cooperative game, a game with opportunities for joint action.
noncooperative game, a game without opportunities for joint action.
cooperative analysis, representation of joint acts and application of principles of collective rationality.
noncooperative analysis, representation of individual acts only and application of principles of individual rationality only.
things. As an idealization, I assume that players and their circumstances are ideal, and, in particular, that they have common knowledge of their game and their rationality.11 In an ideal cooperative game, conditions are perfect for joint action. Communication is unconstrained and has no cost. If a group performs a joint act, the members know that they do. The joint act typically arises from agreement, which generates common knowledge. Each, as he does his part, knows that the others do their parts. The profile of strategies realized surprises no member of the group. Members lack incentives to keep information private when forming coalitions and adopting joint strategies. If a joint act involves a probability mixture of acts, the mixture’s adoption is common knowledge although its result is unknown. Standard representations of bargaining problems display outcomes as payoff profiles. The representations make background assumptions to ensure their adequacy. The set of possible divisions of a good does not settle a bargaining problem if, for example, the parties may apply force to reach a bargain. Force may enable one bargainer to obtain the lion’s share of the good, although nothing in the set of possible divisions suggests this outcome. An analysis should represent all incentives that affect players, including incentives created by application of force. An adequate representation omits no relevant options and no desires and beliefs that motivate agents. Because a bargaining problem’s standard representation does not display force’s presence, those representations assume its absence. A typical representation of a cooperative game lists the players’ utilities for outcomes of all possible combinations of players’ acts, including their joint acts. A proposed solution is a strategy profile or set of strategy profiles. Support for a proposal shows that the designated profiles meet conditions for being a solution and that other profiles fail to meet those conditions. Rational agents take advantage of opportunities for joint action. So standards for solutions to cooperative games may differ from standards for noncooperative games. Standards of
collective rationality support traditional requirements for solutions to cooperative games. For example, intuitive standards of collective rationality support the requirement of efficiency. Cooperative game theory entertains many proposals concerning solutions. Shubik (1982: Chaps. 6, 7, 11), Fudenberg and Tirole (1991), and Moulin (1995) survey these proposals. The remainder of this section mentions important proposals put aside. Subsequent sections examine a plausible necessary condition for a solution to a broad type of cooperative game. If a cooperative game has more than two agents, a group of agents may form a coalition that adopts a joint strategy involving its members and excluding others. Von Neumann and Morgenstern ([1944] 1953: Sec. 4.5, Sec. 56.12) present a classic treatment of such cooperative games. They take a solution to be a set of profiles yielding a stable set of outcomes. Luce and Raiffa (1957: Chap. 9), Ordeshook (1986: 389–97), and Binmore (2007: Sec. 18.4) review the main features of these stable sets. A set of outcomes is stable if and only if it meets two conditions: (1) for every outcome outside the set some coalition can achieve an outcome inside the set that is better for all its members and (2) no coalition can achieve an outcome inside the set better for all its members than another outcome inside the set. I put aside stable sets because an ideal cooperative game may lack a stable set, whereas a solution is attainable in every ideal cooperative game. Voting creates a type of game in which a group selects an option. A voting rule, such as majority rule, specifies the combination of votes that counts as selection of an option. Arrow's Theorem (1951) shows that no general voting rule meets certain plausible conditions. The literature on social choice, to which Arrow's work belongs, treats aggregation of individual preferences into collective preferences. An aggregation mechanism generally does not attend to the strategic interaction of individuals. Its approach to collective action is analogical. The goal is a collective preference ranking of options that rationalizes collective acts. This book considers how strategic reasoners act together to achieve collective rationality, and so does not follow that approach. Also, a voting game does not count as a cooperative game in my sense if voters act independently. Their voting is a collective act, but not a joint act. Because this chapter treats strategic reasoning in cooperative games, it puts aside common proposals concerning solutions to voting games. Shapley ([1953] 1997) makes a famous proposal about the solution to a cooperative game. According to it, the solution distributes the gain from efficient joint action according to an assessment of individual players' power. The assessment examines each player's marginal contribution to an efficient joint act of the players. This is the increment in value that his joining the other players creates. The possible efficient joint acts are distinguished by the order in which the players may form a coalition of all players. The Shapley value accords a player the average of his marginal contributions to possible efficient joint acts. The argument for a distribution according to Shapley values rests partly on principles of rationality
and partly on principles of fairness. To be fair to all players, the distribution may give a group of players less than the group can obtain on its own. Because the chapter puts aside principles of fairness, it puts aside Shapley's proposal.12 The rest of this chapter examines a common standard for a solution to a type of cooperative game called a coalitional game. Two sections introduce coalitional games and a standard for a coalitional game's solution that requires an outcome in a set called the game's core. The final section assesses that standard.

8.3 COALITIONAL GAMES

In some cooperative games, individuals may form coalitions whose members act jointly. The coalitions may interact. These are coalitional games. Partners sharing the cost of an enterprise face a coalitional game, for example, three people paving a road to their houses and also four farmers building a levee for flood control. Raiffa (1982: Chap. 17) presents a case in which three cement companies divide the cost of an innovation they share. They benefit differently and so divide the cost unequally. The usual representation of a coalitional game is a characteristic function specifying possible results for each possible coalition. Friedman (1990: 17) and others call coalitional games characteristic-function-form games, after the functions their typical representations employ.13 This book treats coalitional games with a single stage in which coalitions form and act. A multistage game with opportunities for individuals to make and accept offers realizes such a coalitional game. Coalitions' behavior in a single-stage game arises from individuals' behavior in the underlying multistage game. Although moves in the single-stage coalitional game do not causally influence each other, moves in the underlying multistage game may causally influence subsequent moves. Formation of coalitions and their joint acts, the culmination of moves in the multistage game, occur at the same time in the coalitional game. Individuals are taken as single-member coalitions for expository convenience. Generalizations about all coalitions then cover all agents, individuals and multiindividual coalitions alike. The coalition of all is the grand coalition. Efficiency generally requires its formation. After adding opportunities for joint action, a two-individual noncooperative game becomes a three-agent coalitional game. The new agent is the coalition of the two individuals. Multiindividual coalitions are collective agents. A coalition may perform a joint act because its members adopt a binding contract to perform their parts. Collective rationality governs joint action within and among coalitions. This chapter introduces elementary coalitional games. A game of this type has a finite number of players. Each coalition, if it forms and its members act rationally, generates a value that a single number may represent. The coalition may distribute its value among its members. The characteristic function specifying each coalition's value is superadditive, that is, a coalition's growth maintains or increases its value. Consequently, the grand coalition's value is at least as great as any other coalition's
value, and no feasible outcome allocates more than the grand coalition's value. Elementary coalitional games raise interesting normative issues that any study of more complex coalitional games must also resolve.14 An elementary coalitional game's representation by a characteristic function carries some assumptions about the effects of joint acts. A coalition's value is the product of the coalition's independent rational work. A representation using a characteristic function assumes that a coalition does not obtain more or less than its value because of outsiders' acts. A coalition's profits if it forms and acts rationally are constant with respect to outsiders' behavior. More precisely, the payoff for a coalition from its rational action is causally independent of compatible action by outsiders. Outsiders cannot affect a coalition's gain by use of force, for example. Because of independence, a coalition's value equals the maximum it may gain from joint action. Independence eliminates any hope that a coalition's departure from a strategy profile awarding its value brings its members more than its value, because its departure prompts beneficial outsiders' acts.15 Representation by a characteristic function also assumes that all distributions of a coalition's gains from rational action yield the same value. A coalition's value is often taken as an amount of money, but more generally it is an amount of transferable utility. This interpretation does not assume interpersonal utility, but instead assumes just a way of scaling each member's utility assignment so that distributions of a coalition's gains from rational action yield a constant sum of utilities for the coalition's members. This scaling is possible if a coalition gains an amount of money, and each member has a linear utility function for money. After scaling personal utility to make it transferable, one may sum members' utilities to obtain a coalition's utility as one would if the scales were interpersonal. A coalition's value is the total utility its members receive if the coalition forms, acts, and adopts an efficient division of its gains.16 A utility profile represents an outcome by listing its utility for each individual. Because divisions of a coalition's value yield utility profiles whose elements have a constant sum, in a utility space the utility profiles representing divisions of a two-person coalition's value form a line, and for a three-person coalition form a plane. A coalition's possible divisions of its value represent its members' joint strategies. An elementary coalitional game's outcome includes, besides a utility profile, formation of coalitions that produce the profile. The outcome is a coalition structure, or a partition of individuals into coalitions that form, and a division of the gains of each coalition in the structure. Some features of outcomes may be inferred from others. For example, if the grand coalition forms, then no unit coalition forms. In ideal coalitional games, if a coalition forms, it acts optimally, obtains its value, and then rationally distributes its value among its members. Ideal conditions for negotiation make it rational for a coalition's members to achieve an efficient division of the coalition's value. Suppose that a three-person coalition has a value of eight. Then it may achieve any utility profile with values summing to eight, for example, (0, 4, 4) and (2, 0, 6).
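A minimal sketch of such a representation in Python. Only the grand coalition's value of eight and the divisions (0, 4, 4) and (2, 0, 6) come from the text's example; the singleton and pair values, and the helper names, are hypothetical and chosen to be superadditive.

```python
from itertools import combinations

players = ("A", "B", "C")

# A hypothetical characteristic function; v maps each coalition to its value.
v = {frozenset(): 0,
     frozenset("A"): 1, frozenset("B"): 1, frozenset("C"): 1,
     frozenset("AB"): 3, frozenset("AC"): 3, frozenset("BC"): 3,
     frozenset("ABC"): 8}

def superadditive(v, players):
    # Superadditivity: v(S union T) >= v(S) + v(T) for all disjoint
    # coalitions S and T.
    coalitions = [frozenset(c) for n in range(len(players) + 1)
                  for c in combinations(players, n)]
    return all(v[s | t] >= v[s] + v[t]
               for s in coalitions for t in coalitions if not s & t)

print(superadditive(v, players))  # True

# Divisions of the grand coalition's value are utility profiles summing to 8.
for profile in [(0, 4, 4), (2, 0, 6)]:
    assert sum(profile) == v[frozenset(players)]
```

With transferable utility, any profile summing to a coalition's value represents one of its joint strategies, which is the correspondence the next paragraph describes.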
Each utility profile corresponds to a two-part strategy profile stating, first, that the three-person coalition forms and, second, that it divides its value to realize the utility profile. By specifying coalitions’ values, a characteristic function represents strategy profiles generating utility profiles that realize those values. It indicates coalitions that may form and possible distributions of their values. A characteristic function represents all relevant options in an elementary coalitional game. Its representation of options is coarse-grained, but by tacit assumption suffices for solving a game it represents.

Different coalitions may form in two strategy profiles yielding the same utility profile. For example, suppose that coalitions are unproductive. Then the utility profile that assigns everyone no gain is a product of every coalition structure. Representing outcomes with utility profiles may conceal factors that affect solutions. However, in many coalitional games, assuming that coalitions act rationally, every utility profile indicates a coalition structure and joint strategies for coalitions in the structure. Consequently, a one–one correspondence exists between utility profiles and strategy profiles. Hence, utility profiles may represent strategy profiles. In particular, a coalition’s possible divisions of its value among its members may represent its joint strategies. Although a game’s solution specifies a set of strategy profiles, the corresponding set of utility profiles may substitute for it.

Bargaining games are coalitional games in which the only significant coalitions are the unit coalitions and the grand coalition. No coalition is productive except the grand coalition. A bargaining game is an elementary coalitional game when players bargain about the division of an amount of transferable utility. Joint acts are agreements all players reach. Efficiency requires formation of the grand coalition and a division of its value. A contract or self-enforcing agreement may yield the division. Nash’s solution ([1950] 1997b) to a bargaining problem picks the division that maximizes the product of individuals’ utilities (after scaling so that failure to reach an agreement serves as a zero point for each individual’s utility function).17

Coalitional games are generalized bargaining games in which leverage comes from an ability to form productive coalitions. A coalition member’s share of his coalition’s value depends on how much he gains in other coalitions. Coalitions may fail to form because members do better in other coalitions. The significance of multiindividual coalitions besides the grand coalition complicates negotiations. Individuals in a coalitional game may bargain over a coalition structure in addition to bargaining over divisions of a coalition’s value. Possible outcomes in utility space do not adequately represent a coalition’s bargaining problem. Nash’s solution requires modification to reflect differences in bargaining power arising from members’ opportunities to join other coalitions. The asymmetric Nash bargaining solution maximizes the product of the agents’ utility gains after raising each agent’s gain by an exponent that indicates the agent’s bargaining power because of factors such as patience, tolerance of risk, and the bargaining
protocol. I assume that individuals in an ideal coalitional game bargain rationally, but do not assume that they reach any particular solutions to their bargaining problems. Treating coalitional games’ equilibria rather than their solutions permits leaving open the solutions of bargaining problems.

8.4 The Core

In coalitional games, where coalitions as well as individuals are agents, the standard type of equilibrium is a strategy profile generating an outcome in the core. The core is the set of outcomes that give each coalition at least its value. Individuals are unit-coalitions. So each individual must receive at least as much as she can get on her own. Each pair must get at least as much as it can get from its members working together. The same holds for triples and so on. Whether a coalitional game’s outcome gives a coalition at least its value depends on whether it assigns to the coalition’s members utilities that sum to at least the coalition’s value. The outcome for a coalition that does not form is the sum of the utilities its members receive in the coalitions that form.

Consider this typical coalitional game. Three people A, B, and C may form work teams. The work teams can earn various amounts of money. Let v(AB) abbreviate v({A, B}), the coalition {A, B}’s value, and abbreviate similarly for other coalitions. Then the coalitions’ values in terms of transferable utility are: v(A) = v(B) = v(C) = 1, v(AB) = v(BC) = v(AC) = 4, v(ABC) = 12. To achieve an outcome in the core, the people must form the grand coalition {A, B, C}, generate its value 12, and divide its value so that each individual receives at least 1 and each pair receives at least 4. So, two outcomes in the core are (4, 4, 4) and (6, 3, 3).

The core is a set of outcomes, but the outcomes arise from strategy profiles. Requiring an outcome that gives every coalition at least its value constrains strategy profiles. Generating an outcome in the core, that is, a core allocation, is an equilibrium standard for a strategy profile. Some coalition profits from changing a strategy profile that does not generate a core allocation.18

Nash equilibrium in noncooperative games ignores joint strategies. It considers unilateral deviation by an agent, but not joint deviation by two or more agents. This suits noncooperative games because they do not afford opportunities for joint action. Unilateral deviation exhausts an agent’s response to others’ strategies. Being a Nash equilibrium is not enough for equilibrium in a coalitional game, however. An individual may profit from leading a coalition’s departure from a Nash equilibrium. When joint action is possible, equilibrium must consider a coalition’s deviation.

Aumann (1959) introduces the standard of a strong Nash equilibrium for coalitional games. This is a strategy profile such that no coalition profits from unilateral deviation. Its outcome is such that no coalition does better under any joint strategy it may adopt. Strong Nash equilibrium generalizes Nash equilibrium for coalitional games by considering coalitions’ gains. Assuming that each
coalition gains by moving from an inefficient to an efficient strategy, it entails efficiency for all coalitions simultaneously. It also entails a core allocation because a coalition profits from unilaterally changing a profile that gives it less than its value.

The standard of a strong Nash equilibrium is stricter than the standard of a core allocation. Suppose that a coalition’s payoff, if it forms, is not constant as nonmembers’ acts vary. Then it may have an incentive to adjust its strategy so that a response by others makes its payoff greater than its value. An outcome may give a coalition what it can gain on its own but not give it what it can gain by also manipulating outsiders’ acts. However, realizing a strong Nash equilibrium is equivalent to achieving a core allocation in elementary coalitional games, where a coalition’s gain if it forms is independent of outsiders’ acts. A profile realizes a core allocation, just in case no coalition profits from unilateral deviation.19

For precision, the analogy between being a Nash equilibrium and achieving a core allocation needs a slight revision. A coalition’s strategies include forming or not forming, and so coalitions’ strategies are not independent. For example, if a unit coalition forms, then the grand coalition does not. Because a coalition’s change in strategy may cause changes in the strategies of other coalitions, its change in strategy may not be unilateral. Still, its change in strategy may unilaterally instigate a profile’s change. In an elementary coalitional game, no coalition profits from unilaterally changing a profile that realizes a core allocation.20

I adopt efficiency as the measure of a coalition’s gain. Accordingly, a coalition gains, just in case each member does. A utility profile lists an outcome’s utility for each individual. A utility subprofile for a coalition lists an outcome’s utility for each member of the coalition. The strategy profiles for the individuals in a coalitional game yield the feasible utility profiles. The joint strategies of a coalition, given a strategy subprofile for nonmembers, yield the feasible utility subprofiles for the coalition given the strategy subprofile for the nonmembers. A strategy profile realizes a core allocation if and only if no coalition’s utility subprofile is worse for each member than any utility subprofile feasible for it given the strategy subprofile for nonmembers.

The standard of the core for an elementary coalitional game assumes that no collectively rational coalition forgoes a gain. The core may be objectively defined in terms of coalitions’ payoffs or subjectively defined in terms of coalitions’ incentives. The two definitions agree in an ideal coalitional game because individuals are informed about their game. The equilibrium standard is more easily extended to nonideal games if the core is defined subjectively in terms of coalitions’ incentives. Accordingly, a profile realizes a core allocation if and only if no coalition has an incentive to change the profile unilaterally. Chapter 9 offers an account of a coalition’s incentives.

Nash equilibrium is advanced as a necessary condition for a solution. Theorists argue that only certain types of Nash equilibrium are solutions. Similarly, achieving a core allocation is advanced as an equilibrium condition necessary for achieving a solution. Selecting an element of the core when several exist requires
solving negotiation problems within and among coalitions. For example, Nash’s solution to a bargaining problem is an equilibrium selection criterion for agreements yielding outcomes in the bargaining problem’s core. Because this chapter treats only the equilibrium condition of realizing a core allocation, it does not address negotiations that yield a particular core allocation.

An outcome’s being in the core entails its efficiency. If an outcome is not efficient, then the grand coalition does not receive its value, and the outcome is not a core allocation. A core allocation also achieves efficiency among coalitions. No allocation gives each coalition more than a core allocation does because none gives the grand coalition more than a core allocation does.

In a coalitional game, communication, contracts, and other forms of commitment create opportunities for joint action. The opportunities for joint action are a two-edged sword, however. Intense competition between coalitions may block profitable joint action. Opportunities for communication and binding agreement facilitate efficiency, but do not ensure it in every case where players are rational. Consider the Ultimatum Game, a bargaining problem and so a coalitional game. The players’ communication is limited to an offer and a response to it. A failure to achieve efficiency resulting from Player 2’s refusing a low offer need not entail any individual’s irrationality, despite opportunities for communication and binding agreements. Player 2’s refusal may be justified as an attempt to discourage future slights, and Player 1’s low offer may be justified by ignorance of Player 2’s long-term strategy. Ensuring efficiency generally requires ideal individuals in ideal conditions for joint action.

The proposal that a solution realizes a core allocation rests on many idealizations. Coalitions form costlessly, and individuals have full information about the characteristic function. They communicate effortlessly about formation of coalitions and divisions of profits. If a coalition forms and adopts a joint strategy, individuals in the coalition have direct knowledge of the coalition’s formation and its joint strategy. Because of communication, all individuals know, without strategic reasoning, the coalitions that form and their joint strategies. A coalition may disband if a proposed division of its value short-changes some member.

Do coalitions need binding agreements to hold them together? Core allocations are self-enforcing and do not require binding contracts, as Moulin (1995: 403) notes. A coalition does not gain by unilaterally changing a profile realizing a core allocation. The profile gives the coalition at least its value, and the change yields no more than its value.

Some theorists reject realizing a core allocation as an equilibrium standard for coalitional games. Raiffa (2002: 444–46) rejects the standard in coalitional games where a single player holds the key to production, as in a coalitional game with this characteristic function: v(A) = v(B) = v(C) = 0, v(AB) = v(AC) = 10, v(BC) = 0, v(ABC) = 10. The unique core allocation is (10, 0, 0). In theory, A plays B and C against each other so that they lower their prices for collaborating with him. However, Raiffa holds that if B and C are rational, they collude to escape
the bidding war. They extract concessions from A, who needs someone’s collaboration. They revise short-term tactics in light of long-term strategy.

The example shows that the characteristic function is not an adequate representation of the resources of B and C in some versions of the coalitional game. If they may enter a binding agreement not to let A play them against each other in cutthroat competition to be his partner, then the value of their coalition is greater than 0. The example is not a compelling objection to the game’s core allocation, granting that the characteristic function adequately represents options and incentives. If the characteristic function represents the game adequately, B and C cannot collude to restrain their competition. They cannot stop the slide to (10, 0, 0).21
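Before turning to games without core allocations, the core condition applied in this section admits a simple computational sketch; the sketch is mine, not the author’s, and the function name in_core and the encoding are hypothetical. The test presupposes a feasible allocation, one whose utilities sum to no more than the grand coalition’s value.

    def in_core(allocation, v):
        """Does a (feasible) allocation give every coalition at least its
        value under characteristic function v?"""
        return all(sum(allocation[p] for p in coalition) >= value
                   for coalition, value in v.items())

    # The work-teams game of Section 8.4.
    v = {
        frozenset({"A"}): 1, frozenset({"B"}): 1, frozenset({"C"}): 1,
        frozenset({"A", "B"}): 4, frozenset({"B", "C"}): 4,
        frozenset({"A", "C"}): 4, frozenset({"A", "B", "C"}): 12,
    }
    print(in_core({"A": 4, "B": 4, "C": 4}, v))   # True
    print(in_core({"A": 6, "B": 3, "C": 3}, v))   # True
    print(in_core({"A": 10, "B": 1, "C": 1}, v))  # False: {B, C} receives 2 < 4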
8.5 An Empty Core

In coalitional games, rational individuals take advantage of opportunities to communicate and to form coalitions. Opportunities for joint action not only facilitate cooperation, but also create obstacles. Because coalitions are agents, their presence diminishes the prospects of a strategy profile from which no agent has an incentive to switch. Many alliances are unstable. Notoriously, an ideal coalitional game may lack a core allocation. Realization of a core allocation fails as a standard of collective rationality because it is not attainable.22

Take, for instance, an ideal coalitional game in which three agents divide $6 according to a majority decision; otherwise they receive nothing. Assuming that money converts into utility, the game’s characteristic function is: v(A) = v(B) = v(C) = 0, v(AB) = v(BC) = v(AC) = 6, v(ABC) = 6. Consider the outcome in which each agent receives $2, namely, (2, 2, 2). This outcome does not give the coalition of A and B as much as it can get on its own, namely, $6. It gives the coalition only $4. The coalition {A, B} can do better by forming a majority that votes to give each member $3 and leaves nothing for C. Each pair of agents has an incentive to vote itself all the money, and to divide it equally among two instead of three. So, there are incentives to switch to (3, 3, 0), (3, 0, 3), and (0, 3, 3). No strategy profile yields a core allocation; the core is empty. For every division the agents may reach by majority decision, some pair of agents has an incentive to switch. For each possible division, some coalition has a voting strategy that yields improvements for its members. No profile yields a core allocation, because any pair of agents can hog the whole $6.

Because the game is ideal, a concrete version nonetheless has a solution (which may depend on details the characteristic function does not represent). It has a profile of strategies that are jointly rational. Therefore, realizing a core allocation is not a necessary condition for being a solution. It is not a standard of joint rationality among coalitions. When a game’s core is empty, coalitions cannot pursue all incentives. They rationally forgo pursuit of some incentives.23
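The emptiness is easy to verify by hand: summing the three pairwise constraints requires twice the total, $12, to be at least $18, which is impossible. A brute-force sketch of my own agrees with the arithmetic; a grid search cannot prove emptiness in general, but it illustrates the failure here, and the names in the code are hypothetical.

    def no_core_allocation_on_grid(v, total, steps=60):
        """Search grid divisions of `total` among A, B, and C; return True
        if none gives every coalition at least its value."""
        unit = total / steps
        for i in range(steps + 1):
            for j in range(steps + 1 - i):
                alloc = {"A": i * unit, "B": j * unit,
                         "C": total - i * unit - j * unit}
                if all(sum(alloc[p] for p in s) >= val for s, val in v.items()):
                    return False
        return True

    # Majority-rule division of $6.
    v = {
        frozenset({"A"}): 0, frozenset({"B"}): 0, frozenset({"C"}): 0,
        frozenset({"A", "B"}): 6, frozenset({"B", "C"}): 6,
        frozenset({"A", "C"}): 6, frozenset({"A", "B", "C"}): 6,
    }
    print(no_core_allocation_on_grid(v, 6))  # True: the core is empty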
When an ideal game’s core is empty, some coalition fails to pursue its incentives. Suppose that coalitions besides {A, B} pursue incentives to switch. Then it is impossible for {A, B} to adopt a strategy from which it has no incentive to switch. Suppose that the coalition {A, B} forms and realizes the outcome (4, 2, 0). Then the coalition {B, C} has an incentive to supplant the coalition {A, B}. It may propose the division (0, 4, 2), offering B more than she gets in the coalition {A, B}. Because B has an incentive to defect, the coalition {A, B} has an incentive to disband. Suppose that if the coalition {A, B} disbands, the coalition {B, C} will not realize (0, 4, 2) because then the coalition {A, C} has an incentive to form and will produce the outcome (2, 0, 4) instead. Given this result of not forming, the coalition {A, B} has an incentive to form and realize the outcome (4, 2, 0). So if {A, B} forms, it has an incentive to disband; according to supposition, B has an incentive to leave. Also, if {A, B} does not form, it has an incentive to form; according to supposition, if it does not form, (2, 0, 4) is the outcome, and both A and B do better by realizing (4, 2, 0). Whatever strategy the coalition {A, B} adopts, it has an incentive to switch. The coalition cannot pursue all incentives. Rationality, being attainable, does not require it to pursue all incentives. Because every agent, even a coalition, has a rational option, not every incentive to switch is a sufficient reason to switch.

Introducing mixed strategies does not prevent empty cores. Whether there are outcomes in a game’s core depends on the game’s characteristic function. Its assignment of a value to a coalition does not imply anything about the coalition’s strategies. They may include mixed strategies, that is, randomizations of a coalition’s formation and its distribution of its value among its members. Adding mixed strategies to a game with an empty core need not change the payoff a coalition can secure on its own. Mixed strategies may yield a characteristic function that makes the core empty. They ensure that a finite noncooperative game has a Nash equilibrium, but they do not ensure that a finite coalitional game has a core allocation.

Moulin (2003: 233) holds that if a game’s core is empty, then its outcome is arbitrary and no possible outcome is rational. This view rejects rationality’s attainability. A decision problem’s outcome may be arbitrary but still rational. Arbitrary tie breaking is rational, for instance. Moulin may mean that rationality favors no outcome and permits any. If so, then his view is too tolerant. In the majority-rule game, the outcome (0, 0, 0) is not rational. Even if the core is empty, some outcomes are impermissible and irrational. Principles of rationality govern games with empty cores; not anything goes.

The agents playing a game with an empty core achieve some outcome and may avoid irrationality. A coalition may rationally give up endless pursuit of incentives. This happens in a concrete realization of the majority-rule game. Circumstances may excuse a coalition’s failure to pursue an incentive. Not all incentives an outcome generates constitute sufficient reasons for realizing another outcome. For example, some incentives are self-undermining in the sense that they vanish in
the conditions their pursuit realizes. Agents may have reasons not to pursue such incentives.

Because the core may be empty, a coalitional game’s solutions need not realize core allocations, as Binmore (1998: 39–40) notes. Because collective rationality is attainable, agents can be collectively rational in coalitional games. Chapter 9 generalizes the core to obtain a necessary condition of collective rationality that can be met in every ideal coalitional game. The generalization ensures that the set of equilibrium outcomes is nonempty. It assumes a richer representation of a coalitional game than a characteristic function provides. A characteristic function does not adequately represent a coalitional game with an empty core.

My approach to generalization of the core follows the tradition of the bargaining set, the kernel, and the nucleolus. The bargaining set has the allocations for which every objection meets a counterobjection, and the kernel and nucleolus are variants of the bargaining set. These generalizations all deal with the core’s emptiness. They are all nonempty in coalitional games. The bargaining set contains the kernel, which contains the nucleolus, which is a single outcome. Strategic reasoning motivates the generalizations. They consider not only a coalition’s response to a proposal, but also other coalitions’ counterresponses.24

As Osborne and Rubinstein (1994: 277–78) explain, these generalizations impose restrictions on a coalition’s credible deviations from a proposal. They consider what happens after a deviation, and so a deviation’s ultimate and not just proximate effect. Although they differ about significant objections and counterobjections, these generalizations all take an outcome to be stable if each objection is balanced by a counterobjection. Osborne and Rubinstein (1994: 278) observe that the generalizations have few persuasive applications. Their special assumptions about objections and coalition structures limit their applicability. Chapter 9 presents a generalization of the core that relaxes those assumptions and so has greater applicability.

Moulin (1995: 18) objects to using credible deviations to generalize the core. His objections are: (1) the strategic reasoning involved is too sophisticated for humans, and (2) second-order stability concepts yield large sets of stable outcomes and so do not have much bite. To handle games with empty cores, he prefers relying on forms of cooperation appealing to justice. Despite Moulin’s criticisms, using higher-level objections best fits a theory of rationality. Idealizations handle criticism (1). A generalization of the core may advance a standard of rationality for ideal agents. Future research may achieve greater scope by dispensing with idealizations. Advancing a generalization of the core as only a necessary condition of collective rationality handles criticism (2). The generalization pares down the set of profiles that may be solutions, and other criteria narrow it further.

Although Chapter 9 relies on higher-level objections, it addresses some problems with precedents. The bargaining set, kernel, and nucleolus exist in all coalitional games, but do not derive from principles of individual rationality. They lack a foundation in first principles. For instance, they consider counterobjections, but
not counter-counterobjections. They imagine that the path of events that a deviation triggers is cut short after two stages. They ought to consider all chains of objections. Also, their accounts of objections and counterobjections assume a static coalition structure. They ought to accommodate coalitions’ incentives to form and disband. Friedman (1990: 243, 256, 273, 275) notes these problems with the proposals.25

Chapter 1 argues for a unified theory of collective rationality with general principles governing both noncooperative and cooperative games of strategy. Greenberg (1990) proposes general principles of this sort. His principles govern the acts of individuals and coalitions alike. First, he unifies game theory by treating noncooperative games as cooperative games with extremely limited opportunities for joint action (pp. 5, 62). Second, he uses facts about a game that its characteristic function and payoff matrix do not display (pp. 4, 64, 87, 118). Third, in contrast with earlier accounts of credible deviations, he accommodates chains of responses with more than two links and flexible coalition structures (p. 77). I follow him in these respects but, unlike him, assume that rational ideal agents follow the principle of self-support, a generalization of the principle of utility maximization.

A unified theory of games should rest on general principles of rationality. Standards for coalitions should conform with standards for individuals. A generalization of the core for coalitional games should conform with a generalization of Nash equilibrium for noncooperative games. For example, if a unified theory adopts the bargaining set, kernel, or nucleolus for coalitional games, then for noncooperative games it should adopt an equilibrium standard that also recognizes counterobjections. The problem of an empty core mirrors problems with missing Nash equilibria in noncooperative games (with pure strategies only, or with an infinite number of players or of pure strategies). The general principle from which Nash equilibrium and the core both follow is that an agent should pursue every incentive. This general principle is flawed. It conflicts with rationality’s attainability. The principle of self-support replaces it and tolerates an agent’s failure to pursue an insufficient incentive. It generates unified equilibrium standards for both noncooperative and coalitional games. Chapter 9, following the treatment of noncooperative games in Chapter 6, formulates an attainable equilibrium standard for the coalitional games that this chapter presents.
9 Strategy for Coalitions

Coalitional games show that rationality does not require pursuit of all incentives. Agents in games with empty cores cannot pursue all incentives, but rationality is still attainable. Its demands adjust to circumstances. This chapter introduces strategic equilibrium as a generalization of a core allocation’s realization in a coalitional game. Strategic equilibrium is attainable. Achieving it replaces achieving a core allocation as a requirement of rationality.

The standard of strategic equilibrium for coalitions is an analogical extension of the standard of strategic equilibrium for individuals that Chapter 6 formulated. To prepare for its introduction, the first section defines a coalition’s incentives, and the second section defines a coalition’s pursuit of incentives. The third section presents and supports the standard of strategic equilibrium for coalitional games. It verifies this standard of collective rationality by showing that individual rationality entails its satisfaction. Chapters 10 and 11 have illustrations.
9.1 A Coalition’s Incentives

Solutions to both cooperative and noncooperative games specify collective acts that meet standards of collective rationality. Because being an equilibrium is necessary for being a solution, and because every ideal game has a solution, every ideal game has an equilibrium. Coalitional games may lack core allocations. Yet rational players reach a type of equilibrium. This chapter proposes weakening the standard of the core to obtain an equilibrium standard attainable in all ideal coalitional games. The weakening proposed has the support of attainable standards of rationality for agents. These standards require self-supporting strategies rather than strategies that nonconditionally maximize utility.

Strategic equilibrium’s extension from noncooperative games to coalitional games shows that a single type of equilibrium, generated by a single standard of collective rationality, suits both types of game. Its extension resembles the extension of Nash equilibrium in noncooperative games to the core in coalitional games. The extension of Nash equilibrium identifies a coalition’s incentives. The
extension of strategic equilibrium identifies a coalition’s sufficient incentives. Extending strategic equilibrium to coalitional games unifies the treatment of noncooperative and coalitional games. Because strategic equilibrium rests on rules for individuals making decisions, adopting it also unifies decision theory and game theory. It promotes unification of the branches of game theory and also unification of principles of individual and collective rationality.

In a coalitional game a strategy profile is a strategic equilibrium if and only if no coalition has a sufficient incentive to change its strategy given the profile. Incentives come in two types. An opportunity for improvement generates an objective incentive, and knowledge of an opportunity for improvement generates a subjective incentive. In ideal coalitional games the two types of incentive agree. However, a theory of rationality addresses subjective incentives. Accordingly, as this chapter understands a coalition’s incentives, they depend on the coalition’s options and its preferences among them. This section defines a coalition’s incentives, and the next section considers which incentives constitute sufficient reasons for options.

An account of a coalition’s incentives must use technical definitions of a coalition’s options and preferences because coalitions do not have minds, and so lack options and preferences in the ordinary sense. This section technically defines a coalition’s options and preferences to suit principles of rationality extended from individuals to coalitions. The definitions use the options, beliefs, and desires of the coalition’s members. They make standards of rationality for coalitions compatible with standards of rationality for individuals. In ideal cases the definitions make standards of collective rationality agree with intuitive goals of rational collective action. Fruitfulness for the theory of collective rationality justifies the definitions.

Because a coalition acts freely, it literally has options. It may form or not form and, if it forms, may divide its profits in many ways. In general, a coalition’s options are possible collective acts constituted by options of the coalition’s members. This characterization leaves some crucial points unsettled, however, so a coalition’s set of options needs a technical specification.

In a coalitional game individuals collectively realize a strategy profile. A strategy profile specifies for every coalition whether the coalition forms and its strategy if it forms. A coalition’s nonformation is a collective act, although not a joint act, and counts as its strategy. A strategy profile’s coalition structure specifies the coalitions that form. Those coalitions constitute a partition of agents, that is, a collection of sets of agents in which the sets are nonempty, disjoint, and exhaustive. Inevitably, in a coalitional game the agents realize a unique coalition structure. Because a game’s coalition structure is open, each multimember coalition has the option to form or not to form. More fully, a multimember coalition’s options are not to form, or to form and adopt a joint strategy. A joint strategy requires the members’ collaboration, although not necessarily their efficient collaboration. If they do not collaborate, the coalition adopts no joint strategy and has not formed. A joint act requires the coalition’s formation.
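The partition condition on coalition structures stated above is mechanical. Here is a minimal sketch of my own, with hypothetical names, checking that a proposed structure’s coalitions are nonempty, pairwise disjoint, and jointly exhaustive:

    def is_coalition_structure(structure, players):
        """A coalition structure partitions the players: nonempty coalitions
        that are pairwise disjoint and jointly exhaustive."""
        if any(not coalition for coalition in structure):
            return False
        union = set()
        for coalition in structure:
            if union & coalition:  # some player belongs to two coalitions
                return False
            union |= coalition
        return union == set(players)

    players = {"A", "B", "C"}
    print(is_coalition_structure([{"A", "B"}, {"C"}], players))       # True
    print(is_coalition_structure([{"A", "B"}, {"B", "C"}], players))  # False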
For a multimember coalition, failure to form is in the hands of its members. One member’s recalcitrance is all it takes for the coalition not to form. Collaboration is unnecessary for the coalition’s nonformation. The coalition’s formation requires the action of all its members, but its failure to form requires the action of only one member.

A coalition’s option by definition is an act that the coalition fully controls. A unit-coalition fails to form only if its unique member joins a larger coalition. This happens only given collaboration with others. A coalition fully controls an act just in case if the coalition were to realize the act, no outside agent would exercise any control over it. Hence, a unit-coalition does not have full control over not forming. No individual can on his own not form his unit-coalition. A unit-coalition has the option of attempting not to form but lacks the option of not forming. A unit-coalition may (1) attempt not to form or (2) not attempt not to form. The success of an attempt not to form depends on others; the unit-coalition may form despite the attempt not to form. The outcome of not attempting not to form is assured; the unit-coalition forms. Because the outcome is assured, I call the strategy of not attempting not to form "forming." But failing to exercise that strategy is attempting not to form; it is not failing to form. A unit-coalition cannot fail to form without nonmembers’ assistance. A unit-coalition’s nonformation is just a by-product of the formation of a larger coalition that includes the unit-coalition’s unique member.

Formation of some coalitions prevents formation of other coalitions. Not every strategy of every coalition is compatible with every coalition structure. Hence not every profile of feasible strategies is feasible. A feasible profile has strategies that are compatible with one another. It assigns strategies to coalitions to form a coalition structure. In noncooperative games, where every profile of feasible strategies is itself feasible, the feasibility of a strategy profile goes without saying. In coalitional games, specifying that a strategy profile is feasible adds clarity, although for brevity, this chapter often takes feasibility as understood.

If two strategies are incompatible, then given the first, the second is not realized. Nonetheless, given the first, the second remains realizable. Although their combination is not realizable, each strategy is realizable. Modal logic similarly observes that the impossibility of a conjunction entails that if one conjunct is true, then the other conjunct is false. The conjunction’s impossibility does not entail that if one conjunct is true, then the other conjunct is impossible.

In a single-stage coalitional game, if a coalition with an individual forms, another coalition including that individual does not also form, because an individual does not belong to more than one coalition that forms. A coalition’s realization of a strategy blocks the realization of strategies of other coalitions. For example, if {A, B} forms, then {A} does not form. Nonetheless, in a
single-stage coalitional game, where choices are simultaneous, one coalition’s strategy does not curtail other coalitions’ strategies. Although given {A, B}’s forming, {A} does not form, the unit-coalition retains the option of forming. All it takes for {A} to form is A’s choosing to form {A}. Agent A fully controls formation of his unit-coalition. He can form his unit-coalition until the coalitional game ends without its formation. Until that time, he can form {A} by leaving {A, B} even if he does not do that. An individual’s adopting an option prevents his adopting other options, but it does not entail that he lacks other options. Similarly, a coalition’s strategy may prevent another coalition’s formation, but it does not eliminate the other coalition’s option to form. The strategies available to coalitions do not change until the game ends.

The form of supposition the theory of rationality adopts for deliberations recognizes options’ persistence. In deliberations, supposition of a strategy profile preserves the strategies available to agents. In a coalitional game with two individuals A and B, given {B}’s formation {A} forms as well. Supposition of {B}’s formation does not preclude the possibility of {A, B}’s formation, however. If {B} forms, then {A, B} does not form. Nonetheless, if {B} forms, {A, B}’s formation is still an option. It is still possible even if not actual. Taking supposition this way facilitates rejection of self-justifying mistakes. A mistake may be self-justifying if its realization disqualifies rivals as options. The theory of rationality prevents self-justifying mistakes by recognizing unrealized options.

In some cases, if a coalition were to act differently, other coalitions would act differently too. Such dependencies hold because of an incompatibility of coalitions. They do not signal a change in strategies available to coalitions. If a coalition were to act differently, then others would have the same strategies. They just would not realize the same strategies. The difference in strategy realized would come from a difference in the acts of the relevant individuals, perhaps a switch from a rational to an irrational act. Because in a single-stage coalitional game all coalitions realize their strategies in the same stage, one coalition’s strategy does not change other coalitions’ options. If one coalition structure forms, others do not form. They can form, nonetheless, until the game ends without their formation.

Although moves in a single-stage coalitional game do not cause other moves, moves in the underlying sequential game may cause other moves. A coalition’s formation does not causally preclude formation of other coalitions comprising its members. However, in the underlying sequential game, members of coalitions with incentives to create a coalition structure may instigate it. Then their formation of coalitions causes the nonformation of coalitions outside the structure. In other cases, individuals’ nonformation of some coalition may cause formation of another coalition. A representation of the underlying sequential game may display causal relations among moves that lead to formation of coalitions. Moves of a single-stage coalitional game are not causally related, however. Realization of a coalition structure occurs all at once.
A coalition may have an incentive to move from one option to another. A coalition’s incentives depend on its members’ incentives. A multimember coalition has an incentive to form when it has a joint strategy that benefits each member. It has an incentive not to form when some member does better on her own or in another multimember coalition under some joint strategy of that coalition. Even one member’s profiting from a coalition’s not forming creates a coalition’s incentive not to form. A coalition has an incentive to disband if any member has an incentive to depart. Although in a single-stage coalitional game, coalitions do not form and then disband, deliberations may suppose a coalition’s formation. Given its formation, it may have an incentive to disband prior to the game’s conclusion.

Members’ unanimous preferences are necessary and sufficient for a coalition’s preference between two joint acts. According to this definition, a coalition’s preference ranking of joint acts is incomplete in many cases. For some pair of joint acts, members’ preferences between the acts are not unanimous, and no collective preference exists between them. The standard to follow collective preferences among joint acts agrees with the standard to follow unanimous individual preferences. The standard of unanimity does not require a definition of collective preferences. The definition of collective preferences and the standard of collective preferences are dispensable in favor of the basic standard of unanimity. It governs joint action in ideal conditions. The standard to follow collective preferences generates a shortcut method of evaluating a joint act. In ideal conditions a coalition following its preferences meets the standard of unanimity.

According to the definition of a coalition’s incentives, if a coalition’s value exceeds the sum of its members’ payoffs given some outcome, then the coalition has an incentive to form and realize an alternative outcome because its members have incentives to form the coalition and realize the alternative. A coalition’s incentive holds on balance, factoring in others’ response to its pursuit. A coalition’s pursuing an incentive to switch strategy may not end a coalitional game because the switch may prompt another coalition’s formation. The switch offers a chance that the coalition moves up its preference ranking, however, if there is a chance that the switch is final, that is, a chance that its outcome is a stopping point for coalitions’ pursuit of incentives.

All theorists take a coalition’s members’ unanimous preferences as sufficient for the coalition’s having an incentive to change its joint strategy. Not all take it as necessary, however. Some hold that preferences of some members and others’ indifference also yield an incentive to change joint strategy. However, making unanimous preferences necessary for a coalition’s incentive to switch joint strategies has a theoretical advantage. Implementation of a switch in joint strategies requires all members’ participation. Members who are indifferent to the switch may not participate. Rational members may block a coalition’s following its preference unless its preference requires their preferences, too. Therefore, I take
a coalition to have an incentive to switch joint strategy if and only if all members prefer switching. It follows that a strategy profile is efficient if and only if the grand coalition lacks an incentive to deviate.1

To make strategic reasoning vivid and to follow traditional terminology, I speak of coalitions responding to each other. In their coalitional game their responses are not causal, however. Coalitions do not observe the moves of others before making their own moves. All moves occur in the same stage. A response to a coalition’s strategy is just the strategies other coalitions adopt if the coalition adopts the strategy. A response to a coalition’s strategy, conjoined with the coalition’s strategy, yields a strategy profile with a coalition structure and a joint strategy for each multimember coalition that forms. Agents may causally respond to each other’s moves in the underlying sequential game only. There, individuals may accept or reject others’ proposals of a strategy profile for the coalitional game. Many combinations of individuals’ acts may realize the same joint acts of the coalitional game.

Information affects a coalition’s preferences among its options. This section technically defines a coalition’s knowledge so that a coalition knows a fact if and only if all its members do. As Section 8.4 explains, in ideal conditions for joint action, communication is perfect among individuals and coalitions. Because conditions for communication are ideal, information that aids a coalition’s joint action spreads among its members. Hence a coalition knows a relevant fact if and only if some member does. This chapter treats a coalition’s knowledge only in such ideal cases.

In an ideal single-stage coalitional game, each coalition knows others’ strategies directly. It does not need to infer their strategies from its strategy or to replicate the reasoning of their members. Individuals have direct, nonstrategic knowledge of the profile realized because they participate in joint acts constituting the profile. Also, because of ideal communication, if a coalition changes its strategy, other coalitions’ information changes to preserve its accuracy. Coalitions are prescient about others’ acts. Each knows, for each of its strategies, others’ response. Supposition of a profile’s realization carries to all agents direct, nonstrategic knowledge of its realization.

A coalition’s incentives to switch strategy arise from its knowledge of others’ response to its strategy. A strategy occurs in many profiles, of which only one gives others’ response to the strategy. A coalition may have an incentive to switch away from the strategy relative to the profile containing the response to it, but lack an incentive to switch away from the strategy relative to some other profile. Incentives are relative to a profile’s realization. An incentive to deviate from a strategy profile not realized rests on knowledge of the profile if it is realized. Supposition of a profile’s realization may counterfactually change incentives through changes in information about agents’ acts.

Supposing that an agent deviates from a strategy profile minimally revises the strategy profile. In a noncooperative game, supposition of an agent’s change in
strategy does not include other agents’ changes in strategy. In a coalitional game, they may include such changes. In a two-person coalitional game, for example, {A, B} does not switch from formation to nonformation unless {A} switches from nonformation to formation. Calculation of the utilities of an agent’s strategies given a strategy profile does not hold fixed the strategies of other agents if the agent’s switching strategies entails that other agents also switch strategies.

A profile of strategies for agents indicates the coalitions that form and the strategies they adopt. Given a profile, a coalition has an incentive to switch from a strategy s to a strategy s′ if and only if (1) s′ is a joint strategy and given s each of the coalition’s members prefers s′ to s, or (2) s′ is nonformation and given s at least one of the coalition’s members prefers s′ to s. This definition acknowledges that a coalition’s joint strategy requires the assistance of all its members, whereas any member of a multimember coalition can by herself block the coalition’s formation by forming her unit-coalition.

In a coalitional game, each coalition is an agent, and also the group of coalitions is an agent. The group of coalitions achieves the game’s outcome. It differs from the grand coalition of all individuals. The grand coalition need not form in order for the group of coalitions to act. If many coalitions form and act, the group of coalitions acts although the grand coalition does not form. The group of coalitions resembles a noncollaborating group of individuals in a noncooperative game. The group of coalitions acts collectively, but not jointly, whatever coalition structure is realized. If two coalitions communicate and strike a bargain, then their members form and act jointly within the coalition of their combined members. The pair of coalitions does not act jointly. In cases where it appears that the group of coalitions acts jointly, coalitions smaller than the grand coalition do not form and their members form the grand coalition and act jointly within it. Strategy profiles are not joint strategies of the group of coalitions because the profiles require only the joint action of coalitions in the coalition structure the profile realizes, and not the joint action of all coalitions.

In some contexts it may be fruitful to treat the whole group of coalitions as an agent. Its acts are rational if the coalitions’ acts are rational, so a standard of rationality for it may supply a shortcut method of evaluating coalitions.2 This book does not explore such shortcuts and works only with principles applying to coalitions of collaborating individuals. It does not assign options and incentives to the group of coalitions.
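Section 9.1’s two-clause definition of an incentive to switch translates directly into a test. The sketch below is mine; member preferences are supplied as a hypothetical predicate prefers(member, new, old), and is_joint flags whether the alternative is a joint strategy rather than nonformation.

    def has_incentive_to_switch(coalition, s, s_prime, prefers, is_joint):
        """Clause (1): a switch to a joint strategy needs every member's
        preference. Clause (2): a switch to nonformation needs only one
        member's preference, since any member can block formation."""
        if is_joint(s_prime):
            return all(prefers(m, s_prime, s) for m in coalition)
        return any(prefers(m, s_prime, s) for m in coalition)

    # Toy illustration from majority-rule division of $6: {A, B} prefers
    # the division (3, 3, 0) to (2, 2, 2).
    payoff = {"(2,2,2)": {"A": 2, "B": 2}, "(3,3,0)": {"A": 3, "B": 3}}
    prefers = lambda m, new, old: payoff[new][m] > payoff[old][m]
    print(has_incentive_to_switch({"A", "B"}, "(2,2,2)", "(3,3,0)",
                                  prefers, is_joint=lambda s: True))  # True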
9.2 Paths of Incentives
Section 9.3 uses coalitions’ paths of pursued incentives to define a strategic equilibrium of a coalitional game. As a preliminary, this section explains such
paths. It treats a coalition’s pursuit of incentives and a coalition’s halting pursuit of incentives.

In a coalitional game a strategy profile specifies a coalition structure according to which some coalitions form and others do not form. The nodes of a path of incentives are strategy profiles. The utility profiles they generate explain incentives to change strategy profile. A profile may start a path of incentives that begins with a formed coalition’s incentive to switch joint strategy or with an unformed coalition’s incentive to switch from nonformation to formation. A path of incentives for a coalition involves relative incentives, that is, incentives to switch strategy in the context of a profile. It terminates in a strategy in the context of a profile if, for instance, the coalition has no incentive to switch from the strategy, given the profile.

A path of incentives for a coalition depends on other coalitions’ pursuit of incentives because a coalition’s incentives depend on other coalitions’ responses to its strategies. Consider majority-rule division of $6. The coalition {A, B} has an incentive to move from (2, 2, 2) to (3, 3, 0). Suppose that the coalition {A, C} pursues its incentive to move from (3, 3, 0) to (4, 0, 2), and the coalition {B, C} pursues its incentive to move from (4, 0, 2) to (0, 2, 4). Then, if the coalition {A, B} pursues its incentive to move from (2, 2, 2) to (3, 3, 0), the result of the move is (0, 2, 4), and the coalition has a subsequent incentive to move to (2, 4, 0). A path of incentives for multiple coalitions implies a path of incentives for its initial coalition. Deleting other coalitions’ responses yields a path of incentives for the coalition that initiates the multicoalition path. Their responses just explain the origin of the coalition’s incentives, as Figure 9.1 shows. Its derived path puts in brackets the response to the coalition {A, B}’s pursuing its initial incentive.

A strategy profile specifies a feasible coalition structure and specifies each formed coalition’s division of its profits. Several coalitions may have the power to change one strategy profile into another. In a game with exactly two players A and B, both {A} and {B} have the power to change the coalition structure from {{A, B}} to {{A}, {B}}. I assume that exactly one coalition is the instigator of any change in strategy profile. The instigator is responsible for the change between successive nodes of a multicoalition path of incentives. The coalition instigating a change switches strategy without outsiders’ assistance, although its switch may require other coalitions to form or disband. Individuals’ acts in the underlying sequential game make some coalition the instigator of a change.
Multicoalition path: (2, 2, 2) →{A, B} (3, 3, 0) →{A, C} (4, 0, 2) →{B, C} (0, 2, 4) →{A, B} (2, 4, 0)
Derived path for {A, B}: (2, 2, 2) →{A, B} (3, 3, 0) [(0, 2, 4)] →{A, B} (2, 4, 0)

FIGURE 9.1 Multi- and single-coalition paths of incentives.
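Such paths can be traced mechanically. The sketch below is my illustration: it encodes the pursued incentives of Figure 9.1 as a successor map from each profile to the nearest alternative profile and follows the path from (2, 2, 2), stopping when no incentive is pursued or a profile recurs.

    def trace_path(start, successor, limit=20):
        """Follow pursued incentives until no coalition pursues one
        (successor lacks the node) or the path revisits a node (a cycle)."""
        path, seen = [start], {start}
        node = start
        while len(path) <= limit:
            node = successor.get(node)
            if node is None:
                break
            path.append(node)
            if node in seen:  # pursuit of incentives is endless
                break
            seen.add(node)
        return path

    # Pursued incentives in majority-rule division of $6 (Figure 9.1).
    successor = {
        (2, 2, 2): (3, 3, 0),  # {A, B} instigates the change
        (3, 3, 0): (4, 0, 2),  # {A, C}
        (4, 0, 2): (0, 2, 4),  # {B, C}
        (0, 2, 4): (2, 4, 0),  # {A, B}
    }
    print(trace_path((2, 2, 2), successor))
    # [(2, 2, 2), (3, 3, 0), (4, 0, 2), (0, 2, 4), (2, 4, 0)]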
A coalition’s pursuing an incentive in a coalitional game instigates a change if and only if in the underlying sequential game some member’s pursuing an incentive instigates the coalition’s pursuit of the incentive.

Given a profile’s realization, an agent may have an incentive to switch strategy. This depends on the consequences of the agent’s switching to an alternative strategy. If the agent were to switch to the alternative, what would happen? According to a standard interpretation of conditionals, his switch triggers a minimal departure from the profile. The agent’s switch yields the causally nearest world in which he adopts the alternative strategy. The context, especially the type of game, influences the nearness of worlds and thus events in the nearest world with the agent’s switch. In a noncooperative game a minimal departure from a profile does not include other agents’ switches. In a coalitional game it may.3
Its formation produces 0, and every other outcome yields at least 0 for {C}. Thus {C} has a path away from the original profile, and that path terminates in nonformation. Given nonformation as a strategy for unit-coalitions, there is a similar terminating path away from every unit-coalition’s part in every profile where some unit-coalition forms. A unit-coalition, if rational, pursues such incentives if it controls their pursuit. In majority-rule division of $6, in a profile where no unit-coalition forms, the grand coalition forms. In every profile where the grand coalition forms, it has a terminating path of incentives away from formation to nonformation. No matter how the grand coalition proposes to divide the $6, two members of the grand coalition have an incentive to desert the grand coalition to form a two-person coalition that gains the $6. This constitutes an incentive of the grand coalition to disband, because a coalition has an incentive to disband if any subcoalition has an incentive to form. The grand coalition has no incentive to form again after it has disbanded. As long as two-person coalitions pursue optimal incentives, the grand coalition cannot give each individual more than she receives in a rival two-person coalition; the grand coalition is not more productive than is a two-person coalition. So the grand coalition has an incentive away from formation to nonformation, and no incentive away from nonformation. Its path of incentives away from formation terminates in nonformation. Given its formation, it has a sufficient incentive not to form and so pursues that incentive. The grand coalition’s formation starts a path of pursued incentives that terminates in a profile where the grand coalition does not form. In an ideal version of the majority-rule game, the grand coalition has an undeniable incentive to disband. Consequently, providing for rationality’s attainability requires not recognizing an incentive of a unit-coalition to disband. Rejecting nonformation as an option for a unit-coalition discredits such an incentive. Moreover, as Section 9.1 observes, independent reasons support not recognizing that option. Section 9.1 defines a coalition’s incentives in terms of its members’ unanimous preferences. Consequently, a coalition’s incentives never proceed from one joint strategy to another in a circle back to the original joint strategy. However, a coalition’s incentives to switch strategy, from formation to nonformation and the reverse, may lead the coalition around a circle. A coalition may have an incentive to switch from nonformation to a joint strategy. The joint strategy may then create an incentive for a member to defect, and thus an incentive for the coalition to disband. This cycle does not show that the coalition’s incentives to switch strategy are irrational. Incentives to switch strategy are conditional preferences, and rational conditional preferences may generate a cycle. A restless but rational traveler may prefer being in Paris to being in Venice, given that he is in Venice, and prefer being in Venice to being in Paris, given that he is in Paris. In an elementary coalitional game independence of a coalition’s payoff mitigates incentives’ relativity to profiles. A coalition’s incentives if it forms are
constant. Despite entailment relationships among coalitions' strategies, a multimember coalition's payoff from its strategies is independent of others' strategies. If the coalition forms, it gains its value (using an efficient joint strategy). If it does not form, it gains nothing (although its members may gain as members of other coalitions). A change in others' strategies does not influence the coalition's payoff from its strategy. Whatever others do, its possible strategies are unchanged, and its payoff from its strategy depends only on the strategy it realizes.4 A multimember coalition may have a self-undermining incentive. If it does not form, it may have an incentive to form, and yet realize that if it forms it has an incentive to disband. Its formation may lead to responses that it anticipates and that leave it with an incentive to disband. Then its incentive to switch from nonformation to formation is self-undermining. For example, in a three-person game where {A, B} does not form, that coalition may have an incentive to form to obtain gains for A and B. But C's response if {A, B} forms may give that coalition an incentive to disband. Agent C may offer B inducements to collaborate exclusively with him and so give B an incentive to leave the coalition {A, B}. Agent C's response does not dissolve the coalition {A, B} but gives the coalition an incentive to disband. Because of that incentive, its incentive to switch to formation is self-undermining. Because of self-undermining incentives, the grand coalition may rationally adopt a collective strategy that does not achieve efficiency. It may fail to form because members pursue incentives to form other coalitions. By halting pursuit of incentives at a profile in which it does not form, it may forgo pursuing an incentive to achieve an outcome better for each member. Rationality requires pursuit of sufficient incentives only, and self-undermining incentives are not sufficient. Principles of rationality regulate pursuit of incentives, in particular, instigation of a change in strategy profile. A coalition's pursuit of incentives, to be rational, must conform to standards of rationality for selecting incentives to pursue when there are several, and for stopping pursuit of incentives when pursuit of selected incentives is endless. The standards leave agents some latitude. So paths of pursued incentives depend on agents' psychologies. The psychologies of a coalition's members settle its pursuit of incentives. Its pursuit of incentives then settles the behavior rational for it. Rational behavior for the coalition depends on its members' psychologies, as a person's rational choice between chocolate and vanilla ice cream depends on her tastes. A selection rule governs the incentive pursued if any is pursued. A stopping rule governs the place pursuit of incentives stops if it stops. The selection and stopping rules that Weirich (1998) presents for individuals in noncooperative games extend to agents in coalitional games. This section presents only the extension's key points. It does not explore in detail the requirements of rational pursuit of incentives, because for its purpose their main consequence is just that rational coalitions may stop pursuit of incentives. Some incentives are
insufficient. The stopping rule says that when pursuit of selected incentives is endless, an agent may stop at an incentive that is not a sufficient reason to switch. Pursuit of incentives need not be relentless if that makes pursuit endless. The details of rational pursuit of incentives, although they add flesh to examples, do not affect the theoretical points about collective rationality in Section 9.3. That section establishes the existence of a strategic equilibrium in an ideal coalitional game without showing that a particular profile is a strategic equilibrium. It derives the collective standard of strategic equilibrium from individual standards using only the general assumption that individuals and coalitions pursue incentives rationally. The derivation does not require rules that explicate rational pursuit of incentives. This section formulates simplified selection and stopping rules to illustrate such rules and to fill out examples and comparisons. Examples treating ideal games assume that pursuit of incentives is completely rational and complies with the simplified rules. Compliance with those rules may not be sufficient for rationality and may be merely consistent with rather than necessary for rationality. Selection and stopping rules for coalitional games address the incentives of all agents together and through them the incentives of each agent separately. The selection rule permits selecting any coalition with an incentive when there are several and for that coalition selects an incentive optimal to pursue, if one exists. This rule puts aside any considerations favoring a coalition with an incentive that starts a terminating rather than an endless path. It also puts aside any global strategic considerations favoring pursuit of a suboptimal incentive. The stopping rule permits any agent's halting at any node in a circular path of selected incentives and at any node in a noncircular but endless path of selected incentives, except a node with the agent's initial strategy. It puts aside, for instance, considerations favoring nodes in a cycle to which the path leads. Although these simplified selection and stopping rules shelve several pertinent considerations, examples use games in which those considerations do not matter.5 The selection and stopping rules govern ideal games in which individuals have opportunities for communication and, if necessary, binding agreement, to achieve efficient outcomes for coalitions to which they belong if those coalitions form. A coalition that forms adopts an efficient strategy if it pursues an optimal incentive. Its strategy is efficient for the coalition, given the behavior of other coalitions that form. For example, given the unit-coalition structure, if each unit-coalition maximizes utility among its independent strategies, then the resulting strategy profile achieves efficiency within coalitions that form. Because paths of selected incentives follow optimal incentives, if a coalition pursues incentives beyond its initial strategy, it does not halt pursuit of incentives at a profile in which it forms and adopts an inefficient joint strategy. Paths of pursued incentives, that is, paths of nearest alternative profiles, depict a coalitional game's deliberational dynamics. They use dispositions to pursue
incentives in hypothetical situations to explain how players' strategic reasoning leads them to a particular profile. The dynamics move through tentative decisions about coalition formation and joint action. Its stages are argumentative, not temporal, although they have temporal counterparts in the underlying sequential game. A coalitional game's dynamics depends on the incentives pursued at each strategy profile, given pursuit of some incentive, and the stopping point for pursuit of incentives along an endless path of selected incentives. The selection rule ensures that paths of pursued incentives do not fork, and the stopping rule governs their termination. A path moves from a profile generating an incentive along a single path to a terminal profile. Each starting point yields exactly one endpoint. A strategic equilibrium in the coalitional game depends on constraints on the dynamics of pursuit of incentives, as an equilibrium state of a ball in a basin depends on momentum and gravity's constraints on the dynamics of the ball's motion in the basin. A strategic equilibrium is a steady state of the dynamics. Its realization responds to all sufficient incentives. The players at stages of deliberation survey the whole game and deliberation's progress. They process all reasons. The players are strategic, look-ahead reasoners and so can find a global maximum, not just a local maximum. As one sees where a ball rolling in a basin will come to rest and may place the ball there immediately, players see where their tentative decisions will end and may go there immediately. Also, rational players can jump to the best basin of attraction in one step. They need not wait for chance events to create the jump, as in an evolutionary dynamics. The players' realization that they are headed toward an inferior profile may either make them swerve toward a superior profile, or restart their deliberations on a better trajectory. The dynamics move from one stage of deliberation to another until deliberations reach a halt. When there are several halting places, the dynamics eventually settle on one. The final profile arises from the agents' rationality and their psychologies. The agents' preparation for a coalitional game yields their coordination to realize a particular profile. Deliberation for strategic reasoners may work backward from a desired end point to a starting point that leads there. Players settling incentives to pursue may begin by observing where they want to end and then devise a way to arrive there. Although a rational player pursues her goals at each stage, the players may reach a particular profile by forming in advance suitable rational dispositions to pursue incentives. Those dispositions create their deliberational dynamics.

9.3 Strategic Equilibria in Coalitional Games
This section introduces the standard of strategic equilibrium for coalitional games. The standard formulates a requirement of collective rationality. It governs after-the-fact evaluation of a group’s act. It does not explicitly direct a group’s act
although it implies some procedural principles for favorable cases. After defining strategic equilibrium, the section shows that in every ideal coalitional game a strategic equilibrium exists, and individuals’ rationality entails realization of a strategic equilibrium. Equilibrium is a reasonable requirement for a solution to a coalitional game, but the requirement needs an account of equilibrium according to which every ideal coalitional game has at least one equilibrium. Equilibrium outcomes may exist more widely than do core allocations. The core overlooks types of equilibrium broader than joint utility maximization. The decision principle of self-support, a generalization of the principle of utility maximization, yields another type of equilibrium, namely, joint self-support. In coalitional games the principle of self-support applies to coalitions as well as to individuals. A strategy profile’s being out of equilibrium is a matter of opposition to its strategies. A profile is out of equilibrium if an alternative profile opposes it. One profile opposes another profile if in deliberations it is rational to consider it instead of the other profile. An out-of-equilibrium profile may be decisively rejected in favor of an alternative. Decisive rejection entails that rational deliberation never returns to the profile. Suppose that some coalition has an incentive to switch away from a profile. A second coalition has an incentive to abandon the profile that the first coalition’s switch would realize. A third coalition has an incentive to abandon the profile that the second coalition’s switch would realize. This path of incentives continues until some coalition’s incentive to switch leads it to the original profile. The return to the original profile allows it to be an equilibrium despite the first coalition’s incentive to switch away from it. A path of incentives leading away from a profile does not disqualify that profile as an equilibrium unless it settles on some alternative profile. If the path of incentives does not terminate, then it does not produce an alternative that decisively replaces the original profile in deliberations. For example, suppose that a group may select an annual income for itself. For each figure it may select, some alternative is better. Although some path of incentives leads away from selection of any given income, no such path terminates. As a result, the group may select a figure despite the availability of higher figures. Selecting the figure may be an equilibrium of rational deliberations. Rational pursuit of incentives does not require relentless pursuit of incentives. Pursuit of incentives may stop with a self-supporting strategy, one from which an agent has no sufficient incentive to deviate. Halting pursuit of incentives is not irrational in cases where not all can pursue incentives relentlessly. Although rational individuals seek gains, some may lose without being irrational, as a child playing musical chairs may lose without being irrational. Applying selection and stopping rules to prune and truncate paths of incentives yields paths of pursued incentives. A path of pursued incentives terminates in a strategy profile. A profile meets opposition if it initiates a path of pursued incentives. Otherwise, it is an equilibrium. It is at the bottom of a basin of
attraction in the dynamics pursuit of incentives creates. A strategic equilibrium is a feasible strategy profile such that given the profile no coalition has a path of pursued incentives away from the profile. Strategic equilibria exist in concrete realizations of coalitional games. An outcome's being a strategic equilibrium depends on more features of a concrete coalitional game than a characteristic function represents. In particular, it depends on the incentives that coalitions pursue when they cannot pursue all incentives. An adequate representation of a concrete coalitional game supplements a characteristic function with a representation of coalitions' pursuit of incentives, such as a directed graph of the type Figure 9.1 illustrates. A characteristic function represents bargaining leverage that comes from coalition formation but not leverage that comes from, say, being more patient than others are. Pursuit of incentives registers bargaining power that a characteristic function omits. It represents players' psychologies. Concrete coalitional games are not characteristic-function-form games in the sense of being adequately represented by characteristic functions. Because being a strategic equilibrium is a necessary condition of being a solution, pursuit of incentives is a relevant feature of a concrete coalitional game. An adequate abstract representation of the game includes it. Using coalitions' paths of pursued incentives to define a strategic equilibrium is equivalent to using multicoalition paths for the same work. If a profile starts no path of pursued incentives, then it starts no path of a single coalition's pursued incentives. Also, any path of pursued incentives away from a profile entails that some coalition has a path of pursued incentives away from the profile. The path's first incentive belongs to some coalition and starts a path for that coalition. Hence a profile starts no path of pursued incentives just in case it starts no path of pursued incentives for any single coalition. Strategic equilibria are more prevalent than profiles realizing core allocations but still narrow the set of possible solutions in most concrete coalitional games. In the game in Section 8.4 with the characteristic function v(A) = v(B) = v(C) = 1, v(AB) = v(BC) = v(AC) = 4, v(ABC) = 12, not all strategy profiles are strategic equilibria. Coalitions rationally pursuing incentives continue until the grand coalition forms and divides its value. Profiles without formation of the grand coalition are not strategic equilibria. The strategic equilibria of a concrete version of three-person majority-rule division of $6 depend on the agents' psychologies. If A does not pursue incentives, then a strategic equilibrium yields (0, 3, 3), and no strategic equilibrium yields (2, 2, 2). The standard of strategic equilibrium narrows the field of candidates for a solution in light of the agents' pursuit of incentives. Other standards of collective rationality may narrow the field further. A solution requires joint rationality, and an equilibrium offers only joint self-support. Self-support attends to sufficient incentives but does not consider all reasons. It is an equilibrium among reasons of a certain type. Some strategic equilibria may fail to be solutions.
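Because the selection rule keeps paths of pursued incentives from forking, a concrete game's pursuit of incentives can be pictured as a successor map on strategy profiles, in the spirit of the directed graph that Figure 9.1 illustrates. The following sketch in Python uses invented profile names and an invented successor map, not data from the text, to express the definition: a profile is a strategic equilibrium exactly when it starts no path of pursued incentives.

# A minimal sketch of strategic equilibrium as defined above. The profiles
# and the successor map are illustrative inventions, not data from the text.
# pursued[p] is the profile reached by the incentive selected for pursuit
# at p, or None if no incentive is pursued there.
pursued = {
    "p1": "p2",   # some coalition pursues an incentive away from p1
    "p2": "p3",
    "p3": None,   # pursuit of incentives halts here (stopping rule)
    "p4": None,   # no coalition pursues any incentive away from p4
}

def is_strategic_equilibrium(profile):
    # A profile is a strategic equilibrium iff it starts no path of
    # pursued incentives, i.e., it has no pursued successor.
    return pursued[profile] is None

def terminal_profile(profile):
    # Follow the non-forking path of pursued incentives to its end.
    visited = set()
    while pursued[profile] is not None:
        if profile in visited:
            raise ValueError("pursued incentives cycle; no terminal profile")
        visited.add(profile)
        profile = pursued[profile]
    return profile

print([p for p in pursued if is_strategic_equilibrium(p)])  # ['p3', 'p4']
print(terminal_profile("p1"))                               # p3

On this picture, the existence claim that this section defends corresponds to a simple observation about the successor map: given that rational pursuit of incentives terminates, following the map from any profile must end at a profile with no successor, and that profile is a strategic equilibrium.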
In an ideal coalitional game every coalition has a self-supporting strategy. This follows from applying to coalitions points about agents mentioned in Chapter 6. A profile of self-supporting strategies may not be jointly self-supporting, however, and so may not be a strategic equilibrium. An agent's information about other agents' strategies depends on the profile realized. A profile of self-supporting strategies' realization may affect information, incentives, and so self-support. In an ideal game, all agents know the coalition structure realized given the profile realized. They know which strategies are feasible given the coalition structure realized. One coalition's self-supporting strategy may be incompatible with another coalition's self-supporting strategy. A profile of self-supporting strategies may not be feasible. For example, suppose that formation strategies for {A} and {A, B} are self-supporting taken by themselves. The coalition {A, B} may have no incentive to deviate from its formation strategy because neither member does better by deviating. The coalition {A} may also have no incentive to deviate from its formation strategy because it does not have the option to disband and form {A, B}. Although forming {A} and forming {A, B} are self-supporting, they are not self-supporting taken together because they are incompatible. Rolling back idealizations may facilitate reaching a strategic equilibrium. If an agent has no information about others' responses to his strategies, problems of strategic reasoning do not arise. An agent's strategy does not furnish evidence about other agents' strategies and so create incentives to deviate. Removing idealizations may make equilibrium easier to achieve by eliminating incentives to deviate from a strategy in the context of a profile. Agents may more easily pursue all incentives, which is a way of achieving strategic equilibrium. In an ideal (concrete) coalitional game, agents' full rationality includes their rational pursuit of incentives, and their complete information about their game includes information about their pursuit of incentives. Although existence of a strategic equilibrium does not follow from existence of self-supporting strategies, every ideal coalitional game has at least one strategic equilibrium. The proof that a strategic equilibrium exists shows that for a profile not to be an equilibrium, some other profile must be an equilibrium. To begin, select any profile of an ideal coalitional game. Suppose that it is not an equilibrium. Then it starts a path of pursued incentives. The path terminates with a profile that is an equilibrium because a terminal profile initiates no path. If a coalition has a path of pursued incentives away from a profile, then the terminal strategy for the coalition and the response to it form an equilibrium. These simple points establish the existence of a strategic equilibrium in every ideal coalitional game. In a coalitional game, a strategic equilibrium has jointly self-supporting strategies for coalitions. Realization of a strategic equilibrium is equivalent to realization of jointly self-supporting strategies for individuals in the underlying sequential game. Joint self-support for coalitions agrees with joint self-support
for individuals because of the relation between incentives for coalitions and for individuals. Suppose that no coalition has an incentive to deviate from a profile in a coalitional game. Individuals form unit-coalitions, so none has an incentive to deviate unilaterally. None has an incentive to deviate jointly with others, unless that deviation is implementable and so profits each participant. Those conditions yield a coalition’s incentive to deviate and so contravene the supposition. Joint acts of coalitions of which an individual is a potential member represent all the individual’s opportunities to participate in joint acts. So no individual in the underlying sequential game has an incentive to deviate from the profile. For example, consider an element of the core in a coalitional game with a nonempty core. No coalition, and thus no individual, has an incentive to depart from its realization. Moreover, if some coalition has an incentive to deviate from a profile, then some individual has an incentive to deviate from its realization. The coalition’s incentive requires the incentives of some members. Therefore, a coalition has an incentive to deviate from a profile if and only if some individual in the underlying sequential game has an incentive to deviate from its realization. These points about incentives hold for sufficient incentives also. If no coalition has a sufficient incentive to deviate from a profile, then no individual has a sufficient incentive to deviate either unilaterally or jointly with others. An individual’s having a sufficient incentive to deviate requires some multimember- or unit-coalition’s having a sufficient incentive to deviate. If all coalitions have excuses for not deviating, then their members have excuses. Moreover, if some coalition has a sufficient incentive to deviate from a profile, then some individual does. Rationality does not require a coalition to act unless it requires at least one member to act. Therefore, in a coalitional game a profile has jointly self-supporting strategies for coalitions if and only if its realization in the underlying sequential game has jointly self-supporting strategies for individuals. Because coalitions’ realizing a strategic equilibrium is equivalent to individuals’ realizing jointly self-supporting strategies, three points follow. (1) Strategic equilibria among coalitions identify strategic equilibria among individuals. (2) Strategic equilibrium is a standard for solutions to coalitional games. (3) Strategic equilibrium is a standard of collective rationality in ideal coalitional games. The remainder of this section establishes these points. A coalitional game’s underlying dynamics indicate strategic reasons for a strategy profile. Many avenues of strategic reasoning may lead to the profile. A profile’s status as a strategic equilibrium in a coalitional game may be independent of detailed moves in the underlying sequential game, however. General features of the sequential game’s dynamics may ensure that the profile is a strategic equilibrium. They offer a shortcut to identification of strategic equilibria. For simplicity, a game’s analysis using coalitions’ pursuit of incentives does not decompose it into individuals’ pursuit of incentives. Individuals may pursue incentives in varied ways that do not matter, as long as a coalition’s incentives
are pursued. The analysis may concentrate on collective dynamics instead of individual dynamics. Economy justifies an analysis that uses collective incentives. The analysis discovers a strategic equilibrium more easily than application of first principles does. Take a coalitional game with a single strategic equilibrium (which may be a core element). Spotting it is quicker than working through the individual incentives that generate it. Identifying strategic equilibria for coalitions is a shortcut method of identifying strategic equilibria for individuals. As shown, in ideal coalitional games a strategic equilibrium for coalitions results from individuals' compliance with the principle of self-support, and individuals' compliance with the principle yields a strategic equilibrium for coalitions. The shortcut method, by identifying strategic equilibria for coalitions, identifies all strategic equilibria for individuals in the underlying sequential game. The shortcut saves computation but sacrifices generality in identifying profiles in which individuals achieve joint self-support. It addresses ideal games only. Next, this section shows that strategic equilibrium is a standard for solutions to ideal coalitional games. This requires showing that strategic equilibrium is necessary for a solution. To show that a solution is a strategic equilibrium, one must show that a strategic equilibrium emerges if all coalitions are rational. A demonstration that collective rationality within and among coalitions generates a strategic equilibrium in an ideal coalitional game may use principles of collective rationality, such as the standard of self-support for coalitions. However, such principles are derivative and have restricted range. Support using first principles is more reliable. Standards of rationality apply to a group by way of its members because a group acts through its members and not directly. A group is rational if all its members are rational, as compositionality asserts. A solution is a profile of strategies that are jointly rational. A solution's realization entails the joint rationality of all agents, and so the rationality of all agents. The rationality of all agents entails the rationality of all individuals. The rationality of all individuals entails realization of a strategic equilibrium. Hence, a solution's realization entails a strategic equilibrium's realization. To back this argument that being a strategic equilibrium is necessary for being a solution, the following paragraph shows that individuals' universal rationality entails realization of a strategic equilibrium. It shows that a strategic equilibrium emerges if all individuals in a coalitional game are rational. Showing that individuals acting rationally in a coalitional game realize a strategic equilibrium confirms that a solution is a strategic equilibrium. A coalition's paths of incentives follow from individuals' paths of incentives. If a coalition fails to pursue a sufficient incentive, then some member fails to pursue a sufficient incentive. If a coalition irrationally fails to pursue an incentive, some member irrationally fails to pursue an incentive. If the individuals in a coalitional game fail to realize a strategic equilibrium, then at least one is irrational. If a strategic equilibrium is not realized in an ideal coalitional game,
then some individual fails to pursue incentives rationally. Violating a selection or stopping rule entails a coalition’s, and at least one member’s, irrational response to incentives. For example, stopping pursuit of incentives despite having a sufficient incentive to switch strategy is irrational for a coalition. It results from at least one member’s irrationality. In the case of an incentive to form a coalition, it is contrary to a sufficient incentive of every agent. In the case of an incentive to disband, it is contrary to a sufficient incentive of some agent. As shown, universal rationality entails strategic equilibrium. Using the technical definition of a coalition’s options and incentives, the standard of strategic equilibrium for solutions to coalitional games follows from the basic standard of composition by rational acts of individuals. The standard requires a coalition to follow sufficient collective preferences among its options. Rational acts by a coalition’s members entail the coalition’s compliance with the standard. The entailment, besides verifying the standard of strategic equilibrium for solutions to coalitional games, also establishes the fruitfulness of the technical definitions of a coalition’s options and incentives. The definitions make the standard of strategic equilibrium a sound principle of joint rationality. Universal rationality’s entailment of strategic equilibrium is a crucial point and merits further exploration. Chapter 11 illustrates the entailment by analyzing coalitional games as noncooperative games, for example, by analyzing coalition formation as a combination of individuals’ strategies. It shows how the entailment proceeds in a coalitional game, first, from individual rationality in the underlying sequential game to strategic equilibrium in the sequential game and, then, to strategic equilibrium in the coalitional game. This section’s last step is to show that strategic equilibrium is a standard of collective rationality in ideal coalitional games. Collective rationality is not equivalent to realizing a solution. Collective rationality may require more than realizing a profile of strategies that are jointly rational. It may require realizing such a profile for the right reasons. It may have a procedural component. Also, collective rationality may require less than realizing a solution does. Realizing a solution demands realizing a profile of rational strategies. Collective rationality may not demand universal rationality. It tolerates inconsequential mistakes. The players in a game may be collectively rational although one player is irrational in a nondamaging way. A coalition with an irrational member may nonetheless pursue sufficient incentives, for instance. A strategic equilibrium is a profile of strategies. Two strategy profiles may generate the same outcome. That is, their outcomes may be the same in relevant respects. Collective rationality demands only an outcome equivalent to the outcome of a profile of universally rational strategies. So collective rationality does not entail realization of a strategic equilibrium. Strategic equilibrium is a standard of collective rationality in the sense that collective rationality entails an outcome the same in relevant respects as the outcome of a strategic equilibrium. The relevant respects depend on what matters to the individuals in the concrete
game. Their utility functions represent what matters to them. So their collective rationality entails realizing a utility profile that a strategic equilibrium realizes. Imagine an ideal coalitional game in which collective rationality requires the rationality of all players. This may happen because collective rationality requires every player to pull his oar; no player’s irrationality is inconsequential. In such a game, a single player’s irrationality entails collective irrationality. That is, collective rationality entails universal rationality. Because universal rationality entails strategic equilibrium, strategic equilibrium is then necessary for collective rationality in this type of ideal coalitional game. In an ideal coalitional game, collective rationality yields a utility profile that a solution yields. Realizing a solution entails realizing a strategic equilibrium, as shown. If a utility profile is not the outcome of a strategic equilibrium, then it is not the outcome of a solution. Hence, it is not a product of collective rationality. Collective rationality yields the utility profile a strategic equilibrium generates. Therefore in an ideal coalitional game, strategic equilibrium is a standard of collective rationality in the appropriate sense. As this chapter shows, collective rationality supports strategic equilibria rather than core allocations in ideal coalitional games. Strategic equilibrium makes an attainable standard of collective rationality. Chapter 10 elaborates the case for strategic equilibrium.
10
Illustrations and Comparisons
This chapter illustrates strategic equilibrium in coalitional games and compares it with other types of equilibrium such as realizing an element of the core. In addition, it compares the standard of strategic equilibrium with nonequilibrium standards for solutions such as efficiency. It shows that the standard of strategic equilibrium coheres well with other components of a theory of collective rationality.
10.1 The Majority-Rule Game

In the game of dividing $6 among three people by majority rule, the values of coalitions are: v(A) = v(B) = v(C) = 0, v(AB) = v(BC) = v(AC) = 6, v(ABC) = 6. The core is empty. What are the strategic equilibria? They depend on the incentives that coalitions pursue in a concrete realization of the game. In an ideal realization of the game, players are ideal, fully rational, and in circumstances perfect for forming coalitions to perform joint acts. They have a broad common knowledge of their game including its characteristic function, their rationality, and their psychologies. They know how players pursue incentives and are prescient about responses to their strategies. Perhaps prior to the game a psychologist tests all players and announces their pursuit of incentives. What paths of incentives arise for coalitions in the majority-rule game? For simplicity, suppose that the $6 available may be divided only into amounts of whole dollars. Some possible outcomes are (0, 3, 3), (1, 2, 3), and (2, 2, 2). Another possible outcome is failure to reach a majority decision about a division of the money. It yields no gain for anyone, that is, the outcome (0, 0, 0). Majority decisions distributing less than $6 are possible, too. A majority may leave a portion of the $6 on the table. From any profile in which the money distributed totals less than $6, a path of incentives proceeds to a profile in which it totals $6. A thorough representation of the game describes the players' pursuit of incentives using hypothetical conditionals to specify responses to strategy profiles.
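That the core is empty can be verified mechanically from the characteristic function. The following sketch applies the standard definition of the core, on which every coalition must receive at least its value, to the whole-dollar allocations; it is an illustration of that definition, not a procedure drawn from the text.

from itertools import product

# Characteristic function for majority-rule division of $6 among A, B, and C.
v = {("A",): 0, ("B",): 0, ("C",): 0,
     ("A", "B"): 6, ("B", "C"): 6, ("A", "C"): 6,
     ("A", "B", "C"): 6}

players = ("A", "B", "C")

def in_core(allocation):
    # An allocation belongs to the core iff every coalition's members
    # together receive at least the coalition's value, so that no
    # coalition has an incentive to break away.
    pay = dict(zip(players, allocation))
    return all(sum(pay[i] for i in S) >= value for S, value in v.items())

whole_dollar = [a for a in product(range(7), repeat=3) if sum(a) <= 6]
print([a for a in whole_dollar if in_core(a)])  # [] -- the core is empty

Each two-person coalition demands $6, but any allocation of the $6 leaves some pair with less than $6 between them, so no allocation survives every coalition's test.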
In an ideal version of the game, coalitions follow the selection and stopping rules for pursuit of incentives. According to the selection rule, each coalition optimizes. Given a profile, if multiple incentives arise, the coalition pursues an incentive that maximizes its gain if it pursues any incentive. According to the stopping rule, each coalition pursues sufficient incentives to switch strategy. Hence, it pursues any selected incentive that leads to a profile that does not generate an incentive to switch. The selection rule takes agents to divisions that total $6 and prevents return to a division totaling less than $6. The stopping rule then authorizes any division totaling $6. Such divisions are potential strategic equilibria. The actual strategic equilibria depend on the agents' psychologies. The coalition {A, B} can form and then adopt the division (3, 3, 0). Suppose that no coalition pursues an incentive away from that outcome. For example, the coalition {B, C} does not pursue its incentive to form and then adopt the division (0, 4, 2). In this case, realizing the division (3, 3, 0) is a strategic equilibrium. There may be other strategic equilibria, too. The coalition {B, C} can form and then adopt the division (0, 3, 3). Perhaps no coalition pursues an incentive away from that division. Then achieving it is another strategic equilibrium. The agents, if rational, realize a strategic equilibrium of their pattern of pursuit of incentives. Rational players may realize a strategic equilibrium despite incentives to switch to another strategy profile. Realizing (3, 3, 0) is rational although the coalition {B, C} has an incentive to form and then adopt the division (0, 4, 2). Coalitions cannot pursue all incentives, and rationality does not require the impossible. The impossibility of all pursuing incentives relentlessly gives each coalition a reason to relent. Once one relents, others have no reason to relent. Coalitions' incentives depend on individuals' incentives, and incentives coalitions pursue depend on incentives individuals pursue. If given (3, 3, 0) the coalition {B, C} does not pursue its incentive to form and achieve the division (0, 4, 2), then B does not pursue her incentive to initiate {B, C}'s formation and to bargain with C to achieve the division (0, 4, 2). Although in a coalitional game coalitions' incentives define strategic equilibrium, it arises from the incentives of individuals that generate coalitions' incentives. Reasons for pursuit of incentives take account of all strategic considerations, including principles favoring an efficient equilibrium. Nonetheless, multiple strategic equilibria may exist. The strategic equilibrium realized depends not just on agents' pursuit of incentives, but also on their coordination to realize a particular strategic equilibrium when multiple strategic equilibria exist. Reasons governing preparation for deliberations may guide their coordination. If a game is purely competitive so that players do not coordinate, then features of their situation such as their starting point for deliberations settle the realized strategic equilibrium. Because realization of a particular strategic equilibrium depends on nonstrategic considerations, a theory of equilibrium selection covers players' psychologies and their preparation's effect on the equilibrium they realize.
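To see why pursuit of incentives is potentially endless here, it helps to list, for a given division, the two-person coalitions with an incentive to switch away from it. The sketch below is an illustration under the example's assumptions (whole-dollar divisions of the full $6, a coalition's incentive as a strict gain for each member), not a procedure from the text.

players = ("A", "B", "C")
pairs = (("A", "B"), ("B", "C"), ("A", "C"))

def incentives_away(division):
    # Return, for each two-person coalition with an incentive to switch,
    # the divisions of the full $6 that give each member strictly more
    # than the current division gives it. Shares are listed in the
    # coalition's own order (i, j).
    pay = dict(zip(players, division))
    found = {}
    for i, j in pairs:
        better = [(x, 6 - x) for x in range(7)
                  if x > pay[i] and 6 - x > pay[j]]
        if better:
            found[(i, j)] = better
    return found

print(incentives_away((3, 3, 0)))
# {('B', 'C'): [(4, 2), (5, 1)], ('A', 'C'): [(4, 2), (5, 1)]}

Every division of the full $6 generates such an incentive for some pair, so paths of selected incentives never terminate; a division such as (3, 3, 0) is a strategic equilibrium only because, as the text says, some coalition with an incentive away from it relents.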
An incentive is insufficient if it does not lead anywhere if pursued. Its insufficiency depends on agents' psychologies and circumstances' effect on the operation of agents' psychologies. An agent may give up pursuit of incentives because of his impatience to reach a settlement. Also, agents may be psychologically similar, but in different circumstances. For example, one may have more time than others have to reach a settlement. Players may use latitude left by the selection and stopping rules to pursue incentives in a way that generates a particular strategic equilibrium. The agents' pursuit of incentives together with other features of their psychologies and circumstances yields the equilibrium. This chapter explains the strategic considerations that make a strategy profile a strategic equilibrium, but does not explain the features of players' psychologies that lead to realization of a particular strategic equilibrium. Explaining players' pursuit of incentives and selection of an equilibrium deepens an explanation of their realization of a strategic equilibrium. However, explaining how players exercise strategic reasoning's latitude is not part of a justification of their strategic equilibrium's realization. Its justification requires only that pursuit of incentives and selection of an equilibrium comply with principles of rationality. Consequently, an example of a strategic equilibrium's realization may assume that the players' rational pursuit of incentives yields that equilibrium without specifying the psychological features that generate it.

10.2 Comparisons

This section compares strategic equilibrium with other types of equilibrium in ideal single-stage elementary coalitional games. The comparisons treat the core, Nash equilibrium, and Nash's bargaining solution. To begin, consider the core. It generates an equilibrium standard requiring a profile that yields a core allocation. Strategic equilibrium generalizes this standard because the decision principles supporting it generalize the utility-maximization principles supporting realization of a core allocation. In a coalitional game, if a profile realizes a core allocation, then no coalition has an incentive to switch strategy. Consequently, no coalition has a path of incentives, or pursued incentives, away from its part in the profile. Therefore the profile is a strategic equilibrium. Because every profile realizing a core allocation is a strategic equilibrium, the standard of realizing a core allocation does not conflict with the standard of realizing a strategic equilibrium. That is, in every coalitional game where each standard is attainable, some profile meets both standards. Pursuit of sufficient incentives yields a strategic equilibrium. Relentless pursuit of incentives, when it is possible, yields a core allocation. The standard of strategic equilibrium demands only pursuit of sufficient incentives, whereas the standard of the core demands relentless pursuit of incentives, even when it is impossible because a game's core is empty. Because strategic equilibrium demands less than
the core does, strategic equilibria in coalitional games are more plentiful than profiles realizing core allocations. In a game having core allocations, some strategic equilibria may not realize core allocations and may rival profiles realizing core allocations as potential solutions. They may be as attractive as profiles realizing core allocations. A coalition's members may not favor realizing a core allocation over realizing a rival strategic equilibrium. In the game in Section 8.4 with the characteristic function v(A) = v(B) = v(C) = 1, v(AB) = v(BC) = v(AC) = 4, v(ABC) = 12, the strategy profile generating (1, 2, 2) may be a strategic equilibrium if conditions are not ideal. The coalition {B, C} may realize it, although its outcome is outside the core, because B and C foresee that if they join A in the coalition {A, B, C} the result will be the core allocation (8, 2, 2), which does not increase their payoffs. In a coalitional game, a Nash equilibrium is a profile such that no individual, or unit-coalition, has an incentive for unilateral departure. To be a Nash equilibrium, a profile must give each unit-coalition at least as much as it can obtain on its own. Nash equilibrium, because it ignores opportunities for joint action, is not sufficient for a solution to a coalitional game. However, it is appealing as a necessary condition for a solution. Does strategic equilibrium conflict with it? Suppose that in the majority-rule game, individuals may unilaterally elect to obtain $1 from the $6 available, so that the values of coalitions are: v(A) = v(B) = v(C) = 1, v(AB) = v(BC) = v(AC) = 6, v(ABC) = 6. In this revised game, realizing (0, 3, 3) is not a Nash equilibrium because A may profit from unilateral deviation. May realizing that outcome nonetheless be a strategic equilibrium because A rationally forgoes potentially endless pursuit of incentives? No, the unit-coalition an individual forms achieves its value in any strategic equilibrium of an elementary coalitional game. Although A may forgo insufficient incentives to participate in joint acts, A may not forgo incentives for independent productivity. Those incentives are sufficient because they are constant given others' responses. Suppose that a strategy profile is not a Nash equilibrium. Then some unit-coalition has an incentive to deviate from its strategy in the profile. A unit-coalition has limited options. Its incentive must be to form and to maximize its payoff, given its formation. That strategy achieves the unit-coalition's value. The path the incentive starts terminates with the unit-coalition's optimal strategy, given its formation. However other agents respond, the unit-coalition has no incentive to switch away from its optimal strategy. It does not have the option to disband because that requires others' cooperation, and it can do no better than its optimal strategy as long as it forms. The unit-coalition thus has a terminating path away from its part in the original profile. So that profile is not a strategic equilibrium. Hence, if a profile is not a Nash equilibrium, then it is not a strategic equilibrium. Therefore, by contraposition, if a profile is a strategic equilibrium, then it is also a Nash equilibrium. Agents realizing a strategic equilibrium meet the standard of Nash equilibrium. The standards of
Nash equilibrium and of strategic equilibrium do not conflict in elementary coalitional games. Strategic and Nash equilibria depend on agents’ strategies in a concrete game. A representation that displays all strategies salient in deliberations adequately identifies these equilibria. Representations that display a reduced set of strategies may also adequately identify them, however. The strategic and Nash equilibria relative to a reduced representation may yield the same utility profiles as the strategic and Nash equilibria relative to a full representation. A concrete game’s set of equilibria relative to any adequate representation of the game yields the same set of utility profiles as the game’s set of equilibria relative to any other adequate representation. A representation of the sequential game underlying a coalitional game displays more strategies than a representation of the coalitional game. Instigation of joint action is a strategy of the underlying sequential game, whereas only the joint action is a strategy of the coalitional game. An individual may have an incentive to instigate joint action because he does better with a multimember coalition than with the optimal strategy of his unit-coalition. His incentive may be a sufficient reason to change strategy. Then his pursuit of incentives in the underlying sequential game does not stop with his unit-coalition’s optimal strategy. However, a strategic equilibrium of the underlying sequential game is also a Nash equilibrium of that game. For any level of representation, strategic equilibrium entails Nash equilibrium in an elementary coalitional game. Bargaining problems are a type of coalitional game. In these problems agents may reach an agreement about a division of some good. The good may be a windfall or the value generated by an exchange. Each agent gains only if the whole group strikes a bargain. Only the grand coalition is productive. Division of its
[Figure 10.1. A two-person bargaining problem. The figure plots utility for A on the horizontal axis and utility for B on the vertical axis, each scaled from 0 to 2; the point (1, 1), the midpoint of the hypotenuse, marks Nash's solution.]
profits is the issue. In bargaining problems that are elementary coalitional games, utility is transferable. As an illustration, consider a bargaining problem in which two people A and B together receive two dollars if and only if they agree on its division between them. For a bargain to be realized, the two-person coalition {A, B} must form. Imagine that the utility of money is linear and so transferable and that {A, B}’s value is 2 units of utility. All divisions of its value are possible outcomes. Figure 10.1 is a classical representation of the problem. It depicts a utility space with a dimension for each agent. The triangle represents the possible outcomes. All outcomes arise from striking a bargain, and perhaps disposing of some money, except the outcome the origin represents. It may arise from failure to reach a bargain, and is called the disagreement point. The triangle’s hypotenuse represents the efficient outcomes. In them {A, B} receives its value. The coalition receives less than its value if its members reach a bargain and then dispose of a portion of their gains. The midpoint of the hypotenuse (1, 1) indicates the outcome of Nash’s solution to the bargaining problem. The classical representation does not depict the protocol by which bargaining takes place. The protocol may, for example, impose no regulations, stipulate alternating offers, or stipulate one individual’s making an offer and the other’s taking it or leaving it. An individual has options with respect to the underlying protocol that do not appear in the problem’s representation in utility space. The classical representation assumes that the protocol gives no individual an unrepresented advantage. A profile of strategies realizing a bargain is a joint act. An individual lacks full control of that joint act. He does not have achieving a bargain as an option. In contrast, realizing the disagreement point is not a joint act. An individual has full control over that outcome’s realization. To realize a bargain, each bargainer typically makes some concessions. A bargainer requests as much as he thinks other bargainers will accord him. None insists on the outcome optimal for him. He cannot realize that outcome by himself, and other bargainers do not perform their parts in its realization. The bargainers reach an agreement only if each does not insist that others make maximal concessions. In the underlying bargaining protocol, relentlessly exercising options to demand concessions does not lead to a bargain, given the other bargainers’ psychologies. A bargaining solution depends partly on individuals’ rationality, and partly on other features of their psychologies.1 Every bargain is a Nash equilibrium because no agent gains by reneging. In fact, because no individual gains from unilateral action, every strategy profile is a Nash equilibrium, even disagreement. Only Nash equilibria yielding outcomes on the triangle’s hypotenuse are efficient, however. The efficient Nash equilibria realize core allocations. The two-person coalition {A, B} blocks all inefficient profiles. Nash’s solution to a bargaining problem selects an efficient Nash equilibrium. It selects an outcome from the core and so strengthens the standard of the core. All
profiles realizing core allocations are strategic equilibria. So Nash’s solution also selects a strategic equilibrium and thus strengthens the requirement of strategic equilibrium. It is compatible with the requirement of strategic equilibrium.2 Illustrating strategic equilibrium’s application in a bargaining problem requires specifying the way coalitions pursue incentives. In the bargaining problem that Figure 10.1 presents, an inefficient profile involves either disagreement or disposal of gains from agreement. Given any inefficient profile, either the two-person coalition has an incentive to reach an agreement, or individuals have incentives not to dispose of gains. By the selection rule, agents pursue optimal incentives. So the agents reach an efficient profile, either by reaching an agreement in one step, or by reversing disposal of gains one individual at a time, in at most two steps. Their path terminates in an efficient profile. Neither the two-person coalition nor any individual has an incentive to switch from an efficient profile. The two-person coalition lacks an incentive to switch because both members prefer no alternative. An individual lacks an incentive to switch because he cannot achieve improvements unilaterally. He cannot alone bring about a profile better for him. Its realization is not an option for him. Hence no path of pursued incentives leaves an efficient profile, whereas such a path leaves every inefficient profile. Therefore all and only efficient profiles are strategic equilibria. To generalize this result, imagine an arbitrary bargaining problem with any number of bargainers. According to the selection rule, an agent pursues an optimal incentive if it pursues any incentive. That selected incentive is sufficient because the path it starts stops at an efficient profile. Pursuit of incentives does not stop short of that efficient profile. Hence, every inefficient profile starts a terminating path of pursued incentives. Therefore, no inefficient profile is a strategic equilibrium. Because all efficient profiles are strategic equilibria, all and only efficient profiles are strategic equilibria. The standard of strategic equilibrium replicates the standard of efficiency, given the selection and stopping rules. In coalitional games some Nash equilibria are not strategic equilibria because, whereas a Nash equilibrium takes account of only individuals’ incentives, a strategic equilibrium takes account of all coalitions’ incentives. Consequently, some profiles that do not initiate a path of pursued incentives for any individual nonetheless initiate a path of pursued incentives for a coalition. This occurs in the bargaining problem illustrated in Figure 10.1. Disagreement is a Nash equilibrium because no individual can improve his lot by unilateral departure. The coalition {A, B} has an incentive to move from realization of the disagreement point (0, 0) to realization of Nash’s solution (1, 1). Suppose that it pursues that incentive if it pursues any incentive. The incentive is sufficient because realization of Nash’s solution generates no incentive to switch strategy. Each unit-coalition’s part in realizing (1, 1) is nonformation. Given (1, 1)’s realization, neither unit-coalition has an incentive to switch from nonformation to formation. Formation yields the disagreement point. The coalition {A, B} lacks an incentive to switch because it has no alternative that brings
gains for both members. Because the path from realization of (0, 0) to realization of (1, 1) terminates at (1, 1), the coalition {A, B} pursues its incentive to switch from disagreement to Nash's solution. Thus disagreement starts a terminating path of pursued incentives. It is not a strategic equilibrium, despite being a Nash equilibrium. The standard of strategic equilibrium does not settle the outcome of a bargaining problem. Agents must coordinate to realize a particular strategic equilibrium. Although Nash's solution is plausible, it is an open question whether rationality requires the method of coordinating that Nash's solution requires.
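For the record, Nash's solution can be computed from its standard characterization: among feasible outcomes, it maximizes the product of the bargainers' gains over the disagreement point. The sketch below, whose parameter names are illustrative, recovers the midpoint (1, 1) of Figure 10.1's frontier by a simple grid search.

def nash_bargaining_solution(value=2.0, disagreement=(0.0, 0.0), steps=2000):
    # Search the efficient frontier uA + uB = value for the point that
    # maximizes the Nash product (uA - dA) * (uB - dB).
    dA, dB = disagreement
    best, best_product = None, float("-inf")
    for k in range(steps + 1):
        uA = value * k / steps
        uB = value - uA
        product = (uA - dA) * (uB - dB)
        if product > best_product:
            best, best_product = (uA, uB), product
    return best

print(nash_bargaining_solution())  # (1.0, 1.0), the midpoint of the hypotenuse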
10.3 Conflict

The standards of strategic equilibrium and Nash equilibrium do not conflict in elementary coalitional games because strategic equilibria are Nash equilibria in those games. May the standards conflict in other cooperative games? This section shows that a cooperative descendant of Matching Pennies generates a conflict, and that strategic equilibrium survives the conflict.
Table 10.1 depicts the noncooperative game Matching Pennies.

Table 10.1 Matching Pennies

          Heads    Tails
Heads     2, 0     0, 2
Tails     0, 2     2, 0

Suppose that mixed strategies are not available, and that the game is ideal. A concrete realization of the game has no Nash equilibrium. For every strategy profile, some player has an incentive to switch strategy. However, the game has a strategic equilibrium. When the game ends, although players pursue incentives rationally, some player fails to pursue an incentive at some profile, and that profile is a strategic equilibrium. Every strategy profile is a potential strategic equilibrium, and the game's concrete realization settles the actual strategic equilibria. Whether a potential strategic equilibrium is a strategic equilibrium depends on agents' pursuit of incentives. If Row pursues no incentives and Column pursues all incentives, then the strategic equilibria are (H, T) and (T, H). If there are multiple strategic equilibria, then the strategic equilibrium realized depends on the equilibrium-selection mechanism. In this purely competitive game the mechanism is not coordination. It may be the players' starting point for deliberations.
The game's transformation into a cooperative game treats coalitions as agents and allows joint strategies. The agents have avenues of communication and opportunities for binding agreements. With the addition of joint strategies, Matching Pennies becomes a cooperative game. However, because it remains
competitive, players bypass opportunities for joint action. Because players' payoffs have a constant sum, joint strategies are otiose. The two-player coalition has no incentive to form. Possible joint strategies such as (H, H) are not advantageous to each player. Also, given any of the two-player coalition's joint strategies, the coalition has an incentive to disband because at least one member does better by defecting. Consider the two-player coalition's joint strategy (H, H). Column does better if she defects from the coalition and adopts T. The cooperative version of Matching Pennies is not an elementary coalitional game, and a characteristic function represents it poorly. Its characteristic function specifies a coalition's value independently of nonmembers' acts, although, for a unit-coalition that forms, an act's payoff depends on nonmembers' acts. For example, {Row}'s choice of H has a payoff of 2 or 0 depending on whether {Column} chooses H or T. A strategy profile of the cooperative game has a coalition structure, joint strategies for the multimember coalitions that form, and strategies for unit-coalitions that form. In the cooperative game the potential strategic equilibria correspond to the potential strategic equilibria of the noncooperative game. The unit-coalition structure and all combinations of strategies of the unit-coalitions yield the potential strategic equilibria. A unit-coalition's path of selected incentives away from a strategy profile does not terminate. Also, no profile initiates a path for the two-player coalition. The strategic equilibrium realized in a concrete version of the cooperative game depends on pursuit of incentives, as in a concrete version of the noncooperative game. If Row pursues no incentives and Column pursues all incentives, then the strategic equilibria are the two profiles yielding the unit-coalition structure and in one case the strategy assignment (H, T) and in the other case the strategy assignment (T, H). If deliberations start with a particular strategic equilibrium, then it is realized.
Next, expand the cooperative version of Matching Pennies by giving both Row and Column an additional strategy D. Any combination of strategies for unit-coalitions containing either player's choice of D leads to disaster for both players; each receives –1,000,000 units of utility, or, abbreviating, m. Table 10.2 presents the payoff matrix for the expanded cooperative game.

Table 10.2 Expanded Cooperative Matching Pennies

       D        H        T
D      m, m     m, m     m, m
H      m, m     2, 0     0, 2
T      m, m     0, 2     2, 0

A strategy profile with the unit-coalition structure and the strategy assignment (D, D) is the unique Nash equilibrium of the expanded cooperative game. It is not a strategic equilibrium, however. The two-player coalition has an incentive
to form and adopt the strategy assignment (H, H). After the switch, the coalition has an incentive to disband because Column has an incentive to adopt T. The two-player coalition has no incentive to switch after disbanding. No departure from (H, T) benefits both members. So the two-player coalition's path of incentives from (D, D) to (H, H) to (H, T) terminates. The coalition's other paths away from (D, D) are similar, and the selection rule favors none. The two-player coalition pursues an incentive away from (D, D), but the incentive pursued depends on the players' psychologies.

According to the game's description, the potential strategic equilibria involve the unit-coalition structure and an assignment of H or T to each unit-coalition. The two-player coalition has no incentive to switch away from these profiles. Although the unit-coalitions have incentives to switch, their paths of incentives do not terminate. Every switch triggers a response that generates another incentive to switch. For instance, {Column} has an incentive to switch from its part in (H, H) to T. But {Row}'s response to (H, T) is T, and {Column} has an incentive to switch from its part in (T, T) to H. For the unit-coalitions, paths away from the potential strategic equilibria cycle among those profiles. In a concrete version of the game, the paths stop because some agent fails to pursue an incentive. Some potential strategic equilibria are actual strategic equilibria. The players realize a strategic equilibrium. If {Row} pursues incentives and {Column} stops pursuit of incentives only at (H, H), then the unique strategic equilibrium is the profile with the unit-coalition structure and H as each unit-coalition's strategy. So the players achieve that strategy profile.

In a concrete realization of expanded cooperative Matching Pennies, the strategic equilibria are not Nash equilibria. Both strategic and Nash equilibria exist but do not overlap. A solution cannot be both a strategic equilibrium and a Nash equilibrium, so the standard of strategic equilibrium conflicts with the standard of Nash equilibrium. Strategic equilibrium survives this conflict. The unique Nash equilibrium (D, D) is unattractive. Intuitively, it is not a solution. Instead, a solution is a strategic equilibrium, perhaps (H, H).
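The same enumeration applied to the individual strategies of Table 10.2 confirms that (D, D) is the expanded game's only Nash equilibrium. This is a sketch under the same illustrative conventions as before; m abbreviates –1,000,000.

    from itertools import product

    M = -1_000_000  # the disastrous payoff abbreviated as m in Table 10.2
    strategies = ("D", "H", "T")
    payoffs = {
        ("D", "D"): (M, M), ("D", "H"): (M, M), ("D", "T"): (M, M),
        ("H", "D"): (M, M), ("H", "H"): (2, 0), ("H", "T"): (0, 2),
        ("T", "D"): (M, M), ("T", "H"): (0, 2), ("T", "T"): (2, 0),
    }

    nash = [
        (row, col) for row, col in product(strategies, repeat=2)
        if all(payoffs[(row, col)][0] >= payoffs[(r, col)][0] for r in strategies)
        and all(payoffs[(row, col)][1] >= payoffs[(row, c)][1] for c in strategies)
    ]
    print(nash)  # [('D', 'D')]: the unique Nash equilibrium, yet intuitively no solution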
In a game a player's strategies are all his options. A representation features strategies salient in strategic reasoning. It does not list all strategies. For example, a payoff matrix omits mixed strategies. The strategies it lists imply them. Also, a matrix omits more specific versions of the strategies it lists. A solution in the strict sense is a strategy profile drawn from players' full sets of strategies. An adequate representation may not list the strategies a solution contains. It may instead list strategies that generate the same utility profile. A profile of strategies drawn from a game's representation counts as a solution in an expanded sense, if it is utility-equivalent to a solution drawn from players' full sets of strategies. The utility profiles that solutions in the expanded sense generate are independent of the game's representation although the solutions themselves are relative to it. As Chapter 5 explains, this book usually treats solutions in the expanded sense.

A game with one solution among coarse-grained strategies may have multiple fine-grained solutions. In a version of Matching Pennies with mixed strategies, the unique Nash equilibrium is a profile in which each player flips his penny to display either Heads or Tails. Suppose that profile is the solution. Consider a fine-grained representation of strategies that splits flipping into flipping with the right hand and flipping with the left hand. Then four strategy profiles count as solutions, for example, the profile in which each player flips with the right hand. These profiles are all utility-equivalent to the coarse-grained solution. The utility profile both agents' flipping yields is the same, whether a player flips with the right hand or the left hand.

Some theorists define a player's Nash strategies with respect to a matrix, just as they define a player's dominant strategies and maximin strategies with respect to a matrix. However, in the strict sense a player's Nash strategies are defined with respect to a complete set of the player's strategies. Consequently, a Nash equilibrium exists with respect to players' full sets of strategies. A Nash equilibrium with respect to a game's representation may be utility-equivalent to a Nash equilibrium with respect to players' full sets of strategies. The strategy profiles may yield the same utility profile. An extended definition of a Nash equilibrium includes profiles utility-equivalent to a Nash equilibrium with respect to players' full sets of strategies. The extension accommodates usage that makes Nash equilibrium relative to a game's representation. This book often uses the extended sense of Nash equilibrium. Its account of strategic equilibrium similarly adopts a basic definition using players' full sets of strategies and an extension using players' sets of strategies in a game's representation.

Some conflicts between strategic and Nash equilibrium arise with respect to a game's representation, and not with respect to players' full sets of strategies. Consider a nonideal version of Matching Pennies in which players are not prescient. Each player is uncertain of the other's response to Heads and to Tails. Each assigns probability 1/2 to a response of Heads and to a response of Tails. Hence for each player the expected utility of Heads is (1/2)(2) + (1/2)(0) = 1. The expected utility of Tails has the same value. Because Heads and Tails have the same expected utilities, a representation may conflate them. Table 10.3 shows how the conflation transforms the game's payoff matrix.

Table 10.3 Transformation of Matching Pennies

             H       T
    H        2, 0    0, 2
    T        0, 2    2, 0

becomes

             H or T
    H or T   1, 1
The original matrix lacks an objective Nash equilibrium among the strategies it depicts, but the new matrix has an objective Nash equilibrium among the strategies it depicts. Because the game is not ideal, objective and subjective Nash equilibria need not agree. The objective and subjective Nash equilibria of the coarse-grained representation agree. They yield the matrix's single strategy profile. The objective and subjective Nash equilibria of the fine-grained representation do not agree. Although Column gains by switching from (H, H) to (H, T), she is not prescient and cannot discriminate between (H, T) and (T, T). She does not have a subjective incentive to switch. The fine-grained representation lacks an objective Nash equilibrium, but (H, H) is a subjective Nash equilibrium. The representation's other strategy profiles are subjective Nash equilibria for similar reasons. Each profile is also a strategic equilibrium because no profile generates an incentive to switch. All profiles are utility-equivalent although they yield different assignments of objective payoffs, or informed utilities.

Suppose that the nonideal version of Matching Pennies expands because each player acquires the disastrous strategy D. Table 10.4 shows how conflating the equivalent strategies H and T transforms the game's payoff matrix.3

Table 10.4 Transformation of Expanded Matching Pennies

             D       H       T
    D        m, m    m, m    m, m
    H        m, m    2, 0    0, 2
    T        m, m    0, 2    2, 0

becomes

             D       H or T
    D        m, m    m, m
    H or T   m, m    1, 1

The efficient Nash equilibrium in the coarse-grained matrix yields the utility profile (1, 1). The efficient Nash equilibrium is a strategic equilibrium, too. With respect to the fine-grained representation, objective Nash equilibrium conflicts with strategic equilibrium. However, with respect to the coarse-grained representation, objective Nash equilibrium does not conflict with strategic equilibrium. The efficient objective Nash equilibrium corresponds to a subjective Nash equilibrium that agrees with a strategic equilibrium. Moving from the finer to the coarser representation of the nonideal version of expanded Matching Pennies resolves an apparent conflict between Nash equilibrium and strategic equilibrium.

The same move does not resolve the conflict in the ideal version of expanded Matching Pennies. Because agents are prescient in the ideal version, subjective Nash equilibria are identical with objective Nash equilibria. The unique objective Nash equilibrium of the finer representation is also the unique subjective Nash equilibrium. It differs from all strategic equilibria. Moving to the coarser representation does not reveal a new subjective Nash equilibrium.
The new objective Nash equilibrium is just a figment of the representation. It is not utility-equivalent to any subjective Nash equilibrium of the finer representation. The coarser representation is inadequate because it does not display Nash equilibria utility-equivalent to Nash equilibria with respect to players' full sets of strategies. Although conflict persists in the ideal version of expanded Matching Pennies, strategic equilibrium survives because the Nash equilibrium is unattractive.

10.4 Collective Standards

Familiar nonequilibrium standards for solutions to cooperative games are nondomination, independence, and efficiency. This section compares the standard of strategic equilibrium with these standards. The comparison with efficiency prompts a close look at idealizations.

A commonly proposed requirement for solutions uses the relation of strict dominance. According to the proposal, a strategy profile is a solution only if no agent has a strategy that strictly dominates the strategy the profile assigns to it. An agent's strategy strictly dominates another if and only if the agent prefers the first strategy to the second, given every possible state of the world. For an agent in a game, the relevant possible states of the world are compatible combinations of other agents' strategies.

Strict domination among a coalition's strategies depends on the definition of a coalition's preferences. According to the definition in Chapter 9, a multimember coalition prefers a joint strategy to another strategy if and only if all its members prefer the first strategy, and it prefers nonformation to another strategy if and only if some member prefers nonformation. Hence, for a multimember coalition, a joint strategy strictly dominates another strategy if and only if each member prefers the first strategy to the second, given each possible response by other coalitions. In an ideal coalitional game, for example, a coalition's efficient distribution of its value strictly dominates an inefficient distribution of its value.

A multimember coalition's joint strategy is strictly Pareto superior to another strategy, given a compatible combination of others' strategies, if and only if every member prefers the first strategy to the second, given the combination of others' strategies. A multimember coalition's joint strategy strictly dominates another strategy if and only if the first strategy is strictly Pareto superior to the second, given every compatible combination of others' strategies. For brevity in comparing these relations among strategies, this section speaks of dominance instead of strict dominance and superiority instead of strict Pareto superiority. Both dominance and superiority involve preferences conditional on a strategy profile. A coalition's strategy dominates another strategy if and only if given every profile with the second strategy the coalition prefers the first strategy. In contrast, the first strategy is superior to the second if and only if given the profile with the second strategy and others' response the coalition prefers the first strategy. Superiority holds with respect to a strategy in the context of a particular profile. Thus, for joint strategies, domination strengthens superiority.
To illustrate, consider three-person, majority-rule division of $6. Its characteristic function is: v(A) = v(B) = v(C) = 0, v(AB) = v(BC) = v(AC) = 6, v(ABC) = 6. In this game, whether {A, B, C} has a strategy superior to nonformation depends on others' acts if it does not form. Its realization of (2, 2, 2) is superior to its nonformation if given its nonformation {A, B} forms and realizes (0, 0, 0). Its realization of (2, 2, 2) does not dominate its nonformation, however. The coalition {A, B, C} does not prefer realization of (2, 2, 2) to its nonformation if given its nonformation {A, B} forms and realizes (3, 3, 0). Superiority depends on others' acts, whereas dominance is independent of others' acts.
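The contrast admits a mechanical statement. In the sketch below, the helper functions superior and dominates are illustrative inventions, not the book's apparatus; they reproduce the verdicts just reached about {A, B, C}'s realization of (2, 2, 2).

    # Coalition {A, B, C} in majority-rule division of $6: compare realizing
    # (2, 2, 2) with nonformation, under two possible sequels to nonformation.
    members = ("A", "B", "C")
    joint = {"A": 2, "B": 2, "C": 2}
    sequels_to_nonformation = [
        {"A": 0, "B": 0, "C": 0},  # {A, B} forms and realizes (0, 0, 0)
        {"A": 3, "B": 3, "C": 0},  # {A, B} forms and realizes (3, 3, 0)
    ]

    def superior(joint, sequel, members):
        """Strict Pareto superiority given one particular response by others."""
        return all(joint[m] > sequel[m] for m in members)

    def dominates(joint, sequels, members):
        """Strict dominance: superiority given every possible response."""
        return all(superior(joint, s, members) for s in sequels)

    print(superior(joint, sequels_to_nonformation[0], members))  # True
    print(dominates(joint, sequels_to_nonformation, members))    # False: (3, 3, 0) blocks it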
In elementary coalitional games, the standard of strategic equilibrium absorbs the standard of dominance given Chapter 9's selection and stopping rules for pursuit of incentives. Selection of optimal incentives prevents adoption of a dominated strategy. If a profile assigns a coalition a dominated strategy, the coalition has an incentive to switch from that strategy to a strategy dominating it. The path that incentive starts does not return to the dominated strategy, no matter how others respond to the coalition's strategies. Even if the path is infinite, the stopping rule requires pursuing the initial incentive. Hence, a path of pursued incentives leads the coalition away from the dominated strategy. It is not part of a strategic equilibrium.

The standard of independence governs changes in solutions as games change. It asserts that elimination of profiles without elimination of solutions does not change the set of solutions. Theorists commonly propose this standard for bargaining games, where only the coalition of all individuals, the grand coalition, has an incentive to form. The argument for the standard assumes that the grand coalition has a preference ranking of profiles and that a solution is a profile at the top of that preference ranking. It claims that the grand coalition's preferences among profiles should be independent of profiles' feasibility. So profiles at the top of its preference ranking should not change as other profiles become infeasible. Therefore, solutions should not change as nonsolutions become infeasible.

This argument uses a principle of independence for an agent's preferences. The principle applies to cases where changes in acts' feasibility do not change the grounds of preferences among acts. The argument ignores the principle's restrictions. Consequently, it applies the principle outside the principle's range. Changes in bargains' feasibility influence individuals' bargaining positions. They alter the grounds of preferences among bargains.4

The standard of independence conflicts with the standard of the core in coalitional games. In three-person, majority-rule division of $6, the core is empty. So no profile qualifies as a solution according to the standard of the core. Modifying the game so that two-person coalitions cannot form makes (2, 2, 2) a core allocation. If, in the modified game, that allocation is the only feasible division of {A, B, C}'s gains, then it is the only element of the core. In that case, the standard of the core makes realization of (2, 2, 2) the game's solution. Contrary to the standard of independence, it makes a nonsolution become a solution as other nonsolutions disappear.

The standard of independence may similarly conflict with the standard of strategic equilibrium. Removing strategies may remove opposition to a strategy profile so that it becomes a strategic equilibrium and also a solution. In such cases of conflict, the standard of strategic equilibrium has the upper hand. Removing nonsolutions may affect the set of solutions. The standard of independence incorrectly ignores feasibility's effect on preferences.

Theorists propose the standard of efficiency for solutions to cooperative games. It is implausible in noncooperative games. For instance, it conflicts with the standard of nondomination in the Prisoner's Dilemma. In a coalitional game, a core allocation entails efficiency. Also, a failure to realize a core allocation entails some coalition's inefficiency among its strategies. Weakening the standard of the core for attainability's sake condones some departures from coalitions' efficiency among their strategies. However, efficient allocations exist even when core allocations do not exist. Does a coalitional game's solution require an efficient allocation?

Economics contains celebrated results about efficiency's realization by rational agents. Coase's Theorem asserts that rational agents with symmetric information achieve efficiency in the absence of transaction costs. Coase did not prove this proposition, and others have proved it in special cases only, and not for coalitional games with empty cores. The First Fundamental Theorem of Welfare Economics states that efficiency emerges in a perfectly competitive market. It assumes a market with a nonempty core. No proof establishes that utility maximization yields efficiency in coalitional games with empty cores. It is an open question whether rational agents achieve efficiency in these games.5

Suppose that two people bargain over division of a windfall. One may refuse an efficient 50–50 split if she thinks that her refusal leads to an inefficient 60–30 split favoring her. However, in ideal conditions for bargaining, opportunities for communication and contracts ensure that no individual has an incentive to block efficiency because she does better in an expected inefficient outcome. Rational bargainers form the grand coalition and efficiently divide its profits. Bargaining in ideal conditions produces efficiency.

The ideal conditions that ensure efficiency in bargaining problems do not ensure efficiency in all coalitional games, however. In a coalitional game, a player's bargaining leverage may vary according to the coalition to which she belongs. A player with much power in a subcoalition may have little power in the grand coalition. A rational player joins a coalition that favors her even if that blocks formation of the grand coalition and efficiency. Players may be able to solve an isolated bargaining problem efficiently without being able to solve efficiently the multiple bargaining problems that a coalitional game creates.
Efficiency may be a standard for solutions to bargaining problems without being a standard for solutions to coalitional games in general. Consider a three-person game with this characteristic function: v(A) = v(B) = v(C) = 0, v(AB) = v(BC) = v(AC) = 8, v(ABC) = 9. Only the grand coalition generates efficiency. However, for any efficient profile, some two-person coalition has an incentive to deviate. It profits by excluding the third person, despite her productivity. I call this game Inefficient Exclusion. In it utility maximization by informed individuals does not ensure efficiency even given opportunities for communication and contracts. In a concrete version of the game, B may prevent formation of the grand coalition. She may do better in {A, B} than in {A, B, C} because she bargains better with A alone than with both A and C. She may foresee that if {A, B} forms, the outcome is (2, 6, 0), whereas if {A, B, C} forms, the outcome is (3, 3, 3). Anticipating more from the two-person coalition than from the three-person coalition, she may block {A, B}'s expansion to include C.6
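That the core is empty both in majority-rule division of $6 and in Inefficient Exclusion follows from summing the pair constraints, as the following sketch records. The frozenset encoding of coalitions is the sketch's own, and the test is a sufficient condition for emptiness in these three-person games, not a general core solver.

    from itertools import combinations

    # Characteristic functions, with coalitions encoded as frozensets of players.
    majority_rule = {frozenset(s): val for s, val in
                     [("A", 0), ("B", 0), ("C", 0),
                      ("AB", 6), ("BC", 6), ("AC", 6), ("ABC", 6)]}
    inefficient_exclusion = {frozenset(s): val for s, val in
                             [("A", 0), ("B", 0), ("C", 0),
                              ("AB", 8), ("BC", 8), ("AC", 8), ("ABC", 9)]}

    def core_empty_by_pairs(v):
        """A core allocation must give each two-person coalition at least its
        value while the three players divide exactly v(ABC). Summing the three
        pair constraints counts each player twice, so the core is empty
        whenever v(AB) + v(BC) + v(AC) > 2 * v(ABC)."""
        pairs = [frozenset(p) for p in combinations("ABC", 2)]
        return sum(v[p] for p in pairs) > 2 * v[frozenset("ABC")]

    print(core_empty_by_pairs(majority_rule))          # True: 18 > 12
    print(core_empty_by_pairs(inefficient_exclusion))  # True: 24 > 18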
These results are compatible with strategic equilibrium. Suppose that in a concrete realization of Inefficient Exclusion, the coalition {A, B} pursues its incentive to switch from realization of (3, 3, 3) to realization of (4, 4, 0). The coalition {A, C} responds with (6, 0, 2). Then the coalition {B, C} switches to (0, 4, 4). Finally, the coalition {A, B} switches to (2, 6, 0). Coalitions stop pursuit of incentives at (2, 6, 0). Thus, coalitions pursue incentives along the path (3, 3, 3) → (4, 4, 0) → (6, 0, 2) → (0, 4, 4) → (2, 6, 0) and stop at the end. Their pursuit of incentives complies with the selection and stopping rules. Given the pattern of pursuit of incentives, the coalition {A, B} has a terminating path of pursued incentives from (3, 3, 3) to nonformation and then, finally, to (2, 6, 0). Consequently, realizing (3, 3, 3) is efficient but not a strategic equilibrium. On the other hand, realizing (2, 6, 0) is a strategic equilibrium but is not efficient. Efficiency is neither necessary nor sufficient for strategic equilibrium.
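The path just displayed can be checked step by step: each deviating two-person coalition divides at most its value of 8, and each deviation strictly benefits every member of the deviating coalition. The sketch below, with its ad hoc data layout, performs the check.

    # The path of pursued incentives in Inefficient Exclusion. Each later step
    # names the deviating two-person coalition and the allocation it realizes.
    path = [
        (None, (3, 3, 3)),          # start: the grand coalition's efficient division
        ({"A", "B"}, (4, 4, 0)),
        ({"A", "C"}, (6, 0, 2)),
        ({"B", "C"}, (0, 4, 4)),
        ({"A", "B"}, (2, 6, 0)),    # coalitions stop pursuing incentives here
    ]
    index = {"A": 0, "B": 1, "C": 2}

    for (coalition, new), (_, old) in zip(path[1:], path):
        # The deviating pair divides at most its value of 8 ...
        assert sum(new[index[i]] for i in coalition) <= 8
        # ... and every member strictly gains by the switch.
        assert all(new[index[i]] > old[index[i]] for i in coalition)
    print("each deviation is feasible and benefits every member of the deviating coalition")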
Are there coalitional games in which the standard of strategic equilibrium requires a solution to be inefficient? In every game both efficient profiles and strategic equilibria exist, so the standards of strategic equilibrium and of efficiency conflict only if efficient profiles and strategic equilibria do not overlap. Some realizations of Inefficient Exclusion prevent an overlap and create a conflict. In any realization, every efficient profile requires formation of the three-person coalition and its division of 9 units of utility. The strategic equilibria depend on the agents' pursuit of incentives. They may pursue incentives, conforming with the selection and stopping rules, so that each of the three-person coalition's joint strategies starts a path terminating in a two-person coalition's joint strategy. Then no efficient profile is a strategic equilibrium.

The standard of strategic equilibrium survives conflict with the principle of efficiency. The principle of efficiency fails in noncooperative games, such as the Prisoner's Dilemma, because it ignores individuals' incentives. In coalitional games, the principle of efficiency ignores incentives of coalitions smaller than the grand coalition. It does not acknowledge a subcoalition's incentive to deviate from an efficient profile. Efficiency is too narrow-minded to be a general standard for coalitional games.

Being a strategic equilibrium is a necessary condition for being a solution. Being efficient is not an additional necessary condition for being a solution. If it were, then in realizations of Inefficient Exclusion without efficient strategic equilibria, a solution would not exist. Joint rationality would be unattainable. Joint rationality does not require efficiency. A solution need not be efficient. Despite ideal conditions for joint action, rational agents may fail to achieve efficiency in a coalitional game with an empty core.

Do the selection and stopping rules ensure efficiency? The selection rule governs the incentive pursued if any is pursued. It regulates the agent who pursues an incentive away from a profile if multiple agents have incentives to switch. Suppose that the selection rule gives priority to the grand coalition's incentives and requires a selected agent to pursue an optimal incentive. Suppose also that the stopping rule makes the grand coalition the last to halt pursuit of incentives and allows it to halt only at a joint strategy. Then the rules ensure that paths contain efficient profiles and stop only at efficient profiles. However, the selection and stopping rules have no reason to favor the grand coalition's incentives to adopt joint strategies. The rules rest on individuals' reasons for pursuit of incentives so that they may support collective standards for solutions such as equilibrium. Efficiency is a collective reason for pursuit of incentives and so not good grounding for the rules. Plausible selection and stopping rules do not eliminate the possibility of inefficient strategic equilibria.

In Inefficient Exclusion the coalition {A, B, C} has an incentive to switch from a profile realizing (2, 6, 0) to a profile realizing (2.4, 6.4, 0.2). Achieving the second outcome is better for each member of the coalition. The incentive to switch may be insufficient, however. In a realization of the game, suppose that pursuing the incentive starts a path back to (2, 6, 0). Then {A, B, C} may justifiably fail to pursue its incentive to achieve (2.4, 6.4, 0.2). Pursuit starts a cycle of incentives, and a coalition may rationally forgo nonproductive relentless pursuit of incentives. In a game with an empty core, not every coalition may pursue all incentives. The stopping rule allows the grand coalition to halt pursuit of its incentives. It may rationally abandon pursuit of an incentive to adopt a joint strategy. Smaller coalitions are not the only agents that may rationally forgo incentives.

Does a coalitional game's being ideal ensure efficiency? Perhaps the characteristic function of an ideal game prevents an incentive structure in which rational pursuit of incentives does not reach an efficient outcome. The argument for the claim invokes efficiency's status as a goal of collective rationality. If efficiency is a goal of collective rationality, the argument contends, ideal games meet conditions that ensure its attainment. This way of obtaining efficiency requires identifying plausible idealizations that ensure efficiency. Otherwise inefficiency in games that are apparently ideal may show that efficiency is not a goal of collective rationality.

A suitable idealization supporting efficiency is the comprehensive rationality of agents. Comprehensively rational agents prepare for coalitional games. In ideal conditions, they can communicate effortlessly about their pursuit of incentives and strike bargains. Because of preparation, they coordinate pursuit of incentives so that some strategic equilibrium is efficient among all strategy profiles, and they coordinate to realize an efficient strategic equilibrium. They may do this without violating rationality's constraints on their pursuit of incentives. In an ideal version of Inefficient Exclusion, comprehensively rational agents have in advance coordinated their pursuit of incentives so that it does not stop at (2, 6, 0) but continues to an efficient outcome such as (2.4, 6.4, 0.2). Every path of pursued incentives stops at an efficient outcome. This ensures that the coalitions forming do not exclude a productive agent. Because the agents have the opportunity to coordinate pursuit of incentives and coordination brings gains for each, they coordinate to achieve efficiency. The standard of efficiency for ideal coalitional games does not depend on ideal conditions for joint action, or on the selection and stopping rules for pursuit of incentives, but rather on players' comprehensive rationality. Provided the idealizations for coalitional games are understood to include comprehensive rationality, efficiency is necessary for a solution.

Strategic equilibrium fares well in comparisons with common standards of collective rationality. Core allocations, when they exist, are outcomes of strategic equilibrium, and in ideal coalitional games strategic equilibria are efficient. Strategic equilibrium, in appropriate circumstances, grounds other less general standards of collective rationality.
11

Compositionality

Rationality establishes standards for individuals and for groups. The principle of compositionality in Chapter 2 implies that a group acts rationally if each member does. Strategic equilibrium is a standard of collective rationality. Chapter 9 shows that, in accordance with compositionality, individual rationality entails strategic equilibrium in coalitional games. This chapter illustrates that general entailment. It shows how strategic equilibrium in certain coalitional games emerges from individuals' rationality. Its analysis of selected coalitional games uses methods that extend to other coalitional games.
11.1 Underlying Games

The illustrations in this chapter present a coalitional game and the sequential game underlying it. The coalitional game ends with a coalition structure and a utility profile. The sequential game has procedures that individuals use to form coalitions and divide gains. A strategic equilibrium in a coalitional game is an equilibrium among coalitions' incentives, whereas a strategic equilibrium in the underlying sequential game is an equilibrium among individuals' incentives.

Examples take two steps to move from individual rationality to strategic equilibrium in a coalitional game. The first step moves from individuals' compliance with the principle of self-support to strategic equilibrium in the underlying sequential game. The second step moves from strategic equilibrium in the underlying sequential game to strategic equilibrium in the coalitional game. Putting the two steps together, principles of individual rationality lead to strategic equilibrium in the coalitional game.

To prepare for the examples, this section analyzes the sequential games underlying the coalitional games. It reviews their features, the idealizations that govern them, and their representations.

A coalitional game's representation displays some, but not all, features of the sequential game underlying the coalitional game. Several concrete sequential games may realize a coalitional game's representation. For example, sequential games with different protocols for negotiation may realize three-person, majority-rule division of $6. Its characteristic function's realization may affect the outcome. For example, if only two individuals may communicate, then they divide the $6 among themselves. A specification of equilibria of a coalitional game's representation generally presumes a limited range of realizations by sequential games. This chapter identifies strategic equilibria in concrete coalitional games. Just one concrete sequential game underlies a concrete coalitional game.

A single-stage coalitional game has a representation with a single stage. A characteristic function represents the game nonsequentially. Coalitions form and divide profits in the representation's single stage. The underlying sequential game has a multistage representation. The stages represent potential rounds of negotiation. Moves in multiple stages may realize a path of tentative steps in the coalitional game's single stage. The characteristic function depicts only the culmination of moves in the multistage representation.

The derivation of strategic equilibrium in Chapter 9 treats only ideal coalitional games, and the illustrations in this chapter feature those games. An ideal sequential game underlies an ideal coalitional game. In the sequential game, individuals have opportunities for communication and coalition-formation, including opportunities to bargain about division of a coalition's gains. Offers and acceptances are costless moves. Individuals are cognitively ideal and fully informed about their game and about each other. Their deliberations and decisions are cost-free. Their cognitive power fosters quick settlements. Individuals foresee the result of prolonged negotiation, and so reach its result as soon as possible in the underlying sequential game. Although their deliberations have no cost, time is valuable to them, and progressing through the sequential game's stages takes time. Settling in the first stage, if possible, evades the temporal costs of lengthy negotiations. Finally, individuals are fully rational, and so comply with selection and stopping rules for pursuit of incentives. They prepare for joint action and make the most of opportunities to achieve it.

In ideal sequential games agents' information makes them prescient. As a result, they foresee a profile's realization. Also, agents know the incentives pursued. That knowledge is part of their knowledge of the game. Agents' prescience and knowledge of pursuit of incentives may arise from agreements that inform each of the acts and choice-dispositions of all, but the chapter's examples leave open such details.

The sequential game underlying a coalitional game has too many features to list, so a representation adequate for identifying solutions in the coalitional game selects features on which those solutions depend. A detailed representation of the sequential game specifies individuals' possible moves and their order. It displays individuals' proposals and responses leading to agreements that form coalitions constituting a coalition structure and that divide the coalitions' profits.

The sequential game underlying a coalitional game may allow individuals to negotiate endlessly about formation of coalitions and divisions of profits. At each stage an individual may make a proposal, and at subsequent stages other individuals may respond.
Paths of possible proposals and responses may be infinitely long because coalitions may form and disband endlessly. Also, when a coalition forms, members typically face an infinite number of possible divisions of the coalition's profits. A tree may not adequately represent all possible moves in the sequential game.

Because individuals end their sequential game, they do not pursue an infinitely long path of proposals and responses. At some point, individuals make proposals to which no one objects, and their proposals become the game's outcome. Coalitions form and settle divisions of profits. Representing a play of the sequential game requires fewer resources than representing the whole game. A branch of a tree suffices. It indicates the moves individuals make in the sequential game. Further, players may reach the anticipated results of many stages of negotiation in the game's first stage. The sequential game gives players the option of multiple stages, but players may not exercise that option. A play of a multistage game may end in a single stage. The branch representing the play may have a single node.

A coalition's act typically may emerge from individuals' acts in many ways. Different individuals may instigate the coalition's formation, for example. A representation of the sequential game underlying a coalitional game may conflate acts that yield the same outcome if the diversity of acts yielding that outcome does not affect strategic considerations. An adequate representation displays all features that affect the sequential game's solutions. It agrees with an adequate representation of the coalitional game about solutions. That is, a profile of the sequential game's representation is a solution only if it entails a solution profile of the coalitional game's representation. Moreover, a profile of the coalitional game's representation is a solution only if some profile of the sequential game's representation entailing it is a solution.

For this chapter's elementary coalitional games, a characteristic function is an adequate representation of coalitions' incentives. A companion representation of the underlying sequential game specifies how individuals' options and incentives yield coalitions' options and incentives. It shows how individuals' acts yield coalitions' acts and individuals' incentives yield coalitions' incentives. Chapter 9 took a step toward specification of an underlying sequential game by representing outcomes with payoffs for individuals, not just for coalitions. Doing this reveals how coalitions' incentives depend on individuals' incentives. A sequential game's representation specifies how individuals' pursuit of incentives yields coalitions' pursuit of incentives. To identify strategic equilibria, just as a coalitional game's characteristic function needs supplementation with an account of coalitions' pursuit of incentives, an underlying sequential game needs an account of individuals' pursuit of incentives. A strategic equilibrium of a concrete coalitional game's single-stage representation corresponds to a strategic equilibrium of the game's sequential representation because coalitions' pursuit of incentives depends on individuals' pursuit of incentives.
An analysis aiming to identify solutions may simplify a sequential game's representation. It need not represent all paths containing exactly one move at each stage of the game. In a sequential game, some paths of moves include irrational moves. An analysis may eliminate a path with an irrational move. Some individual has a sufficient reason to change a move in the path. For an ideal sequential game underlying an ideal coalitional game, a representation may omit a path of moves that leads to an outcome inefficient for some coalition that forms. It contains an irrational move by some individual. Its elimination does not require identification of the irrational move.

In special cases, elimination of irrational strategies solves the underlying sequential game. An initial elimination may yield a finite game tree. Then backward induction may eliminate more strategies with irrational moves, and may identify the underlying sequential game's unique Nash equilibrium. Assuming that Nash and strategic equilibria coincide in the game, that Nash equilibrium is the game's solution. Realizing that profile solves the sequential game and the coalitional game it underlies. Of course, a reduced representation, although adequate for identifying a game's solution, may lack resources needed to explain why that strategy profile is a solution.

Whether a game's representation has adequate detail depends on its purpose. If used to identify the strategic equilibrium realized, it needs information sufficient for equilibrium selection. If used to identify strategic equilibria, it needs information sufficient for identifying strategy profiles where pursuit of incentives may halt. If used to verify that a profile is a strategic equilibrium, it needs information sufficient for detecting sufficient incentives to switch from the profile. The examples in this chapter include details of an underlying sequential game that are important for identification of strategic equilibria.

Verifying that a strategy profile is a strategic equilibrium of a coalitional game requires showing only that it results from a strategic equilibrium of the underlying sequential game. A representation of the sequential game that shows sufficient incentives to switch from the profile suffices for that task. To verify rather than explain an equilibrium, an analysis may dispense with the global dynamics of the underlying game. It may verify an equilibrium by examining its local dynamics, that is, incentives with respect to the profile. To show that individual rationality entails a strategic equilibrium in a coalitional game, this chapter needs only the underlying sequential game's local dynamics for selected profiles. It may show that if individuals in the coalitional game realize a nonequilibrium, then some individual irrationally participates in the profile's realization. If all individuals are rational, they realize an equilibrium instead.

11.2 Confirmation

If a concrete game has two representations, an analysis using one representation may confirm an analysis using the other representation.
A fine-grained analysis's standards of individual rationality may confirm a coarse-grained analysis's standards for solutions. Several familiar derivations of standards for solutions illustrate this method of confirmation.

A paradigm derivation of a standard for solutions treats ideal sequential noncooperative games. It shows that utility maximization by individuals yields a Nash equilibrium. Backward induction eliminates strategies with a nonmaximizing last step, then strategies with a nonmaximizing penultimate step, and so on. Players' common knowledge of their game tree and their resilient utility maximization supports this backward induction. A player making a move predicts others' later moves and uses those predictions to make a utility-maximizing move. In games without moves that tie, the procedure yields the rollback equilibrium among complete strategies for playing the game. That is a Nash equilibrium of the sequential game. As Section 5.3 explains, the rollback equilibrium of the sequential game is also a Nash equilibrium of any single-stage noncooperative game that the sequential game underlies. Individual rationality supports both the rollback equilibrium of the sequential game and the corresponding Nash equilibrium of the single-stage game. However, a Nash equilibrium of the single-stage noncooperative game's representation may not be a rollback equilibrium of the game's sequential representation. Then individual rationality rejects that Nash equilibrium. The single-stage representation is inadequate for representation of the concrete game's solutions. In the finer analysis, individual rationality confirms some, but not all, Nash equilibria of the coarser analysis.

Backward induction may justify a solution to a bargaining problem. Consider the Ultimatum Game that Section 8.2 presents. It is a two-stage sequential game underlying a two-person bargaining problem about division of $10. Imagine an isolated, ideal version of the game with transferable utility, and suppose that B breaks a tie between (10, 0) and (0, 0) by favoring (0, 0). Backward induction yields (9, 1) as A's proposal and its acceptance by B. A solution of the bargaining problem follows from the unique Nash equilibrium of the underlying sequential game. That Nash equilibrium follows from players' step-wise utility maximization. The solution differs from Nash's bargaining solution because the bargaining protocol gives A more leverage than B. It agrees with Nash's asymmetric bargaining solution given suitable bargaining-power weights stemming from the bargainers' situation. The fine-grained analysis's standards for solutions modify the coarse-grained analysis's standards for solutions.1
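A minimal backward-induction computation reproduces the (9, 1) outcome. The sketch assumes, purely for illustration, that A's demands come in whole dollars; B's tie at a zero share is broken toward rejection, as the text stipulates.

    def ultimatum_backward_induction(total=10):
        """Backward induction in the Ultimatum Game: A demands a whole-dollar
        share, B accepts or rejects, and rejection yields (0, 0). B,
        indifferent at a zero share, breaks the tie by rejecting, so B accepts
        exactly when her share is positive. A demands the most B still accepts."""
        outcomes = []
        for a_share in range(total + 1):
            b_share = total - a_share
            outcomes.append((a_share, b_share) if b_share > 0 else (0, 0))
        return max(outcomes, key=lambda outcome: outcome[0])

    print(ultimatum_backward_induction())  # (9, 1)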
Binmore (1987) carries out the Nash program for Nash's solution to a two-person bargaining problem such as reaching an agreement about division of a pie. He reduces Nash's solution to a Nash equilibrium in an underlying sequential game where offers and acceptances are moves at a stage and the result is a settlement of the bargaining problem. In the sequential game, the two bargainers take turns making offers until one accepts an offer. They are under time pressure to reach an agreement. After any bargaining move, having time for an additional move depends on chance. The bargainers have perfect and complete information about their sequential game. They also have common knowledge of their sequential game's tree, the utility profiles of its outcomes, and their resilient utility maximization. The sequential game has a unique subgame perfect Nash equilibrium, as Rubinstein (1982) shows. Assuming that the players do not discount future gains, it converges to Nash's solution to the bargaining problem as the probability of bargaining's breaking down in a stage approaches 0. Nash's bargaining solution follows from the subgame perfect Nash equilibrium, and the subgame perfect Nash equilibrium follows from individual rationality because other strategy profiles require some individual to make an irrational move.2

Consider an alternating-offers protocol for two-person bargaining over 10 units of transferable utility. Because the protocol differs from the Ultimatum Game's take-it-or-leave-it protocol, bargaining has a different outcome than in the Ultimatum Game. Suppose that the agents are symmetrically situated. Then Nash's solution calls for a (5, 5) division, and Binmore's methods support this division. Principles of individual rationality may yield different outcomes of an abstract bargaining problem characterized by a set of utility profiles when different underlying sequential games realize the abstract bargaining problem. A set of utility profiles is an inadequate representation of a concrete bargaining problem. It needs supplementation at least by an account of agents' bargaining power. An expanded representation assigns to each agent the bargaining power that the underlying sequential game confers. All adequate representations of a concrete bargaining problem agree about agents' bargaining power because it affects solutions. However, one representation may explicitly indicate bargaining power, whereas another representation may indicate it implicitly.
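For the breakdown-probability model just described, the stationary subgame perfect division of a unit pie gives the proposer v = 1/(2 - p), so the division approaches 50-50 as p approaches 0. The sketch below computes this; the derivation and function are the sketch's own, following the construction the text attributes to Rubinstein, not a quotation of either author's notation.

    def rubinstein_shares(p):
        """Stationary subgame-perfect division of a unit pie under alternating
        offers with no discounting, where bargaining breaks down (payoffs
        (0, 0)) with probability p after any rejection. A rejecting responder
        expects (1 - p) * v as next proposer, so the proposer keeps
        v = 1 - (1 - p) * v, that is, v = 1 / (2 - p)."""
        proposer = 1 / (2 - p)
        return proposer, 1 - proposer

    for p in (0.5, 0.1, 0.01, 0.001):
        print(p, rubinstein_shares(p))
    # As p approaches 0 the division approaches (1/2, 1/2); scaled to 10 units
    # of transferable utility, that is the (5, 5) division Nash's solution
    # selects for symmetrically situated bargainers.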
Subsequent sections use the method of confirmation this section illustrates to show how strategic equilibrium in a coalitional game emerges from individual rationality in an underlying sequential game. Section 11.3 introduces the type of sequential game that following sections assume.

11.3 Agreement Games

Individuals in a concrete coalitional game have a mechanism for forming coalitions and dividing their profits. This section describes a particular mechanism individuals may use to reach agreements that produce coalitions and their acts. The mechanism is a sequential game that I call an agreement game. It underlies a coalitional game.

An agreement game has multiple rounds in which individuals simultaneously propose ways of acting jointly. Each round by itself takes a negligible amount of time, although many rounds together take a significant amount of time. In a round, each individual makes a proposal about forming a coalition and dividing its profits. Any individual may change her proposal after seeing others' proposals. A change generates a new round of proposals. In one round, an individual may propose formation of a coalition and a certain division of its profits. In the next round, another individual may propose a different division of the coalition's profits or formation of another coalition. If, for a coalition proposed, every member makes the same proposal, then their proposals constitute an agreement between the members of the coalition. If the coalitions reaching agreement form a coalition structure, then their concordant proposals constitute an agreement on a strategy profile of the coalitional game, provided that none alters her proposal. Thus, the agreement game ends when a round of proposals repeats concordant proposals in the previous round. The proposals then constitute an agreement on a strategy profile of the coalitional game. Once reached, this agreement is binding, and players implement it.

An individual's strategy is a plan for the whole agreement game. Players may adopt strategies independently. A sequence of moves in the game realizes players' plans. A move in a sequence may causally influence a subsequent move. For example, in a two-agent bargaining problem about a division of 2 units of transferable utility, suppose that the agents A and B are symmetrically situated and the solution is (1, 1). In the underlying agreement game, suppose that the initial pair of proposals is ((0, 0), (0, 0)). In the next round, A may change his proposal to (1, 1) so that the result is ((1, 1), (0, 0)). Then B may respond to A's overture with (1, 1) to achieve the profile of concordant proposals ((1, 1), (1, 1)). If A and B persist with these proposals, they reach an agreement with the outcome (1, 1).

In the agreement game, each individual seeks to maximize utility, and has an incentive to instigate any coalition profitable to her. If the players cannot achieve their demands, then they reduce their demands. If their proposals prevent realizing a coalition structure, then they alter their proposals. Despite individuals' conflicting incentives, the agreement game eventually ends. Agents may use communication during rounds to accelerate the process of reaching agreement. The final proposals form an equilibrium of the game's pattern of pursuit of incentives.3

Unanimity on the coalition structure and division of coalitions' profits suffices but is not required for an agreement settling the game's outcome. A coalition structure arises when the relevant players accept proposals realizing the structure. In three-player majority-rule division of 6 units of transferable utility, if two players reach agreement, they settle the game's outcome regardless of the other player's proposal. The agreement game requires only agreement among players sufficient to establish a coalition structure. Two players suffice in the majority-rule game. Players A and B may establish the structure {{A, B}, {C}}. In a four-player game, {A, B} and {C, D} generate a coalition structure. To realize it, players A and B must agree, and also players C and D must agree. In general, an outcome results from unanimity among members of each coalition in a set of coalitions that constitute a coalition structure. Consequently, the same outcome of a coalitional game may arise from several strategy profiles in the underlying agreement game. For example, in the majority-rule game, the division (4, 2, 0) may arise from the profile of proposals ((4, 2, 0), (4, 2, 0), (4, 2, 0)) or from the profile of proposals ((4, 2, 0), (4, 2, 0), (0, 0, 6)). Both profiles generate the coalition structure {{A, B}, {C}} because they agree on the division of profits of the multimember coalitions in that structure.
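The ending condition, a round that repeats the previous round's concordant proposals with the agreed coalitions fitting into a coalition structure, can be put in code. The sketch below invents a simple representation of proposals; players outside every agreed multimember coalition count as unit coalitions, as in the majority-rule example just given.

    def agreed_coalitions(proposals):
        """Coalitions all of whose members made the identical proposal. A
        proposal pairs a coalition (a frozenset of players) with a division
        (a tuple of (player, share) pairs); the layout is this sketch's own."""
        agreed = set()
        for coalition, division in proposals.values():
            if all(proposals.get(m) == (coalition, division) for m in coalition):
                agreed.add((coalition, division))
        return agreed

    def settles_game(previous, current):
        """A round settles the game when it repeats the previous round's
        concordant proposals and the agreed coalitions are pairwise disjoint.
        Players outside every agreed coalition count as unit coalitions."""
        agreed = agreed_coalitions(current)
        members = [m for coalition, _ in agreed for m in coalition]
        disjoint = len(members) == len(set(members))
        steadfast = all(previous.get(m) == current.get(m) for m in members)
        return bool(agreed) and disjoint and steadfast

    # Majority-rule division of $6: A and B concur on (4, 2, 0); C's discordant
    # proposal is idle, so {{A, B}, {C}} arises once A and B stand pat.
    pact = (frozenset({"A", "B"}), (("A", 4), ("B", 2)))
    round1 = {"A": pact, "B": pact,
              "C": (frozenset({"A", "B", "C"}), (("A", 0), ("B", 0), ("C", 6)))}
    round2 = dict(round1)
    print(settles_game(round1, round2))  # True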
The agreement game is ideal. Nonetheless, if individuals are not informed about each other, matching proposals are unlikely in the first round (unless their coalitional game has a unique solution). Rounds of proposals reveal information that assists coordination. Individuals may learn about others from a round of proposals and use that information in subsequent rounds. Information they acquire may exceed common knowledge of their coalitional game's characteristic function and their rationality. It may include information about their psychologies not inferable solely from their rationality, say, information about their pursuit of incentives.

The sequential game underlying a single-stage coalitional game need not have a single-stage realization. Acts at various stages of the sequential game may constitute acts in a single stage of the coalitional game. In the agreement game, multiple rounds of proposals culminate at a stage when no player changes her proposal. The culmination of the rounds constitutes coalitions' acts in the coalitional game's single stage.

To simplify, I suppose that prior to the agreement game, players communicate without cost or restriction and gain in advance all the information they may acquire in multiple rounds of proposals. For instance, to promote profitable joint action, players disclose information about their pursuit of incentives. Hence, in the agreement game they make concordant proposals in the first round and are steadfast so that the game ends with those proposals. During the pregame communication period, players may reach self-enforcing agreements to coordinate proposals. These agreements are not binding. Otherwise, they may change the coalitional game so that its characteristic function no longer adequately represents it.

The agreement game is an infinite sequential game. It has round after round of proposals. However, players may reach an outcome without moving through all rounds. A play of the game is potentially multistage, but it may end quickly if concordant proposals in the first round are repeated in the second round. This happens when the players are fully informed about relevant facts. They review in deliberation multiple hypothetical rounds of proposals before formulating their proposals in the game's first round. They see in advance where multiple rounds lead and go there straightaway to save time. For example, consider a play of an agreement game underlying a symmetric bargaining problem with two players who must divide a windfall to gain any of it. Both players may from the start steadfastly propose their parts in Nash's solution, namely, a 50–50 split. That settles the outcome as quickly as possible. In this case, players independently reach proposals. They have opportunities for causally coordinating proposals but do not need them.
Their coordination is evidential and not a joint act. The players' standing pat after learning their proposals is causal coordination, however, and it constitutes their agreement. Hence their agreement is a joint act.

Because the agreement game ends with the proposals in its first round, it provides a cost-free realization of a coalitional game. Hence, if it realizes a coalitional game, the coalitional game's characteristic function is an adequate representation of the coalitional game. If the coalitional game's realization were to add negotiation costs, then an adequate representation of it would supplement the game's characteristic function with an account of those costs. Also, an agreement game gives no individual an advantage she does not already have in virtue of the coalitional game's characteristic function. That function represents individuals' bargaining power in the agreement game. A coalitional game's realization by an agreement game is therefore compatible with the adequacy of its representation by its characteristic function.

The outcome of an agreement game depends on many factors that this chapter does not analyze, such as the reasoning that leads rational players to choose their proposals in each round. Also, how much of its profits a coalition allocates to a member depends on her power in the coalitional game. Because bargaining power arises from a player's ability to contribute to coalitions, players may differ in bargaining power. The chapter takes differences in bargaining power for granted, and does not explain them or show how to derive them from a characteristic function. It just assumes that a coalition's members reach a bargain that complies with standards of individual rationality. In an ideal coalitional game, a coalition's members are rational, and if the coalition forms they divide profits in a rational way. This chapter does not explain the division they achieve. For example, suppose that in a particular case, realizing a strategic equilibrium requires the grand coalition's formation. Also, suppose that many divisions of the grand coalition's profits yield equilibria, and the grand coalition's members realize one by bargaining. They may, for instance, adopt Nash's asymmetric solution, which registers differences in bargaining power. This chapter explains the realization of the grand coalition, but not the division of profits it adopts. For definiteness, some examples assume certain divisions of profits, but the chapter does not explain those divisions.

Prior to the agreement game, players have an opportunity to coordinate pursuit of incentives in the agreement game and so settle its dynamics. They also have an opportunity to coordinate the realization of a strategic equilibrium. This chapter does not analyze the players' use of those opportunities. They are part of a larger game in which the agreement game is embedded. This chapter uses the agreement game to explain the outcome of the coalitional game it underlies, and does not analyze the agreement game's origin. It just assumes that players have a rational pattern of pursuit of incentives. The players' pattern of pursuit of incentives yields the strategic equilibria of an agreement game and the coalitional game it underlies. The players' coordination on a strategic equilibrium yields an outcome of the agreement game and the coalitional game. The players are comprehensively rational and rationally prepare for their coalitional game in ways that realize an equilibrium that is also a solution.
11.4 The Core and Utility Maximization
Strategic equilibrium in coalitional games is a generalization of the core. To illustrate its emergence from individual rationality, this section treats a special case. It treats an ideal coalitional game with a nonempty core in which individuals pursue all incentives. In this case, realizing a strategic equilibrium is equivalent to realizing an element of the core. The section shows that a core element's realization follows from individuals' joint rationality in the underlying agreement game and that their joint rationality in the agreement game follows from their individual rationality.

The argument for a core allocation has a simple structure. Suppose that the individuals in a coalitional game know the profile realized, and conditions are ideal for coalition formation. Then by maximizing utility individually in the underlying agreement game they also maximize utility jointly and thereby realize a core allocation. Other outcomes are incompatible with their knowledge, decision procedures, and opportunities. The following paragraphs elaborate this derivation of a core allocation's realization.

In ideal games, agents are prescient. Given prescience, an agent's strategy furnishes him information about the other agents' strategies. Using their own strategies as assumptions, agents obtain foreknowledge of the profile realized. Rationality requires a self-supporting strategy. In this section's ideal coalitional games, and in the agreement games that underlie them, agents pursue all incentives. Hence, a self-supporting strategy is a self-ratifying strategy. That is, it maximizes utility given its adoption. Realizing a self-ratifying strategy maximizes utility. Hence, rationality entails utility maximization. Furthermore, because individuals have foreknowledge of the profile realized, if all maximize utility, they also maximize utility jointly. That is, in the profile they realize, each strategy maximizes utility given the profile's realization. Self-support and joint self-support are equivalent, respectively, to utility maximization and joint utility maximization.

In a single-stage game, an agent's strategy has no causal influence on other agents' strategies. In the agreement games underlying coalitional games, communication and binding agreements are possible. One agent's strategy may influence another agent's strategy. The causal mechanism is observation of an agent's move in a round of proposals and a response to it in a subsequent round. If one agent were to change a proposal in one round, another agent may also change a proposal in a later round. The causal influence explains how an agent's switch in strategy may precipitate a coalition's switch in strategy.
in strategy may precipitate a coalition’s switch in strategy. Also, communication and self-enforcing agreements during a pregame period may affect the first round of proposals and their finality. Players may communicate information about themselves, such as their pattern of pursuit of incentives, to facilitate coordination in the first round. The players may reach a self-enforcing agreement on proposals prior to the agreement game. Compliance with the agreement in the first round may prompt them to finalize their proposals. An individual’s deviation from a profile is unilateral in a single-stage noncooperative game, where individuals’ strategies are causally independent. It need not be unilateral in the agreement game underlying a coalitional game. Individuals may instigate a multimember coalition’s formation. One individual may offer another individual a payoff increase so that she withdraws from a coalition and establishes a new coalition better for both. In the agreement game, because one individual’s strategy may causally influence another individual’s strategy, an individual’s deviation from a profile need not be unilateral. A Nash equilibrium is a strategy profile such that no individual profits from unilateral deviation. A profile may be a Nash equilibrium although an individual can precipitate and profit from a coalition’s deviation. The possibility of an individual’s strategy causally influencing another individual’s strategy makes Nash equilibrium insufficient for joint utility maximization in the underlying agreement game. An individual’s strategy may be utility maximizing given a profile of strategies, but not utility maximizing given other individuals’ responses to his other strategies. A rival strategy may precipitate other individuals’ deviations from their parts in the profile. An individual’s strategy switch may instigate formation of a coalition that profits him. Participation in a Nash equilibrium is utility maximizing assuming that others’ strategies do not change, but may not be utility maximizing, given the anticipation that a strategy change prompts changes in others’ strategies. Because of opportunities for joint action, individuals’ joint utility maximization in the agreement game entails realization of a core allocation. Joint utility maximization in the underlying agreement game, given the full range of options it offers individuals, including joining a coalition and bargaining in the coalition to achieve a favorable division of its profits, yields a core allocation. Given a profile in which individuals jointly maximize utility, none forgoes an opportunity to instigate a profitable coalition. Hence, the profile gives each coalition at least its value. No member passes up an opportunity to precipitate formation of a coalition if the coalition’s formation yields gains for all members. Some individual instigates formation of a coalition that moves the outcome into the core. The move benefits every member of the coalition. Suppose, by way of reductio, that individuals do not realize a core allocation. Then some coalition has an incentive to deviate. Some deviation provides gains for all the coalition’s members. Consequently, some member has an incentive to
instigate the coalition's deviation. So strategies in the underlying agreement game that realize the profile do not jointly maximize utility. Rational individuals therefore realize a core allocation.

To illustrate derivation of an element of the core from individual rationality, consider a cooperative version of the Prisoner's Dilemma that yields a coalitional game. The players may communicate and adopt binding agreements about playing the game that is represented by the payoff matrix in Table 5.1. The payoff from mutual failure to cooperate is one unit of utility for each player. A binding agreement to cooperate yields four units of transferable utility. Consequently, the characteristic function is: v(A) = v(B) = 1, v(AB) = 4. The coalition {A, B}, if it forms, equally divides its profits because the players are symmetrically situated. In the underlying agreement game, each individual submits a proposal for the pair's play of the coalitional game. For example, A may submit the proposal (Cooperate, Cooperate), the combination of A's cooperation and B's cooperation. The proposal (Cooperate, Cooperate) indicates the coalition structure {{A, B}}, whereas other proposals indicate the structure {{A}, {B}}. Each player's dominant proposal is (Cooperate, Cooperate). This proposal maximizes utility given any chance that the other player submits it too. Other proposals yield (1, 1), but proposing (Cooperate, Cooperate) has a chance of yielding (2, 2). The profile ((Cooperate, Cooperate), (Cooperate, Cooperate)) is jointly utility maximizing. It generates an agreement to realize (Cooperate, Cooperate), and in the coalitional game generates the coalition structure {{A, B}} and the core allocation (2, 2). Thus individual rationality generates an element of the core.

The profile ((Don't Cooperate, Don't Cooperate), (Don't Cooperate, Don't Cooperate)) is a Nash equilibrium in the agreement game. No agent profits from unilateral deviation. The profile is not jointly utility maximizing, however, because each agent can precipitate ((Cooperate, Cooperate), (Cooperate, Cooperate)) by switching his proposal from (Don't Cooperate, Don't Cooperate) to (Cooperate, Cooperate). Precipitating that profile changes the outcome from (1, 1) to (2, 2) and so benefits the instigator.
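The core reasoning in this example is simple enough to check mechanically. The following sketch is an illustration of the core's definition at work, not part of the book's argument; the function and the sample allocations are assumptions made for the example.

```python
# Core membership in the two-player coalitional game of the text:
# v(A) = v(B) = 1, v(AB) = 4. An allocation (xA, xB) is in the core if it
# exhausts v(AB) and gives each unit coalition at least its own value.
def in_core(xA, xB, vA=1, vB=1, vAB=4):
    return xA + xB == vAB and xA >= vA and xB >= vB

print(in_core(2, 2))  # True: the equal split the symmetric players reach
print(in_core(3, 1))  # True: also in the core; bargaining selects among these
print(in_core(4, 0))  # False: B gets less than v(B), so B deviates alone
```

As the second and third checks illustrate, the core only rules out allocations that some coalition can improve on; symmetry of the players' situations, not the core itself, selects the equal split (2, 2).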
11.5 Strategic Equilibrium and Self-Support
This section examines special cases of two-person bargaining and three-person majority-rule division. It shows that self-support in the underlying agreement games entails strategic equilibrium in the coalitional games. Then it shows that the entailment holds for other similar coalitional games. It illustrates the derivation of strategic equilibrium in coalitional games from individuals' adoption of self-supporting strategies, as conducted in Chapter 9.

Suppose that two individuals A and B must together decide how to divide two units of transferable utility in order to receive the two units. The characteristic function for this coalitional game is: v(A) = v(B) = 0, v(AB) = 2. In the game's
concrete realization, the individuals' situations are symmetrical, and (1, 1) is the outcome if they act jointly. The individuals, being rational, adopt self-supporting strategies. A's proposing (1, 1) is self-supporting. Given that proposal, B proposes (1, 1), and being prescient A knows this. A has no incentive to change his proposal to (2, 0), for example. He does not gain from the change unless B follows him. In this concrete realization of the game, B does not follow him, and A knows that.

The combination of proposals ((0, 0), (0, 0)) is a Nash equilibrium because neither individual gains from unilateral deviation. However, in this concrete realization of the game, A can precipitate the individuals' switching from ((0, 0), (0, 0)) to ((1, 1), (1, 1)) by switching from (0, 0) to (1, 1). Communication creates that pattern of causal influence before the agreement game starts. Imagine that B has entered a self-enforcing agreement to switch from (0, 0) to (1, 1) if A does. Because of A's causal influence on B's strategy, A has an incentive given ((0, 0), (0, 0)) to switch his proposal to (1, 1). Therefore his proposing (0, 0) is not self-supporting. He has a sufficient incentive to propose (1, 1) instead. A does not gain from switching his proposal from (0, 0) to (1, 1) unless B also proposes (1, 1), but B responds with that proposal. Although the combination ((0, 0), (0, 0)) is a Nash equilibrium, it is not jointly self-supporting.

Individuals A and B both propose (1, 1) in the first and second round of proposals. Each proposal is self-supporting. Because each individual anticipates the profile of proposals, their proposals are jointly self-supporting. The profile of proposals ((1, 1), (1, 1)) yields an agreement on the coalition structure {{A, B}} and the division (1, 1). So that division is the outcome of the individuals' bargaining problem. The profile yielding that division is a strategic equilibrium in their coalitional game. No coalition has a sufficient incentive to deviate from its part in that profile. In fact, no coalition has any incentive to deviate. Neither A nor B can switch to a division more gainful than (1, 1). For example, A cannot switch to (2, 0) without B's cooperation and in this concrete bargaining problem he does not receive it. He cannot instigate a switch to a division better for him than (1, 1). Similarly, the pair of individuals has no incentive to switch. Although the pair can switch to another division, it does not have an incentive to switch because a switch does not benefit each member.

A three-person game with majority-rule division of 6 units of transferable utility furnishes a second illustration of the derivation of collective strategic equilibrium from individually self-supporting strategies. In this case, the core is empty. Each coalition may achieve self-support without a core allocation because its members may achieve self-support without pursuing all incentives. The characteristic function for the coalitional game is: v(A) = v(B) = v(C) = 0, v(AB) = v(BC) = v(AC) = 6, v(ABC) = 6. In the underlying agreement game, an agreement ensues if at least two individuals steadfastly submit the same proposal. In the game's concrete realization, the coalition {B, C} pursues its incentive from ((4, 2, 0), (4, 2, 0), (0, 4, 2)) to ((4, 2, 0), (0, 4, 2), (0, 4, 2)). The last profile yields the agreement game's outcome, namely, {B, C}'s agreement on the division
(0, 4, 2). That division is the coalitional game’s outcome. In the game’s concrete realization, the coalition {A, C} does not pursue its incentive to switch from ((2, 2, 2), (0, 4, 2), (0, 4, 2)) to ((2, 0, 4), (0, 4, 2), (2, 0, 4)) so that the outcome changes from (0, 4, 2) to (2, 0, 4). Halting pursuit of incentives is rational for it and for its members. All comply with the selection and stopping rules. The individuals adopt self-supporting strategies. These strategies initiate no pursued incentives to switch strategies, given each individual’s knowledge of the response to his strategy. Individual A prefers many outcomes to (0, 4, 2) but cannot realize them. Individual C rebuffs his efforts to realize (2, 0, 4), for instance. Although C prefers (2, 0, 4) to (0, 4, 2), he does not pursue that incentive. Consequently, A cannot instigate a switch to (2, 0, 4). No individual has a sufficient incentive to deviate from his part in (0, 4, 2). Each individual’s strategy in the profile realizing that division is self-supporting. Strategies for the agreement game cover multiple rounds of proposals, but the last round settles the game’s outcome. Proposals in it are the salient components of the overall strategies. For brevity, I treat these proposals as the overall strategies’ representatives. Because the individuals have foreknowledge of the strategy profile they realize, namely, ((4, 2, 0), (0, 4, 2), (0, 4, 2)), each individual’s strategy is self-supporting given that profile, and so the strategies in the profile are jointly self-supporting. The strategies in ((4, 2, 0), (0, 4, 2), (0, 4, 2)), because jointly self-supporting, do not start for any individual a path of pursued incentives leading to another profile. No individual has a sufficient incentive to switch strategies given the profile. Even taking account of opportunities to instigate a coalition’s switch, none has a sufficient incentive to switch. Because no individual has a sufficient incentive to switch, no coalition has a sufficient incentive to switch. The coalition {A, C} has an incentive to switch to ((2, 0, 4), (0, 4, 2), (2, 0, 4)), but that incentive is not sufficient because C does not have a sufficient incentive to do his part in the switch. Because no coalition has a sufficient incentive to deviate from ((4, 2, 0), (0, 4, 2), (0, 4, 2)), none has a path of pursued incentives away from that profile. The profile is a strategic equilibrium in the agreement game, and its upshot, (0, 4, 2), is the outcome of a strategic equilibrium in the coalitional game. The outcome (0, 4, 2) is a consequence of the coalitional game’s concrete realization. In other concrete realizations of the game, agents pursue incentives differently or coordinate differently to select an equilibrium and so realize a different outcome. The general derivation of strategic equilibrium from self-support in ideal coalitional games follows the pattern of the examples. It begins with a concrete realization of an ideal coalitional game. The concrete realization includes a pregame communication period during which agents may reach self-enforcing agreements about joint action. The culmination is an agreement game in which agents make simultaneous proposals concerning a coalition structure and divisions of coalitions’ profits. The concrete realization settles the incentives
pursued in the agreement game and the game’s outcome. The agreement game’s outcome settles the coalitional game’s outcome. The derivation’s first step moves from self-support to joint self-support in the underlying agreement game. Each agent adopts a self-supporting strategy, namely, a proposal that does not start a path of pursued incentives. The agent’s incentives depend on his information about the other agents’ response to his strategy. Because the agent foresees other agents’ strategies, his strategy is self-supporting given the whole profile of strategies adopted. The same holds for every agent. Therefore the profile of strategies realized is jointly self-supporting. The second step moves from joint self-support in the agreement game to strategic equilibrium in the coalitional game. A profile of jointly self-supporting strategies in the agreement game yields a profile of the coalitional game. No coalition pursues an incentive to deviate because its pursuing an incentive requires a member’s instigation of pursuit of the incentive in the agreement game. A coalition’s path away from the profile in the coalitional game requires an individual’s path away from it in the underlying agreement game. Because the coalitional game’s profile arises from jointly self-supporting strategies of the agreement game, no individual has a sufficient incentive to instigate any coalition’s pursuit of an incentive to deviate. Therefore the profile is a strategic equilibrium of the coalitional game. In an ideal coalitional game realized by an agreement game, individual rationality generates a strategic equilibrium. Because, as this chapter illustrates, the rationality of players in a game leads them to a strategic equilibrium, a theory of collective rationality may require that a game’s solution be a strategic equilibrium. The principle of compositionality supports this requirement.
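The section's contrast between a nonempty and an empty core can also be verified mechanically. The sketch below is an illustration, not part of the text's argument; the grid search is an assumed method, and the algebraic point it confirms is that the three two-member constraints jointly demand more than the game's total value.

```python
# The three-person majority-rule game of Section 11.5: v assigns 0 to
# singletons and 6 to every coalition with two or three members.
from itertools import product

def in_core(xA, xB, xC):
    # Efficiency (xA + xB + xC = 6) is built into the search below.
    return (min(xA, xB, xC) >= 0   # each singleton gets at least v(i) = 0
            and xA + xB >= 6       # v(AB) = 6
            and xB + xC >= 6       # v(BC) = 6
            and xA + xC >= 6)      # v(AC) = 6

# Search divisions of the 6 units on a quarter-unit grid; none survives
# all coalitional constraints. (Summing the three pairwise constraints
# gives 2(xA + xB + xC) >= 18, i.e., 12 >= 18, which is impossible.)
grid = [i * 0.25 for i in range(25)]
core = [(a, b, round(6 - a - b, 2)) for a, b in product(grid, grid)
        if 6 - a - b >= 0 and in_core(a, b, 6 - a - b)]
print(core)  # []: the core is empty, as the text states
```

Because no division survives every coalition's constraint, stability in this game must rest on a condition weaker than the core; that is the role strategic equilibrium plays in the section's second example.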
12
Implications
A theory of collective rationality contributes to various philosophical projects. This final chapter highlights implications of the book's theory. It briefly applies the theory to social institutions such as the law and economic markets. Then it uses the theory to draw conclusions about game theory's relation to decision theory. Finally, it points out some research topics the theory suggests.

12.1 Social Institutions

Social institutions taken broadly include social practices, conventions, and customs. They form the background for social interaction. Collective rationality, in addition to morality, offers a method of evaluating social institutions. The method examines social institutions as instruments of collective rationality, that is, as instruments for achieving goals of collective rationality. Social institutions effective in achieving these goals are meritorious.1

Evaluation of a social institution depends on the institution's context, in particular, other social institutions. To put aside other social institutions' effect, one may evaluate together a society's ensemble of social institutions. Call that ensemble its social contract. A social contract is a comprehensive social institution that has the law as a component. Effective social institutions constitute a social contract that rational people may accept. Such a contract creates an environment in which people achieve goals of collective rationality to a reasonable approximation. Rational social contracts are the foundation of contractarian theories of justice and morality.2

Members of a society may change the terms of their interaction by introducing penalties, inducements, commitments, promises, contracts, and the like. These devices promote joint action. They may change a game from noncooperative to cooperative. In social interactions, people can change the game they play by changing their social institutions. In addition, they may influence each other's probability and utility assignments to their game's possible outcomes. People may use opportunities to alter other people's basic preferences, as well as their beliefs
and belief-derived preferences. Marketers may create a desire for a product. The new desire changes for a potential buyer the utilities of a bargaining problem’s possible outcomes. The bargaining problem occurs in a larger game with marketing moves. An analysis of a game assumes that some background is fixed. Design of social institutions to solve coordination problems is a large game in which only natural resources and human nature are fixed background features. Because cooperation is beneficial, individuals should prepare for collective action by acquiring cooperative dispositions. Similarly, social institutions should create an environment for cooperation among individuals. The theory of collective rationality assists the design of social institutions in two ways. First, the theory presents goals of collective rationality, such as efficiency, that ideal institutions realize. Second, the theory predicts the outcome of a social institution under the assumption that people operating within it approximate collective rationality. Having specified the outcomes of possible social institutions, it may recommend one that achieves a desired outcome. That outcome may be a goal of morality rather than a goal of collective rationality, for example, an equitable distribution of the cost of a public good. The goals of collective rationality do not exhaust the objectives of social institutions. Ideal agents who are comprehensively rational may dispense with social institutions. They achieve efficiency through preparation for joint action. They proceed directly to the results of having ideal social institutions. Social institutions help humans approximate the interactions of ideal agents. Human societies have imperfect control of social institutions, and so often cannot easily implement a design’s recommendations. A design not implemented may nonetheless suggest implementable improvements. It may indicate how to adjust features of social institutions so that the outcomes the institutions produce approximate a goal of collective rationality. An analysis may predict the outcome of a modified institution by investigating the interaction of rational agents within that institution. If an institution is deficient with respect to goals of collective rationality, a modification may advance individuals rationally interacting within it toward goals of collective rationality. Furthermore, an institution may compensate for human irrationality. It may yield outcomes that meet goals of collective rationality despite the irrationality of individuals and groups. In an interaction problem, it may promote a solution that the participants fail to see. Meeting standards of collective rationality is a requirement, but it demands less than meeting goals of collective rationality demands. Standards of collective rationality adjust to circumstances and in adverse circumstances demand little. People may meet the standards of collective rationality without attaining goals of collective rationality if their circumstances are not ideal for joint action. Members of a society are responsible for arranging their society so that life within it approximates the goals of collective rationality as much as circumstances allow. This is not their sole, or even their weightiest responsibility, so they may be excused for failing to reach the best attainable approximation to those goals.
They need only take reasonable steps toward the goals. They should be collectively rational in their pursuit of goals of collective rationality. In an ideal human society, social institutions are collectively rational only if they achieve the goals of collective rationality, assuming that meeting these goals is not incompatible with meeting other reasonable goals. In a nonideal society, a social institution may be collectively rational even if it falls short of the goals of collective rationality. Actual societies are not ideal and may rationally have social institutions that do not achieve those goals. Standards of collective rationality are attainable and adjust in light of the limits of nonideal societies. Nonetheless, within those limits, good social institutions promote the goals of collective rationality. An account of collective rationality suggests ways of improving social institutions so that collective rationality within them better approximates those goals. The remainder of this section points out ways that social institutions promote goals of collective rationality. It identifies goals relevant to a social institution’s evaluation without attempting an overall evaluation of the institution. Its objective is just to indicate the bearing of a theory of collective rationality on evaluation of social institutions. Some institutions promote attainment of goals of collective rationality by making conditions favorable for joint action. Serving this purpose by facilitating binding agreements are the law of contracts, the courts, and the custom of promise keeping. Some institutions encourage joint action by facilitating associations and organizations; for example, the laws for forming corporations do this. Other institutions promote communication and knowledge about possibilities for joint action. The postal service, the Internet, and the media all assist joint action. Some institutions, such as a common language and money, facilitate trade and reduce the costs of approximating goals of collective rationality. Conventions such as rules of the road, rules of etiquette, and customs for doing business streamline coordination. They generate productive joint action when communication is impossible, impractical, or costly. Because joint acts are collectively rational if their participants are rational, social institutions that promote individual rationality, such as the educational system, also promote collective rationality. Some institutions promote coordination by promoting comprehensive rationality and preparation for coordination problems. Culture does this by promoting dispositions to see coordination problems as others do so that focal points emerge. It also promotes patterns of pursuit of incentives that yield efficient strategic equilibria. Using the law, governments intervene in cooperative dilemmas concerning pollution to secure public goods such as clean air, and assign property rights to achieve efficient outcomes. Some hold that the law should not be content with efficiency but should maximize social utility. Maximizing social utility and achieving efficiency diverge when society gains from practices that impose costs on a few. Finkelstein (2004: 401, 415) holds that laws should aim just for mutual
benefit, or efficiency. She notes that collective acts are realized by individuals' acts, and that each individual involved must have a reason to participate. Hence collective acts that informed individuals endorse bring each participating individual a benefit. Collective rationality agrees with this contractarian approach to the law. A social institution is collectively rational if it may result from rational action by each member of society. This follows from collective rationality's compositionality.

Governments make economic markets competitive to promote efficiency, a goal of collective rationality. Welfare economics shows that competitive markets achieve efficiency. Individual pursuit of preferences in a perfectly competitive, isolated market yields an efficient outcome according to the First Fundamental Theorem of Welfare Economics. Individuals making profitable trades in such a market produce an efficient allocation of goods. As Pindyck and Rubinfeld (1989: 570) explain, if each person trades in the marketplace to maximize her satisfaction, and all mutually beneficial trades are completed, the resulting allocation will be efficient in the sense that no alternative allocation yields gains for some without losses for others. The theorem by itself does not justify making markets competitive. Not every means to a goal of collective rationality need be collectively rational, and, in particular, not every component of a means to a goal need be collectively rational. The competitiveness of markets is just one component of the theorem's means of attaining efficiency. Competitiveness may not generate efficiency if the theorem's other idealizations are not met, for example, if individuals fail to maximize utility.3

A society may realize improvements for all if it establishes a collective preference ranking of social policies. Such a ranking ensures that collective action in accord with the ranking does not wastefully cycle through policy alternatives. The goal of a collective preference ranking counts against the social institution of majority rule because that method of ranking social policies does not ensure a collective preference ranking. As the paradox of voting shows, majority rule may generate an irrational cycle of collective acts.

Majority rule produces acyclic collective preferences if voters have single-peaked preferences. May it be backed by institutions that promote such preferences? It is not a goal of collective rationality to have single-peaked preferences even if having them ensures realizing a goal of collective rationality. Alternative means of meeting the goal exist. Not all conditions sufficient for achieving a goal are acceptable themselves. Dictatorship also achieves acyclicity of social preferences but is too drastic a remedy. Efficiency does not supply a strong case for majority rule. Independent reasons support it, however. Morality may support majority rule as an approximation to maximizing social utility. Statistics may deem it more reliable than an individual's judgment.4 Individual rationality may support majority rule in a series of collective action problems if one anticipates being a part of the majority most of the time. Individual rationality may support it in a single case if it forms a
means of coordinating on a joint act, and each individual prefers joint action to inaction.5

As considering markets and majority rule shows, collective rationality endorses some, but not all means to goals of collective rationality. Does it support sharing information? Sharing information grounds a collective probability assignment with respect to which rational collective acts may maximize collective (expected) utility. Institutions that aid information sharing improve conditions for attainment of the goal of collective utility maximization, which Section 4.3 restricts to favorable cases not requiring self-sacrifice. Sharing information serves this restricted goal, but must be restricted itself to prevent conflict with other goals of collective rationality such as efficiency. A free market promotes efficiency, but discourages information sharing. Willingness to buy or sell a stock indicates information about its value and information about future events that affect its value. Profits from a transaction are a motive to enter the market for the stock and therefore to reveal information about the stock's value. However, an unregulated market also impedes information sharing, because information is an advantage in the market. The laws against insider trading help offset the advantage of private information and so promote disclosure of information. A theory of collective rationality devises such mechanisms for sharing information in ways that do not impede collective rationality in trading. It guides regulations of accounting practices and the information investors receive. Good regulations remove incentives for companies to conceal information about their financial viability. To eliminate incentives for executives to manipulate earnings to generate overvaluation of stocks, Jensen (2005) advocates corporate disclosure of business strategies and also communication between financial analysts and short sellers.

Sharing information is a social goal, but people have incentives to keep information private. A general issue is how to induce citizens to reveal their preferences when that is beneficial. Suppose that a government considers whether to produce a public good using citizens' tax dollars. It would like to assess the public good's value to each citizen. To induce citizens to reveal the value to them of a public good requires ingenuity. A branch of economics called implementation theory investigates elicitation of such information. Results show how the government may obtain reliable information by paying for it.6

In some bargaining problems, the bargainers do not know the shape of the bargaining problem in utility space. A buyer and seller may not know where their bargaining curve crosses the horizontal and vertical axes. Neither knows the other's reservation price. The buyer and seller have incentives to keep private their reservation prices but then risk a breakdown of negotiations. Raiffa (1982: 58–65; 2002: 86) emphasizes the value for productive negotiations of full, open, truthful exchange of information, and presents a method of eliciting reservation prices.

Auctions are games of asymmetric information. A bidder does not know how much other bidders are willing to pay for an item on the block. Suppose that an
art dealer is auctioning a painting from his collection. Except for unit coalitions, the only possible coalitions have two members, a bidder and the auctioneer. The auctioneer tries to induce bidders to reveal their willingness to pay for the painting. How may one design an auction so that bidders reveal their reservation prices? A Vickrey auction, or second-price sealed-bid auction, does this by asking the winner to pay the second highest bid rather than his own bid.7

In a game where agents have asymmetric information, some agents have incomplete information about their game. For instance, an agent may lack information about other agents' utility assignments. The theory of games with asymmetric information treats collective rationality in cases with obstacles to sharing information. Future research may extend this book's treatment of ideal coalitional games to nonideal games with asymmetric information. This extension assists the design of social institutions that achieve goals of collective rationality.8
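The incentive property just attributed to the Vickrey auction is easy to exhibit. The sketch below is illustrative rather than an implementation from any cited source; the bidder names and valuations are invented for the example.

```python
# Second-price sealed-bid (Vickrey) auction: the highest bidder wins but
# pays the second-highest bid. A bid thus fixes whether one wins, not the
# price one pays, which is why revealing one's reservation price is safe.
def vickrey_outcome(bids):
    """bids: dict mapping each bidder to a bid. Returns (winner, price)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, price

bids = {"bidder_1": 120.0, "bidder_2": 95.0, "bidder_3": 80.0}
print(vickrey_outcome(bids))  # ('bidder_1', 95.0)

# Overbidding risks winning at a loss; underbidding risks losing a
# profitable purchase; neither changes the price conditional on winning.
# Hence bidding one's true reservation price is weakly dominant.
```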
12.2 Strategic Equilibrium and Institutions
Social institutions promote various goals of collective rationality. A traditional goal is achieving a core allocation. Strategic equilibrium generalizes that goal for the sake of attainability. Although strategic equilibrium’s significance is primarily philosophical, it governs collective behavior in various practical settings. Replacing the core with strategic equilibrium alters the theory of coalitional games’ application to social institutions. This section points out implications of the replacement. Replacing the traditional goal of a core allocation with the goal of a strategic equilibrium has two general consequences. First, it presents a goal for coalitional games with empty cores. Second, in coalitional games with nonempty cores, it presents alternatives to a core allocation. Rational agents reach a strategic equilibrium that depends on the incentives they pursue. Their strategic equilibrium need not be a core allocation. This section applies the book’s account of strategic equilibrium to social contracts and economic markets. Society is a cooperative venture. The establishment of a social contract is a coalitional game. The theory of strategic equilibrium articulates the type of social contract that members of a society may endorse. The contract should achieve a strategic equilibrium. A core allocation is efficient, but a strategic equilibrium need not be efficient. In a nonideal game with an empty core, a coalition may bring about an inefficient outcome because the coalition profits from the outcome. Even if all players prefer an efficient outcome, efficiency may be unstable. Imagine a social contract creating the basic structure of a society, such as the law and the machinery of government. What prevents a contract that excludes some people from the benefits of society? A boutique hospital excludes indigent patients so that it may use revenue to attract patients who pay. It offers better care
than a full-service hospital that uses patients' payments to subsidize care of the poor. Similarly, in some circumstances a society's rationally adopted social contract may exclude some members from the society's benefits. A rational social contract may make people with disabilities second-class citizens. A social contract justified by collective rationality may not benefit all its members. Rationality acknowledges the possibility of exclusion. Morality supplements rationality's evaluation of a social contract. However, given ideal conditions for adoption of a social contract, rationality independently reduces the danger of exclusion. A rational society takes steps to include all who can contribute to society. Their contributions may come in many forms and need not be economic. Each contributor's inclusion increases the benefits the society may distribute. Inclusion promotes efficiency. Section 10.4 shows that comprehensive rationality leads agents in an ideal coalitional game to an efficient strategic equilibrium. Agents who are comprehensively rational prepare for a coalitional game so that their pursuit of incentives makes some strategic equilibria efficient. Then they coordinate to select an efficient strategic equilibrium.

A society's social contract is in flux. Social institutions that the contract establishes affect the contract's evolution. Effective social institutions promote inclusion of potentially productive members of society. They promote comprehensive rationality and prepare individuals to pursue incentives so that they realize an efficient strategic equilibrium. A rational social contract shapes social institutions to prevent exclusion that would otherwise be rational. Well-designed social institutions create a framework in which rational interaction yields an efficient social contract.

Besides shaping a social contract, an account of strategic equilibrium also shapes components of a social contract. For example, it bears on the design of a federal regulatory agency, such as the Food and Drug Administration. A regulatory agency generally uses expert information to serve the public's basic goals, for example, the goal of safety. It adopts a policy on behalf of the public. In a representative democracy, it seeks the policy that the public, if informed, would reach on its own in ideal negotiations.9 Such negotiations create a coalitional game. The core is often empty, so the outcome is not a core allocation. Good regulations are the product of a strategic equilibrium rather than a core allocation. An account of strategic equilibrium fills out the theory of regulation.

Politics yields other applications of the theory of strategic equilibrium. Schofield (1995) analyzes the stability of democratic institutions, such as political coalitions. He presents several theorems concerning the nonemptiness of the core of certain voting systems. A reformulation of his results may replace the core with strategic equilibria. Political coalitions may reach strategic equilibria outside the core. Using strategic equilibrium instead of the core generalizes an account of institutional stability.

Replacing the core with strategic equilibrium also affects the theory of markets. The classical theory takes the outcome of a rational market to be a core allocation.
Shifting from the core to strategic equilibrium has implications for a market's design and for predictions of a market's outcome. General equilibrium theory offers exquisitely detailed results concerning the core of markets. Basic results concern competitive equilibrium in an exchange economy. In an exchange economy agents trade goods, but do not also produce goods. An economy is competitive if trade is free and informed, and no agent in the economy significantly affects prices, as in an economy with many participants. A competitive equilibrium holds if prices are market clearing so that supply and demand balance. General equilibrium theory establishes these points. (1) A competitive exchange economy guided only by prices (in which agents have well-behaved, convex, and continuous utility functions for goods) has a competitive equilibrium that is market-clearing for all goods (except perhaps free goods, which may be in excess supply in the equilibrium). (2) A competitive-equilibrium allocation is in the core. (3) In a competitive exchange economy with many consumers, if the endowment of each consumer is negligibly small relative to the overall endowment of the economy, then the set of core allocations is nearly the same as the set of competitive-equilibrium allocations. As the number of consumers becomes infinite, an allocation is in the core if and only if it is competitive. This last claim is known as Edgeworth's proposition (1881).10

General equilibrium theory relies on idealizations about economies and progresses by relaxing those idealizations to attain greater generality. Replacing the core with strategic equilibrium allows the theory to gain generality by removing assumptions that ensure that the core is not empty. Strategic equilibrium offers a means of extending results to economies with empty cores. Also, strategic equilibrium is a new type of equilibrium economies may achieve. The set of strategic equilibria is larger than the core. So in a large economy some strategic equilibria are not competitive equilibria. Alternative forms of stability may emerge.

To illustrate treatment of small markets with empty cores, imagine a case of majority-rule division of $6 among three voters. Each voter may sell her vote for a price up to $6. Each voter may buy another's vote for a price up to $6, thereby securing the money available for division. No market-clearing price exists because after one voter buys another's vote, no voter gains from buying the remaining vote; two votes are enough to secure the prize. Because this small market in votes has no core allocation, a competitive equilibrium is impossible. Nonetheless, a strategic equilibrium is possible.

12.3 Theoretical Unity

A unified theory has general principles attuned to the relevant features of various cases. Its components are mutually supportive and cohere well with each other. The book's theory of collective rationality contributes to a unified general theory of rationality. The general theory unifies the branches of game theory, and unifies game theory and decision theory.
The book’s general theory refines accounts of solutions to games of strategy by analyzing joint rationality and other requirements of collective rationality. Solutions to games agree with agents’ collective rationality in ideal cases. Comprehensively rational agents realize a solution to an ideal game. Game theory and the theory of collective rationality contribute to each other. Game theory illuminates collective rationality, and the theory of collective rationality verifies solutions to games. The chapters on games of strategy derive equilibria for cooperative and noncooperative games from a single set of basic principles attuned to differences among games and agents. The principles’ applications to the two types of game differ only because the two types of game offer different opportunities for joint action and involve different types of agent. In cooperative games, the principles accommodate joint action and recognize coalitions as agents. The principles require the same type of equilibrium in both types of game, adjusting for the type of agent involved. They require strategic equilibrium in noncooperative games, taking individuals as agents, and require strategic equilibrium in coalitional games, taking coalitions as agents. Noncooperative sequential games realize coalitional games. The chapters on games show that equilibrium in a coalitional game arises from equilibrium in an underlying sequential game. The chapters’ account of equilibrium and its origin unifies theories of cooperative and noncooperative games. A unified theory of cooperative and noncooperative games may analyze complex situations involving the simultaneous resolution of games of both types. For example, an industry and a regulatory agency may negotiate about standards for workplace safety. Within each group, individuals may pursue personal interests so that noncooperative games occur within the cooperative negotiation game. Raiffa (2002: Chap. 25) reviews an actual situation of this type. It arose during negotiations concerning the United States’ transfer of the Panama Canal to Panama. The two parties to the negotiations experienced internal divisions. Consequently, negotiators resolved cooperative and noncooperative games simultaneously. A general theory of rationality treats all agents whether individual or collective. In a unified theory, general principles explain principles specialized for individuals and for groups. A basic principle requires responding to a sufficient reason. An incentive is a reason for an individual, and an incentive that all members of a group share is a reason for the group. A unified general theory uses the same basic principles for individuals and for groups and, moreover, establishes the consistency of their applications to both types of agent. Applications demand consistency because individuals compose groups. Individuals’ acts must be consistent with the acts of groups they form. Principles applied to individuals take account of membership in groups and opportunities for joint action. The book’s general theory adopts the principle of compositionality for individuals and groups. That principle makes an act’s evaluation depend on the agent’s type of control. Using the principle, the theory derives collective rationality from individual rationality
and establishes their consistency. It unifies the theories of collective and of individual rationality.

The chapters on game theory treat collective principles concerning equilibrium in a coalitional game. The chapters show that strategic equilibrium in a coalitional game follows from strategic equilibrium in the underlying sequential game. Then they show that strategic equilibrium in the underlying sequential game follows from individuals' compliance with the principle of self-support. They derive a necessary condition of collective rationality from a necessary condition of individual rationality. Grounding a coalitional game's equilibrium in principles for an individual's rational pursuit of incentives unifies game theory and decision theory. It shows that decision theory's principles of rationality for individuals explain game theory's principles of collective rationality.11 This book provides a decision-theoretic foundation for game theory using standard idealizations about agents' informational resources and cognitive abilities. It succeeds because its generalization of the principle of utility maximization, the principle of self-support, fits game theory better than the standard principle of utility maximization does. Game theory needs the more general principle of self-support to explain rationality's attainability in games.

Theoretical unity has epistemic as well as explanatory rewards. It promotes discovery of principles of rationality. Although principles for individuals are normatively fundamental, principles for individuals and for groups have a symbiotic epistemic relationship within a unified general theory of rationality. Rationality for coalitions follows from rationality for individuals, joint rationality for coalitions follows from joint rationality for individuals, and joint rationality for individuals is just each individual's rationality given all individuals' acts. A principle of collective rationality may have implications about principles of individual rationality. The attainability of collective rationality is a good reason for strategic equilibrium among coalitions and, via the unity of rationality, good epistemic support for strategic equilibrium among individuals also. Strategic equilibria for individuals and for groups reinforce each other. Argumentatively, the standard for individuals supports the standard for groups, and vice versa. Reflection on coalitions suggests the standard of self-support, and, because of rationality's unity, suggests it for individuals too. A general principle's success with one type of agent promises success with the other type also. Because of rationality's unity, applications of principles of rationality to individuals and to groups provide mutual epistemic support, at least in ideal cases where a principle's application to groups follows from its application to individuals.

12.4 Future Research

Collective rationality is rationality's application to groups. Chapter 1 asked two questions about it. What are collective rationality's standards, and how do groups meet them? In ideal conditions standards of collective rationality include strategic
equilibrium and joint rationality. According to compositionality, groups meet standards of collective rationality if their members are rational. The book’s general theory of rationality suggests various directions for future studies of rationality. This section mentions a few projects that advance the theory’s principal themes. The theory establishes these points. (1) Efficiency is a goal of collective rationality. It is not a standard of collective rationality in cases where conditions are not ideal for joint action. Standards of rationality are sensitive to circumstances such as inability to communicate. (2) Rationality is attainable. Strategic equilibrium generalizes Nash equilibrium and the core to form attainable standards of collective rationality. (3) Principles of evaluation for rationality distinguish acts directly controlled from acts not directly controlled. First principles evaluate the elements of control, namely, acts directly controlled. Evaluation of indirectly controlled acts proceeds by examination of their components. Groups act only indirectly through their members. Evaluation of their acts depends on evaluation of their members’ acts. (4) Collective rationality follows from individual rationality. That is, the rationality of a group’s members suffices for the group’s rationality. However, being a member of a group may change the acts that are rational for an individual. Members of a group often have opportunities for coordination and cooperation. (5) The standard of strategic equilibrium in coalitional games is a nonbasic standard that depends on technical definitions of a coalition’s options and incentives. First principles verify the standard in ideal coalitional games. The product of individual rationality in an ideal coalitional game is a strategic equilibrium. In favorable conditions, individuals’ comprehensive rationality suffices not only for attainment of standards of collective rationality but also for attainment of goals of collective rationality such as efficiency. (6) Achieving a solution to a game is a standard of collective rationality in ideal cases. If all players are fully rational, they realize a solution. If they do not realize a solution, they are not collectively rational. Future research may pursue these points further. Besides efficiency, information acquisition and sharing may be goals of collective rationality. Necessary conditions of collective rationality may complement the sufficient condition compositionality expresses. The unification of game theory requires extending results from elementary coalitional games to other cooperative games, for example, coalitional games without transferable utility. Also, a theory of games gains realism by removing restrictions and idealizations. It extends its analysis of solutions to nonideal games in which players have incomplete information. A treatment of nonideal games may adopt a subjective account of solutions that accommodates a player’s ignorance about other players, for instance. The definition of strategic equilibrium anticipates this extension. A strategic equilibrium depends on subjective incentives that are sensitive to information, and so it is a suitable necessary condition for a subjective solution. In addition, a complete
theory of games deepens its account of an agent’s behavior in a coalitional game by explaining the agent’s pattern of pursuit of incentives. Building a unified general theory of rationality that covers both individual and collective rationality yields many discoveries about rationality, and opens new paths for exploration.
Notes
Chapter 1

1. A general theory of rationality applying to individuals and groups is a goal of other authors, too. Copp (1995: Chap. 8) has this goal. Nida-Rümelin (1997) advances a broad theory of rationality that combines instrumental rationality, rationality of intentions, and collective rationality. Many theorists support unification of decision theory and game theory. Spohn (1982), for instance, argues that game theory is a specialization of decision theory. I build on the work of these authors.
Chapter 2

1. Although I claim that agents cause events, I do not endorse the view that causation by agents is independent of causation by events. An agent's being a cause of an event may reduce to events involving the agent being a cause of the event. Also, suppose that a decorator moves a vase. She causes the vase's movement. She performs the act of causing the vase's movement. Her causing the vase's movement is an event. Does she cause this event? If so, does she cause that causing, and so on? Perhaps the decorator causes multiple causings when she moves the vase. This chapter puts aside such issues concerning agency, and treats only points important for the book's theory of collective rationality. Watson (2004), for example, has a more thorough account of agency.
2. Suppose that in a person with multiple personalities, the personalities are autonomous and operate simultaneously, and not just serially. Then the person acts as a group does. I also assume that this case does not arise. The problem of the unity of consciousness asks why multiple experiences may belong to the same person. What features make them experiences of the same person? I assume that a person has a unified mind without tackling this deep issue.
3. To support this point, suppose that a bolt of lightning strikes the primordial ooze and creates an exact duplicate of a person. The creature moves a finger and then dissolves back into the ooze. Is that momentary swampman a free agent? He has no beliefs or desires despite having duplicates of belief and desire states. Genuine beliefs and desires, because of external individuation of their content, require an agent's having causal interactions with its environment. Those causal interactions give mental states content, Dretske (1988) argues. The swampman's belief and desire states, lacking a history of causal relations to his environment, lack content. They are not genuine beliefs and desires. Because the swampman exists only a moment, he lacks psychological integration. Not having a unified mind, he does not act freely.
4. The problem of personal identity asks which person-stages belong to the same person. For some responses, see Parfit (1984) and McMahan (2002). Their answers invoke psychological continuity. A similar problem of identity arises for a collective agent. Because a group lacks a mind, psychological continuity is not the criterion of a collective agent's identity over time. I put aside the problem of specifying the criterion.
5. Searle (2001: Chap. 3) argues that agency requires a self, unified and extended. He means human agency and does not mean to exclude the agency of groups that are not selves but are composed of selves.
6. If the people in China function as the neurons of a brain that believes 2 + 2 = 4, then perhaps the group believes 2 + 2 = 4, although it is not conscious and does not know that it has the belief. Perhaps a group may have desires, too, without being able to sense pleasure or pain. May a group have a unified mind despite insentience and lack of consciousness? Block (1991) addresses such issues. If the individuals who compose a collective agent also form a unified mind, puzzles arise. Suppose that a nation elects a candidate as a result of the votes of citizens. Suppose that their acts duplicate the acts of neurons of a brain deciding not to elect that candidate. Does the nation decide not to elect the candidate while it elects the candidate? Is it then irrational because it acts contrary to its decision? Perhaps the nation only realizes a decision state and not a genuine decision, with externally individuated content, able to prompt action. Or perhaps its decision is not autonomous. Or perhaps the group of citizens differs from the agent whose decision they realize. If the nation's decision is irrational, whereas the acts of all voters are rational, then the voters realize two agents, one collective and one noncollective. The voters realize the decision-maker, a simple agent, but compose the electorate, a nonsimple agent. Their acts constitute the act of the electorate and realize, but do not constitute, the decision.
7. It is convenient to represent acts with propositions because deliberation considers possible acts. Realizing a possible act is realizing the proposition that represents the act. Because propositions individuate and represent acts, theorists sometimes say that acts are propositions, but strictly speaking acts are concrete and not abstract.
8. Ginet (1990: 70–71) states that various views about the nature of acts find support in our ordinary talk about acts. A philosophical treatment of acts should thus specify its understanding of acts. I state my view but acknowledge the viability of other views. Propositions represent acts. Propositions have structure and may have abstract and concrete components, for example, a concrete subject and an abstract predicate. Because acts are objects of evaluation, their evaluation is relative to a way of grasping them, as Weirich (2004: App. A) explains. Strictly speaking, an act's representation is a proposition under a mode of presentation. For simplicity, a standard of evaluation may select a constant way of grasping each proposition using some canonical sentential expression of the proposition.
9. Tuomela (1995: 142, Chap. 5) observes that a group with an authority structure may perform an act without the participation of all members. A nation may adopt a trade agreement if its congress ratifies the agreement. A group may delegate authority to a single individual who may then act on its behalf without other members' participation.
10. Although some individualists deny the existence of collective acts, the reduction of collective acts to individual acts is compatible with the existence of collective acts. Ockham's razor does not eliminate them, granting that they underwrite convenient, shortcut methods of describing and evaluating the behavior of multiple individuals.
11. Bermúdez (2002) considers rationality's extension to animals. I do not pursue that extension, although it may reveal important points about rationality's essential features.
12. Fischer (1994) examines free will and control. He describes a type of control that is compatible with determinism.
13. Joyce (1999: 57–9) applies decision theory to acts that are exercises of the will.
14. An act is momentary if from a practical perspective it is as if instantaneous.
15. Pollock (2002) argues that basic acts do not exist, and so cannot constitute the option set for an application of utility maximization. He applies utility maximization to conditional policies. Rather than discuss his complex proposals in detail, I shelve the issues he raises by treating only ideal cases in which basic acts exist.
16. Scanlon (1998: 248) states that a person has attributive responsibility for an act if the act is appropriately taken as a basis of moral appraisal of the person. Normative responsibility is similar to attributive responsibility but signals evaluability of an act, not an agent, and evaluability for rationality, not morality.
17. Even if the act is not evaluable for unconditional rationality, it is evaluable for conditional rationality. It may be rational given the agent's beliefs and desires. Section 3.3 explains conditional rationality.
18. Searle takes free acts to be co-extensive with acts done for a reason and claims that acts open to evaluation for rationality are acts done for a reason, that is, free acts (2001: 16–17, 83–84, 201–2).
19. Some theorists distinguish an act performed unintentionally from a nonintentional act. Applying this distinction, a speaker's contradicting himself may be unintentional, but yet an intentional act because it is a product of intentions. This defense of the view that exactly intentional acts are evaluable for rationality makes too many acts intentional. It makes alerting the prowler an intentional act. Also, it does not handle acts of perfect agents.
20. Rationality's evaluation of a group's act considers whether the group's members were ignorant of contributing to that act. Attending to members' knowledge replaces attending to technically defined collective intentions.
21. Rescher (2003: Chap. 11) agrees with this point. He claims that a group acts wrongly only if some member acts wrongly.
CHAPTER 3

1. Rescher (1988) offers a book-length account of rationality.

2. For example, an act's being rational may differ from its being justified. Justification adjusts to an agent's cognitive abilities less than rationality does. A child's ill-considered act may be excused and rational without being justified.

3. Kolodny (2005) distinguishes objective rationality, which depends on external reasons, and subjective rationality, which depends on internal coherence. Rationality in its ordinary sense is dependent on internal reasons, but it requires more than coherence.
4. McNaughton and Rawling (2004: 124–25) catalog unclarities.

5. Von Neumann and Morgenstern ([1944] 1953: Sec. 2.1) seek but do not find a general mathematical definition of rationality. They note that utility maximization is not precisely defined. It depends on options, information, and the formula for an option's utility. Moreover, it has an indefinite range of application.

6. Some theorists adopt a technical definition of utility from which utility maximization follows. A normative principle of utility maximization takes utilities to be conceptually independent of choices. For example, it may take them as rational degrees of desire. Probabilities and utilities may be conceptually independent of choices and yet inferable from choices using methods traditional representation theorems employ. To increase those methods' scope, one may use a person's evidence and his choices together to infer his probability and utility assignments.

7. Hooker and Streumer (2004) review the debate about instrumental rationality's exhaustiveness.

8. Sober and Wilson (1998: 240) say that directives of morality are universalizable, whereas directives of rationality target a particular agent. However, rationality may universally prohibit a pure time-preference, and morality may direct a parent to give priority to caring for his or her own children. Rationality and morality differ in the scope of their evaluations and the reasons they entertain rather than in the form of their principles.

9. Rovane (2004: 321) says that standards of rationality apply to those who think that they ought to comply with them. As I interpret her, she means that only the acts of reflective agents are evaluable for rationality.

10. If there are multiple equilibria, each identifies possible worlds in which agents are fully rational. The union of the sets of worlds they identify forms the set of worlds in which agents are fully rational.

11. Gibbard (2003: 56) notes that a good plan is realizable: "An important requirement of consistency for a plan is that it must not rule out every alternative open on an occasion." Being rational shares this virtue of a good plan.

12. Treating possible acts as possibilia is convenient, but claims about hypothetical conditionals may replace claims about these possibilia.

13. Epistemologists, such as Bergmann (2006: Chaps. 1, 2), debate whether an agent's mental states may justify an inference although the agent is unaware of those states and whether not being aware of them excuses not drawing an inference they would otherwise require.

14. Not every assumption qualifies as an idealization. By assumption, the scope of a decision principle may be just the single decision I make at 12:00 A.M. January 1, 2009. Suppose that the principle declares rational the first option that pops into my head. Imagine that I adopt that option. By luck, it is rational to adopt. The principle, having narrow scope, faces no counterexamples. It is not explanatory, however. Its restriction to a single time does not control for an explanatory factor and so is not an idealization.

15. Young (1998) treats the deliberations of modestly rational agents. As a realistic alternative to optimization, Pollock (2006) presents a decision procedure he calls locally global planning.
16. Some foundational issues remain unsettled. Sober and Wilson (1998) survey the continuing debate about psychological egoism, for example.

17. A decision is the formation of an intention to act. Intentions may be firm or not firm. A decision may be defective because although it yields a reasonable intention, the intention is not sufficiently resolute. Should decision principles say not only which intentions to form but also how firmly to form those intentions? I thank Trent Dougherty for raising this question. If firmness is a property of an intention, then a fully rational ideal agent not only makes a reasonable decision but also makes it in a reasonable way. However, firmness may not be a property of an intention but of adherence to an intention. Then the assumption that a fully rational ideal agent adheres to an intention in a reasonable way already covers the firmness of her intention.

18. Jackson and Pargetter (1986) treat a type of conditional obligation that obeys the rule of detachment. It is analogous to the type of conditional rationality that does not grant mistakes. Broome (2000: 203–4; 2002: 92–95) examines rational requirements that do not obey the rule of detachment. Suppose that one intention rationally requires another, and that a person has the first intention. Nonetheless, the second intention may not be rationally required because the first intention is not rationally required. A conditional rational requirement is analogous to the type of conditional rationality that grants mistakes. That type of conditional rationality is similar in structure to conditional probability. Just as conditional probability is neither the probability of a conditional nor the probability of a conditional's consequent, that type of conditional rationality is neither the rationality of a conditional nor the rationality of a conditional's consequent. Using the terminology in Kolodny (2005: Sec. 1), the requirement may have neither wide nor narrow scope.

19. In the example, a rational person can decline tomorrow's offer. Thus, he can effectively intend today to decline tomorrow's offer and afterward realize his intention. He can perform this two-step sequence. However, because he will not decline tomorrow's offer, he cannot perform the first step in this sequence. Is this consistent? If a person can perform a sequence of acts, can't he perform each act at the time for it? An agent's abilities are sensitive to context. Utility maximization uses the act evaluated to set the context. Evaluating today's act assumes tomorrow's act as background for today's options. Evaluating a sequence with an act today and an act tomorrow removes that background assumption so that today's options expand. Taking context into account, the example is consistent about a rational person's abilities. The person cannot effectively intend today to decline tomorrow's offer because tomorrow, as he knows, he will accept the offer. However, he can effectively intend today to decline tomorrow's offer and tomorrow decline the offer. His accepting tomorrow's offer is not a background condition settling the class of options to which that two-step sequence belongs.

20. Constitution in the metaphysical sense is roughly the physicalist's realization relation, described, for example, by Melnyk (2003).

21. Suppose that a sequence of acts directly controlled has a step that is a composite of acts directly controlled. For example, the step may be raising both arms simultaneously. Its components are raising the left arm and raising the right arm. Then there are multiple overlapping sequences of acts directly controlled. One sequence has raising the left arm only. Another has raising both arms. Evaluation by components may yield different evaluations for sequences differing only at the directly controlled act occupying a step. Rationality's evaluation of extended acts by components evaluates complete temporal components.

22. A change in desires to escape a money pump is sensible decision preparation. Spohn (2000: 78–82) observes that a similar change of desires makes resolve in a plan's execution compatible with rationality in momentary choices. A desire to follow a plan increases the desire to execute each step in the plan and makes each step utility maximizing. A sketch following these notes illustrates a money pump.

23. The benefits of coordination in a sequence of acts accrue to each act in the sequence. Because of this, a sequence of acts directly controlled that maximizes at each step may also maximize among such sequences. If so, in ideal cases, rationality recommends utility maximization among extended acts without irrational components.

24. Dynamically consistent extended acts maximize utility at each moment. Spohn (2000: 72–73) advocates sophisticated choice, that is, utility maximization among dynamically consistent extended acts. The principle of dynamic consistency, which Section 5.1 reviews, uses standards for momentary acts to settle standards for extended acts. It agrees with compositionality.

25. Hunter (1994) adopts rule-utilitarianism in ethics because of the benefits of coordination.

26. Bratman (1987) observes that a plan's adoption imposes reasons for acts in the case of nonideal, cognitively limited agents. This holds for ideal agents, too. Nonetheless, the rationality of a plan's execution depends on the rationality of its steps' executions. Partial explanations may be circular if the full explanation is an equilibrium of reasons. Of course, the circumstances surrounding realization of a plan's component affect the component's evaluation. The goal a plan serves may explain the component's rationality. An evaluation of the component attends to its contribution to the plan's execution. In light of the plan's goal and other acts contributing to the plan's execution, each act contributing to the plan's execution may be rational.

27. Skyrms (1996: 38–42) criticizes Gauthier's and McClennen's earlier views about commitment to plans with nonmaximizing steps.

28. Gibbard (2003: 151–52) uses the Charge of the Light Brigade to illustrate how a disposition rational to inculcate may yield an irrational act. A soldier rationally primes himself to obey orders without thinking, but as a result may irrationally obey some orders. Newcomb's problem, where irrationality is rewarded, furnishes another instructive example. A maximizing plan is to make oneself a one-boxer before prediction of one's choice and then to one-box. The plan maximizes but is not rational because its second, one-boxing step is irrational. The plan is irrational because its execution has an irrational step. Although making oneself a one-boxer, by, say, taking a pill, is rational, one-boxing itself is irrational.
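Note 22's money pump rewards a concrete illustration. The following sketch is purely illustrative, not from the text; the items, the preference cycle, and the one-unit fee are invented assumptions. It shows why an agent with cyclic preferences who pays a small fee for each swap to a preferred item loses money without limit, and hence why a change in desires that breaks the cycle is sensible decision preparation.

```python
# Hypothetical money pump: an agent prefers A to B, B to C, and C to A,
# and pays a small fee for each swap to a preferred item.

preferred_swap = {"C": "B", "B": "A", "A": "C"}  # held item -> item preferred to it
FEE = 1.0

def total_fees(initial_item: str, swaps: int) -> float:
    """Total paid after repeatedly trading up to a preferred item."""
    item, paid = initial_item, 0.0
    for _ in range(swaps):
        item = preferred_swap[item]  # the agent accepts each preferred swap
        paid += FEE
    return paid

# Three swaps return the agent to C, one unit poorer per swap.
print(total_fees("C", 3))   # 3.0
print(total_fees("C", 30))  # 30.0 -- the losses grow without bound
```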
CHAPTER 4

1. Learning about a subject by extending its principles is commonplace. For example, extending arithmetic principles to new cases clarifies the number system. Extending subtraction to cases in which the subtrahend is greater than the minuend reveals the negative numbers. As Kit Fine (personal communication, December 29, 2004) notes, to learn about the essence of a mathematical function, one may extend its range of application. Under the extension, two properties originally in agreement may differ. One property may reveal itself as more fundamental and more explanatory than is the other. The first rather than the second may explain some feature of the function. Similarly, extending an account of self-deception to groups may reveal the essence of that phenomenon.

2. Suppose that a fund-raising committee compares its results with another committee's results. The first committee raised more dollars than the second committee did. The second committee had a smaller fund-raising base, however. The percentage of people in its base who donated is higher for it than for the first committee. The first committee compares only total contributions, however, so that it may congratulate itself. It deceives itself about its success, although it does not have intentions, beliefs, or a mind. The general phenomenon of self-deception does not require those mental states. Some theories of self-deception that Mele (2001) reviews consider only self-deception in individuals, and claim that self-deception involves intentions and beliefs. Extending an account of self-deception to groups corrects that claim. Intentions and beliefs are not essential ingredients.

3. Levi (1982) compares an agent's conflicting desires to the conflicting desires of a group's members. Bacharach (2006: 191–98) pursues this analogy.

4. See, for example, Wooldridge (2002) and other literature on cellular automata and agent-based models of society.

5. Pettit (2001: 108, 112, 123) supports applying standards of argumentative coherence to a group's acts. In an example, he shows how workers may collectively support premisses that rule out a pay raise and, nonetheless, collectively support a pay raise. The example illustrates the Discursive Dilemma. It arises because the reasons of a group's members may not aggregate to form the group's reasons. Ironing out the best means of achieving argumentative coherence is a complex matter. List and Pettit (2002), Mongin (2005), and Dietrich (2006) present impossibility results. Either collective acts expressing support for propositions are not subject to principles of argumentative coherence, or collective rationality requires argumentative coherence only in ideal cases. The second view is plausible. In ideal cases with unanimity, collective support for the premisses of a valid argument ensures collective support for the conclusion. In nonideal cases, argumentative coherence must compete with a group's other objectives.

6. Thagard (2004: 372–73) explores similar conflicts between individual rationality taken as utility maximization and collective rationality taken as efficiency.

7. Coleman (1992: 18–21, 33–37) contrasts collective rationality's and individual rationality's constraints on social cooperation, taking collective rationality as efficiency and individual rationality as avoiding personal loss. Taking individual rationality as loss prevention is common in the literature on bargaining. However, I take individual rationality in its ordinary, nontechnical sense.

8. Olson (1965) shows that individuals may profit from not doing their parts in efficient collective acts. Arrow (1974: 19) notes that efficiency may not emerge from individuals' rational behavior. Bicchieri (2006: 177) observes that lack of communication is a source of inefficient social norms. I thank Susan Vineberg for these points.
9. Moulin (1994) reviews the literature on collective preferences and social choice. Presentations of Arrow's Theorem take collective rationality as the requirement that collective preferences order options and so be transitive. Collective rationality in the ordinary, nontechnical sense allows for requirements demanding more than a collective ordering of options. It provides for requirements such as efficiency in ideal cases. One may meet the challenge of Arrow's Theorem by grounding collective preferences in a richer set of features of individuals than their preferences. One may expand their grounding to include interpersonal comparisons of utility, following, for example, Strasnick (1975) and d'Aspremont and Gevers (1977). This book's main principles of collective rationality do not require interpersonal comparisons of utility.

10. Dutta (1999: 95) defines social optimality as maximization of the sum of individuals' utilities. Dixit and Skeath (2004: Chap. 12) say that a socially optimal joint act maximizes collective utility defined as the sum of individuals' utilities. A sketch following these notes applies the utility-sum criterion.

11. Some theorists assume that a collectively rational act agrees with a hypothetical recommendation of a perfect representative of the group, one whose object is to serve the group. The representative assigns personal utilities to the group's collective acts, attending to their utilities for the group's members. His recommendation maximizes personal utility. Keeney and Raiffa (1976: Chap. 10) present this conception of a social planner. Their approach defines collective rationality in terms of ordinary individual rationality. The rub is defining the utilities of a perfect representative. The definition is just as vexed as the definition of collective utility.

12. Harsanyi (1966) holds that having the same evidence requires people to assign the same probabilities. Aumann (1987a) makes this assumption, too. Bayesians hold that rationality does not demand a common probability assignment, even if all have the same relevant evidence. Rationality grants agents latitude in assigning probabilities. They may assign them according to their cognitive tastes. Only in special conditions is evidence strong enough to settle probability assignments for all rational agents. Weirich (2001: Chap. 6) treats the aggregation of individual utility assignments, affected by asymmetric information, into collective utility assignments. To simplify, this book generally treats games in which agents have complete and so symmetric information.

13. Weirich (2001: Chap. 6) supports maximization of a sum of individuals' power-weighted utilities. This principle, although restricted to special cases, relaxes an implicit idealization that individuals have equal social power.

14. I thank Peter Markie for raising this issue.

15. I am grateful to Brian Lang for this point.
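Note 10's utility-sum criterion is easy to make concrete. A minimal sketch with made-up numbers (the joint acts and utilities are illustrative assumptions, not the book's examples): a joint act is socially optimal, in Dutta's and Dixit and Skeath's sense, when it maximizes the sum of the individuals' utilities.

```python
# Hypothetical utility assignments: each joint act maps to
# (utility for agent 1, utility for agent 2).
joint_acts = {
    "act1": (3, 3),
    "act2": (5, 0),
    "act3": (2, 2),
}

def socially_optimal(acts: dict) -> str:
    """Return the joint act maximizing the sum of individual utilities."""
    return max(acts, key=lambda act: sum(acts[act]))

print(socially_optimal(joint_acts))  # act1: its sum 6 beats 5 and 4
```

Note that the sum criterion presupposes interpersonally comparable utilities, which is exactly the grounding issue note 9 raises.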
CHAPTER 5

1. Aumann (1987b: 468) and Fudenberg and Tirole (1991: 150) present the Folk Theorem, Taylor (1987) investigates cooperation in repeated Prisoner's Dilemmas, and Spohn (2000: 76–77) surveys conclusions about the repetitions.

2. Brams (1990: 266, 270) and Osborne and Rubinstein (1994: 24) explain this terminology. Myerson (1991: 74–83) treats games with incomplete information.
3. Some theorists describing evolutionary game theory are Sugden (1986), Binmore (1992), Skyrms (1996, 2004), Samuelson (1997), Weibull (1997), Young (1998), Gintis (2000), and Vanderschraaf (2001).

4. Bicchieri and Chiara (1992) and Bicchieri, Jeffrey, and Skyrms (1999) collect essays on strategic reasoning.

5. Context settles whether an option o's supposition carries certainty that o. The conditional probability P(s/o), the probability of a state s given an option o, assumes o with certainty before assessing s's probability under the assumption. Third and first person perspectives differently affect supposition of an option. Suppose that an agent's evaluator considers the conditional that if option o is realized, then state s obtains. The antecedent need not carry certainty of o's realization. Suppose that the agent considers the same conditional. She entertains being certain of o's realization.

6. Some theorists make outcomes coarse-grained so that equivalent outcomes are identical. Two solutions may have the same outcome in this coarse-grained sense despite yielding different possible worlds and so different outcomes in a fine-grained sense.

7. Luce and Raiffa (1957: 107), Schelling (1960: 291–92), Taylor (1987: 19), and Bacharach (2006: 18), for example, use efficiency among equilibria or outcomes to cull equilibria.

8. See Harsanyi (1973), Aumann (1987a), Gibbons (1992: 39), and Vanderschraaf (2001: 47–50) for the reasons to interpret Nash equilibrium as equilibrium-in-beliefs. They concern justification of participation in a Nash equilibrium.

9. A subjective equilibrium may be aptly called a Bayesian equilibrium. I do not adopt this terminology because Myerson (1991: 127–29) and others use the term Bayesian equilibrium as a technical term of the theory of sequential games with imperfect information.

10. Stalnaker (1997: Sec. 6) introduces strong rationalizability for sequential games. This brief review does not treat that type of rationalizability.

11. Binmore (2007: 424–25) presents a brief version of the proof.

12. To see the difference, compare, "If Shakespeare had not written Hamlet, someone else would have," with, "If Shakespeare did not write Hamlet, someone else did." The first conditional supposes its antecedent causally and is false, whereas the second conditional supposes its antecedent evidentially and is true. Weirich (1998: 26–28) reviews evidential supposition, usually expressed in the indicative mood, and causal supposition, usually expressed in the subjunctive mood. Shin (1992), Skyrms (1998), and Stalnaker (1999) discuss counterfactual conditionals in games. Stalnaker (2002, 2005) uses the distinction between indicative and subjunctive conditionals to explain forward and backward induction in extensive games.

13. Weirich (1998: 20–24; 2007a) compares collective, universal, and joint rationality.

14. I thank Shaun McDonough for helpful points about this example.

15. An extensive literature covers backward induction's support of a rollback equilibrium. It assumes an ideal game with perfect information in which players have common knowledge of their game and their rationality. Behavior off the equilibrium path raises questions. Deviant behavior is irrational, contrary to the assumption of players' rationality. The backward induction assumes resilient rationality in Sobel's (1994: Chap. 16) sense. Given counterfactual irrational behavior, resiliency gives priority to rationality in the distance relation among worlds used to obtain nearest antecedent-worlds in the Lewis-Stalnaker semantics for counterfactuals, or treats rationality as entrenched in a belief-revision approach to counterfactual conditionals. For a discussion of backward induction, see Bicchieri (1988, 1989), Pettit and Sugden (1989), Reny (1992), Kadane and Seidenfeld (1992), Aumann (1995), Samet (1996), Stalnaker (1997, 1998, 1999), Rabinowicz (1998), Broome and Rabinowicz (1998), Dixit and Skeath (2004), and Sobel (2005).

16. An ideal agent has greater capacities than the Turing machines Shin and Williamson (1997) treat. Not everything an ideal agent knows need be inferred from finite data using a finitely axiomatizable formal system of proof.

17. Barwise (1988) and Vanderschraaf and Sillari (2005) survey various accounts of common knowledge. Aumann (1976) characterizes common knowledge in terms of sets of worlds that are common epistemic possibilities. Gold and Sugden (2007) define common knowledge in terms of individual agents in a series of pairs, triples, and so on. Weirich (2006) defines it using a "she knows what he knows" principle for arbitrary pairs of individuals. This principle generates the same strings of knowledge that the infinite hierarchy of mutual knowledge generates. Binmore and Brandenburger (1990), Fudenberg and Tirole (1991: Chap. 14), Bacharach (1992), Brandenburger (1992), Geanakoplos (1992, 1994), Gibbons (1992: 6–7), and Rubinstein (1998: Chap. 3) explain the role of common knowledge in games.

18. Tan and Werlang (1988), Kreps (1990: 135), Osborne and Rubinstein (1994: Chap. 5), and Dixit and Skeath (2004: 144–50) make similar points.

19. A correlated equilibrium is self-enforcing in the sense that no agent has an incentive to depart if he believes others will do their parts. One may doubt that a rational agent will follow a self-enforcing strategy, such as following an arbitrator's instructions for coordination. An agent has no reason to follow those instructions unless he believes others will. For discussion of correlated equilibrium, see Gintis (2000: Sec. 4.43), Osborne and Rubinstein (1994: Sec. 3.3), and Fudenberg and Tirole (1991: 53–60). For discussion of Aumann's theorem and Nash equilibria, see Binmore and Brandenburger (1990: 134–5). For discussion of Aumann's theorem and the assumption of self-knowledge, see Skyrms (1989: Sections 5 and 6).

20. A correlated equilibrium involves exogenous correlation of strategies, say, because of an arbitrator's signal, and not because of features of the game itself. Vanderschraaf (2001: 74–77) introduces endogenous correlated equilibrium. Learning and convention generate correlation of agents' strategies. It is explained and is not just a parameter. Vanderschraaf's model explains correlation along with realization of a profile that depends on it. Randomization of choices generates the standard mixed extension of a game and eliminates evidential correlation of strategies. That elimination is a feature of the extension and not a feature of the definition of Nash equilibrium. Nash equilibrium does not rule out evidential correlation of strategies. Also, although correlated equilibrium generalizes Nash equilibrium, some games lack correlated equilibria, for example, games without Nash equilibria but with probabilistically independent strategies.

21. Stalnaker (1997: 353) offers a derivation of Nash equilibrium. His theorem states, "For any two-person game, the set of Nash equilibrium strategies, interpreted as belief profiles, is characterized by the class of models in which each player knows the other's beliefs about his strategy choice, and each knows that the other is rational." His theorem is similar to Aumann and Brandenburger's.
22. In games with two equilibria such as the Stag Hunt, Harsanyi and Selten (1988: 82–90) propose risk-dominance as a criterion for selecting an equilibrium. It compares the equilibria according to risks players run by participating. It favors an equilibrium if the product of the players' losses from deviation exceeds the product of their losses from deviation from the rival equilibrium. In Figure 5.7, neither equilibrium is risk-dominant. A computational sketch follows these notes.

23. Friedman (1990: 47) describes the relationship between proper equilibrium, perfect equilibrium, sequential equilibrium, and subgame perfect equilibrium. All are refinements of Nash equilibrium. See also Gintis (2000: Sec. 4.30).

24. Epistemic logic, especially interagent epistemology, studies reasoning leading to equilibria in games. See Montague and Kaplan (1974), Vardi (1988), Walliser (1992), Bacharach, Gérard-Varet, Mongin, and Shin (1997), Gilboa (1999), and Halpern (2003).

25. See Schelling (1960: 54–58) on focal points. They are profiles that all expect others to help realize. Rational agents may adopt straightaway the acts they would agree to perform if they were to communicate. Their hypothetical agreement may be a focal point.
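Note 22's product test can be computed directly. The following sketch uses a hypothetical Stag Hunt payoff matrix; the numbers are illustrative and are not the book's Figure 5.7, where the test is tied. For each pure equilibrium, multiply the two players' losses from unilateral deviation; the equilibrium with the larger product is risk-dominant.

```python
# Hypothetical Stag Hunt payoffs: payoff[(row, col)] = (p1, p2).
payoff = {
    ("stag", "stag"): (4, 4), ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0), ("hare", "hare"): (3, 3),
}

def deviation_loss_product(eq, rival):
    """Product of both players' losses from unilateral deviation at eq."""
    (r, c), (dr, dc) = eq, rival
    loss1 = payoff[(r, c)][0] - payoff[(dr, c)][0]  # player 1 deviates to dr
    loss2 = payoff[(r, c)][1] - payoff[(r, dc)][1]  # player 2 deviates to dc
    return loss1 * loss2

stag_product = deviation_loss_product(("stag", "stag"), ("hare", "hare"))  # 1 * 1 = 1
hare_product = deviation_loss_product(("hare", "hare"), ("stag", "stag"))  # 3 * 3 = 9
print("risk-dominant:", "hare" if hare_product > stag_product else "stag")  # hare
```

With these payoffs the product for (hare, hare) is 9 against 1 for (stag, stag), so the safe equilibrium risk-dominates the efficient one.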
CHAPTER 6

1. Trying to show gratitude by being kind is also trying directly to show gratitude. This is an exceptional case. Trying to do something indirectly is not typically also a way of trying directly to do it.

2. Morton (2004) examines the pros and cons of monitoring one's acts.

3. Suppose that even an indirect approach to utility maximization is self-defeating. Then what does a comprehensively rational agent do? The agent follows the best compromise between utility maximization and other goals. Articulating that compromise is a subtle matter this chapter puts aside.

4. Sugden (2000a: 126) claims that payoffs, that is, outcomes, cannot be sensitive to probabilities. Hence he excludes from outcomes certain factors, such as risk, that may matter to an agent. The theory of utility Weirich (2001) presents requires that outcomes be comprehensive. It counts risk as part of a risky option's outcome although risk is a probability-sensitive factor. Outcomes must be comprehensive if realizing an option with maximum expected utility is to be necessary for rationality.

5. Weirich (2004: App. A; forthcoming a; forthcoming b) argues that a probability and a utility attach to a proposition relative to a way of grasping or understanding the proposition. This chapter puts aside that complication by assuming that an agent always understands a proposition in the same canonical way.

6. An alternative with a similar effect says that an option maximizes utility if and only if no other option has greater utility. An option not compared with other options maximizes according to this definition.

7. Weirich (2004: Chaps. 6–7) presents a generalization of the principle of utility maximization called the principle of acceptability. It governs evaluation of options for an agent who has made mistakes. The principle says that an option is rational only if it maximizes utility after an evaluator corrects for the agent's unacceptable mistakes.
8. Uncertainty of an opponent's strategy need not be independent of an agent's strategy. An agent's adoption of a strategy may alter his assignment of probabilities to an opponent's strategies. Adoption of a strategy may not causally influence the opponent's strategy but may still alter evidence of her strategy. Given realization of a Nash equilibrium, some player may have multiple utility-maximizing strategies, only one of which is his equilibrium strategy. He lacks a reason to do his part in the equilibrium unless his strategy is evidence of other players' strategies. A justification of the Nash equilibrium requires that his equilibrium strategy furnish this evidence. For more on ratification in game theory, see Harper (1989, 1991, 1999), Skyrms (1989, 1990b), Eells and Harper (1991), Shin (1991), and Vanderschraaf (2001: 91–97).

9. To simplify, I ignore termination in a set of options. In some cases termination in a set of fine-grained options reduces to termination in a single coarse-grained option. Whether paths of incentives cycle or not depends partly on the grain of options. Conflating several fine-grained options into one coarse-grained option, or a probability mixture, may eliminate a cycle among options. Graph theory furnishes good representations of incentive structures. A directed edge represents a preference, and directed edges that are connected represent a path of incentives. The sketch following these notes gives such a representation.

10. Reasons of overall strategy may instruct an agent to pursue incentives so that the pattern of pursuit that emerges favors him, even if a step in the pattern is nonoptimal. This chapter puts aside cases in which such strategic considerations revise the selection and stopping rules. Weirich (1998) formulates the revisions.

11. McMahon (2001: 162–63, 183) says that in a bargaining problem, when one's ideal is unattainable, it may be reasonable to abandon pursuit of incentives. He asserts that forgoing an incentive in order to cooperate may be in accordance with reason. Self-support is not a matter of cooperation, but it condones forgoing some incentives.

12. Beliefs, desires, and acts influence each other. Desires and beliefs influence acts. Acts such as collecting information change beliefs and also change desires among acts. Even basic desires may change in response to acts. Acts such as tasting new foods influence basic desires. A person persisting in a job he initially dislikes may eventually become content with it. An equilibrium of beliefs, desires, and acts is a cognitive goal but not a requirement of rationality. In nonideal cases the cognitive equilibrium may be out of reach.

13. Kreps (1990: 145–78) generalizes Nash equilibrium for a game in which a player has an infinite number of pure strategies, for example, a two-player game in which the player picking the higher integer wins. His generalization does not cover games such as Matching Pennies, however. Correlated equilibrium is another generalization of Nash equilibrium. Given prescience, strategies in Matching Pennies are correlated. An agent knows, for each of her strategies, the other agent's strategy. No correlated equilibrium exists, however. For every strategy profile, some agent has an incentive to deviate unilaterally.

14. Camerer (2003: 118–21, 134–35), for example, doubts the realization of a mixed strategy equilibrium in real life.

15. Evolutionary game theory may explain the origin of a pattern for pursuit of incentives in a game.
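Note 9's graph representation invites a small worked example. In the sketch below, the options and incentives are hypothetical: nodes are options, a directed edge points from an option to an option the agent has an incentive to switch to, and a standard depth-first search tests whether paths of incentives cycle or terminate.

```python
# Hypothetical incentive structure: an edge points from an option to an
# option the agent prefers (an incentive to switch).
incentives = {"a": ["b"], "b": ["c"], "c": ["a"], "d": []}  # a->b->c->a cycles; d terminates

def has_cycle(graph: dict) -> bool:
    """Depth-first search for a cycle among paths of incentives."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY  # node is on the current path
        for succ in graph[node]:
            if color[succ] == GRAY or (color[succ] == WHITE and visit(succ)):
                return True  # reached a node already on the path: a cycle
        color[node] = BLACK  # fully explored
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

print(has_cycle(incentives))  # True: pursuit of incentives never terminates at a, b, or c
```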
16. Brams (1994) introduces nonmyopic equilibrium for multistage games. Strategic equilibrium structurally resembles nonmyopic equilibrium but arises in single-stage games.

17. Rational certainty rather than knowledge is crucial, but I do not pursue this refinement.

18. Myerson (1991: 4, 114–15) describes the circle of strategic reasoning that arises when applying expected-utility maximization in games of strategy.

19. Weirich (2004: Chap. 9) shows that ratification breaks the circle of strategic reasoning.

20. In ideal games, because supposition of a profile carries knowledge of the profile, a profile's strategies are all self-ratifying if and only if they are jointly self-ratifying. Hence, if all players adopt self-ratifying strategies, they realize a subjective and so objective Nash equilibrium.
CHAPTER 7

1. Cubitt and Sugden (2003) present and explicate Lewis's treatment of convention. Sugden (1986) investigates the evolution of conventions. He characterizes a convention as a stable equilibrium of a game with two or more stable equilibria, more precisely, profiles of evolutionarily stable strategies (p. 32). Vanderschraaf (2001) treats the evolution of conventions, too. He characterizes a convention as a correlated equilibrium of a coordination game (pp. 4–5).

2. Humans plainly have altruistic desires. Parents have intrinsic desires that their children prosper. Altruistic desires may explain cooperation in games of strategy, and evolution may explain the origin of such desires. Sober and Wilson (1998: Chap. 10) argue that evolution furnishes evidence that humans have an ultimate desire to help others. They argue for multilevel selection (pp. 10, 147, 183, 191–94, 332–33). Accordingly, a group may function as an organism, and group selection may yield cooperative traits in some circumstances. Sober and Wilson view the unit of selection as a matter of perspective (pp. 86, 331). I assume that they mean that group selection supervenes on gene selection and so is compatible with gene selection. Consequently, their points about groups acting as organisms are not contrary to Chapter 4's points about groups functioning as agents through the acts of their members. Sober and Wilson criticize evolutionary game theory for its individualism and failure to account for cooperation (pp. 79–86). However, Skyrms's (2004) evolutionary game theory shows that selection at the level of genes or behaviors may account for the evolution of cooperation because selection takes account of environmental factors such as opportunities for interaction with neighbors.

3. McClennen (1990) and Coleman (1992: 53–57, 59) side with Gauthier on the rationality of cooperation in the Prisoner's Dilemma. McClennen (2004) claims that rationality requires following rules that maximize personal utility rather than performing acts that maximize personal utility. This type of constrained maximization resembles rule-utilitarianism in ethics and yields cooperation in the Prisoner's Dilemma.

4. Colman also considers Stackelberg reasoning (Sec. 8.2). It has agents participate in an equilibrium if doing so yields evidence of superior results. Stackelberg reasoning is not rational because it contravenes principles of causal decision theory.
5. Many theorists advance methods similar to team reasoning. Graham (2002: 128–30, 134–37) endorses following a group's plan for realizing a group's goal. Jacobsen (1996) uses a player's knowledge of others' expectations of his behavior, and also a player's consistent plan for all players, to ground a unique strict Nash equilibrium taken as an equilibrium-in-beliefs. Rovane (2004: 334–41) claims that a person may operate at different levels of agency and may become part of a collective agent and not assert her own agency. Gibbard ([1972] 1990: 243–46), in cases such as Hi-Lo, entertains the possibility that a purely benevolent agent promotes cooperation with other purely benevolent agents by adopting High without attention to whether others adopt High too.

6. Kierland (2006) revises Zimmerman's account of the requirement to be cooperative. Obligations are for acts in one's control. Agents are not obligated to change their characters, but are to blame for having bad, noncooperative characters. Acquiring a cooperative disposition may not be in an agent's control, and so may be exempt from evaluation as an act, but having a cooperative disposition is still a requirement of good character.

7. In response to Regan, Conee (1983) suggests that groups as well as their members must maximize collective utility. He entertains the possibility that groups have obligations to maximize that may be unfulfilled even if every member meets his obligation to maximize. This suggestion applied to obligations of rationality is contrary to compositionality. According to compositionality, a group meets its obligations if its members meet their obligations. Conee (2001: 429–30) retracts the view that a group of individuals form an agent with independent obligations. Nonetheless, an individual in a group may have a moral obligation to provide other members with an opportunity to generate value, say, to generate the superior equilibrium in Hi-Lo. Although morality may require providing such opportunities, rationality does not because the reasons it recognizes arise within the individual.

8. Young (1998: 144) calls rationality for ideal agents hyperrationality. This is not hyperrationality in Sobel's sense.

9. Hardin (1982: Chaps. 6, 7) considers the role of cooperative dispositions, public-spirit, and team-spirit in collective action problems such as the Prisoner's Dilemma. Schmidt (2000: 137–40) points out that players in a game may have a desire to cooperate that guides their choices.

10. Sugden (1986: 165) observes that in repeated games a bold individual may rationally make the first cooperative move when among cautious reciprocators. Her first step toward cooperation is rational because she expects reciprocation. She may expect reciprocation because of its rationality. This section's points about instigators of coordination in one-shot games resemble Sugden's points about bold first-steppers in repeated games.

11. Weirich (2004: Chap. 9) elaborates this argument for participation in a unique efficient Nash equilibrium.

12. This section draws on Weirich (2003, 2007b).

13. According to Verbeek (2008: 161), an agent creates a reason to perform an act by intending to perform an act because fulfillment of an intention is a reason to act.

14. Broome (2001: 114–16) provides this example of incomparable acts.
15. Kavka’s toxin puzzle (1983) dramatizes this point. The intention to drink, despite its good consequences, is irrational unless there are reasons to drink.
CHAPTER 8

1. According to Moulin (1995: 5), cooperation involves mutual assistance. I allow for cooperation that assists people outside the group of cooperators. Dixit and Skeath (2004: 394–95) say that a group's members act cooperatively if and only if they act in a manner best for the group. Maximizing collective utility may, however, require self-sacrifice and not just cooperation. Myerson (1991: 370) says that to cooperate is to act together with a common purpose. However, workers on an assembly line may cooperate to build a car even if each has only private goals, and the purpose of building a car does not unite them.

2. Zimmerman (1996: 267) states that a collective act involves causal interaction of agents. As I classify acts, only a joint act, and not a collective act, requires causal interaction. My characterization of a joint act is stipulative, but common. It agrees with Tuomela's (1995: 161) characterization, for instance.

3. The possibility of communication that cooperative games offer does not solve all coordination problems. Some coordination problems are not resolvable by communication. Take coordination on a language to use in communication. It requires solving a coordination problem without communication. The situation is not hopeless, however. Lewis (1969) shows that conventions resolving coordination problems may emerge without communication.

4. Dixit and Skeath (2004: 266–68) explain how communication may yield efficient coordination. Communication generates a sequential game with strategies concerning a signal to send and a response to a signal. The sequential game has a rollback equilibrium that yields efficient coordination.

5. Harsanyi and Selten (1988: 1–7) and Raiffa (2002: 82) treat a similar modification of the Prisoner's Dilemma. Myerson (1991: 244–46) treats a modification with possible contracts concerning jointly randomized strategies, or correlated strategies.

6. Aumann (1987b: 463), Myerson (1994: 828–35), Friedman (1990: 22, 205), Vanderschraaf (2001: 70), and Dixit and Skeath (2004: 26) use the availability of binding contracts to define cooperative games. Myerson (1991: 370–71), Osborne and Rubinstein (1994: 2–3, 255–56), Moulin (1995: 403), and Raiffa (2002: 80–81) use the possibility of joint action to define cooperative games.

7. For another example, consider coalition-proof Nash equilibrium as introduced by Bernheim, Peleg, and Whinston (1987). Coalition-proofness is a criterion for selecting a Nash equilibrium in games without binding agreements but with communication and joint action. In these games coalitions of players may jointly deviate from a Nash equilibrium. I call the games for which coalition-proofness is proposed cooperative because they afford opportunities for joint action despite the absence of binding agreements.

8. Ordeshook (1986: Chap. 7) exhibits cooperative and noncooperative analyses applying to the same game.

9. A sequential game may have conditional strategies that are causally independent although they have causally dependent stages and together realize joint acts. At the start of a sequential game, agents may pick conditional strategies for playing the whole game. They may simultaneously and independently choose for the whole sequential game. Their strategies may form a Nash equilibrium. Then the agents evidentially coordinate strategies. Nonetheless, implementation of a strategy requires an agent to respond causally to his opponent. The implementation involves causal coordination of moves and thus joint action.

10. Nash's program derives a cooperative game's solutions from solutions of an underlying noncooperative game. Myerson (1991: 371) says that Nash's program fails to identify solutions to a cooperative game. Too many strategy profiles of the underlying game are Nash equilibria. The program needs supplementation by a principle of equilibrium selection in the underlying game. As I interpret Nash's program, it grounds a cooperative game's solution in a noncooperative game's solution, not just in its Nash equilibria. It includes principles of equilibrium selection.

11. Raiffa (2002: 449) assumes that in all cooperative games players have common knowledge of their game and their rationality. I take this common knowledge as a feature of an ideal cooperative game.

12. Dixit and Skeath (2004: Sec. 18.4) and Binmore (2007: Sec. 18.5) review Shapley's proposal. Ordeshook (1986: 462–63), Myerson (1991: 445–46, 455–56), Moulin (1995: 4, 15, 403), and Raiffa (2002: 442) say that it rests on equity. Moulin (2003: 12, 146) says that it is game theory's most important contribution to the theory of distributive justice.

13. Myerson (1991: 419–20) characterizes a coalitional game in terms of coalitions' ability to negotiate effectively. Effective negotiation yields joint action, so his characterization agrees with mine. Moulin (1995: 402) and Raiffa (2002: 431) identify cooperative games with coalitional games. I provide for cooperative games that are not coalitional games. Using my characterizations of cooperative and coalitional games, whether there are such games depends on whether individuals may perform a joint act without forming a coalition. Settling this question requires more precise definitions of joint action and coalition formation than this chapter provides.

14. For a review of coalitional games, see Shubik (1982: Chap. 6), Greenberg (1994), and Weber (1994).

15. A simple coalitional game has a different technical definition. See Ordeshook (1986: 313–17). Binmore (2007: Chap. 18) states that a coalition's value is its security level, or a minimum its efforts guarantee. In an elementary coalitional game, the maximum a coalition may achieve from joint action equals its security level. In some coalitional games, the value of a coalition depends on which other coalitions form. In politics, a coalition may rule if the opposition is divided but not if the opposition is united. Ordeshook (1986: 330–36, 340–45, 382–84) reviews interpretations of a coalition's value and a coalitional game's characteristic function. The interpretations lead to variations of the core called the alpha-core and the beta-core. Abdou and Keiding (1991) treat effectivity functions that assign a coalition's value in light of outsiders' behavior. A maximin sketch follows these notes.

16. Shubik (1982: 134–35), Myerson (1991: 422–24), and Moulin (1995: 402–03) discuss transferable utility. Aumann (1987b: 471–72, 478) treats coalitional games with nontransferable utility.
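Note 15's security level admits a direct maximin computation. A minimal sketch with a hypothetical payoff table (the strategies and numbers are invented for illustration): the coalition's value is the maximum, over its joint strategies, of the minimum payoff the outsiders can hold it to.

```python
# Hypothetical payoffs to a coalition: rows are the coalition's joint
# strategies, columns are the outsiders' strategies.
payoff = {
    "s1": {"t1": 4, "t2": 1},
    "s2": {"t1": 2, "t2": 3},
}

def security_level(table: dict) -> float:
    """Maximin value: the most the coalition can guarantee itself."""
    return max(min(row.values()) for row in table.values())

print(security_level(payoff))  # 2: joint strategy s2 guarantees at least 2
```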
17. Kalai and Smorodinsky (1975) propose a rival solution to bargaining games. Friedman (1990: Chap. 6) compares Nash's solution and Kalai and Smorodinsky's solution for two-person bargaining. Roth (1985) collects essays on applications of game theory to bargaining. For solutions to bargaining games, Coleman (1992: 120–21) adopts a standard of equal resistance. Weirich (2001: Chap. 6) defends a similar standard. A three-person bargaining game does not reduce to a series of two-person bargaining games. The additional person changes the bargaining process. Hence, Nash's solution to two-person bargaining games does not apply iteratively to n-person bargaining games. Its extension to n-person bargaining games requires additional argumentation.

18. Gillies (1953, 1959) introduces the core. Other authors treating the core are Shubik (1982: 147), Varian (1984: 235), Aumann (1985: 49; 1989: 38), Myerson (1991: 427–28), Kannai (1992), Peleg (1992), Osborne and Rubinstein (1994: 258–59), Moulin (1995: 15, 46, 404–05), Starr (1997: 158–59), Dixit and Skeath (2004: 606–12), and Osborne (2004: Chap. 8). The definition of the core differs slightly from author to author depending on the type of game, whether the characteristic function is superadditive, whether utility is transferable, whether comparisons use strategy profiles or outcome profiles, and whether outcomes are objective payoffs or subjective utilities. A sketch following these notes tests core membership in a small example.

19. Osborne and Rubinstein (1994: 258) also compare Nash equilibrium and the core.

20. The revision is unnecessary if generalization of Nash equilibrium for coalitions insists only on joint optimization by coalitions in a profile's coalition structure, that is, coalitions whose formation the profile's realization entails.

21. Ordeshook (1986: 383) makes a similar point about games that characteristic functions represent. Aumann (1987b: 470–71) objects to the core using cases like Raiffa's in which the unique core allocation is counterintuitive. His cases, which Myerson (1991: 429) reviews, involve cutthroat competition in markets with excess supply. The core allocation is counterintuitive in Aumann's cases only because one envisages real-life conditions rather than ideal conditions that a characteristic function assumes. Application of the core to a market that a characteristic function represents assumes that the market is competitive so that individuals may collaborate but may not collude.

22. Aumann (1987b: 475) and Kannai (1992) describe balancedness, a general condition for the non-emptiness of a coalitional game's core.

23. In a nonideal game, if each agent is ignorant about others, then each agent's rationality ensures joint rationality. Supposition of a profile's realization does not alter circumstances that make strategies rational. However, each agent's having some knowledge of others, short of ideal knowledge, may prevent joint rationality. Although each individual and each coalition can act rationally, they may not be able to achieve joint rationality given constraints their background conditions impose. Nonideal games may lack a solution.

24. For the bargaining set, see Aumann and Maschler ([1964] 1997) and Davis and Maschler (1967). For the kernel, see Davis and Maschler (1965). For the nucleolus, see Schmeidler (1969). For comparisons, see Aumann (1987b: 473), Shubik (1982: Chap. 11), Myerson (1991: 452–56), Maschler (1992), and Osborne and Rubinstein (1994: Chap. 14).

25. Aumann (1989: 38, 74–86) discusses the bargaining set and sufficient reasons to change allocations. He says (p. 74) that no single solution-concept fits cooperative games. He does not propose the bargaining set as a general solution.
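Note 18's core has a mechanical membership test in small transferable-utility games. The following sketch uses a hypothetical three-player characteristic function (the numbers are illustrative assumptions): an allocation is in the core when it is efficient and no coalition can improve on its combined share.

```python
from itertools import combinations

# Hypothetical three-player characteristic function (transferable utility).
v = {
    (1,): 0, (2,): 0, (3,): 0,
    (1, 2): 4, (1, 3): 4, (2, 3): 4,
    (1, 2, 3): 6,
}

def in_core(x: dict) -> bool:
    """x maps players to payoffs; in the core iff efficient and no coalition objects."""
    players = (1, 2, 3)
    if sum(x[i] for i in players) != v[players]:
        return False  # allocation is not efficient
    return all(sum(x[i] for i in s) >= v[s]  # no coalition can do better alone
               for k in range(1, 4)
               for s in combinations(players, k))

print(in_core({1: 2, 2: 2, 3: 2}))  # True: every pair receives 4, matching v(pair)
print(in_core({1: 4, 2: 1, 3: 1}))  # False: coalition (2, 3) gets 2 < 4
```

In this example the equal split is the unique core allocation; raising any pair's value above 4 would empty the core, illustrating note 22's balancedness condition.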
CHAPTER 9
1. In elementary coalitional games, where utility is transferable, if a coalition's formation may yield gains for some members without losses for other members, then the coalition may also distribute its value so that each member gains. Consequently, under both accounts of collective incentives, a coalition has an incentive to form if and only if all members may gain from its formation.

2. Also, treating the group of coalitions as an agent may yield a shortcut method of identifying a coalitional game's equilibria using an extension of the equilibrium-search methods in Weirich (1998: Sec. 6.4).

3. I assume the existence of a nearest alternative profile in the cases I treat. A more complex general theory handles cases in which no alternative is nearest.

4. Similar independence does not hold for a unit-coalition's payoffs from its strategies. Its payoff from forming is independent of others. It is its value. Its payoff from not forming is also independent of others. It is zero (although the unit-coalition's unique member may gain as a member of a multi-individual coalition that forms). However, as Section 9.1 explains, the unit-coalition has attempting not to form rather than not forming as an option. Its payoff from attempting not to form depends on others, because its nonformation depends on others.

5. Examples, for definiteness, suppose that in a hypothetical deviation from a strategic equilibrium, agents violate the stopping rule rather than the selection rule.
CHAPTER 10

1. Game theorists such as Dixit and Skeath (2004: 143) acknowledge that psychological features besides rationality settle solutions to bargaining problems.

2. Nash's solution uses symmetry to select an efficient outcome. Some hold that symmetry is a principle of fairness. Skyrms (2004: 18–19) observes that in the bargaining problem Divide-the-Dollar, efficiency and symmetry yield the 50–50 split. This is justice. He claims that rationality cannot explain the 50–50 split. Symmetry goes beyond rationality. He means symmetry in a bargaining problem's representation. The agents may have unrepresented psychological differences. One may bully the other into accepting less than 50 percent. Then the 50–50 split requires fairness. However, if agents are psychologically symmetrical, fairness may be a focal point in a bargaining problem, as Moulin (1995) observes. Agents may achieve the symmetric outcome because of a desire to coordinate rather than because of a desire to be fair. A numerical sketch follows these notes.

3. Some decision theorists recommend conflating an agent's options if they have the same utility. In noncooperative games, this advice suggests conflating all an agent's mixed strategies if the agent knows that his opponent adopts her part of a mixed-strategy Nash equilibrium. However, only the agent's part of the equilibrium is self-supporting. The conflated representation lacks resources for specifying self-supporting strategies, equilibria, and solutions. It is inadequate.

4. Luce and Raiffa (1957: 288) argue against the standard of independence in bargaining games.
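Note 2's Divide-the-Dollar outcome can be recovered numerically. In this sketch the linear utilities, the zero disagreement point, and the grid search are illustrative assumptions: maximizing the Nash product over feasible splits selects the symmetric 50–50 division.

```python
# Divide-the-Dollar: find the split maximizing the Nash product
# (u1 - d1) * (u2 - d2), with linear utilities and disagreement point (0, 0).

d1 = d2 = 0.0  # each agent gets nothing if bargaining fails

def nash_product(x: float) -> float:
    """x is agent 1's share of the dollar; agent 2 gets the remainder."""
    return (x - d1) * ((1.0 - x) - d2)

# Grid search over candidate splits in cents.
best = max((i / 100.0 for i in range(101)), key=nash_product)
print(best)  # 0.5 -- the symmetric 50-50 split
```

An asymmetric disagreement point, say d1 = 0.2, shifts the maximizing split to 0.6 in agent 1's favor, which is one way to model the unrepresented psychological differences the note mentions.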
5. Ordeshook (1986: 353–55) and Binmore (2007: 466) review Coase's Theorem, and Ordeshook (1986: 357–61) reviews efficiency in market games. Coase (1960) presents the proposition known as Coase's Theorem. Fudenberg and Tirole (1991: 245) state the proposition this way: "In the absence of transaction costs and with symmetric information, bargaining among parties concerned by a decision leads to an efficient decision, i.e., to the realization of gains from trade." Black (2002) characterizes the theorem as an argument that the market yields efficient use of resources given any assignment of property rights and without state regulation or taxation. Samuelson (1985) argues that in certain ideal conditions individual rationality leads to efficiency in bargaining problems, but he does not address coalitional games with an empty core. Aivazian and Callen (1981) argue that Coase's Theorem does not govern coalitional games with empty cores. Coase's (1981) reply regiments bargaining protocols to obtain an efficient outcome but sacrifices generality. Coleman (1988: 69) restricts the theorem to two-person games to avoid empty cores. Halpin (2007) criticizes the theorem in two-person cases because it ignores an agent's full range of options.

6. Myerson (1991: 430) observes that players in a coalitional game may have such reasons not to seek efficiency.
CHAPTER 11

1. Suppose that the bargainers have the option of reaching agreement in one stage instead of in two stages. They may, for example, each submit (9, 1) as a proposal in a preliminary stage that yields an agreement when proposals match. If they fail to reach agreement in that preliminary stage, then they follow the two-step bargaining protocol. Given these provisions, to save time, ideal agents at the outset agree on the division that they foresee arising from the two-step protocol. The Ultimatum Game's bargaining protocol is not optimal. It achieves in two stages the result another bargaining protocol achieves in one stage.

2. Osborne (2004: Sec. 16.4) reviews Binmore's result, and Osborne (2004: Sec. 16.1.3) reviews Rubinstein's result. For discussion of Rubinstein's result, see Myerson (1991: 394–99), Fudenberg and Tirole (1991: 113–17, 397–98), Binmore, Osborne, and Rubinstein (1992), Gibbons (1992: 68–71), Osborne and Rubinstein (1994: Chap. 7, Sec. 15.4), Young (1998: 23, 117), Kreps (1990: 123–28), Gintis (2000: 97, 350, 353, 486, Sec. 5.6, Sec. 15.4, Sec. 15.6), and Dixit and Skeath (2004: 577–79, 582–87). Binmore (1985) applies Rubinstein's methods to bargaining in three-player, three-cake problems. Those methods have not been extended to all n-person coalitional games. The sketch following these notes records the standard closed form.

3. Skyrms (2004: 71) treats bargaining with communication as a signaling game. An offer is a signal. Acceptance is a response to a signal. The agents at a time wonder which signal to send or which response to make. Components of his analysis apply to the agreement game this chapter analyzes.
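Note 2's Rubinstein result has a standard closed form worth recording; the discount factors below are illustrative. In the alternating-offers model over a unit surplus, the first proposer's subgame-perfect share is (1 - d2) / (1 - d1 * d2), so the proposer's advantage shrinks as the players grow patient.

```python
# Subgame-perfect division in Rubinstein's alternating-offers bargaining
# over a unit surplus; delta1 and delta2 are per-round discount factors.

def rubinstein_shares(delta1: float, delta2: float):
    """Return (proposer's share, responder's share) in the standard model."""
    x1 = (1 - delta2) / (1 - delta1 * delta2)
    return x1, 1 - x1

print(rubinstein_shares(0.9, 0.9))   # about (0.526, 0.474): a proposer advantage
print(rubinstein_shares(0.99, 0.99)) # shares approach 50-50 as impatience vanishes
```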
CHAPTER 12

1. Dasgupta (2007) presents an account of social institutions and their effect on a nation's economy.
2. Some contractarian theories use a rational social contract, which may be hypothetical, to justify a government's authority. Others use it to ground all of morality. Some recent contractarians are Rawls (1971), Gauthier (1986), Binmore (1994, 1998, 2005), and Skyrms (1996). As Taylor (1987) and Skyrms (2004) observe, besides the Prisoner's Dilemma, noncooperative games such as Chicken and the Stag Hunt represent types of social interaction that a social contract ameliorates.

3. Utility maximization does not ensure that all beneficial trades are made if circumstances are not ideal. Some beneficial trades may not be made because negotiations break down. For results concerning efficiency in competitive markets, see Debreu (1959: 90), Arrow (1974: 20), Varian (1984: Sec. 5.5), and Moore (2007: 142–53, 205–11, 227–44).

4. Condorcet's Jury Theorem presents conditions under which majority rule enhances reliability. For an analysis of the theorem, see Bovens and Rabinowicz (2004). A numerical sketch follows these notes.

5. Dasgupta and Maskin (2003) show that majority rule is more robust than are rival voting methods in the sense that it yields transitive collective preferences in more cases than they do. Although no voting system is manipulation-proof, Maskin (2005) argues that majority rule is less manipulable than are rival voting methods.

6. Osborne and Rubinstein (1994: Chap. 10) and Vannucci (1996) review implementation theory.

7. Klemperer (2004) reviews the design of auctions. Besides designing auctions to reveal information, one may design them to attain goals of collective rationality such as efficiency. For instance, a government may auction ownership of broadcast frequencies using an auction designed to promote efficiency.

8. The literature on the economics of information treats games of asymmetric information and, in particular, games with obstacles to information sharing. See, for example, Akerlof (1970), Spence (1973), Stiglitz (1974), Varian (1984, Chap. 8), and Campbell (2006).

9. Some accounts of ideal negotiations give citizens equal voices.

10. For a review of these results, see Debreu and Scarf ([1963] 1997), Aumann ([1966] 1997), Varian (1984: 236, 238, 239, Sec. 6.5), Aumann (1987b: 474), Hildenbrand (1987), Friedman (1990: 97), Anderson (1992), Coleman (1992: 87–91), Osborne and Rubinstein (1994: Sec. 13.4.3, Sec. 13.6.2), Mas-Colell, Whinston, and Green (1995: 652–60), Moulin (1995: 45–47, 103–4, Chaps. 2, 3), Starr (1997: 99, 135, 160–1, 162–6), McKenzie (2002: Chap. 5), Moulin (2003: 273), Dixit and Skeath (2004: 615), and Moore (2007: 142–53, 205–11, 227–44, 311–30).

11. General principles such as compositionality entail principles for groups. Compliance with principles for individuals entails compliance with principles for groups. These are not conflicting explanations of principles for groups. The principles and their realizations are distinct. A collective principle's explanation by a general principle is compatible with its realization's explanation by an individualistic principle's realization. Also, because explanation concerns acts propositionally individuated, individuals' acts realizing a collective act may not explain an event that the collective act explains.
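Note 4's jury theorem can be checked numerically. A minimal sketch with illustrative parameters: for an odd number n of independent voters, each correct with probability p greater than one half, the chance that a majority votes correctly is a binomial tail, and it climbs toward 1 as n grows.

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability a majority of n independent voters (odd n), each
    correct with probability p, votes for the correct alternative."""
    k_min = n // 2 + 1  # smallest majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 4))
# 1 -> 0.6, 11 -> about 0.7535, 101 -> about 0.98: reliability rises with jury size
```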
Bibliography
Abdou, J. and H. Keiding. 1991. Effectivity Functions in Social Choice. Dordrecht: Kluwer.
Aivazian, V. and J. Callen. 1981. “The Coase Theorem and the Empty Core.” Journal of Law and Economics 24: 175–81.
Akerlof, G. A. 1970. “The Market for ‘Lemons’: Qualitative Uncertainty and the Market Mechanism.” Quarterly Journal of Economics 84: 488–500.
Anderson, R. 1992. “The Core in Perfectly Competitive Economies.” In R. Aumann and S. Hart, eds., Handbook of Game Theory, vol. 1, pp. 413–57. New York: Elsevier Science.
Arrow, K. 1951. Social Choice and Individual Values. New Haven: Yale University Press.
———. 1974. The Limits of Organization. New York: Norton.
Aumann, R. 1959. “Acceptable Points in General Cooperative n-Person Games.” In R. D. Luce and A. W. Tucker, eds., Contributions to the Theory of Games IV, Annals of Mathematics Studies 40, pp. 287–324. Princeton, NJ: Princeton University Press.
———. [1966] 1997. “Existence of Competitive Equilibria in Markets with a Continuum of Traders.” In H. Kuhn, ed., Classics in Game Theory, pp. 170–91. Princeton, NJ: Princeton University Press.
———. 1974. “Subjectivity and Correlation in Randomized Strategies.” Journal of Mathematical Economics 1: 67–96.
———. 1976. “Agreeing to Disagree.” Annals of Statistics 4: 1236–39.
———. 1985. “What Is Game Theory Trying to Accomplish?” In K. Arrow and S. Honkapohja, eds., Frontiers of Economics, pp. 28–99. Oxford: Blackwell.
———. 1987a. “Correlated Equilibrium as an Expression of Bayesian Rationality.” Econometrica 55: 1–18.
———. 1987b. “Game Theory.” In J. Eatwell, M. Milgate, and P. Newman, eds., The New Palgrave: A Dictionary of Economics, vol. 2, pp. 460–82. London: Macmillan.
———. 1989. Lectures on Game Theory. Boulder, CO: Westview Press.
———. 1995. “Backward Induction and Common Knowledge of Rationality.” Games and Economic Behavior 8: 6–19.
Aumann, R. and A. Brandenburger. 1995. “Epistemic Conditions for Nash Equilibrium.” Econometrica 63: 1161–80.
Aumann, R. and M. Maschler. [1964] 1997. “The Bargaining Set for Cooperative Games.” In H. Kuhn, ed., Classics in Game Theory, pp. 140–69. Princeton, NJ: Princeton University Press.
Bacharach, M. 1987. “A Theory of Rational Decision in Games.” Erkenntnis 27: 17–55.
———. 1992. “The Acquisition of Common Knowledge.” In C. Bicchieri and M. Chiara, eds., Knowledge, Belief and Strategic Interaction, pp. 285–316. Cambridge: Cambridge University Press.
———. 1999. “Interactive Team Reasoning: A Contribution to the Theory of Cooperation.” Research in Economics 53: 117–47.
———. 2006. Beyond Individual Choice: Teams and Frames in Game Theory. Edited by N. Gold and R. Sugden. Princeton, NJ: Princeton University Press.
Bacharach, M., L. Gérard-Varet, P. Mongin, and H. Shin, eds. 1997. Epistemic Logic and the Theory of Games and Decisions. Boston: Kluwer.
Barwise, J. 1988. “Three Views of Common Knowledge.” In M. Vardi, ed., Proceedings of the Second Conference on Theoretical Aspects of Reasoning about Knowledge, pp. 365–79. Los Altos, CA: Morgan Kaufmann.
Bates, J. 1999. “Reflective Equilibrium.” Doctoral dissertation. Columbia, MO: University of Missouri.
Bergmann, M. 2006. Justification without Awareness: A Defense of Epistemic Externalism. Oxford: Clarendon Press.
Bermúdez, J. 2002. “Rationality and Psychological Explanation without Language.” In J. Bermúdez and A. Millar, eds., Reason and Nature: Essays in the Theory of Rationality, pp. 233–64. Oxford: Clarendon Press.
Bernheim, B. D. 1984. “Rationalizable Strategic Behavior.” Econometrica 52: 1007–28.
Bernheim, B. D., B. Peleg, and M. D. Whinston. 1987. “Coalition-Proof Nash Equilibria I. Concepts.” Journal of Economic Theory 42: 1–12.
Bicchieri, C. 1988. “Strategic Behavior and Counterfactuals.” Synthese 76: 135–69.
———. 1989. “Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge.” Erkenntnis 30: 69–85.
———. 1993. Rationality and Coordination. Cambridge: Cambridge University Press.
———. 1997. “Learning to Cooperate.” In C. Bicchieri, R. Jeffrey, and B. Skyrms, eds., The Dynamics of Norms, pp. 17–48. Cambridge: Cambridge University Press.
———. 2004. “Rationality and Game Theory.” In A. Mele and P. Rawling, eds., The Oxford Handbook of Rationality, pp. 182–205. New York: Oxford University Press.
———. 2006. The Grammar of Society: The Nature and Dynamics of Social Norms. Cambridge: Cambridge University Press.
Bicchieri, C. and M. Chiara, eds. 1992. Knowledge, Belief and Strategic Interaction. Cambridge: Cambridge University Press.
Bicchieri, C., R. Jeffrey, and B. Skyrms, eds. 1999. The Logic of Strategy. New York: Oxford University Press.
Binmore, K. 1985. “Bargaining and Coalitions.” In A. Roth, ed., Game-Theoretic Models of Bargaining, pp. 269–304. Cambridge: Cambridge University Press.
———. 1987. “Nash Bargaining Theory II.” In K. Binmore and P. Dasgupta, eds., The Economics of Bargaining, pp. 61–76. Oxford: Blackwell.
———. 1992. Fun and Games: A Text on Game Theory. Lexington, MA: D. C. Heath.
———. 1994. Playing Fair, vol. 1, Game Theory and the Social Contract. Cambridge, MA: MIT Press.
———. 1998. Just Playing, vol. 2, Game Theory and the Social Contract. Cambridge, MA: MIT Press.
———. 2005. Natural Justice. New York: Oxford University Press.
———. 2007. Playing for Real: A Text on Game Theory. New York: Oxford University Press.
Binmore, K. and A. Brandenburger. 1990. “Common Knowledge and Game Theory.” In K. Binmore, Essays on the Foundations of Game Theory, pp. 105–50. Oxford: Blackwell.
Binmore, K., M. Osborne, and A. Rubinstein. 1992. “Noncooperative Models of Bargaining.” In R. Aumann and S. Hart, eds., Handbook of Game Theory, vol. 1, pp. 179–225. New York: Elsevier Science.
Bittner, R. 2001. Doing Things for Reasons. Oxford: Oxford University Press.
Black, D. 1948. “On the Rationale of Group Decision-Making.” Journal of Political Economy 56: 23–34.
Black, J. 2002. A Dictionary of Economics. Second edition. Oxford: Oxford University Press.
Block, N. 1991. “Troubles with Functionalism.” In D. Rosenthal, ed., The Nature of Mind, pp. 211–28. New York: Oxford University Press.
Bovens, L. 2001. “Review of Equilibrium and Rationality.” Mind 110: 288–92.
Bovens, L. and W. Rabinowicz. 2004. “Voting Procedures for Complex Collective Decisions: An Epistemic Perspective.” Ratio Juris 17: 241–58.
Brams, S. 1990. Negotiation Games: Applying Game Theory to Bargaining and Arbitration. New York: Routledge.
———. 1994. Theory of Moves. Cambridge: Cambridge University Press.
Brandenburger, A. 1992. “Knowledge and Equilibrium in Games.” Journal of Economic Perspectives 6: 83–101.
Brandenburger, A. and E. Dekel. 1989. “The Role of Common Knowledge Assumptions in Game Theory.” In F. Hahn, ed., The Economics of Missing Markets, Information, and Games, pp. 46–61. Oxford: Clarendon Press.
Bratman, M. 1987. Intention, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.
———. 1999. Faces of Intention: Selected Essays on Intention and Agency. Cambridge: Cambridge University Press.
Broome, J. 2000. “Instrumental Reasoning.” In J. Nida-Rümelin and W. Spohn, eds., Rationality, Rules, and Structure, pp. 195–208. Dordrecht: Kluwer.
———. 2001. “Are Intentions Reasons? And How Should We Cope with Incommensurable Values?” In C. W. Morris and A. Ripstein, eds., Practical Rationality and Preference: Essays for David Gauthier, pp. 98–120. Cambridge: Cambridge University Press.
———. 2002. “Practical Reasoning.” In J. Bermúdez and A. Millar, eds., Reason and Nature: Essays in the Theory of Rationality, pp. 85–112. Oxford: Clarendon Press.
Broome, J. and W. Rabinowicz. 1999. “Backwards Induction in the Centipede Game.” Analysis 59: 237–42.
Camerer, C. 2003. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton, NJ: Princeton University Press.
Campbell, D. 2006. Incentives: Motivation and the Economics of Information. Second edition. Cambridge: Cambridge University Press.
Carnap, R. 1962. Logical Foundations of Probability. Second edition. Chicago: University of Chicago Press.
Chant, S. 2006. “The Special Composition Question in Action.” Pacific Philosophical Quarterly 87: 422–41.
Coase, R. 1960. “The Problem of Social Cost.” Journal of Law and Economics 3: 1–44.
———. 1981. “The Coase Theorem and the Empty Core: A Comment.” Journal of Law and Economics 24: 183–87.
Coleman, J. 1988. Markets, Morals and the Law. Cambridge: Cambridge University Press.
———. 1992. Risks and Wrongs. Cambridge: Cambridge University Press.
Colman, A. 2003. “Cooperation, Psychological Game Theory, and Limitations of Rationality in Social Interaction.” Behavioral and Brain Sciences 26: 139–98.
Conee, E. 1983. “Review of Utilitarianism and Co-operation by Donald Regan.” Journal of Philosophy 80: 415–24.
———. 2001. “Review of ‘Hedonistic Utilitarianism.’” Philosophical Review 110: 428–30.
Copp, D. 1995. Morality, Normativity, and Society. New York: Oxford University Press.
———. 2007. “The Collective Moral Autonomy Thesis.” Journal of Social Philosophy 38: 369–88.
Cubitt, R. and R. Sugden. 2003. “Common Knowledge, Salience, and Convention.” Economics and Philosophy 19: 175–210.
Dasgupta, P. 2007. Economics: A Very Short Introduction. Oxford: Oxford University Press.
Dasgupta, P. and E. Maskin. 2003. “On the Robustness of Majority Rule.” Economics Working Papers, Number 36, Institute for Advanced Study, School of Social Science.
d’Aspremont, C. and L. Gevers. 1977. “Equity and the Informational Basis of Collective Choice.” Review of Economic Studies 44: 199–209.
Davis, M. and M. Maschler. 1965. “The Kernel of a Cooperative Game.” Naval Research Logistics Quarterly 12: 223–59.
———. 1967. “Existence of Stable Payoff Configurations for Cooperative Games.” In M. Shubik, ed., Essays in Mathematical Economics in Honour of Oskar Morgenstern. Princeton, NJ: Princeton University Press.
Debreu, G. 1959. Theory of Value: An Axiomatic Analysis of Economic Equilibrium. New York: Wiley.
Debreu, G. and H. Scarf. [1963] 1997. “A Limit Theorem on the Core of an Economy.” In H. Kuhn, ed., Classics in Game Theory, pp. 127–39. Princeton, NJ: Princeton University Press.
Dietrich, F. 2006. “Judgment Aggregation: (Im)possibility Theorems.” Journal of Economic Theory 126: 286–98.
Dixit, A. and S. Skeath. 2004. Games of Strategy. Second edition. New York: Norton.
Dretske, F. 1988. Explaining Behavior: Reasons in a World of Causes. Cambridge, MA: MIT Press.
Dutta, P. 1999. Strategies and Games: Theory and Practice. Cambridge, MA: MIT Press.
Edgeworth, F. Y. 1881. Mathematical Psychics. London: Kegan Paul.
Eells, E. and W. Harper. 1991. “Ratifiability, Game Theory, and the Principle of Independence of Irrelevant Alternatives.” Australasian Journal of Philosophy 69: 1–19.
Elster, J. 1985. “Rationality, Morality, and Collective Action.” Ethics 96: 136–55.
Feldman, F. 1986. Doing the Best We Can. Dordrecht: Reidel.
Finkelstein, C. 2004. “Legal Theory and the Rational Actor.” In A. Mele and P. Rawling, eds., The Oxford Handbook of Rationality, pp. 399–416. New York: Oxford University Press.
Fischer, J. M. 1994. The Metaphysics of Free Will: An Essay on Control. Oxford: Blackwell.
French, P. 1998. “Morally Blaming Whole Populations.” In P. French, ed., Individual and Collective Responsibility, pp. 13–33. Rochester, VT: Schenkman.
Friedman, J. 1990. Game Theory with Applications to Economics. Second edition. New York: Oxford University Press.
Fudenberg, D. and J. Tirole. 1991. Game Theory. Cambridge, MA: MIT Press.
Gauthier, D. 1986. Morals by Agreement. Oxford: Oxford University Press.
———. 1997. “Resolute Choice and Rational Deliberation.” Noûs 31: 1–25.
Geanakoplos, J. 1992. “Common Knowledge.” Journal of Economic Perspectives 6: 53–82.
———. 1994. “Common Knowledge.” In R. Aumann and S. Hart, eds., Handbook of Game Theory, vol. 2, pp. 1437–96. New York: Elsevier Science.
Gibbard, A. [1972] 1990. Utilitarianism and Coordination. New York: Garland.
———. 2002. “The Reasons of a Living Being.” Proceedings and Addresses of the American Philosophical Association 76, 2: 49–60.
———. 2003. Thinking How to Live. Cambridge, MA: Harvard University Press.
Gibbons, R. 1992. Game Theory for Applied Economists. Princeton, NJ: Princeton University Press.
Gigerenzer, G. 2000. Adaptive Thinking: Rationality in the Real World. New York: Oxford University Press.
———. 2002. “Bounded Rationality: The Adaptive Toolbox.” LOFT 5 Conference, Turin, Italy, June 28, 2002.
Gigerenzer, G. and R. Selten. 2000. “Rethinking Rationality.” In G. Gigerenzer and R. Selten, eds., Bounded Rationality: The Adaptive Toolbox, pp. 1–12. Cambridge, MA: MIT Press.
Gilbert, M. 1996. Living Together: Rationality, Sociality, and Obligation. Lanham, MD: Rowman & Littlefield.
———. 2000. Sociality and Responsibility: New Essays in Plural Subject Theory. Lanham, MD: Rowman & Littlefield.
———. 2001. “Collective Preferences, Obligations, and Rational Choice.” Economics and Philosophy 17: 109–19.
Gilboa, I. 1999. “Can Free Choice be Known?” In C. Bicchieri, R. Jeffrey, and B. Skyrms, eds., The Logic of Strategy, pp. 163–74. New York: Oxford University Press.
Gillies, D. B. 1953. “Some Theorems on n-Person Games.” Doctoral dissertation. Princeton, NJ: Princeton University.
———. 1959. “Solutions to General Non-Zero-Sum Games.” In A. W. Tucker and R. D. Luce, eds., Contributions to the Theory of Games, vol. 4, Annals of Mathematics Studies 40, pp. 47–85. Princeton, NJ: Princeton University Press.
Ginet, C. 1990. On Action. Cambridge: Cambridge University Press.
Gintis, H. 2000. Game Theory Evolving: A Problem-Centered Introduction to Modeling Strategic Behavior. Princeton, NJ: Princeton University Press.
Gold, N. and R. Sugden. 2007a. “Theories of Team Agency.” In F. Peter and H. Schmid, eds., Rationality and Commitment, pp. 280–312. Oxford: Oxford University Press.
———. 2007b. “Collective Intentions and Team Agency.” Journal of Philosophy 104: 109–37.
Good, I. J. 1952. “Rational Decisions.” Journal of the Royal Statistical Society, Ser. B, 14: 107–14.
Graham, K. 2002. Practical Reasoning in a Social World: How We Act Together. Cambridge: Cambridge University Press.
Greenberg, J. 1990. The Theory of Social Situations: An Alternative Game-Theoretic Approach. Cambridge: Cambridge University Press.
———. 1994. “Coalition Structures.” In R. Aumann and S. Hart, eds., Handbook of Game Theory, vol. 2, pp. 1305–37. New York: Elsevier Science.
Halpern, J. 2003. Reasoning about Uncertainty. Cambridge, MA: MIT Press.
Halpin, A. 2007. “Disproving the Coase Theorem.” Economics and Philosophy 23: 321–41.
Hammond, P. 1988. “Consequentialist Foundations for Expected Utility.” Theory and Decision 25: 25–78.
———. 1998. “Consequentialism and Bayesian Rationality in Normal Form Games.” In W. Leinfellner and E. Köhler, eds., Game Theory, Experience, Rationality: Foundations of Social Sciences, Economics and Ethics: In Honor of John C. Harsanyi, pp. 187–96. Dordrecht: Kluwer.
———. 1999. “Consequentialism, Non-Archimedean Probabilities, and Lexicographic Expected Utility.” In C. Bicchieri, R. Jeffrey, and B. Skyrms, eds., The Logic of Strategy, pp. 39–66. New York: Oxford University Press.
Hardin, R. 1982. Collective Action. Baltimore: Johns Hopkins University Press.
Harper, W. 1989. “Decisions, Games and Equilibrium Solutions.” In A. Fine and J. Leplin, eds., PSA 1988, vol. 2, pp. 344–62. East Lansing, MI: Philosophy of Science Association.
———. 1991. “Ratifiability and Refinements.” In M. Bacharach and S. Hurley, eds., Foundations of Decision Theory: Issues and Advances, pp. 263–93. Oxford: Blackwell.
———. 1999. “Solutions Based on Ratifiability and Sure Thing Reasoning.” In C. Bicchieri, R. Jeffrey, and B. Skyrms, eds., The Logic of Strategy, pp. 67–81. New York: Oxford University Press.
Harsanyi, J. 1966. “A General Theory of Rational Behavior in Game Situations.” Econometrica 34: 613–34.
———. 1973. “Games with Randomly Distributed Payoffs: A New Rationale for Mixed-Strategy Equilibrium Points.” International Journal of Game Theory 2: 1–23.
Harsanyi, J. and R. Selten. 1988. A General Theory of Equilibrium Selection in Games. Cambridge, MA: MIT Press.
Hildenbrand, W. 1987. “Cores.” In J. Eatwell, M. Milgate, and P. Newman, eds., The New Palgrave: A Dictionary of Economics, vol. 1, pp. 666–70. London: Macmillan.
Hooker, B. and B. Streumer. 2004. “Procedural and Substantive Rationality.” In A. Mele and P. Rawling, eds., The Oxford Handbook of Rationality, pp. 57–74. New York: Oxford University Press.
Hunter, D. 1994. “Act Utilitarianism and Dynamic Deliberation.” Erkenntnis 41: 1–35.
Hurley, S. 1989. Natural Reasons: Personality and Polity. New York: Oxford University Press.
———. 2003. “The Limits of Individualism Are Not the Limits of Rationality.” Behavioral and Brain Sciences 26: 164–65.
Jackson, F. 1987. “Group Morality.” In P. Pettit, R. Sylvan, and J. Norman, eds., Metaphysics and Morality: Essays in Honour of J. J. C. Smart, pp. 91–110. Oxford: Blackwell.
Jackson, F. and R. Pargetter. 1986. “Oughts, Options, and Actualism.” Philosophical Review 95: 233–55.
Jacobsen, H. 1996. “On the Foundations of Nash Equilibrium.” Economics and Philosophy 12: 67–88.
Jeffrey, R. 1983. The Logic of Decision. Second edition. Chicago: University of Chicago Press.
Jensen, M. C. 2005. “Management and the Capital Markets: Managing the Tensions Between Two Cultures.” Address at the University of Missouri, Columbia, MO, April 14, 2005.
Joyce, J. 1999. The Foundations of Causal Decision Theory. Cambridge: Cambridge University Press.
———. 2007. “Are Newcomb Problems Really Decisions?” Synthese 95: 537–62.
Kadane, J. and P. Larkey. 1982. “Subjective Probability and the Theory of Games.” Management Science 28: 113–20.
Kadane, J. and T. Seidenfeld. 1992. “Equilibrium, Common Knowledge, and Optimal Sequential Decisions.” In C. Bicchieri and M. Chiara, eds., Knowledge, Belief and Strategic Interaction, pp. 27–46. Cambridge: Cambridge University Press.
Kahneman, D. and A. Tversky. 1979. “Prospect Theory.” Econometrica 47: 263–91.
Kalai, E. and M. Smorodinsky. 1975. “Other Solutions to Nash’s Bargaining Problem.” Econometrica 43: 513–18.
Kannai, Y. 1992. “The Core and Balancedness.” In R. Aumann and S. Hart, eds., Handbook of Game Theory, vol. 1, pp. 355–95. New York: Elsevier Science.
Kavka, G. 1983. “The Toxin Puzzle.” Analysis 43: 33–36.
Keeney, R. and H. Raiffa. 1976. Decisions with Multiple Objectives. New York: Wiley.
Kierland, B. 2006. “Cooperation, ‘Ought Morally,’ and Principles of Moral Harmony.” Philosophical Studies 128: 381–407.
Kincaid, H. 1990. “Eliminativism and Methodological Individualism.” Philosophy of Science 57: 141–48.
Klemperer, P. 2004. Auctions: Theory and Practice. Princeton, NJ: Princeton University Press.
Kohlberg, E. and J. Mertens. 1986. “On the Strategic Stability of Equilibria.” Econometrica 54: 1003–37.
Kolodny, N. 2005. “Why Be Rational?” Mind 114: 509–63.
Kreps, D. 1990. Game Theory and Economic Modelling. Oxford: Clarendon Press.
Levi, I. 1982. “Conflict and Social Agency.” Journal of Philosophy 79: 231–47.
Lewis, D. 1969. Convention: A Philosophical Study. Cambridge, MA: Harvard University Press.
List, C. and P. Pettit. 2002. “Aggregating Sets of Judgments: An Impossibility Result.” Economics and Philosophy 18: 89–110.
Luce, R. D. and H. Raiffa. 1957. Games and Decisions: Introduction and Critical Survey. New York: Wiley.
Ludwig, K. 2004. “Rationality, Language, and the Principle of Charity.” In A. Mele and P. Rawling, eds., The Oxford Handbook of Rationality, pp. 343–62. New York: Oxford University Press.
———. 2007. “Collective Intentional Behavior from the Standpoint of Semantics.” Noûs 41: 355–93.
Maschler, M. 1992. “The Bargaining Set, Kernel, and Nucleolus.” In R. Aumann and S. Hart, eds., Handbook of Game Theory, vol. 1, pp. 591–667. New York: Elsevier Science.
Mas-Colell, A., M. Whinston, and J. Green. 1995. Microeconomic Theory. New York: Oxford University Press.
Maskin, E. 2005. “On the Robustness of Majority Rule.” LGS4 Conference, University of Caen, France, June 23, 2005.
McClennen, E. 1990. Rationality and Dynamic Choice: Foundational Explorations. Cambridge: Cambridge University Press.
———. 2000. “The Rationality of Rules.” In J. Nida-Rümelin and W. Spohn, eds., Rationality, Rules, and Structure, pp. 17–34. Dordrecht: Kluwer.
———. 2004. “The Rationality of Being Guided by Rules.” In A. Mele and P. Rawling, eds., The Oxford Handbook of Rationality, pp. 222–39. New York: Oxford University Press.
McKenzie, L. 2002. Classical General Equilibrium Theory. Cambridge, MA: MIT Press.
McMahan, J. 2002. The Ethics of Killing: Problems at the Margin of Life. New York: Oxford University Press.
McMahon, C. 2001. Collective Rationality and Collective Reasoning. Cambridge: Cambridge University Press.
McNaughton, D. and P. Rawling. 2004. “Duty, Rationality, and Practical Reasons.” In A. Mele and P. Rawling, eds., The Oxford Handbook of Rationality, pp. 110–31. New York: Oxford University Press.
Mele, A. 1992. Springs of Action: Understanding Intentional Behavior. New York: Oxford University Press.
———. 2001. Self-Deception Unmasked. Princeton, NJ: Princeton University Press.
———. 2003. Motivation and Agency. New York: Oxford University Press.
Mele, A. and P. Rawling, eds. 2004. The Oxford Handbook of Rationality. New York: Oxford University Press.
Melnyk, A. 2003. A Physicalist Manifesto: Thoroughly Modern Materialism. Cambridge: Cambridge University Press.
Millar, A. 2002. “Reasons for Action and Instrumental Rationality.” In J. Bermúdez and A. Millar, eds., Reason and Nature: Essays in the Theory of Rationality, pp. 113–34. Oxford: Clarendon Press.
Mongin, P. 2005. “Logical Aggregation, Probabilistic Aggregation and Social Choice.” LGS4 Conference, University of Caen, France, June 22, 2005.
Montague, R. and D. Kaplan. 1974. “A Paradox Regained.” In R. Montague, Formal Philosophy: Selected Papers of Richard Montague, R. Thomason, ed., pp. 271–85. New Haven: Yale University Press.
Moore, J. C. 2007. General Equilibrium and Welfare Economics: An Introduction. Berlin: Springer.
Morton, A. 2004. “Epistemic Virtues, Metavirtues, and Computational Complexity.” Noûs 38: 481–502.
Moulin, H. 1994. “Social Choice.” In R. Aumann and S. Hart, eds., Handbook of Game Theory, vol. 2, pp. 1091–125. New York: Elsevier Science.
———. 1995. Cooperative Microeconomics: A Game-Theoretic Introduction. Princeton, NJ: Princeton University Press.
———. 2003. Fair Division and Collective Welfare. Cambridge, MA: MIT Press.
Myerson, R. 1991. Game Theory: Analysis of Conflict. Cambridge, MA: Harvard University Press.
———. 1994. “Communication, Correlated Equilibria and Incentive Compatibility.” In R. Aumann and S. Hart, eds., Handbook of Game Theory, vol. 2, pp. 827–47. New York: Elsevier Science.
Nash, J. [1950] 1997a. “Equilibrium Points in n-Person Games.” In H. Kuhn, ed., Classics in Game Theory, pp. 3–4. Princeton, NJ: Princeton University Press.
———. [1950] 1997b. “The Bargaining Problem.” In H. Kuhn, ed., Classics in Game Theory, pp. 5–13. Princeton, NJ: Princeton University Press.
Nida-Rümelin, J. 1997. Economic Rationality and Practical Reason. Dordrecht: Kluwer.
Olson, M. 1965. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press.
Ordeshook, P. 1986. Game Theory and Political Theory: An Introduction. Cambridge: Cambridge University Press.
Osborne, M. 2004. An Introduction to Game Theory. New York: Oxford University Press.
Osborne, M. and A. Rubinstein. 1994. A Course in Game Theory. Cambridge, MA: MIT Press.
Papineau, D. 2003. The Roots of Reason. Oxford: Oxford University Press.
Parfit, D. 1984. Reasons and Persons. Oxford: Oxford University Press.
———. 2001. “Bombs and Coconuts, or Rational Irrationality.” In C. W. Morris and A. Ripstein, eds., Practical Rationality and Preference, pp. 81–97. Cambridge: Cambridge University Press.
Pearce, D. 1984. “Rationalizable Strategic Behavior and the Problem of Perfection.” Econometrica 52: 1029–50.
Peleg, B. 1992. “Axiomatizations of the Core.” In R. Aumann and S. Hart, eds., Handbook of Game Theory, vol. 1, pp. 397–412. New York: Elsevier Science.
Pettit, P. 1993. The Common Mind: An Essay on Psychology, Society, and Politics. New York: Oxford University Press.
———. 2001. A Theory of Freedom: From the Psychology to the Politics of Agency. New York: Oxford University Press.
———. 2003. “Groups with Minds of Their Own.” In F. Schmitt, ed., Socializing Metaphysics: The Nature of Social Reality, pp. 167–93. Lanham, MD: Rowman & Littlefield.
Pettit, P. and R. Sugden. 1989. “The Backward Induction Paradox.” Journal of Philosophy 86: 169–82.
Pindyck, R. and D. Rubinfeld. 1989. Microeconomics. New York: Macmillan.
Pollock, J. 2002. “Rational Choice and Action Omnipotence.” Philosophical Review 111: 1–23.
———. 2006. Thinking about Acting: Logical Foundations for Rational Decision Making. New York: Oxford University Press.
Rabinowicz, W. 1992. “Tortuous Labyrinth: Noncooperative Normal-Form Games between Hyperrational Players.” In C. Bicchieri and M. Chiara, eds., Knowledge, Belief and Strategic Interaction, pp. 107–26. Cambridge: Cambridge University Press.
———. 1998. “Grappling with the Centipede: Defence of Backward Induction for BI-Terminating Games.” Economics and Philosophy 14: 95–126.
Rachlin, H. 2002. “Altruism and Selfishness.” Behavioral and Brain Sciences 25: 239–96.
Raiffa, H. 1982. The Art and Science of Negotiation. Cambridge, MA: Harvard University Press.
———. 2002. Negotiation Analysis: The Science and Art of Collaborative Decision Making. Written with J. Richardson and D. Metcalf. Cambridge, MA: Harvard University Press.
Rawls, J. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
Regan, D. 1980. Utilitarianism and Co-operation. Oxford: Clarendon Press.
Reny, P. 1992. “Rationality in Extensive-Form Games.” Journal of Economic Perspectives 6: 103–18.
Rescher, N. 1988. Rationality: A Philosophical Inquiry into the Nature and the Rationale of Reason. Oxford: Clarendon Press.
———. 2003. Sensible Decisions: Issues of Rational Decision in Personal Choice and Public Policy. Lanham, MD: Rowman & Littlefield.
Roth, A., ed. 1985. Game-Theoretic Models of Bargaining. Cambridge: Cambridge University Press.
Rovane, C. 2004. “Rationality and Persons.” In A. Mele and P. Rawling, eds., The Oxford Handbook of Rationality, pp. 320–42. New York: Oxford University Press.
Rubinstein, A. 1982. “Perfect Equilibrium in a Bargaining Model.” Econometrica 50: 97–109.
———. 1998. Modeling Bounded Rationality. Cambridge, MA: MIT Press.
Samet, D. 1996. “Hypothetical Knowledge and Games with Perfect Information.” Games and Economic Behavior 17: 230–51.
Samuelson, L. 1997. Evolutionary Games and Equilibrium Selection. Cambridge, MA: MIT Press.
Samuelson, W. 1985. “A Comment on the Coase Theorem.” In A. Roth, ed., Game-Theoretic Models of Bargaining, pp. 321–39. Cambridge: Cambridge University Press.
Scanlon, T. M. 1998. What We Owe to Each Other. Cambridge, MA: Harvard University Press.
Schelling, T. 1960. The Strategy of Conflict. Cambridge, MA: Harvard University Press.
———. 1971. “Dynamic Models of Segregation.” Journal of Mathematical Sociology 1: 143–86.
Schmeidler, D. 1969. “The Nucleolus of a Characteristic Function Game.” SIAM Journal on Applied Mathematics 17: 1163–70.
Schmidt, T. 2000. “Structural Reasons in Rational Interaction.” In J. Nida-Rümelin and W. Spohn, eds., Rationality, Rules, and Structure, pp. 131–46. Dordrecht: Kluwer.
Schmidtz, D. 1995. Rational Choice and Moral Agency. Princeton, NJ: Princeton University Press.
Schofield, N. 1995. “Democratic Stability.” In J. Knight and I. Sened, eds., Explaining Social Institutions, pp. 189–216. Ann Arbor, MI: University of Michigan Press.
Searle, J. 1995. The Construction of Social Reality. New York: Free Press.
———. 2001. Rationality in Action. Cambridge, MA: MIT Press.
Selten, R. [1975] 1997. “Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games.” In H. Kuhn, ed., Classics in Game Theory, pp. 317–54. Princeton, NJ: Princeton University Press.
Sen, A. 2002. Rationality and Freedom. Cambridge, MA: Harvard University Press.
Shapley, L. S. [1953] 1997. “A Value for n-Person Games.” In H. Kuhn, ed., Classics in Game Theory, pp. 69–79. Princeton, NJ: Princeton University Press.
Shin, H. 1991. “Two Notions of Ratifiability and Equilibrium in Games.” In M. Bacharach and S. Hurley, eds., Foundations of Decision Theory: Issues and Advances, pp. 242–62. Oxford: Blackwell.
———. 1992. “Counterfactuals and a Theory of Equilibrium in Games.” In C. Bicchieri and M. Chiara, eds., Knowledge, Belief and Strategic Interaction, pp. 397–413. Cambridge: Cambridge University Press.
Shin, H. and T. Williamson. 1997. “Representing the Knowledge of Turing Machines.” In M. Bacharach, L. Gérard-Varet, P. Mongin, and H. Shin, eds., Epistemic Logic and the Theory of Games and Decisions, pp. 169–92. Boston, MA: Kluwer.
Shubik, M. 1982. Game Theory in the Social Sciences: Concepts and Solutions. Cambridge, MA: MIT Press.
Simon, H. 1982. Models of Bounded Rationality, vol. 2, Behavioral Economics and Business Organization. Cambridge, MA: MIT Press.
Skyrms, B. 1989. “Correlated Equilibria and the Dynamics of Rational Deliberation.” Erkenntnis 31: 347–64.
———. 1990a. The Dynamics of Rational Deliberation. Cambridge, MA: Harvard University Press.
———. 1990b. “Ratifiability and the Logic of Decision.” In P. French, T. Uehling, and H. Wettstein, eds., The Philosophy of the Human Sciences, Midwest Studies in Philosophy, vol. 15, pp. 44–56. Notre Dame, IN: University of Notre Dame Press.
———. 1996. Evolution of the Social Contract. Cambridge: Cambridge University Press.
———. 1998. “Bayesian Subjunctive Conditionals for Games and Decisions.” In W. Leinfellner and E. Köhler, eds., Game Theory, Experience, Rationality: Foundations of Social Sciences, Economics and Ethics: In Honor of John C. Harsanyi, pp. 161–72. Dordrecht: Kluwer.
———. 2004. The Stag Hunt and the Evolution of Social Structure. Cambridge: Cambridge University Press.
Smith, Adam. [1776] 1976. The Wealth of Nations. Indianapolis, IN: Liberty Classics.
Sobel, J. H. 1992. “Hyperrational Games: Concept and Resolutions.” In C. Bicchieri and M. Chiara, eds., Knowledge, Belief and Strategic Interaction, pp. 61–92. Cambridge: Cambridge University Press.
———. 1994. Taking Chances: Essays on Rational Choice. Cambridge: Cambridge University Press.
———. 2005. “Backward Induction without Tears?” In D. Vanderveken, ed., Logic, Thought and Action, pp. 433–61. Berlin: Springer.
Sober, E. and D. Wilson. 1998. Unto Others: The Evolution and Psychology of Unselfish Behavior. Cambridge, MA: Harvard University Press.
Spence, A. M. 1973. Market Signaling: Information Transfer in Hiring and Related Processes. Cambridge, MA: Harvard University Press.
Spohn, W. 1982. “How to Make Sense of Game Theory.” In W. Stegmüller, W. Balzer, and W. Spohn, eds., Philosophy of Economics, pp. 239–70. Berlin: Springer-Verlag.
———. 2000. “A Rationalization of Cooperation in the Iterated Prisoner’s Dilemma.” In J. Nida-Rümelin and W. Spohn, eds., Rationality, Rules, and Structure, pp. 67–84. Dordrecht: Kluwer.
Stalnaker, R. 1997. “On the Evaluation of Solution Concepts.” In M. Bacharach, L. Gérard-Varet, P. Mongin, and H. Shin, eds., Epistemic Logic and the Theory of Games and Decisions, pp. 345–64. Boston, MA: Kluwer.
———. 1998. “Belief Revision in Games: Forward and Backward Induction.” Mathematical Social Sciences 36: 31–56.
———. 1999. “Knowledge, Belief and Counterfactual Reasoning in Games.” In C. Bicchieri, R. Jeffrey, and B. Skyrms, eds., The Logic of Strategy, pp. 3–38. New York: Oxford University Press.
———. 2002. “Counterfactuals and Dispositional Properties in Games.” LOFT 5, Turin, Italy, June 29, 2002.
———. 2005. “Counterfactual Propositions in Games.” Pacific APA Convention, March, 2005.
Starr, R. 1997. General Equilibrium Theory: An Introduction. Cambridge: Cambridge University Press.
Stiglitz, J. E. 1974. “Incentives and Risk Sharing in Sharecropping.” Review of Economic Studies 41: 219–55.
Strasnick, S. 1975. “Preference Priority and the Maximization of Social Welfare.” Doctoral dissertation. Cambridge, MA: Harvard University.
Sugden, R. 1986. The Economics of Right, Co-operation and Welfare. Oxford: Blackwell.
———. 2000a. “The Motivating Power of Expectations.” In J. Nida-Rümelin and W. Spohn, eds., Rationality, Rules, and Structure, pp. 103–30. Dordrecht: Kluwer.
———. 2000b. “Team Preferences.” Economics and Philosophy 16: 175–204.
———. 2001. “Review of Equilibrium and Rationality.” Philosophical Review 110: 425–27.
Tan, T. and S. Werlang. 1988. “The Bayesian Foundations of Solution Concepts of Games.” Journal of Economic Theory 45: 370–91.
Taylor, M. 1987. The Possibility of Cooperation: Studies in Rationality and Social Change. Cambridge: Cambridge University Press.
Thagard, P. 2004. “Rationality and Science.” In A. Mele and P. Rawling, eds., The Oxford Handbook of Rationality, pp. 363–79. New York: Oxford University Press.
Tuomela, R. 1995. The Importance of Us: A Philosophical Study of Basic Social Notions. Stanford: Stanford University Press.
Vallentyne, P. 1999. “Review of Equilibrium and Rationality.” Ethics 109: 684–86.
Vanderschraaf, P. 2001. Learning and Coordination: Inductive Deliberation, Equilibrium, and Convention. New York: Routledge.
Vanderschraaf, P. and G. Sillari. 2005. “Common Knowledge.” Online Stanford Encyclopedia of Philosophy.
Vannucci, S. 1996. “Social Ethics and Implementation Theory.” In F. Farina, F. Hahn, and S. Vannucci, eds., Ethics, Rationality, and Economic Behavior, pp. 301–31. Oxford: Clarendon Press.
Vardi, M., ed. 1988. Proceedings of the Second Conference on Theoretical Aspects of Reasoning about Knowledge. Los Altos, CA: Morgan Kaufmann.
Varian, H. 1984. Microeconomic Analysis. New York: Norton.
Verbeek, B. 2007. “Rational Self-Commitment.” In F. Peter and H. Schmid, eds., Rationality and Commitment, pp. 150–74. Oxford: Oxford University Press.
Vogler, C. 2001. “We Never Were in Paradise.” In C. W. Morris and A. Ripstein, eds., Practical Rationality and Preference, pp. 209–39. Cambridge: Cambridge University Press.
von Neumann, J. and O. Morgenstern. [1944] 1953. Theory of Games and Economic Behavior. Third edition. Princeton, NJ: Princeton University Press.
Walliser, B. 1992. “Epistemic Logic and Game Theory.” In C. Bicchieri and M. Chiara, eds., Knowledge, Belief and Strategic Interaction, pp. 197–226. Cambridge: Cambridge University Press.
Watson, G. 2004. Agency and Answerability: Selected Essays. Oxford: Clarendon Press.
Weber, R. 1994. “Games in Coalitional Form.” In R. Aumann and S. Hart, eds., Handbook of Game Theory, vol. 2, pp. 1285–303. New York: Elsevier Science.
Weibull, J. 1997. Evolutionary Game Theory. Cambridge, MA: MIT Press.
Weirich, P. 1998. Equilibrium and Rationality: Game Theory Revised by Decision Rules. Cambridge: Cambridge University Press.
———. 2001. Decision Space: Multidimensional Utility Analysis. Cambridge: Cambridge University Press.
———. 2003. “From Rationality to Coordination.” Behavioral and Brain Sciences 26: 179–80.
———. 2004. Realistic Decision Theory: Rules for Nonideal Agents in Nonideal Circumstances. New York: Oxford University Press.
———. 2006. “A Syntactic Treatment of Common Knowledge in Simultaneous-Move Games.” In G. Bonanno, W. van der Hoek, and M. Wooldridge, eds., Proceedings of the Seventh Conference on Logic and the Foundations of Game and Decision Theory (LOFT 2006). Liverpool: University of Liverpool.
———. 2007a. “Collective, Universal, and Joint Rationality.” Social Choice and Welfare 29: 683–701.
———. 2007b. “Initiating Coordination.” Philosophy of Science 74: 790–801.
———. Forthcoming a. “Probabilities in Decision Rules.” In E. Eells and J. Fetzer, eds., The Place of Probability in Science. Chicago, IL: Open Court.
———. Forthcoming b. “Utility and Framing.” In P. Weirich, ed., Realistic Standards for Decisions, a special issue of the journal Synthese. DOI: 10.1007/s11229-009-9485-0.
Wooldridge, M. 2002. An Introduction to Multi-Agent Systems. New York: Wiley.
Yi, B. 2002. Understanding the Many. New York: Routledge.
Young, H. P. 1998. Individual Strategy and Social Structure: An Evolutionary Theory of Institutions. Princeton, NJ: Princeton University Press.
Zimmerman, M. 1996. The Concept of Moral Obligation. Cambridge: Cambridge University Press.
Index
a priori truths, 36 Abdou, J. and H. Keiding, 244 n. 15 ability, 37 and obligation, 37–38 as sensitive to context, 233 n. 19 act, 12–16. See also collective act, composite act, extended act, joint act basic, 19, 21 free, 13–14, 17–18 individuation of, 12 intentional, 13, 25–26, 231 n. 19 by proxy, 14 voluntary, 17 agent, 7–12. See also collective agent composite, 3, 8 free, 7, 55 perfect, 9, 26, 136 simple, 8 agreement, 143–44 self-enforcing, 146 agreement game, 206–210 underlying a bargaining problem, 208–209 underlying a coalitional game, 207–208 Aivazian, V. and J. Callen, 247 n. 5 Akerlof, G. A., 248 n. 8 analysis of a game cooperative, 147, 148 noncooperative, 147, 148 Anderson, R., 248 n. 10 Arrow, K., 235 n. 7, 248 n. 3 Arrow’s Theorem, 66, 152, 236 n. 9
Aumann, R., 81, 91, 156, 236 n. 1, 236 n. 12, 237 n. 8, 237–38 n. 15, 238 n. 17, 243 n. 6, 244 n. 16, 245 n. 18, 245 n. 21, 245 n. 22, 245 n. 24, 245 n. 25, 248 n. 10 Aumann, R. and A. Brandenburger, 92 Aumann, R. and M. Maschler, 245 n. 24 auctions, 220–21 second-price sealed bid, 221 autonomy, 7–8, 11, 16, 18. See freedom
Bacharach, M., 80, 86, 125, 130, 235 n. 3, 237 n. 7, 238 n. 17 Bacharach, M., L. Gérard-Varet, P. Mongin, and H. Shin, 239 n. 24 backward induction, 76, 89–90, 149, 204–205, 237 n. 15 balancedness, 245 n. 22 bargaining, 139, 155, 187–90, 220 and efficiency, 197–98 optimal protocol for, 247 bargaining power, 209 bargaining set, 161–62 Barwise, J., 238 n. 17 Bates, J., 35 Bergmann, M., 232 n. 13 Bermúdez, J., 231 n. 10 Bernheim, B. D., 85 Bernheim, B. D., B. Peleg, and M. D. Whinston, 243 n. 7 Bicchieri, C., 91, 121, 235 n. 7, 237–38 n. 15 Bicchieri, C. and M. Chiara, 237 n. 4
Bicchieri, C., R. Jeffrey, and B. Skyrms, 237 n. 4 binding contract, 140 Binmore, K., 3, 33, 147, 152, 161, 205– 206, 233 n. 18, 237 n. 3, 237 n. 11, 244 n. 12, 244 n. 15, 247 n. 5, 247 n. 2, 248 n. 2 Binmore, K. and A. Brandenburger, 238 n. 17, 238 n. 19 Binmore, K., M. Osborne, and A. Rubinstein, 247 n. 2 Bittner, R., 15, 32 Black, D., 66 Black, J., 247 n. 5 blame, 38, 72 Block, N., 230 n. 6 Bovens, L., 109 Bovens, L. and W. Rabinowicz, 248 n. 4 Brams, S., 236 n. 2, 241 n. 16 Brandenburger, A., 238 n. 17 Brandenburger, A. and E. Dekel, 92 Bratman, M., 15, 16, 135, 137–38, 143, 234 n. 24 Broome, J., 242 n. 14 Broome, J. and W. Rabinowicz, 237–38 n. 15 Camerer, C., 240 n. 14 Campbell, D., 248 n. 8 Carnap, R., 81 Chant, S., 15 coalition incentive of, 156, 163–64, 167–68, 169, 198–99 knowledge of, 168 options of, 164–66 value of, 244 n. 15 coalition, type grand, 153 political, 222 unit, 165–66 coalition structure, 154, 164 coalitional game, 153–56 characteristic function of, 153, 159, 177 efficiency in, 158 elementary, 153–54
core of, 156, 159–62 core allocation of, 156 underlying sequential game of, 166, 179, 201–204 Coase, R., 247 n. 5 Coase’s Theorem, 197 cognitive equilibrium, 240 n. 12 Coleman, J., 235 n. 6, 241 n. 3, 245 n. 17, 247 n. 5, 248 n. 10 collaboration, 140 collective act, 3–4, 14 and convention, 14 full rationality of, 71 collective agent, 10–11. See also agent, composite collective intention, 15, 27 collective interest, 16, 65 collective preference, 64–66 collective rationality, 3, 31, 54–55, 181–82 and analogy, 56–57 and analysis, 57 goal of, 60–62, 88, 217–18 and individual rationality, 55 standard of, 88 collective utility, 64–69 maximization of, 68–69 Colman, A., 32, 124, 128, 130, 241 n. 4 commitment, 132–33 common knowledge, 82, 90–91, 112, 116, 128 communication, 143 composite act, 7, 57 and control, 22, 28 and simple act, 48–49 compositionality, 29, 45–52, 69–74 and consistency, 46–47 and strategic equilibrium, 201 comprehensive outcome, 239 n. 4 comprehensive rationality, 43 and coordination, 123, 126–27, 133–34 and efficiency, 200 and a solution to a game, 82 conditional probability, 237 n. 5 conditional rationality, 44–45 detachment of, 233 n. 18 conditionals
counterfactual, 115 indicative, 115 subjunctive, 115 Condorcet’s Jury Theorem, 248 n. 4 Conee, E., 242 n. 7 consistency, 4, 39, 46–47, 58 of individual and collective rationality, 57, 70 dynamic, 79 constitution, 48, 72 control, 11, 16–22 autonomous, 13 and awareness, 22, 24, 26 direct, 19, 48 immediate, 20 free, 25 full, 21, 25, 100 cooperation, 72, 140 and collaboration, 140–41 cooperative games, 75, 145–53 and binding contracts, 145–46 and joint action, 145–46 and underlying sequential games, 148–50 coordination, 120, 139–40 causal, 120, 141–42 epistemic, 113 evidential, 120–21, 141–42 on a strategic equilibrium, 210 Copp, D., 18, 31, 72, 229 correlated equilibrium, 91–92, 146–47, 240 n. 13 correlation of strategies, 238 n. 20 Cubitt, R. and R. Sugden, 241 n. 1
Dasgupta, P., 247 Dasgupta, P. and E. Maskin, 248 n. 5 d’Aspremont, C. and L. Gevers, 236 n. 9 Davis, M. and M. Maschler, 245 n. 24 Debreu, G., 248 n. 3 Debreu, G. and H. Scarf, 248 n. 10 decision, 18, 19–20 degree of desire, 232 n. 6 Dietrich, F., 235 n. 5 dilemma of rationality, 39, 100 Discursive Dilemma, 235 n. 5
disposition to cooperate, 125, 126–27, 217 to coordinate, 133–35 Dixit, A. and S. Skeath, 75, 77, 236 n. 10, 237–38 n. 15, 238 n. 18, 243 n. 1, 243 n. 4, 243 n. 6, 244 n. 12, 245 n. 18, 246, 247 n. 2, 248 n. 10 dominance, 195 and Pareto superiority, 195–96 and strategic equilibrium, 196 Dretske, F., 229 n. 3 Dutta, P., 62, 236 n. 10
Edgeworth, F. Y., 223 Eells, E. and W. Harper, 240 n. 8 efficiency, 4, 58–64, 93. See Pareto optimality among Nash equilibria, 127–38, 150 Elster, J., 50 emotions, 42 epistemic logic, 239 n. 24 equilibrium-in-beliefs, 84–85, 92, 109 evaluability, 23–30 and control, 55 for rationality, 23, 25 for utility, 23, 25 excuses, 27, 38, 100 explanation, 12, 49, 248 n. 11 direction of, 51, 53, 72 and justification, 94–95, 121–22, 185 in mathematics, 234–35 n. 1 and normative principles, 32 extended act, 9, 45–46, 47, 50, 52 and momentary act, 20, 47, 50
fallacy of composition, 74 Feldman, F., 52 Fine, K., 234 n. 1 Finkelstein, C., 218–19 Fischer, J. M., 231 n. 12 focal points, 93, 128 cultural, 93 from hypothetical agreement, 134 and salience, 128 structural, 128 force, 151
freedom, 7–8, 11, 13, 16. See autonomy and awareness, 17 evidence of, 34 French, P., 72 Friedman, J., 162, 239 n. 23, 243 n. 6, 245 n. 17, 248 n. 10 Fudenberg, D. and J. Tirole, 77, 79, 152, 236 n. 1, 238 n. 17, 238 n. 19, 247 n. 5, 247 n. 2 game theory, 4–5, 75 evolutionary, 80 game, 75–81. See also analysis of a game, cooperative games, representations of games, solution to a game concrete, 83–84, 147, 202 multistage, 77 noncooperative, 75–76 single-stage, 77 of strategy, 79 Gauthier, D., 51, 123, 248 n. 2 Geanakoplos, J., 238 n. 17 Gibbard, A., 36–37, 121, 232 n. 11, 234 n. 28, 242 n. 5 Gibbons, R., 237 n. 8, 238 n. 17, 247 n. 2 Gigerenzer, G., 38, 99 Gigerenzer, G. and R. Selten, 38 Gilbert, M., 15, 16, 18, 124, 125, 128 Gilboa, I., 239 n. 24 Gillies, D. B., 245 n. 18 Ginet, C., 230 n. 7 Gintis, H., 32, 33, 80, 91, 237 n. 3, 238 n. 19, 239 n. 23, 247 n. 2 goal, 26, 39 cognitive, 34 collective, 61 of individual rationality, 62 and a standard, 39 Gold, N. and R. Sugden, 125, 238 n. 17 Good, I. J., 67, 103 Good’s principle, 103–104 Graham, K., 16, 32, 242 n. 5 Greenberg, J., 162, 244 n. 13 group of coalitions, 169 habit, 26 Halpern, J., 239 n. 24
Halpin, A., 247 n. 5 Hammond, P., 79 Hardin, R., 73–74, 242 n. 9 Harper, W., 240 n. 8 Harsanyi, J., 236 n. 12, 237 n. 8 Harsanyi, J. and R. Selten, 93, 111, 147, 239 n. 22, 243 n. 5 Hildenbrand, W., 248 n. 10 Hi-Lo, 124–26 and binding contracts, 144 Hooker, B. and B. Streumer, 232 n. 7 Hunter, D., 234 n. 23 Hurley, S., 12, 125 hyperrationality, 128–30 ideal game, 94 idealizations, 40, 232 n. 14 for games, 80, 150–51, 199–200, 202 implementation theory, 220 incentives, 164. See also preferences insufficient, 185 self-undermining, 173 sufficient, 110, 173 independence causal, 96 evidential, 96 individual rationality, 54 and strategic equilibrium, 215 information asymmetric, 220–21, 236 n. 12 complete, 78 economics of, 248 n. 8 perfect, 79 pooling, 67 sharing, 220–21 instigation of joint action, 170, 187, 207, 211–12, 215 intentions, 135–38 bootstrapping, 137 and coordination, 135–38 firmness of, 233 n. 17 and reasons, 137 invisible hand, 141 Jackson, F., 14 Jackson, F. and R. Pargetter, 233 n. 18
Jacobsen, H., 242 n. 5 Jeffrey, R., 105 Jensen, M. C., 220 joint act, 139 and causal coordination, 142 and collaboration, 142 and a collective act, 139 joint rationality, 81 and collective rationality, 87–88 and a solution to a game, 81 joint self-ratification, 241 n. 20 joint self-support, 213 and Nash equilibrium, 213 and strategic equilibrium, 213–15 joint utility maximization, 210 and Nash equilibrium, 211 Joyce, J., 135, 137, 231 n. 13 Kadane, J. and P. Larkey, 88 Kadane, J. and T. Seidenfeld, 237–38 n. 15 Kahneman, D. and A. Tversky, 32 Kalai, E. and M. Smorodinsky, 245 n. 17 Kannai, Y., 245 n. 18, 245 n. 22 Kavka, G., 243 n. 15 Keeney, R. and H. Raiffa, 236 n. 11 kernel, 161–62 Kierland, B., 242 n. 6 Kincaid, H., 16 Klemperer, P., 248 n. 7 Kohlberg, E. and J. Mertens, 93 Kolodny, N., 231 n. 3, 233 n. 18 Kreps, D., 147, 238 n. 18, 240 n. 13, 247 n. 2 law, 218–19 Levi, I., 235 n. 2 Lewis, D., 83, 91, 121, 122, 243 n. 3 List, C. and P. Pettit, 235 n. 5 Luce, R. D. and H. Raiffa, 152, 237 n. 7, 246 Ludwig, K., 11, 15, 16 majority rule, 219 majority-rule game, 159–60, 172, 177, 183–85, 207–208, 213–14 markets, 219 and competitive equilibrium, 223 and strategic equilibrium, 222–23
Maschler, M., 245 n. 24 Mas-Colell, A., M. Whinston, and J. Green, 248 n. 10 Maskin, E., 248 n. 5 Matching Pennies, 82–83, 96, 106, 107, 109, 110–11, 114–19 as a cooperative game, 190–95 McClennen, E., 51, 241 n. 3 McKenzie, L., 248 n. 10 McMahan, J., 230 n. 4 McMahon, C., 15, 123–24, 240 n. 11 McNaughton, D. and P. Rawling, 232 n. 4 Mele, A., 13, 17, 234–35 n. 1 Mele, A. and P. Rawling, 32 Melnyk, A., 233 n. 20 mental causation, 17 methodological individualism, 16 Millar, A., 13 mistakes, 44–45, 239 n. 7 mixed strategies, 109, 160 Mongin, P., 235 n. 5 Montague, R. and D. Kaplan, 239 n. 24 Moore, J. C., 248 n. 3, 248 n. 10 morality, 25 and rationality, 34–35 Morton, A., 41, 239 n. 2 Moulin, H., 16, 59, 94, 152, 158, 160, 161, 236 n. 9, 243 n. 1, 243 n. 6, 244 n. 12, 244 n. 13, 244 n. 16, 245 n. 18, 246, 248 n. 10 Myerson, R., 33, 59, 81, 93, 146, 147, 148, 236 n. 2, 237 n. 9, 241 n. 18, 243 n. 1, 243 n. 5, 243 n. 6, 244 n. 10, 244 n. 12, 244 n. 13, 244 n. 16, 245 n. 18, 245 n. 21, 245 n. 24, 247 n. 6, 247 n. 2 Nash, J., 76, 155 Nash equilibrium, 63, 76, 84. See also equilibrium-in-beliefs and collective rationality, 88 realization of, 106, 111–19, 127 refinement of, 93 in a sequential game, 205 Nash equilibrium, types coalition-proof, 243 n. 7 rollback, 90, 149, 205
subgame perfect, 79, 90, 206 subjective, 84–85, 194 Nash’s solution, 155. See also bargaining asymmetric version, 209 Nash program for, 205–206, 244 n. 10 and strategic equilibrium, 188–90 naturalism, 36 nearest alternative profile, 171 Newcomb’s problem, 234 n. 28 Nida-Rümelin, J., 229 nucleolus, 161–62 Olson, M., 235 n. 7 options, 22, 28 collective, 59 comparison of, 104–105 conflation of, 246 Ordeshook, P., 147, 152, 243 n. 8, 244 n. 12, 244 n. 14, 244 n. 15, 245 n. 21, 247 n. 5 Osborne, M., 75, 90, 245 n. 18, 247 n. 2 Osborne, M. and A. Rubinstein, 33, 79, 80, 81, 148, 161, 236 n. 2, 238 n. 18, 238 n. 19, 243 n. 6, 245 n. 18, 245 n. 19, 245 n. 24, 247 n. 2, 248 n. 6, 248 n. 10
Pollock, J., 231 n. 15, 232 n. 15 pragmatic equivalence, 71, 84 preferences, 41–43. See incentives all-things-considered, 107–108 conditional, 172 cyclical, 107 revealed, 68 preparation for a game, 131–32, 200, 210 prescience, 86, 94–96, 210 in a coalitional game, 168 and common knowledge, 94 Prisoner’s Dilemma, 59–60, 72, 75–77 and binding contracts, 144, 149–50 cooperative version of, 212 probability, 101 profile of strategies, 76, 81 in a coalitional game, 164 feasible, 165–66 proposition, 12 under a mode of presentation, 230 n. 8 pursuit of incentives, 107 dynamics of, 174–75 selection during, 107, 173–75, 199 stopping, 107, 173–75, 199 quantization of beliefs and desires, 103
Papineau, D., 36 Pareto optimality, 59. See efficiency Parfit, D., 99, 123, 230 n. 4 part, 12. See also constitution path of incentives, 106 in a coalitional game, 170 path of pursued incentives, 107 in a coalitional game, 169–175 payoff transformation, 127, 130–31 Pearce, D., 85 Peleg, B., 245 n. 18 person, 10 person-stage, 9 personal identity, 56–57, 230 n. 4 Pettit, P., 15, 16, 17, 24, 33, 41, 62, 235 n. 5 Pettit, P. and R. Sugden, 237–38 n. 15 phone call interrupted, 83, 86–87, 95 Pindyck, R. and D. Rubinfeld, 219 plans, 50, 51–52 and coordination, 143
Rabinowicz, W., 129, 135, 237–38 n. 15 Rachlin, H., 49 Raiffa, H., 158, 220, 243 n. 5, 243 n. 6, 244 n. 11, 244 n. 12, 244 n. 13 ratification, 105–106, 113, 117 rationality, 31–37. See also collective rationality, comprehensive rationality, conditional rationality, individual rationality, joint rationality, universal rationality instrumental, 33, 43, 56 practical, 34 and reasons, 31–32 and self-interest, 33 and success, 36 theoretical, 34 rationalizability, 85, 92 Rawls, J., 248 n. 2 reasoning
best-response, 112 Stackelberg, 241 n. 4 strategic, 112, 121, 168 reasons, 13, 15–16, 26, 27, 35 and expected utility, 32 insufficient, 108 pragmatic, 137 sufficient, 110 reflective equilibrium, 35–36 Regan, D., 126, 130 Reny, P., 237–38 n. 15 representations of beliefs and desires, 101 quantitative, 101–102 representations of games, 78 cooperative, 147 and equilibria, 187, 193–95 multiple, 89, 204 noncooperative, 147 sequential, 149, 202 and solutions, 83–84, 192–93, 203–206 representations of propositions, 102 Rescher, N., 27, 231 n. 21, 231 n. 1 responsibility, 23, 27 attributive, 231 n. 16 and awareness, 232 n. 13 causal, 23 normative, 23–24 risk-dominance, 239 n. 22 Roth, A., 245 n. 17 Rovane, C., 232 n. 9, 242 n. 5 Rubinstein, A., 206, 238 n. 17 rules, 49–50 Samet, D., 237–38 n. 15 Samuelson, L., 237 n. 3 Samuelson, W., 247 n. 5 satisficing, 103–104 Scanlon, T. M., 231 n. 16 Schelling, T., 15, 128, 237 n. 7, 239 n. 25 Schmeidler, D., 245 n. 24 Schmidt, T., 242 n. 9 Schmidtz, D., 36 Schofield, N., 222 Searle, J., 15, 18, 33, 41, 230 n. 5, 231 n. 18 self-deception, 234–35 n. 1 self-defeat, 98–100
self-enforcing strategy, 238 n. 19 self-support, 106 in a coalitional game, 178 Selten, R., 79 Sen, A., 31, 58 Shapley, L. S., 152 Shapley value, 152–53 Shin, H., 237 n. 12, 240 n. 8 Shin, H. and T. Williamson, 238 n. 16 Shubik, M., 81, 152, 244 n. 13, 244 n. 16, 245 n. 18, 245 n. 24 signaling game, 247 n. 3 Simon, H., 103–104 Skyrms, B., 32, 111, 121, 122, 234 n. 27, 237 n. 3, 237 n. 12, 238 n. 19, 240 n. 8, 241 n. 2, 246, 247 n. 3, 248 n. 2 Smith, Adam, 61 Sobel, J. H., 128, 129, 237 n. 15, 237–38 n. 15 Sober, E. and D. Wilson, 10, 232 n. 8, 233 n. 16, 241 n. 2 social choice, 236 n. 9 social contract, 216, 221–22 and efficiency, 221–22 social institutions, 216 design of, 217 and strategic equilibrium, 221–23 solution to a game, 81–88, 180–81 and equilibrium, 84 explication of, 81 objective, 81–82 subjective, 81–82 sophisticated choice, 234 n. 22 Spence, A. M., 248 n. 8 Spohn, W., 80, 229, 234 n. 22, 236 n. 1 stable set, 152 Stag Hunt, 92–93, 121 Stalnaker, R., 237 n. 10, 237 n. 12, 237–38 n. 15, 238 n. 21 standard of evaluation, 40, 97 procedural, 41, 58 and a procedure, 40–41, 97–100 substantive, 41, 58 Starr, R., 245 n. 18, 248 n. 10 Stiglitz, J. E., 248 n. 8 Strasnick, S., 236 n. 9
strategic equilibrium, 97, 108–11 in a coalitional game, 164, 177, 184 and collective rationality, 181–82 existence of, 178 identification of, 179–80, 203–204 realization of, 184 strategic equilibrium, comparisons with the core, 185–86 with efficiency, 111, 189, 197–200 with independence, 196–97 with Nash equilibrium, 186–87, 190–95 strong Nash equilibrium, 156–57 Sugden, R., 32, 93, 95–96, 125, 128, 130, 237 n. 3, 239 n. 4, 241 n. 1, 242 n. 10 supposition evidential, 81 indicative, 106 subjunctive, 106 swampman, 229 n. 3 symmetry, 4, 62–64 Tan, T. and S. Werlang, 238 n. 18 Taylor, M., 33, 236 n. 1, 237 n. 7, 248 n. 2 team reasoning, 124–26 team spirit, 130–31 Thagard, P., 235 n. 6 theoretical unity, 5–6, 57, 164, 223–25 tie-breaking principle, 133 Tuomela, R., 15, 230 n. 9, 243 n. 2 Ultimatum Game, 145, 149, 158, 205 unified mind, 8, 11–12 universal rationality, 71, 181 and collective rationality, 87–88 and joint rationality, 86–87 universalizability, 232 n. 8 utility, 100–101 analysis of, 33
interpersonal comparisons of, 236 n. 9 profile, 154 transferable, 154 utility-equivalence, 195 utility maximization, 32–33, 39–40, 50, 67, 98–104 constrained, 123 and control, 29 and coordination, 135 generalization of, 104 Vallentyne, P., 38 Vanderschraaf, P., 237 n. 3, 237 n. 8, 238 n. 19, 240 n. 8, 241 n. 1, 243 n. 6 Vanderschraaf, P. and G. Sillari, 238 n. 17 Vannucci, S., 248 n. 6 Vardi, M., 239 n. 24 Varian, H., 245 n. 18, 248 n. 3, 248 n. 8, 248 n. 10 Verbeek, B., 242 n. 13 Vogler, C., 16 von Neumann, J. and O. Morgenstern, 82, 106, 111–12, 152, 232 n. 5 voting, 152 Walliser, B., 239 n. 24 Watson, G., 229 Weber, R., 244 n. 13 Weibull, J., 237 n. 3 welfare economics, 197, 219 Wooldridge, M., 235 n. 4 Yi, B., 16 Young, H. P., 15, 121, 232 n. 15, 237 n. 3, 242 n. 8, 247 n. 2 Zimmerman, M., 25, 73, 126, 243 n. 2