Power, Freedom, and Voting
M.J.H., 35 × 27 cm, watercolour on paper, Katharina Kohl, 2007.
Matthew Braham · Frank Steffen (Editors)
Power, Freedom, and Voting
Essays in Honour of Manfred J. Holler
Dr. Matthew Braham
Faculty of Philosophy
University of Groningen
Oude Boteringestraat 52
9712 GL Groningen
The Netherlands
[email protected]

Dr. Frank Steffen
The University of Liverpool Management School
Chatham Street
Liverpool L69 7ZH
UK
[email protected]

ISBN 978-3-540-73381-2
e-ISBN 978-3-540-73382-9
DOI 10.1007/978-3-540-73382-9
Library of Congress Control Number: 2008922456

© 2008 Springer-Verlag Berlin Heidelberg

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Production: le-tex Jelonek, Schmidt & Vöckler GbR, Leipzig
Cover design: WMX Design GmbH, Heidelberg

Printed on acid-free paper

9 8 7 6 5 4 3 2 1

springer.com
Preface

The chapters in this volume are intended to honour the career and work of Manfred J. Holler by presenting a body of work in economics, political science, and philosophy that is intimately related to some of the major themes that have occupied his attention for the past twenty-five years or so. Some of the chapters are developments of topics that Holler has consistently worked on during his career (power indices, voting rules, principles and applications of game theory); others are on topics that currently occupy his attention (freedom and responsibility). We even include an essay by Holler on the quintessential theorist of power: Niccolò Machiavelli.

For those who know Manfred personally, the breadth of viewpoints and methods that can be found in this volume should come as no surprise. His academic career is marked by intellectual diversity and experimentation: he has published papers in many branches of economics that can be classified as purely theoretical, as applied theory, as empirical, and as historical.

The person who occasioned this book was born on 25 July 1946 in Munich, Germany, where he grew up and completed his secondary and university education: MA in economics (1971), doctoral degree (1975), and Habilitation (1984), the last of which is a post-doctoral qualification (the so-called ‘second book’) that until recently was generally required by German universities for appointment to a professorship. After a senior research and lecturing position in Munich, Holler took up an associate professorship in economics at the University of Aarhus, Denmark, in 1986. In 1991 he was appointed full professor of economic theory at the University of Hamburg where, at the time of writing, he still works.

The German university landscape has often received bad press for the hierarchical and rigid structures and fiefdoms that Herr Prof. Dr. carves out and jealously guards, even at the expense of the openness and frankness required for healthy academic discourse and development. While this is undoubtedly true, the same landscape – which is now rapidly disappearing in the process of turning German universities into degree factories – has another side. Depending on the personalities that are able to find their way into the niches of German universities, these niches can be equally hospitable to an entirely different form of academic life. Holler’s niche is an open and academically and educationally progressive one. In the words of the great American philosopher John Dewey, it is a place of ‘connected experiences’. The autonomy of the traditional German university Lehrstuhl means that such niches can also be the ideal environment for creative and highly individual endeavour. One of the very important characteristics of the
Institute of SocioEconomics (the Anglicized name that Holler gave to his Lehrstuhl) is that it is refreshingly free of rules and regulations. As long as we have known Manfred, he has always made every effort to free his staff from unnecessary administrative burdens, and he never burdened anyone with his own work. For him, it was important to ensure that there was sufficient time to do what is supposed to be done in universities: read, think, experiment, and write. Even the mundane task of setting exam questions and marking was a collaborative and intellectually stimulating exercise. Manfred has been very suspicious of targets, research clusters, and the endless creation of newfangled ways of managing academic life. For him, it is a matter of providing time and encouragement to initiate and organize research activities on one’s own initiative. Over the years we organized workshops from Siena to the Isle of Sylt in the North Sea. Many of the authors in this volume contributed to these events, so that in some sense this volume records the recent history of the Institute of SocioEconomics.

Both of us have known Holler closely since our days as his doctoral students and have been greatly affected and influenced by his intellectual generosity. We wanted to repay this charity by putting together not only a Festschrift but also an event that is close to his heart: the combination of art and science. Thus the idea of a Festschrift conference with an art event was conceived in the autumn of 2004 and organized in secret together with Manfred’s wife, Barbara Klose-Ullmann. On 17–20 August 2006, about a month after Manfred’s sixtieth birthday, most of the contributors met at the Elsa-Brändström-Haus, a villa and former residence of the Warburg family on the banks of the River Elbe just outside Hamburg, to present and discuss early drafts of the papers. Not only did we exhibit some artworks by a couple of Manfred’s artist friends in the main conference room, but we also dedicated an evening to two artists to present their work. Nicola Atkinson-Davidson presented her piece, ‘Thank You for Shopping with Us! (And Have a Nice Day?)’, which is best described as a piece of social sculpture à la Beuys. The piece centred on an installation: a fan and an empty plastic bag. It was about establishing an interaction among the audience and the social environment. Alex Close presented the ‘essence of a container’. This essence is documented in his essay entitled ‘Container Manifesto: Or Why Boxes Have to be Closed’. We cannot go into detail here about these works, but details can be found at www.nadfly.com, and Alex’s manifesto will be published in 2008 in Homo Oeconomicus, a journal that Manfred founded and edits together with one of us (m.b.) and Hartmut Kliemt.

As it is not possible to represent the great variety of Manfred’s work in one volume, we decided to restrict our attention to that area of his work which overlaps with our own interests. Hence the focus of this book on foundational issues in the analysis of power, freedom, and voting. However, even such a thematic restriction posed a number of problems. The sheer number of friends and colleagues that Manfred has who work in this area
meant that we could easily have produced a second 400-page tome. With the help of some of his close friends and collaborators down the years, we decided to invite those who have been co-authors, co-editors, or contributors to his numerous and diverse projects. In addition, we wanted to obtain a representative historical and geographical spread of friends spanning his career. We also wanted the book to capture more than Manfred’s past; the idea being that it should also be a continuation of his own projects in this field.[1] Here we took the liberty of inviting a few contributors who in some capacity became closely involved with the Institute of SocioEconomics from 2000 onwards.

[1] In the 1980s, he edited two volumes that overlap with this one: Power, Voting, and Voting Power (Physica Verlag, 1982) and Coalitions and Collective Action (Physica Verlag, 1984). More recently, together with Guillermo Owen, he edited Power Indices and Coalition Formation (Kluwer, 2001).

Although it would have been aesthetically pleasing to compile this volume into distinct parts, the overlapping themes made any division of chapters arbitrary. As readers familiar with the literature on power indices will know, discussions of power are intimately related to discussions of voting rules and vice versa. And we cannot easily separate discussions of power from discussions of coalitions; nor can we discuss freedom without any reference to power. Thus, at most we have grouped chapters together that are closely related. Otherwise there is no particular order.

We can now turn to a brief description of the contents. The Festschrift opens with a chapter that is very close to Manfred’s current interests: the nature and measurement of responsibility. The challenge set out in Chapter 1 by one of the editors (m.b.) is a very abstract one: how to untangle ascriptions of social power and causation and explicate the difference in their meaning. As the reader will see from the acknowledgements and references, this is the continuation of work that was jointly initiated with Manfred. The main thrust of the paper is the introduction of a game theoretic framework to formulate a demarcation principle and demonstrate that social power is a special case of social causation. One of the interesting results of this analysis for normative theory is that an agent can make a causal contribution to some state of affairs while being entirely powerless with respect to that state of affairs. This of course raises the question of whether power is an appropriate criterion for attributing responsibility.

In the next chapter (2), written by František Turnovec, Jacek Mercik, and Mariusz Mazurkiewicz, Felsenthal and Machover’s ‘I-power’ (power as ‘influence’) and ‘P-power’ (power as ‘prize’) distinction is subjected to a penetrating analysis. The authors argue that the classification of the Banzhaf index as I-power and the Shapley-Shubik index as P-power, and therefore Felsenthal and Machover’s disqualification of the Shapley-Shubik index for particular kinds of analyses, does not hold. Both measures can be modelled
as values of cooperative games and as probabilities of being ‘decisive’ without reference to game theory at all. The basic point is that ‘pivots’ (Shapley-Shubik index) and ‘swings’ (Banzhaf index) can be taken as special cases of a more general concept of ‘decisiveness’. The authors complete their study by introducing a more general measure of a priori voting power that covers the Shapley-Shubik, Banzhaf, and Holler’s Public Good indices as special cases.

In Chapter 3, Dan Felsenthal and Moshé Machover continue their studies of the expediency and stability of alliances by examining a co-operative non-transferable utility game that is derived from a simple voting game. In this game, a strategy is the formation of an alliance in the simple game and the payoffs are the Penrose voting powers in the resulting composite voting game. Felsenthal and Machover highlight a number of paradoxical outcomes that can occur under super-majority rules and show how a dummy player can be empowered by participating in an expedient alliance.

Chapter 4 by René van den Brink and the other editor (f.s.) is the result of quite a number of years of painstaking investigation. It is a solution to a puzzle that f.s. started working on for both his MA and doctoral theses under Manfred’s supervision. The question is simple, the answer somewhat complicated: how to measure power in a hierarchy given the sequential nature of decision-making in such structures? The solution is an analogue of the Banzhaf index for extensive game forms. Although for some this will not come as a real surprise, deriving such an analogue is not a straightforward affair because the precise meaning of a ‘swing’ in an extensive game form is far from obvious. Understanding power in hierarchical structures requires the integration of a variety of conceptual and formal frameworks.

Cesarino Bertini, Gianfranco Gambarelli, and Izabella Stach take up one of Holler’s favourite themes in Chapter 5: the construction of new power indices. They define and axiomatize what they call the Public Help Index – although it is questionable whether this can really be called a ‘power index’ given that it assigns a positive value to a dummy player. The paper’s significance is that it fills a logical gap in the axiomatics of power measures based on simple games. The authors also provide an algorithm for their new index as well as for Holler’s Public Good Index.

Stefan Napel and Mika Widgrén’s (Chapter 6) analysis of power in the UN Security Council is written in a refreshingly unconventional style. They set up their analysis as a football match between two teams: the local Y-team, their own strategic power index, and the renowned Princeton G-team, the Shapley-Shubik index. The match ends in a draw, as Napel and Widgrén are able to show that the Shapley-Shubik index is a special case of their index – this result is the equalizing goal, scored with the last kick of the game. By the assumptions of the model, the values generated by the two indices are the same. But hinting at their belief in the superiority of their index, the Y-team
calls for extra time. The chapter has all the merits of the perfect Festschrift contribution, being a playful but rigorous analysis that is littered with anecdotes of the intellectual life in Hamburg over the last years and hints at provocative debates among a number of the contributors to this volume. For the record, the presentation of this paper was actually done as a staged performance, with two commentators taking on the roles given in the chapter.

Chapter 7, by Guillermo Owen, Ines Lindner, and Bernie Grofman, is concerned with modified power indices for indirect voting, such as in the US Electoral College. The authors demonstrate the need for, and propose a method of, modifying classical measures such as the Shapley-Shubik and Banzhaf indices to deal with this kind of institutional arrangement. The problem with the classical measures for indirect voting is that they generate counter-intuitive results due to their insensitivity to important features of the voting environment. In the case of the Electoral College, it is the insensitivity to voter correlation in the different US states.

In another very original empirical analysis (Chapter 8), Joe Godfrey and Bernie Grofman apply the Shapley-Owen value, the spatial analogue of the Shapley value, to test the hypothesis that lobbying efforts are directed to potentially pivotal voters. To do this they examine the lobbying activities surrounding the 1993 Clinton Health Care reform proposal. Apart from the unique attempt of applying the Shapley-Owen value to historical data, the value of the study is that the authors are able to pinpoint key players in the process whom lobbyists failed to identify as important.

Vincent Chua and Dan Felsenthal’s chapter (9) continues the thread of contributions on the empirical analysis of power. They put Robert Aumann’s coalition formation hypothesis to the test. In an interview, Aumann had hypothesized that a party charged with forming a governing coalition will choose the coalition that maximizes its Shapley-Shubik index value. Chua and Felsenthal examine three versions of the hypothesis and find that each version is actually outperformed by different conjectures: the Leiserson-Axelrod closed minimal range theory and the Gamson-Riker minimum size principle.[2]

[2] A follow-up to this paper by Chua and Felsenthal can be found in Homo Oeconomicus, vol. 25 (2008).

In their chapter (10), Friedel Bolle and Yves Breitmoser take up the relationships among coalition formation, agenda selection, and power. They develop a model of coalitional bargaining in which a formateur asks the other parties about their aspirations from participating in a government. The authors then show how the value functions can lead to structurally different predictions of the outcome and assessments of power. They tie up their chapter by looking at the relationship between their conception of power and the power index introduced by Napel and Widgrén.

Werner Güth, Hartmut Kliemt, and Stefan Napel’s chapter (11) is a
game theoretic analysis of an enduring and classical problem of political theory: whether or not a democracy can incorporate foes of democracy. They investigate the conditions under which letting the foes of democracy participate (‘have a positive share of political power’) in the democratic system will not pose a serious risk to the system itself and may in fact be beneficial. In particular, it is required that the otherwise democratically competing parties can rein in the foes of democracy by acting as a democratic cartel.

In Chapter 12, Steven Brams and Marc Kilgour present three models in which two players agree to share power in a particular ratio but, as in a duel, either can subsequently ‘fire’ at the other to try to eliminate him. If neither is successful, the agreement remains in place; if one is successful, that player obtains all the power; and if both are eliminated, neither player gets anything. The authors examine three versions of the model: the one-shot version, the repeated version with discounted payoffs, and a version with damage. Brams and Kilgour’s analysis throws up a very pessimistic conclusion: no matter what the values of any of the parameters are, rational players will be impelled to try to eliminate a partner in a power-sharing agreement. Hence, power sharing appears to be inherently unstable, and this is what makes ending conflicts so difficult.[3]

[3] This model has been further analysed by Martin Leroch in a forthcoming article in Homo Oeconomicus.

Chapter 13 by Donald Wittman is a comparison of proposal and veto power in a majority-rule voting system. In his model, the proposer chooses a position that maximizes his utility subject to the constraint that a majority and a veto player prefer the position to the status quo. Although there may exist many proposers and vetoers, one need only concentrate on a few of the players to determine the outcome of the legislative game.

Norman Schofield (Chapter 14) studies the relationship between party positions and electoral response. He develops a model of valence that can be used to predict individual voter choice and interpret the political game. The central result of his analysis is that the ‘centripetal’ tendency of political strategy, inferred from the spatial voting model, is contradicted by empirical evidence (data from Israeli elections). Schofield thus challenges the generality of a cherished theorem of modern political science: the validity of the ‘mean voter theorem’ depends on a limit to electoral variance as well as bounds on the variation of valence between the parties.

Tommi Meskanen and Hannu Nurmi’s chapter (15) brings the topic of voting to a close. They dwell on the question of why there is such a variety of voting rules. They demonstrate that there is in fact a unity in the diversity, because most systems are actually related in that they are all characterized by the same goal state: consensus. The different procedures are just different ways of measuring the distance from consensus.
Chapter 16 is the first of four papers that are explicitly about the relationship between freedom and power. In this chapter, Keith Dowding and Martin van Hees analyse the similarities and differences between the concepts of freedom and power. Their analysis turns on the concept of ‘ability’ that underpins both freedom and power ascriptions. The new contribution of this analysis is that Dowding and van Hees demonstrate that there are cases in which the abilities (powers) of a person fail to coincide with her freedom. Similar to Chapter 1, where it is shown that ‘power’ and ‘cause’ come apart although the two concepts share a common conceptual structure, here it is shown that ‘power’ and ‘freedom’ can come apart despite the common structure. The upshot of this fine-grained analysis is that the precise content of the intuitive relation between these two concepts is anything but clear, and this opens up a whole host of new and difficult conceptual and normative questions.

In Chapter 17, Marlies Ahlert develops the concept of ‘guarantees in game forms’ as a method of making sense of interpersonal comparisons of freedom, liberty, rights, and power. Starting from the intuition that individuals should have some control over their lives such that they can secure for themselves certain levels of well-being, she uses the construct of a fictitious external observer (the ‘policymaker’) to order game forms. This gives us a ‘personal welfare function’ for the individual’s well-being under specific rules of interaction. Ahlert then applies the concept to dictator and ultimatum bargaining games.

Sebastiano Bavetta et al. (Chapter 18) use the Millian idea of ‘autonomy freedom’ as the basis of an empirical analysis of how freedom affects an individual’s attitude towards income inequality. In their study of Italy, they find that there is a negative association between the level of perceived autonomy freedom and the support for redistributive policies. The authors then consider how their results can be used to reconcile views about the classical trade-off between freedom and inequality in liberal democracies.

Chapter 19 by Luciano Andreozzi is another new analysis of a canonical problem in normative theory that brings together the problems of power and freedom: is it legitimate to coerce people to contribute to a public good to which they have not consented? More bluntly: ‘do people have a right to punish free-riders?’ Andreozzi emends H.L.A. Hart’s ‘principle of fairness’ and uses an elementary game theoretic model to derive conditions for ‘fair’ coercion. In particular, he demonstrates that when cooperators cannot punish free-riders the resulting equilibrium is Pareto inefficient, but allowing for mild punishments produces a Pareto superior outcome.

The final three chapters differ somewhat in topic and style from the rest of the book. In Chapter 20, Frederick Guy and Peter Skott take up a Marxian theme and explore the ways in which employee power (and thus the willingness of firms to pay wages) will be affected by changes in information and communication technology (ICT). Developments in ICT allow managers
to monitor the actions of employees more closely and act in increasingly discretionary ways. The result of such technology is to reduce the ability of workers to exert power, which has a knock-on effect on the functional distribution of income.

Chapter 21 by Timo Airaksinen casts a look at the concept of social capital and how it is related to trust, responsibility, and power. His short paper is an intuitive study of how we might talk about these concepts. In particular, he argues that the existence of social capital depends on an absence of trust and that social capital itself implies certain types of coercive power relations.

We close the book (Chapter 22) with an essay by Manfred Holler in which he revisits a classical text on political power: Machiavelli’s The Prince. Holler sets out to explore what this text has to offer modern political theory and shows how Machiavelli actually grappled with problems that have plagued political theory for centuries: the aggregation of preferences, the origin of the state and the law, the status of power and morality in politics, and the dynamics and efficiency of political systems. In a provocative twist, Holler proposes that Machiavelli’s ‘prince’ is none other than Arrow’s dictatorship condition in disguise.

On behalf of all the contributors, it is our pleasure to be able to offer Manfred Holler this Festschrift on a series of topics that have an extensive and long-standing place in economics, politics, and philosophy. For the three concepts of power, freedom, and voting are concerned with the difficulty of coordinating the individual and collective activities of the members of a society, and the problems associated with such coordination will never cease to exist. These are concepts that find their concrete expression in law and the social practices that are used to allocate scarce resources among competing ends. We hope that readers will find the contributions intellectually stimulating and challenging, and – this is most important for Manfred – that they will become part of an ongoing discussion.

As a closing word, we would like to thank the sponsors and colleagues who made all aspects of this project possible. Our sponsors were: the University of Hamburg, the City of Hamburg, Eurohypo AG (Hamburg Branch), and the Alfred Toepfer Foundation (Hamburg). As for direct assistance, special thanks must go to Dr. Harald Schlüter, Division for Research Management and Funding, University of Hamburg, who was always on hand to help us through with the financing not only of this project but of many others in the past that led up to it. Lastly, we are indebted to Martin Leroch, who provided invaluable assistance in organising and running the conference. Thanks must also go to Nicola Maaser for arranging a piano concert by Hong Chun Youn, one of the most promising young pianists in Germany.

m.b.
f.s.
Contents

Preface ... v

1. Social Power and Social Causation: Towards a Formal Synthesis
   Matthew Braham ... 1

2. Power Indices Methodology: Decisiveness, Pivots, and Swings
   František Turnovec, Jacek W. Mercik, and Mariusz Mazurkiewicz ... 23

3. Further Reflections on the Expediency and Stability of Alliances
   Dan S. Felsenthal and Moshé Machover ... 39

4. Positional Power in Hierarchies
   René van den Brink and Frank Steffen ... 57

5. A Public Help Index
   Cesarino Bertini, Gianfranco Gambarelli, and Izabella Stach ... 83

6. Shapley-Shubik versus Strategic Power – Live from the UN Security Council
   Stefan Napel and Mika Widgrén ... 99

7. Modified Power Indices for Indirect Voting
   Guillermo Owen, Ines Lindner, and Bernard Grofman ... 119

8. Pivotal Voting Theory: The 1993 Clinton Health Care Reform Proposal in the U.S. Congress
   Joseph Godfrey and Bernard Grofman ... 139

9. Coalition Formation Theories Revisited: An Empirical Investigation of Aumann’s Hypothesis
   Vincent C.H. Chua and Dan S. Felsenthal ... 159

10. Coalition Formation, Agenda Selection, and Power
    Friedel Bolle and Yves Breitmoser ... 185

11. Democratic Defences and (De-)Stabilisations
    Werner Güth, Hartmut Kliemt, and Stefan Napel ... 209

12. The Instability of Power Sharing
    Steven J. Brams and D. Marc Kilgour ... 227

13. The Power to Propose versus the Power to Oppose
    Donald A. Wittman ... 245

14. Divergence in the Spatial Stochastic Model of Voting
    Norman Schofield ... 259

15. Closeness Counts in Social Choice
    Tommi Meskanen and Hannu Nurmi ... 289

16. Freedom, Coercion, and Ability
    Keith Dowding and Martin van Hees ... 307

17. Guarantees in Game Forms
    Marlies Ahlert ... 325

18. Individual Control in Decision-Making and Attitudes Towards Inequality: The Case of Italy
    Sebastiano Bavetta, Antonio Cognata, Dario Maimone Ansaldo Patti, and Pietro Navarra ... 343

19. The Principle of Fairness: A Game Theoretic Model
    Luciano Andreozzi ... 365

20. Power, Productivity, and Profits
    Frederick Guy and Peter Skott ... 385

21. Trust, Responsibility, Power, and Social Capital
    Timo Airaksinen ... 405

22. Exploiting The Prince
    Manfred J. Holler ... 421
1. Social Power and Social Causation: Towards a Formal Synthesis

Matthew Braham
Faculty of Philosophy, University of Groningen, The Netherlands
Power and Cause are the same thing. Correspondent to cause and effect, are POWER and ACT; nay, those and these are the same things. — Hobbes, English Works, 1, X
1. The Challenge

An impressive list of economists, political scientists, and philosophers starting with Thomas Hobbes and including Herbert Simon (1957), James March (1955), Robert Dahl (1957, 1968), Felix Oppenheim (1961, 1976, 1981), William Riker (1964), Virginia Held (1972), and Jack Nagel (1975) have claimed that there are key and compelling similarities between what is ordinarily considered to be an ascription of social power and that which is considered under the more general rubric of causality. Hence to say:

• ‘i has (had) power to x’, is to assert that i can (did) cause an outcome x.
• ‘i has (had) power over j’, is to assert that i can (did) cause j to act in a specific way (in a manner that he would not otherwise do).

Despite this common recognition, which centres on the reference to the same necessity and sufficiency conditions, there is quite a gamut of views and indeed barely any consensus about the precise nature of the link. In broad terms (and picking only a few of the authors), Riker (1964: 347), for instance, says that the difference between power and causality is one of ‘potential’ and ‘actual’: ‘Power is potential cause. Or, power is the ability to exercise influence while cause is the actual exercise of it’. That is, if i caused some outcome x, then i had power with respect to x; and if i has power with respect to x, then i can cause x. For March and Dahl the relation is weaker. They assert that ‘power relations are logically a subset of causal relations’ (Dahl 1968: 410). While Oppenheim (1981: 33) judges the relation to be even weaker still: power
and causal ascriptions come apart completely and are only ‘overlapping categories’.

Knowledge of how causal and power ascriptions are related is clearly of fundamental methodological and normative importance. For example, if causal and power ascriptions are two sides of the same coin, as Riker suggests, what descriptive or normative work does the concept of power actually do? Why not follow Riker’s (1964: 348) provocative proposal and delete the concept of power from the vocabulary of social science and political philosophy?[1] If, on the other hand, every power ascription implies a causal ascription but not vice versa, as Dahl and March suggest, what exactly demarcates power from causality and makes the set of power ascriptions a proper subset of the set of causal ascriptions? What, if at all, is the normative significance of this distinction for, say, theories of responsibility?[2] For moral judgements are very much dependent upon our understanding of causality and power.

Unfortunately, given that none of the above mentioned writers established their claims with any rigour, it is not really possible to evaluate the coherence and veracity of their claims. Riker’s belief, for instance, that a causal ascription entails power is entirely incidental; it is based on the observation of certain ‘parallelisms’ between different concepts of cause and different concepts of power. Or to put it another way, he did not prove that it is not the case that if i is attributed as having been causal for x then i had power with respect to x. (I will demonstrate that Riker’s claim is false.) In a similar fashion, neither March nor Dahl formally demonstrated that the class of power ascriptions is a proper subset of causal ascriptions. March, for instance, only observes that (emphasis added):

it appears to be true that any statement of influence[3] can be just as easily formulated in terms of causality. Since in common parlance (and the present framework) it is not true that all statements of causality can be treated as statements of influence (i.e. influence and causality are not equivalent), the set of all influence relations can be understood to be a proper subset of all causal relations (March 1955: 437).

[1] Riker’s proposal echoes Bertrand Russell’s (1913) famed argument that philosophers should abandon the concept of causation as being ‘a relic of a bygone age’ and replace it with the concept of ‘functional relationship’.

[2] See, for instance, Braham and Holler (2008). This current essay is in fact a further development of that paper. See also the well-known statement by Connolly (1983: 84–137).

[3] Here I am following Oppenheim (1960, 1976) and Barry (1976) in taking ‘power’ to be the generic concept, with concepts of ‘influence’, ‘coercion’, and ‘control’ as sub-categories of ‘power’. This corresponds to the ideas in March’s article. In contrast, Dahl (1963) assumes ‘influence’ to be the more general concept, while Riker (1964) considers the concepts to be synonyms.

Yet there is nothing in this argument or the rest of March’s article that guarantees that the set of power relations is a subset and not an intersection
of the set of causal relations, as is claimed by Oppenheim. Furthermore, March (and for that matter Dahl as well) ignores entirely the condition that distinguishes a power relation from a causal relation.

Oppenheim’s claim that there is only an overlap between causal and power ascriptions is by far the most comprehensive and systematic. It is based on two observations. (i) Power ascriptions come in two forms: having power and exercising power. Only exercising power is causal. That is, an agent i may have power over j with respect to some outcome x (in the sense of being able to coerce j to do x or punish j for not doing x) but j brings about x for reasons entirely unrelated to i’s power. (ii) Causal effect does not imply power: i may have performed an action which causally affects j, yet i has no power over j with respect to j performing any action.

Although Oppenheim’s set intersection argument is correct in general, his analysis is incomplete. Firstly, it should be noticed that Oppenheim’s set of causal relations is that of ‘actual causation’ and thus the appropriate comparison with power should be restricted to the set of ‘actual’ or ‘exercising of power’ relations. In this case Oppenheim admits that the exercising of power does indeed refer to causation. Agreeing with Simon, he says that ‘cause’ is the ‘defining expression’ of power, so that logically speaking the exercising of power is a proper subset of causal relations (and therefore his position is not so far from that of Dahl and March). However, as with the others, Oppenheim, too, fails to spell out a criterion for demarcating a power from a causal ascription (although this should not be taken to mean that it is not present). In his discussion of what he considers the three basic expressions of ‘exercising power’ – ‘influencing’, ‘preventing’, and ‘punishing’ – he concentrates only on the correspondence between causal language and the definitions of these expressions, and not on the differences between power and cause. That is, Oppenheim restricts his attention to the observation that in one way or another ‘influencing’, ‘preventing’, and ‘punishing’ are all defined in terms of the building blocks of causal ascriptions: the necessity and/or sufficiency of one set of events (i’s actions) in bringing about another event (j’s actions, or a state of affairs or outcome x).

In contrast to all these positions, another prominent theorist, Alvin Goldman, claimed that causal ascriptions and power ascriptions are entirely disjoint categories:

Causal terminology is especially unhelpful in dealing with the kinds of cases of paramount interest to a theory of power, i.e., cases in which an outcome is a function of the actions of numerous agents. In thinking about distributions of power and degrees of power among many persons and many groups, the use of causal terminology is likely to obscure the crucial questions rather than illuminate them. It seems advisable, therefore, to avoid all reliance on the concept of causation from the outset (Goldman 1972: 227).
The challenge of this paper should now be evident. It is simply to untangle ascriptions of social power and causation and explicate the difference in their meaning. I will propose a terse game-theoretic synthesis whereby we can visibly demarcate the two concepts and show their relation – which happens to be one of set-inclusion.

Before the discussion begins, three caveats are in order. First, although I use the term ‘social causation’ I am aware that it is conceptually more precise to use the term ‘contributory effect’ or speak of ‘conventional causality’ or ‘conventional generation’, because the term ‘causality’ generally refers to the connection between two events which are related by empirically ascertainable ‘laws of nature’. Social outcomes are governed by law and convention and not merely by laws of nature as such. For example, that a house burns down following the outbreak of a fire of a certain size follows from laws of nature; that a particular policy is implemented following the agreement of a certain number of people follows from legal rules and conventions.[4] However, to use this alternative terminology would unnecessarily burden the discussion without adding anything.

Second, as the two expressions that open this paper suggest, power ascriptions come in two forms: ‘power to’ and ‘power over’. In line with Morriss’ (1987/2002) acute – and much overlooked – analysis, I take the first expression to be the generic form. I will explain the reasons for this later.

Third, as my task is primarily analytical I will not undertake a comprehensive survey of the different opinions about the relationship between social power and social causation. I take it that my terse review in this introduction is sufficient to indicate that there is a problem to be dealt with.
2. Causal Ascriptions

Let us begin with the concept of causality.[5] A ‘cause’ is a relation between events, processes, or entities in the same time series, in which one event, process, or entity C has the efficacy to produce, or be part of the production of, another, the effect x, i.e. C is a condition for x. Arguments among philosophers about what actually constitutes this efficacy are often concerned with whether the efficacy is a necessity or a sufficiency requirement. There are three senses of each. Starting with necessity, in descending order of stringency:

Strict necessity: C is necessary for the occurrence of x whenever x occurs.

Strong necessity: C was necessary for x on the particular occasion.

Weak necessity: C was a necessary element of some set of existing conditions that was sufficient for the occurrence of x.

For sufficiency:

Strict sufficiency: C is sufficient by itself for the occurrence of x.

Strong sufficiency: C was a necessary element of some set of existing conditions that was sufficient for the occurrence of x.

Weak sufficiency: C was an element of some set of existing conditions that was sufficient for the occurrence of x.

[4] The distinction is discussed in more detail in Kramer (2003: 280).

[5] This section is summarized from Hart and Honoré (1959), Honoré (1995), Kramer (2003: 277ff), Mackie (1965, 1974), Pearl (2000: 313–315) and Wright (1985, 1988).

There are three remarks to be made here. Firstly, it is obvious that we can instantly dispense with the concept of weak sufficiency, for the simple reason that it is trivially satisfied by any condition C: simply add C to an already existing set of conditions that is sufficient for x, although C is totally irrelevant as regards x. That is, weak sufficiency can fail, in a very unacceptable way, to satisfy the simple counterfactual test of efficacy known as the ‘but-for’ test. According to this test, C was a cause of some result x only if, but for the occurrence of C, x would not have occurred. Suppose, for example, a house burns down and evidence is found that there was a short-circuit, there was flammable material nearby, there was no automated and efficient water sprinkler, and all three conditions together are sufficient for the house burning down. Now add the fact that there is also evidence that a Bob Dylan CD was in an unplugged CD player in the house. In the absence of some empirically ascertainable laws of nature that govern the spontaneous combustion of unplugged CD players with a Bob Dylan CD loaded in the deck, it is entirely unreasonable that the CD player with the Bob Dylan CD should qualify as having causal status for the fire.

Secondly, there is no question that if C is strictly necessary for x, then C should be ascribed causal status for x; and if C is strongly necessary for x then C should also be ascribed causal status for x.[6] In both cases, C obviously satisfies the ‘but-for’ test. Yet strict necessity is inadequate to cover causal relations in general because it will often not be fulfilled. It would mean that evidence of an electrical short-circuit that combined with other conditions and resulted in a house burning down would not qualify for causal status, given that it is not true that for all instances of a house burning down there must have been an electrical short-circuit.

[6] Note here that necessity can be considered as too ‘weak’ for picking out a cause because it means that the presence of air is a cause of car accidents, rapes, etc. Obviously there is a ‘boundary problem’ to deal with, and the problem does not disappear even if we restrict ourselves to those events which are actions, because while we can assume away nomic conditions (‘laws of nature’) we cannot simply assume away past actions as conditions for the present. Here I go along with Mackie’s (1974: 63) concept of the ‘causal field’, which is the background against which causing goes on, i.e. consequents and antecedents do not float about on their own but happen in some setting.

Strict sufficiency is
also inadequate, because it too may fail to exist: it is rare that a single condition C is sufficient for x. Strict sufficiency also rules out ascribing the electrical short-circuit causal status for the house burning down, given that it alone cannot lead to the fire; the presence of flammable material, etc. is also called for. Moreover, the fact that C may be strictly sufficient for x and was present on the occasion of x’s occurrence does not imply that C actually caused x; x may have been brought about by another strictly sufficient condition that was also present on the occasion.

Thirdly, although strong necessity is ostensibly an attractive candidate for ascribing causality, it too can easily crumble in cases of causal over-determination. These are those circumstances in which the causes of an event are cumulatively more than sufficient to generate the effect. That is, there exist situations in which a strongly necessary condition will fail to exist. Causal over-determination comes in two broad types, which Wright (1985: 1775) has usefully labelled duplicative and pre-emptive causation. A case of duplicative causation is one in which two similar and independent causal processes C1 and C2, each of which is sufficient for the same effect x, may culminate in x at the same time. It will be best to turn to some examples.

Assassin 1. Assassin1 and Assassin2 independently and simultaneously aim and shoot at Victim such that each shot is sufficient to kill Victim at exactly the same moment.

Clearly, neither assassin was strongly necessary for Victim’s death because, ceteris paribus, if either had not fired his shot Victim would have died anyway due to the shot fired by the other assassin. Although one could argue that this example is intuitively resolved by saying that a cause is either strictly sufficient, strictly necessary, or strongly necessary, this will not work for the following case.

Assassin 2. Victim is held up in his car by Assassin1, Assassin2, and Assassin3, who together tie him up and lock him in his car and then push the car over a cliff, killing Victim. It is sufficient for Victim’s death that only two assassins hold up Victim and push the car over the cliff.

To simplify matters, we need only focus on the pushing of Victim’s car over the cliff. Obviously, the pushing of Victim’s car by any assassin alone is neither strictly necessary nor strictly sufficient nor strongly necessary for Victim’s death. The absence of strict sufficiency and necessity is obvious: no assassin pushing alone can kill Victim, and no particular assassin must push so that Victim’s car will go over the cliff. To determine strong necessity we again apply the simple counterfactual but-for test, holding the pushing by the other assassins constant and asking whether the car would have gone over the cliff killing Victim had the third assassin not been pushing. The answer
is obviously affirmative, because it takes the pushing of at least two assassins for this to happen, and since under the circumstances this is true, the car would have gone over the cliff and killed Victim. Thus, the pushing by the third assassin cannot be attributed as having causal status for Victim’s death. The same holds true for the pushing by the other two assassins.

We can now turn to pre-emptive causation. This is the case in which an effect x is brought about by some actual sufficient condition(s) C1 such that had C1 not occurred then x would have been brought about by some alternative sufficient condition(s) C2. There are two sub-cases of pre-emptive causation:

(i) The alternative set of conditions C2 emerged after C1 brought about x.
(ii) The alternative set of conditions C2 did not emerge because C1 emerged and brought about x.

The canonical example of type (i) pre-emptive causation is:

Assassin 3. Assassin1 shoots and kills Victim, who is embarking on a trek across the desert, just as Victim was about to take a swig from his water bottle, which was poisoned by Assassin2.

Here strong necessity comes to grief given the failure of the but-for test to allocate causal status to either assassin: Assassin1 cannot be said to have caused Victim’s death on the occasion in question because had he not shot and killed Victim, Assassin2’s poisoning would have, given that Victim was just about to take a swig from his water bottle; and Assassin2 cannot be said to have caused Victim’s death because it was in fact Assassin1’s shot that did it and not his poisoning of Victim’s water.

A type (ii) case is:

Assassin 4. Assassin1 shoots and kills Victim. Assassin2 decides not to poison Victim’s water bottle (but would have had Assassin1 not shot Victim).

In this example strong necessity comes to grief because Assassin1 cannot be considered as having caused Victim’s death on the occasion in question because had he not shot and killed Victim, then Assassin2 would have poisoned Victim’s water; yet Assassin2 cannot be attributed as causal for Victim’s death because he did not in fact poison the water; he would only have done so had Assassin1 not shot Victim.

I come now to the remaining and equivalent criteria of weak necessity and strong sufficiency. These criteria were introduced by Hart and Honoré (1959), Mackie (1965, 1974), and Wright (1985, 1988) in various different forms to deal with causal ascriptions when an outcome is a result of a complex of conditions.
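The mechanics of the but-for test, and its failure under duplicative causation, can be made concrete with a short computational sketch. The following Python fragment is purely illustrative and not part of the original text; the function name and agent labels are my own, and the outcome function simply encodes the stipulation of Assassin 2 that any two pushes suffice:

```python
def outcome(pushers):
    """Stipulation of Assassin 2: the car goes over the cliff
    iff at least two assassins push."""
    return len(pushers) >= 2

actual = {"Assassin1", "Assassin2", "Assassin3"}  # all three in fact push

for agent in sorted(actual):
    # But-for test: hold the others' actions constant and remove
    # only this agent's push.
    still_occurs = outcome(actual - {agent})
    print(agent, "passes the but-for test:", not still_occurs)

# Every line prints False: with any single pusher removed, the two
# remaining pushes are still sufficient, so no individual push was
# 'strongly necessary'; this is the over-determination problem just
# described.
```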
The idea behind the weak necessity/strong sufficiency requirement is that, to be ascribed causal status for an event x, a prior event C need not be the cause in the sense of it being the single sufficient condition present on the occasion, or a cause in the senses of strict or strong necessity, but merely shown to be a ‘causally relevant condition’ of x. That is, weak necessity/strong sufficiency drops what can actually be considered a deeply rooted and too narrow presupposition that, to be ascribed causal status for x, C must be directly related to x in that any state of affairs is brought about by some singularly identifiable prior event. This is the idea of ‘cause as authorship’. In its place comes a conception of causality that allows for an indirect relation between a prior event C and its consequence x; but – and this is important – the indirect relation still maintains a necessity requirement. This is obtained by combining a sufficiency and a necessity requirement in a set-based framework. That is, weak necessity/strong sufficiency means it need not be the case that a single condition is sufficient for x but that a set of conditions be sufficient on this occasion (strict sufficiency is then the special case of a singleton set), combined with a necessity requirement in that all members of this set are necessary for the set to be sufficient.

Mackie (1965) denoted a test of weak necessity/strong sufficiency by the acronym INUS, which he defined as ‘an insufficient but necessary part of a condition which is itself unnecessary but sufficient for the result’. In Wright’s (1985, 1988) simpler mnemonic, it is ‘a necessary element of a sufficient set’, or NESS – a mnemonic which we will use for the rest of this paper.[7]

[7] There is a formal difference between Mackie’s (1965) initial formulation of INUS and Wright’s formulation of NESS, in that in the original formulation of INUS Mackie includes a condition that rules out over-determination (condition 4). Mackie required that, to qualify as a cause, a condition must be a necessary member of the single sufficient set that was present on the occasion, or, if more such sets were present, the condition had to be a necessary member of each such set. In his later formulation of INUS, Mackie (1974) dropped this condition.

We can formally define the NESS criterion as follows. First define a set of events or states of affairs S as minimally sufficient for x if no proper subset of S is itself sufficient for x. This implies that S is the conjunction of conditions each of which is necessary (non-redundant) for S to be sufficient (i.e. each of the conjuncts in S satisfies the but-for test for the sufficiency of S). If S is not minimally sufficient, it contains events that are not an integral part of any set of actual conditions that is minimally sufficient for x, and therefore these events could not properly be classified as a cause of x, because by being redundant they are devoid of any causal efficacy for x coming about. Next, define the set M = {S1, S2, …, Sn} as the collection of every minimally sufficient set of conditions (MSC) for x. An event C is, then, a NESS for x if, and only if, C is an element (conjunct) of some Si. For instance, if x can be written as x = (AB or CD), with (AB or CD) being the necessary and sufficient
condition for x (either for all instances of x or on this particular occasion), then C is a NESS for x by dint of it being a member of a disjunct, CD, which is minimally sufficient for x, and therefore C is to be ascribed causal status for x if under the circumstances D were present, i.e.:

AB ∨ CD → x
The attractive feature of weak necessity/strong sufficiency is that it is general enough to cover strict and strong necessity and strict sufficiency: (a) if C is a member of every Si for any instance of x, then C is strictly necessary for x; (b) if C was a member of every Si for a given instance of x, then C is strongly necessary for x; and (c) if C is sufficient for one Si, then C is strictly sufficient for x.

To see how NESS works in cases of duplicative causation, the idea is to resolve the excess sufficient set into its component MSCs. Consider again Assassin 1. Each assassin’s act of shooting is a necessary element of some Si and Sj, and therefore the behaviour of each marksman can be attributed causal status for Victim’s death, because the behaviour of each marksman was necessary to complete a set of conditions (the marksman possessing a loaded gun in good working order, the absence of protection to the victim, etc.) sufficient for Victim to die, and each of these conditions was present on the occasion in question. In Assassin 2, given that the pushing of the car by each of the three assassins belongs to at least one Si consisting of pushing by two assassins, and each of these MSCs was present on the occasion (the excess sufficient set being a superset of each of these MSCs), the pushing by each assassin qualifies for causal status for Victim’s death.[8]
[8] The NESS test is not entirely immune from other problems that but-for causality also suffers from, such as the problem of collateral effects and the problem of causal priority. Kramer (2003: 285–295) provides a succinct overview of, and a way around, the problems. Further, there are additional objections to NESS on the grounds that the inclusion of a sufficiency condition apparently implies that there is no causation without determinism. As many writers note, causation can take place in a probabilistic environment. For a discussion see Suppes (1970); and for the view that a cause is an event C that increased the probability that E occurred, see Vallentyne (2008). I grant that NESS is not the complete story of causation, but it would seem to apply to most social outcomes that we are discussing, and therefore we need not be too concerned about this issue here. It is worthwhile remarking that Vallentyne’s (2008) proposal that a cause of an outcome is a prior event that increases the probability of an outcome presupposes the NESS test. We cannot go into detail here, but if an event increases the probability of an outcome then that event must be a NESS condition.
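The NESS criterion itself can be given the same computational treatment. The sketch below is again only an illustration under the stated assumption that any two of the three pushes are sufficient; the function names are my own. It enumerates the minimally sufficient sets among the conditions present on the occasion and then asks whether a given condition belongs to one of them:

```python
from itertools import combinations

def sufficient(conditions):
    """Stand-in sufficiency relation for Assassin 2: any two of the
    three pushes are jointly sufficient for the car to go over."""
    return len(conditions) >= 2

def minimally_sufficient_sets(present):
    """All subsets of the actually present conditions that are
    sufficient and contain no sufficient proper subset (the MSCs)."""
    mscs = []
    for r in range(1, len(present) + 1):
        for subset in combinations(sorted(present), r):
            s = set(subset)
            propers = (set(t) for k in range(1, len(s))
                       for t in combinations(s, k))
            if sufficient(s) and not any(sufficient(p) for p in propers):
                mscs.append(s)
    return mscs

def ness(condition, present):
    """C is a NESS for x iff C is an element of some minimally
    sufficient set of conditions present on the occasion."""
    return any(condition in s for s in minimally_sufficient_sets(present))

present = {"Assassin1", "Assassin2", "Assassin3"}
print(len(minimally_sufficient_sets(present)))      # 3 two-member MSCs
print([ness(a, present) for a in sorted(present)])  # [True, True, True]
```

Each push is thus a necessary element of some sufficient set that was present on the occasion, and so causally relevant, even though none of the pushes passes the but-for test run earlier.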
It is similarly easy to disentangle the cases of pre-emptive causation. Here the idea is to check whether a particular act is an element of an MSC that forms a pre-empting cause. In Assassin 3, Assassin1’s act of shooting is a necessary element of some Si and therefore a cause, although Assassin2’s poisoning of the water is not, because although poisoned water would be sufficient for killing Victim, in the circumstances it cannot be an actual sufficient condition because Victim has already been killed by Assassin1’s shot. In Assassin 4, Assassin1’s act of shooting is a cause of Victim’s death, but Assassin2’s intention to poison Victim’s water is not. What the NESS condition does in cases of pre-emptive causation is to evaluate the claim that C1 could not be a causal factor of x because of the existence of an alternative cause C2 that either emerged or could have emerged subsequent to the occurrence of C1. This fact does not vitiate the causal status of C1.[9]

[9] In fact the NESS test is a framework for evaluating excuses that are based on ‘alternative cause’.

Before tying up this section, there is still a class of problems to deal with that have thus far not been highlighted but are important because they represent so many instances in our social and institutional lives. Very generally, it is the case in which one event C1 is sufficient for x but there is a second simultaneous event C2 that is neither sufficient for x nor necessary for a sufficient set. Yet C2 still ‘contributes’ in some manner. Call this redundant causation.

Assassin 5. Victim is held up in his car by Assassin1, who ties and locks him up in his car. Together with Assassin2 the car is pushed over the cliff, killing Victim. Assassin1 is sufficient for holding up Victim and pushing the car over the cliff. Assassin2 cannot prevent Assassin1 killing Victim.

To make this example more concrete, suppose that it requires at least 2 units of force to push Victim’s car over the cliff. Suppose further that Assassin1 has the physical strength to push with 0, 1, 2 or 3 units of force, while Assassin2 can only push with 0 or 1 unit of force. Finally, suppose Assassin1 and Assassin2 push the car over the cliff with 3 units and 1 unit of force respectively. The question here is: given that Assassin1 is sufficient for the death of Victim, to what extent did Assassin2 contribute to Victim’s death? Is the naïve answer that Assassin2 obviously did not contribute to Victim’s death true? If causal contribution is a necessary (but not sufficient) criterion for moral or legal responsibility, this would mean we could not hold Assassin2 to account for his participation in the events that led to Victim’s death.

Here, too, there is a way out using the NESS test. What we are after is to find an MSC such that the pushing by both assassins makes up this set. To accomplish this, we have to replace the but-for
test with an alternative counterfactual analysis. In the standard but-for test we hold the actions of all others constant, vary the action of agent $i$, and check whether the remaining actions are still sufficient for the outcome. For the case of redundant causation, however, we have to hold the action of $i$ constant and vary the action of some other agent $j$, and ask whether by doing so $i$ would become a necessary member of a sufficient set. That is, we ask whether there is a feasible combination of actions such that $i$ satisfies the NESS test. If the answer is affirmative, then we can say that $i$'s action is a causally relevant factor for the given outcome. In this way we determine not only the freedom an agent has to perform a specified action, but also the freedom she has to remain passive with respect to that action. 10 In Assassin 5 we hold constant the force that Assassin2 applied to the car (1 unit) and ask whether Assassin1 can apply an amount of force such that, when combined with Assassin2's force, the combination is just sufficient for Victim's car to roll over the precipice. In other words, if Assassin1 can regulate his strength by applying 1 unit of force, which he can, then Assassin2 satisfies the NESS test, and not otherwise. We now have the answer we were looking for. It is worth remarking that this reasoning drives a further wedge between a particular type of counterfactual analysis (the but-for test) and a causal ascription. In cases of causal over-determination it is just too restrictive to consider only the i-variants in actions; we must also consider the j-variants, that is, the potential causal contributions.

9 In fact the NESS test is a framework for evaluating excuses that are based on an 'alternative cause'.
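The j-variant reasoning can be made mechanical. The following is a minimal Python sketch, not part of the original argument: the force units are those assumed in the Assassin 5 example, and the function names (`but_for`, `ness`) are my own. It contrasts the standard but-for test (hold the others fixed, vary $i$) with the j-variant NESS check (hold $i$ fixed, vary the others).

```python
from itertools import product

# Hypothetical force units from the Assassin 5 example.
FORCES = {"Assassin1": (0, 1, 2, 3), "Assassin2": (0, 1)}
THRESHOLD = 2   # units of force needed to push the car over the cliff

def sufficient(profile):
    """A combination of pushes is sufficient iff it meets the threshold."""
    return sum(profile.values()) >= THRESHOLD

def but_for(agent, actual):
    """Standard test: holding the others' actual pushes fixed, is the
    agent's push necessary for the outcome?"""
    return sufficient(actual) and not sufficient({**actual, agent: 0})

def ness(agent, actual):
    """j-variant test: holding the agent's actual push fixed, is there some
    feasible variation of the others' pushes under which the agent's push
    becomes a necessary element of a sufficient set?"""
    if actual[agent] == 0:
        return False
    others = [a for a in FORCES if a != agent]
    for combo in product(*(FORCES[a] for a in others)):
        profile = {**dict(zip(others, combo)), agent: actual[agent]}
        if sufficient(profile) and not sufficient({**profile, agent: 0}):
            return True
    return False

actual = {"Assassin1": 3, "Assassin2": 1}
for a in FORCES:
    print(a, "but-for:", but_for(a, actual), "NESS:", ness(a, actual))
# Assassin1 but-for: True   NESS: True
# Assassin2 but-for: False  NESS: True  (Assassin1 could have pushed with 1 unit)
```

Assassin1 passes both tests; Assassin2 fails the but-for test but passes the NESS check, exactly as the argument above requires.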
3. Power Ascriptions

I can now turn to ascriptions of social power. Two things need to be explained. The first is the generic notion of power that will be used as the basis of a comparison with social causation. The second is the choice of the criterion that demarcates social power from social causation. I start, then, with the generic concept of power. Call this power simpliciter. In very general terms, when we ascribe 'power' to a person we are at base referring to an ability of some sort that the person has: what a person can do under specified circumstances. 11 It is, as Morriss (1987/2002: 29ff) acutely observed, the capacity to 'effect something' (an 'outcome' or 'state of the world' $x$). That is, power is about 'accomplishing something'. 12 It is in this very basic Hobbesian conception of 'a power' that we see why we need only focus on the locution 'power to' rather than the very common 'power over' when analyzing the relation between power and causal ascriptions.
10 I thank Martin van Hees for this insight. Note that there is an overlap between causality and action theory here and that, while the NESS test works in any environment, this insight applies only to (human) action contexts (I thank Hannu Nurmi for this comment).
11 See in particular Goldman (1972) and Morriss (1987/2002).
12 This is exactly Hobbes's (1651, Chap. 10) notion in Leviathan.
For 'power to' and 'power over' are not two separate concepts; rather, the latter is derivative of the former, because we can always cash out 'power over j' in terms of the variable $x$ in the locution of 'power to'. 'If I want to say that I have the power to get my students to read Leviathan, I can say just that; I don't have to say that I have power over my students with respect to their reading Leviathan …' (Morriss 1987/2002: 35). Moreover, not every locution of 'power to' can be cashed out in terms of 'power over'. The fact that the President of the United States can veto a bill passed by Congress with anything less than a two-thirds majority does not imply that the President can get anyone to act in a counter-preferential way. And my power to throw a rock of a given size and weight fifty metres does not imply that I can get anyone to act in a counter-preferential way. In addition, 'power over' entails 'power to' in another respect. If i has power over j with respect to some issue $x$, then it entails that i has the power to do something such that j will perform an action such that either $x$ occurs or it does not, depending on what i wants. 13

I can now turn to the second issue. Our starting point here is the observation that the generic definition of power as effecting something is imprecise. To say that 'i has power to $x$' means 'i can effect $x$' helps only inasmuch as it allows us to set off power from the related, but distinct, concept of 'influence', which Morriss characterized by the verb 'affecting': the 'altering of' or 'impinging on' something in some way. But defining 'power' as merely 'effecting' does not help us to distinguish power from causation, because a causal ascription can also be said to be about effecting something. A definition of power is in need of more structure. First and foremost, such a definition must establish a correspondence between actions (or sequences of actions) and outcomes. And given that – as with causes – an agent's action may effect an outcome only in the presence of certain other antecedent conditions, the generic definition of power must postulate that i has an action (or sequence of actions) that is at least weakly necessary (or strongly sufficient) for the occurrence of $x$ (depending on the stated or implied conditions). Assuming anything stronger than weak necessity – in other words the NESS test – would result in a definition of power that comes to grief on the problems of overdetermination. Without going through the motions all over again, recall the example of Assassin 2. In such a case any criterion stronger than weak necessity would result in the conclusion that none of the agents had the ability to effect Victim's death, although clearly Victim could be, or was, killed. 14

13 The more basic nature of a 'power to' ascription is discussed in more detail in Braham and Holler (2005).
14 Note that Goldman's (1972) rejection of causal terminology is in all likelihood connected to the fact that he took such terminology to refer to stronger necessity and sufficiency conditions and was therefore faced with problems of overdetermination. I have pointed out elsewhere (Braham and Holler 2008) that Goldman's definition of social power is one of weak necessity/strong sufficiency, i.e. the NESS test in disguise.
Obviously 'power' as the mere ability to effect outcomes appears to be none other than 'potential' or 'actual' causal contribution, as Riker suggested. If this is the case, the only difference between the concepts is linguistic. There is a sense in which this is true; but it is not the complete picture. The problem is that satisfaction of the NESS test does not fully describe the way in which we use the term power. Power is not about a mere potential or actual conditional contributive effect, but about an actual or potential contributive effect of a special kind. Return once more to Assassin 5. Here ascriptions of causal contribution and power come apart, because intuition says that Assassin1 had the power to kill Victim but Assassin2 did not. Yet the NESS test can be used to ascribe to Assassin2 a causal contribution to Victim's death. So wherein lies the difference between 'power' and 'causal contribution'? From the story of Assassin 5, the difference is certainly not one of 'potential' and 'actual', as Riker, and to an extent Oppenheim as well, posited. If this idea were correct, then we would be committed to saying that from Assassin2's actual causal contribution it follows that he had some power with respect to Victim's death. But this cannot be correct, because the objective description of the circumstances is such that Assassin2 is not in possession of an action that is sufficient to kill Victim himself or that is unquestionably necessary for the death of Victim; nor does he possess an action that is sufficient or necessary to prevent Victim's death at the hands of Assassin1. For all intents and purposes, Assassin2 appears to be powerless with respect to Victim living or dying. One response is to restrict the ascription 'i has power to $x$' to those cases in which i is effective for $x$ and not-$x$ (given the stated or implied conditions). This is exactly Goldman's (1972) approach, and one that is quite widely quoted in the philosophical literature. 15 In summary form, and ignoring the trappings and trimmings of his theory of action, such as the time indices, Goldman says that a power ascription takes the following form: 'i has power to $x$ if, and only if, the following two conditions hold: (i) if i wants $x$, then i would perform an appropriate act (or sequence of acts) and, given the stated or implied conditions, $x$ would occur; and (ii) if i wants not-$x$, then i would perform an appropriate act (or sequence of acts) and, given the stated or implied conditions, not-$x$ would occur.' But this will not work. It is still too weak. Consider the classic Master and Slave relation, in which Slave can only obtain $x$ at the sole behest of Master. That is, Master can physically prevent Slave from obtaining $x$, although in the absence of Master's intervention Slave has an action (or sequence of actions) for obtaining $x$. Now suppose Master is a benevolent soul and with respect to $x$

15 See, for instance, French (1984: Chap. 5), Lukes (1986), and Kernohan (1989) for acceptance of Goldman's definitions.
always wants whatever Slave wants. Hence, if Slave wants $x$ (not-$x$), then both Slave and Master perform the appropriate action (or sequence of actions) such that Slave obtains $x$ (not-$x$). This means that under the stated or implied conditions of Master's preference-tracking behaviour, Goldman's definition of power would ascribe to Slave the 'power to $x$', because whether or not $x$ obtains now depends on the actions that Slave chooses. This cannot be correct, because as far as $x$ is concerned, Slave is utterly dependent on Master's good will. Note that, as in Assassin 5, both actors have actions that form a minimally sufficient set of conditions for $x$: Master's performance of an action such that Slave is unhindered in his attempt to obtain $x$, and Slave's action to obtain $x$. Thus neither 'wants' as such (preferences or desires) nor conditional control over $x$ or not-$x$ is a necessary condition for a power ascription. The distinguishing feature of a power ascription lies elsewhere.

The feature of a power ascription that we are hunting for is well known, but not uncontested. Generally speaking, when we make a power ascription in a social context we are assuming a situation of actual or hypothetical conflict, which means that the demarcation criterion is that of overcoming resistance or opposition. It is worthy of note here that Goldman acknowledges that he left out reference to such a criterion in his definitional framework on the grounds that such a criterion is required for making comparisons of power but is not required for a power ascription as such. This may be true in the case of defining personal powers as abilities to do specific things (throw a rock of a specified size and weight a given distance, or play the piano to a certain level), but it is not true of social power. But in a later paper (1974) he claimed that his definitions are designed to capture the conception of power as 'the ability to get what one wants, or the ability to realize one's will. More specifically, it is the ability to realize one's will despite the (possible) resistance or opposition of others' (emphasis in the original). The Master–Slave example should make it quite clear that satisfying our wants or realizing our will is possible even at the sole behest of a dictator – it only requires that the dictator tracks our preferences – unless of course we add more structure to the meaning of 'wants' or 'realizing our will'. 16 Thus a definition of power must explicitly include resistance or opposition as a variable. Echoing Max Weber's definition of power, 17 we have:

Power. 'i has power to $x$ (not-$x$)' if, and only if, i has an action (or sequence of actions) that is at least weakly necessary/strongly sufficient, under the stated or implied conditions, for the occurrence of $x$ (not-$x$) despite the actual or possible resistance of some others to the occurrence of $x$ (not-$x$).

16 Including 'overcoming potential resistance' in the meaning of 'getting what I want' or 'realizing my will' is not an innocent move, because it rules out the existence of adaptive preferences.
17 'In general, we understand by "power" the chance of a man or of a number of men to realize their own will in a communal action even against the resistance of others who are participating in the action' (in Gerth and Mills 1948: 180).
This definition requires further qualification. Firstly, the reference to $x$ or not-$x$ ensures generality, because it is quite possible that an agent may have power for $x$ or for not-$x$ but not both (e.g. in Assassin 1, assuming that neither assassin can kill the other, both assassins only have the power to kill Victim but not the power to see that he survives). The definition could equally be written as 'for a particular state of affairs s, i has some power with respect to an outcome if …'. Secondly, by requiring only that the resistance be possible rather than actual, we avoid preference-tracking effects such as those in the Master–Slave example. For Slave to have the 'power to $x$', $x$ must occur as a result of Slave's action even if Master is against $x$ occurring and performs the requisite action to prevent $x$; or, for Slave to have some power in this relationship, he must at least be able to prevent $x$, i.e. guarantee not-$x$. As neither of these conditions is met, Slave is powerless. Thirdly, the application of the existential rather than the universal quantifier on the resistance variable ensures that, given our knowledge of the stated or implied conditions, the agents, and the available actions, we can always identify a powerful agent in any situation. To demand that i must be able to overcome the possible or actual resistance of all others (universal quantifier) before i can be deemed as having power to $x$ is to demand too much. It is equivalent to saying that i must at least have a veto with respect to $x$ for i to be ascribed 'power to $x$' (see section 4). In most circumstances such a condition cannot be satisfied (e.g. as in Assassin 2). So either we dump the resistance criterion, in which case we could not tell a causal ascription apart from a power ascription, or we weaken it by applying the existential quantifier. 18

The resistance criterion illuminates a further aspect of the use of 'power': that of 'forcing' and 'winning'. If i has the potential to effect $x$ against the will of another, then i can force or guarantee that outcome, i.e. 'wins in a clash of wills'. Now we have the fuller story: social causality is about contributing to some social outcome; social power is about contributing to forcing it. These are conceptually distinct abilities, and they may or may not coincide. In Assassin 1, for instance, we have the coincidence of power and cause: each assassin can force Victim's death against the will of the other because each has an action that is sufficient for Victim's death, and on the same account each is a causal condition for his death.
18 In a recent article, Dowding (2003: 312) rejected the resistance criterion for a definition of power on the assumption that the universal quantifier must be applied. He does not discuss the use of the existential quantifier. Dowding's intention is to replace the 'clash of wills' account of power with a 'resource-based' one. It must be said, however, that Dowding does not demonstrate that the inclusion of a resistance criterion in a definition of power is incompatible with his resource account. If it were, he would have to demonstrate how his account could handle the causality–power distinction.
In Assassin 2, each assassin can force Victim's death because each has an action that is a necessary part of a set of sufficient conditions that can force Victim's death against the will of another (e.g. Assassin1 and Assassin2 against Assassin3). From the construction of this example, power and cause once again perfectly coincide: the sets of actions that are minimally sufficient are precisely those that are minimally sufficient for forcing the outcome. But in Assassin 5 the situation is different: power and cause come apart, because there exists a combination of actions that is minimally sufficient for the death of Victim, but this combination does not correspond to a set of agents each of whom has an action that is necessary to force Victim's death. In this example Assassin2 can contribute to Victim's death, but he could not, and cannot, force or preclude anything against the will of Assassin1. Therefore, for a causal ascription we are required to examine any combination of actions (or sequences of actions) that is minimally sufficient for the outcome; power ascriptions demand more. The resistance criterion focuses our attention on those combinations of actions (or sequences of actions) that each agent would choose if each agent wanted to force the outcome against the will of others. It is this restriction that makes the set of power ascriptions a subset of the set of causal ascriptions.
4. Power and Cause Untangled 19

Although we have the result we are looking for and could wind up the analysis here, there is still a useful formal exercise to perform. The task is to draw the power–causality distinction exactly, so that we can see why the set-inclusion conjecture is true. Given that we are dealing with strategic combinations of actions, the natural formal framework is that of a game form. Let $X$ be a set of outcomes and $N = \{1, \ldots, n\}$ a set of agents. Let $G = (A_1, \ldots, A_n, Q)$ be a game form on $X$: each $A_i$ is a set of actions, and $Q$ is a function from the set of action combinations, or plays, onto $X$. (Note that since the function is onto $X$, each element of $X$ is the outcome of at least one play of the game.)

19 This section is a result of close collaboration with Martin van Hees. The idea of 'coalitional events' as a way of formalizing the sufficiency condition is due to him.

Although we have seen that causal ascriptions are more basic than power ascriptions, it is simpler to start with the latter. For this we can make use of the concept of an effectivity function, which is a model of power among agents and coalitions. Given $X$ and $N$, we say that a coalition $T \subseteq N$ is B-effective for $A \subseteq X$ if, and only if, (i) for non-empty $T$: there is an action profile (or combination) $a_T$ such that for any action profile $a_{N \setminus T}$ of the others, $Q(a_T, a_{N \setminus T}) \in A$; (ii) for empty $T$: $A = X$. For non-empty $T$ we can also say that $T$ is B-effective for $A$ through $a_T$. 20

20 I use the term B-effective to distinguish it from a weaker concept of effectivity, that of C-effectivity. Under that definition, a coalition $T$'s power is determined not by the coalition's ability to realize $A$ directly but by the ability of $N \setminus T$ to interfere with $T$ obtaining $A$: $T$ is C-effective for $A$ if $N \setminus T$ cannot veto $A$. I will not make use of C-effectivity here.

Next, we say that a coalition $T$ is a minimal winning coalition (MWC) for $A$ if, and only if, it is B-effective for $A$ and no proper subset of $T$ is as well. We say that an agent $i$ 'has power to bring about $A$' if, and only if, there is an MWC for $A$ that contains $i$. Note that membership of an MWC guarantees the fulfilment of the resistance condition: if $T$ is an MWC for $A$, then by definition $T \setminus \{i\}$ is not B-effective for $A$, so that, ceteris paribus, $i$ can force an outcome in $X \setminus A$ against the resistance of the remaining members of $T$; and if $T \setminus \{i\}$ is not B-effective for $A$ but $T$ is, then, ceteris paribus, $i$ can force an outcome in $A$ against the resistance of the members of $N \setminus T$.

We can now turn to causation. In particular, we have to find an appropriate game-theoretic formulation of the NESS test. Recall from the discussion of causal relations that the NESS test checks whether certain events are part of a set of conditions that is minimally sufficient for an outcome. In this case the events are 'actions' performed by agents. That is, we have to translate an MSC into a game-theoretic definition. The formalization must be able to capture the fact that in Assassin 5 both assassins have actions that together form an MSC, but also that Assassin1 has an action that alone is an MSC. First we need the concept of a sufficient condition, which very naturally is a profile $(a_1, \ldots, a_n)$ that is sufficient for $A$. As we are dealing with event causation, call such a profile an event for $A$. Next, we need the concept of a 'coalitional event' or T-event, $a_T$, which is a (possibly empty) set the elements of which are the actions of the members of $T$ (one for each member). A T-event is sufficient for $A$ if, and only if, (i) for non-empty $T$: $T$ is B-effective for $A$ through $a_T$; (ii) for empty $T$: $A = X$. Next, for any $i \in T \neq \emptyset$, let $a_{T \setminus \{i\}}$ denote the $T \setminus \{i\}$-event in which the members of $T \setminus \{i\}$ adopt the same actions as in $a_T$. A T-event $a_T$ is a minimally sufficient set of conditions (MSC) for $A$ if, and only if, (i) $a_T$ is a sufficient condition for $A$; and (ii) if $T \neq \emptyset$, then for all $i \in T$: $a_{T \setminus \{i\}}$ is not a sufficient condition for $A$. Thus, for a given $A$, $i$ satisfies the NESS test and therefore makes a causal contribution to $A$ if $i$ has an action that is part of an MSC. Note that an MSC says nothing about resistance, because all that matters is that the T-event is sufficient for $A$, not that $A$ is guaranteed.

It should now be obvious from comparing the definitions of power (MWC) and causation (MSC) that, for any $A$ and any $T$, if $T$ is an MWC for $A$, then there is a T-event that forms an MSC for $A$. Hence a power ascription implies a causal ascription. The converse, however, is not true: Assassin 5 is the relevant counter-example. Thus the set of MSCs for $A$ contains the set of MWCs for $A$, but, depending on the game form, there may exist an MSC that does not correspond to an MWC. QED.

To cut a long story short, whereas power is about individuals being necessary elements of coalitions that are sufficient for some event $A$, causality is about actions being necessary elements of events, the action profiles, that are sufficient for $A$. Both types of ascription make use of the NESS test, but in each case the NESS test refers to different objects. 21

21 Note that for certain types of games in which action and outcome sets are binary (simple games) power and causality coincide. This indicates why power can be used in such circumstances to allocate responsibility. See, for instance, Holler (2007).
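The set-inclusion result can be seen concretely in a small computation. The following Python sketch is mine, not part of the original analysis: it encodes Assassin 5 as a game form (force levels as actions, an outcome function assumed from the example) and enumerates MWCs and MSCs for the outcome set {dead}.

```python
from itertools import product, combinations

# Assassin 5 as a game form: actions are units of force (assumed from the example).
ACTIONS = {1: (0, 1, 2, 3), 2: (0, 1)}    # 1 = Assassin1, 2 = Assassin2
N = tuple(ACTIONS)
X = {"dead", "alive"}

def Q(profile):                            # outcome function onto X
    return "dead" if sum(profile.values()) >= 2 else "alive"

def sufficient_event(event, A):
    """A T-event (dict: agent -> action) is sufficient for A iff the outcome
    lies in A for every combination of actions by the remaining agents."""
    rest = [i for i in N if i not in event]
    return all(Q({**event, **dict(zip(rest, a))}) in A
               for a in product(*(ACTIONS[i] for i in rest)))

def b_effective(T, A):
    """T is B-effective for A iff some T-event is sufficient for A
    (for empty T the definition requires A = X)."""
    if not T:
        return A == X
    return any(sufficient_event(dict(zip(T, a)), A)
               for a in product(*(ACTIONS[i] for i in T)))

def mwcs(A):
    """Minimal winning coalitions: B-effective with no B-effective proper subset."""
    eff = [frozenset(T) for r in range(1, len(N) + 1)
           for T in combinations(N, r) if b_effective(T, A)]
    return [set(T) for T in eff if not any(S < T for S in eff)]

def mscs(A):
    """Minimally sufficient T-events: dropping any member's action destroys sufficiency."""
    out = []
    for r in range(1, len(N) + 1):
        for T in combinations(N, r):
            for a in product(*(ACTIONS[i] for i in T)):
                ev = dict(zip(T, a))
                if sufficient_event(ev, A) and all(
                        not sufficient_event({j: x for j, x in ev.items() if j != i}, A)
                        for i in ev):
                    out.append(ev)
    return out

print(mwcs({"dead"}))   # [{1}]                       -- only Assassin1 has power
print(mscs({"dead"}))   # [{1: 2}, {1: 3}, {1: 1, 2: 1}]
# Assassin2's push appears in an MSC but in no MWC: a cause without power.
```

Every MWC corresponds to an MSC (here {1} corresponds to the events {1: 2} and {1: 3}), but the joint event in which both assassins push with 1 unit is an MSC with no MWC counterpart, which is exactly the wedge between cause and power.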
5. Concluding Remarks

I will now tie up this essay by glancing at the consequences of this analysis. While it could be said that I have done a lot for very little, the very little has broad and fruitful consequences. As I see it, these consequences fall into two camps: methodological and normative. Methodologically, the result of this study allows us finally to put to rest the idea that power is a redundant concept, with no meaning separate from that of causality (Riker's thesis). There is a distinct difference between the two concepts: an ascription of causal contribution contains no reference to resistance and force; and, formally speaking, cause and power refer to different objects, namely MSCs and MWCs respectively. We can also put to rest the idea that power and cause are only overlapping categories (Oppenheim's thesis) and the idea that causal terminology is not useful for our understanding of power (Goldman's thesis). The March–Dahl set-inclusion thesis is the correct one. 22

Secondly, proving that power ascriptions are a subset of causal ascriptions is, as intimated at the outset, significant for normative theory, in particular for theories of responsibility. This is important because the allocation of blame or praise is one of the ways in which we judge and regulate the behaviour of our fellows, our institutions, and everyday affairs. I cannot enlarge on this matter here, but I can suggest two domains of responsibility for which the power–cause distinction is of special significance. The first is the domain of what we may call 'retrospective responsibility', or responsibility for what has occurred. In such situations the concept of power is, in general, too narrow, because it will exonerate some agents even if they have contributed to a particular state of affairs (e.g. Assassin 5). If a theory of responsibility is to connect individual actions to outcomes, then causal contribution appears to be a necessary criterion.

22 There is another methodological consequence, related to the construction of power indices, which I cannot discuss here. However, a brief analysis can be found in Braham (2005).
The second domain of moral responsibility for which the concept of power has played a vital role is that of prospective responsibility, or that which we owe to others. This domain includes questions of, for instance, who bears positive duties to others in need. In this context an ethical proposition by Peter Singer (1972) raises its head: 'if it is in our power to prevent something bad from happening, without sacrificing anything of comparable moral importance, we ought to do it.' Clearly by 'power' Singer means the 'can of ability', and his moral proposition turns on the canonical deontic principle of 'ought implies can'. But is this 'can of ability' that of 'power to'? If it is, then this rendition of 'ought implies can' is, in all probability, too strong for the purposes of allocating positive duties. Obviously, if our behaviour does not impinge either actually or potentially on the lives of others, then we cannot be thought to have any duty towards others. But we need not have power before we can impinge on the lives of others; we can do so even if we are powerless (see Assassin 5 once more). On this view, our positive duties towards others should be based on our potential causal contributions, irrespective of our power. If, by acting alone, Norman can prevent Manfred from being harmed (he has the power to prevent harm to Manfred) without sacrificing anything of moral importance, while Bernie can only assist Norman in doing so as a causal contributor (also without any sacrifice), then Bernie has an obligation to assist Norman in preventing the harm to Manfred. Not only can powerlessness not be cited as a basis for exoneration from what we have done; it cannot even be cited as a basis for exonerating us from a moral obligation to perform an act that is an overdetermining cause. 23 The only valid basis for being exonerated in this arena is causal inefficacy. But it would be mistaken to think that theories of responsibility can be liberated from the concept of power. Power undoubtedly maintains a central importance, because being ascribed power places special duties upon us: those of leadership and organization. To have power with respect to preventing a harm (or righting a wrong) implies not only a duty to prevent the harm (or right the wrong) but also a duty to mobilize the moral community – all those who can, via their causal effects, impinge upon the lives of others. It is the choices of the powerful that govern how effectual the powerless will be.
Acknowledgements

This paper grew out of earlier joint work with Manfred Holler, to whom I am indebted for endless discussions and inspiration over the last seven years. I would also like to thank Hartmut Kliemt and Hannu Nurmi for written comments, as well as the participants at the conference on 'Power: Conceptual, Formal, and Applied Dimensions', 17–20 August 2006, Hamburg, for criticisms and suggestions.
23 This position is, admittedly, a contentious one. It stands in direct contrast to Parfit's (1984: 82–83) claim, made using an argument that is for all intents and purposes the INUS/NESS test, that a person does not have a duty to perform an act that is an overdetermining cause.
Special thanks go to Martin van Hees, who really deserves co-authorship, because much of the analysis is based on our collaborative efforts to untangle the concepts analyzed here. Section 4, in particular, bears Martin's footprint and has now found expression in new joint work.
References

Barry, B. (1976) Power: An Economic Analysis, in B. Barry (ed.) Power and Political Theory: Some European Perspectives, John Wiley, 69–101.
Braham, M. (2005) Causation and the Measurement of Power, Homo Oeconomicus 22: 645–652.
Braham, M. and Holler, M.J. (2005) The Impossibility of a Preference-based Power Index, Journal of Theoretical Politics 17: 137–158.
Braham, M. and Holler, M.J. (2008) Distributing Causal Responsibility in Collectivities, in T. Boylan and R. Gekker (eds) Economics, Rational Choice, and Normative Philosophy, Routledge.
Connolly, W.E. (1983) The Terms of Political Discourse, Martin Robertson.
Dahl, R.A. (1957) The Concept of Power, Behavioral Science 2: 201–215.
Dahl, R.A. (1963) Modern Political Analysis, Prentice Hall.
Dahl, R.A. (1968) Power, in D.L. Sills (ed.) International Encyclopedia of the Social Sciences, Macmillan, 405–415.
Dowding, K. (2003) Resources, Power, and Systematic Luck: A Response to Barry, Politics, Philosophy and Economics 2: 305–322.
French, P.A. (1984) Collective and Corporate Responsibility, Columbia University Press.
Gerth, H.H. and Mills, C.W. (eds) (1948) From Max Weber: Essays in Sociology, Routledge.
Goldman, A.I. (1972) Toward a Theory of Social Power, Philosophical Studies 23: 221–268.
Goldman, A.I. (1974) On the Measurement of Power, Journal of Philosophy 71: 231–252.
Hart, H.L.A. and Honoré, A.M. (1959) Causation in the Law, Oxford University Press.
Held, V. (1972) Coercion and Coercive Offers, in J.R. Pennock and J.W. Chapman (eds) Coercion, Aldine Atherton.
Hobbes, T. (1651) Leviathan, Oxford University Press.
Holler, M.J. (2007) Freedom of Choice, Power, and the Responsibility of Decision Makers, in A. Marciano and J.-M. Josselin (eds) Democracy, Freedom and Coercion: A Law and Economics Approach, Edward Elgar, 22–42.
Honoré, A.M. (1995) Necessary and Sufficient Conditions in Tort Law, in D. Owen (ed.) Philosophical Foundations of Tort Law, Oxford University Press, 363–385.
Kernohan, A. (1989) Social Power and Human Agency, Journal of Philosophy 86: 712–726.
Kramer, M.H. (2003) The Quality of Freedom, Oxford University Press.
Lukes, S. (ed.) (1986) Power, Basil Blackwell.
Mackie, J.L. (1965) Causes and Conditions, American Philosophical Quarterly 2: 245–264.
Mackie, J.L. (1974) The Cement of the Universe, Oxford University Press.
March, J.G. (1955) An Introduction to the Theory and Measurement of Power, American Political Science Review 49: 431–451.
Morriss, P. (1987/2002) Power: A Philosophical Analysis, Manchester University Press.
Nagel, J. (1975) The Descriptive Analysis of Power, Yale University Press.
Oppenheim, F.E. (1960) Degrees of Power and Freedom, American Political Science Review 54: 437–446.
Oppenheim, F.E. (1961) Dimensions of Freedom, St. Martin's Press.
Oppenheim, F.E. (1976) Power and Causation, in B. Barry (ed.) Power and Political Theory: Some European Perspectives, John Wiley, 103–116.
Oppenheim, F.E. (1981) Political Concepts: A Reconstruction, University of Chicago Press.
Parfit, D. (1984) Reasons and Persons, Oxford University Press.
Pearl, J. (2000) Causality: Models, Reasoning, Inference, Cambridge University Press.
Riker, W.H. (1964) Some Ambiguities in the Notion of Power, American Political Science Review 58: 341–349.
Russell, B. (1913) On the Notion of Cause, Proceedings of the Aristotelian Society 13: 1–26.
Simon, H.A. (1957) Models of Man, Wiley.
Singer, P. (1972) Famine, Affluence, and Morality, Philosophy and Public Affairs 1: 229–243.
Suppes, P. (1970) A Probabilistic Theory of Causality, North-Holland.
Vallentyne, P. (2008) Brute Luck and Responsibility, Politics, Philosophy and Economics (forthcoming).
Wright, R. (1985) Causation in Tort Law, California Law Review 73: 1735–1828.
Wright, R. (1988) Causation, Responsibility, Risk, Probability, Naked Statistics, and Proof: Pruning the Bramble Bush by Clarifying the Concepts, Iowa Law Review 73: 1001–1077.
2. Power Indices Methodology: Decisiveness, Pivots, and Swings

František Turnovec (Institute of Economic Studies, Charles University, Prague, Czech Republic)
Jacek W. Mercik (Wroclaw University of Technology, Poland)
Mariusz Mazurkiewicz (Wroclaw University of Technology, Poland)
1. Introduction

The three most frequently used measures of a priori voting power of members of a committee were proposed by Shapley and Shubik (1954), Penrose (1946) and Banzhaf (1965), and Holler and Packel (1983). We shall refer to them also as the SS-power index, the PB-power index, and the HP-power index. There exist also other well-defined power indices, such as the Johnston index (1978) and the Deegan-Packel index (1979). 1 In this paper we analyse the Shapley-Shubik, Penrose-Banzhaf, and Holler-Packel concepts of power measurement in terms of the so-called I-power (a voter's potential influence over the outcome of voting) and P-power (a voter's expected relative share in a fixed prize available to the winning group of committee members) classification introduced by Felsenthal et al. (1998). We show that objections to the Shapley-Shubik index based on its interpretation as a P-power concept are not sufficiently justified. The Shapley-Shubik, Penrose-Banzhaf, and Holler-Packel measures can all be successfully derived as cooperative game values, and at the same time they can be interpreted as probabilities of being in some decisive position (pivot, swing) without using cooperative game theory at all (Turnovec 2004; Turnovec et al. 2004). It is demonstrated in the paper that both pivots and swings can be introduced as special cases of a more general concept of decisiveness based on the assumption of equi-probable orderings expressing the intensity of committee members' support for the voted issue. A new general a priori voting power measure, distinguishing between absolute and relative power and covering the Shapley-Shubik, Penrose-Banzhaf, and Holler-Packel indices as special cases, was proposed in Turnovec (2007).

1 For a comprehensive survey and analysis of power indices methodology, see Felsenthal and Machover (1998).
2. Basic Concepts

Let $N = \{1, \ldots, n\}$ be the set of members (players, parties) and $\omega_i$ $(i = 1, \ldots, n)$ be the (real, non-negative) weight of the $i$-th member, such that
$$\sum_{i \in N} \omega_i = \tau, \qquad \omega_i \geq 0$$
(e.g. the number of votes of party $i$, or the ownership of $i$ in number of shares), where $\tau$ is the total weight of all members. Let $\gamma$ be a real number such that $0 < \gamma \leq \tau$. The $(n+1)$-tuple $[\gamma, \omega] = [\gamma, \omega_1, \ldots, \omega_n]$ such that
$$\sum_{i=1}^{n} \omega_i = \tau, \qquad \omega_i \geq 0, \qquad 0 < \gamma \leq \tau$$
we call a committee (or a weighted voting body) of size $n$ with quota $\gamma$, total weight $\tau$, and allocation of weights $\omega$. Any non-empty subset $S \subseteq N$ we call a voting configuration. Given an allocation $\omega$ and a quota $\gamma$, we say that $S \subseteq N$ is a winning voting configuration if $\sum_{i \in S} \omega_i \geq \gamma$, and a losing voting configuration if $\sum_{i \in S} \omega_i < \gamma$. A configuration $S$ is winning if it has the required majority; otherwise it is losing. Let
$$G = \Big\{ [\gamma, \omega] \in \mathbb{R}^{n+1} : \sum_{i=1}^{n} \omega_i = \tau,\ \omega_i \geq 0,\ 0 < \gamma \leq \tau \Big\}$$
be the space of all committees of size $n$, total weight $\tau$, and quota $\gamma$. A power index is a vector-valued function $\Pi : G \to \mathbb{R}^n$ mapping the space $G$ of all committees into $\mathbb{R}^n$. A power index represents, for each of the committee members, a reasonable expectation that she will be 'decisive' in the sense that her vote (yes or no) will determine the final outcome of voting. The probability of being decisive we call the absolute power index of an individual member; by normalization of an absolute power index we obtain the relative power of an individual member. By $\Pi_i(\gamma, \omega)$ and $\pi_i(\gamma, \omega)$ we denote the absolute and relative power an index grants to the $i$-th member of a committee with allocation of weights $\omega$, total weight $\tau$, and quota $\gamma$.

Minimal requirements usually imposed on a power mapping $\Pi : G \to \mathbb{R}^n$ are the anonymity, dummy, and symmetry axioms. Let $[\gamma, \omega]$ be a committee with the set of members $N$ and $\theta : N \to N$ be a permutation mapping. The committee $[\gamma, \theta\omega]$ we call a permutation of the committee $[\gamma, \omega]$, $\theta(i)$ being the new name (number) of the member with original name $i$. The anonymity axiom requires that $\Pi_i(\gamma, \omega) = \Pi_{\theta(i)}(\gamma, \theta\omega)$. This says that power is a property of the position in the committee and not of the name or number of the committee member. A member $i \in N$ of a committee $[\gamma, \omega]$ is said to be a dummy if she cannot benefit any voting configuration by joining it, i.e. member $i$ is a dummy if $\sum_{k \in S} \omega_k \geq \gamma \Rightarrow \sum_{k \in S \setminus \{i\}} \omega_k \geq \gamma$ for any winning configuration $S \subseteq N$ such that $i \in S$. The dummy axiom requires that $\Pi_i(\gamma, \omega) = 0$ if and only if $i$ is a dummy. Two distinct members $i$ and $j$ of a committee $[\gamma, \omega]$ are called symmetric if their benefit to any voting configuration is the same, i.e. for any $S$ such that $i, j \notin S$: $\sum_{k \in S \cup \{i\}} \omega_k \geq \gamma \iff \sum_{k \in S \cup \{j\}} \omega_k \geq \gamma$. The symmetry axiom requires that the power of symmetric members is the same. To define a particular power measure means to identify some qualitative property (decisiveness) whose presence or absence in the voting process can be established and quantified (Nurmi 1997). Generally there are two such properties, related to the position of committee members in voting, that are used as a starting point for the quantification of a priori voting power: a swing position and a pivotal position of a member.
3. Pivots and Swings

The SS-power index is based on the concept of pivot. Let the numbers $1, \ldots, n$ be the fixed names of committee members and $(i_1, \ldots, i_n)$ be a permutation of the members of the committee. Let us assume that member $k$ is in position $r$ in this permutation, i.e. $k = i_r$. We say that $k$ is in a pivotal position (has a pivot) with respect to the permutation $(i_1, \ldots, i_n)$ if
$$\sum_{j=1}^{r-1} \omega_{i_j} < \gamma \quad\text{and}\quad \sum_{j=1}^{r} \omega_{i_j} \geq \gamma.$$
Assume that a strict ordering of members in a given permutation expresses the intensity of their support (preferences) for a particular issue, in the sense that if a member $i_s$ precedes a member $i_t$ in the permutation, then $i_s$'s support for the particular proposal to be decided is stronger than $i_t$'s. One can expect that the group supporting the proposal will form in the order of the members' positions in the given permutation. If so, then $k$ is in the situation where the group composed of the members preceding her in the permutation still does not have enough votes to pass the proposal, while the group of members placed behind her in the permutation does not have enough votes to block it. Whichever group secures her support will win. A member in a pivotal situation thus has a decisive influence on the final outcome. Assuming many voting acts and all possible preference orderings equally likely, under a full veil of ignorance about other aspects of individual preferences, it makes sense to evaluate the a priori voting power of each committee member as the probability of being in a pivotal situation. This probability is measured by the SS-power index:
$$\Pi_i^{SS}(\gamma, \omega) = \frac{p_i}{n!}.$$
Here $p_i$ is the number of pivotal positions of committee member $i$ and $n!$ is the number of permutations of the committee members, i.e. the number of different orderings of $n$ elements. From $\sum_{i=1}^{n} p_i = n!$ it follows that $\pi_i(\gamma, \omega) = \Pi_i(\gamma, \omega)$ (i.e. the relative SS-power index is equal to the absolute one).

The PB-power index is based on the concept of swing. Let $S$ be a winning configuration in a committee $[\gamma, \omega]$. We say that $i \in S$ has a swing in configuration $S$ if $\sum_{k \in S} \omega_k \geq \gamma$ and $\sum_{k \in S \setminus \{i\}} \omega_k < \gamma$. Let $s_i$ denote the total number of swings of member $i$ in the committee $[\gamma, \omega]$. The original Penrose definition of voting power was in absolute form (the absolute PB-power index), given by:
$$\Pi_i^{PB}(\gamma, \omega) = \frac{s_i}{2^{n-1}}.$$
Assuming that all configurations are equally likely, this is nothing else but the probability that the given member will be decisive (the probability of having a swing). The PB-power index is frequently used in relative form, obtained by normalization of the absolute PB-index:
$$\pi_i^{PB}(\gamma, \omega) = \frac{s_i}{\sum_{k \in N} s_k}.$$

The Holler-Packel power index also belongs to the class of swing-based measures. Let $S$ be a winning configuration in a committee $[\gamma, \omega]$. We say that $S$ is a minimal winning configuration if for every $i \in S$ it holds that $\sum_{k \in S} \omega_k \geq \gamma$ and $\sum_{k \in S \setminus \{i\}} \omega_k < \gamma$. The HP-power index assigns to each member of a committee a share of power proportional to the number of swings in minimal winning configurations of which she is a member. Let $m_i$ denote the total number of swings of member $i$ in minimal winning configurations in the committee; then the Holler-Packel index in relative form is
$$\pi_i^{HP}(\gamma, \omega) = \frac{m_i}{\sum_{k \in N} m_k}.$$
It is assumed that all winning configurations are possible but only minimal winning configurations are actually formed, to exclude free-riding by members who cannot influence the outcome of voting. The 'public good' interpretation (the power of each member of a minimal winning configuration is identical with the power of the minimal winning configuration as a whole; power is indivisible) is used to justify the HP index. Although Holler and Packel never presented an absolute form of their index, it is possible to do so following the logic of the PB index. Assuming that only minimal winning configurations will be formed and all of them are equally likely, we obtain the absolute HP-power index as
$$\Pi_i^{HP}(\gamma, \omega) = \frac{m_i}{\mu(\gamma, \omega)}$$
where $\mu(\gamma, \omega)$ stands for the number of minimal winning configurations in the committee $[\gamma, \omega]$. The absolute HP-power index gives the probability that a given member will have a swing in a minimal winning configuration (the ratio of the number of her memberships in minimal winning configurations to the total number of minimal winning configurations). The SS-power index, PB-power index, and HP-power index all satisfy the anonymity, dummy, and symmetry axioms.
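For small committees, all three indices can be computed by brute-force enumeration straight from the definitions above. The following Python sketch is illustrative only (it is exponential in the number of members); the committee [51; 50, 30, 20] is the one analysed in section 8 below.

```python
import math
from itertools import permutations, combinations
from fractions import Fraction

def indices(quota, weights):
    """Brute-force pivot/swing counts for a small committee [quota; weights].
    Returns (absolute SS, absolute PB, relative HP) as lists of Fractions."""
    n = len(weights)
    members = range(n)
    win = lambda S: sum(weights[i] for i in S) >= quota

    # Shapley-Shubik: count pivotal positions over all n! permutations.
    pivots = [0] * n
    for perm in permutations(members):
        total = 0
        for i in perm:
            total += weights[i]
            if total >= quota:
                pivots[i] += 1
                break
    ss = [Fraction(p, math.factorial(n)) for p in pivots]

    # Penrose-Banzhaf swings and Holler-Packel minimal winning configurations.
    swings, mwc_members = [0] * n, [0] * n
    for r in range(1, n + 1):
        for S in combinations(members, r):
            if not win(S):
                continue
            critical = [i for i in S if not win(tuple(j for j in S if j != i))]
            for i in critical:
                swings[i] += 1
            if len(critical) == len(S):      # every member has a swing: minimal
                for i in S:
                    mwc_members[i] += 1
    pb = [Fraction(s, 2 ** (n - 1)) for s in swings]
    hp = [Fraction(m, sum(mwc_members)) for m in mwc_members]
    return ss, pb, hp

ss, pb, hp = indices(51, [50, 30, 20])
print(ss)  # [Fraction(2, 3), Fraction(1, 6), Fraction(1, 6)]
print(pb)  # [Fraction(3, 4), Fraction(1, 4), Fraction(1, 4)]
print(hp)  # [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
```

Note that no cooperative-game machinery appears anywhere in this computation, which is precisely the point made in the next sections.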
4. I-Power and P-Power Felsenthal et al. (1998) introduced the concepts I- and P-power. By I-power they mean: … voting power conceived of as a voter’s potential influence over the outcome of divisions of the decision making body: whether proposed bills are adopted or blocked. Penrose’s approach was clearly based on this notion, and his measure of voting power is a proposed formalization of a priori I-power (Felsenthal and Machover 2004: 9).
By P-power they mean: … voting power conceived as a voter’s expected relative share in a fixed prize available to the winning coalition under a decision rule, seen in the guise of a simple TU (transferable utility) cooperative game. The Shapley-Shubik approach was evidently based on this notion, and their index is a proposed quantification of a priori Ppower (Felsenthal and Machover 2004: 9).
Hence, the fundamental distinction between I-power and P-power lies in the fact that the I-power notion takes the outcome to be the immediate one, passage or defeat of the proposed bill, while on the P-power view the passage of the bill is merely the ostensible and proximate outcome of a division; the real and ultimate outcome is the distribution of a fixed purse – the prize of power – among the winners (Felsenthal and Machover 2004: 9–10). As a conclusion it follows that the SS-power index does not measure a priori voting power, but says how to agree on dividing the 'pie' (the benefits of victory). As the major argument for this classification the authors provide a historical observation: Penrose's 1946 paper was either unnoticed or ignored by mainstream – predominantly American – social choice theorists, and Shapley and Shubik's 1954 paper was seen as inaugurating the scientific study of voting power. Because the Shapley-Shubik paper was wholly based on cooperative game theory, it induced among social scientists an almost universal unquestioning belief that the study of power was necessarily and entirely a branch of that theory (Felsenthal and Machover 2004: 8). The conclusion rests on the grounds that, since cooperative game theory with transferable utility is about how to divide a pie, and since the SS-power index was derived as a special case of the Shapley value of a cooperative game, the SS-power index is about P-power and does not measure voting power as such.

We demonstrated above that one does not need cooperative game theory to define and justify the SS-power index. The SS-power index is the probability of being in a pivotal situation in an intuitively plausible process of forming winning configurations, with no division of benefits involved whatsoever. It is interesting to note that the SS-power index appeared as a special case of the Shapley value for cooperative games with transferable utility, but in exactly the same way one can handle the PB-index. Let us make a short excursion into cooperative game theory. Let $N$ be the set of players in a cooperative game (cooperation among the players is permitted, and the players can form coalitions and transfer utility gained together among themselves) and $2^N$ the power set of $N$, i.e. the set of all subsets $S \subseteq N$, called coalitions, including the empty coalition. The characteristic function of the game is a mapping $v : 2^N \to \mathbb{R}$ with $v(\emptyset) = 0$. The interpretation is that for any subset $S \subseteq N$ the number $v(S)$ is the value (worth) of the coalition $S$, in terms of how much 'utility' the members of $S$ can divide among themselves, in any way that sums to no more than $v(S)$, if they all agree. The characteristic function is said to be super-additive if for any two disjoint subsets $S, T \subseteq N$ we have
$$v(S \cup T) \geq v(S) + v(T),$$
i.e. the worth of the coalition $S \cup T$ is at least the sum of the worths of its parts acting separately. Let us denote a cooperative game in characteristic function form by $[N, v]$. The game $[N, v]$ is said to be super-additive if its characteristic function is super-additive. By a value of the game $[N, v]$ we mean a non-negative vector $\varphi(N, v)$ such that $\sum_{i \in N} \varphi_i(N, v) = v(N)$. By $c(i, T) = v(T) - v(T \setminus \{i\})$ we denote the marginal contribution of the player $i \in N$ to the coalition $T \subseteq N$. Then, in an abstract setting, the value $\varphi_i(N, v)$ of the $i$-th player in the game $[N, v]$ can be defined as a weighted sum of his marginal contributions to all coalitions he is a member of:
$$\varphi_i(v) = \sum_{T \subseteq N,\, i \in T} \alpha(T)\, c(i, T).$$
Different weights $\alpha(T)$ lead to different values. Shapley (1953) defined his value by the weights
$$\alpha(T) = \frac{(t-1)!\,(n-t)!}{n!}$$
where $t$ is the cardinality of $T$. He proved that it is the only value satisfying the following three axioms: (i) the dummy axiom (see above), (ii) the anonymity axiom (see above), and (iii) the additivity axiom (the value of the sum of two games $[N, v]$ and $[N, u]$ is the sum of their values: $\varphi(N, v + u) = \varphi(N, v) + \varphi(N, u)$). As Owen (1982) noticed, the relative PB-index is also meaningful for general cooperative games with transferable utility. One can define the Banzhaf value by setting the weights
$$\alpha(T) = \frac{v(N)}{\sum_{k \in N} \sum_{T \subseteq N,\, k \in T} c(k, T)}.$$
Owen demonstrates a certain relation between the Shapley value and the Banzhaf value of a cooperative game with transferable utility: both give weighted averages of a player's marginal contributions; the difference lies in the weighting coefficients (in the Shapley value the coefficients depend on the size of the coalition, in the Banzhaf value they are independent of coalition size). There is also a similar generalization of the Holler-Packel public good index (which is based on membership in minimal winning configurations, in which each member has a swing) as a cooperative game value (Holler and Li 1995).

The relation between values and power indices is straightforward: a cooperative characteristic function game represented by a characteristic function $v$ that takes only the values 0 and 1 is called a simple game. With any committee with quota $\gamma$ and allocation $\omega$ we can associate a super-additive simple game such that
$$v(S) = \begin{cases} 1 & \text{if } \sum_{i \in S} \omega_i \geq \gamma \\ 0 & \text{otherwise.} \end{cases}$$
Super-additive simple games can be used as natural models of voting in committees. Shapley and Shubik (1954) applied the concept of the Shapley value for general cooperative characteristic function games to super-additive simple games as a measure of voting power in committees. Here we have shown that the relative PB-power index can likewise be extended to a value for general cooperative characteristic function games. But we do not need cooperative games to model voting in committees.
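The claim that the SS-power index is the Shapley value of the associated simple game can be checked directly. A minimal sketch, assuming the committee [51; 50, 30, 20] from section 8: it computes $\varphi_i$ as the weighted sum of marginal contributions with Shapley's weights and recovers the pivot-based SS values.

```python
from itertools import combinations
from fractions import Fraction
from math import factorial

N = (1, 2, 3)
w, quota = {1: 50, 2: 30, 3: 20}, 51
v = lambda S: int(sum(w[i] for i in S) >= quota)   # simple game from the committee

def shapley(i):
    """phi_i = sum over coalitions T containing i of alpha(T) * c(i, T),
    with Shapley's weights alpha(T) = (t-1)!(n-t)!/n!."""
    n, phi = len(N), Fraction(0)
    for t in range(1, n + 1):
        for T in combinations(N, t):
            if i not in T:
                continue
            c = v(T) - v(tuple(x for x in T if x != i))   # marginal contribution
            phi += Fraction(factorial(t - 1) * factorial(n - t), factorial(n)) * c
    return phi

print([shapley(i) for i in N])   # [Fraction(2, 3), Fraction(1, 6), Fraction(1, 6)]
```

The output coincides with the pivot count 4/6, 1/6, 1/6 obtained by enumerating permutations, which is the formal content of the Shapley–Shubik construction.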
5. A Generalized Concept of Decisiveness

In a committee $[\gamma, \omega]$ with the set of members $N$, let $(S_1, \ldots, S_k)$, $k \leq n$, be a partition of $N$, i.e. $\bigcup_{j=1}^{k} S_j = N$ and for any $s, t \in \{1, \ldots, k\}$, $s \neq t$, it holds that $S_s \cap S_t = \emptyset$. Let $(j_1, \ldots, j_k)$ be a permutation of the numbers $(1, \ldots, k)$; then $\sigma = (S_{j_1}, \ldots, S_{j_k})$ we call an ordering defined on $N$, an ordered collection of groups of voters. Considering a particular issue being voted on, the following interpretation of an ordering $\sigma$ on $N$ is possible. Let $r \in S_{j_u}$ and $t \in S_{j_v}$. (a) If $u < v$, then member $r$'s support for the particular proposal is stronger than the support of member $t$. (b) If $u = v$, then support for the particular proposal by both $r$ and $t$ is the same. If $u < v$, it is plausible to assume that if the members of $S_{j_v}$ vote 'yes', then the members of $S_{j_u}$ also vote 'yes', and if the members of $S_{j_u}$ vote 'no', then the members of $S_{j_v}$ also vote 'no'. For formal reasons we set $S_{j_0} = \{0\}$ and $\omega_0 = 0$. Let $\sigma = (S_1, \ldots, S_v, \ldots, S_k)$ be an ordering. We say that a group $S_v$ is in a pivotal position in the ordering $\sigma$ if
$$\sum_{j=0}^{v-1} \sum_{i \in S_j} \omega_i < \gamma \quad\text{and}\quad \sum_{j=1}^{v} \sum_{i \in S_j} \omega_i \geq \gamma.$$
Under our assumptions, if the pivotal group votes 'yes', then the outcome of voting is 'yes', and if they vote 'no', then the outcome is 'no'. We say that a member $t \in S_v$ ($v > 0$) is in a decisive situation (has a swing) in an ordering $(S_1, \ldots, S_v, \ldots, S_k)$ if $S_v$ is a pivotal group and $S_v \setminus \{t\}$ is not a pivotal group. Let the group $S_v$ be pivotal and $t \in S_v$; then $t$ is decisive if and only if either
$$\sum_{j=0}^{v-1} \sum_{i \in S_j} \omega_i + \sum_{i \in S_v \setminus \{t\}} \omega_i < \gamma$$
(i.e. if the group $S_v$ joins the preceding groups voting 'yes', then by unilaterally changing his 'yes' to 'no' member $t$ changes the outcome of voting from 'yes' to 'no'), or
$$\sum_{j=0}^{v-1} \sum_{i \in S_j} \omega_i + \omega_t \geq \gamma$$
(i.e. if the group $S_v$ does not join the preceding groups voting 'yes', then by unilaterally changing his 'no' to 'yes' member $t$ changes the outcome of voting from 'no' to 'yes'). The generalized concept of decisiveness combines the logic of pivots and swings: a member of a committee is decisive if she has a swing in a pivotal group.
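The decisiveness test is easy to operationalize. A minimal Python sketch (the function name is mine; the committee is the [51; 50, 30, 20] example used in section 8):

```python
def decisive(quota, weights, ordering, t):
    """Does member t have a swing in the pivotal group of `ordering`?
    `ordering` is a list of groups (sets of members), most supportive first;
    `weights` maps members to voting weights."""
    before = 0                                   # weight of groups preceding S_v
    for group in ordering:
        gw = sum(weights[i] for i in group)
        if before < quota <= before + gw:        # group S_v is pivotal
            if t not in group:
                return False
            # yes->no swing: without t, the 'yes' camp falls short of the quota;
            # no->yes swing: t's weight alone lifts the preceding groups over it.
            return before + gw - weights[t] < quota or before + weights[t] >= quota
        before += gw
    return False

w = {1: 50, 2: 30, 3: 20}
print(decisive(51, w, [{1}, {2, 3}], 2))   # True: 2 has a swing in pivotal group {2,3}
print(decisive(51, w, [{2}, {1, 3}], 3))   # False: 3 has no swing in pivotal group {1,3}
```

The two sample calls correspond to the orderings (1,(2*,3*)) and (2,(1*,3)) of Table 3 below.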
6. Partition and Ordering Numbers

Let $N$ be a finite set of size $n$, $k$ a positive integer $(1 \leq k \leq n)$, and $T(N, k) = (T_1, T_2, \ldots, T_k)$ a set of $k$ disjoint nonempty subsets of $N$ such that for any $r, s \leq k$, $r \neq s$, $T_r \cap T_s = \emptyset$ and $\bigcup_{j=1}^{k} T_j = N$ (a partition of $N$ of size $k$). Let $P(N, k)$ be the set of all partitions of $N$ of size $k$, and $p(n, k)$ the cardinality of $P(N, k)$. Then, setting $p(n, 1) = 1$ and $p(n, n) = 1$, for any positive integers $k$ and $n$ such that $1 < k < n$ we have:
$$p(n, k) = p(n-1, k-1) + k\,p(n-1, k).$$
The above recurrence relation generates the so-called Stirling numbers of the second kind (we shall refer to them also as partition numbers), which count the number of ways to partition a set of $n$ elements into $k$ nonempty subsets; see e.g. Hall (1967). In Table 1 we provide several values of Stirling numbers of the second kind. For instance, the number $p(6, 4) = 65$ in column $k = 4$ and row $n = 6$, indicating that there exist 65 distinct partitions of 6 elements into 4 nonempty subsets, is given by $65 = 25 + (4 \times 10)$, where 25 is the number above and to the left of 65, 10 is the number above 65, and 4 is the index of the column containing the 10. The last column provides the total number of partitions of $n$ elements. There exists also an explicit formula for partition numbers:
$$p(n, k) = \frac{1}{k!} \sum_{j=1}^{k} (-1)^{k-j} \binom{k}{j} j^n.$$

Table 1. Partition Numbers p(n, k), Stirling Numbers of the Second Kind

n\k    1    2    3     4     5    6   7   8   Total
1      1                                          1
2      1    1                                     2
3      1    3    1                                5
4      1    7    6     1                         15
5      1   15   25    10     1                   52
6      1   31   90    65    15    1             203
7      1   63  301   350   140   21   1         877
8      1  127  966  1701  1050  266  28   1    4140

Having a partition of $N$ of size $k$, there exist $k!$ ways to order its elements. Let us denote by $O(N, k)$ the set of all orderings of size $k$ defined on $N$, and $o(n, k)$ its cardinality. Then, setting $o(n, 1) = 1$ and $o(n, n) = n!$, for any positive integers $k$ and $n$ such that $1 < k < n$ we have
$$o(n, k) = k\,o(n-1, k-1) + k\,o(n-1, k).$$
This recurrence relation follows directly from the recurrence for the Stirling numbers of the second kind. We shall refer to $o(n, k)$ as ordering numbers. In Table 2 we provide several values of ordering numbers. For instance, the number $o(5, 3) = 150$ in column $k = 3$ and row $n = 5$, indicating that there exist 150 distinct orderings of 5 elements with 3 nonempty indifference classes, is given by $150 = (3 \times 14) + (3 \times 36)$, where 14 is the number above and to the left of 150, 36 is the number above 150, and 3 is the index of the column containing the 150. The last column provides the total number of orderings of $n$ elements. There exists also an explicit formula for ordering numbers:
$$o(n, k) = \sum_{j=1}^{k} (-1)^{k-j} \binom{k}{j} j^n.$$

Table 2. Ordering Numbers o(n, k)

n\k   1    2     3      4       5       6       7      8    Total
1     1                                                         1
2     1    2                                                    3
3     1    6     6                                             13
4     1   14    36     24                                      75
5     1   30   150    240     120                             541
6     1   62   540   1560    1800     720                    4683
7     1  126  1806   8400   16800   15120    5040           47293
8     1  254  5796  40824  126000  191520  141120  40320   545835
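Both recurrences are immediate to implement, and the explicit formulas and table values can be used as cross-checks. A small sketch:

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def p(n, k):
    """Partition numbers (Stirling numbers of the second kind), by recurrence."""
    if k < 1 or k > n:
        return 0
    if k == 1 or k == n:
        return 1
    return p(n - 1, k - 1) + k * p(n - 1, k)

@lru_cache(maxsize=None)
def o(n, k):
    """Ordering numbers: o(n,k) = k*o(n-1,k-1) + k*o(n-1,k) = k! * p(n,k)."""
    if k < 1 or k > n:
        return 0
    if k == 1:
        return 1
    if k == n:
        return factorial(n)
    return k * (o(n - 1, k - 1) + o(n - 1, k))

# Explicit formula for ordering numbers, used as a cross-check.
explicit_o = lambda n, k: sum((-1) ** (k - j) * comb(k, j) * j ** n
                              for j in range(1, k + 1))

assert p(6, 4) == 65                        # Table 1: 65 = 25 + 4 * 10
assert o(5, 3) == 150 == explicit_o(5, 3)   # Table 2: 150 = 3 * 14 + 3 * 36
assert sum(o(8, k) for k in range(1, 9)) == 545835   # last column, row n = 8
```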
7. Generalized Power Indices

Using the definition of decisiveness from section 5, it makes sense to measure the a priori voting power of a committee member by the number of her decisive situations. Assuming that in a sufficiently large number of voting acts all orderings (expressing the preferences of members' groups) are equi-probable, the measure of so-called absolute power of member $i$ is given by the ratio of the number of her decisive situations to the number of orderings, and relative power is given by the ratio of the number of decisive situations of member $i$ to the total number of decisive situations.

In a committee $[\gamma, \omega]$ with set of members $N$, let $O(N) = \bigcup_{k=1}^{n} O(N, k)$ be the set of all orderings, including the weak ones. By $S(N) = O(N, n)$ we denote the set of all strict orderings, and by $O(N, 2)$ the set of all binary orderings. To follow the standard logic of voting, we extend the set of binary orderings by two orderings, $(\emptyset, N)$ and $(N, \emptyset)$, reflecting the two bipartitions into 'no' and 'yes' voters (everybody prefers to vote 'no'; everybody prefers to vote 'yes'). Let us denote the extended set of binary orderings by
$$B(N) = O(N, 2) \cup \{(\emptyset, N), (N, \emptyset)\}.$$
Then, for the evaluation of a priori voting power, we consider the generic set of orderings
$$W(N) = \bigcup_{k=2}^{n} O(N, k) \cup \{(\emptyset, N), (N, \emptyset)\}.$$
Clearly $\#(S(N)) = n!$ and $\#(B(N)) = 2^n$, and
$$\#(W(N)) = \sum_{k=2}^{n} o(n, k) + 2 = \sum_{k=2}^{n} \sum_{j=1}^{k} (-1)^{k-j} \binom{k}{j} j^n + 2.$$
Let us define
$$\delta_i(O) = \begin{cases} 1 & \text{if } i \text{ is decisive in } O \\ 0 & \text{otherwise} \end{cases}$$
where $O \in W(N)$, and $\rho(O)$ such that
$$\rho(O) \geq 0, \qquad \sum_{O \in W(N)} \rho(O) = 1,$$
which is the probability that the ordering $O$ will appear. Then
$$\Pi_{i,\rho}(\gamma, \omega) = \sum_{O \in W(N)} \rho(O)\,\delta_i(O)$$
is the probability that, given a probability distribution $\rho$ over the set $W(N)$, member $i \in N$ will be decisive in the committee $[\gamma, \omega]$: an absolute general form of power index. The general form of a relative power index of member $i$ is
$$\pi_{i,\rho}(\gamma, \omega) = \frac{\sum_{O \in W(N)} \rho(O)\,\delta_i(O)}{\sum_{k \in N} \sum_{O \in W(N)} \rho(O)\,\delta_k(O)},$$
which is the share of the $i$-th member in total power.

Considering strict orderings only and selecting
$$\rho^{SS}(O) = \begin{cases} \dfrac{1}{\#(S(N))} = \dfrac{1}{n!} & \text{if } O \in S(N) \\ 0 & \text{otherwise,} \end{cases}$$
the equi-probability of strict orderings, we obtain the absolute SS-power index. From $\sum_{i \in N} \sum_{O \in S(N)} \delta_i(O) = n!$ it follows that the absolute SS-power index is equal to the relative SS-power index.

Considering the extended set of binary orderings only and selecting
$$\rho^{PB}(O) = \begin{cases} \dfrac{1}{\#(B(N))} = \dfrac{1}{2^n} & \text{if } O \in B(N) \\ 0 & \text{otherwise,} \end{cases}$$
the equi-probability of binary orderings, we obtain the absolute PB-power index: the ratio of the number of swings to the number of binary orderings.

The framework can also be used to define the Holler-Packel index. Let $D(N) \subseteq B(N)$ be the set of those binary orderings in which each member of the pivotal group has a swing (orderings with minimal pivotal groups). Considering only such binary orderings and selecting
$$\rho^{HP}(O) = \begin{cases} \dfrac{1}{\#(D(N))} & \text{if } O \in D(N) \\ 0 & \text{otherwise,} \end{cases}$$
we obtain the absolute and relative versions of this index.

Considering all orderings, including the weak ones, and selecting
$$\rho^{GW}(O) = \begin{cases} \dfrac{1}{\#(W(N))} = \dfrac{1}{\sum_{k=2}^{n} \sum_{j=1}^{k} (-1)^{k-j} \binom{k}{j} j^n + 2} & \text{if } O \in W(N) \\ 0 & \text{otherwise,} \end{cases}$$
the equi-probability of all orderings, we obtain the absolute General Weak ordering index (GW-power index). It is easy to prove that all the power indices defined above (based on the generalized concept of decisiveness and the equi-probability of the relevant orderings) satisfy the anonymity, symmetry, and dummy axioms defined above.
8. Illustrative Example
To illustrate the concepts introduced above, let us use a simple example. Let N = {1, 2, 3} and consider the committee [51; 50, 30, 20]. In this case the set W(N) = S(N) ∪ B(N) consists of 14 orderings: (a) the set of strict orderings S(N), consisting of the six orderings (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), (3,2,1); and (b) the extended set of binary orderings B(N), consisting of eight binary orderings. Table 3 provides the list of orderings and decisive positions. Decisive members in a given ordering are marked by an asterisk. Column σ_i gives the decisiveness of the i-th member in the given ordering (1 for decisive, 0 for not decisive). Assuming equiprobable orderings we obtain:
GW-index
Absolute: Π_1^GW = 10/14, Π_2^GW = 3/14, Π_3^GW = 3/14.
Relative: π_1^GW = 10/16, π_2^GW = 3/16, π_3^GW = 3/16.
Shapley-Shubik index (strict orderings only)
Absolute/relative: Π_1^SS = π_1^SS = 4/6, Π_2^SS = π_2^SS = 1/6, Π_3^SS = π_3^SS = 1/6.
Penrose-Banzhaf index (binary orderings)
Absolute: Π_1^PB = 6/8, Π_2^PB = 2/8, Π_3^PB = 2/8.
Relative: π_1^PB = 6/10, π_2^PB = 2/10, π_3^PB = 2/10.
Table 3. List of Orderings and Decisive Situations for the Committee [51; 50, 30, 20]

Ordering        σ1   σ2   σ3   Sum   ρ^SS   ρ^PB   ρ^HP   ρ^GW
(1,2*,3)         0    1    0    1    1/6     0      0     1/14
(1,3*,2)         0    0    1    1    1/6     0      0     1/14
(2,1*,3)         1    0    0    1    1/6     0      0     1/14
(2,3,1*)         1    0    0    1    1/6     0      0     1/14
(3,1*,2)         1    0    0    1    1/6     0      0     1/14
(3,2,1*)         1    0    0    1    1/6     0      0     1/14
(1,(2*,3*))      0    1    1    2     0     1/8    1/4    1/14
(2,(1*,3))       1    0    0    1     0     1/8     0     1/14
(3,(1*,2))       1    0    0    1     0     1/8     0     1/14
((1*,2*),3)      1    1    0    2     0     1/8    1/4    1/14
((2,3),1*)       1    0    0    1     0     1/8    1/4    1/14
((1*,3*),2)      1    0    1    2     0     1/8    1/4    1/14
((1*,2,3),∅)     1    0    0    1     0     1/8     0     1/14
(∅,(1,2,3))      0    0    0    0     0     1/8     0     1/14

Totals of decisive situations: for GW: 10, 3, 3 (16 in total); for SS: 4, 1, 1 (6 in total); for PB: 6, 2, 2 (10 in total); for HP: 3, 2, 2 (7 in total).
Holler-Packel power index (binary orderings with minimal pivotal groups)
Absolute: Π_1^HP = 3/4, Π_2^HP = 2/4, Π_3^HP = 2/4.
Relative: π_1^HP = 3/7, π_2^HP = 2/7, π_3^HP = 2/7.
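These values can be reproduced mechanically. The sketch below is ours, not the authors'; since for n = 3 the generic set W(N) contains only strict and binary orderings, the pivot test and the swing test suffice to compute all four indices:

```python
from fractions import Fraction
from itertools import combinations, permutations

quota, weights = 51, {1: 50, 2: 30, 3: 20}        # the committee [51; 50, 30, 20]
N = sorted(weights)

def winning(coalition):
    return sum(weights[i] for i in coalition) >= quota

def pivot_decisive(i, order):
    """Strict ordering: i is decisive iff the members up to and including i
    form a winning group while without i they do not (i is the pivot)."""
    t = order.index(i)
    return winning(order[:t + 1]) and not winning(order[:t])

def swing_decisive(i, split):
    """Binary ordering (yes-camp, no-camp): i is decisive iff switching
    i's vote reverses the outcome (i has a swing)."""
    yes = set(split[0])
    flipped = yes - {i} if i in yes else yes | {i}
    return winning(yes) != winning(flipped)

strict = list(permutations(N))                                  # S(N): 6 orderings
camps = [frozenset(c) for r in range(len(N) + 1) for c in combinations(N, r)]
binary = [(y, frozenset(N) - y) for y in camps]                 # B(N): 8 orderings

def pivotal_group(split):                   # the camp able to force or block
    return split[0] if winning(split[0]) else split[1]

D = [b for b in binary if all(swing_decisive(i, b) for i in pivotal_group(b))]

def counts(order_set, decisive):
    return {i: sum(decisive(i, o) for o in order_set) for i in N}

ss, pb = counts(strict, pivot_decisive), counts(binary, swing_decisive)
hp = counts(D, swing_decisive)
gw = {i: ss[i] + pb[i] for i in N}          # for n = 3, W(N) = S(N) plus B(N)

for name, c, size in [("SS", ss, 6), ("PB", pb, 8), ("HP", hp, len(D)), ("GW", gw, 14)]:
    total = sum(c.values())
    print(name, [Fraction(c[i], size) for i in N],     # absolute index
                [Fraction(c[i], total) for i in N])    # relative index
# fractions print in lowest terms: the GW absolute values 10/14, 3/14, 3/14
# appear as 5/7, 3/14, 3/14
```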
Note that while the SS-power index and the PB-power index based on the generalized concept of decisiveness provide the same values as the original definitions, this is not the case for the HP-index. For instance, the original definition of the HP-power index (relative form) gives in our case (2/4, 1/4, 1/4), but we obtained (3/7, 2/7, 2/7). This follows from the difference between the concepts of minimal winning configurations and minimal pivotal groups: the set of binary orderings with minimal winning configurations is a subset of the set of binary orderings with minimal pivotal groups, and permutations of the same binary partition do not necessarily give the same decisiveness to the same committee members.
9. Concluding Remarks
The primary purpose of this paper was not to introduce a new power index, but rather to demonstrate that there is no contradiction between
pivot-based and swing-based measures of voting power: there is no fundamental distinction between pivots and swings. Both concepts turn out to be special cases of a more general concept of decisiveness based on the full range of possible preferences of individual committee members and their groups; both follow from the same logic and can be formulated in the same framework. Moreover, we do not need cooperative game theory to define and analyze a priori voting power; in a sense, a game-theoretical setting of the problem restricts the set of analytical tools. Nevertheless, it might be of some interest to investigate more deeply the properties of the generalized concept of power, namely its monotonicity properties: local monotonicity (a member with greater weight cannot have less power than a member with smaller weight) and global monotonicity (if the weight of one member increases and the weights of all other members decrease or stay the same, then the power of the 'growing weight' member will at least not decrease) of the absolute and relative forms of the power measures. Reconsidering other power indices (Johnston, Deegan-Packel, spatial modifications of power indices) in the framework of the generalized concept of decisiveness might contribute to a deeper understanding of the quantification of voting power. An application of this approach to transferable utility cooperative games and their values could also bring new results.
Acknowledgements This research was supported by the Czech Government Research Target Programme, project No. MSM0021620841, and by the joint program of the Ministry of Education of the Czech Republic and the Ministry of Education of the Republic of Poland, project No. 2006/12. We thank anonymous referees for useful comments and suggestions.
References
Banzhaf, J.F. (1965) Weighted Voting Doesn't Work: A Mathematical Analysis, Rutgers Law Review 19: 317–343.
Deegan, J. and Packel, E.W. (1979) A New Index of Power for Simple n-Person Games, International Journal of Game Theory 7: 113–123.
Felsenthal, D.S. and Machover, M. (1998) The Measurement of Voting Power, Edward Elgar.
Felsenthal, D.S. and Machover, M. (2004) A Priori Voting Power: What is it About? Political Studies Review 2: 1–23.
Felsenthal, D.S., Machover, M., and Zwicker, W. (1998) The Bicameral Postulates and Indices of A Priori Voting Power, Theory and Decision 44: 83–116.
Hall, M. (1967) Combinatorial Theory, Blaisdell Publishing Company.
Holler, M.J. and Packel, E.W. (1983) Power, Luck and the Right Index, Zeitschrift für Nationalökonomie (Journal of Economics) 43: 21–29.
Holler, M.J. and Li, X. (1995) From Public Good Index to Public Value: An Axiomatic Approach and Generalization, Control and Cybernetics 24: 257–270.
Johnston, R.J. (1978) On the Measurement of Power: Some Reactions to Laver, Environment and Planning 10: 907–914.
Nurmi, H. (1997) On Power Indices and Minimal Winning Coalitions, Control and Cybernetics 26: 609–611.
Owen, G. (1982) Game Theory (2nd ed.), Academic Press.
Penrose, L.S. (1946) The Elementary Statistics of Majority Voting, Journal of the Royal Statistical Society 109: 53–57.
Shapley, L.S. (1953) A Value for n-Person Games, in H.W. Kuhn and A.W. Tucker (eds) Contributions to the Theory of Games II, Princeton University Press, 307–317.
Shapley, L.S. and Shubik, M. (1954) A Method for Evaluating the Distribution of Power in a Committee System, American Political Science Review 48: 787–792.
Turnovec, F. (2004) Power Indices: Swings or Pivots? in M. Wiberg (ed.) Reasoned Choice, Finnish Political Science Association.
Turnovec, F., Mercik, J.W., and Mazurkiewicz, M. (2004) Power Indices: Shapley-Shubik or Penrose-Banzhaf? in R. Kulikowski, J. Kacprzyk, and R. Slowinski (eds) Badania Operacyjne i Systemowe 2004: Podejmowanie Decyzji, Podstawy Metodyczne i Zastosowania, EXIT, Warszawa.
Turnovec, F. (2007) New Measure of Voting Power, Czech Economic Review (Acta Universitatis Carolinae Oeconomica) 1: 1–11.
3. Further Reflections on the Expediency and Stability of Alliances
Dan S. Felsenthal
Department of Political Science, University of Haifa, Israel
Moshé Machover
Department of Philosophy, King's College, University of London, UK
1. Introduction
The study of the formation and dissolution of alliances of voters aiming to increase their voting power is relatively new. The present note is a sequel to our earlier paper on this subject (see Felsenthal and Machover 2002). Since the latter's publication, we have obtained some new results, which can also be viewed as a complement to some of the results obtained by Gelman (2003). We report here our findings regarding the possibility of forming expedient and stable alliances within an assembly of voters under various decision rules, depending on the size of the assembly. We shall also show that sometimes an alliance containing a dummy can be expedient – a fact which may explain a real-life historical puzzle. For the reader's convenience, we repeat in the next section the relevant definitions used in our earlier paper, which we shall be using in this paper as well. In sections 3, 4 and 5 we report our new results. We conclude in section 6.
2. Preliminaries
We assume the reader is familiar with the definitions of simple voting game (SVG) and weighted voting game (WVG) and with the basic definitions and notation pertaining to them, as laid down by Felsenthal and Machover (1998).¹ In particular, we shall use the square-bracket notation for WVGs.

¹ In particular we refer the reader to Definitions 2.3.4, 2.3.10, 2.3.23, 3.2.2 and Theorems 3.3.11 and 3.3.14. We use the term 'game' in this connection out of deference to common usage. But although an SVG does have the formal structure of a simple co-operative game with transferable utility, we do not treat it here as such but as a plain decision rule. For a fuller discussion of this point, see Comment 2.2.2 in our book.
We shall denote by M_n the canonical simple majority WVG with n voters; that is,

$$M_n = [\lfloor \tfrac{n}{2}\rfloor + 1;\ \underbrace{1,1,\dots,1}_{n \text{ times}}]. \qquad (1)$$

By M̄_n we shall denote the dual of M_n; that is,

$$\bar M_n = [\lceil \tfrac{n}{2}\rceil;\ \underbrace{1,1,\dots,1}_{n \text{ times}}]. \qquad (2)$$

Note that if n is odd, M̄_n = M_n (M_n is self-dual). But if n is even, M̄_n ≠ M_n, and the dual M̄_n is improper. Further, we shall denote by B_n the canonical unanimity WVG with n voters; that is,

$$B_n = [n;\ \underbrace{1,1,\dots,1}_{n \text{ times}}]. \qquad (3)$$

The dual of B_n is

$$\bar B_n = [1;\ \underbrace{1,1,\dots,1}_{n \text{ times}}]. \qquad (4)$$
In what follows, the starting point is some given SVG W whose assembly (i.e. set of voters) is N. We shall denote by ψ the Penrose measure of voting power and refer to it simply as 'power', without further qualification.² We denote by ψ_a[W] the power of voter a in W. By a well-known theorem, among all SVGs W with n voters, the sum of the voters' powers attains its maximum when W ≅ M_n. If n is odd, this condition is also necessary for maximizing the sum of the powers. If n is even, there are several isomorphism types of SVG that maximize the sum; among them, those isomorphic to M_n have the lowest number of winning coalitions, whereas those isomorphic to its dual, M̄_n, have the highest. Also, among all SVGs W with n voters, the sum of the voters' powers attains its minimum if W ≅ B_n or W ≅ B̄_n. For n > 2, this condition is also necessary.

We recall that W]&_S denotes the SVG that results from W when a coalition S ⊆ N fuses and forms a bloc &_S. The assembly of W]&_S is (N − S) ∪ {&_S}. If W is a weighted voting game (WVG), then so is W]&_S: take the weight of &_S to be the sum of the weights that the members of S had in W, while the weights of all other voters as well as the quota are kept the same as in W.

² For its definition see Felsenthal and Machover (1998), Definition 3.2.2, where it is denoted by β′_a and referred to as the 'Bz [Banzhaf] measure of voting power'.
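Many of the computations below reduce to evaluating ψ for small WVGs. A brute-force sketch of ours (not from the book; exponential in n, which is fine at this scale):

```python
from fractions import Fraction
from itertools import combinations

def penrose(quota, weights):
    """Penrose measure of each voter in the WVG [quota; weights],
    counting the winning coalitions in which the voter is critical."""
    n, crit = len(weights), [0] * len(weights)
    for r in range(n + 1):
        for c in combinations(range(n), r):
            w = sum(weights[i] for i in c)
            if w >= quota:
                for i in c:
                    if w - weights[i] < quota:   # removing i turns the coalition losing
                        crit[i] += 1
    return [Fraction(k, 2 ** (n - 1)) for k in crit]

print(penrose(3, [1] * 5))   # M_5: every voter has power 3/8 (cf. Example 1 below)
print(penrose(5, [1] * 5))   # B_5: every voter has power 1/16, as in (11) below
```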
In our paper (Felsenthal and Machover, 2002: 303), an alliance is defined as a bloc &_S together with an SVG W_S whose assembly is S, and which is referred to as the internal SVG, or decision rule, of the alliance. We shall denote this alliance by (S; W_S). Informally, we think of the alliance as formed voluntarily by the members of S. When the members of S ⊆ N form an alliance (S; W_S), this gives rise to a new composite SVG, which we denote by W & W_S. The assembly of W & W_S is N, the same as that of W. The winning coalitions of W & W_S are all sets of the form X ∪ Y, with X ⊆ S and Y ⊆ N − S, satisfying at least one of the following two conditions:
• Y is a winning coalition of W;
• X is a winning coalition of W_S and S ∪ Y is a winning coalition of W.³

Informally speaking, W & W_S works as follows. When a bill is proposed, the members of S decide about it using W_S, the agreed internal SVG of their alliance. Then, when the bill is brought before the plenary, the assembly of W, all the members of S vote as a bloc, in accordance with their internal decision; so that now the final outcome is the same as it would have been in W]&_S with the bloc voter &_S voting according to the internal decision. From now on, we put

$$s = |S|. \qquad (5)$$

³ For an equivalent definition of W & W_S, which shows it to be a special case of the general operation of composition of SVGs, see Felsenthal and Machover (2002: 310).
Note that each member of S now has direct voting power in the SVG W_S, as well as indirect voting power in W & W_S, which s/he exercises via the bloc &_S. It is easy to prove (Felsenthal and Machover 2002: Theorem 4.1) that for every a ∈ S

$$\psi_a[W \& W_S] = \psi_a[W_S] \cdot \psi_{\&_S}[W]\&_S]. \qquad (6)$$
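For a concrete check of (6) – a sketch of ours, enumerating the composite SVG defined in the bullets above – take W ≅ M_5 and a three-member alliance with internal rule M_3 (the situation of Example 1 below):

```python
from fractions import Fraction
from itertools import combinations

N, S = set(range(5)), {0, 1, 2}              # assembly of M_5; alliance members

def win_W(c):  return len(c) >= 3            # W   = M_5 (majority of five)
def win_WS(c): return len(c & S) >= 2        # W_S = M_3 (majority inside S)

def win_composite(c):
    """Winning rule of W & W_S: the outside part Y wins alone in W,
    or the inside part X wins in W_S and S together with Y wins in W."""
    X, Y = c & S, c - S
    return win_W(Y) or (win_WS(X) and win_W(S | Y))

def penrose_of(win, voters):
    crit = {i: 0 for i in voters}
    for r in range(len(voters) + 1):
        for c in combinations(voters, r):
            c = set(c)
            if win(c):
                for i in c:
                    if not win(c - {i}):
                        crit[i] += 1
    return {i: Fraction(k, 2 ** (len(voters) - 1)) for i, k in crit.items()}

print(penrose_of(win_composite, N))
# members of S get 1/2 each = psi_a[W_S] * psi_bloc = (1/2) * 1, as (6) predicts;
# the two outsiders get 0, since the bloc of three is a dictator in M_5
```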
However, for voters b ∈ N − S, the equality ψ_b[W & W_S] = ψ_b[W]&_S] does not always hold; it does hold provided the number of winning coalitions of the internal SVG W_S is 2^{s−1}, exactly half of the number of all coalitions (Felsenthal and Machover 2002: 304).

In what follows we take W to be exogenously imposed. Its voters are allowed to form alliances but not to change W itself. We shall therefore rule out alliances with S = N, because that would amount simply to all members of the assembly agreeing to adopt a new decision rule instead of W. Using the terminology introduced in Felsenthal and Machover (2002: 303), we say that the alliance (S; W_S) is feasible if

$$\psi_a[W \& W_S] \ge \psi_a[W] \text{ for all } a \in S; \qquad (7)$$

and expedient if

$$\psi_a[W \& W_S] > \psi_a[W] \text{ for all } a \in S. \qquad (8)$$
Moreover, we say that a bloc &_S is feasible or expedient if there exists some internal SVG W_S such that the resulting alliance (S; W_S) is feasible or expedient, respectively. The idea behind these definitions is the following. We may regard a voter's power as a payoff – not in the original W, or indeed in any 'voting game'⁴ – but in a new power game P_W induced by W. This P_W is a genuine co-operative game with non-transferable utility, in which the players are the voters of W, and forming an alliance (in W) is a strategy that affects the payoffs of all players (not only of members of the alliance). Games of the form P_W, induced in this way by some SVG, have a rather intricate structure. As far as we know, they have not been investigated in depth. In Felsenthal and Machover (2002) we merely scratched the surface. Gelman (2003) presents a few additional results. In the present paper we add a few observations concerning some games of this form. In the sequel, we shall make use of the following simple result:

Proposition 1 (Felsenthal and Machover 2002: Theorem 4.2) A bloc made up of two voters is never expedient. It is feasible iff originally the two voters have equal powers, or at least one of them is a dummy.
3. Simple and Super Majority
In this and the following section we shall consider cases where W is symmetric. In this context, we confine our attention to alliances (S; W_S) in which all voters are equipotent, that is, have equal power. We shall say that the alliance is optimal if its voters are equipotent and their power attains its maximal value – which equals the power of a voter in M_s. Of course, if s is odd then the alliance is optimal just in case W_S ≅ M_s.

Gelman (2003) considers a simple majority WVG, W ≅ M_n, and within it alliances of m voters with internal decision rule of the form W_S ≅ M_m. He is particularly interested in the case where n as well as m and n − m are large. He concludes that when these numbers are sufficiently large, then although such an alliance will be expedient, it will nevertheless be unstable, because the power of the voters left outside the alliance will be smaller than the voting power of those inside it, so the former will try to form another alliance, possibly with some of the members of the first alliance. But 'if all voters form coalitions,⁵ they become worse off than if they had stayed apart' (Gelman, 2003: 9). This may imply, in turn, an incessant process of forming and disbanding alliances. In fact, Gelman (2003: 11) conjectures quite
⁴ See footnote 1 above.
⁵ By 'coalition' Gelman means here what we call an 'alliance'.
explicitly that 'the coalition-formation process is inherently unstable … By this we mean that if voters are in an ongoing process of joining and leaving coalitions, with each decision made myopically to immediately increase voting power, then there is no stable equilibrium coalition structure.'

Although it is true that the formation of some alliances may result in instability, stable alliances do exist even where W ≅ M_n and W_S ≅ M_m. As we shall see, this happens when m is just larger than n/2 – a case not investigated by Gelman (2003). Consider first the following three instructive examples.

Example 1 Let W ≅ M_5. Here the power of each voter is 3/8. In view of Proposition 1, we need only consider alliances (S; W_S) of three or four voters. Any such alliance will make the bloc &_S a dictator in W]&_S. Hence by (6), for every a ∈ S,

$$\psi_a[W \& W_S] = \psi_a[W_S]. \qquad (9)$$

For reasons of symmetry, we need consider only alliances whose voters are equipotent. In fact, it is sufficient to consider optimal alliances, because in this way the members of S maximize their (direct and indirect) power. For s = 4, an optimal (S; W_S) gives each a ∈ S ψ_a[W & W_S] = ψ_a[W_S] = 3/8. So no alliance of four voters is expedient.⁶ But for s = 3, we have W_S ≅ M_3, which gives each a ∈ S ψ_a[W & W_S] = ψ_a[W_S] = 1/2. Such an alliance is expedient, and, once formed, it is stable: no member wishes to defect, and the two voters left out can do nothing to improve their position.

Example 2 Let W ≅ M_9. Here the power of each voter is 35/128. In view of Proposition 1 we need only consider alliances (S; W_S) with 3 ≤ s ≤ 8; and as in the preceding example we may assume that W_S is optimal. An alliance with s = 8 is not expedient, as in this case ψ_a[W & W_S] = ψ_a[W_S] = 35/128. On the other hand, simple calculations show that alliances with s = 3, 4, 5, 6, 7 are expedient, giving each member of S indirect power 25/64, 45/128, 3/8, 5/16 and 5/16, respectively. However, alliances with s = 3, 4, 6 and 7 are unstable. In the case of s = 3 or 4, the voters excluded from the alliance, whose powers decrease as a result of its formation,⁷ can retaliate by forming a counter-alliance, reducing the powers of the members of S and even turning them into dummies. In the case of s = 6 or 7, five of the six or seven members will be tempted to eject the remaining one or two, thereby increasing their (direct and indirect) power from 5/16 to 3/8.
⁶ Note that for all n the power of a voter in M_{2n} is the same as in M_{2n+1}. This can be seen without any detailed calculation: it follows from the power-maximizing property of M_n.
⁷ If W ≅ M_n and an expedient alliance is formed, then by definition the powers of its members increase. Hence the total power of all voters excluded from the alliance must decrease, and by symmetry the power of each of them also decreases.
If voters behave rationally, they will anticipate the adverse consequences of forming alliances with s = 3, 4, 6 and 7, and hence such alliances will not be formed, although they are expedient according to our definition (see Section 2).⁸ But an alliance with s = 5 is stable. The four members left out of it become dummies and can do nothing about it. Note that although three of the five alliance members may be tempted to form an internal sub-alliance (in order to raise their indirect power from 3/8 to 1/2), they are in fact deterred from doing so, because they know that, in reaction, the two members of S left out of the three-member cabal will surely dissolve the original five-member alliance, and may even form a new (dictatorial) alliance with three of the four members that were left out of S.

An interesting situation arises when W ≅ M_n with even n, and an alliance contains exactly half of the members.

Example 3 Let W ≅ M_8. Here the power of each voter is 35/128. Now consider an alliance (S; W_S) with s = 4. As before, we may assume that the alliance is optimal. Then its members' direct power is 3/8 and their indirect power is 45/128. So this alliance is expedient. However, the remaining four voters, whose power is much reduced, can retaliate by forming their counter-alliance (T; W_T), where T = N − S. Now the indirect power of members of S will depend on the choice of W_T. This indirect power will be greatest if the probability of the bloc &_T voting 'yes' is greatest, because only if &_T votes 'yes' will the vote of members of S make any difference. But the probability of &_T voting 'yes' will be greatest iff the number of winning coalitions of W_T is as high as possible – which is the case precisely if W_T ≅ M̄_4 (that is, two votes sufficient to approve a bill). In this case, which is the best that members of S can hope for, their indirect power will still be only 33/128 – less than their original power in W. As for members of T: they too will have power 33/128 at most, depending on the rule W_S chosen by the first alliance. In any case, all voters are now worse off than originally. This seems to give rise to a kind of prisoner's dilemma:⁹ if one of the two alliances dissolves, its members will be in the worst possible position; but if neither of them dissolves, all voters will still be worse off than in the original W. So the alliances will not dissolve unless they can agree to do so simultaneously. However, this apparent dilemma can be resolved in a more radical way. Every member of each alliance will be tempted to defect and join the opposite camp – thereby making the latter dictatorial, and reducing the remaining members of the former to dummies. Thus alliances of size 4 are unstable, and if voters behave rationally they will anticipate this and will not form such alliances.
⁸ The definition assumes that formation of the alliance is the only change that takes place.
⁹ Cf. Gelman's (2003) discussion of situations of this kind. As our analysis shows, the analogy with a true prisoner's dilemma is incomplete.
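Before turning to the general statement, the indirect powers quoted in Example 2 can be verified mechanically. This sketch is ours (the function name is an invention); it evaluates (6) by simple binomial counting:

```python
from fractions import Fraction
from math import comb

def member_indirect_power(n, s):
    """Indirect Penrose power of a member of an optimal alliance of size s
    inside M_n, via (6): direct power in the internal SVG times the bloc's
    power in the quotient WVG [floor(n/2) + 1; s, 1, ..., 1]."""
    # direct power of a voter in an optimal s-voter SVG = power of a voter in M_s
    direct = Fraction(comb(s - 1, (s - 1) // 2), 2 ** (s - 1))
    # power of the bloc (weight s) among n - s singletons, quota floor(n/2) + 1
    quota, outsiders = n // 2 + 1, n - s
    swings = sum(comb(outsiders, y) for y in range(outsiders + 1)
                 if s + y >= quota and y < quota)   # bloc critical given y outside 'yes' votes
    return direct * Fraction(swings, 2 ** outsiders)

for s in range(3, 9):
    print(s, member_indirect_power(9, s))
# -> 25/64, 45/128, 3/8, 5/16, 5/16, 35/128 for s = 3..8 (cf. Example 2)
```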
The following theorem surveys in general the expediency and stability of optimal alliances when W ≅ M_n.

Theorem 1 Let W ≅ M_n, with n > 3. Consider optimal alliances (S; W_S) where S ⊂ N. We distinguish four cases, according to the residue of n modulo 4.
Case 0: n = 4m. Then the alliance is expedient iff 3 ≤ s ≤ n − 1, and expedient as well as stable iff s = 2m + 1.
Case 1: n = 4m + 1. Then the alliance is expedient iff 3 ≤ s ≤ n − 2, and expedient as well as stable iff s = 2m + 1.
Case 2: n = 4m + 2. Then the alliance is expedient iff 3 ≤ s ≤ n − 1, and expedient as well as stable iff s = 2m + 2 or s = 2m + 3.
Case 3: n = 4m + 3. Then the alliance is expedient iff 3 ≤ s ≤ n − 2, and expedient as well as stable iff s = 2m + 2 or s = 2m + 3.
Proof The arguments used in the preceding examples can easily be extended and seen to apply here. As far as the expediency claims are concerned, for small values of n one can perform manually the simple, albeit somewhat tedious, computation for each alliance of size s. For larger values of n, say n ≥ 13, the calculation is still quite simple for extreme values of s (close to 3 or to n). For the general case, see Gelman (2003, Subsection 3.3).¹⁰ An alternative way to prove the expediency claims is to use the so-called Rae index σ (where, for any SVG W, σ_a[W] is the probability that voter a of W is successful, i.e. that the outcome of a division accords with a's vote). Since by Penrose's identity ψ_a = 2σ_a − 1 (Felsenthal and Machover 1998: Theorem 3.2.16), it is enough to show that, for the s and n specified by our theorem,

$$\sigma_a[W \& W_S] > \sigma_a[W] \text{ for all } a \in S. \qquad (10)$$
This inequality can be proved using the fact that σ_a[W_S] is sufficiently greater than 1/2.¹¹

As for the stability claims, the following observations are in order. In Case 0, if m > 1 an alliance of size 2m is unstable for the reason illustrated for m = 2 by Example 3. The same kind of thing happens for all m > 1.

¹⁰ By the way, Gelman shows that as n increases, the alliance size that maximizes its members' indirect power asymptotically approaches a value of approximately 1.4√n.
¹¹ Intuitively speaking, this means that a's vote has a sufficiently good chance of 'carrying along with it' all the votes of the other s − 1 members of S. This boosts a's probability of belonging to the majority camp in W & W_S compared to that in W.
Case 1 is unproblematic: here Examples 1 and 2 are entirely typical.

In Case 2, an alliance of size 2m + 1 is expedient, but the internal power of its members is less than twice their original power. The 2m + 1 remaining voters – whose power is reduced – can retaliate by forming a counter-alliance. Now all voters' indirect power is half of their internal power – which is less than their original power – so the situation is unstable. This is similar to the situation in Case 0 with alliances of size 2m, except that now, since 2m + 1 is odd, each alliance of this size can only choose the (self-dual) majority rule as its internal SVG.

In Case 2 and Case 3, an alliance of size 2m + 3 contains a 'redundant' member, because even an alliance of size 2m + 2 is dictatorial. However, ejecting one of the 2m + 3 members will not change the powers of the remaining 2m + 2,¹² so they have nothing to gain by this.

We know that if W ≅ M_n and an expedient alliance is formed, then all the voters excluded from the alliance lose power.¹³ The following example shows that this need not happen if W is a super-majority WVG: somewhat surprisingly, the formation of an alliance can be Pareto optimal!

Example 4 Let W ≅ [4; 1, 1, 1, 1, 1]. Here the power of each voter is 1/4, so the total power is 5/4. Now suppose an alliance (S; W_S) is formed, where s = 3 and W_S ≅ M_3. This alliance is expedient, because the indirect power of each of its members is 3/8. However, the power of each of the two excluded voters is still 1/4, so they are not worse off than before. The total power is now 13/8. Moreover, this alliance is stable. On the one hand, no member of the alliance would wish to defect, because a defector would lose power (even if the two members left behind were to maintain a – necessarily inexpedient – two-member alliance!). On the other hand, members of the alliance have no incentive to admit one of the two excluded voters, because this cannot increase the powers of the three old members. Nevertheless, it is worth noting that an optimal four-member alliance is also expedient, as the power of each of its members is 3/8 – the same as in the preceding case. Such an alliance is also clearly stable. Since the single excluded voter is a dummy, the total power is 3/2. So if four of the assembly members are vindictive towards the fifth, they will be inclined to form a four-member alliance; otherwise a three-member alliance seems more likely to form. As we have seen, this will maximize the total power, and is in fact Pareto optimal.
This somewhat counter-intuitive example shows that Gelman's (2003: 5) assertion (clearly made with W ≅ M_n in mind) that '[f]orming coalitions¹⁴ is beneficial to those who do it but is negative to "society" as a whole, at least in terms of average voting power' does not hold in general.

¹² See footnote 6.
¹³ See footnote 7.
¹⁴ See footnote 5.
4. Unanimity
In this section we shall assume W is an n-voter unanimity WVG: W ≅ B_n. The power of each a ∈ N in W is

$$\psi_a[W] = \frac{1}{2^{n-1}}. \qquad (11)$$
We consider alliances (S; W_S) where 2 < s < n; and, as before, for reasons of symmetry we can confine our attention to cases where the members of S are equipotent. We rule out the cases W_S ≅ B_s and W_S ≅ B̄_s, in which W_S itself is a unanimity WVG or its dual, as the alliance would then be inexpedient.¹⁵ On the other hand, any other alliance in which the voters are equipotent is expedient. To see this, let I[W_S] be the Banzhaf score in W_S of any a ∈ S (that is, the number of winning coalitions in which a is critical). Then the direct power of a is I[W_S]/2^{s−1}, and a's indirect power is

$$\psi_a[W \& W_S] = \frac{I[W_S]}{2^{s-1}} \cdot \frac{1}{2^{n-s}} = \frac{I[W_S]}{2^{n-1}}. \qquad (12)$$
As we have ruled out W_S being a unanimity WVG or its dual, it is clear that I[W_S] > 1, so by (11) the alliance is expedient, as claimed. However, voters excluded from the alliance gain even more power than those inside it. To show this, let us denote by X[W_S] the number of winning coalitions in W_S. Then it is easy to see that the power of each b ∈ N − S in W & W_S is

$$\psi_b[W \& W_S] = \frac{X[W_S]}{2^{s}} \cdot \frac{1}{2^{n-s-1}} = \frac{X[W_S]}{2^{n-1}}. \qquad (13)$$
Since W_S is not a unanimity WVG, it is impossible for every a ∈ S to belong to, let alone be critical in, every winning coalition. So clearly X[W_S] > I[W_S]; hence ψ_b[W & W_S] > ψ_a[W & W_S] for every b ∈ N − S and a ∈ S, as claimed.¹⁶

¹⁵ In both cases the indirect power of a member of the alliance is the same as in W. Moreover, forming an alliance with an internal unanimity rule is vacuous, as the composite SVG W & W_S would be identical to the original W.
¹⁶ We have ruled out W_S ≅ B̄_s, as in this case the alliance, albeit feasible, is not expedient. If such an alliance were formed, then I[W_S] = 1 and X[W_S] = 2^s − 1. So by (12) the alliance members gain no power, while by (13) those excluded from it do gain power. As power is the (only) payoff in the power-game P_W we are considering in this paper, there is no reason – other than pure altruism – for forming such an alliance. This is why we assume it will not form.
Thus, if considerations of envy were admitted, they would imply that no single expedient alliance in W can be stable. In what follows, we ignore envy.

From now on let us make the reasonable assumption that only optimal alliances are formed. We can specify I[W_S] precisely for optimal alliances, distinguishing two cases, according as s is even or odd. As is well known (Felsenthal and Machover 1998: Theorem 3.3.8),

$$I[W_S] = \begin{cases} \frac{1}{2}\binom{2m}{m} & \text{if } s = 2m, \\[4pt] \binom{2m}{m} & \text{if } s = 2m + 1. \end{cases} \qquad (14)$$

By the way, from (12) and (14) it is easy to see that, whether s is even or odd, the power of a member of an optimal alliance of size s is greater than that of a member of any expedient alliance of size s − 1. Thus Gelman's (2003: 7) assertion (clearly made with W ≅ M_n in mind) that 'it is never a good idea to have a coalition¹⁷ with an even number of members: if m is even, it is always as good or better to be in a coalition of size m − 1' does not hold in general.

As for X[W_S], we can specify it precisely for odd s. For even s we can specify the two extreme values: for W_S ≅ M_s and W_S ≅ M̄_s. It is easy to see that

$$X[W_S] = \begin{cases} 2^{2m-1} - \frac{1}{2}\binom{2m}{m} & \text{if } s = 2m \text{ and } W_S \cong M_s, \\[4pt] 2^{2m-1} + \frac{1}{2}\binom{2m}{m} & \text{if } s = 2m \text{ and } W_S \cong \bar M_s, \\[4pt] 2^{2m} & \text{if } s = 2m + 1. \end{cases} \qquad (15)$$

¹⁷ See footnote 5.
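The defection comparisons carried out below can be tabulated mechanically from (12)–(15). A small numeric sketch of ours (it anticipates inequality (16) and its threshold):

```python
from fractions import Fraction
from math import comb

def member_power(s, n):
    """Indirect power, via (12) and (14), of a member of an optimal alliance
    of size s inside the unanimity WVG B_n."""
    m = s // 2
    I = comb(2 * m, m) if s % 2 else comb(2 * m, m) // 2
    return Fraction(I, 2 ** (n - 1))

def outsider_power(s, n, dual=False):
    """Power, via (13) and (15), of a voter outside an optimal alliance of size s;
    for even s, `dual` picks the internal rule with the most winning coalitions."""
    m = s // 2
    if s % 2:
        X = 2 ** (2 * m)
    else:
        X = 2 ** (2 * m - 1) + (comb(2 * m, m) // 2 if dual else -(comb(2 * m, m) // 2))
    return Fraction(X, 2 ** (n - 1))

n = 12   # an arbitrary assembly size; the comparisons do not depend on n
for s in range(3, 10):
    stays = member_power(s, n)
    # a defector becomes an outsider of an optimal alliance of size s - 1;
    # for odd s the worst case is that the deserted 2m adopt M_2m (dual=False)
    leaves = outsider_power(s - 1, n, dual=False)
    print(s, "defection pays" if leaves > stays else "defection does not pay")
# -> defection does not pay only for s = 3 and s = 5 (the trios and quintets below)
```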
What happens if a member of an optimal alliance (S; W_S) leaves it? Of course, if the alliance were then to disband, all voters would lose power. But we now make the reasonable assumption that if s > 3 the s − 1 remaining members will re-form an optimal alliance. The possible exception is s = 3, in which case the remaining two members cannot form an expedient alliance, so they may not form one at all.

It is easy to see that if a member of an optimal alliance is expelled from it, the remaining members will lose power. So we can rule out expulsion as a source of instability. On the other hand, we saw that when an expedient alliance (S; W_S) is formed, the voters excluded from it gain more power than
those included in it. Would this tempt a member of an optimal alliance to defect?

To see whether defection from (S; W_S) is advantageous, let us first consider the case where s is even, s = 2m, where m > 1. From (12) and (14) it follows that before the defection the power of the prospective defector is $\binom{2m}{m} 2^{-n}$. After defecting, the defector will be one of the voters excluded from an optimal alliance of size 2m − 1, so by (13) and (15) her power will be $2^{2m-1}\, 2^{-n}$. It is easy to prove by induction on m that $2^{2m-1} > \binom{2m}{m}$ for all m > 1. So defection is indeed advantageous. It follows that optimal alliances of even size are unstable, being vulnerable to defection.

Now let us turn to the somewhat more tricky case where s is odd, s = 2m + 1, where m ≥ 1. In this case the defector deserts 2m partners, who will form an optimal alliance, except perhaps in the special case m = 1. Let us first assume that these 2m voters form an alliance with internal decision rule isomorphic to M̄_{2m}. By (12) and (14), before defecting the power of the prospective defector is $\binom{2m}{m} 2^{-(n-1)}$. By (13) and (15), his power after defecting will be $\left(2^{2m-1} + \frac{1}{2}\binom{2m}{m}\right) 2^{-(n-1)}$. By induction on m it is easy to see that $2^{2m-1} \ge \binom{2m}{m}$ for all m ≥ 1, so the defector will gain power, and the defection is advantageous. But a prospective defector cannot depend on this calculation, because the 2m deserted partners may not adopt a decision rule isomorphic to M̄_{2m} but one with a smaller number of winning coalitions. The worst-case scenario for defection is that the rule they choose is isomorphic to M_{2m}.¹⁸ From (12), (13), (14) and (15) it can be seen that in case they do so, the defector will gain power if

$$2^{2m-1} - \tfrac{1}{2}\binom{2m}{m} > \binom{2m}{m}, \text{ or, tidying up: } 2^{2m} > 3\binom{2m}{m}; \qquad (16)$$

and the defector will lose power if the opposite inequality holds. It is easy to prove (16) by induction for all m ≥ 3. However, for m = 1 and m = 2 the opposite inequality holds. It follows that all alliances except those of size 3 or 5 are unstable, as a member of such an alliance will be better off defecting. Only alliances of size 3 and 5 can be stable, as defection of a single member from them is ill-advised.

For brevity, we shall call an alliance of size 3 with rule ≅ M_3 a trio, and an alliance of size 5 with rule ≅ M_5 a quintet. We shall call a voter who is not a member of any alliance a singleton. Once a trio or a quintet is formed, then from the viewpoint of the other voters it behaves as a single voter. For example, if n > 3 and one trio is formed, then from the viewpoint of the remaining n − 3 voters it looks as if they are in a unanimity WVG with n − 2 voters.

¹⁸ This may even be most likely because, among all optimal alliances, this is the only one with a proper SVG – which has some advantages. If m = 1, the two deserted members have no reason at all to form an alliance (see Proposition 1); and forming an alliance with decision rule isomorphic to M_2 is vacuous, because M_2 = B_2 (see footnote 15).
If there are three singletons left, they may form a new trio. Similarly, five singletons may form a new quintet. Also, two singletons may join a trio and turn it into a quintet. Ultimately there will emerge a configuration in which the original n voters form quintets, trios and singletons.¹⁹ We shall refer to these as '5–3–1' configurations.

¹⁹ We rule out the formation of 'super-alliances' one or more of whose members are themselves a trio or quintet, because such a super-alliance would not be optimal as an alliance in W.

Which 5–3–1 configurations are stable and therefore likely to form? Of course, if n = 4 or n = 5, there is only one stable option: the formation of one trio. For n > 5 we must distinguish five cases, according to the residue of n modulo 5.

Case 1 If n = 6, there are two Pareto optimal 5–3–1 configurations. First, one quintet and one singleton: each member of the quintet has indirect power 3/16 and the singleton has power 1/2. Second, two trios, each of whose members has indirect power 1/4. Any other 5–3–1 configuration is Pareto inferior to one of these two, but neither of them is Pareto superior to the other. So we cannot say for certain which of the two is more likely to form. Similarly, for all n = 5m + 1, where m ≥ 1: we can have m quintets and a singleton; or m − 1 quintets and two trios.

Case 2 If n = 7, there are two Pareto optimal 5–3–1 configurations. First, one quintet and two singletons: each member of the quintet has indirect power 3/32 and each singleton has power 1/4. Second, two trios and a singleton: each member of a trio has indirect power 1/8 and the singleton has power 1/4. Again, any other 5–3–1 configuration is Pareto inferior to one of these two, but neither of them is Pareto superior to the other. So we cannot say for certain which of the two is more likely to form. Similarly, for all n = 5m + 2, where m ≥ 1: we can have m quintets and two singletons; or m − 1 quintets, two trios and a singleton.

Case 3 If n = 8, there is just one Pareto optimal 5–3–1 configuration: a quintet and a trio. Each member of the former has indirect power 3/16 and each member of the latter has 1/4. This 5–3–1 configuration is Pareto superior to any other, and is therefore the one likely to form. Similarly, for all n = 5m + 3, where m ≥ 1: we have m quintets and one trio.

Case 4 If n = 9, there are two Pareto optimal 5–3–1 configurations. First, one quintet, one trio and a singleton: each member of the quintet has indirect power 3/32, each member of the trio has indirect power 1/8, and the singleton has power 1/4.
Second, three trios, each of whose members has indirect power 1/8. Again, any other 5–3–1 configuration is Pareto inferior to one of these two, but neither of them is Pareto superior to the other. So we cannot say for certain which of the two is more likely to form. Similarly, for all n = 5m + 4, where m ≥ 1: we can have m quintets, one trio and one singleton; or m − 1 quintets and three trios.

Case 5 If n = 10, there is just one Pareto optimal 5–3–1 configuration: two quintets, each of whose members has indirect power 3/16. This 5–3–1 configuration is Pareto superior to any other, and is therefore the one likely to form. Similarly, for all n = 5m, where m > 1: we have m quintets.
5. Alliances with a Dummy
In this section we wish to explore the possible role of a dummy in the formation of an alliance. Somewhat surprisingly, a dummy can become empowered via a feasible alliance, as the following example shows.

Example 5 During the first period of the European Union (1958–73), the so-called 'qualified majority' (QM) decision rule prescribed for its six-member Council of Ministers by the Treaty of Rome (1957) was a WVG that assigned to France, Germany, Italy, Belgium, The Netherlands and Luxembourg weights 4, 4, 4, 2, 2 and 1, respectively; and the quota required for passing resolutions on most issues was 12. In this WVG, which is isomorphic to [12; 4, 4, 4, 2, 2, 1], the powers of the members were 5/16, 5/16, 5/16, 3/16, 3/16 and 0, respectively. Although the Treaty stipulated that this decision rule would only become effective in 1966, and in practice it was rarely invoked from 1966 to 1973, as decisions were normally made by consensus, the rule is quite puzzling. What was the point of giving Luxembourg a useless weight of 1, and why did Luxembourg agree to be a dummy? This puzzle has often been mentioned in the literature on decision making in the EU. Of course, it is possible that the original signatories of the Treaty of Rome, including the government of Luxembourg, were simply unaware of the QM anomaly. But there may be another explanation. As is well known, Belgium, The Netherlands and Luxembourg had operated a Benelux Customs Union since 1947 (replaced in 1966 by the Benelux Economic Union). It is quite possible that the Benelux countries agreed informally to act as an alliance within the EU. A reasonable internal decision rule could be ≅ [3; 2, 2, 1] – which is in fact a simple majority rule. It is easy to verify that under this alliance the indirect power of each of the three members would be 3/16. Thus the alliance, albeit inexpedient, is feasible – and it would
empower Luxembourg!²⁰ Note that, somewhat surprisingly, a Benelux alliance would also benefit the three non-Benelux members: each would now have power 3/8.

Is it also possible to form an expedient alliance with a dummy? Moreover, is it possible to form such an alliance which will also increase the power of all those excluded from the alliance? The answer to both these questions is positive, as shown by the following two examples.

Example 6 Let W = [13; 5, 5, 3, 3, 3, 1]. Here the power of each of the first two voters is 7/16, that of each of the next three voters is 3/16, and the last voter is a dummy. Now suppose the last four voters form an alliance with internal decision rule ≅ M_4. Then the indirect power of each of these four is 9/32. So the alliance is expedient. True, this alliance is not stable, for if the voter with weight 1 is ejected, and the remaining three form an alliance with internal rule ≅ M_3, then the indirect power of each of these three will be 3/8.²¹

Example 7 Let W = [10; 2, 2, 2, 2, 2, 1]. Here each of the first five voters has power 1/16 and the last voter is a dummy. Now suppose the last five voters form an alliance with internal decision rule ≅ M_5. Then each of these five has indirect power 3/16 and the first voter, excluded from the alliance, has power 1/2. So the alliance is not only expedient, but also beneficial to the voter excluded from it. Note that in this case ejecting the voter with weight 1 from the alliance will not benefit the remaining four: the best they can do is form an alliance with internal rule isomorphic to M_4 or M̄_4 – which will still give them indirect power 3/16. However, neither of these two alliances is stable. In the case of the alliance of size 5, which includes the [former] dummy, a voter with weight 2 will be better off defecting. Following such defection, the remaining three voters with weight 2 will be better off if they eject the dummy. In the case of the alliance of four voters with weight 2, it will also be advantageous for any of them to defect.
²⁰ We are indebted to Simon Hix for this explanation. We are also grateful to Frank Steffen and Matthew Braham for independently raising with us the question as to whether a dummy can be empowered via an alliance.
²¹ However, this alliance too is unstable: the two voters with weight 5 excluded from it – whose loss of power would be substantial as a result of its formation – would be able to tempt one of the three alliance members to join them in forming a 3-member dictatorial alliance with an internal decision rule ≅ M_3, thereby increasing their power to 1/2. This alliance would be stable.
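The numbers in Examples 5–7 can be reproduced with a few lines of enumeration. This sketch is ours; the penrose() helper is the same brute-force computation shown in Section 2:

```python
from fractions import Fraction
from itertools import combinations

def penrose(quota, weights):
    """Penrose measure of each voter in [quota; weights] by brute force."""
    n, crit = len(weights), [0] * len(weights)
    for r in range(n + 1):
        for c in combinations(range(n), r):
            w = sum(weights[i] for i in c)
            if w >= quota:
                for i in c:
                    if w - weights[i] < quota:
                        crit[i] += 1
    return [Fraction(k, 2 ** (n - 1)) for k in crit]

# Example 5: the original QM rule -- Luxembourg is a dummy
print(penrose(12, [4, 4, 4, 2, 2, 1]))   # [5/16, 5/16, 5/16, 3/16, 3/16, 0]

# Benelux fused into one bloc of weight 5: the bloc's power is 3/8, and with
# the internal majority rule [3; 2, 2, 1] (power 1/2 each) every Benelux
# member ends up with indirect power (1/2) * (3/8) = 3/16 -- Luxembourg included
print(penrose(12, [4, 4, 4, 5]))         # big states 3/8 each; bloc 3/8
print(penrose(3, [2, 2, 1]))             # internal rule: 1/2 each
```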
6. Discussion
The results obtained in this paper constitute a modest advance on the ground covered by us in Felsenthal and Machover (2002) and by Gelman (2003).

The main result of Section 3, Theorem 1, is not too surprising. It confirms – as well as making more precise – what is intuitively almost obvious: if W ≅ M_n, then only [dictatorial] alliances whose size is just over n/2 can be both expedient and stable. A smaller alliance is vulnerable to annihilating retaliation by those excluded from it, whereas a larger alliance is, in a sense, too large for its own good. However, in the cases where n is even, and particularly where it is divisible by 4, the problem of the stability of an alliance of size n/2 does require some careful scrutiny.

Example 4 illustrates a seemingly paradoxical phenomenon that can occur in a super-majority WVG: a stable expedient alliance may also not hurt those excluded from it. This of course depends on the fact that power games are not constant-sum games. Section 4, dealing with the case W ≅ B_n, provides a more extreme illustration of this phenomenon. But the main surprise in this section is the exceptional behaviour of quintets and trios, and the unique stability of some 5–3–1 configurations. We think that these results could hardly have been anticipated without detailed analysis.

Finally, in Section 5 we illustrate the seemingly paradoxical fact that a feasible, and even expedient, alliance may empower a dummy while at the same time benefiting the voters excluded from it. This is a possible explanation for the acquiescence of Luxembourg in its dummy status under the QM rule of the original six-member EU Council of Ministers.

Nevertheless, this paper should not be viewed as an attempt to provide an explanation of actual alliance formation in terms of Penrose power. If this were the case, then it would be important to compare this with explanations based on other measures of power. Rather, our idea was to do a 'what if' exercise: what would happen if voters were to play a cooperative 'power game', which is played by forming alliances and in which the resulting Penrose power of the players-voters were regarded by them as the payoff (which, by the nature of Penrose power, must be a non-transferable 'utility'). Of course, one could similarly invent other games, based on some other payoff, which could well be some other measure of voting power (e.g. the Shapley-Shubik (1954) index, or the Penrose-Banzhaf power index as modified by Owen (1982) for voting games 'with a priori coalitions'), and which could well lead to different results than we obtained.

Admittedly, the ground covered so far in the study of power games is rather limited. In this paper we have confined ourselves entirely to WVGs rather than dealing with SVGs that may not be weighted. Moreover, we have concentrated for the most part on symmetric WVGs of the simplest kinds:
those isomorphic to M_n or B_n. A more general theory of power games P_W remains to be developed.

There is also a need for studies of the extent to which considerations of a priori voting power can explain the formation, dissolution and [in]stability of real-life alliances, as well as the defection of voters from these alliances. So far – apart from a claim by Aumann (see van Damme, 1997), unsupported by any detailed data – there are only two proper studies of this kind known to us, one by Chua and Felsenthal (2008) and the other by Andjiga et al. (2006), both of which are concerned with the formation of governmental alliances, i.e. ruling coalitions within legislatures, and both arrive at negative or sceptical conclusions.²² Thus, for example, we re-analysed the 77 governmental alliances examined by Chua and Felsenthal (2008), and found that in 49 of them the largest party was a dictator, that in a further 21 governmental alliances at least one member became a dummy or lost some power by joining the alliance, and that only seven of these alliances were either feasible or expedient in comparison to a situation where no alliance was formed. These results clearly indicate that considerations of feasibility and expediency (in the technical sense of the present paper) play no significant role in the formation of governmental alliances. However, in contrast to governmental alliances – which must form in legislatures where no party controls an absolute majority of the votes – considerations of feasibility and expediency in forming alliances may play a more significant role in international organizations or corporate boards of directors, where alliances may but need not form. To verify this one would need to conduct empirical research. But this will not be easy, because the formation of an alliance in such bodies is often tacit and hence difficult to detect.
²² Two additional studies of this kind known to us are those by Riker (1959) and by Felsenthal and Machover (2000), which aim to ascertain whether inter-party migrations of delegates in the French National Assembly during the period 1953–54 can be explained by considerations of voting power. However, both these studies seem to us inappropriate, as they ignore in their calculations the existence of a (dictatorial) governmental alliance.

References
Andjiga, N.G., Badirou, D. and Mbih, B. (2006) On the Evaluation of Power in Parliaments and Government Formation, mimeo. http://crem.univ.rennes1.fr/site_francais/doctravail/2006/ie-200618.pdf
Chua, V.C.H. and Felsenthal, D.S. (2008) Coalition Formation Theories Revisited: An Empirical Investigation of Aumann's Hypothesis, in M. Braham and F. Steffen (eds) Power, Freedom, and Voting, Springer, 159–183.
van Damme, E. (Interviewer) (1997) On the State of the Art in Game Theory: An Interview with Robert Aumann, in W. Albers et al. (eds) Understanding Strategic Interaction: Essays in Honor of Reinhard Selten, Springer. Reprinted in Games and Economic Behavior 24 (1998): 181–210.
Felsenthal, D.S. and Machover, M. (1998) The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes, Edward Elgar.
Felsenthal, D.S. and Machover, M. (2000) Voting Power and Parliamentary Defections: The 1953–54 French National Assembly Revisited, mimeo. http://eprints.lse.ac.uk/archive/00000594
Felsenthal, D.S. and Machover, M. (2002) Annexations and Alliances: When Are Blocs Advantageous A Priori? Social Choice and Welfare 19: 295–312.
Gelman, A. (2003) Forming Voting Blocs and Coalitions as a Prisoner's Dilemma: A Possible Theoretical Explanation for Political Instability, Contributions to Economic Analysis and Policy 2, Article 13, 1–14. http://bepress.com/bejeap/contributions/vol2/iss1/art13
Owen, G. (1982) Modification of the Banzhaf-Coleman Index for Games with A Priori Unions, in M.J. Holler (ed.) Power, Voting, and Voting Power, Physica Verlag.
Riker, W.H. (1959) A Test of the Adequacy of the Power Index, Behavioral Science 4: 120–131.
Shapley, L.S. and Shubik, M. (1954) A Method for Evaluating the Distribution of Power in a Committee System, American Political Science Review 48: 787–792.
4. Positional Power in Hierarchies
René van den Brink
Department of Econometrics, Free University Amsterdam, The Netherlands
Frank Steffen
University of Liverpool Management School, UK
1. Introduction
Power is a core concept in the analysis and design of organisations. The literature contains a wide variety of contributions from various disciplines dealing with different types and aspects of power in organisations. These can broadly be classified by a combination of two features: the subject of the analysis and its primary origin (Morriss 1987/2002: 107f). The subject is either the possession of power or its exercise, and its primary origin is either an individual or a position in an organisation. That is, power is taken to be rooted in an individual or in a position. Based on these two features, the literature can, in very general terms, be said to consist of four categories: studies on (i) positional power, (ii) individual power, (iii) the exercise of positional power, and (iv) the exercise of individual power. This paper is devoted to the study of the first category.

Positional power is what results from the interplay of two components of an organisation's architecture: the arrangement of positions in the organisation and the decision-making mechanisms in use. It is well recognized that the organisational architecture and its resulting power structure are essential ingredients for the success of an organisation (Brickley et al. 2004; Daudi 1986; Johnston and Gill 1993; Martin 1998). Although there exists plenty of work addressing positional power relations in organisational architectures,¹ very little attention has been given to investigating such power relations in hierarchical organisations, i.e. where there are dominance relations among the actors. This would be unremarkable if hierarchies did not play a significant role in social life. But the opposite is true: hierarchies can be found in all areas of social life. One of the

¹ See, for instance, Felsenthal and Machover (1998), Holler and Owen (2000, 2001, 2002), Holler et al. (2002), Holler and Gambarelli (2006), and the references therein.
problems with most of the extant literature on positional power in hierarchies is that it is restricted to the analysis of power only in terms of the arrangement of the actors.² While such an analysis informs us about the authority structure within an organisation, it ignores the decision-making mechanisms completely. To the best of our knowledge, only a handful of studies on positional power in hierarchies take the decision-making mechanisms into account (van den Brink 1994, 1997, 1999, 2001; Gilles et al. 1992; Gilles and Owen 1994; van den Brink and Gilles 1996; Berg and Paroush 1998; Shapley and Palamara 2000a, b; and Steffen 2002). All these studies make use of adaptations of well-established approaches for the analysis of power in non-hierarchical organisations, such as the Banzhaf (1965) measure; thus they are all based on the structure of a simple game, i.e. they are 'membership-based'. Roughly speaking, in such a set-up power is ascribed to an actor i if i has the potential to alter an outcome that has been forced by some members of the organisation, by either leaving or joining the subset of actors which has effected the present outcome. In the course of our analysis we will demonstrate that such an approach is in general inappropriate for characterizing power in hierarchies.

To deal with the problem of how to characterize power relations in a hierarchy, we develop an action-based approach built on an extensive game form. Here power is ascribed to an actor i if i has the potential to alter an outcome forced by the members of the organisation by altering his action. A similar action-based approach has already been suggested for decision-making mechanisms in non-hierarchical set-ups (Miller 1982), but it has so far received little attention in the literature, as it was regarded as an equivalent representation of the membership-based approach. Our analysis will demonstrate that this equivalence only holds for a certain class of decision-making mechanisms, and that the membership-based approach, in contrast to the action-based one, cannot be extended to a class of decision-making mechanisms which allow certain actors to terminate a decision on behalf of the hierarchy before all other members have been involved. This kind of decision-making mechanism is particularly relevant for hierarchies.

The contribution of our paper can be summarized as follows: (i) We argue that the existing membership-based approaches for the analysis of positional power in hierarchies are only relevant for a (less important) subclass of decision-making mechanisms in hierarchies. (ii) We show that an adequate consideration of the more relevant mechanisms requires an action-based approach represented by an extensive game form. (iii) We extend the existing action-based approach, which has been formulated for strategic
² See, for instance, Copeland (1951), Russett (1968), Grofman and Owen (1982), Daudi (1986), Brams (1968), van den Brink (1994, 2002), van den Brink and Gilles (2000), Mizruchi and Potts (1998), Hu and Shapley (2003), Herings et al. (2005), and the references therein.
game forms, to extensive game forms. (iv) We illustrate that the existing membership-based approaches can be represented as special cases of the action-based approach.
2. Directed Graphs
Given that we will represent both the arrangement of the actors and the decision-making mechanisms in hierarchical organisations by directed graphs – the two are not necessarily identical – we start by briefly recalling some definitions for such graphs.

A directed graph (or digraph) is a set of objects, called nodes, joined by directed links called arcs. Formally, a digraph is an ordered pair (V, D), where V is a finite set of nodes (or vertices) of the graph and D is a set of ordered pairs of V called the arcs of the graph, i.e. D ⊆ V × V is a binary relation on V. An arc (i, j) ∈ D is considered to be directed from node i to node j, where i is called a predecessor of j and j is called a successor of i in D. By S(i) and S⁻¹(i), respectively, we denote the set of all successors and predecessors of i ∈ V in D; i.e. S(i) = {j ∈ V | (i, j) ∈ D} and S⁻¹(i) = {j ∈ V | (j, i) ∈ D}. A node i is called a terminal node if no arc starts from it, i.e. S(i) = ∅. We say that there exists a path from i to j, denoted by P(i, j) := i → … → j, if i ≠ j and j can be reached from i by following arcs of the graph; e.g. if (i, h), (h, j) ∈ D, then P(i, j) = i → h → j. If such a path P(i, j) exists, we call i and h ancestors of j, and h and j descendants of i. Moreover, the set of all nodes that are descendants of a node i is denoted by Ŝ(i), and the set of all ancestors of i by Ŝ⁻¹(i). If (i, j) ∈ D, we say that P(i, j) denotes a direct path, and if (i, j) ∉ D but i = j, we say that P(i, j) denotes a degenerated path.

A directed graph is said to be a tree T if (i) there exists a distinguished node r ∈ V (the root of the tree) that has no arcs going into it, and (ii) for every other node i ∈ V \ {r} there exists exactly one path from r to i. Furthermore, a tree T becomes a labelled tree, denoted by the triple (T, l_V, l_D), if there exist labelling functions l_V: V → L_V and l_D: D → L_D, with L_V and L_D being the corresponding sets of labels. Moreover, a branch of a tree T is a directed graph starting at a node i ∈ V and containing all its descendants together with their original arcs. By T_i we denote the branch starting at i. Thus, T_i is itself a tree, the root of which is i.
3. Hierarchies

Hierarchies form a certain subclass of organisational architectures. Following van den Brink (1994), they distinguish themselves from other architectures by the arrangement of their members, who are connected via directed relations, which we interpret as dominance (or 'superior to') relations. Loosely speaking,
we can say that an actor i in a dominating position has an influence on the 'powers' of other actors who are in positions dominated by i. Domination can be either indirect or direct, i.e. with or without intermediate actors, respectively. Note that if we use the term 'domination' without further specification, we allow for both indirect and direct domination. Actors in dominating positions are called superiors (or principals), bosses or managers in common parlance, while actors in dominated positions are called subordinates (or agents). If we refer to a superior who directly dominates another actor, the dominating actor is called a predecessor, and if we refer to a subordinate who is directly dominated by another actor, the dominated actor is called a successor. Formally, we can represent the dominance structure of a hierarchy as a digraph, where the nodes $j \in V$ represent the actors $i \in N$, i.e. $l_V : V \to N$, and the arcs indicate direct dominance relations between the actors, i.e. if there is an arc $(i, j) \in D$, predecessor $i$ dominates successor $j$ and $j$ is dominated by $i$. Moreover, if $j \in \hat{S}(i)$ we say that $j$ is a subordinate of $i$, and if $j \in \hat{S}^{-1}(i)$ that $j$ is a superior of $i$. The dominance relations in a hierarchy are assumed to fulfil the following three properties (see Radner 1992 for a similar set of properties):

Property 3.1 (Transitivity) If i dominates j, and j dominates k, then i dominates k.

Property 3.2 (Anti-symmetry) If i dominates j, then j cannot dominate i.
Note that by Property (3.1) in combination with (3.2) we assume $\forall i \in N : i \notin \hat{S}(i)$, i.e. we exclude the case that a dominance structure is cyclic.

Property 3.3 (Single Topness) There is exactly one actor, called the top (or root), who is in a position such that he dominates all other actors. Except for the top, every other actor has at least one predecessor. Thus, formally, there exists an $i \in N$ such that $\hat{S}(i) = N \setminus \{i\}$ and $\hat{S}^{-1}(i) = \emptyset$.
For an illustration, Fig. 1 depicts all feasible dominance structures of hierarchies with two and three members (except structures with relabelled members) which fulfil properties (3.1)–(3.3). Formally, for Fig. 1a these are given by $S(a) = \{b\}$ and $S(b) = \emptyset$; for Fig. 1b by $S(a) = \{b\}$, $S(b) = \{c\}$, and $S(c) = \emptyset$; and for Fig. 1c by $S(a) = \{b, c\}$ and $S(b) = S(c) = \emptyset$. Note that the dominance relations in a hierarchy cannot always be represented by a tree, i.e. it is not necessarily the case that, except for the top, each actor has one and only one predecessor. In other words, dominance relations which can be represented by a tree form a subclass of all feasible sets of dominance relations fulfilling properties (3.1)–(3.3).
Fig. 1. Feasible dominance structures with two and three actors (Fig. 1a: a → b; Fig. 1b: a → b → c; Fig. 1c: a → b and a → c)
As a dominance structure is only a partially ordered set, not all actors in a hierarchy are necessarily comparable in terms of their bare dominance relations. For instance, in Fig. 1c, b does not dominate c, nor does c dominate b (Radner 1992: 1391). However, in everyday language the word hierarchy not only connotes an upside-down-tree-like dominance structure, but also an assignment of rank which allows for a certain mode of comparison of the positions of b and c. By a ranking of a dominance structure we shall mean an assignment of a natural number, called the rank (or level), to each actor, such that:

Property 3.4 i has a higher rank (larger number) than j if i dominates j.

Property 3.5 i and j have the same rank if neither i dominates j nor j dominates i, i.e. if i and j are not comparable in terms of the dominance relations, and the length of the longest path to the top is the same for both.
While properties (3.4) and (3.5) are sufficient for the existence of a ranking, they are only necessary but not sufficient for a unique ranking. Throughout this paper we will make use of a unique ranking which additionally satisfies the following properties: 3

Property 3.6 There is only one actor with the highest rank k, called the top.

Property 3.7 Recursively, an actor has rank $l \leq k - 1$ iff all his predecessors have a rank $h \geq l + 1$ and at least one of his predecessors has rank $l + 1$.

Property 3.8 We fix k such that the lowest ranked actor has rank 1.

3 Note that with these properties we exclude hierarchies which contain direct dominance relations spanning more than one rank (Radner 1992: 1391). While we impose this restriction to keep the exposition of our paper tractable, this does not imply that our approach is not capable of taking such hierarchies into account.
An actor is said to be at the bottom (or a leaf) if he does not dominate any other position, i.e. if he owns a terminal node. Note that this does not imply that such a position necessarily belongs to the lowest rank 1.
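As an illustration of how Properties (3.4)–(3.8) pin down a unique ranking, the following Python sketch (ours; a minimal illustration under the stated properties, with invented helper names) assigns ranks by the length of the longest dominance path from the top.

```python
def ranks(D):
    """Ranks per Properties (3.4)-(3.8): an actor's rank falls with the
    length of the longest dominance path from the top down to him; the
    lowest ranked actor receives rank 1."""
    def depth(i):  # length of the longest path from the top to i
        preds = [j for j, succ in D.items() if i in succ]
        return 0 if not preds else 1 + max(depth(j) for j in preds)
    d = {i: depth(i) for i in D}
    k = 1 + max(d.values())          # Property 3.8 fixes the scale
    return {i: k - d[i] for i in D}

# Chain of Fig. 1b, a -> b -> c: the top a gets rank 3, the leaf c rank 1.
D = {"a": {"b"}, "b": {"c"}, "c": set()}
assert ranks(D) == {"a": 3, "b": 2, "c": 1}
```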
4. Decision-Making Mechanisms in Hierarchies

Decision-making mechanisms are the other component of an organisation's architecture that we need to examine. For the definition of a decision-making mechanism let us start with the general definition of a decision:

Definition 4.1 A decision is the choice of a non-empty proper subset of elements out of a set of elements.
In our case the set of elements from which an actor can choose is created by proposals submitted to the organisation. The chosen (or selected) elements are outcomes produced by members of the organisation. The choices are made by one or more actors belonging to the organisation, where each of these actors has to perform an action to make his individual choice effective. How those actions and the outcomes are linked is given by the decision rule:

Definition 4.2 A decision rule is a function that maps ordered sets of individual actions into outcomes.
Thus, a decision rule is an action-guiding norm which states which ordered sets of actions generate which outcome. 4 A second component we require for our definition of a decision-making mechanism is that of a decision-making procedure:

Definition 4.3 A decision-making procedure provides the course of actions of the actors for a collective decision and determines the actions to be counted, i.e. which actions go into the domain of the decision rule.
Now we can define a decision-making mechanism: Definition 4.4 A decision rule together with a decision-making procedure establishes a decision-making mechanism (DMM).
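Read as a programming interface, Definitions (4.2)–(4.4) suggest a simple decomposition. The sketch below is our own hedged illustration (all type and function names are invented): a decision rule maps ordered action tuples to outcomes, and a procedure selects which actions are counted.

```python
from typing import Callable, Sequence, Tuple

Action = Tuple[str, str]   # (actor, move), e.g. ("b", "yes")
Outcome = str              # "acceptance" or "rejection"

# Definition 4.2: a decision rule maps ordered sets of actions to outcomes.
DecisionRule = Callable[[Sequence[Action]], Outcome]

# Definition 4.3: a procedure filters the actions that are actually counted.
Procedure = Callable[[Sequence[Action]], Sequence[Action]]

def make_dmm(rule: DecisionRule, procedure: Procedure) -> Callable[[Sequence[Action]], Outcome]:
    """Definition 4.4: a rule together with a procedure establishes a DMM."""
    return lambda actions: rule(procedure(actions))
```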
The canonical set-up for a DMM U in a hierarchy, as found in the power literature, is characterized by the following set of assumptions:
4 To put it into the language of causality: a decision rule defines the set of all necessary and sufficient conditions for each outcome, i.e. for each outcome it defines a set of sets of actions where each set of actions is sufficient for that outcome.
Assumption 4.5 Proposals submitted to the hierarchy are exogenous: it is the task of the hierarchy either to accept or to reject the proposal, i.e. we have a binary outcome set $O = \{acceptance, rejection\}$.

Assumption 4.6 A proposal can be submitted to the hierarchy only once.

Assumption 4.7 A hierarchy contains a finite set of members $N = \{a, b, \ldots, n\}$ whose actions bring about the decision of the hierarchy.

Assumption 4.8 Each actor $i \in N$ has a binary action set $A_i = \{yes, no\}$, where yes means that i supports a proposal and no that i rejects the proposal.
Assumptions (4.5)–(4.8) are also common in the analysis of power in non-hierarchical organisations. In addition, for hierarchies we need to assume:

Assumption 4.9 The direction of the decision-making procedure through the hierarchy is bottom-up.

Assumption 4.10 New proposals can only be received by those actors $\bar{N} \subseteq N$ who have a contact to the outside world, i.e. to actors $i \notin N$ submitting the proposals.

Assumption 4.11 Only certain subsets of $\bar{N}$ can receive a new proposal at the same time. The set of such feasible subsets is given by $\mathcal{N} \subseteq 2^{\bar{N}}$.

Assumption 4.12 If $N' \in \mathcal{N}$ receives a new proposal, its members have to choose their individual actions. Depending on the DMM, these actions either establish a final decision on behalf of the hierarchy or they imply that the proposal is forwarded to the next higher rank in the hierarchy. Here $\hat{S}^{-1}(N')$ defines the set of all superiors who may be involved in the decision about the proposal received by $N'$ until a final decision on behalf of the hierarchy is made.

Assumption 4.13 If a proposal is forwarded to the next higher rank, there is a set $\mathcal{S}^{-1}_{min}(N')$ of minimal feasible subsets of superiors $\hat{S}^{-1}_{min}(N') \subseteq \hat{S}^{-1}(N')$ whose actions are required for the decision about a new proposal that has been presented to $N'$.
Assumptions (4.9)–(4.13) require some justification. Let us begin with the very basic rationale of a hierarchy: why might it be useful to have a hierarchy? Why do we have organisations with dominance structures? A common answer to this question says that hierarchies are cost saving because dominance structures allow for the decentralisation of decision-making (delegation),
i.e. by dividing tasks between the members of the different ranks and positions. In our context delegation means that we can avoid that all members of an organisation are always involved in all decisions. Hence, DMMs in hierarchies imply that in certain instances particular actors are intentionally excluded from decisions (see, for instance, Mackenzie 1976: 103). This is usually achieved by a combination of two basic principles: (i) by reducing the breadth of the hierarchy involved in a particular decision, and (ii) by allowing non-top actors to make certain types of final decisions on behalf of the whole hierarchy even before one of their superiors has been involved. It is therefore natural that a generic DMM for a hierarchy should take those principles into account. To investigate whether such a generic DMM also satisfies assumptions (4.9)–(4.13), we need to determine if there exists a reasonable example, based on the very simple hierarchy given by Fig. 1c, that fulfils both principles (i) and (ii) and the assumptions.

Example 4.14 Assume that a hierarchy with a dominance structure as given by Fig. 1c is a development agency with a being its head and b and c being representatives in different developing countries. Moreover, assume that b and c receive funding applications for development projects, i.e. they have a contact to the outside world, while a does not. Suppose that the decision-making on these applications is delegated to b and c according to the above two principles. Based on principle (i), each representative is only responsible for the projects in his country, i.e. is not involved in decisions for the other country. If applications arrive at the desk of one of the representatives (b or c), this representative decides on them only together with a. Furthermore, based on principle (ii), if a representative (b or c) receives an application, he is entitled to reject it without contacting a (he can exclude a from the decision-making in this particular instance), although for an approval he requires the consent of a. 5
Thus, due to principle (i), a decision will never require the whole breadth of the hierarchy (b and c together) to take part, which implies that never all three actors are involved in a decision. In terms of its DMM the hierarchy is truncated twice, i.e. for each actor with a contact to the outside world we obtain a truncated hierarchy containing this actor and all his superiors. 6 To put it in other words: we have an ensemble of possible games on N with overlapping sets of actors, i.e. $N_1 = \{a, b\}$ and $N_2 = \{a, c\}$. 7

5 For an overview of further DMMs that can be derived from the literature on positional power in hierarchies see the appendix.

6 Formally, this phenomenon can also be taken as abstention, although in this case it is decreed rather than voluntary.

7 The idea of a possible game has been introduced in the context of abstention (see Braham and Steffen 2002). Assume a DMM under which abstention is permissible for any actor $i \in N$. Hence, a decision can be made by any subset $N_k \in 2^N$. Here any game played on $N_k$ is called a possible game on N. Note that the idea of a possible game is different from that of a composed game (Shapley 1962). In case of a composed game all components of the game can be played at the same time, while out of the set of possible games only one will be played.
Whether b or c will be excluded from the decision-making (which possible game will be played) depends on who receives the new proposal. Moreover, due to principle (ii), all non-top actors are entitled to make a certain type of final decision on behalf of the whole hierarchy. This also means that principle (ii) implies a sequential decision-making procedure, as it entitles non-top actors to terminate the collective decision-making before actors on higher ranks are involved. Now let us determine if Example (4.14) and, thus, the application of the above principles, match assumptions (4.9)–(4.13). Obviously, Assumption 4.9 is fulfilled. Moreover, the same applies to the remaining assumptions: $\bar{N} = \{b, c\}$ (4.10), $\mathcal{N} = \{\{b\}, \{c\}\}$ (4.11), $\hat{S}^{-1}(\{b\}) = \hat{S}^{-1}(\{c\}) = \{a\}$ (4.12), and $\mathcal{S}^{-1}_{min}(\{b\}) = \mathcal{S}^{-1}_{min}(\{c\}) = \{\{a\}\}$ (4.13). However, note that these assumptions do not take into account that principle (ii) implies a sequential decision-making procedure. An essential characteristic of DMMs in hierarchies is thus overlooked.
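To fix ideas, the DMM of Example (4.14) can be written down directly as a function from the received proposal and the counted actions to the outcome. The Python sketch below is our own reading of the example, not notation from the paper.

```python
def dmm_agency(receiver, votes):
    """DMM of Example 4.14: `receiver` is b or c; `votes` maps actors to
    'yes'/'no'. The receiver may reject alone (principle (ii)); an
    acceptance additionally needs the consent of the head a."""
    if votes[receiver] == "no":          # a is excluded: decision terminates
        return "rejection"
    return "acceptance" if votes["a"] == "yes" else "rejection"

assert dmm_agency("b", {"b": "no"}) == "rejection"            # a never acts
assert dmm_agency("c", {"c": "yes", "a": "yes"}) == "acceptance"
```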
5. Decision-Making Mechanisms and Extensive Game Forms

For modelling a DMM in a hierarchy the extant literature applies the classical membership-based approach. However, this is inappropriate because it allows neither for the exclusion of agents based on principle (ii) nor for the bottom-up structure of the decision-making procedure (Assumption 4.9). Both exclusion and bottom-up decision-making require a sequential structure, while the membership-based approach is inherently simultaneous (see Section 7). Hence, we suggest that the natural method is an action-based approach that makes use of an extensive game form (EGF), as defined below.

Let $N = \{1, \ldots, n\}$ be the set of actors of a collective decision-making body and $N^* := N \cup \{\mathcal{R}\}$, where $\mathcal{R}$ denotes 'nature', which behaves randomly, i.e. $\mathcal{R}$'s behaviour represents the exogenous effects of the outside world on N. The actors are part of a tree of a game form. This is a labelled tree $(T, l_V, l_D)$ where (i) $l_V$ assigns to each non-terminal node $j \in V$ exactly one $i \in N^*$ and to each terminal node $k \in V$ an outcome $o(k) \in O$, with O being a non-empty finite outcome set, and (ii) $l_D$ assigns to each arc a move $a \in A$, where A denotes the set of all possible moves for all $i \in N^*$. In a tree of a game form a non-terminal node is called a decision node if it is owned by an actor $i \in N$, and a chance node if it is owned by $\mathcal{R}$. Moreover, we denote the set of nodes owned by an actor i by $V_i$. An information set for an actor i is a set of nodes $h_i$ such that in a
decision-making procedure i knows that he must make the 'next decision' at some node $j \in h_i$, but due to his lack of information about the 'history' of the decision-making procedure i does not know at which node $j \in h_i$ exactly he must make his decision; i only knows that $j \in h_i$. Necessary conditions for an information set of an actor i are that (i) all nodes of $h_i$ are owned by i; (ii) no node of $h_i$ is related to any other node of $h_i$, i.e. if $j, h \in h_i$, then j is neither an ancestor nor a descendant of h; and (iii) all nodes of $h_i$ are equivalent for i with respect to the outgoing arcs, i.e. the number and labelling of arcs starting from each node $j \in h_i$ is the same. Moreover, let us denote the set of all information sets of an actor i by $H_i$, i.e. $H_i := \{h_i^1, \ldots, h_i^p\}$. Note that if the set of nodes of a tree of a game form is owned by $N^*$, i.e. it includes $\mathcal{R}$, we follow the convention that nodes belonging to nature are always elements of singleton information sets. Now we can characterise an EGF. 8

Definition 5.1 An EGF is a tree of a game form such that the decision nodes have been partitioned into information sets that belong to the actors.
In an EGF we call $C_i : V_i \to 2^V$ the choice function of an actor $i \in N^*$ if $C_i(j) = S(j)$ for all $j \in V_i$. Thus, a choice function assigns to each $j \in V_i$ its corresponding set of successors $S(j)$ in the tree of the game form (it assigns to each $j \in V_i$ the nodes that i can reach being in j and making a move). Furthermore, we say that i 'has chosen' if i has to make a move at a node $j \in V_i$ and has decided for a move to a node $h \in S(j)$. For all $i \in N$ we call such a choice of a move an action of i at j, denoted by $a_i^j$. The set of all actions available to i at a node $j \in V_i$ is called the action set of i at j, denoted by $A_i^j$, i.e. $A_i^j := \{a_i^{j1}, \ldots, a_i^{jm}\}$. The set of all actions available to i at any node in $V_i$ is denoted by $A_i$, i.e. $A_i := \bigcup_{j \in V_i} A_i^j$. If instead nature $\mathcal{R}$ moves, its moves (if they are non-fictitious) are not actions of intention, but just moves (determining which 'real' actor has to choose next) which follow a probability distribution resulting from a function $p : H_{\mathcal{R}} \times A_{\mathcal{R}} \to [0,1]$ that assigns probabilities to the 'actions' at information sets where nature moves, satisfying $\forall h_{\mathcal{R}} \in H_{\mathcal{R}} : \sum_{a_{\mathcal{R}}^j \in A_{\mathcal{R}}^j,\, j \in h_{\mathcal{R}}} p(a_{\mathcal{R}}^j) = 1$. Next, let us say that for $i \in N^*$, $j \in V_i$, and $a_i^j \in A_i^j$ in an EGF, $h(j, a_i^j)$ denotes that node $h \in S(j)$ such that $l_D(j, h) = a_i^j$, i.e. $h(j, a_i^j)$ is the node that is reached from node j if i makes the move $a_i^j$. Based on this notation we can define an action profile in an EGF:

Definition 5.2 An 'action profile' a in an EGF is an ordered set of individual moves, $a := (a_{j_1}, \ldots, a_{j_p})$, belonging to a subset of actors $N(a) \subseteq N^*$, such that their moves form a path $P(j_1, j_{p+1}) = j_1 \to j_2 \to \ldots \to j_{p+1}$ with $S^{-1}(j_1) = \emptyset$, $S(j_{p+1}) = \emptyset$, and $l_D(j_q, j_{q+1}) = a_{j_q}$ for all $q \in \{1, \ldots, p\}$.

8 For more details about extensive game forms we refer the reader to Kolpin (1988, 1989) and for extensive form games to Kuhn (1953) and Selten (1975).
Table 1. Choice Functions and Related Action Sets

$j \in V_i$ | $S(j)$ | $A_i^j$
$j \in V_a$ | $\{acceptance, rejection\}$ | $\{yes, no\}$
$j \in V_b$ | $\{a, rejection\}$ | $\{yes, no\}$
$j \in V_c$ | $\{a, rejection\}$ | $\{yes, no\}$
$j \in V_{\mathcal{R}}$ | $\{b, c\}$ | –

Table 2. Action Profiles and Related Outcomes

$a \in A$ | $o \in O$
('b is allowed to choose'$_{\mathcal{R}}$, $yes_b$, $yes_a$) | acceptance
('c is allowed to choose'$_{\mathcal{R}}$, $yes_c$, $yes_a$) | acceptance
('b is allowed to choose'$_{\mathcal{R}}$, $yes_b$, $no_a$) | rejection
('c is allowed to choose'$_{\mathcal{R}}$, $yes_c$, $no_a$) | rejection
('b is allowed to choose'$_{\mathcal{R}}$, $no_b$) | rejection
('c is allowed to choose'$_{\mathcal{R}}$, $no_c$) | rejection
Thus, an action profile a is a 'path of moves' within the tree of the game form, where the first move begins at the root $r = j_1$ and the last move ends at a terminal node $k = j_{p+1}$. The set of all action profiles will be denoted by A, and by $A^i \subseteq A$ we will denote the subset of all action profiles that contain an action of actor i. Notice that the action profiles in an EGF which represents a DMM consider only counted individual actions and not all feasible individual actions. That is, these action profiles ignore those actions which actors have available and can perform but which are not counted by the decision-making procedure. Let us return to Example (4.14), which can now be represented by an EGF with $N^* = \{a, b, c, \mathcal{R}\}$; $\forall j \in V_i, i \in N : A_i^j = \{yes, no\}$; $O = \{acceptance, rejection\}$; and $\forall i \in N : \#H_i = \#V_i$ and, thus, $\forall j \in V_i : \{j\} \in H_i$. Hence, we have an EGF with perfect information, i.e. the information sets are
Fig. 2. The DMM as an EGF (nature $\mathcal{R}$ moves first, choosing b or c with probability ½ each; the chosen representative plays yes or no, where no terminates in rejection; after a yes, a plays yes, leading to acceptance, or no, leading to rejection)
singletons and there is one information set $h_i$ for each node j owned by i. 9 Given the triple $(N^*, U, S)$, with U denoting the DMM as described by Example (4.14), we obtain the choice functions and the individual action sets as given by Table 1. From these we can derive the action profiles containing the counted individual actions and their related outcomes as given by Table 2. What is left to be specified is the probability function determining nature's moves, p. It determines which actors with a contact to the outside world will obtain a proposal on their desk and, therefore, establishes which possible game will be played. In the absence of any (structural) information regarding the likelihood of $N' \in \mathcal{N}$ to receive a new proposal, we apply the principle of insufficient reason of classical probability theory. 10 This assigns equal probability to all admissible 'atomic events', i.e. in the present case equal probability to all $N' \in \mathcal{N}$ for receiving a new proposal: $p(N' \text{ receives a new proposal}) = 1/\#\mathcal{N}$,

9 Note that in a hierarchy an actor may own more than one decision node, even if he owns only one node in the dominance structure.

10 In the absence of any information about the outside world the application of the principle of insufficient reason appears to be legitimate here, as we fulfil the condition that we have a finite probability space consisting of finitely many clearly distinguished indivisible 'atomic events' (Felsenthal et al. 2003).
which implies that for each actor $i \in \bar{N}$ his probability to receive a new proposal is given by $\hat{p}(i \in \bar{N} \text{ receives a new proposal}) = \#\{N' \in \mathcal{N} \mid i \in N'\} / \#\mathcal{N}$.
We can now give a proper graphical representation of a DMM as an EGF. For Example (4.14) this is given by Fig. 2. From Section 4 we know that we have two possible games with $N_1 = \{a, b\}$ and $N_2 = \{a, c\}$. Moreover, we have $\mathcal{N} = \{\{b\}, \{c\}\}$, which implies that each possible game occurs with probability 1/2. In the EGF this is reflected by the assignment of this probability to the moves of nature which lead to the branches $T_b$ and $T_c$ representing the possible games.
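The EGF of Fig. 2 is small enough to enumerate by hand, but the following Python sketch (our own illustration; the node names r, b1, acc1, etc. are invented) builds the tree and recovers the six action profiles of Table 2 together with nature's probabilities.

```python
from fractions import Fraction

# Tree of the game form for Example 4.14: node -> (owner, {move: child}),
# with terminal nodes mapped directly to their outcome.
TREE = {
    "r":  ("R", {"b": "b1", "c": "c1"}),       # nature picks the receiver
    "b1": ("b", {"yes": "a1", "no": "rej1"}),
    "c1": ("c", {"yes": "a2", "no": "rej2"}),
    "a1": ("a", {"yes": "acc1", "no": "rej3"}),
    "a2": ("a", {"yes": "acc2", "no": "rej4"}),
    "acc1": "acceptance", "acc2": "acceptance",
    "rej1": "rejection", "rej2": "rejection",
    "rej3": "rejection", "rej4": "rejection",
}
P_NATURE = {"b": Fraction(1, 2), "c": Fraction(1, 2)}   # insufficient reason

def action_profiles(node="r", moves=()):
    """Enumerate the counted action profiles and their outcomes (Table 2)."""
    if isinstance(TREE[node], str):
        yield moves, TREE[node]
        return
    owner, children = TREE[node]
    for mv, child in children.items():
        yield from action_profiles(child, moves + ((owner, node, mv),))

for moves, outcome in action_profiles():
    prob = P_NATURE[moves[0][2]]          # nature's move fixes the branch
    print(prob, [(own, mv) for own, _, mv in moves], "->", outcome)
```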
6. Measuring Positional Power: An Action-Based Approach

Our understanding of 'power' is based on Harré (1970) and Morriss (1987/2002), who define power as a concept that always refers to a generic (and therefore, in a sense, timeless) ability or capacity of an object. In a social context this object is an actor and a power ascription refers to his ability: what the actor is able to do against the resistance of at least some other actor. Following Braham (2007), we say that an actor i has power with respect to a certain outcome if i has an action (or sequence of actions) such that the performance of the action under the stated or implied conditions will result in that outcome despite the actual or possible resistance of at least some other actor. That is, power is a claim about what i is able to do against some resistance of others, irrespective of the actual occurrence of the resistance. Thus, power is a capacity or potential which exists whether it is exercised or not. In our context, this capacity is based on the positions of the actors in an organisation. The measurement of power involves the following steps: (i) The identification of the action profiles within the organisation that are sufficient for bringing about an outcome. (ii) The ascription of power to an individual actor in these action profiles by determining if the actor has an action that, if performed, will, ceteris paribus, alter the outcome of the collective action. (iii) The aggregation of the individual power ascriptions of each actor, giving us a bare power score. (iv) The weighting of the aggregated power ascriptions, which yields a power measure. If this weighting is such that all aggregated power ascriptions sum up to unity, we say that we have a power index. The difference between a score and a measure rests in the comparability of power structures: a score allows for ordinal comparisons only, while a measure allows for cardinal comparisons. In Section 5 we already described and illustrated step (i) (see Table 2). Thus, we can immediately proceed with step (ii). To ascribe power to an
actor, we examine for each action profile a whether i has an action 11 which, if chosen, can make a decisive difference to the outcome against the actual or possible resistance of at least some other actor. That is, we determine if a given actor i has a swing. To define a swing in an EGF, let $\hat{O}(g) := \{o \in O \mid \text{there is a path } P(g, k) \text{ with } S(k) = \emptyset \text{ and } o = o(k)\}$, i.e. $\hat{O}(g)$ relates a decision node $g \in V_i$ to the set of outcomes $\hat{O}(g) \subseteq O$ that can be achieved from this node. 12 Based on this relation we have:

Definition 6.1 A swing of an actor i is a triple $(a, a_i^{j_l}, \bar{a}_i^{j_l})$, with $a = (a_{j_1}, \ldots, a_{j_p})$ being an action profile and $a_i^{j_l}, \bar{a}_i^{j_l} \in A_i^{j_l}$ being actions, for which there are two decision nodes $j_l \in V_i$ and $j_k \in S(j_l)$ that are reached by profile a and a decision node $g \in S(j_l) \setminus \{j_k\}$, such that $l_D(j_l, j_k) = a_i^{j_l}$, $l_D(j_l, g) = \bar{a}_i^{j_l}$, and $\hat{O}(g) \setminus \{o(a_{j_p})\} \neq \emptyset$.

11 The literature on power commonly uses the term 'strategy' as a synonym for an 'action' (see Miller 1982; Braham and Steffen 2003; Braham and Holler 2005), even if power is defined as the ability of an actor to effect outcomes by his chosen actions and not by his plan of action.

12 Note that we allow $P(g, k)$ to be a direct path, but not a degenerated path.
Hence, i has a swing if i can alter the resulting outcome $o(a_{j_p})$ of an action profile a by, ceteris paribus, changing his action $a_i^{j_l}$. At this point it is important to draw attention to the interpretation of the ceteris paribus condition. In the context of collective decision-making it is commonly said to imply that the actions of all other actors remain constant (in all information sets). That is, if i alters his action, the only effect that can result out of this is a change in the outcome (then we say that i has a swing and we ascribe power to i). While this 'all other things being equal' interpretation is appropriate for a simultaneous DMM, it no longer applies to our more general case of a sequential DMM, which may allow certain actors to exclude other actors from the decision-making as a result of their choices (see principle (ii) in Section 4). If we have an action profile $a \in A^i$ and we alter i's action $a_i^{j_l}$ in a, it can happen that the decision-making procedure requires either the exclusion of actions of other actors from the domain of the decision rule and, hence, from the action profile, or the inclusion of actions by other actors in the domain of the decision rule and, therefore, in the action profile. If such information were ignored, we could end up with an inappropriate power ascription resulting from the alteration of i's action. In order to avoid this problem and to capture this information we have to go back to the idea behind the literal 'all other things being equal' interpretation of the ceteris paribus clause. The basic idea of the ceteris paribus clause is a comparison between two possible worlds: the world as it is (our initial action profile a and its associated outcome $o(a_{j_p})$) and the world as it would be if an action were changed (the resulting action profile and its
associated outcome if i's action $a_i^{j_l}$ were altered). In contrast to the standard interpretation of the ceteris paribus clause, our analysis does not necessarily require that all other components of the action profile remain constant after we have altered i's action; it requires that the action profiles after the initial change by one actor are consistent with the DMM. Hence, this interpretation of the ceteris paribus clause, which underlies Definition (6.1), is able to capture the above-mentioned aspects of a sequential DMM. 13 If an actor i has a swing, we can distinguish between two different types of swings:

Definition 6.2 A swing $(a, a_i^{j_l}, \bar{a}_i^{j_l})$ of an actor i is strong if $\#\hat{O}(g) = 1$, and weak if $\#\hat{O}(g) > 1$, with g as given in Definition 6.1.
Note that for a swing $(a, a_i^{j_l}, \bar{a}_i^{j_l})$ and g as given in Definition 6.1, $\#\hat{O}(g) = 1$ implies that the unique outcome in $\hat{O}(g)$ is different from the outcome $o(a_{j_p})$ of the action profile a, since by the definition of a swing $\hat{O}(g)$ contains at least one outcome that is different from $o(a_{j_p})$. Thus, a strong swing enables an actor to alter a unique outcome into another unique outcome, while a weak swing only enables an actor to alter a unique outcome into a non-unique outcome. The distinction between strong and weak swings is novel and becomes necessary due to the sequential structure of our approach. Both swings are immediately comparable if we have set inclusion, i.e. a strong swing $(a, a_i^{j_l}, \bar{a}_i^{j_l})$ and a weak swing $(a', a_i^{j_l'}, \bar{a}_i^{j_l'})$, with g and g', respectively, as given in Definition (6.1), are comparable if $\hat{O}(g) \subseteq \hat{O}(g')$. In this case, which applies to our binary set-up, a strong swing implies more power (or ability) than a weak swing, as the outcome that an actor with a strong swing can enforce is more specific. Moreover, for a binary set-up the literature distinguishes between positive and negative swings:

Definition 6.3 For a DMM represented by an EGF with $\forall j \in V_i, i \in N : A_i^j = \{yes, no\}$ and $O = \{acceptance, rejection\}$, an actor i has a positive swing if i, by switching from a 'no'- to a 'yes'-action, can alter the outcome from a 'rejection' to an 'acceptance', and a negative swing if i, by switching from a 'yes'- to a 'no'-action, can alter the outcome from an 'acceptance' to a 'rejection'.

Definition 6.4 Let $S_i = \{s_i^1, \ldots, s_i^z\}$ be the set of all swings of an actor i, where each $s_i$ denotes a swing $(a, a_i^{j_l}, \bar{a}_i^{j_l})$. Moreover, let $S_i^s$ and $S_i^w$ denote the sets of strong and weak swings of i, respectively.
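As a quick worked instance of Definitions (6.1) and (6.2) in Example (4.14) (our own illustration, not from the original text): if b switches from no to yes in the profile $(\text{'b is allowed to choose'}_{\mathcal{R}}, no_b)$, the node g reached by the alternative action is a's decision node, from which both outcomes are still achievable,

$$\hat{O}(g) = \{acceptance, rejection\}, \qquad \#\hat{O}(g) = 2,$$

so this positive swing of b is weak. If b instead switches from yes to no in the profile ending in acceptance, g is a terminal node with $\hat{O}(g) = \{rejection\}$ and $\#\hat{O}(g) = 1$: a strong negative swing.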
13 Note that the ceteris paribus clause in Definition (6.1) also guarantees the fulfillment of the resistance condition (see Braham 2007).
For step (iii), the individual power ascriptions for each actor given by $S_i$ are aggregated in order to obtain a complete and transitive relation 'at least as powerful as for o', $\succeq_o$, defined over N. 14 To aggregate the individual power ascriptions we have to take into account the existence of nature, or in other words, the potential exclusion of actors from the decision-making due to principle (i) (exclusion due to principle (ii) is already taken into account via the action profiles). We obtain this by weighting each swing on a branch $T_j$, with $j \in V_i$ and $i \in N$, with the likelihood that nature will choose this branch. For the EGF this implies that $\forall j \in V_{\mathcal{R}} : S^{-1}(j) = \emptyset$, i.e. that nature owns the root r of the tree of the game form and, hence, makes the first move. Furthermore, for our analysis we assume $\#V_{\mathcal{R}} = 1$, i.e. that nature owns only one node, being the root. 15

Definition 6.5 Let $p(a_{\mathcal{R}} \in a)$ be the likelihood that nature chooses the move $a_{\mathcal{R}}$ that is part of the action profile a, and let $a(s_i)$ be the action profile in a swing $s_i = (a, a_i^{j_l}, \bar{a}_i^{j_l})$. Then $p(a_{\mathcal{R}} \in a(s_i))$ denotes the likelihood that nature chooses the move that is part of the action profile a in swing $s_i$.
Note: (i) If in the (original) EGF $\forall j \in V_{\mathcal{R}} : S^{-1}(j) = \emptyset$, $\#V_{\mathcal{R}} = 1$, and $\#N' = 1$ for all $N' \in \mathcal{N}$, due to the principle of insufficient reason we have $p(a_{\mathcal{R}} \in a) = p(N' \text{ receives a new proposal}) = 1/\#\mathcal{N}$. (ii) If in the (original) EGF $V_{\mathcal{R}} = \emptyset$, i.e. if nature does not exist, we can represent this EGF by a strategically equivalent EGF with $\forall j \in V_{\mathcal{R}} : S^{-1}(j) = \emptyset$, $\#V_{\mathcal{R}} = 1$, and $\#A_{\mathcal{R}} = 1$, which implies $\#\mathcal{N} = 1$ and, hence, $p(a_{\mathcal{R}} \in a) = p(N' \text{ receives a new proposal}) = 1$.

Definition 6.6 The power score of an actor $i \in N$ in a decision-making situation represented by an EGF is given by

$$I_i(N^*, U, S) := \sum_{s_i \in S_i^s} p(a_{\mathcal{R}} \in a(s_i)) + F \cdot \sum_{s_i \in S_i^w} p(a_{\mathcal{R}} \in a(s_i)),$$

with $0 < F < 1$ to take into account the nature of weak swings. 16
14 Note that here $i, j \in N : i \succeq_o j$ is to be interpreted as 'i's degree of power to force o is at least as great as j's degree of power to force o', with $\succ_o$ and $\sim_o$ denoting the asymmetric and symmetric components of $\succeq_o$, i.e. $\succ_o$ denotes 'the greater degree of power' and $\sim_o$ 'the same degree of power'.

15 Although this assumption might appear to be quite strong, it does not impose any restriction on the applicability of our approach to DMMs represented by EGFs. Naturally, it can happen that for certain DMMs $\exists j \in V_{\mathcal{R}} : S^{-1}(j) \neq \emptyset$, i.e. that nature also owns nodes other than the root. However, in these cases it is easy to prove via a backward induction procedure that we can represent the original EGF by a strategically equivalent one where we have merged the nodes $j \in V_{\mathcal{R}}$, such that $\#V_{\mathcal{R}} = 1$ and $S^{-1}(j) = \emptyset$ for $j \in V_{\mathcal{R}}$.

16 Note that for the binary DMMs which are discussed in the literature (see Appendix) it is not necessary to specify the value of F, as these DMMs all include (i) bottom-up procedures which (ii) always require the presence of the top for an approval: (i) ensures that the top is the only one who owns a positive strong swing, while (ii) guarantees that the top is never less powerful than any other actor if a dominance structure satisfies properties (3.1)–(3.8). If we want to specify the value of F, we could be tempted to think about an actor-dependent operator $F_i$. However, this would make the power of an actor i additionally dependent on the actions and, thus, the powers of his subsequent actors in the EGF, something which is in contradiction to our dispositional concept of power, i.e. that the power of an actor i is what i is able to do when he is in the position to act.
From Definition (6.1) it follows that in this case $\succeq_o$ is a cardinality-based ranking such that $\forall i, j \in N : i \succeq_o j \leftrightarrow I_i \geq I_j$. In order to obtain a power measure allowing for cardinal comparisons, we have to continue with step (iv) by applying different weightings to the action profiles and, thus, to the related power ascriptions (to the swings of the actors). From Definition (6.6) we can derive the natural analogue $C'$ of the Banzhaf measure, which is formulated in the language of the membership-based approach (Banzhaf 1965, Felsenthal and Machover 1998). 17 We obtain this analogue for an actor $i \in N$ by weighting the score with the weighted sum of 'potential swings' of i in all action profiles $a \in A^i$. Here the weight of each action profile a is determined by the likelihood that nature chooses this profile, i.e. $p(a_{\mathcal{R}} \in a)$. The number of potential swings of i in each profile a results from the sum of alternative actions $\bar{a}_i^j \in A_i^j \setminus \{a_i^j\}$ that actor i has at hand for each action $a_i^j \in a$:

Definition 6.7 The power measure of an actor $i \in N$ in a decision-making situation represented by an EGF is given by

$$C'_i(N^*, U, S) := \frac{\sum_{s_i \in S_i^s} p(a_{\mathcal{R}} \in a(s_i)) + F \cdot \sum_{s_i \in S_i^w} p(a_{\mathcal{R}} \in a(s_i))}{\sum_{a \in A^i} \Big( p(a_{\mathcal{R}} \in a) \cdot \sum_{j \in V_i(a)} (\#A_i^j - 1) \Big)}$$

17 The derivation of other analogues of well-known power measures, such as the Shapley-Shubik (1954) index, is left to future research.
where $V_i(a)$ is the set of decision nodes $j \in V_i$ that are reached by a. Now let us return to Example (4.14). For the analysis of the power structure of this example we start by exploring the actors' swings. From Fig. 2 we know that a owns two nodes, one in each possible game, while b and c own one node in 'their' possible game. In each node each actor has two actions. Examining the effects of altering, ceteris paribus, the actions of all actors shows that a has a positive and a negative strong swing in each of the two games, whereas b and c only have one positive weak and one negative strong swing each. Let us have a closer look at b's situation. If b switches from 'no' to 'yes', b can exclude a straightforward rejection against the resistance of all other actors performing any feasible action, and if b switches from 'yes' to 'no', b
can ensure a straightforward rejection against the resistance of all other actors performing any feasible action. Furthermore, and as already indicated above, here the ceteris paribus clause for b implies not only that b has an effect on the outcome but also on a's action: by switching his action b determines whether a, according to the DMM, has to act or not and whether a's action is counted, i.e. if b opts for 'yes' a is obliged to act and his action is counted, while if b chooses 'no' a is neither obliged to act nor is his action counted. Thus, a switch in b's action forces us either to consider an action performed by a which was not counted or not even existent before, or to ignore an originally counted action performed by a. However, the sole effect of b on a is irrelevant for our analysis, which is concerned with the question of whether b has the ability to affect the outcome at that decision node. Taking into account that each possible game occurs with equal probability, it is obvious that a is more powerful than b and c and that b and c have equal power, i.e. $a \succ_o b \sim_o c$. Applying Definitions (6.6) and (6.7) we obtain the following power distributions, which reflect this ordering:

$$I(N^*, U, S) = (2.00,\ 0.50 + 0.50F,\ 0.50 + 0.50F)$$
$$C'(N^*, U, S) = (1.00,\ 0.33 + 0.33F,\ 0.33 + 0.33F).$$
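These distributions can be checked mechanically. The following sketch (ours) reuses TREE, P_NATURE and action_profiles from the sketch at the end of Section 5 and evaluates Definitions (6.6) and (6.7) by brute force; evaluating the deviation via $\hat{O}(g)$ rather than by holding all later actions fixed is our reading of the amended ceteris paribus clause of Definition (6.1).

```python
from fractions import Fraction

def O_hat(node):
    """The outcome set reachable from a node (O-hat(g) in the text)."""
    if isinstance(TREE[node], str):
        return {TREE[node]}
    return set().union(*(O_hat(ch) for ch in TREE[node][1].values()))

def score_and_measure(actor, F):
    """Definitions 6.6 and 6.7 by brute force over all action profiles."""
    num = den = 0
    for moves, outcome in action_profiles():
        p = P_NATURE[moves[0][2]]
        mine = [(node, mv) for own, node, mv in moves if own == actor]
        if not mine:
            continue                       # profiles without an action of i
        den += p * sum(len(TREE[node][1]) - 1 for node, _ in mine)
        for node, mv in mine:              # try every alternative action
            for alt in set(TREE[node][1]) - {mv}:
                g = TREE[node][1][alt]
                if O_hat(g) - {outcome}:   # Definition 6.1: i has a swing
                    num += p * (1 if len(O_hat(g)) == 1 else F)
    return num, num / den

for i in ("a", "b", "c"):
    print(i, score_and_measure(i, Fraction(1, 2)))
# For F = 1/2: a -> (2, 1); b and c -> (3/4, 1/2), i.e. 0.50 + 0.50F
# and 0.33 + 0.33F, matching the distributions above.
```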
7. Measuring Positional Power: Pathologies of the Membership-Based Approaches

We mentioned earlier that the literature on the measurement of positional power in non-hierarchical organisations is usually founded on a membership-based approach. We also stated that the membership-based approach is not suitable for dealing with the particularities of DMMs which are typical for hierarchical organisations. To see why, let us start with the canonical set-up of the membership-based approach. This is represented by a simple game.

Definition 7.1 A simple game (SG) is a pair $(N, W)$, where W is a collection of subsets (coalitions) of the set of actors N, called the set of winning coalitions, which satisfies the following three conditions: $\emptyset \notin W$; $N \in W$; and (monotonicity) if $T \in W$ and $T \subseteq T'$, then $T' \in W$.

A subset $T \subseteq N$ is said to be winning or losing according to whether $T \in W$ or $T \notin W$. Furthermore, a subset T is called minimal winning iff $T \in W$ but no proper subset of T is in W. The set of minimal winning subsets is denoted by M.
Hence, a DMM represented by a SG can be represented either by W or, due to the monotonicity condition, by M, where a coalition T can be regarded as an 'index' of the actions of the actors which have chosen the same action, for instance 'yes' if $T \in W$.
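In code, a SG is just a family of winning coalitions. The sketch below (ours, an illustration only) checks monotonicity and extracts M, using frozensets for coalitions.

```python
def is_monotone(players, W):
    """Definition 7.1's monotonicity: adding a player keeps T winning."""
    return all(T | {i} in W for T in W for i in players)

def minimal_winning(W):
    """M: winning coalitions that contain no proper winning subset."""
    return {T for T in W if not any(S < T for S in W)}

# Membership-based reading of the possible game on N1 = {a, b} from
# Example 4.14: the only winning coalition is the whole pair, so M = W.
W1 = {frozenset({"a", "b"})}
assert is_monotone({"a", "b"}, W1) and minimal_winning(W1) == W1
```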
Now let us investigate the degree to which a SG is able to meet Assumptions (4.5) and (4.7)–(4.13) and both principles discussed in Section 4. This will provide us with an idea of the limitations of this approach and of how we can convert the membership-based approach founded on a SG into a special case of our action-based approach. Like our EGF, the definition of a SG is also based on a finite set of actors N (Assumption 4.7). The binary structure of the decision-making situation, i.e. the individual binary action sets (Assumption 4.8) and the binary outcome set (Assumption 4.5), is taken into account via the membership of actors in winning or losing subsets of actors. An actor establishes his membership of such a subset by his individual action, i.e. by either supporting or rejecting a proposal. Thus, what a SG does is to subdivide all actors, subject to their individual actions, into either T or $N \setminus T$, where T contains those actors who 'get their way' according to the DMM, i.e. whose individual actions correspond to the collective outcome. 18 The fact that new proposals can only be received by actors with a contact to the outside world (Assumption 4.10) is taken into account by the requirement that $\forall T \in M : T \cap \bar{N} \neq \emptyset$. The assumption that only subsets of $\bar{N}$ can receive a new proposal at the same time (Assumption 4.11) is currently not taken into account by the existing membership-based approaches, but could be incorporated by applying the idea of possible games with overlapping sets of actors. The same applies if a collective decision does not require the whole breadth of a hierarchy, i.e. principle (i) (see Example 4.14). The information that in certain instances particular actors i are able to determine the collective decision by their individual action (Assumptions 4.12 and 4.13) can be considered via veto rights, i.e. $\forall T \in M : i \in T$. What remains is the bottom-up direction of the decision-making procedure (Assumption 4.9) and the assumption that particular actors might be able to make certain types of final decisions on behalf of the whole hierarchy even before one of their superiors has been involved (principle (ii)). Both cannot be taken into account by a SG because they require a sequential structure, while a SG is inherently simultaneous. A SG only considers the memberships of actors in $T \in W$ or $T \notin W$, but not when the actors join such subsets. Thus, it implicitly assumes that all actors join the subsets at the same time. Hence, the membership-based SG set-up can be represented by an EGF with simultaneous individual actions. Put differently: by an EGF with imperfect information such that there is one information set containing all nodes owned by i, i.e. $\forall i \in N : \#H_i = 1$ and $h_i = V_i$. 19
18 Note that this additionally requires the usual assumption that the simple game is self-dual, i.e. that any binary division of actors results in exactly one winning and one losing subset (Taylor and Zwicker 1999).

19 Note that a DMM represented by a SG can also be represented by a strategic game form (SGF) with unconditional actions (Miller 1982), which implies that a SGF with conditional actions is an alternative of equal rank to the EGF we suggest here.
Fig. 3. An EGF representation of a SG (after nature's move to b or c with probability ½ each, all three actors move within information sets that make their choices effectively simultaneous; per branch, three of the eight terminal nodes yield acceptance and five rejection)
Thus, the major difference between our approach and the existing ones is that they differ in the assumptions about the information sets. This has far-reaching consequences. Not only are the trees of the game form considerably different (see Fig. 3 for the EGF of Example 4.14 with imperfect information), but so is the power structure. Altering the information sets of Example (4.14) in the appropriate way, we obtain $I(N^*, U, S) = (6.00, 2.00, 2.00)$ and $C'(N^*, U, S) = (0.75, 0.25, 0.25)$. The ordering is the same, but a becomes relatively more powerful than b and c. This is due to the fact that in a simultaneous decision-making structure, in contrast to a sequential structure, all feasible individual actions are counted. However, the effects of ignoring the sequential structure of a DMM in a hierarchy and instead applying an approach with a simultaneous structure might be stronger than in this example, i.e. they might change the power ordering of the actors.

Example 7.2 Assume a hierarchy with the dominance structure given by Fig. 1b. Applying the membership-based set-up we have $N^* = N = \{a, b, c\}$,
$\forall j \in V_i, i \in N : A_i^j = \{yes, no\}$; $\forall i \in N : \#H_i = \#V_i$ and $\forall j \in V_i : \{j\} \in H_i$; and $O = \{acceptance, rejection\}$. Concerning the DMM U, let us assume that only c has contact to the outside world, i.e. $\bar{N} = \{c\}$ and $\mathcal{N} = \{\{c\}\}$; that the decision-making procedure is bottom-up; and that each actor is able to reject a proposal on behalf of the whole hierarchy if it arrives at his desk and he rejects it (which implies that subsequent individual actions should not be counted), while an acceptance of a proposal requires the approval of all three actors. Then we obtain $I(N^*, U, S) = (2.00, 1.00 + F, 1.00 + F)$ and $C'(N^*, U, S) = (1.00, 0.33 + 0.33F, 0.25 + 0.25F)$, while in case of $\forall i \in N : \#H_i = 1$ and $h_i = V_i$, i.e. under a simultaneous structure with $W = M = \{\{a, b, c\}\}$, we would obtain $I(N^*, U, S) = (2.00, 2.00, 2.00)$ and $C'(N^*, U, S) = (0.25, 0.25, 0.25)$. Hence, the power ordering $a \succ_o b \succ_o c$ based on the power measure taking into account the sequential nature would be misrepresented as $a \sim_o b \sim_o c$ under the simultaneous structure.
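The same machinery covers Example (7.2) with a different tree. A minimal fragment (ours), assuming the score_and_measure function and the encoding from the sketches above:

```python
# Chain hierarchy of Fig. 1b with the sequential DMM of Example 7.2:
# c receives every proposal (nature's move is trivial), then c, b and a
# act in turn, and any 'no' terminates in rejection.
TREE = {
    "r":  ("R", {"c": "c1"}),
    "c1": ("c", {"yes": "b1", "no": "rej1"}),
    "b1": ("b", {"yes": "a1", "no": "rej2"}),
    "a1": ("a", {"yes": "acc", "no": "rej3"}),
    "acc": "acceptance",
    "rej1": "rejection", "rej2": "rejection", "rej3": "rejection",
}
P_NATURE = {"c": 1}
# score_and_measure now returns I = (2, 1 + F, 1 + F) and
# C' = (1, (1 + F)/3, (1 + F)/4), reproducing the sequential figures above;
# the simultaneous eight-leaf tree instead yields (0.25, 0.25, 0.25).
```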
8. Concluding Remarks

We would like to tie up this paper with a brief summary and some remarks. We provided an action-based framework for the analysis of positional power in organisational architectures which can account for the particularities of hierarchical structures. For this reason our approach is more general than the usual membership-based approach, which does not allow for sequential DMMs. Moreover, we demonstrated that if the latter is still applied to a sequential DMM, it may result in inappropriate cardinal and ordinal power structures, as it ascribes power to actors in action profiles which should not be counted. In order to measure an actor's power we have extended the notion of a swing from a simultaneous to a sequential set-up and aggregated the individual power ascriptions over all action profiles of which an actor is a member. This is in line with our definition of power as a generic ability, a conditional disposition that exists irrespective of whether it is exercised or not. For the derivation of our power measure in Definition (6.7) from our power score in Definition (6.6) we divided the score of an actor i by the weighted sum of his potential swings. Even though this reflects perfectly what the Banzhaf measure does for the simultaneous case, it is not as innocent as it seems to be, even for our binary set-up. While under the simultaneous structure all actors are members of all action profiles and, hence, each actor has the same chance to have a swing in each action profile, this is no longer necessarily the case under the sequential set-up. A consequence of this is that our power score and measure are not necessarily co-monotone, i.e. Felsenthal et al.'s (1998) price monotonicity postulate is violated. Finally, we have to acknowledge that the suggested approach currently does not take into account the case of differing incentive structures between the actors, i.e. that actors have a dramatis personae: they are the bearers of predetermined attributes and modes of behaviour. Actors in hierarchies
often play predetermined roles, such as salesman, financial officer, or head of external affairs, which are equipped with a bundle of incentive structures that also belong to the organisational architecture. Steffen (2002) argues that a proper measurement of power should take such information into account if it is available. For the membership-based approach, Straffin's (1977, 1978) partial homogeneity approach offers a solution to this problem. Even though we expect that the same idea could be applied to our action-based approach, this, as well as the axiomatization of our measure, is left for future research.
Appendix

From Assumptions (4.5)–(4.13) there are 12 DMMs found in the literature on positional power. These differ in terms of $\bar{N}$, $\mathcal{N}$, and $\mathcal{S}^{-1}_{min}(N')$ as given by Assumptions (4.10), (4.11), and (4.13). They all share the other assumptions, with $\hat{S}^{-1}(N') = \{j \in \hat{S}^{-1}(i) \mid i \in N'\}$ for Assumption (4.12). NB: here i and j are actors in a dominance structure and not actors in an EGF.
Table A.1

$\bar{N}$ | $\mathcal{N}$ | $\mathcal{S}^{-1}_{min}(N')$
$N$ | $\{\bar{N}\}$ | $\{\{j \in \hat{S}^{-1}(i) \mid i \in N' \wedge S^{-1}(j) = \emptyset\}\}$
$N$ | $\{\bar{N}\}$ | $\{P(i,j) \mid i \in N' \wedge j \in \hat{S}^{-1}(i) \wedge S^{-1}(j) = \emptyset\}$
$N$ | $\{\bar{N}\}$ | $\{\{j \in \hat{S}^{-1}(i) \mid i \in N'\}\}$
$N$ | $\{N' \in 2^{\bar{N}} \mid \#N' = 1\}$ | $\{\{j \in \hat{S}^{-1}(i) \mid i \in N' \wedge S^{-1}(j) = \emptyset\}\}$
$N$ | $\{N' \in 2^{\bar{N}} \mid \#N' = 1\}$ | $\{P(i,j) \mid i \in N' \wedge j \in \hat{S}^{-1}(i) \wedge S^{-1}(j) = \emptyset\}$
$N$ | $\{N' \in 2^{\bar{N}} \mid \#N' = 1\}$ | $\{\{j \in \hat{S}^{-1}(i) \mid i \in N'\}\}$
$\{i \in N \mid S(i) = \emptyset\}$ | $\{\bar{N}\}$ | $\{\{j \in \hat{S}^{-1}(i) \mid i \in N' \wedge S^{-1}(j) = \emptyset\}\}$
$\{i \in N \mid S(i) = \emptyset\}$ | $\{\bar{N}\}$ | $\{P(i,j) \mid i \in N' \wedge j \in \hat{S}^{-1}(i) \wedge S^{-1}(j) = \emptyset\}$
$\{i \in N \mid S(i) = \emptyset\}$ | $\{\bar{N}\}$ | $\{\{j \in \hat{S}^{-1}(i) \mid i \in N'\}\}$
$\{i \in N \mid S(i) = \emptyset\}$ | $\{N' \in 2^{\bar{N}} \mid \#N' = 1\}$ | $\{\{j \in \hat{S}^{-1}(i) \mid i \in N' \wedge S^{-1}(j) = \emptyset\}\}$
$\{i \in N \mid S(i) = \emptyset\}$ | $\{N' \in 2^{\bar{N}} \mid \#N' = 1\}$ | $\{P(i,j) \mid i \in N' \wedge j \in \hat{S}^{-1}(i) \wedge S^{-1}(j) = \emptyset\}$
$\{i \in N \mid S(i) = \emptyset\}$ | $\{N' \in 2^{\bar{N}} \mid \#N' = 1\}$ | $\{\{j \in \hat{S}^{-1}(i) \mid i \in N'\}\}$
Acknowledgements We would like to thank Marlies Ahlert, Steve Brams, Franz Dietrich, Keith Dowding, Bernhard Grofman, Thomas Hammond, Marc Kilgour, Hartmut Kliemt, Martin Leroch, Moshé Machover, Stefan Napel, Jan-Willem van der Rijt, Federico Valenciano, Stefano Vannucci, William Zwicker, and in particular Matthew Braham, Manfred Holler, and Pete Morriss for comments and discussions. Forerunners of this paper have been presented at the Meeting of the Dutch Social Choice Group at Tilburg University (January 2006), at the 2006 Annual Meeting of the European Public Choice Society in Turku (April 2006), at the Eighth International Meeting of the Society for Social Choice and Welfare in Istanbul (July 2006), at the conference on ’Power: Conceptual, Formal, and Applied Dimensions’ in Hamburg (August 2006), and at the 2006 Annual Meeting of the American Political Science Association in Philadelphia (August/September 2006). This article was largely written while Frank Steffen was at Tilburg University under a Marie Curie Intra-European Fellowship within the 6th European Community Framework Programme. He gratefully acknowledges this financial support.
References

Banzhaf, J.F. (1965) Weighted Voting Doesn't Work: A Mathematical Analysis, Rutgers Law Review 19: 317–341.
Berg, S. and Paroush, J. (1998) Collective Decision Making in Hierarchies, Mathematical Social Sciences 35: 233–244.
Braham, M. (2007) Social Power and Social Causation: Towards a Formal Synthesis, in M. Braham and F. Steffen (eds) Power, Freedom, and Voting, Springer, 1–21.
Braham, M. and Holler, M.J. (2005) The Impossibility of a Preference-based Power Index, Journal of Theoretical Politics 17: 137–157.
Braham, M. and Steffen, F. (2002) Voting Power in Games with Abstentions, in M.J. Holler et al. (eds) Jahrbuch für Neue Politische Ökonomie 20: Power and Fairness, Mohr Siebeck, 333–348.
Braham, M. and Steffen, F. (2003) Voting Rules in Insolvency Law: A Simple Game Theoretic Approach, International Review of Law and Economics 22: 1–22.
Brams, S. (1968) Measuring the Concentration of Power in Political Systems, American Political Science Review 62: 461–475.
Brickley, J.A., Smith, C.W., and Zimmerman, J.L. (2004) Managerial Economics and Organizational Architecture, McGraw Hill/Irwin.
Brink, R. van den (1994) Relational Power in Hierarchical Organizations, PhD dissertation, Tilburg University.
Brink, R. van den (1997) An Axiomatization of the Disjunctive Permission Value for Games with a Permission Structure, International Journal of Game Theory 26: 27–43.
Brink, R. van den (1999) An Axiomatization of the Conjunctive Permission Value for Games with a Hierarchical Permission Structure, in H. de Swart (ed.) Logic, Game Theory and Social Choice, Tilburg University Press, 125–139.
Brink, R. van den (2001) Banzhaf Permission Values for Games with a Permission Structure, Technical Report 341, Department of Mathematics, University of Texas at Arlington.
80
René van den Brink and Frank Steffen
Brink, R. van den (2002) The Apex Power Measure for Directed Networks, Social Choice and Welfare 19: 845–867.
Brink, R. van den and Gilles, R.P. (1996) Axiomatizations of the Conjunctive Permission Value for Games with a Permission Structure, Games and Economic Behavior 12: 113–126.
Brink, R. van den and Gilles, R.P. (2000) Measuring Domination in Directed Graphs, Social Networks 22: 141–157.
Copeland, A.H. (1951) A Reasonable Social Welfare Function, mimeo, Seminar on Applications of Mathematics to Social Sciences, University of Michigan.
Daudi, P. (1986) Power in the Organization, Basil Blackwell.
Felsenthal, D.S. et al. (2003) In Defence of Voting Power Analysis: Responses to Albert, European Union Politics 4: 473–497.
Felsenthal, D.S. and Machover, M. (1998) The Measurement of Voting Power, Edward Elgar.
Felsenthal, D.S., Machover, M., and Zwicker, W.S. (1998) The Bicameral Postulate and Indices of A Priori Voting Power, Theory and Decision 44: 83–116.
Gilles, R.P., Owen, G., and Brink, R. van den (1992) Games with Permission Structures: The Conjunctive Approach, International Journal of Game Theory 20: 277–293.
Gilles, R.P. and Owen, G. (1994) Games with Permission Structures: The Disjunctive Approach, mimeo, Department of Economics, Virginia Polytechnic Institute and State University.
Grofman, B. and Owen, G. (1982) A Game Theoretic Approach to Measuring Degree of Centrality in Social Networks, Social Networks 4: 213–224.
Harré, R. (1970) Powers, British Journal of the Philosophy of Science 21: 81–101.
Herings, P.J.-J., Laan, G. van der, and Talman, D. (2005) The Positional Power of Nodes in Digraphs, Social Choice and Welfare 24: 439–454.
Holler, M.J. et al. (eds) (2002) Jahrbuch für Neue Politische Ökonomie 20: Power and Fairness, Mohr Siebeck.
Holler, M.J. and Owen, G. (eds) (2000) Power Measures, Vol. I, Homo Oeconomicus 17.
Holler, M.J. and Owen, G. (eds) (2001) Power Indices and Coalition Formation, Kluwer.
Holler, M.J. and Owen, G. (eds) (2002) Power Measures, Vol. II, Homo Oeconomicus 19.
Holler, M.J. and Gambarelli, G. (eds) (2006) Power Measures, Vol. III, Homo Oeconomicus 23.
Hu, X. and Shapley, L.S. (2003) On Authority Distributions in Organisations: Controls, Games and Economic Behavior 45: 153–170.
Johnston, P. and Gill, J. (1993) Management Control and Organizational Behaviour, Paul Chapman Publishing.
Kolpin, V.W. (1988) A Note on Tight Extensive Game Forms, International Journal of Game Theory 17: 187–191.
Kolpin, V.W. (1989) Core Implementation via Dynamic Game Forms, Social Choice and Welfare 6: 205–225.
Kuhn, H.W. (1953) Extensive Games and the Problem of Information, in H.W. Kuhn and A.W. Tucker (eds) Contributions to the Theory of Games II (Annals of Mathematics Studies 28), Princeton University Press, 193–216.
Mackenzie, K.D. (1976) A Theory of Group Structures, Vol. I: Basic Theory, Gordon and Breach Science Publishers.
Martin, J. (1998) Organizational Behaviour, International Thompson Business Press.
Mas-Colell, A., Whinston, M.D., and Green, J.R. (1995) Microeconomic Theory, Oxford University Press.
Miller, N.R. (1982) Power in Game Forms, in M.J. Holler (ed.) Power, Voting and Voting Power, Physica Verlag, 33–51.
Mizruchi, M.S. and Potts, B.B. (1998) Centrality and Power Revisited: Actor Success in Group Decision Making, Social Networks 20: 353–387.
Morriss, P. (1987/2002) Power: A Philosophical Analysis, Manchester University Press.
Radner, R. (1992) Hierarchy: The Economics of Managing, Journal of Economic Literature 30: 1382–1415.
Russett, B.M. (1968) Probabilism and the Number of Units: Measuring Influence Concentration, American Political Science Review 62: 476–480.
Selten, R. (1975) Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games, International Journal of Game Theory 4: 25–55.
Shapley, L.S. (1962) Simple Games: An Outline of the Descriptive Theory, Behavioral Science 7: 59–66.
Shapley, L.S. and Palamara, J.R. (2000a) Control Games and Organizations, UCLA Working Paper 795.
Shapley, L.S. and Palamara, J.R. (2000b) Simple Games and Authority Structure, UCLA Working Paper 796.
Shapley, L.S. and Shubik, M. (1954) A Method for Evaluating the Distribution of Power in a Committee System, American Political Science Review 48: 787–792.
Steffen, F. (2002) Essays in the Theory of Voting Power, PhD dissertation, University of Hamburg.
Straffin, P.D. (1977) Homogeneity, Independence, and Power Indices, Public Choice 30: 107–118.
Straffin, P.D. (1978) Probability Models for Power Indices, in P.C. Ordeshook (ed.) Game Theory and Political Science, New York University Press, 477–510.
Taylor, A.D. and Zwicker, W.S. (1999) Simple Games, Princeton University Press.
5. A Public Help Index

Cesarino Bertini
Department of Mathematics, Statistics, Computer Science and Applications, University of Bergamo, Italy
Gianfranco Gambarelli Department of Mathematics, Statistics, Computer Science and Applications, University of Bergamo, Italy
Izabella Stach Faculty of Management, AGH University of Science and Technology, Krakow, Poland
1. Introduction

As far as we know, the first concept of a power index dates back to the 1780s and is due to Luther Martin (see Felsenthal and Machover (2005), Gambarelli and Owen (2004), and Riker (1986)). Lionel S. Penrose (1946) gave probably the first scientific discussion of voting power, in which he introduced the concept of a priori voting power (a similar analysis was independently carried out by John F. Banzhaf (1965)). Lloyd S. Shapley, in cooperation with Martin Shubik (Shapley and Shubik 1954), came up with a specialization of the Shapley (1953) value as a power index. Other power indices were introduced later; some derived from existing values, others built exclusively for simple games. The Public Good Index (PGI) introduced by Manfred Holler in 1978 belongs to the latter category. In this paper the Public Good Index is modified into a 'Public Help Index' (PHI), which takes into account all winning coalitions. In fact, sometimes every (not only minimal) winning coalition is relevant to bargaining, for various reasons: electoral legislation (see Bertini et al. 2005; Gambarelli 1999), strengthening of weak minimal majorities, social help, and so on. The Public Help Index R(v) presented here follows this philosophy and takes its name from the latter case. Contrary to the PGI, which is defined on simple monotone games, the PHI does not require the restriction of monotonicity. An axiomatic characterization and monotonicity properties of the new index are given. An algorithm for automatic computation of the PGI and the PHI is supplied.
2. Definitions
A game is a set of rules describing a strategic situation. In cooperative games, players can unite to obtain common advantages. Let N = {1, 2, …, n} be the set of all players, indexed by the first n natural numbers. A cooperative n-person game in characteristic function form is an ordered pair (N, v), where v: 2^N → R is a real-valued function on the family 2^N of all subsets of N such that v(∅) = 0. The real-valued function v is called the characteristic function of the game. Any subset S of N is called a coalition, and v(S) is the worth of the coalition S in the game. In this paper we denote a cooperative game (N, v) just by its characteristic function v.
A cooperative game v is said to be simple if its characteristic function takes values only in the set {0, 1}: v(S) = 0 or v(S) = 1 for all S ⊆ N. In the first case the coalition is said to be losing; in the second, winning. A coalition S is called a minimal winning coalition if v(S) = 1 but v(T) = 0 for all T ⊂ S, T ≠ S. Every player i who belongs to a minimal winning coalition S ⊆ N is called crucial in S. If player i ∈ N belongs to no minimal winning coalition, then i is called a dummy player for game v. A simple game v is monotone if v(S) ≤ v(T) whenever S ⊆ T ⊆ N. We call a zero player every player that does not belong to any winning coalition. A null game v is a simple game in which v(S) = 0 for all S ⊆ N. Obviously, in the null game every player is a zero player.
Let G be the set of simple cooperative games v defined on N. Let G* ⊂ G denote the set of all simple monotone games, excluding the null game. For all v ∈ G we denote by W(v) the set of winning coalitions of v; by W*(v) the set of minimal winning coalitions of v; by W_i(v) the set of winning coalitions of v containing the i-th player; and by W_i*(v) the set of minimal winning coalitions of v containing the i-th player. From now on we will omit the indication '(v)' when no misunderstanding can arise, and we will denote by |X| the cardinality of the set X.
Let v_1 and v_2 be two games of G. We define the games v_{1∨2}(S) and v_{1∧2}(S) as follows:

v_{1∨2}(S) =def max(v_1(S), v_2(S)) = 1 if v_1(S) = 1 or v_2(S) = 1, and 0 otherwise;
v_{1∧2}(S) =def min(v_1(S), v_2(S)) = 1 if v_1(S) = 1 and v_2(S) = 1, and 0 otherwise.
Notice that W_i(v_{1∨2}) = W_i(v_1) ∪ W_i(v_2) and W_i(v_{1∧2}) = W_i(v_1) ∩ W_i(v_2) ⊆ W(v_{1∧2}).
We say that v 1 and v 2 are mergeable if no minimal winning coalition of v 1 is or contains a minimal winning coalition of game v 2 and vice versa. In formulae:
S_1 ∈ W*(v_1) and S_2 ∈ W*(v_2) ⟹ S_1 ⊉ S_2 and S_2 ⊉ S_1.
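Read as code, mergeability is a pairwise check on the two sets of minimal winning coalitions. The following Python sketch is ours, not the authors'; coalitions are modelled as frozensets of player labels.

def mergeable(mwc1, mwc2):
    """v_1 and v_2 are mergeable iff no minimal winning coalition of one game
    is, or is contained in, a minimal winning coalition of the other."""
    return all(not (S1 <= S2 or S2 <= S1) for S1 in mwc1 for S2 in mwc2)

# {{1,2},{1,3}} and {{2,3}} are mergeable; adding {1} breaks mergeability,
# since {1} is contained in {1,2}.
print(mergeable({frozenset({1, 2}), frozenset({1, 3})}, {frozenset({2, 3})}))                  # True
print(mergeable({frozenset({1, 2}), frozenset({1, 3})}, {frozenset({1}), frozenset({2, 3})}))  # False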
Let (ω_1, ω_2, …, ω_n) be a vector with non-negative components such that Σ_{i=1}^n ω_i = 1. For any coalition S, ω(S) = Σ_{i∈S} ω_i is the weight of the coalition. Let q be the majority quota that establishes winning coalitions. We call a weighted majority game (q; ω_1, …, ω_n) the simple game

v(S) = 1 if ω(S) ≥ q, and 0 otherwise.

A power index κ is a function that maps each n-person simple game v to an n-dimensional real vector; it is a measure of the distribution of power in simple games. Let v = (q; ω_1, …, ω_n) be a weighted majority game. We say that a power index κ is locally monotonic if

ω_i > ω_j ⟹ κ_i(v) ≥ κ_j(v).

Let v = (q; ω_1, …, ω_i, …, ω_n) and v′ = (q; ω′_1, …, ω′_i, …, ω′_n) be two weighted majority games such that ω_i > ω′_i for one i ∈ N and ω_j ≤ ω′_j for all j ≠ i. We say that a power index κ is globally monotonic if

κ_i(v) ≥ κ_i(v′).
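The definitions above are straightforward to make operational. The following brute-force sketch (ours, not part of the chapter) enumerates the winning and minimal winning coalitions of a weighted majority game (q; ω_1, …, ω_n); the function names are our own.

from itertools import combinations

def coalitions(n):
    """All non-empty subsets of N = {1, ..., n}."""
    for k in range(1, n + 1):
        for s in combinations(range(1, n + 1), k):
            yield frozenset(s)

def winning_coalitions(q, w):
    """W(v) for the weighted majority game (q; w[0], ..., w[n-1])."""
    return [S for S in coalitions(len(w))
            if sum(w[i - 1] for i in S) >= q]

def minimal_winning_coalitions(q, w):
    """W*(v): winning coalitions that lose when any single member leaves."""
    win = set(winning_coalitions(q, w))
    return [S for S in win if all(S - {i} not in win for i in S)]

if __name__ == "__main__":
    print(sorted(map(sorted, minimal_winning_coalitions(0.5, [0.4, 0.3, 0.3]))))
    # [[1, 2], [1, 3], [2, 3]]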
3. The Public Good Index
The PGI is a decisive point in the measurement of a priori voting power because, contrary to previous approaches, it considers the coalition value to be a public good and takes into consideration the distinction between power and luck introduced by Barry (1980). The PGI has since gained significance by virtue of many important developments and applications: see, for instance, Holler (1978, 1982a) and Hołubiec and Mercik (1994). The index was axiomatized by Holler and Packel (1983). Napel (1999, 2001) showed the independence and non-redundancy of the Holler and Packel axioms. Holler and Li (1995) introduced the 'Public Value' by changing the 'mergeability' axiom. Other developments of the PGI were supplied by Brams and Fishburn (1995), Haradau and Napel (2007), Holler (1982b, 1998, 2002) and Widgrén (2002). Moreover, some properties common to various indices (Holler's included) were studied by Brams and Fishburn (1996), Felsenthal and Machover (1995, 1998), Freixas and Gambarelli (1997), Gambarelli (1983), Holler (1997), Holler and Napel (2004), Holler and Owen (2002), Holler and Widgrén (1999), Syed (1990) and Turnovec (1997).
3.1 Formulation
Let v be a game of G*. As stated above, |W_i*| denotes the number of minimal winning coalitions containing the i-th player. The Public Good Index h introduced by Holler (1978) is defined as follows:

h_i(v) = |W_i*| / Σ_{j∈N} |W_j*|   for all i ∈ N.
3.2 A Leading Example
In our treatment we will refer to the 3-person simple monotone game v(1) = v(2) = v(3) = v(2, 3) = 0; v(1, 2) = v(1, 3) = v(1, 2, 3) = 1. Let us compute the PGI of this example. The set of minimal winning coalitions is W* = {{1, 2}, {1, 3}}. Since player 1 belongs to both minimal winning coalitions, |W_1*| = 2. Since players 2 and 3 each belong to only one minimal winning coalition, |W_2*| = |W_3*| = 1. It follows that Σ_{j∈N} |W_j*| = 4. Then h_1 = 2/4 = 1/2 and h_2 = h_3 = 1/4.
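A quick check of these values (our own code, reusing the helpers above): the leading example can be represented, for instance, as the weighted majority game (0.6; 0.5, 0.25, 0.25), whose minimal winning coalitions are exactly {1, 2} and {1, 3}.

from fractions import Fraction

def public_good_index(q, w):
    """h_i = |W_i*| / sum_j |W_j*| -- only minimal winning coalitions count."""
    mwc = minimal_winning_coalitions(q, w)   # from the sketch in Section 2
    counts = [sum(1 for S in mwc if i in S) for i in range(1, len(w) + 1)]
    return [Fraction(c, sum(counts)) for c in counts]

print(public_good_index(0.6, [0.5, 0.25, 0.25]))
# [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]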
3.3 Axiomatization
We recall the four axioms introduced by Holler and Packel (1983) as an axiomatization of the PGI for simple monotone games (G*).

A1 (symmetry). For every permutation π: N → N and every i ∈ N:

κ_i(v) = κ_{π(i)}(πv),

where the game πv is nothing other than the game v with the roles of the players interchanged by the permutation π.

A2 (efficiency).

Σ_{i∈N} κ_i(v) = 1.

A3 (dummy player). If i is a dummy for v, then κ_i(v) = 0.

A4 (mergeability). For all mergeable pairs v_1, v_2 ∈ G* and for all i ∈ N:

κ_i(v_{1∨2}) = [κ_i(v_1) Σ_{j∈N} |W_j*(v_1)| + κ_i(v_2) Σ_{j∈N} |W_j*(v_2)|] / [Σ_{j∈N} |W_j*(v_1)| + Σ_{j∈N} |W_j*(v_2)|].
Table 1. How the PHI changes when dummy players are added

n     θ_1     θ_2 = θ_3     θ_4 = … = θ_n
3     3/7     2/7           –
4     6/17    4/17          3/17
5     6/20    4/20          3/20
6     6/23    4/23          3/23
7     6/26    4/26          3/26
8     6/29    4/29          3/29
9     6/32    4/32          3/32
10    6/35    4/35          3/35
4. The Public Help Index
Let v ∈ G. The Public Help Index θ, for a non-null game v, is defined as

θ_i(v) = |W_i| / Σ_{j∈N} |W_j|   for all i ∈ N,

while in the case of the null game v, θ_i(v) = 0 for every player i. Notice that, whenever W(v) ≠ ∅, we have θ_i(v) ≥ 0 for all i ∈ N and Σ_{i∈N} θ_i(v) = 1, so θ is a normalized power index.
4.1 Examples
Consider the game of the leading example. The set W of winning coalitions is W = {{1, 2}, {1, 3}, {1, 2, 3}}. Since player 1 belongs to all the winning coalitions, |W_1| = 3. Since players 2 and 3 each belong to two winning coalitions, |W_2| = |W_3| = 2. It follows that Σ_{j∈N} |W_j| = 7. Therefore θ_1 = 3/7 and θ_2 = θ_3 = 2/7.
We report in Table 1 the values of the PHI for cases where some dummy players join the game. For instance, if only one dummy player is added, then the winning coalitions are {1, 2}, {1, 3}, {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, and {1, 2, 3, 4}, and the values of θ are 6/17, 4/17, 4/17, 3/17. Observe that the dummy player always gets half of the value of player 1, because he takes part in only half of the winning coalitions to which player 1 belongs. In general, the computation of Table 1 can be done using the formulas:

θ_1 = 6 / (14 + 3(n − 3)),
θ_2 = θ_3 = (2/3) θ_1 = 4 / (14 + 3(n − 3)),
θ_i = (1/2) θ_1 = 3 / (14 + 3(n − 3))   for all i from 4 to n.

Notice that, for all i from 1 to n (n ≥ 3), θ_i vanishes as n → ∞.
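The same machinery reproduces Table 1 (again our own illustration): dummies can be modelled as players of weight zero, which enter winning coalitions without ever being crucial.

from fractions import Fraction

def public_help_index(q, w):
    """theta_i = |W_i| / sum_j |W_j| -- all winning coalitions count."""
    win = winning_coalitions(q, w)           # from the sketch in Section 2
    counts = [sum(1 for S in win if i in S) for i in range(1, len(w) + 1)]
    return [Fraction(c, sum(counts)) for c in counts]

print(public_help_index(0.6, [0.5, 0.25, 0.25]))
# [Fraction(3, 7), Fraction(2, 7), Fraction(2, 7)]

for extra in range(1, 3):                    # rows n = 4 and n = 5 of Table 1
    print(public_help_index(0.6, [0.5, 0.25, 0.25] + [0.0] * extra))
# n = 4 yields 6/17, 4/17, 4/17, 3/17, as in the table.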
4.2 Axiomatization
The extension of θ(v), where v ∈ G, to all winning coalitions implies that axiom A2 (efficiency) is kept only for non-null games. Axiom A3 (dummy player) cannot be respected, because even non-crucial players can get positive payments (provided the game is not the null game). For analogous reasons, axiom A4 (mergeability) cannot be respected either. So only the first axiom A1 (symmetry) of the PGI will be kept, properly extended to G, while A2 (efficiency), A3 (dummy player) and A4 (mergeability) will be replaced with A2bis (PHI-efficiency), A3bis (zero player) and A4bis (PHI-mergeability). We propose the following axioms as an axiomatization of the PHI for simple games G.

A1 (symmetry). For every permutation π: N → N and every i ∈ N:

κ_i(v) = κ_{π(i)}(πv)

(where the game πv is defined as in Section 3.3).

A2bis (PHI-efficiency).

Σ_{i∈N} κ_i(v) = 1 if v is not the null game, and 0 otherwise.

A3bis (zero player). κ_i(v) = 0 if and only if the i-th player is a zero player.

A4bis (PHI-mergeability). For all v_1, v_2 ∈ G and all i ∈ N:

κ_i(v_{1∨2}) Σ_{j∈N} |W_j(v_{1∨2})| + κ_i(v_{1∧2}) Σ_{j∈N} |W_j(v_{1∧2})| = κ_i(v_1) Σ_{j∈N} |W_j(v_1)| + κ_i(v_2) Σ_{j∈N} |W_j(v_2)|.
The first axiom (symmetry) states that the power of a player remains invariant under permutations of the players. The second axiom normalizes the n-tuple of individual powers. The third axiom (zero player) states that zero players have no power; with reference to the dummy player axiom, we not only replace 'dummy player' with 'zero player' but also replace 'if' by 'if and only if'. The fourth axiom (PHI-mergeability) states that the weighted sum of the powers of the merged games v_{1∨2}(S) and v_{1∧2}(S) is equal to the weighted sum of the powers of the component games.

Theorem 1 (Necessity) For all v ∈ G, θ(v) satisfies A1, A2bis, A3bis and A4bis.
Proof.
A1 (symmetry). We have to prove that θ_i(v) = θ_{π(i)}(πv) for all i ∈ N and all permutations π on N. If v is the null game, the proof is trivial: the game πv is also the null game, and θ_i(v) = θ_{π(i)}(πv) = 0 (for all i ∈ N and all permutations π on N) by the definition of the PHI. In the case of a non-null game v, we have for every permutation π on N and every i ∈ N:

θ_i(v) = |W_i| / Σ_{j∈N} |W_j| = |W_{π(i)}(πv)| / Σ_{j∈N} |W_{π(j)}(πv)| = θ_{π(i)}(πv).

A2bis (PHI-efficiency). Let v ∈ G. If v is the null game, every player is a zero player, so Σ_{i∈N} θ_i(v) = 0. For a non-null game v we have:

Σ_{i∈N} θ_i(v) = Σ_{i∈N} |W_i| / Σ_{j∈N} |W_j| = (1 / Σ_{j∈N} |W_j|) Σ_{i∈N} |W_i| = 1.

A3bis (zero player). We have to prove that θ_i(v) = 0 if and only if i is a zero player. The demonstration for the null game is trivial, because in the null game every player is a zero player. For a non-null game v, from the definition of θ we have:

θ_i(v) = |W_i| / Σ_{j∈N} |W_j| = 0 ⟺ |W_i| = 0.

Then |W_i| = 0 ⟺ v(S) = 0 for all S ∋ i ⟺ i is a zero player.

A4bis (PHI-mergeability). We have to prove that, for all v_1, v_2 ∈ G and all i ∈ N:

θ_i(v_{1∨2}) Σ_{j∈N} |W_j(v_{1∨2})| + θ_i(v_{1∧2}) Σ_{j∈N} |W_j(v_{1∧2})| = θ_i(v_1) Σ_{j∈N} |W_j(v_1)| + θ_i(v_2) Σ_{j∈N} |W_j(v_2)|.

In fact, using the identities W_i(v_{1∨2}) = W_i(v_1) ∪ W_i(v_2) and W_i(v_{1∧2}) = W_i(v_1) ∩ W_i(v_2):

θ_i(v_{1∨2}) Σ_{j∈N} |W_j(v_{1∨2})| + θ_i(v_{1∧2}) Σ_{j∈N} |W_j(v_{1∧2})|
= |W_i(v_{1∨2})| + |W_i(v_{1∧2})|
= |W_i(v_1) ∪ W_i(v_2)| + |W_i(v_1) ∩ W_i(v_2)|
= |W_i(v_1)| + |W_i(v_2)|
= θ_i(v_1) Σ_{j∈N} |W_j(v_1)| + θ_i(v_2) Σ_{j∈N} |W_j(v_2)|,

which completes the proof.
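A numerical sanity check of A4bis (ours, not part of the proof): both sides of the identity reduce to |W_i(v_1)| + |W_i(v_2)|, which can be verified directly on two arbitrary three-player games.

from fractions import Fraction

def theta_from_winning(win, n):
    """PHI computed directly from a set of winning coalitions."""
    counts = [sum(1 for S in win if i in S) for i in range(1, n + 1)]
    total = sum(counts)
    return [Fraction(c, total) if total else Fraction(0) for c in counts]

def weight_sum(win):
    return sum(len(S) for S in win)          # equals sum_j |W_j(v)|

n = 3
w1 = set(winning_coalitions(0.6, [0.5, 0.25, 0.25]))   # the leading example
w2 = set(winning_coalitions(0.5, [0.4, 0.3, 0.3]))     # a second 3-player game
w_or, w_and = w1 | w2, w1 & w2                         # W(v_1∨2) and W(v_1∧2)

for i in range(n):
    lhs = (theta_from_winning(w_or, n)[i] * weight_sum(w_or)
           + theta_from_winning(w_and, n)[i] * weight_sum(w_and))
    rhs = (theta_from_winning(w1, n)[i] * weight_sum(w1)
           + theta_from_winning(w2, n)[i] * weight_sum(w2))
    assert lhs == rhs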
Theorem 2 (Sufficiency) For all v ∈ G, θ(v) is the unique index satisfying A1, A2bis, A3bis and A4bis.
Proof. Let κ be a generic power index defined on G and respecting A1, A2bis, A3bis and A4bis. We will prove that κ = θ. Consider a game v ∈ G. If v is the null game, then from A1, A2bis and A3bis we immediately obtain κ_i(v) = 0 = θ_i(v) for all i ∈ N. Otherwise, let S_1, S_2, …, S_{|W|} be the winning coalitions of v. For every coalition S_k (k from 1 to |W|) we construct the game v^k such that S_k is the only winning coalition of v^k. Observe that

v(S) = max_{k=1,…,|W|} v^k(S)   for all S ⊆ N.

Then the original game v can be expressed as v = max_{k=1,…,|W|} v^k. Moreover, min_{k=1,…,|W|} v^k is the null game. We can also deduce that

min( max_{k=1,…,j} v^k, v^{j+1} ) is the null game for j = |W|−1, |W|−2, …, 1.   (1)

Every index κ respecting A1, A2bis and A3bis, applied to any game v^k (k from 1 to |W|), has the following property, for all i ∈ N:

κ_i(v^k) = 1/|S_k| if i ∈ S_k, and 0 otherwise.   (2)

Applying A4bis sequentially to the pairs of games

max_{k=1,…,j} v^k and v^{j+1}   (j = |W|−1, |W|−2, …, 1)

and using (1), we obtain:

κ_i(v) = κ_i( max_{k=1,…,|W|} v^k ) = Σ_{k=1}^{|W|} κ_i(v^k) Σ_{j∈N} |W_j(v^k)| / Σ_{j∈N} |W_j(v)|.   (3)

Observe that, for all k = 1, …, |W|, S_k is the unique winning coalition of v^k. Then

Σ_{j∈N} |W_j(v^k)| = |S_k|.

From the above we obtain:

κ_i(v) = Σ_{k=1}^{|W|} κ_i(v^k) |S_k| / Σ_{j∈N} |W_j(v)|   for all i ∈ N.

Owing to (2), we have κ_i(v^k) ≠ 0 only if i ∈ S_k. Therefore,

κ_i(v) = Σ_{k: S_k ∋ i} (1/|S_k|) |S_k| / Σ_{j∈N} |W_j(v)| = |W_i| / Σ_{j∈N} |W_j(v)| = θ_i(v)   for all i ∈ N,

which completes the proof.
5. Monotonicity
There are some properties that are reasonable to require of power indices, and one of them is monotonicity (see Felsenthal and Machover (1995, 1998), Holler (1982a), Holler and Napel (2004), Holler et al. (1999), Turnovec (1997)). It is known that the PGI satisfies neither global nor local monotonicity, whereas the PHI satisfies both.

Theorem 3 (Monotonicity) The power index θ is globally monotonic.
Proof. Let n ≥ 2 and let v = (q; ω_1, …, ω_i, …, ω_n) and v′ = (q; ω′_1, …, ω′_i, …, ω′_n) be two weighted majority games such that

ω_i > ω′_i for one i ∈ N and ω_j ≤ ω′_j for all j ≠ i.   (4)

We have to demonstrate that

θ_i(v′) ≤ θ_i(v).   (5)

Given that ω_i > ω′_i (see (4)), all coalitions containing player i that are winning in the game v′ are also winning in the game v: W′_i ⊆ W_i. Thus we have |W′_i| ≤ |W_i|. From

Σ_{j∈N} ω_j = Σ_{j∈N} ω′_j = 1

we have that at least N is a winning coalition, so

|W_i| ≥ |W′_i| ≥ 1,  Σ_{j∈N} |W_j| ≥ n,  Σ_{j∈N} |W′_j| ≥ n.

We can also observe that

Σ_{j∈N} |W_j| = Σ_{S∈W} |S| = Σ_{S∈W′} |S| + b = Σ_{j∈N} |W′_j| + b,

where b is an integer defined in the following way: b = d_1 − d_2 with

d_1 = Σ_{S ∈ W_i \ W′_i} |S| ≥ 0 and d_2 = Σ_{S ∈ W′ \ W} |S| ≥ 0.

Let a = |W_i| − |W′_i|. Then we can write

θ_i(v) = |W_i| / Σ_{j∈N} |W_j| = (|W′_i| + a) / (Σ_{j∈N} |W′_j| + b),

where a and b are integers and 0 ≤ a ≤ 2^(n−1) − 1. (Note that there are 2^(n−1) different coalitions containing player i ∈ N.) There are two possible cases: (a ≥ 0 and b ≤ 0) or (a > 0 and b > 0). In the first case, Σ_{j∈N} |W_j| = Σ_{j∈N} |W′_j| + b ≤ Σ_{j∈N} |W′_j|, so

θ_i(v) = (|W′_i| + a)/(Σ_{j∈N} |W′_j| + b) ≥ |W′_i|/(Σ_{j∈N} |W′_j| + b) ≥ |W′_i|/Σ_{j∈N} |W′_j| = θ_i(v′).

It remains to prove (5) in the second case (a > 0, b > 0). Assume first that a = 1 and d_2 = 0. This implies that b = d_1 ∈ {n−1, n−2, …, 1}. Every value of b determines

C(n−1, b−1)

types of possible weighted majority games v′ satisfying (4), where C(m, r) denotes a binomial coefficient. These games differ as regards the set W′_i: the set W′_i must contain all coalitions S ∋ i such that b+1 ≤ |S| ≤ n and at most C(n−1, b−1) − 1 coalitions S ∋ i of cardinality b. It is impossible that S ∈ W′_i with |S| < b, because from the monotonicity of weighted majority games every coalition T ⊃ S with |T| = b would then satisfy T ∈ W′_i; this contradicts the emergence of the new winning coalition of cardinality b. Hence, if we denote by v′_b^k and v_b^k the pairs of n-person weighted majority games satisfying condition (4), then for all b = n−1, n−2, …, 1 and k = 0, 1, …, C(n−1, b−1) − 1 we have

v′_b^k:  |W′_i| = Σ_{l=0}^{n−b−1} C(n−1, l) + k,  Σ_{j∈N} |W′_j| = Σ_{l=0}^{n−b−1} C(n−1, l)·(n−l) + k·b;
v_b^k:   |W_i| = |W′_i| + 1,  Σ_{j∈N} |W_j| = Σ_{j∈N} |W′_j| + b.   (6)

From (6) we obtain

θ_i(v′_b^{k+1}) = θ_i(v_b^k)  for b = n−1, n−2, …, 1 and k = 0, 1, …, C(n−1, b−1) − 2,

and

θ_i(v′_{b−1}^0) = θ_i(v_b^{C(n−1,b−1)−1})  for b = n−1, n−2, …, 2.

From (6), for b = n−1 and k = 0, and because 1/n < 1/(n−1) for n ≥ 2, we have

θ_i(v′_{n−1}^0) = 1/n < 1/(n−1),  n ≥ 2.   (7)

It is easy to demonstrate that for all positive integers c, d, e, f:

c/d < (c+e)/(d+f) < e/f  whenever  c/d < e/f.   (8)

Now, if we start from (7) and for any b = n−1, …, 2 we sequentially apply (8) with

c/d = θ_i(v′_b^k),  e/f = 1/b,

varying k from 0 to C(n−1, b−1) − 1, and use the trivial inequality 1/b < 1/(b−1), we obtain

θ_i(v′_b^k) < θ_i(v_b^k) = (|W′_i| + 1)/(Σ_{j∈N} |W′_j| + b) < 1/b,
θ_i(v′_{b−1}^0) = θ_i(v_b^{C(n−1,b−1)−1}) < 1/b < 1/(b−1).   (9)

If b = 1, then C(n−1, b−1) − 1 = 0. From (9) for b = 2 we have θ_i(v′_1^0) = θ_i(v_2^{n−2}) < 1/2, so applying (8) with

c/d = θ_i(v′_1^0),  e/f = 1/1,

we obtain

θ_i(v′_1^0) < θ_i(v_1^0).   (10)

So we have proved (5) in the particular case a = 1 and d_2 = 0 (see (9) and (10)). Now consider the general second case: a > 0 and b = d_1 − d_2 > 0. Let S_1, S_2, …, S_a ∈ W_i \ W′_i be such that |S_1| ≥ |S_2| ≥ … ≥ |S_a| ≥ 1 and Σ_{l=1}^{a} |S_l| = d_1. Then, since (5) holds in the case a = 1 and d_2 = 0, we get:

θ_i(v′) = |W′_i| / Σ_{j∈N} |W′_j|
< (|W′_i| + 1)/(Σ_{j∈N} |W′_j| + |S_1|)
< (|W′_i| + 2)/(Σ_{j∈N} |W′_j| + |S_1| + |S_2|)
< …
< (|W′_i| + a)/(Σ_{j∈N} |W′_j| + d_1)
≤ (|W′_i| + a)/(Σ_{j∈N} |W′_j| + d_1 − d_2) = θ_i(v),

which completes the proof.
Turnovec (1997) proved that any power index which is globally monotonic and symmetric is also locally monotonic. Since the PHI is symmetric and, as we have proved, globally monotonic, it is also locally monotonic.
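The contrast with the PGI can be made concrete with the helpers from the earlier sketches. For the five-player weighted majority game (51; 35, 20, 15, 15, 15) – a standard example in the monotonicity literature – the PGI gives player 2 (weight 20) strictly less than the lighter players 3–5, while the PHI respects the weight ordering. The checker below is our own.

def locally_monotonic(index, q, w):
    """No player with a larger weight may receive a strictly smaller value."""
    vals = index(q, w)
    return all(not (w[i] > w[j] and vals[i] < vals[j])
               for i in range(len(w)) for j in range(len(w)))

q, w = 51, [35, 20, 15, 15, 15]              # weights need not be normalized here
print(public_good_index(q, w))               # (4, 2, 3, 3, 3)/15: player 2 below 3-5
print(public_help_index(q, w))               # (12, 9, 8, 8, 8)/45: order respected
print(locally_monotonic(public_good_index, q, w))   # False
print(locally_monotonic(public_help_index, q, w))   # True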
6. Further Developments
As suggested by Manfred Holler, a development of the present work could consist in a reformulation of the index without normalization. Such a reformulation would make the contribution assigned to the dummy players more explicit, without simultaneously decreasing the contribution of the crucial players. Such a new index would naturally require a specific formalization.
Appendix
An Algorithm for the Computation of the PGI and PHI
We report an algorithm for the automatic computation of the Public Good Index and the Public Help Index for weighted majority games.
Symbols
theta[i] — PHI of the i-th player (i = 1, …, n).
h[i] — PGI of the i-th player (i = 1, …, n).
W[i] — number of winning coalitions containing player i (i = 1, …, n).
WM[i] — number of minimal winning coalitions containing player i (i = 1, …, n).
SW — sum of the W[i] over all players (the PHI denominator).
SWM — sum of the WM[i] over all players (the PGI denominator).
j, k, p, t — auxiliary variables.
S[j] — j-th element of coalition S (j = 1, …, k).
Include — auxiliary Boolean variable (Include = True if player i belongs to coalition S).
MWC — auxiliary Boolean variable (MWC = True if S is a minimal winning coalition).
Input
n — number of players.
q — majority quota.
w[i] — weights of the players, i = 1, …, n.
Begin
  for i := 1 to n do
  begin
    if w[i] >= q
      then begin W[i] := 1; WM[i] := 1 end   {the singleton {i} is winning and minimal}
      else begin W[i] := 0; WM[i] := 0 end;
    for k := 2 to n - 1 do   {in this loop the k-element coalitions are formed}
    begin
      for j := 1 to k do S[j] := j;   {first k-element coalition in lexicographic order}
      p := k;
      while p >= 1 do
      begin
        t := 0; Include := False;
        for j := 1 to k do
        begin
          t := t + w[S[j]];   {t = total weight of S}
          if S[j] = i then Include := True
        end;
        if (t >= q) and Include then
        begin
          W[i] := W[i] + 1;
          MWC := True;   {this loop verifies whether S is a minimal winning coalition}
          for j := 1 to k do
            if t - w[S[j]] >= q then MWC := False;
          if MWC then WM[i] := WM[i] + 1
        end;
        {generate the next k-element coalition in lexicographic order}
        if S[k] = n then p := p - 1 else p := k;
        if p >= 1 then
          for j := k downto p do S[j] := S[p] + j - p + 1
      end
    end;
    W[i] := W[i] + 1   {the grand coalition N is winning and contains i}
  end;
  SW := 0; SWM := 0;
  for i := 1 to n do begin SW := SW + W[i]; SWM := SWM + WM[i] end;
  for i := 1 to n do
  begin
    theta[i] := W[i] / SW;
    if SWM = 0
      then h[i] := 1/n   {SWM = 0 only if N is the unique winning coalition}
      else h[i] := WM[i] / SWM
  end
End.
Acknowledgements This paper is sponsored by MIUR 40% (PRIN 2005). We would like to thank Manfred Holler and René van den Brink for their useful suggestions.
References
Banzhaf, J.F. (1965) Weighted Voting Doesn't Work: A Mathematical Analysis, Rutgers Law Review 19: 317–343.
Barry, B. (1980) Is it Better to be Powerful or Lucky: Part I and II, Political Studies 28: 183–194, 338–352.
Bertini, C., Gambarelli, G. and Stach, I. (2005) Apportionment Strategies for the European Parliament, in G. Gambarelli and M.J. Holler (eds) Power Measures III, Homo Oeconomicus 22(4): 589–604.
Brams, S.J. and Fishburn, P.C. (1995) When is Size a Liability? Bargaining Power in Minimal Winning Coalitions, Journal of Theoretical Politics 7: 301–316.
Brams, S.J. and Fishburn, P.C. (1996) Minimal Winning Coalitions in Weighted Majority Games, Social Choice and Welfare 13: 397–417.
Felsenthal, D.S. and Machover, M. (1995) Postulates and Paradoxes of Relative Voting Power – A Critical Review, Theory and Decision 38: 195–229.
Felsenthal, D.S. and Machover, M. (1998) The Measurement of Voting Power. Theory and Practice, Problems and Paradoxes, Edward Elgar.
Felsenthal, D.S. and Machover, M. (2005) Voting Power Measurement: A Story of Misreinvention, Social Choice and Welfare 25(2–3): 485–506.
Freixas, J. and Gambarelli, G. (1997) Common Properties Among Power Indices, Control and Cybernetics 26(4): 591–603.
Gambarelli, G. (1983) Common Behaviour of Power Indices, International Journal of Game Theory 12: 237–244.
Gambarelli, G. (1999) Minimax Apportionments, Group Decision and Negotiation 8: 441–461.
Gambarelli, G. and Owen, G. (2004) The Coming of Game Theory, in G. Gambarelli (ed.) Essays on Cooperative Games – in Honour of Guillermo Owen, Theory and Decision 36: 1–18.
Haradau, R. and Napel, S. (2007) Holler-Packel Value and Index – A New Characterization, Homo Oeconomicus 24: 257–270.
Holler, M.J. (1978) A Priori Party Power and Government Formation, Munich Social Science Review 1: 25–41.
Holler, M.J. (1982a) Forming Coalitions and Measuring Voting Power, Political Studies 30: 262–271.
Holler, M.J. (1982b) Party Power and Government Formation: A Case Study, in M.J. Holler (ed.) Power, Voting and Voting Power, Physica Verlag, 273–282.
Holler, M.J. (1997) Power Monotonicity and Expectations, Control and Cybernetics 26: 605–607.
Holler, M.J. (1998) Two Stories, One Power Index, Journal of Theoretical Politics 10: 179–190.
Holler, M.J. and Li, X. (1995) From Public Good Index to Public Value: An Axiomatic Approach and Generalization, Control and Cybernetics 24: 257–270.
Holler, M.J. and Napel, S. (2004) Monotonicity of Power and Power Measures, Theory and Decision 56: 93–111.
Holler, M.J., Ono, R. and Steffen, F. (1999) Constrained Monotonicity and the Measurement of Power, Theory and Decision 50: 385–397.
Holler, M.J. and Owen, G. (2002) On the Present and Future of Power Measures, in M.J. Holler and G. Owen (eds) Power Measures I, Homo Oeconomicus 19: 281–295.
Holler, M.J. and Packel, E.W. (1983) Power, Luck and the Right Index, Zeitschrift für Nationalökonomie (Journal of Economics) 43: 21–29.
Holler, M.J. and Widgrén, M. (1999) The Value of a Condition is Power, Homo Oeconomicus 15: 497–512.
Hołubiec, J. and Mercik, J.W. (1994) Inside Voting Procedures, SESS Vol. 2, Munich: Accedo Verlag.
Napel, S. (1999) The Holler-Packel Axiomatization of the Public Good Index Completed, Homo Oeconomicus 15: 513–520.
Napel, S. (2001) A Note on the Holler-Packel Axiomatization of the Public Good Index (PGI), in M.J. Holler and G. Owen (eds) Power Indices and Coalition Formation, Kluwer, 143–151.
Penrose, L.S. (1946) The Elementary Statistics of Majority Voting, Journal of the Royal Statistical Society 109: 53–57.
Riker, W. (1986) The First Power Index, Social Choice and Welfare 3: 293–295.
Shapley, L.S. (1953) A Value for n-Person Games, in H.W. Kuhn and A.W. Tucker (eds) Contributions to the Theory of Games II, Princeton University Press, 307–317.
Shapley, L.S. and Shubik, M. (1954) A Method for Evaluating the Distribution of Power in a Committee System, American Political Science Review 48: 787–792.
Syed, S.A. (1990) Some Applications of n-Person Game Theory to the Social Sciences, Dept. of Applied Math., Faculty of Engineering Science, Osaka University.
Turnovec, F. (1997) Monotonicity of Power Indices, East European Series 41, Institute for Advanced Studies, Vienna.
Widgrén, M. (2002) On the Probabilistic Relationship Between the Public Good Index and the Normalized Banzhaf Index, in M.J. Holler and G. Owen (eds) Power Measures II, Homo Oeconomicus 19: 373–386.
6. Shapley-Shubik vs. Strategic Power: Live from the UN Security Council Stefan Napel Department of Economics, University of Bayreuth, Germany
Mika Widgrén Turku School of Economics and ETLA, Finland
Reporter 1
Hello, and welcome to Hamburg, where a very good crowd has turned out to watch the local team, wearing a big ξ on their shorts, play the renowned φ-team from Princeton. The guests are easily distinguished by the dozen or so small but bold-print symbols on their T-shirts, and it's their first direct encounter in this tournament. Today's challenge is an analysis of power in the United Nations Security Council. And both teams are still warming up …

Reporter 2
Yes, so we have a little time to look at their respective backgrounds … The φ-team clearly has the longer history, founded back in 1954 by the legendary Lloyd Shapley and Martin Shubik. They were the first to introduce the 'axiomatic approach' to the power circus. It still has many loyal fans all across the globe and, according to hearsay, keeps producing a tenured position every now and then. With its tough training methods and many cooperative games, the φ-team scored a couple of important victories in the 1950s and '60s. I'm sure some older scholars in the audience will still remember what a creative attack they had, with players like William Lucas, Irving Mann, Guillermo Owen, Pradeep Dubey, Bob Aumann, Al Roth, Martin Shubik and, of course, Shapley himself taking skilful measurements of the US Congress, the Electoral College, and many toy examples. Their power-play gave rise to techniques such as the generating function method, power polynomials, and the multilinear extension almost as by-products …

Reporter 1
Right, and if it hadn't been for Straffin's independence and the emergence of the β-team, coached jointly by Lionel Penrose and John F. Banzhaf …

Reporter 2
Of course also enjoying the influential endorsement from the Felsenthal-Machover press …
Reporter 1
They might still be the only team in town!
Reporter 2 Well, we shouldn’t forget about Deegan-Packel’s formation of a minimal winning coalition and, of course, Manfred Holler’s h -fraction that they helped to get started. Did you hear already that it was recently promoted from the F&M Minor Indexship to the Freedom League? But that’s a different story, dear listeners, as I see that the referee is already preparing a number of fair coin tosses behind his veil of ignorance. So we should better have a quick look at the locals: some of them actually started their career as inferior players in the G and C -camps. That their own team is now wearing a Y must be due to coming late, when much of the Greek alphabet had already been used up by others. Their current managers claim that power measurement is all about ‘sensitivity’, which is especially popular with their female supporters. Some of them hold up signs now, saying ‘Preferences for power!’ Reporter 1 Right, provoking the stadium announcer to declare teasingly: ‘Impossible!’ But now the referee’s coins have produced a resounding 7:1 vote in favour of the random proposal. This means that the locals start off – and we will directly tune in to their introductory prose, mathematical assumptions and computations … The United Nations Security Council is the dominant political organ of the United Nations (UN). It is in charge of deciding upon the ‘effective collective measures for the prevention and removal of threats to the peace, and for the suppression of acts of aggression or other breaches of the peace …’ which are mentioned by the Charter of the United Nations after defining the UN’s prime purpose: ‘to maintain international peace and security’ (Art. 1(1)). The Security Council consists of 15 members altogether. Britain, China, France, Russia, and the US are permanent members, i.e. belong to the Council at any point in time. The remaining ten seats are filled by non-permanent members that, at the time of writing, were: Argentina, Congo, Denmark, Ghana, Greece, Japan, Peru, Qatar, Slovakia, and Tanzania. They are elected by the UN General Assembly according to regional quotas for a term of two years, with five members replaced each year, and no possibility of a direct re-election. Representation in the Security Council is an interesting topic in its own right, with conflicting criteria such as financial contributions, participation in peacekeeping missions, population size, and equitable geographic distribution to be taken into account. Our focus is different, however. We want to investigate the distribution of influence on decisions taken by the Security Council, given its current composition of five permanent members and ten elected members. According to Art. 27(3) of the UN Charter, the Security Council’s decisions ‘…shall be made by an affirmative vote of nine members including the concurring votes of the permanent members …’. 1 The latter 1
This applies to all matters that are not regarded as ‘procedural.’ On procedural matters,
Shapley-Shubik vs. Strategic Power
101
provision makes a ‘nay’-vote by any permanent member tantamount to a veto. Though the phrasing ‘including the concurring votes of the permanent members’ would also allow for a different interpretation, it has been the Security Council’s consistent practice to consider a resolution as adopted if permanent members abstain (rather than concur explicitly), provided it otherwise receives the specified support. A key problem in taking this into consideration in power analysis is the almost unavoidable arbitrariness involved with specifying (at least probabilistically) when a voter abstains. 2 Abstention would be a natural response by a voter if a proposal leaves him indifferent. But perfect indifference seems a degenerate case; it should a priori arise only with very low or even zero probability if there are no restrictions on the character of possible proposals. Another source of abstention could be exogenous costs of explicitly supporting a resolution, like an informal obligation to contribute to its implementation or reprisals by the targeted country. This would create a free-rider problem amongst the voters in favor of the resolution, involving abstention by free-riders. 3 Our analysis will, however, avoid such complications and in line with many other authors (see Felsenthal and Machover (1998: 280) for an incomplete list) abstract from abstentions altogether. Adoption of a resolution then simply requires unanimous support of the five permanent members and a four-out-of-ten ‘majority’ amongst the elected members. We assume that decisions of the Security Council are not merely between passing a resolution or not, but they also involve the content of the resolution. We simplify analysis by taking the latter to be reflected by a variable with arbitrary values on a bounded one-dimensional scale. This position of the Security Council regarding a given threat to peace may at an intuitive level represent the quality or quantity of demands that it makes, how hawkishly or dovishly it deals with an aggressor, the level of sanctions that it imposes, etc. We normalize possible positions to the unit interval X [0 1] and also refer to X as the issue-specific policy space. We presume that if no resolution regarding some issue is passed, a status quo q X prevails. This assumption can be artificial when issues are on the agenda for which the Security CounArt. 27(2) specifies an affirmative vote of nine members without special role for permanent members. Whether an issue is to be regarded as procedural or not is usually not specified explicitly; it may, however, itself be the matter of a vote, known as ‘the preliminary question.’ See, e.g. Chapter 4 of the Repertoire of the Practice of the Security Council (http://www.un. org/Depts/dpa/repertoire). 2 See, for example, Felsenthal and Machover (1998: 286–293) for an explicit treatment of abstentions that assumes somewhat ad hoc that ‘yeah’, ‘nay’, and abstention all have the same likelihood. 3 A preliminary consideration of this case suggests that a voter i is powerful if and only if exactly a minimal winning coalition is in favor of the proposal and includes i, matching the assumptions of Holler (1978) and Holler and Packel (1983).
102
Stefan Napel and Mika Widgrén
cil has never formulated a position before. However, a status quo quite naturally seems to exist for many recurring themes such as resolutions on Israel’s relations with its Arab neighbours. Moreover, there is also a natural status quo regarding decisions about ‘complete or partial interruption of economic relations and of rail, sea, air, postal, telegraphic, radio, and other means of communication, and the severance of diplomatic relations’ (Art. 41) or ‘[military] action by air, sea, or land forces as may be necessary to maintain or restore international peace and security’ (Art. 42). All members of the Security Council are assumed to rationally pursue their own interests. These take the form of spatial preferences over X. On any given issue with status quo q, we refer to the respective most preferred policy alternative of the five permanent members by Q1 , Q 2 ,…, Q 5 . They have the objective to pass a resolution as close to their respective ideal one as possible. The corresponding positions held by the ten elected members are denoted by F1 , F 2 ,…, F10 . We want to provide an estimate of how much influence the described voting rule a priori generates for the permanent and non-permanent Security Council members. This does intentionally not take into account any information about historical voting patterns or preferences of current Security Council members, and is not meant to identify the critical member(s) on any particular real-world issue, say, the next resolution on the Iranian nuclear program or the situation on the Balkan. We rather care about a quantification of the obvious asymmetry of power that the UN Charter gives to the two classes of members a priori. In line with the general framework developed in Napel and Widgrén (2004), we conceive of power as the ability to affect outcomes. This ability can be operationalized very naturally by looking at predicted outcomes for given behaviour of all agents and then asking: how sensitive is this outcome to a possible variation of an agent’s behaviour? The considered agent i is a priori more powerful or influential than another agent j if outcomes are more responsive to i ’s behaviour than to j’s behaviour on average. We regard agents’ behaviour as a consequence of rational and strategic pursuit of goals, captured by preferences. 4 The actual source of a behaviour variation is then a preference change of the agent in question. Under this view, an agent’s ex post power (referring to a given preference and status quo configuration) gauges how much the predicted outcome would change if the agent’s goals changed. If a preference change of a given magnitude has double the affect for player i than for player j, we consider i twice as powerful as j. Ex ante or a priori power averages this potential outcome change using an appropriate probability measure over all possible preference and status quo configurations. 4 See the debate between Braham and Holler (2005) and Napel and Widgrén (2005) on this issue.
Members’ ex post and ex ante power are of interest also to outsiders without vote but an interest in the Security Council’s decision, e.g. the conflicting parties or lobbyists. If they can change Council members’ preference by bribe or persuasion, then ex post power indicates how effective (or potentially valuable) ‘good relations’ with a particular Security Council member are for a specific issue. Ex ante power does the same from a long-term perspective. Formally, sensitivity of a collective decision x X to a given member i ’s preferences for the ideal point configuration (Q 1 ,…, Q 5 , F1 ,…, F10 ) and status quo q can be captured by the partial derivative of x with respect to i ’s ideal policy, i.e. sx sQ i or sx sFi . 5 A priori power is then the expected value of sx sQ i or sx sFi . But first we need to make a plausible prediction of the collective decision for a given configuration (Q1 ,…, Q 5 , F1 ,…, F10 ) and q, i.e. specify a mapping x [0,1]16 l [0,1], whose partial derivatives will then capture the 15 members’ respective ex post influence. In general, x can result from very sophisticated strategic interaction, captured by game-theoretic solution concepts such as subgame perfect equilibrium applied to an explicit extensive game (see Napel and Widgrén 2006b). This is the reason why we refer to the indicated sensitivity analysis as measurement of strategic power. Anecdotal evidence suggests that before a resolution is actually put to a vote before the full Security Council, representatives of the permanent members confer and negotiate amongst themselves how encompassing, explicit, far-reaching, etc. it should be. After some preliminary agreement is reached in this ‘inner circle’ of the Security Council, the elected members get involved. Again there is some informal negotiation until a final proposal is drafted and put to a vote, producing the policy outcome anticipated in the preceding deliberations. One way to model this two-step negotiation process is to consider, first, independent preference aggregation amongst the permanent members and amongst the elected members; then, second, analyze bargaining between these two groups based on their respective aggregate opinion. This culminates in a formal vote of approval if a suitable agreement was found, and otherwise the status quo prevails. Let us start by considering preference aggregation amongst the permanent members. Denote the left-most of the respective five ideal points by Q(1) , the second left-most by Q(2) , etc. Then, the permanent members’ aggregate position Qˆ must coincide with the status quo q whenever this falls in between Q(1) and Q(5) : any proposal to the right or left of q would be vetoed by some permanent member strictly preferring to move into the opposite direction. If, in contrast, all five permanent members prefer a new pol5 The derivative focuses on the effect of small preferences changes. See Napel and Widgrén (2004) for a discussion of alternatives.
104
Stefan Napel and Mika Widgrén
icy x X that lies, say, to the right of q, i.e. if q Q(1) , then also q Qˆ . The aggregate position Qˆ cannot lie too far to the right, however, because then the most conservative permanent member (that is, the one closest to q) would rather vote ‘nay’. It turns out to be necessary that Qˆ ¢ Q(1) ,min \2Q(1) q , Q(5) ^¯± , where the first term in the minimum makes sure that Qˆ is not farther from Q(1) than q and the second term guarantees Pareto optimality of the preliminary agreement. In case that all permanent members prefer a new policy x X to the left of q, i.e. q Q(5) , the analogous expression is Qˆ ¢ max \Q(1) ,q 2Q(5) ^ , Q(5) ¯± . The indicated intervals correspond to the core of the respective bargaining problems and are therefore consistent with strategic behaviour (Banks and Duggan 2000). Without specific knowledge about the bargaining process amongst permanent members, no definite prediction regarding where in the core Qˆ is located can be made. One plausible a priori assumption would be that all core elements have equal probability (Napel and Widgrén 2006a). In line with Napel and Widgrén (2006b), we will however assume here that the agreement is always on the smallest common denominator, i.e. the core element that is closest to the status quo, resulting in £ Q(1) if q Q(1) ¦ ¦ ¦ Qˆ ¦ Q ¤ (5) if q Q(5) ¦ ¦¦q if Q b q b Q . (5) (1) ¦ ¥
By analogous reasoning, we obtain £ F(7 ) if q F(7) and q Q(1) ¦ ¦ ¦ ¦ Fˆ ¤F(4) if q F(4) and q Q(5) ¦ ¦ ¦ otherwise ¦ ¥q
as the result of the deliberations amongst the elected members. It is noteworthy here that only four out of ten elected members need to support a resolution endorsed by the permanent members for it to pass. Since there can exist four elected members that want to move to the left of the status quo and simultaneously four others that want to move to the right, it is determined by the permanent members who is actually the pivotal elected member. If, say, the former want to move to the left, they will talk to the four elected members most eager to pass a resolution x q ; amongst these, the elected member with position F(4) is the least eager and thus critical. Similarly, the support by the elected member wanting F(7) brings about the required total majority of nine votes if all permanent members want to move to the right. Negotiations between permanent members and (the critical voter amongst) elected members could be modelled in various ways. If, however, all members of the Security Council are assumed to be risk-neutral, i.e. their
If, however, all members of the Security Council are assumed to be risk-neutral, i.e. their utility from policy x falls linearly in the distance between x and their ideal point, then Nash bargaining, and hence approximately also many variations of Rubinstein's (1982) alternating offers game, results in the most conservative bargainer's ideal point (Napel and Widgrén 2006b). Thus we obtain

x(π_1, …, π_5, ε_1, …, ε_10; q) =
    π_(1)  if q < π_(1) ≤ ε_(7)
    ε_(7)  if q < ε_(7) < π_(1)
    π_(5)  if ε_(4) ≤ π_(5) < q
    ε_(4)  if π_(5) < ε_(4) < q
    q      otherwise   (1)

as our predicted policy outcome for a given preference configuration of members of the Security Council. This outcome may be a new resolution; it can also be an implicit confirmation of the given status quo, since any resolution would fail to obtain the required level of support. The respective ex post power following from (1) is

∂x/∂π̂ = 1 if q < π_(1) ≤ ε_(7) or ε_(4) ≤ π_(5) < q, and 0 otherwise   (2)

for the whole group of permanent members, and

∂x/∂ε̂ = 1 if q < ε_(7) < π_(1) or π_(5) < ε_(4) < q, and 0 otherwise   (3)

for the group of elected members. The respective group's a priori influence following from the above ex post considerations will be denoted by ξ_π̂ for the permanent and ξ_ε̂ for the elected group. It is the expected value of (2) and (3), respectively. Exploiting the simple structure of (2) and (3), we obtain

ξ_π̂ = Pr(q < π_(1) ≤ ε_(7)) + Pr(ε_(4) ≤ π_(5) < q)   (4)

and

ξ_ε̂ = Pr(q < ε_(7) < π_(1)) + Pr(π_(5) < ε_(4) < q).   (5)

From an a priori perspective, which seeks to evaluate the pure effects of institutional rules in ignorance of special historical preference patterns in the Security Council, it is hard to justify any particular probability distribution to be used in the actual evaluation of the above expressions. In line with Laplace's principle of insufficient reason and common practice in the literature on binary voting (Felsenthal and Machover (1998: 280), Nurmi (1998)), we deem it a reasonable benchmark to consider all preference and status quo configurations equally likely, i.e. we assume that q, π_i, and ε_j are
independently uniformly distributed on X = [0, 1] for all i = 1, …, 5 and j = 1, …, 10. Symmetry then reduces (4) and (5) to

ξ_π̂ = 2 · Pr(q < π_(1) ≤ ε_(7)) = 2 ∫₀¹ ∫_q¹ [F_{π_(1)}(ε) − F_{π_(1)}(q)] f_{ε_(7)}(ε) dε dq

and

ξ_ε̂ = 2 · Pr(q < ε_(7) < π_(1)) = 2 ∫₀¹ ∫_q¹ [F_{ε_(7)}(π) − F_{ε_(7)}(q)] f_{π_(1)}(π) dπ dq,

where F and f refer to the cumulative distribution and density functions of the indicated random variables. Given that all variables are independently [0,1]-uniformly distributed, the above order statistics have beta distributions. For example, the seventh smallest of ten independent [0,1]-uniform draws is beta distributed with parameters 7 and 4, corresponding to

f_{ε_(7)}(x) = [10! / (6! 3!)] x⁶ (1 − x)³

and

F_{ε_(7)}(x) = ∫₀ˣ [10! / (6! 3!)] s⁶ (1 − s)³ ds.

Using this and the analogous expressions for f_{π_(1)} and F_{π_(1)}, we obtain

ξ_π̂ ≈ 0.31352 and ξ_ε̂ ≈ 0.01632.

Given the symmetry within the two groups of members, we can now evaluate the a priori influence of the 15 members of the UN Security Council as

ξ ≈ (0.06270, …, 0.06270, 0.00163, …, 0.00163),

where the first five entries indicate the power of individual permanent members and the last ten that of individual elected members. The interpretation of the above numbers is as follows: ex ante, when we take all possible configurations of the ideal points and the status quo to be equally likely, a small preference shift of a given permanent member by Δx translates into an expected shift of the outcome by 0.06270·Δx. In contrast, a corresponding ideal point shift of an elected member moves the outcome by only 0.00163·Δx in expectation. So an elected member has an about 40 times smaller effect or influence on the outcome. Or, if we take an outsider's point of view, bribing a permanent member is on average about 40 times more attractive ceteris paribus than correspondingly shifting the preferences of an elected member.
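These expectations are easy to corroborate by simulation. The following Monte Carlo sketch (ours, not the authors') samples q, the π_i and the ε_j independently from the uniform distribution and counts how often each group is pivotal in the sense of (2) and (3); the estimates should approach 0.31352 and 0.01632.

import random

def pivot_frequencies(trials=1_000_000, rng=random.Random(0)):
    perm_pivotal = elected_pivotal = 0
    for _ in range(trials):
        q = rng.random()
        p = sorted(rng.random() for _ in range(5))     # permanent ideal points
        e = sorted(rng.random() for _ in range(10))    # elected ideal points
        p1, p5, e4, e7 = p[0], p[4], e[3], e[6]        # order statistics
        if (q < p1 <= e7) or (e4 <= p5 < q):           # condition (2)
            perm_pivotal += 1
        elif (q < e7 < p1) or (p5 < e4 < q):           # condition (3)
            elected_pivotal += 1
    return perm_pivotal / trials, elected_pivotal / trials

xi_perm_group, xi_elec_group = pivot_frequencies()
print(xi_perm_group, xi_elec_group)            # ~0.3135 and ~0.0163
print(xi_perm_group / 5, xi_elec_group / 10)   # ~0.0627 and ~0.00163 per member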
sider’s point of view, bribing a permanent member is on average about 40 times more attractive ceteris paribus than correspondingly shifting the preferences of an elected member. Reporter 1 Not a bad start, but it seems that the referee is unconvinced … He’s discussing with his two linesmen and, yes, he annuls the Y-team’s power vector. It’s still 0:0! The referee explains to the local team’s captain that there was unfair twisting of assumptions, pretty soon after the kick-off. Reporter 2 Well, why didn’t he complain earlier then? But Mika and Stefan seem to admit that the Security Council is an institution with two classes of members but, in fact, no official segregation into two distinct chambers. I guess when it comes to preference aggregation, they still haven’t stopped thinking of their beloved Conciliation Committee … Reporter 1 I see the calculator is blinking again … this time it’s the Princeton G’s making their first computations.
One of the two most commonly used measures of agents' voting power is the Shapley-Shubik index (SSI), introduced by Shapley and Shubik (1954). It is a special case of the more general Shapley value of cooperative coalitional-form games (Shapley 1953). The SSI is based on the broad idea that an agent who can turn a winning coalition into a losing one, or vice versa, exerts power. More formally, let N be a set of n agents and let S ⊆ N denote any coalition of agents having s members. Voting games such as that in the Security Council can be characterized by a characteristic function v with value v(S) = 1 when coalition S can pass a proposal and v(S) = 0 otherwise. In this so-called simple game setting, the Shapley-Shubik index φ_i of a voter i ∈ N can be written as

φ_i = Σ_{S⊆N s.t. i∈S} [(s − 1)!(n − s)! / n!] [v(S) − v(S \ {i})].

The first term in the summation captures the probability of country i being in a potentially pivotal position in coalition S, and the latter term equals one if voter i is in fact able to swing S from winning to losing, i.e. S is winning and the removal of i makes the remaining coalition S \ {i} losing.
The SSI can be characterized by four axioms. The dummy axiom states that a player without any contribution (swings) to any coalition is powerless. The efficiency axiom states that the value v(N) of the grand coalition N is fully allocated; for canonical simple games this means that all agents' SSI-values φ_i must sum up to unity. The symmetry axiom states that the names of the players do not affect the allocation, but only their voting rights do, and,
finally, the transfer axiom prescribes a way of combining several games. The other major power index, the Banzhaf index (Penrose 1946; Banzhaf 1965), obeys all of these except the efficiency axiom (Dubey and Shapley 1979).⁶

⁶ For an alternative characterization see Laruelle and Valenciano (2001).

One intuitive characterization of the SSI refers to all possible orderings (or permutations) of the agents, expressing something like the intensity of their support for a proposal or, relatedly, the temporal order in which they endorse the proposal. In a given ordering, the player whose 'arrival' or endorsement first establishes a winning coalition, and thus passes the proposal, is called pivotal. The SSI presumes that the relative shares of the players' pivot positions capture their influence in a voting situation. In particular, the SSI corresponds to the normalized total number of pivot positions of the players, which implicitly assumes that all orderings are equally likely or at least should carry equal weight in the summation. This makes the computation of the index for members of the UN Security Council in a sense very straightforward: simply look at all 15! different orderings of the 15 players in the UN Security Council and check who is pivotal. A majority requires the support of all five permanent members and at least four other members. In terms of voter permutations, for example, the ninth member is pivotal when all permanent members are in positions smaller than or equal to nine. Otherwise the pivot is the fifth permanent member in the ordering. Let us denote permanent member i by P_i (i = 1, …, 5) and elected member j by E_j (j = 1, …, 10). Now, for instance, consider the ordering

(P_1, P_2, P_3, P_4, P_5, E_1, E_2, E_3, E_4, E_5, E_6, E_7, E_8, E_9, E_10).

In this case, the pivotal player is E_4: all permanent members P_1, …, P_5 and elected members E_1, E_2, E_3 already gave their endorsement of whatever proposal produced the above ordering. Then E_4 'arrives' and swings the losing coalition {P_1, …, E_3} into the winning one {P_1, …, E_3, E_4}. Similarly, E_4 is the pivotal player in the ordering

(P_1, P_2, P_3, P_4, P_5, E_1, E_2, E_6, E_4, E_8, E_3, E_7, E_5, E_10, E_9),

which differs from the former one in a way that does not affect with whose support the proposal is first passed. It is easy to see that there is a huge number of orderings where player E_4 is pivotal: in these orderings all permanent members come before E_4 and there are exactly three non-permanent members before E_4. By changing the positions of E_4 and E_5, say, and using the same logic, we can write the orderings where E_5 is pivotal, like

(P_5, P_2, P_3, P_4, P_1, E_1, E_2, E_3, E_5, E_4, E_6, E_7, E_8, E_9, E_10)

or, for instance,

(P_3, P_2, P_1, P_5, P_4, E_7, E_2, E_3, E_5, E_10, E_6, E_1, E_8, E_4, E_9).

Permanent member P_1 is pivotal in an ordering like

(E_1, E_2, E_3, E_4, P_5, P_4, P_3, P_2, E_5, E_6, P_1, E_7, E_8, E_10, E_9)
and also in many others where four permanent and at least four elected members come before him. Applying some combinatorics, it is fortunately not necessary to explicitly look at all 1,307,674,368,000 orderings. Still, direct computation of the SSI in the above fashion requires some paper work.

Reporter 2
They might be on their way to an interesting paradox. I can see our colleagues Hannu and Steven scribbling down some notes. But this is gonna last too long.

Reporter 1
Yes, and a defender of the ξ-team has just now appropriated all the scrap paper. So the home team is on the offensive. Local fans shout 'Take the derivative again!'
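The paper work can in fact be delegated. A fixed elected member is pivotal exactly when it arrives ninth, preceded by all five permanent members and exactly three of the other nine elected members; counting such orderings (our own sketch) reproduces the SSI values derived below via the multilinear extension.

from math import comb, factorial
from fractions import Fraction

total = factorial(15)                        # 1,307,674,368,000 orderings
# choose the 3 elected predecessors, order the 8 players before and 6 after
pivots_elected = comb(9, 3) * factorial(8) * factorial(6)
ssi_elected = Fraction(pivots_elected, total)
print(ssi_elected)                           # 4/2145

ssi_permanent = (1 - 10 * ssi_elected) / 5   # by efficiency and symmetry
print(ssi_permanent)                         # 421/2145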
Returning to an explicit consideration of ideal point and status quo configurations, it is, of course, possible to look at the simultaneous determination of the outcome x by all 15 members, rather than the above two-step negotiation format. What was really driving the prediction made in (1) was (a) the UN Security Council's requirements for the passage of a resolution, and (b) the assumption that the respective institution's most conservative member whose support is necessary to change the status quo determines the content of the resolution. Even if one considers simultaneous deliberations amongst all 15 members of the Security Council, their outcome is by (a) still an implicit confirmation of the status quo q unless either q < π_(1) and q < ε_(7), or q > π_(5) and q > ε_(4), i.e. all permanent members and at least four elected members want to move away from q in the same direction. Considering the cases in which a required majority indeed prefers to replace the status quo by a policy to its right (q < π_(1) and q < ε_(7)), application of (b) to the full set of members of the Security Council results in either the position π_(1) or the position ε_(7), since only these can correspond to the most reluctant necessary supporter of a resolution. By the same logic, q's replacement by a policy to its left must be either x = π_(5) or x = ε_(4), depending on which of these is more conservative (in the sense of being closer to the status quo). Going through all the cases, one ends up with

x(π_1, …, π_5, ε_1, …, ε_10; q) =
    π_(1)  if q < π_(1) ≤ ε_(7)
    ε_(7)  if q < ε_(7) < π_(1)
    π_(5)  if ε_(4) ≤ π_(5) < q
    ε_(4)  if π_(5) < ε_(4) < q
    q      otherwise
– just as before. This time, no assumptions about linearity of preferences and Nash bargaining between representatives of the two groups are actually needed, making (1) more robust than it may have seemed. By the same computations as before, we again end up with the a priori power vector

ξ ≈ (0.06270, …, 0.06270, 0.00163, …, 0.00163).
Reporter 1 It’s a GOAL! 1:0 for the local measurers! A frantic noise fills the stadium … They have hit exactly the same spot inside the 15-dimensional unit cube as before – quite an achievement. And this time, the referee did not see any obvious foul. Reporter 2 How will the visitors respond to this? … Their coach indicates that he wants to bring Guillermo Owen into the game, who is now quickly warming up. Reporter 1 Yes, there he is on the field. And he seems to get ready for his famous multilinear extension trick …
The multilinear extension (MLE) of a simple game v was introduced by Owen (1972). It can be seen as aggregating the worths v(S) of all possible coalitions S ⊆ N, weighted by their respective formation probability under the assumption that any player i a priori accepts a proposal, and thus joins the randomly formed coalition, with probability p_i ∈ [0,1]. The actual acceptance decisions are assumed to be taken independently across players. So, given that a coalition's worth is either one or zero, the MLE of a simple game with characteristic function v is

f^v(p_1, …, p_n) = Σ_{S⊆N} Π_{i∈S} p_i Π_{j∈N\S} (1 − p_j) v(S) = Σ_{S s.t. v(S)=1} Π_{i∈S} p_i Π_{j∉S} (1 − p_j).

When f^v is evaluated, it gives the ex ante probability that a winning coalition is formed under the assumptions about p_1, …, p_n; by construction it is also equal to the expected worth of the coalition that will be formed. The first derivative of f^v(p_1, …, p_n) with respect to p_i, denoted by f_i^v, is the so-called power polynomial of player i and indicates the contribution that player i makes to this expected worth. By the multilinearity of f^v, it also corresponds to the probability of player i having a swing in the random coalition formed according to the so-called acceptance rates p_1, …, p_n. Power polynomials can be used to efficiently compute various power indices by making suitable assumptions about players' acceptance probabilities. The reader is referred, e.g., to Owen (1995: chp. XII) for a very accessible presentation. In particular, the SSI results if all acceptance rates p_i have the same value t and this value t is a priori uniformly distributed on [0,1]. Then a player i's SSI in simple game v is simply the expected value of his power polynomial f_i^v.⁷

⁷ The Banzhaf index follows similarly by, e.g., assuming that p_i ≡ 1/2.
The voting game played in the UN Security Council is most efficiently modelled as a compound game v = u[w_1, w_2], where w_1 is a five-player simple game involving the permanent members, w_2 is a simple game played by the ten elected members, and u aggregates the two. In w_1 only the grand coalition wins, in w_2 any coalition of four or more members wins, and u equals one if a winning coalition is formed in both w_1 and w_2 (and zero otherwise). In particular, the subgame w_1 has the following MLE:

f^{w_1}(p_1, p_2, p_3, p_4, p_5) = p_1 p_2 p_3 p_4 p_5.

Similarly, we have as w_2's MLE

f^{w_2}(p_1, …, p_10) = Σ_{S⊆{1,…,10} s.t. |S|≥4} Π_{i∈S} p_i Π_{j∉S} (1 − p_j).

Finally, the unanimity game u that combines the component games w_1 and w_2 has the MLE

f^u(f^{w_1}, f^{w_2}) = f^{w_1} · f^{w_2}.

Using the fact that

f_i^u = df^u/dp_i = (∂f^u/∂f^{w_2}) · (∂f^{w_2}/∂p_i) = f^{w_1} · f_i^{w_2}

for an elected member i, the corresponding power polynomial under the SSI's assumption of p_i ≡ t for all voters i is

f_i^u(t, …, t) = f^{w_1}(t, …, t) · f_i^{w_2}(t, …, t) = t⁵ · C(9, 3) t³ (1 − t)⁶.

Now, from the assumption that t is [0,1]-uniformly distributed, under which the expectation of f_i^u(t, …, t) gives player i's SSI-value, we obtain

φ_i = ∫₀¹ 84 t⁸ (1 − t)⁶ dt = 4/2145.

It then follows from the efficiency and symmetry of the SSI that φ_j = 421/2145 for each permanent member, resulting in the UN Security Council's SSI-vector

φ = (421/2145, …, 421/2145, 4/2145, …, 4/2145) ≈ (0.19627, …, 0.19627, 0.00186, …, 0.00186).
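The integral can be confirmed symbolically; the snippet below (ours) evaluates the expected power polynomial with sympy.

import sympy as sp

t = sp.Symbol('t')
f_w1 = t**5                                      # all five permanent members accept
f_i_w2 = sp.binomial(9, 3) * t**3 * (1 - t)**6   # exactly 3 of the other 9 elected
ssi_elected = sp.integrate(f_w1 * f_i_w2, (t, 0, 1))
print(ssi_elected)                               # 4/2145
print((1 - 10 * ssi_elected) / 5)                # 421/2145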
Reporter 1
This makes it 1:1! The match is open again.

Reporter 2
I see that the local team's captain tries to protest: this is just the result of their own measurement if the status quo is fixed to zero! But the referee remains unimpressed. Some critics held that φ can produce results only when playing in the P-power League – but watch this, Felsenthal-Machover!

Reporter 1
The referee hands the local captain the Powerpoint, suggesting that his team actually demonstrate that the φ-vector results as a special case in their framework. I guess they will have a hard time backing up that claim … but no! The coach shouts instructions that sound like 'Institutional inertia!', and his players get moving again …
The vector $\Psi$ computed above by considering independently uniformly distributed status quo and ideal points adds up to slightly less than 0.32984. Noting that for almost every preference and status quo configuration there is either a unique member of the Security Council whose ex post power is 1 (the most conservative supporter of a resolution who was necessary to pass it), or the status quo is confirmed,⁸ it follows that
$$1 - \sum_k \Psi_k \approx 0.67016$$
gives the probability that, in fact, the status quo is confirmed. It is hence a measure of institutional status quo bias or inertia created by the Security Council's voting rules.
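This figure can be reproduced by direct Monte Carlo simulation of the spatial model (a sketch under the stated uniformity assumptions; the function name is ours):

```python
import random

def inertia_probability(trials=1_000_000, seed=1):
    """Monte Carlo sketch of the institutional inertia: the status quo q and
    all 15 ideal points are i.i.d. uniform on [0,1].  A move to the right
    needs all 5 permanent and at least 4 of the 10 elected members to the
    right of q (and symmetrically for a move to the left)."""
    rng = random.Random(seed)
    stuck = 0
    for _ in range(trials):
        q = rng.random()
        perm = [rng.random() for _ in range(5)]
        elec = [rng.random() for _ in range(10)]
        right = min(perm) > q and sum(x > q for x in elec) >= 4
        left = max(perm) < q and sum(x < q for x in elec) >= 4
        if not (right or left):
            stuck += 1
    return stuck / trials

print(inertia_probability())   # should be close to 0.67016
```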
For two thirds of the issues that arise, no agreement can be reached and no member of the UN Security Council exerts any creative influence (in the sense of moderate preference changes translating into any outcome change). Quantification of the UN Security Council's tendency to not pass any resolution in our view provides a valuable piece of information – one which any index satisfying the efficiency axiom, like the SSI, cannot deliver. If one is nevertheless interested in individual members' relative share of creative power, i.e. wants to abstract from institutional status quo bias, one can normalize $\Psi$. This yields
$$\bar\Psi = \frac{\Psi}{\sum_k \Psi_k} \approx (0.19011,\ldots,0.19011,\ 0.00495,\ldots,0.00495).$$
It adds up to unity by construction, but this clearly implies a loss of information relative to the non-normalized $\Psi$. Note that even $\bar\Psi$ differs from the SSI-vector $\phi$. To see why this must be the case, and why, in the class of normalized indices, $\bar\Psi$ is in our view still a better choice than $\phi$, let us consider UN Security Council members' power conditional on $q = 0$. So we restrict attention to situations in which by construction all members of the Security Council want to pass a resolution
8 The qualifier 'almost every' refers to cases in which several members' ideal points coincide. These are zero probability events for any continuous distribution.
that moves to the right of the current status quo. Since in such a situation any institutional inertia is ruled out, the corresponding power vector $\Psi'$ must, even without normalization, add up to unity. In particular, denoting the order statistics of the permanent and elected members' ideal points by $P_{(1)}\le\ldots\le P_{(5)}$ and $E_{(1)}\le\ldots\le E_{(10)}$, we obtain
$$\Psi'_{\hat P} \equiv \Pr\bigl(0 < P_{(1)} \le E_{(7)}\bigr) = \int_0^1 \bigl[F_{P_{(1)}}(e) - F_{P_{(1)}}(q)\bigr]\, f_{E_{(7)}}(e)\, de = \frac{421}{429} \qquad (6)$$
for the group of permanent members, and
$$\Psi'_{\hat E} \equiv \Pr\bigl(0 < E_{(7)} < P_{(1)}\bigr) = \int_0^1 \bigl[F_{E_{(7)}}(p) - F_{E_{(7)}}(q)\bigr]\, f_{P_{(1)}}(p)\, dp = \frac{8}{429} \qquad (7)$$
for the group of elected members. This results in the power vector
$$\Psi' = \left(\tfrac{421}{2145},\ldots,\tfrac{421}{2145},\tfrac{4}{2145},\ldots,\tfrac{4}{2145}\right) = \phi,$$
i.e. exactly reproduces $\phi$. So the SSI in the context of the UN Security Council simply corresponds to a conditional version of our measure of strategic influence $\Psi$. It implicitly assumes that the grand coalition N will always be formed (the efficiency axiom thus makes sense), meaning in our context that there will never be any disagreement amongst the members of the Security Council regarding the direction of a resolution. In contrast, the power measures $\Psi$ and $\bar\Psi$ take possible disagreement into account. As the differences between $\bar\Psi$ and $\Psi'$ (a.k.a. $\phi$) show, the rather frequent disagreement amongst Security Council members affects permanent and elected members' power asymmetrically: it is considerably more often the case that the outcome is $x = q$ because $P_{(1)} \le q \le P_{(5)}$ – so that both a leftward and a rightward move would be vetoed, i.e. permanent members do not have creative power – than that the status quo prevails because a permanent member would block $E_{(4)}$ or $E_{(7)}$. This asymmetric effect of possible disagreement on the influence of permanent and elected members means that the latter are, according to the above analysis, roughly three times more powerful in relative terms if one does not restrict attention to matters of unanimous consent.⁹ This may be a matter of taste, but we think that institutional inertia created by voting rules such as the one analyzed here is an important issue. Our method captures it when absolute power $\Psi$ is evaluated; moreover, its effects are also incorporated when the power of members is compared in relative terms ($\bar\Psi$). The SSI underestimates relative power by a factor of more than 2.5 because only very special cases are taken into consideration.
9 Clearly, all members are less powerful in absolute terms when the possibility of disagreement is taken into account.
Reporter 1: We have already heard the first argument in a recent match between
the Beta Power Club and their normalized cousins, Beta Prime United, I think.
Reporter 2: But the referee accepts this point … he announces that the local team has scored. And it's on the display: 2:1! This is an unexpected but not undeserved lead in my eyes … The $\phi$-vector is carried off the field and the Princeton team's generating function seems to be in critical condition … But they still have a few minutes to come up with a response – and remember the furious fight that they staged when unidimensionality and ignorance of any preference information were criticized.
Reporter 1: Oh yes, they formed an a priori union and Lloyd Shapley plotted an entire power globe with regions of pivotality, in flawless stereographic projection. Or was it Mercator? I can't remember now, but apparently their team has come up with some idea. Perhaps they will try a variation of the Spanish Attack, splitting the Security Council into a take-it-or-leave-it and a bargaining committee?
Reporter 2: Or they might indeed put Nash to the power of Shapley, as our sharp-eyed colleagues Annick and Federico saw them practice when they felt unwatched.
While one arguably needs to stretch the simple game framework more than usual, it is possible to capture institutional status quo bias or inertia using the SSI. Namely, one can add an artificial sixteenth 'player' Q to the set of agents N, creating the new set $N' = N \cup \{Q\}$. Then player Q can be endowed with status quo-preserving properties similar to the more explicit strategic analysis above. In order to achieve this, one needs to introduce a new characteristic function $v'$ that coincides with the original $v$ for all coalitions not containing Q, but is equal to zero for any coalition S containing Q. In particular, Q's membership in any coalition S results in $v'(S) = 0$ even if $v'(S\setminus\{Q\}) = 1$, i.e. $v'$ is non-monotonic. It follows that player Q's marginal contribution $[v'(S) - v'(S\setminus\{Q\})]$ is either 0 or $-1$, and its SSI-power
$$\phi_Q = \sum_{S\subseteq N'\ \mathrm{s.t.}\ Q\in S} \frac{(s-1)!\,(n'-s)!}{n'!}\,\bigl[v'(S) - v'(S\setminus\{Q\})\bigr]$$
is negative. Since $v'(N') = 0$ (the new grand coalition includes Q), the sum of all regular players' SSI-values is equal to the absolute value of $\phi_Q$ by the efficiency axiom. This value is necessarily smaller than unity because Q has a marginal contribution of $-1$ only for a strict subset of all coalitions $S \subseteq N'$. For illustration consider the player ordering
$$(P_1, P_2, P_3, P_4, P_5, E_1, E_2, E_3, E_4, E_5, E_6, E_7, E_8, E_9, Q, E_{10})$$
where Q 'arrives' after elected member $E_4$ has already been identified as a pivotal player. This permutation of player set $N'$ hence contributes $1/16!$ to the SSI-value of $E_4$ – just as it contributed $1/15!$ to $E_4$'s power in the
original game without Q. But Q's eventual arrival turns the coalition from a winning into a losing one. So Q is (negatively) pivotal, too, and the same ordering contributes $1/16!$ to Q's negative power. In contrast, no player in the ordering
$$(P_1, P_2, P_3, P_4, Q, P_5, E_1, E_2, E_3, E_4, E_5, E_6, E_7, E_8, E_9, E_{10})$$
has a non-zero marginal contribution. The intuitive reason why $E_4$'s pivotal position from the original 15-player game is not counted for this constellation in the modified game with players $N'$ and characteristic function $v'$ is that no resolution can pass because there are permanent members on both sides of the status quo (represented by Q). Similarly, in the ordering
$$(P_1, P_2, P_3, P_4, P_5, E_1, E_2, E_3, Q, E_4, E_5, E_6, E_7, E_8, E_9, E_{10})$$
no player is assigned any influence. The interpretation is now that all permanent members and only three elected members prefer a resolution to the left of the status quo, while the remaining seven elected members want to move to the right, i.e. no proposal gets the support necessary to pass. All status quo and preference configurations in $[0,1]^{16}$ from the explicit spatial analysis underlying $\Psi$ correspond to some ordering of the above type. So going through all 16! permutations would mean looking at every situation that previously entered the computation of $\Psi$, the difference being the purely ordinal perspective. Note also that since we assumed that the status quo and the ideal points are all independently uniformly distributed, any ordering in which a given voter determines the size of a change to the left or right of the status quo has an equal probability of $1/16!$. The key difference in computing the Security Council members' average influence is that both
$$(P_1, P_2, P_3, P_4, P_5, E_1, E_2, E_3, E_4, E_5, E_6, E_7, E_8, E_9, Q, E_{10})$$
and its mirror image
$$(E_{10}, Q, E_9, E_8, E_7, E_6, E_5, E_4, E_3, E_2, E_1, P_5, P_4, P_3, P_2, P_1)$$
count towards $E_4$'s power in the cardinal spatial analysis: for the first constellation, an outcome $x$ equal to $E_4$'s ideal point, to the left of the status quo, would be predicted and would enter $\Psi_{E_4}$ with the probability $1/16!$ of this ordinal preference and status quo configuration. Similarly, for the second constellation, an outcome $x$ equal to $E_4$'s ideal point, to the right of the status quo, would be predicted and would enter $\Psi_{E_4}$ accordingly. So the above two orderings contribute exactly $2/16!$ to elected member $E_4$'s ex ante power of $\Psi_{E_4} \approx 0.00163$. In contrast to this, the SSI of elected member $E_4$ in the coalitional-form game described by $N'$ and $v'$ would include the pivotal position in the first ordering only, i.e. both would contribute only $1/16!$ to $\phi_{E_4}$. In fact, for every ordering in which, say, $E_4$ determines the outcome, there exists an equiprobable mirror ordering in which $E_4$ determines the
outcome in the sensitivity analysis, too. Both enter $\Psi_{E_4}$ with a weight of $1/16!$, but only one enters $\phi_{E_4}$ while the other contributes zero. It follows that the SSI-vector of the modified game is
$$\phi \approx (0.03135,\ldots,0.03135,\ 0.00082,\ldots,0.00082,\ -0.16492)$$
where the last entry is the power of the artificial player Q. So for all regular players $i \in N$,
$$\phi_i \equiv \tfrac12\,\Psi_i,$$
and the institutional inertia identified above is simply $1 + 2\phi_Q$. In particular, the strategic power vector $\Psi$ for the UN Security Council can be obtained by applying the linear mapping
$$\tau(y_1,\ldots,y_{15},y_{16}) = 2\,(y_1,\ldots,y_{15})$$
to the 'standard' Shapley-Shubik index $\phi$ of a suitably defined simple game.
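The whole construction can be probed numerically: the permutation definition of the Shapley value applies verbatim to the non-monotonic game $v'$, so sampling random orderings of the 16 players estimates $\phi$, including Q's negative power. A rough Python sketch (our own illustration; a long run is needed to pin down the small elected-member values):

```python
import random

P, E, Q = range(5), range(5, 15), 15     # permanent, elected, artificial player

def v_mod(S):
    """v': any coalition containing Q loses; otherwise the UNSC rule applies
    (all five permanent members plus at least four elected members win)."""
    if Q in S:
        return 0
    return int(all(i in S for i in P) and sum(i in S for i in E) >= 4)

def shapley_mc(trials=500_000, seed=1):
    """Estimate the Shapley value from random orderings; the permutation
    definition carries over to non-monotonic games without change."""
    rng = random.Random(seed)
    phi = [0.0] * 16
    order = list(range(16))
    for _ in range(trials):
        rng.shuffle(order)
        coalition, worth = set(), 0
        for i in order:
            coalition.add(i)
            new = v_mod(coalition)
            phi[i] += new - worth    # i's marginal contribution on arrival
            worth = new
    return [x / trials for x in phi]

est = shapley_mc()
print(est[0], est[5], est[Q])   # roughly 0.03135, 0.00082, -0.16492
```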
Reporter 1: This whole game $v'$ is entirely degenerate rather than simple – but what a prediction. Oh, this is quite innovative … The old fans haven't seen such a beautiful non-monotonicity and negative power values for a long, long time! And I bet it's a first for the younger generation.
Reporter 2: Yes, so the Princeton $\phi$s have equalized!! Apparently, the $\Psi$-team's captain argues with the referee once more. This factor 2 is a major difference, he tells! Well, that's a matter of taste to me …
Reporter 1: And the game is over! The referee points to the Dieze in Bornstraße (at own expense) and the first players leave the field. The $\Psi$-team's captain asks for extra time with an institution for which the equilibrium is simultaneously sensitive to several voters … sounds slightly indecent to me, but risk-aversion might bring this about … In vain, however; this is too late. The game ends 2:2, and Beta Prime is playing next. Their supporters are already entering the stands.
Reporter 2: And with this session on strategic power well underway, and the prospects of a good day's measuring ahead, back to the studio.
Acknowledgements
We acknowledge encouraging comments from Matthew Braham. Our research has been generously supported by the Yrjö Jahnsson Foundation.
References
Banks, J.S. and Duggan, J. (2000) A Bargaining Model of Collective Choice, American Political Science Review 94: 73–88.
Banzhaf, J.F. (1965) Weighted Voting Doesn't Work: A Mathematical Analysis, Rutgers Law Review 19: 317–343.
Braham, M. and Holler, M.J. (2005) The Impossibility of a Preference-Based Power Index, Journal of Theoretical Politics 17: 137–157.
Dubey, P. and Shapley, L.S. (1979) Mathematical Properties of the Banzhaf Power Index, Mathematics of Operations Research 4: 99–131.
Felsenthal, D.S. and Machover, M. (1998) The Measurement of Voting Power – Theory and Practice, Problems and Paradoxes, Edward Elgar.
Holler, M.J. (1978) A Priori Party Power and Government Formation, Munich Social Science Review 4: 25–41.
Holler, M.J. and Packel, E.W. (1983) Power, Luck and the Right Index, Zeitschrift für Nationalökonomie (Journal of Economics) 43: 21–29.
Laruelle, A. and Valenciano, F. (2001) Shapley-Shubik and Banzhaf Indices Revisited, Mathematics of Operations Research 26: 89–104.
Napel, S. and Widgrén, M. (2004) Power Measurement as Sensitivity Analysis – A Unified Approach, Journal of Theoretical Politics 16: 517–538.
Napel, S. and Widgrén, M. (2005) The Possibility of a Preference-Based Power Index, Journal of Theoretical Politics 17: 377–387.
Napel, S. and Widgrén, M. (2006a) The European Commission – Appointment, Preferences, and Institutional Relations, CEPR Discussion Paper 5478.
Napel, S. and Widgrén, M. (2006b) The Inter-Institutional Distribution of Power in EU Codecision, Social Choice and Welfare 27: 129–154.
Nurmi, H. (1998) Rational Behaviour and Design of Institutions, Edward Elgar.
Owen, G. (1972) Multilinear Extensions of Games, Management Science 18: P64–P79.
Owen, G. (1995) Game Theory (3rd ed.), Academic Press.
Penrose, L.S. (1946) The Elementary Statistics of Majority Voting, Journal of the Royal Statistical Society 109: 53–57.
Rubinstein, A. (1982) Perfect Equilibrium in a Bargaining Model, Econometrica 50: 97–109.
Shapley, L.S. (1953) A Value for n-Person Games, in H.W. Kuhn and A.W. Tucker (eds) Contributions to the Theory of Games II, Princeton University Press, 307–317.
Shapley, L.S. and Shubik, M. (1954) A Method for Evaluating the Distribution of Power in a Committee System, American Political Science Review 48: 787–792.
7. Modified Power Indices for Indirect Voting Guillermo Owen Department of Applied Mathematics, Naval Postgraduate School, Monterey, USA
Ines Lindner Department of Econometrics, Free University Amsterdam, The Netherlands
Bernard Grofman Department of Political Science, University of California, Irvine, USA
1. Introduction
The Electoral College remains a controversial feature of U.S. political decision-making. After most U.S. presidential elections, there are calls for passage of a constitutional amendment either to abolish it or to 'reform' it substantially. There are numerous complaints about the Electoral College, of which the most important is the potential for the winner of the Electoral College majority to be a popular vote loser.

Consider three assertions that often surface in the debates about the political impact of the Electoral College.

First, the Electoral College is alleged to benefit the smaller states. Here the argument is simply that the failure of the Electoral College to satisfy the 'one person, one vote' standard by overweighting the seat shares of the smaller states disproportionately advantages those states in terms of their influence on presidential outcomes. An implication of this claim is that, ceteris paribus, candidates should spend more time and money campaigning in the smaller states than their populations would otherwise justify.

Second, the winner-take-all feature of statewide voting used in the Electoral College by 48 of the 50 states (and the District of Columbia) is alleged to benefit the larger states. Here, we have the argument, based on game theoretic ideas about pivotal power, that the Electoral College should disproportionately focus candidate attention on the largest states, since it is claimed that, ceteris paribus, the citizens in those states have a likelihood of being pivotal in the election in terms of turning a losing coalition of states into a winning one that is more than proportional to their state's share of Electoral College votes (Brams and Davis 1974).
Third, it has recently been suggested that the Electoral College operates to benefit the states experiencing close contests for the presidency, by focusing candidate attention only on the relative handful of potentially competitive states, leaving much of the country barely aware that a presidential election is going on.

It might appear obvious that these assertions cannot all be true. In particular, it is far from intuitive how the Electoral College might structure incentives so as to simultaneously make it more likely that candidates would campaign in both the largest states and the smallest states at levels higher than the population of those states would seem to merit. Yet, as we will see, we can construct models in which this underrepresentation of the states of middling population can occur. However, unless closeness and size are perfectly correlated, or unless the effects of state size and level of competition on campaign investments act in a completely additive fashion, we need to follow up on a point made in Brams and Davis (1974: 132) about the desirability of relaxing their restrictive assumption that each state's already decided voters are divided equally between the two parties, by making use of poll data about closeness.

An important distinction to make here is that between a priori voting power, which is based entirely on the laws and description of the voting process, and the actual power, which depends on likely coalitions. Since we are specifically considering political processes, it is clear that certain coalitions (e.g., coalitions among voters with similar ideology, or among voters living in a given area) are more likely than others. We discuss both types of power, but will give modifications mainly for the second (practical) case.

There are several power indices in the literature. The best known are the Shapley-Shubik (1954) and Banzhaf (1965)/Coleman (1971) indices. Application of these indices shows that the larger states (California, New York, etc.) have substantially greater power than one would normally expect. Owen (1975; 1995: 302) found that – on the basis of 1970 census figures – a voter in California had, a priori, 2.86 times as much power as a voter in North Dakota. This was so even though North Dakota had then 2.87 times as many electoral votes per capita as California. The question is whether this rather unintuitive result is reasonable; if not, we would like to suggest modifications. We will limit ourselves to discussion and modifications of the Shapley-Shubik index. Other power indices give similar results; it is not necessary to discuss them here.
2. Multilinear Extensions
Our approach to the power index will be based on the multilinear extension (Owen 1972). Let $(N, v)$ be an n-person TU game in characteristic function form. Then the multilinear extension
$$f(q_1,\ldots,q_n) = \sum_{S\subseteq N}\Bigl\{\prod_{j\in S} q_j \prod_{j\in N\setminus S}(1-q_j)\Bigr\}\, v(S) \qquad (1)$$
represents the expected worth, $E[v(\xi)]$, of a random coalition $\xi$, given that each player, i, has probability $q_i$ of belonging to the coalition, and that all these probabilities are independent. The partial derivative $f_i \equiv \partial f/\partial q_i$ represents the expected marginal contribution, $v(\xi\cup\{i\}) - v(\xi\setminus\{i\})$, of i to this random coalition. Now the Shapley value can be obtained by the formula, from Owen (1972),
$$Ш_i[v] = \int_0^1 f_i(t,\ldots,t)\,dt \qquad (2)$$
in which the Russian letter Ш (sha) stands for the Shapley value. This formula can be interpreted by the following parable: the n players in a game have agreed to meet in a given place, at a given time. Because of random fluctuations in watches, unforeseen delays, etc., they in fact arrive in some random order. Each one's arrival time is a random variable, $X_i$; these n random variables are independent and have identical distributions. So long as this is a continuous distribution, there is no loss of generality in assuming that it is a uniform distribution over the unit interval. We assume, as described in Shapley (1953), that, on arrival, player i is paid his marginal contribution to the coalition consisting of those players who have already arrived. Then the value $Ш_i[v]$ is precisely player i's expected payment. Suppose, however, that the n players' arrival times are not identically distributed. (One is habitually tardy; another is an early riser, etc.) We let $g_i$ be the cumulative distribution of i's arrival time, i.e. $g_i(t) = \Pr\{X_i \le t\}$, and assume these variables are independent and absolutely continuous (so each can be represented by a density function $g_i'$). Then
$$\tilde{Ш}_i = \int_A^B f_i\bigl(g_1(t),\ldots,g_n(t)\bigr)\, g_i'(t)\,dt \qquad (3)$$
is the expected payment to i under these assumptions.¹
1 In formula (3), A and B should be chosen so that all $g_i(A) = 0$ and all $g_i(B) = 1$. Since this may not be practical (e.g., some distribution may have infinite support), we merely require that they be close to 0 and 1 respectively.
To see how this works, we give two easy examples, with three players each, and normal distributions for their arrival times.
Example 1. Consider a three-person situation, where any two of the voters form a winning coalition. In this case, the multilinear extension is given by
$$f(q_1, q_2, q_3) = q_1 q_2 + q_1 q_3 + q_2 q_3 - 2 q_1 q_2 q_3.$$
The partial derivatives here are $f_1 = q_2 + q_3 - 2 q_2 q_3$, and similarly for the other two. Let the three voters' times of arrival be normally distributed, with means and standard deviations
$$\mu_1 = 0.4,\ \sigma_1 = 0.1; \qquad \mu_2 = 0.5,\ \sigma_2 = 0.2; \qquad \mu_3 = 0.7,\ \sigma_3 = 0.1.$$
We will let $g_i(t) = \Pr\{X_i \le t\} = \Phi\bigl((t - \mu_i)/\sigma_i\bigr)$
where $\Phi$ is the standard normal distribution function. Thus
$$g_1(t) = \Phi(10t-4), \quad g_2(t) = \Phi(5t-2.5), \quad g_3(t) = \Phi(10t-7).$$
Note that, for all three, we have $g_i(0)$ very close to 0, and $g_i(1)$ very close to 1. Thus it should suffice to let $A = 0$ and $B = 1$ in our integration formula above. (If a more precise result were necessary we could let $A = -1$ and $B = 2$.) From the above, we obtain the densities (derivatives)
$$g_1'(t) = 10\,\varphi(10t-4), \quad g_2'(t) = 5\,\varphi(5t-2.5), \quad g_3'(t) = 10\,\varphi(10t-7)$$
where $\varphi$ is the standard normal density function, $\varphi(x) = (2\pi)^{-1/2}\exp\{-x^2/2\}$. The integration formula leads to the result
$$\tilde{Ш} \approx (0.323,\ 0.490,\ 0.187).$$
Thus, player 2, whose expected time of arrival is in the middle, has the advantage, though he will frequently (more than half the time) shift out of the middle position. Player 1, whose expected position is more moderate than that of player 3, does in fact considerably better than 3. Note that this effectively assumes motion 'from left to right', with a coalition forming as the left-most members (those closest to 0) join first, then those in the middle, and finally those on the right (closest to 1). We can imagine as well motion from right to left (the reverse order), but in fact this gives the same results as before. This is to be expected where the voting game is decisive: for every coalition S, either S or N\S (but not both) is a winning coalition. For such games, an order and the reverse order give the same result.
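The integration is easy to reproduce with standard numerical tools; the following Python sketch (our illustration using SciPy, not the authors' code) evaluates formula (3) for Example 1 with $A = 0$ and $B = 1$:

```python
from scipy.integrate import quad
from scipy.stats import norm

mu    = [0.4, 0.5, 0.7]
sigma = [0.1, 0.2, 0.1]
g  = [lambda t, m=m, s=s: norm.cdf(t, loc=m, scale=s) for m, s in zip(mu, sigma)]
dg = [lambda t, m=m, s=s: norm.pdf(t, loc=m, scale=s) for m, s in zip(mu, sigma)]

def f_i(i, q):
    """Partial derivative of f = q1 q2 + q1 q3 + q2 q3 - 2 q1 q2 q3 w.r.t. q_i."""
    j, k = [x for x in range(3) if x != i]
    return q[j] + q[k] - 2 * q[j] * q[k]

# Formula (3) with A = 0, B = 1.
vals = [quad(lambda t, i=i: f_i(i, [gj(t) for gj in g]) * dg[i](t), 0, 1)[0]
        for i in range(3)]
print([round(v, 3) for v in vals])   # text reports approx. (0.323, 0.490, 0.187)
```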
Example 2. Consider a similar three-person situation, with the same winning coalitions and the same multilinear extension. The difference will be in the three voters' times of arrival, now characterized by
$$\mu_1 = 0.5,\ \sigma_1 = 0.1; \qquad \mu_2 = 0.5,\ \sigma_2 = 0.2; \qquad \mu_3 = 0.5,\ \sigma_3 = 0.05.$$
We continue as in Example 1. The integration formula now leads to the result
$$\tilde{Ш} \approx (0.369,\ 0.166,\ 0.465).$$
In this case, we find that player 3 is favored, mainly because his smaller variance means he will generally be closer to the center of the distribution, and thus more frequently in the middle, between the other two. But note that, if the voting game required unanimity, the situation would be quite different: in this case, we would find $\tilde{Ш} \approx (0.316,\ 0.417,\ 0.267)$. Thus, where the expected times of arrival are different (as in Example 1), those players with expected positions near the median will be advantaged. Where the expected arrival times of the players are all equal (as in Example 2), and a simple majority of the votes is necessary to win, the player with the smaller variance is generally advantaged. (On the other hand, with a supermajority necessary, the situation may well be different.) What is not obvious from the examples is that, when there are many players, the advantage will (asymptotically) be inversely proportional to the square root of the variance.
2.1 The Electoral College
Let us see how this applies to the Electoral College. There are n players (states), with differing numbers of electoral votes, depending on the state's population. Let v be the n-person game among the states. Let $m_j$ be the number of voters in state j, and let $r_j$ be the number of votes needed to determine the state's electors, which is in this case $(m_j + 1)/2$. We shall let $w_j$ represent the simple game, with $m_j$ players, in which the minimal winning coalitions are precisely those with exactly $r_j$ members. Let $G_j(y_1,\ldots,y_{m_j})$ be the multilinear extension of game $w_j$, and define
$$g_j(t) = G_j(t,\ldots,t). \qquad (4)$$
It is easy to see that $g_j(t)$ is in this case the probability that a binomial random variable with parameters $m_j$ and t be at least equal to $r_j$. This means that if each of the $m_j$ voters arrives according to a uniform distribution on the unit interval, $g_j(t)$ is the probability that a majority (or more) of these has arrived not later than time t; in other words, it is the probability that state j will have its 'time of arrival' not later than t (Owen 1975).
Assuming $m_j$ to be large, we can approximate this binomial probability by the normal distribution with mean $tm_j$ and variance $t(1-t)m_j$. Approximately, then,
$$g_j(t) = \Phi\!\left(\frac{tm_j - r_j}{\sqrt{t(1-t)\,m_j}}\right). \qquad (5)$$
Next, we calculate the function $f(g_1,\ldots,g_n)$. Let state j have $w_j$ electoral votes. Then, since $g_j(t)$ is the probability that state j arrives on or before time t, the number of electoral votes that state j will have contributed by time t can be thought of as a random variable with mean $w_j g_j(t)$ and variance $w_j^2\, g_j(t)(1-g_j(t))$. It follows that the number of electoral votes Y in the random coalition $\xi$ has mean
$$M_Y(t) = \sum_j w_j\, g_j(t)$$
and variance
$$\sigma_Y^2(t) = \sum_j w_j^2\, g_j(t)\bigl(1 - g_j(t)\bigr).$$
Given the number of states, it is possible to approximate Y by a normal random variable having the same mean and variance. Thus f can be expressed in terms of the normal distribution function $\Phi$, and the calculation is then a straightforward problem in integration (easily carried out with current computer packages, using Simpson's rule). The reader is invited to read Owen (1975) for details of this integration. It can also be seen that, since the ratios $r_j/m_j$ are all nearly equal to $\frac12$, the change of variable
$$u = \frac{t - \frac12}{\sqrt{t(1-t)}} \qquad (6)$$
gives us the much simpler expression
$$g_j = \Phi\bigl(u\sqrt{m_j}\bigr) \qquad (7)$$
where $\Phi$ is the normal distribution function. Thus, the effect of differences in population translates into a difference in variance: the time of arrival of state j is now a normal random variable with mean 0 and variance $1/m_j$.²
2 It should be noted that, under this change of variable, the values 0 and 1 for the original variable, t, transform into $-\infty$ and $+\infty$, respectively, for u. The integral (3) becomes an improper integral, but, in practice, it should suffice to let u run from $-K$ to $+K$, where K is large enough so that $K\sqrt{m_i}$ is of the order of 3 for the smallest of the constituencies.
Now, the Shapley value assumes that (a) all voters have their 'time of
arrival' identically distributed and (b) these are independent, so that, in fact, formula (1) above holds. We will call this the hypothesis of universal population homogeneity and independence (HUPHI). This means that all the state positions have identical means, with variance inversely proportional to the population. The effect of this is that the larger states are more likely to be near the centre, and, as in Example 2 above, are more likely to be pivotal if a simple majority of the constituency weights is necessary to win. If, on the other hand, a super-majority (say, two thirds) is necessary, then these larger states are less likely to be pivotal (though of course they might still be stronger, simply because their voting weights are greater).
2.2 First Modification: Introduction of Undecided Voters
Suppose, now, that the HUPHI does not hold. Different states have different distributions for their populations. We shall use a relatively simple model for this; there are of course several other possibilities. We assume that, in each state, part of the population is definitely on the left, part is definitely on the right, and the remaining voters (the undecideds) are the ones in play. In our 'time of arrival' parable, the left wing arrives immediately at time 0, the right wing arrives at time 1, and the undecideds arrive according to a uniform distribution on the unit interval. As before, we wish to find $g_j$, the probability distribution of $X_j$, state j's time of arrival. Let $m_j$ be the voting population of constituency j, and let $a_j$ and $b_j$ be the populations of the left- and right-wing blocs respectively. Then the undecideds number $c_j = m_j - a_j - b_j$. Assume, as before, that $r_j = (m_j+1)/2$ votes are needed to carry the constituency; then the left-wing party requires $s_j = r_j - a_j$ votes from among the undecideds.³ We will therefore consider only states for which $r_j$ is greater than both $a_j$ and $b_j$. Continuing as above, we would then find that the state 'arrives' when exactly $s_j$ of the $c_j$ undecideds have arrived:
$$g_j(t) = \Phi\!\left(\frac{tc_j - s_j}{\sqrt{t(1-t)\,c_j}}\right). \qquad (7)$$
If we let $H_j = s_j/c_j$ we obtain
$$g_j(t) = \Phi\!\left((t - H_j)\sqrt{\frac{c_j}{t(1-t)}}\right). \qquad (8)$$
3 It may, of course, happen that $a_j \ge r_j$. In such a case the left-wing party will certainly carry the state, $X_j$ will be equal to 0, $g_j$ will be constantly equal to 1, and the voters in this state will, in our analysis, have zero power. Similarly, if $b_j \ge r_j$, then the right-wing party will certainly win the state, $X_j$ will be equal to 1, $g_j$ will be constantly equal to 0, and once again the voters here have no power.
The density is then given by
$$g_j'(t) = \frac{(t + H_j - 2 H_j t)\,\sqrt{c_j}}{2\,(t - t^2)^{3/2}}\;\varphi\!\left((t - H_j)\sqrt{\frac{c_j}{t(1-t)}}\right). \qquad (9)$$
To obtain the modified power, we would then modify the calculations in Owen (1975; 1995: 298–299), using these values for the functions $g_j$. In principle, there is no great difficulty – given the existence of mathematical packages for computers – in carrying out the necessary integration. Some care must of course be taken to avoid values of t which are too close to 0 or 1, but such values would be of importance only if $s_j$ is extremely close to either 0 or $c_j$, and these states will, in our model, have negligible power. We wish, however, to look at some qualitative properties of this modified index, making a simplification similar to the one above (equation 7). Unfortunately, since the $H_j$ are not all equal, the simplifying transformation (6) is not available. We note, however, that $g_j(H_j) = \frac12$, so that the median entry time for constituency j is $H_j$. The calculations at this point become somewhat complicated. Nevertheless, it is not too difficult to prove that, if two constituencies, i and j, have $H_i < H_j$ and undecided populations $c_i \le c_j$ respectively, then
$$\Pr\{X_i > X_j\} \le \Phi\bigl([H_i - H_j]\sqrt{2 c_i}\bigr).$$
If we further assume that $c_i$ is of the order of 20,000 (a rather small number given the size of today's electorates), we find that
$$\Pr\{X_i > X_j\} \le \Phi\bigl(200\,[H_i - H_j]\bigr).$$
Now, we know that $\Phi$ is a strictly increasing function, with $\Phi(-2) = 0.023$. Thus, if $H_j - H_i > 0.01$, we find that $\Pr\{X_i > X_j\} < 0.023$. We conclude that, unless the quantities $H_i$ are very close for several states, the probability of an out-of-order arrival is extremely low, and thus almost all of the voting power will reside in a small number (possibly two or three) of median states.⁴
2.3 Discussion of the Above Indices
As we have seen, the classical Shapley value (or for that matter the Banzhaf-Coleman index) gives excessive a priori value to the larger constituencies, essentially because the HUPHI (hypothesis of universal population homogeneity and independence) causes all constituencies to have the same expected time of arrival, with smaller variances for the larger ones. When the voting game requires a simple majority (i.e., one half plus one) of the
4 The authors will be happy to provide details of this analysis.
weighted votes, a smaller variance increases the probability that a state lies in a pivotal position (see Example 2 above). It is of course also true that, for the Shapley value – though not the B-C index – a requirement for a super-majority will weaken the state with smaller variance. On the other hand, the inclusion of undecideds, etc., which we mentioned above, will give differing expected times of arrival for the states. However, it allots almost all the power to a very few constituencies – those with expected time of arrival nearest the median. It is as if, in the last (2004) presidential election, we need only consider four states: Ohio with 20 votes, Nevada, 5, New Mexico, 5, and Iowa, 7. To see this, note that Ohio was in fact the pivotal state. According to our analysis above, only states with a two-party division of the vote within 1% of Ohio's would be considered possible pivots: these are precisely the four states mentioned. Among the remaining states, Bush had 249 votes, while Kerry had 252. To win, Bush needed 269 votes, and Kerry, 270. (This difference is due to the fact that, in case of an Electoral College tie, the election would be resolved in the House of Representatives, where the Republicans had a sizable majority of the states.) Thus whoever carried Ohio would win the election. We should perhaps include Florida and Wisconsin, where the division differed from that of Ohio by about 1.4%. Then Ohio would not have all the power, but (as discussed above) the probability that Florida would end up to the left of Ohio, or Wisconsin to the right, is of the order of less than 2%. We would conclude that Ohio had perhaps 98% of the total power; the remaining 2% would be divided among Florida, Wisconsin, and the three other states mentioned above. The other 44 states, and the District of Columbia, would have truly negligible power.

Now, it is our belief that the greatest problem with the indices thus far considered is that they fail to take into account correlation among the voters of a given state. To see why correlation is important, consider the following straw man: Suppose that, at birth (or on naturalization), each citizen of the United States were assigned a two-letter symbol. There are 51 of these, and they are assigned according to a certain probability distribution. For example, the symbol CA has probability 0.12, the symbol WY has probability 0.017, etc. Otherwise the assignment is totally random: an individual's symbol has no relation to his family or place of birth, and even twin siblings can have symbols as totally different as MA and UT. These symbols seem to be quite meaningless, except that they are kept in the electoral rolls. Then, for a presidential election, votes are totaled according to symbol. Each symbol is then given a number of 'supervotes', proportional to the original probability distribution, and these are assigned (on a winner-take-all basis) to the candidate with the most votes from among those with a given symbol. Now, it is hard to imagine that anyone would espouse this proposal. So
how is it different from the Electoral College? The answer is that in the Electoral College, people are grouped together by their state of residence rather than by some meaningless pair of letters (akin perhaps to the last two digits in the social security number). Because of this, there is meaningful correlation (of opinion) among voters who are grouped together in the Electoral College system, but not in our straw man proposal. However, the classical voting indices essentially assume that voters’ decisions are totally independent and thus no different from the situation in the straw man proposal.
3. An Alternative Approach
Granted that the two methods described above give counterintuitive results, we feel that a different approach, assuming substantial correlation among nearby voters, and especially among voters within a given state, should be considered. With such a correlation, the variance of the state arrival times would be considerably larger than given by our model – certainly much larger than $1/m_i$ or even $1/c_i$. As Gelman et al. (2004) point out, empirical evidence suggests that the variance is closer to $m_i^{-0.2}$. To model the correlation, we use a partial differential equation, representing the voters by points, x, on a line, and time by t. The coordinate x corresponds to physical location: if $|x - y|$ is small, then voters x and y are 'neighbours' who can talk to each other. Assume that there are two parties, which we generically call the right-of-center and the left-of-center. We let $u(x,t)$ represent voter x's state of mind (his feeling towards the two parties) at time t. Specifically, we assume that x has a 'usual' state. Then $u(x,t) > 0$ means that, at time t, x is more likely than usual to vote for the right-of-center party; similarly, $u(x,t) < 0$ means he is more likely than usual to vote for the left-of-center party. Now, we assume that voter x is influenced by the voters near him, and has a tendency to move in the same way that they do. We will represent this by the equation
$$\frac{\partial u}{\partial t} = k\,\frac{\partial^2 u}{\partial x^2} + f(x,t), \qquad u(x,0) = g(x), \quad -\infty < x < \infty,\ t \ge 0 \qquad (10)$$
where k is a constant of proportionality, corresponding to the speed with which political news and opinions spread among neighbours, and where the forcing term, f, represents external events, and may even be split into two parts, one corresponding to random events, and the other to efforts by one or the other of the two parties. To estimate the correlation, assume there is no variation until time 0.
Assume then a unit shock, concentrated at some point $x = x_0$, at time $t = 0$. This corresponds to an initial condition
$$u(x,0) = \delta(x - x_0)$$
where $\delta$ is the Dirac delta function. Assume also that there is no forcing term, i.e. $f \equiv 0$. The reader may recognize the above as the heat equation that appears in most courses on partial differential equations. It has the solution
$$u(x,t) = (4\pi k t)^{-1/2} \exp\left\{-\frac{(x - x_0)^2}{4kt}\right\}. \qquad (11)$$
Of course this shock could happen at any point $x_0$ within the state. There may be several such shocks, at possibly different times, but we assume these are uncorrelated. Thus the correlation between different voters can be given by the autocorrelation of this function. Some analysis tells us that the relative correlation is
$$\rho(x,y,t) = \exp\left\{-\frac{(x - y)^2}{8kt}\right\}. \qquad (12)$$
Now, for small values of t, this will be a very sharp curve, with a strong maximum at $x = y$, and falling to 0 very quickly. For large t, on the other hand, this will be a very flat curve. Suppose, now, that the population of a constituency occupies a line segment of length c, which we can assume, without loss of generality, to be the interval $[0,c]$. Assume also that each individual's political stance has variance 1. Then the variance of the sum of their positions is given by
$$\int_0^c\!\!\int_0^c \exp\left\{-\frac{(x - y)^2}{8kt}\right\} dx\,dy \qquad (13)$$
which, after some calculations, reduces to
$$\mathrm{Var} = c\,(8\pi k t)^{1/2}\,\mathrm{erf}\bigl\{c\,(8kt)^{-1/2}\bigr\} - 8kt\left[1 - \exp\left\{-\frac{c^2}{8kt}\right\}\right] \qquad (14)$$
and the variance of the mean is this same quantity divided by $c^2$.⁵
5 Note: if each individual's political stance has a variance of $\sigma^2$ rather than 1, then the above result should be multiplied by $\sigma^2$.
We find here that for small c (small compared to $\sqrt{kt}$), Var, as given above, is almost equal to $c^2$. Thus the variance of the mean is only slightly smaller than 1, the variance of position for an individual voter. On the other hand, for large c, Var can be significantly smaller than $c^2$ and is asymptotically proportional to c. To see this behaviour, we calculate the quantities Var and Var/c² for several values of c and for $kt = 125{,}000$ (Table 1).
Table 1

c        Var          Var/c²
100      9983         0.99830
200      39735        0.99340
300      88674        0.98530
500      240080       0.96030
1000     861524       0.86150
2000     2546630      0.63670
4000     6089800      0.38060
6000     9634700      0.26760
10000    16724500     0.16720
50000    87622500     0.03505
100000   176245000    0.01762
As may be seen, Var/c² decreases very slowly until about $c = 500$. Afterwards, it decreases much more rapidly, until for $c \ge 10000$ it can be approximated reasonably well by $1760/c$. In effect, this means that the variance behaves as though the c undecided voters arrived, not independently, but, rather, in independent bunches of 1760. This will hold so long as the actual number of undecideds is at least of the order of 10000. Of course the numerator 1760, the size of the independent bunches, depends on both k and t. Specifically, for large c, it will be proportional to the square root of kt. It is difficult to decide how large this should be, but in the next section we will try to justify the value chosen.
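The entries of Table 1 follow directly from the closed form (14); a minimal Python sketch (our own check, with $kt = 125{,}000$ as in the table):

```python
from math import erf, exp, pi, sqrt

def var_sum(c, kt=125_000):
    """Equation (14): variance of the summed positions of c correlated voters."""
    s = 8 * kt
    return c * sqrt(pi * s) * erf(c / sqrt(s)) - s * (1 - exp(-c * c / s))

for c in (100, 500, 1000, 10000, 100000):
    v = var_sum(c)
    print(c, round(v), round(v / c ** 2, 5))   # matches Table 1 up to rounding
```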
4. Statistical Analysis
We will make the assumption that Republican candidate i's share of the two-party vote in state j can be approximated by
$$q_{ij} = \mu_i + z_j + e_{ij} \qquad (15)$$
where $\mu_i$ represents the candidate's personal popularity, $z_j$ is state j's Republican tendency (low for MA and DC, high for UT and WY), and $e_{ij}$ is an additional (random) term, representing perhaps good or bad luck for the candidate in that state. Typically, $\mu_i$ would be of the order of $+0.1$ for a very popular candidate (say, Nixon in 1972), $-0.1$ for an unpopular candidate (Goldwater in 1964), and in between for other candidates. In fact, we will not worry about the $\mu_i$, since they do not affect the states' rankings, and we are only interested in the probability that a given state be a pivot. We estimate $z_j$ by looking at the Republican share of the vote in the last five elections (1988–2004). Since positions evolve over time, we have discounted past elections by a factor of 0.8 for each cycle. Thus, for each state, we calculate the quantity
Table 2

State   z_j    H_j     w_j   Cumulative EV
DC      102    0.000    3      3
MA      368    0.000   12     15
RI      370    0.000    4     19
NY      395    0.000   31     50
VT      418    0.090    3     53
HI      420    0.100    4     57
MD      438    0.190   10     67
IL      440    0.200   21     88
CT      441    0.205    7     95
CA      445    0.225   55    150
ME      452    0.260    4    154
WA      454    0.270   11    165
DE      456    0.280    3    168
MN      458    0.290   10    178
NJ      458    0.290   15    193
OR      469    0.345    7    200
MI      472    0.360   17    217
PA      474    0.370   21    238
IA      477    0.385    7    245
WI      482    0.410   10    255
NM      488    0.440    5    260
WV      496    0.480    5    265
AR      497    0.485    6    271
MO      501    0.505   11    282
NH      504    0.520    4    286
OH      506    0.530   20    306
FL      517    0.585   27    333
NV      518    0.590    5    338
CO      520    0.600    9    347
LA      523    0.615    9    356
TN      530    0.650   11    367
AZ      536    0.680   10    377
VA      541    0.705   13    390
GA      549    0.745   15    405
NC      549    0.745   15    420
KY      551    0.755    8    428
MT      568    0.840    3    431
AL      570    0.850    9    440
SC      573    0.865    8    448
IN      574    0.870   11    459
SD      575    0.875    3    462
TX      577    0.885   34    496
MS      587    0.935    6    502
KS      596    0.980    6    508
OK      600    1.000    7    515
ND      605    1.000    3    518
AK      632    1.000    3    521
NE      637    1.000    5    526
WY      644    1.000    3    529
ID      658    1.000    4    533
UT      686    1.000    5    538
zj
s 2004, j 0.8s 2000, j 0.64s 1996, j 0.512s 1992, j 0.4096s 1998, j 3.3616
(16)
where $s_{k,j}$ is the Republican share of the two-party vote in state j in the year-k presidential election. This allows us to give a position to each state. (The $z_j$ values in Table 2 should be divided by 1000.) We obtain an ordering of the states, from most Democratic to most Republican. The electoral votes, $w_j$, are those of the 2000 reapportionment (Table 2). We next assume that 20% of the voters in each state are undecided, and that they have, in the past, divided evenly between the two candidates. Thus, for state j, where the Republican share of the vote is $z_j$, we assume that $d_j = 0.9 - z_j$ of the voters are Democratic stalwarts, and $e_j = z_j - 0.1$ are Republican stalwarts. Then, if $z_j < 0.4$, the state is not in play, as the Democrats have a majority even without any of the undecideds. Similarly, if $z_j > 0.6$, then the Republicans have a certain majority. In our 'time of arrival' parable, the former of these arrive at time 0, while the latter arrive at time 1. Thus, there are 7 safe states for the Republicans, and 3 safe states, as well as DC, for the Democrats. There are 40 states with $0.4 < z_j < 0.6$. These states are theoretically in play, though such states as Kansas, at 0.596, and Vermont, at 0.418, seem safe for their parties except in case of a landslide of historical proportions. We then note that, for a state to 'arrive', at least a fraction $H_j$ of its undecided voters must join, where
$$H_j = 5 z_j - 2. \qquad (17)$$
As discussed above, this $H_j$ is also the median value of $X_j$, state j's arrival time. As may be seen, Arkansas is the median state in this ordering, with $z_{AR} = 0.497$. Then $H_{AR} = 0.485$ is a good approximation to the median time at which a winning coalition forms. It is still necessary to get a distribution for these times of arrival. As mentioned above, the effect of correlation is to increase the variance of arrivals. Thus, with $c_j$ undecideds, and independence, the variance would be $\sigma^2/c_j$, where $\sigma^2$ is the variance of each individual's time. However, we saw that (under certain assumptions) the group variance was as if the undecideds arrived in bunches of 1760 at a time, these bunches coming independently. Moreover, this bunch size of 1760 depends on certain parameters which are admittedly uncertain. Let us assume, then, that the size of the bunches is, rather, 2500. Now, if we assume that 35% of the state population vote, and that 20% of these are undecided, we find that the undecideds are 7% of $p_j$, the state population. However, these arrive in bunches of 2500. Thus the number of undecided bunches would be
$$c_j = \frac{0.07\,p_j}{2500} = 0.000028\,p_j. \qquad (18)$$
Table 3

State   H_j     p_j (in 1000s)   c_j       H_j ± 1/√c_j   Cumulative EV
DC      0.000     551             15.430   0                 3
MA      0.000    6399            179.170   0                15
RI      0.000    1076             30.130   0                19
NY      0.000   19255            539.140   0                50
VT      0.090     623             17.440   [0, 0.33]        53
HI      0.100    1275             35.700   [0, 0.27]        57
MD      0.190    5600            156.800   [0.11, 0.27]     67
IL      0.200   12763            357.364   [0.15, 0.25]     88
CT      0.205    3510             98.280   [0.10, 0.31]     95
CA      0.225   36132           1011.700   [0.19, 0.26]    150
ME      0.260    1322             37.020   [0.10, 0.42]    154
WA      0.270    6288            176.060   [0.19, 0.35]    165
DE      0.280     844             23.630   [0.07, 0.49]    168
MN      0.290    5133            143.720   [0.21, 0.37]    178
NJ      0.290    8718            244.100   [0.23, 0.35]    193
OR      0.345    3641            101.950   [0.25, 0.44]    200
MI      0.360   10121            283.390   [0.30, 0.42]    217
PA      0.370   12430            348.040   [0.31, 0.43]    238
IA      0.385    2966             83.050   [0.28, 0.49]    245
WI      0.410    5536            155.010   [0.33, 0.49]    255
NM      0.440    1928             53.980   [0.30, 0.58]    260
WV      0.480    1817             50.880   [0.34, 0.62]    265
AR      0.485    2779             77.810   [0.37, 0.60]    271
MO      0.505    5800            162.400   [0.43, 0.58]    282
NH      0.520    1310             36.680   [0.35, 0.69]    286
OH      0.530   11464            320.990   [0.47, 0.59]    306
FL      0.585   17790            498.120   [0.54, 0.63]    333
NV      0.590    2415             67.620   [0.47, 0.71]    338
CO      0.600    4665            130.620   [0.51, 0.69]    347
LA      0.615    4524            126.670   [0.53, 0.70]    356
TN      0.650    5963            166.960   [0.57, 0.73]    367
AZ      0.680    5939            166.290   [0.60, 0.76]    377
VA      0.705    7567            211.880   [0.64, 0.77]    390
GA      0.745    9073            254.040   [0.68, 0.81]    405
NC      0.745    8683            243.120   [0.68, 0.81]    420
KY      0.755    4173            116.840   [0.66, 0.85]    428
MT      0.840     936             26.210   [0.64, 1]       431
AL      0.850    4558            127.620   [0.76, 0.94]    440
SC      0.865    4255            119.140   [0.77, 0.96]    448
IN      0.870    6272            175.620   [0.79, 0.95]    459
SD      0.875     776             21.730   [0.66, 1]       462
TX      0.885   22860            640.080   [0.85, 0.92]    496
MS      0.935    2921             81.790   [0.82, 1]       502
KS      0.980    2745             76.860   [0.87, 1]       508
OK      1.000    3548             99.340   1               515
ND      1.000     637             17.840   1               518
AK      1.000     664             18.590   1               521
NE      1.000    1759             49.250   1               526
WY      1.000     509             14.250   1               529
ID      1.000    1429             40.010   1               533
UT      1.000    2470             69.160   1               538
Table 4

State   H_j     p_j (in 1000s)   EV   Piv. prob.   Power (× 10⁻⁹)
DC      0.000     551             3   0.0000         0.0
MA      0.000    6399            12   0.0000         0.0
RI      0.000    1076             4   0.0000         0.0
NY      0.000   19255            31   0.0000         0.0
VT      0.090     623             3   0.0000         0.0
HI      0.100    1275             4   0.0000         0.0
MD      0.190    5600            10   0.0000         0.0
IL      0.200   12763            21   0.0000         0.0
CT      0.205    3510             7   0.0000         0.0
CA      0.225   36132            55   0.0000         0.0
ME      0.260    1322             4   0.0021        22.7
WA      0.270    6288            11   0.0000         0.0
DE      0.280     844             3   0.0059        99.9
MN      0.290    5133            10   0.0000         0.0
NJ      0.290    8718            15   0.0000         0.0
OR      0.345    3641             7   0.0067        26.3
MI      0.360   10121            17   0.0026         3.7
PA      0.370   12430            21   0.0024         2.7
IA      0.385    2966             7   0.0360       173.3
WI      0.410    5536            10   0.0636       164.1
NM      0.440    1928             5   0.0838       623.6
WV      0.480    1817             5   0.1017       799.6
AR      0.485    2779             6   0.1472       756.7
MO      0.505    5800            11   0.1984       488.7
NH      0.520    1310             4   0.0606       660.9
OH      0.530   11464            20   0.1872       233.3
FL      0.585   17790            27   0.0583        46.9
NV      0.590    2415             5   0.0243       143.7
CO      0.600    4665             9   0.0102        31.3
LA      0.615    4524             9   0.0051        16.1
TN      0.650    5963            11   0.0021         5.0
AZ      0.680    5939            10   0.0000         0.0
VA      0.705    7567            13   0.0000         0.0
GA      0.745    9073            15   0.0000         0.0
NC      0.745    8683            15   0.0000         0.0
KY      0.755    4173             8   0.0000         0.0
MT      0.840     936             3   0.0000         0.0
AL      0.850    4558             9   0.0000         0.0
SC      0.865    4255             8   0.0000         0.0
IN      0.870    6272            11   0.0000         0.0
SD      0.875     776             3   0.0000         0.0
TX      0.885   22860            34   0.0000         0.0
MS      0.935    2921             6   0.0000         0.0
KS      0.980    2745             6   0.0000         0.0
OK      1.000    3548             7   0.0000         0.0
ND      1.000     637             3   0.0000         0.0
AK      1.000     664             3   0.0000         0.0
NE      1.000    1759             5   0.0000         0.0
WY      1.000     509             3   0.0000         0.0
ID      1.000    1429             4   0.0000         0.0
UT      1.000    2470             5   0.0000         0.0
We call $c_j$ the virtual number of undecided bunches.⁶ Of these virtual bunches, a fraction $H_j$ must arrive for state j to arrive. Thus we need
$$s_j = H_j\, c_j \qquad (19)$$
of these bunches. Then, in equation (4), $g_j(t)$ is equal to the probability that a binomial variable, with parameters $c_j$ and t, be at least equal to $s_j$. Approximating the binomial by a normal variable, we have the cumulative distribution function $g_j$ and its density $g_j'$ as given by equations (8) and (9) above, which we repeat here as (20) and (21):
$$g_j(t) = \Phi\!\left((t - H_j)\sqrt{\frac{c_j}{t(1-t)}}\right) \qquad (20)$$
$$g_j'(t) = \frac{(t + H_j - 2 H_j t)\,\sqrt{c_j}}{2\,(t - t^2)^{3/2}}\;\varphi\!\left((t - H_j)\sqrt{\frac{c_j}{t(1-t)}}\right). \qquad (21)$$
Unfortunately, $X_j$ is not normally distributed. The distribution $g_j$ is given in terms of the normal distribution, but its 'variance' unfortunately depends on t. Nevertheless, we note that, since $t(1-t) \le 0.25$, then, for all $t > H_j$ we must have
$$g_j(t) \ge \Phi\bigl[2\,(t - H_j)\sqrt{c_j}\bigr] \qquad (22)$$
while the opposite inequality will hold if $t < H_j$. Now, we know that $\Phi(1.96) = 0.975$, while $\Phi(-1.96) = 0.025$. It will follow that
$$g_j\!\left(H_j + \frac{0.98}{\sqrt{c_j}}\right) \ge 0.975, \qquad g_j\!\left(H_j - \frac{0.98}{\sqrt{c_j}}\right) \le 0.025 \qquad (23)$$
and we conclude that $H_j \pm 1/\sqrt{c_j}$ gives us better than a 95% interval for $X_j$. We now simplify matters by assuming that we can disregard anything outside the 95% interval $H_j \pm 1/\sqrt{c_j}$. In Table 3 we calculate this for all but the 10 states, and DC, mentioned above as always safe for one or the other of the two parties. The several states' populations, $p_j$, are the July 2005 Census Bureau estimates (www.factmonster.com). As may be seen, if we consider only the times within each state's 95% interval, a majority (270 electoral votes) can arrive no sooner than time 0.38
6 To see that this is a reasonable number, note that, for a standard-sized electoral district of 500,000 inhabitants, this gives us a total of 14 virtual bunches. As mentioned above, Gelman et al. (2004) suggest that the variance should behave as the −0.2 power of population, i.e. as if the population were replaced by its fifth root. But the fifth root of 500,000 is 13.8. This is not to say that our analysis is exact, only that it is not unreasonable.
(Arkansas's early time), and will certainly have arrived by time 0.59 (Missouri's late time). Thus, only those states whose interval overlaps (0.38, 0.59) can be pivots under our scheme. We can put the states in five categories: (a) safe for the Democrats: DC, RI, MA, NY, with 50 votes; (b) almost safe for the Democrats: VT, HI, MD, IL, CT, CA, WA, MN, NJ, with 136 votes; (c) in play: ME, DE, OR, MI, PA, IA, WI, NM, WV, AR, MO, NH, OH, FL, NV, CO, LA, TN, with 181 votes; (d) almost safe for the Republicans: AZ, VA, GA, NC, KY, MT, AL, SC, IN, SD, TX, MS, KS, with 141 votes; (e) safe for the Republicans: OK, ND, AK, NE, WY, ID, UT, with 30 votes. Only the 18 states 'in play' have a non-negligible probability of being the pivot. Since 186 votes are going to arrive early, the pivot will be that state (among these 18) to complete 84 votes. Thus, for example, Maine, with its 4 votes, must arrive at a time when other 'in play' states with at least 80, and not more than 83, electoral votes have arrived. This number of votes is of course an integer, but we will approximate via a continuous distribution. Then, if $Y_j(t)$ is the number of votes (not counting those of state j) to have arrived by time t, we find that the j-th partial derivative (where j refers to Maine) of the multilinear extension f, evaluated at $(g_1(t),\ldots,g_n(t))$, is given by
$$f_j\bigl(g(t)\bigr) = \Pr\bigl[79.5 \le Y_j(t) \le 83.5\bigr]. \qquad (24)$$
Thus the probability that ME is in fact the pivot will be given by
$$\tilde{Ш}_j = \int \Pr\bigl[79.5 \le Y_j(t) \le 83.5\bigr]\, g_j'(t)\,dt \qquad (25)$$
where the integral is taken over a sufficiently large interval, for example, ME's 95% interval, [0.10, 0.42]. For other states, the probability is given by a similar integral; the probability in the integrand must however be replaced by
$$f_j\bigl(g(t)\bigr) = \Pr\bigl[83.5 - w_j < Y_j(t) \le 83.5\bigr] \qquad (26)$$
where $w_j$ is state j's electoral weight (number of votes). The integral in (25) should be modified accordingly. It remains to determine the probability that $83.5 - w_j < Y_j(t) \le 83.5$. In fact, at time t, state k has arrived with probability $g_k(t)$. Thus it will have contributed $w_k$ votes with probability $g_k$ and 0 votes with probability $1 - g_k$. Thus the number of votes it has contributed is a random variable with mean $w_k g_k$ and variance $w_k^2\, g_k(1-g_k)$. Now $Y_j$ is the sum of these variables, over all states k other than j. The states are assumed to arrive
independently, and thus $Y_j$ has mean
$$M_j = \sum_{k \ne j} w_k\, g_k \qquad (27)$$
and variance
$$V_j = \sum_{k \ne j} w_k^2\, g_k (1 - g_k). \qquad (28)$$
Note that, since all the $g_k$ depend on t, so do $M_j$ and $V_j$. We will now assume that the number of states in play is sufficiently large that we can approximate $Y_j$ by a normal random variable with the given mean and variance. If that is so, then we have the approximation
$$f_j \approx \Phi\!\left(\frac{83.5 - M_j}{\sqrt{V_j}}\right) - \Phi\!\left(\frac{83.5 - w_j - M_j}{\sqrt{V_j}}\right). \qquad (29)$$
Finally, we must determine the individual voters' power. We note that a resident, k, of state j will be the pivot if (1) j is the pivot among the states, and (2) k is the median voter in state j. Since (apart from states in the two 'safe' categories) only undecided voters will be in the median position, we divide the state's pivot probability, $\tilde{Ш}_j$, by the number of undecided voters, which we had calculated as $0.07\,p_j$. This gives us the voting power of individuals in state j. Table 4 gives the approximate values.⁷ Entries in the last column should be multiplied by $10^{-9}$. As may be seen, the power is concentrated among the states near the middle of the order. West Virginia voters are the most powerful, with Arkansas, New Mexico and New Hampshire close behind. Missouri and Ohio seem to be the most important states. There seem to be certain anomalies, e.g. Oregon seems to be more powerful than Michigan, though the latter is more populous and more centrally located. The explanation seems to be that it is easier to 'move' the population of Oregon towards the center (there are fewer Oregonians than Michiganders), and so Oregon becomes an easier prize than Michigan. Admittedly these results depend quite heavily on the size of the 'virtual bunches'. In a subsequent article we will try to get a better statistical handle on these.
7 Integrations were done using Maple 10.

References
Banzhaf, J.F. (1965) Weighted Voting Doesn't Work: A Mathematical Analysis, Rutgers Law Review 19: 317–343.
Brams, S.J. and Davis, M. (1974) The 3/2's Rule in Presidential Campaigning, American Political Science Review 68: 113–134.
Chamberlain, G. and Rothschild, M. (1981) A Note on the Probability of Casting a Decisive Vote, Journal of Economic Theory 25: 152–162.
Cox, G.W. (1997) Making Votes Count: Strategic Coordination in the World's Electoral Systems, Cambridge University Press.
Duverger, M. (1959) Political Parties: Their Organization and Activity in the Modern State, John Wiley.
Edwards, G.C. (2004) Why the Electoral College is Bad for America, Yale University Press.
Felsenthal, D.S. and Machover, M. (1998) The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes, Edward Elgar.
Gaines, B. (1999) Duverger's Law and the Meaning of Canadian Exceptionalism, Comparative Political Studies 32: 835–861.
Gelman, A., Katz, J. and Bafumi, J. (2004) Standard Voting Power Indexes Do Not Work: An Empirical Analysis, British Journal of Political Science 34: 657–674.
Good, I.J. and Meyer, L.S. (1975) Estimating the Efficacy of a Vote, Behavioral Science 20: 25–33.
Grofman, B., Koetzle, W. and Brunell, T. (1997) An Integrated Perspective on the Three Potential Sources of Partisan Bias: Malapportionment, Turnout Differences, and the Geographic Distribution of Party Vote Shares, Electoral Studies 16: 457–470.
Groseclose, T. and Snyder, J.M. (2000) Vote Buying, Supermajorities, and Flooded Coalitions, American Political Science Review 94: 683–684.
Kimberling, W.C. (2004) The Electoral College, U.S. Federal Election Commission, Office of Election Administration.
Madison, J. (1823) Letter to John Hay, August 23, 1823, in M. Farrand (ed.) The Records of the Federal Convention of 1787 (rev. ed.), vol. 3, Yale University Press (1966).
Natapoff, A. (2004) The Electoral College, presentation to the Colloquium Series of the Institute for Mathematical Behavioral Sciences, University of California, Irvine, October 2004.
Owen, G. (1972) Multilinear Extensions of Games, Management Science 18: 64–79.
Owen, G. (1975) Evaluation of a Presidential Election Game, American Political Science Review 69: 947–953.
Owen, G. (1995) Game Theory (3rd ed.), Academic Press.
Shapley, L.S. and Mann, I. (1962) Values of Large Games, VI: Evaluating the Electoral College Exactly, RM-3158-PR, The RAND Corporation.
Shaw, D. (2003) A Simple Game: Uncovering Campaign Effects in the 2000 Presidential Election, unpublished manuscript, Department of Political Science, University of Texas, Austin.
Tufte, E. (1973) The Relationship between Seats and Votes in Two-Party Systems, American Political Science Review 67: 540–554.
8. Pivotal Voting Theory: The 1993 Clinton Health Care Reform Proposal in the U.S. Congress
Joseph Godfrey
WinSet Group, Fairfax, Virginia, USA
Bernard Grofman
Department of Political Science, University of California, Irvine, USA
1. Introduction
Theories of lobbying differ considerably about which legislators are most likely to be lobbied by which types of interest groups. In particular, there is no agreement as to whether lobbyists will focus on those likely to be sympathetic to the interest group (their friends), or those likely to be unsympathetic to the interest group (their enemies). 1 Plausible arguments can be made in each direction. One lobbies one's friends to offer information that will help them draft legislation and fend off criticism, and to remind them of past obligations and future payoffs (carrots); one lobbies one's enemies because they need to be exposed to arguments and facts countervailing their most likely position, and to alert them that this is an important vote that will be remembered and might cost them the opposition of an interest group in future re-election efforts (sticks). However, regardless of disagreements about whether lobbying is likely to be directed primarily at an interest group's friends or at its enemies, there does appear to be a high degree of consensus in the interest group literature on the proposition -- one with clear rational choice roots -- that major lobbying efforts by virtually all special interests will include swing voters likely to be pivotal. But the interest group literature also identifies some complicating factors that are relevant from a decision-theoretic perspective. For example, some legislators, e.g., those in positions of power, or those seen by their fellow legislators as particularly experienced or knowledgeable, may be more likely to be influential, and thus lobbying efforts
1 See Austen-Smith and Wright (1994, 1996), Baumgartner and Leech (1996a, 1996b).
directed at them may have a more important impact on outcomes than lobbying directed toward 'ordinary' legislators, even if potentially pivotal ones. Also, some legislators may be seen as potentially more pliable than others. For example, in the U.S. Senate, with six-year terms, legislators soon up for re-election may have election concerns that make them more open to persuasion by interest groups offering either carrots or sticks than those whose next re-election campaign is further away. Also, in the Senate, it may make more sense for lobbyists with limited resources to focus their grassroots lobbying efforts (those that seek to influence a Senator indirectly, by mobilizing forces within his constituency) on the Senators from small states. 2
In this essay we make no pretence to test the full range of competing theories of lobbying. Rather, we will focus on one simple hypothesis: that major lobbying efforts will be directed toward potentially pivotal legislators. Moreover, we will not use new data but instead rely almost entirely on data provided by Goldstein (1999) in his chapter looking at special interest lobbying activities on the Clinton 1993 Health Care reform proposal. 3
Goldstein (1999: Chapter 5) reviews lobbying activities and tracks changes in health care reform proposals as they move through Congress. Goldstein interviewed lobbyists on health care reform in 1993 representing a number of the special interest groups (SIGs) active in the debate. 4 Of these SIGs, 8 were for Clinton's plan and 13 were against. Trade associations broke down 7 to 1 against Clinton's plan; and lobbies of individual corporations broke down 3 to 0 against. Ideologically defined groups broke down in the predicted fashion, with the 7 groups on the left in Goldstein's sample being for the plan, and the 3 groups on the right being against it. 5
Bills concerning health care reform were considered in the House Ways and Means Committee, the House Energy and Commerce Committee, the House Education and Labor Committee, the Senate Finance Committee and the Senate Labor and Human Resources Committee. Because neither the Labor Committee in the House nor the Labor and Human Resources Committee in the Senate attracted much lobbying attention, 6 both Goldstein and we focus on the three remaining committees, which were the focus of the most intense lobbying.
2 Goldstein (1999: 103) quotes a lobbyist on exactly this point: '[f]ifty small businessmen are more likely to influence [Senator] Max Baucus in Montana than a lot more than fifty small businessmen are going to influence [Senator] Moynihan in Manhattan.'
3 Most of the book is on issues related to political participation that are outside the scope of this paper. Moreover, the nature of its specialized focus means that the data collected are not ideal for present purposes. Nonetheless, his is the only empirical data set on lobbying activity with which we are familiar that allows us to investigate the usefulness of spatial power score ideas in studying lobbying efforts.
4 He also interviewed party leaders and key figures in Hillary Clinton's health care taskforce.
5 Goldstein reports this information in Table 5.1, p. 75. Also see the discussion in his book immediately below and after this table.
6 See discussion below.
For each of these three committees, Goldstein was able to learn which congressional legislators each SIG lobbied (either directly, or indirectly, through lobbying at the grassroots to generate communications with the legislator from his or her constituents). Goldstein also reports information on roll-call voting scores of the members on one standard index (ADA scores) provided by the liberal group, Americans for Democratic Action. This 0–100 measure is commonly used as a measure of general ideology, with high values indicating liberalism and low values indicating conservatism. Even more importantly, for the period shortly after the Clinton proposal was unveiled, Goldstein reports data on how legislators on key committees were classified by one key lobbying group, the National Federation of Independent Businesses (NFIB), in terms of their sympathy for small business concerns. 7 He recodes NFIB data into a 5-point scale, with voters in the middle seen as potentially open to lobbying, while voters at the extremes are expected to vote for or against the Clinton proposal with near certainty. A combination of these two measures will allow us to identify which legislators on each of the three heavily lobbied committees were regarded as potential swing voters. 8
Goldstein (1999) makes a number of important contributions to our understanding of lobbying. 9 For present purposes, however, we will emphasize how his use of lobbying data on health care reform in the U.S. in 1993 leads to ways to test pivotal voter theory. First, Goldstein shows that the committees which attract attention from lobbyists are those where the lobbyists think that they might influence outcomes, and where they regard the outcome of the committee deliberations as likely to be influential on the floor. Goldstein estimates the first of these two factors by looking to see what proportion of the committee Democrats had already signed on as co-sponsors of the Clinton bill, since the partisan climate at the time was such that no Republican in any of these committees was a co-sponsor of the Clinton proposal. Goldstein estimates the second of these two factors by comparing the ADA scores of the Democratic majority in each committee with the position of the overall floor median. His argument, which we find persuasive, is that, given how closely divided the two
7 This issue played itself out in the Clinton health care debate primarily in terms of so-called employer mandates, i.e., requirements that employers provide some kind of health care insurance for their employees. At issue was how large a firm would have to be in order to be subject to this mandate, and whether requirements for firms would vary with firm size.
8 In particular, we combine the information from ADA and NFIB roll-call measures to develop a two-dimensional representation of the legislative space. While the full NFIB scale is a 100-point scale, since only five values occur, and these are at equal intervals apart, it is straightforward to recast that scale as a five-point scale. The correlation between the original scale and the recoded scale is 1.0.
9 In general, Goldstein's chapter on Clinton's health care proposal focuses on the role of grass roots lobbying, i.e., on the mobilization of constituents by interest groups to influence legislators, a form of indirect lobbying by interest groups.
chambers were along party lines, the degree of partisan and ideological polarization around the issue of health care reform, and the great importance and huge potential costs attached to this issue, only proposals close to the views of the median floor voter had any real chance of passage. Goldstein finds that two of the five committees considering health care reform, the Labor Committee in the House and the Labor and Human Resources Committee in the Senate, with ADA means of 87 and 85 for their Democratic members, were somewhat further to the left of the overall floor median than was true for Democrats on the three committees which were the subject of major lobbying efforts (an ADA mean of 77.8). 10 Also, these two labor committees had a much higher proportion of committee Democrats who were already co-sponsors (i.e., introducing signatories) of the Clinton bill (53% for Senate Labor and Human Resources, 54% for House Education and Labor, compared to a mean of 30% for the other three committees), thus making the outcome of the vote in the former committees mostly a foregone conclusion. 11 The combination of these factors leads him to expect that the two labor committees will not be the targets for extensive lobbying, and this is exactly what he finds. Of the 21 SIGs about which Goldstein has data, 15 lobby the House Ways and Means Committee, 16 the House Energy and Commerce Committee, and 20 the Senate Finance Committee, but only 4 lobby the House Education and Labor Committee and only 5 lobby the Senate Labor and Human Resources Committee.
Second, Goldstein's study of the 21 SIGs reveals that they primarily focus on legislators whom they see as swing voters. For example, he finds that 16 of the 21 special interest groups about whom he gathers data (76%) claim to focus their attention on the 'undecided' legislators. 12 Moreover, within each of the three more heavily lobbied committees, when we calculate the correlations between NFIB scores and SIG lobbying efforts directed at individual legislators within that committee, 13 we get very substantial values: 0.84, 0.81, and 0.70, for the Senate Finance Committee, the House Ways and Means Committee, and the House Energy and Commerce Committee, respectively. 14
10 These differences are statistically significant.
11 See Goldstein (1999: 80, Table 5.3).
12 Of the remaining 5 groups, 3 focus attention on legislators who are already on their side, and 2 on opponents (see Goldstein 1999: 84, Table 5.5).
13 Since NFIB scores were on a scale of 1 to 5, to obtain this correlation we have subtracted the extremism of NFIB ratings (defined as the absolute value of (NFIB − 3)) from 2 to give us a scale which runs from 0 to 2, with 2 indicating maximum centrality. At the level of lobbying groups, SIG values were coded as dummies: either a committee member was lobbied (coded as one) or s/he was not (coded as zero). The value for each legislator is simply the sum of his/her scores across the 21 SIGs.
14 Although Goldstein reports various kinds of numerical data, he does not do any form of statistical analysis on his data other than simple tabulations. Thus, these correlations were created by the present authors.
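As a rough illustration of the recodes in footnotes 8 and 13, the following is a minimal Python sketch; the NFIB scores and lobbying counts are invented for illustration, not Goldstein's data, and statistics.correlation requires Python 3.10 or later.

```python
# Collapse the 100-point NFIB scale to five points, convert it to a 0-2
# centrality score (footnote 13), and correlate centrality with the number
# of SIGs (out of 21) lobbying each member. Hypothetical data throughout.
from statistics import correlation  # Python 3.10+

nfib_raw = [0, 25, 50, 75, 100, 50, 25]   # only five equally spaced values occur
lobby_counts = [0, 6, 14, 9, 1, 12, 7]    # hypothetical per-member SIG counts

nfib_5pt = [1 + raw // 25 for raw in nfib_raw]    # recode to the 1..5 scale
centrality = [2 - abs(s - 3) for s in nfib_5pt]   # 2 = maximum centrality

print(round(correlation(centrality, lobby_counts), 2))
```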
In this paper we build on Goldstein's work. We see the original contributions of this paper as five-fold. First, we make use of both ADA scores and NFIB roll-call voting scores to locate the Senators and House members on the three most lobbied committees in a two-dimensional policy space which captures, in an a priori fashion, their likely attitudes toward health care reform, rather than using one or the other of these measures in isolation. ADA scores may be taken as a rough proxy for the dimension of health care reform that had to do with the role of government in administering the program, e.g., should it be privately administered or should it look more like Medicare, the so-called single-payer option. NFIB scores may be taken as a rough proxy for a slightly different health care reform issue having to do with employer mandates, i.e., how much of the burden of health care insurance would be passed on to employers, and where the threshold would be set in terms of firm size to determine which firms must contribute to the health care insurance costs of their employees. While attitudes on these dimensions tended to be strongly correlated, the correlation was not perfect. Second, we calculate for each committee member the spatial analogue of the well-known Shapley value of pivotal power, the Shapley-Owen value (Owen and Shapley 1989). As far as we are aware, this is the first attempt to calculate Shapley-Owen values for real-world legislative data. 15 Third, we compare the Shapley-Owen calculations of pivotal power with data on which legislators on these committees were actually lobbied. We illustrate that, while NFIB scores and Shapley-Owen scores generally agree for legislators who appear centrally placed in the policy space, they can differ substantially for outliers in our two-dimensional representation, i.e., legislators who are high in their NFIB rating but low on ADA, or conversely. 16 We also find that lobbyists systematically neglect legislators located at the fringes of the space whom our game-theoretic calculations nonetheless identify as pivotal. Fourth, we test the usefulness of Shapley-Owen scores by looking to see whether legislators missed by unidimensional analysis, but identified as pivotal by their Shapley-Owen scores, behave differently in their voting behavior from those potentially pivotal legislators whom lobbyists did
15 Work in the 1950s for the RAND Corporation by Lloyd Shapley uses an earlier variant of the Shapley index to understand voting patterns in the U.S. Supreme Court; Grofman et al. (1987) calculate Shapley-Owen values in their reanalyses of experimental committee voting games run by economists and others, and examine from a mathematical perspective the connections between Shapley-Owen scores and the social choice solution concept, the Copeland winner; Feld and Grofman (1990) look at the mathematical link between Shapley-Owen values and another social choice concept, the yolk (McKelvey 1986).
16 While the two measures are very highly correlated, the correlations are not perfect. For example, for the House Energy and Commerce Committee and the House Ways and Means Committee, the correlations are 0.94 and 0.79, respectively. Thus, we can identify outliers who are high on one score and low on the other, or vice versa.
identify and target. Some legislators whom we identify as pivotal but who were left out of lobbying efforts were, in fact, less likely to vote for final passage of the bill within the committee (in the two committees where a bill was reported out) than the highly pivotal legislators identified by more conventional means of analysis who were lobbied. Thus, we argue that lobbyists failed to view as important, and hence failed to influence, some potentially pivotal legislators whom a two-dimensional perspective, using Shapley-Owen scores based on both sympathy to the NFIB position and generalized ideological location, could have allowed us to identify. But we also recognize that the small sample and limitations of our data do not allow us to fully test this claim. Fifth and finally, we look at another solution concept, the Copeland winner, and show how it can be applied to legislative data. The Copeland winner is the location which can defeat the most other alternatives in paired competition (Straffin 1980). We make use of Shapley-Owen calculations to determine the location of the Copeland winner. 17 The Copeland winner can be viewed as the alternative that is closest to a majority rule core (i.e., majority undominated) outcome in the committee voting space. Thus, it can be taken to be an a priori prediction of where the committee consensus will emerge in a majority rule setting. For the two committees we look at that did report out a bill, we take the location of the Copeland winner as a plausible estimate of the location of the reported bill. In the next section of the paper we briefly review the notion of pivotal power and show, in a relatively intuitive way, how Shapley-Owen scores can be calculated. Then, in the succeeding section we apply the Shapley-Owen measure. First we show the 1993 locations of the members of the three committees on which interest group lobbying efforts were concentrated (the Senate Finance Committee, the House Ways and Means Committee, and the House Energy and Commerce Committee) in the two-dimensional policy space defined by ADA scores and NFIB evaluations. Then, for each of the three committees, we look at measures of lobbying activity vis-à-vis Clinton health care reform proposals, test the pivotal legislator theory of lobbying efforts, and compare what happened in the committee to the prediction based on the location of the Copeland winner.
2. Calculating the Shapley-Owen Value for Senate Committees
A standard game-theoretic approach to calculating the power of individual voters or blocs of voters in situations involving voting is to determine the
17 Owen and Shapley (1989; see also Grofman et al. 1987) show that, in spatial voting games, the Copeland winner (also known as the strong point) can be calculated from the policy locations and Shapley-Owen values of the voters. In particular, the strong point can be expressed as a weighted average of the voter ideal points, where each voter ideal point is weighted by that voter's Shapley-Owen power score.
expected proportion of time that a given voter or bloc will be pivotal or decisive for the outcome. For example, in Shapley's (1953) approach the power of a voter (or bloc) is calculated by looking at the likelihood that a voter (or bloc) will put a given coalition 'over the top' by converting a losing coalition to a winning one as a result of his joining the coalition. Game-theoretic measures of a priori power such as the Shapley value or the Banzhaf index make strongly simplifying assumptions about feasible coalitions in voting situations. The Banzhaf index assumes all combinations of actors are equally likely, while the Shapley value takes all permutations of voters to be equally likely. There have been a number of attempts to build into power calculations more realistic assumptions about which coalitions are likely to arise. The most interesting of these for our purposes is the Shapley-Owen value. 18 The Shapley-Owen value applies to spatial voting games, i.e., to games where voters and alternatives can be regarded as points in some multidimensional issue or policy space. It estimates the proportion of times that a given actor (or bloc) will be pivotal as we consider possible 'lines of choice' in the space. While all choice lines are posited to be equally likely, this does not mean that all coalitions are equally likely. Rather, coalitions involving voters who are 'central' to the policy space are going to be more likely, and thus voters who are central are going to have greater pivotal power -- where centrality is a concept that can be defined in a quite precise way. We can illustrate the calculations for a simple three voter example. Consider three (equally weighted) voters, {1, 2, 3}, located in a two-dimensional space whose axes we have labelled 'guns' and 'butter' simply for illustrative purposes. Each voter's bliss point, represented by a black dot, represents that voter's preferred combination of spending on guns and butter, respectively. We assume Euclidean distance, so that voters prefer points (i.e., combinations of spending on guns and butter) closer to them to points further away. Imagine that the choice is between two spending alternatives, A and B, which are located somewhere on the line we have labeled 'choice line 1' (or located, for that matter, on any line that is parallel to that line: by symmetry we will need to look at only one line in any given direction, since the results we get will be the same for all parallel lines). We can project the voter ideal points onto this choice line by dropping perpendiculars from the voters to the line. It is easy to see that, under the given assumptions, if voters are restricted to choices on this line, voter i will prefer his projection on this line to any other point on the line, and will prefer points on the line closer to his projection to points on the line further away. 19 We can also identify the median voter on this line.
18 What was to become the Shapley-Owen index in its present form is introduced in Grofman et al. (1987), reporting work of which Guillermo Owen is the author.
19 See e.g. Feld and Grofman (1987).
Fig. 1. Illustrative three voter example of Shapley-Owen calculations (axes: 'butter' and 'guns'; three voter ideal points, with choice lines 1–3 passing through pivot point p)
The location of the two alternatives determines the angle of the choice line. Without loss of generality, because outcomes on all parallel lines are identical, we can consider only choice lines that pass through pivot point p. Looking at Fig. 1 we see that voter 1 is median on choice line 1, since that voter's projection is the median projection onto the line. But, on choice line 2, it is apparent that voter 2 is the median voter; while on choice line 3, it is voter 3 who is median. By Black's theorem on single-peaked preferences (Black 1958), if voting is by simple majority, we know that on any given line it is the median voter who will be the pivotal voter. The simple intuition behind the Shapley-Owen value is to figure out the proportion of choice lines on which each voter will be median, and thus pivotal. That proportion is the voter's Shapley-Owen value. To obtain this value we consider all angles in the space, and identify which voter is median on each. The measure of those angles (normalized so that the sum across all voters is one) is the voter's Shapley-Owen value. In a triangle the Shapley-Owen values have a particularly simple form: each voter's Shapley-Owen value is simply the angle of the triangle at that voter's vertex, divided by 180 degrees. Thus, in the three voter case, a voter who is located at an obtuse angle is more powerful (more likely to be pivotal) than a voter who is located at an acute angle. In general, Shapley-Owen values are calculated by finding the star angles associated with each voter, which are the angles within whose range the voter is median. 20
20 For further details see Godfrey (2005), who provides a computer algorithm for calculating Shapley-Owen values for two-dimensional spatial voting games based on the theorems in Owen and Shapley (1989). For readers familiar with Krehbiel's 1998 book, Pivotal Politics, we would note that the idea of pivotal power based on spatial Shapley-Owen values can be thought of as the natural extension of ideas of pivotality based on unidimensional party competition. But we will not attempt to directly relate our work on lobbying to Krehbiel's attempts to model the conditions for legislative gridlock and the breaking thereof. That topic we must reserve for a future paper. The algorithm reported by Godfrey measures star angles with a finite precision. Hence the computed Shapley-Owen values are not exact. The Shapley-Owen values reported in this paper have a precision of about 0.005.
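To make the construction concrete, the following is a minimal Python sketch (our illustration, not Godfrey's (2005) algorithm) of the angle sweep just described: sample choice-line directions, credit the voter whose projection is median in each direction, and, following footnote 17, recover the strong point as the Shapley-Owen-weighted mean of the ideal points. The three ideal points are hypothetical and assumed to be in general position (no shared ideal points).

```python
import math

def shapley_owen(ideal_points, n_angles=18000):
    """Estimate Shapley-Owen values for an odd number of equally weighted
    voters at 2-D ideal points in general position, by sweeping directions."""
    n = len(ideal_points)
    counts = [0] * n
    for k in range(n_angles):
        theta = math.pi * k / n_angles           # direction of the choice line
        dx, dy = math.cos(theta), math.sin(theta)
        # order voters by their projections onto this direction ...
        order = sorted(range(n), key=lambda i: ideal_points[i][0] * dx
                                             + ideal_points[i][1] * dy)
        counts[order[n // 2]] += 1               # ... and credit the median voter
    return [c / n_angles for c in counts]

def strong_point(ideal_points, so_values):
    """Copeland winner as the SO-weighted mean of ideal points (footnote 17)."""
    x = sum(w * px for w, (px, py) in zip(so_values, ideal_points))
    y = sum(w * py for w, (px, py) in zip(so_values, ideal_points))
    return x, y

pts = [(20.0, 80.0), (45.0, 30.0), (75.0, 60.0)]  # hypothetical (butter, guns)
so = shapley_owen(pts)
print([round(v, 3) for v in so], strong_point(pts, so))
```

With three voters the estimates converge to the vertex angles of the triangle divided by 180 degrees, which provides a useful check on the sweep.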
3. Three Case Studies
In President Clinton's first term, due to spiraling health care costs and increasing numbers of citizens who lacked medical coverage, and the trigger of Harris Wofford's upset victory in a special election in Pennsylvania for the U.S. Senate in which he had made health care reform the cornerstone of his campaigning, health care reform became a major priority for the Democrats (Skocpol 1996; Goldstein 1999: 72–73; Hacker 1999). A task force spearheaded by Hillary Rodham Clinton offered an innovative (and complex) scheme that made use of what was called 'managed competition,' involving the creation of dozens of new health care entities from among which voters would choose. While there were multiple policy choices which needed to be made in crafting health care reform, one way to think about the choices confronting Congress in a simple fashion is to consider Clinton's plan as a compromise between the Republican approach, which was to allow the private market to handle health care, and the approach of the most liberal Democrats, which was to adopt some variant of the Canadian single payer scheme, with government regulating and financing at least some elements of universal health care. Presidential supporters introduced bills into both Houses of Congress that were referred to committee. However, as noted earlier, in neither chamber was the committee assignment unique. In the House, bills were assigned to the Ways and Means Committee, the Energy and Commerce Committee, and the Education and Labor Committee; in the Senate, bills were assigned for markup to both the Senate Finance Committee and the Senate Labor and Human Resources Committee. 21 The chairs of the House committees were Dan Rostenkowski, John Dingell, and William Ford, respectively; the chairs of the Senate committees were Daniel Moynihan and Ted Kennedy, respectively. The failure of Clinton health care reform has been a topic much written about, including in at least two book-length studies (Skocpol 1996; Hacker 1999). The defeat of the Clinton health care reform initiative within Congress was a major watershed in the Clinton presidency, with ramifications that continue to this day in terms of still unaddressed issues involving health care costs and accessibility, and in terms of consequences for voter perceptions about Democrats' competence to craft and implement policies in the public interest.
21 For present purposes we will not look at the (internal) political considerations that led to multiple referrals in each chamber.
Fig. 2. US Congress – 103rd Session, 1993–94 (Senate/Finance/*). Axes: Americans for Democratic Action (reversed) and National Federation of Independent Businesses.
Fig. 3. US Congress – 103rd Session, 1993–94 (House/Ways and Means/*). Axes as in Fig. 2.
Fig. 4. US Congress – 103rd Session, 1993–94 (House/Energy and Commerce/*). Axes as in Fig. 2.
Here we will draw largely from Goldstein's study of the 1993–94 struggle within these various congressional committees to shape a health care reform package that could pass Congress. That study focuses heavily on the role of grass roots lobbying, and our own research will be limited to the topic of the choices made by lobbying organizations as to which legislators to target for grassroots lobbying activity. The issue of why Clinton's proposals went down to defeat is of only incidental interest in this paper, and we will discuss the contents of alternative proposals only in passing. While we do identify lobbying targets that were missed, and Goldstein suggests ways of crafting bills that might have attracted broader legislative support, it is clear that, as Goldstein observes (1999: 74): 'Lobbying efforts in general, and grass roots tactics in particular, alone, cannot explain the demise of the Clinton plan. Some of the credit – or blame – must go to an overreaching plan devised in secret, a divided Democratic party, a determined Republican opposition, and a problem-plagued presidency.' As noted earlier, Goldstein separately used ADA scores and NFIB scores as proxies for legislators' likely attitudes toward health care reform. As shown in Figs. 2–4, using ADA and NFIB roll call voting evaluations, we can construct a two-dimensional plot of legislator positions for the members of the three congressional committees considering health care reform which
were the main targets of lobbying activity. For each committee member, we show locations on the ADA and NFIB dimensions and indicate the Shapley-Owen value (decimal) of each member. In addition, information about the level of lobbying activities of the special interest groups involved with health care issues, for which Goldstein has collected data, is also shown in the figures. Associated with each legislator is the number of groups (out of the 21 interviewed by Goldstein) that mobilized for the legislator. Several things should be obvious from those figures. First, since ADA and NFIB scores are correlated, member positions fall largely along a diagonal in the two-dimensional space. Second, once we identify party, it is apparent that, by and large, Democrats and Republicans are located in different portions of the space. In particular, Democrats are in the lower left (low ADA, low NFIB) and Republicans in the upper right (high ADA, high NFIB). Third, as previously suggested, we see that most of the lobbying was concentrated on committee members located in the middle of the NFIB scale, i.e., located in a vertical swath toward the center of each of the figures. Indeed, legislators located at one far extreme or the other of the NFIB scale were essentially never contacted by special interest group lobbyists. Thus, we see strong support for the notion that lobbying will be largely devoted to voters seen as potentially pivotal. However, there are some puzzling anomalies. For example, in the data for the Senate Finance Committee shown in Fig. 2 we see that Senators Danforth and Durenberger are not lobbied, despite their being within the vertical swath of heavily lobbied Senators. 22 Fourth, while high Shapley-Owen scores are by and large found for members whose location is near the center of the unit square shown for each of the three committees, and low Shapley-Owen scores are largely found for members whose location is far away from the center of the unit square, this relationship is far from perfect. In other words, using the Shapley-Owen measure of voting power, we obtain some counterintuitive notions of which members are most likely to be pivotal. For example, in Fig. 2, consider Senators Rockefeller and Pryor. With Shapley-Owen values of 0.06 they are almost as pivotal as the centrally located Senator Breaux, with a Shapley-Owen score of 0.08. The lack of good fit between lobbying levels and members' Shapley-Owen power scores is demonstrated by the fact that the correlations between the two are very far from perfect. For the Senate Finance Committee, the Pearson correlation between Shapley-Owen values and SIG lobbying efforts is only 0.26; in the House Ways and Means Committee it is 0.54, and in the House Energy and Commerce Committee it is 0.51. Recall, however, that the corresponding correlations for an NFIB-based measure of centrality
22 One other anomaly in Fig. 2, the lobbying attention devoted to Senator Moynihan, can be explained simply by recalling that he was then committee chair.
were 0.84, 0.81, and 0.70. Thus, if we were simply interested in making a priori predictions of which legislators in each of these three committees would be lobbied, then we would have been better off looking not to the Shapley-Owen measures but to our constructed measure of NFIB centrality. 23 Still, the question remains: does the fault lie with the Shapley-Owen value for failing to be a good indicator of pivotal power, 24 or does the fault lie with lobbyists who wrongly neglected some critical members of the committee whom they should have lobbied? In the remainder of this section we will turn our attention to this issue of optimal lobbying by looking at proposals within the three committees and at what type of bill emerged (or did not emerge) from each. In particular, we look at who supported the final bill as a function of their location in the two-dimensional space, their Shapley-Owen value, and whether or not they were lobbied. 25 We will also look at the relationship between the Copeland winner and our estimated location of committee outcomes.
3.1 Senate Finance Committee
In the Senate Finance Committee, Senator Moynihan, while personally favoring a government-sponsored health plan, believed that for the health bill to survive a filibuster on the Senate floor, bipartisan support was required. He therefore worked to amend the bill in such a way as to attract moderate Republicans. He did so by removing certain language offensive to small business interests (rewriting the section on employer mandates referred to earlier). In the Senate Finance Committee, as shown in Fig. 5, there were 12 yes votes and 8 no votes on final passage, with 2 Republicans and 10 Democrats voting yes, and 7 Republicans and 1 Democrat voting no. Fig. 5 shows voter locations (the same as given in Fig. 2) and votes on final passage of the bill reported out by the committee. The figure also gives our estimate of the location of the amended proposal, namely the Copeland winner (shown as a circle containing a dot). If we look at who voted yes on final passage, in addition to the usual suspects, i.e., the Democrats located in the bottom right-hand corner of Fig. 2,
23 However, if we look to ADA centrality we get a more mixed comparison. For example, for the House Energy and Commerce Committee, ADA centrality, measured as 2 × (50 − |ADA score − 50|), outperforms the Shapley-Owen measure in predicting who will be lobbied: the correlation is 0.63, higher than the corresponding value of 0.51 for the Shapley-Owen measure. But for the House Ways and Means Committee the advantage goes in the other direction (0.54 for the Shapley-Owen measure but only 0.35 for the ADA measure).
24 Alternatively, does the fault lie with the way in which we operationalized our construction of the Shapley-Owen values?
25 In some instances we also look at idiosyncratic factors identified by Goldstein (1999), or in other research on the 1993–94 struggle for health care reform, that may have influenced the votes of particular Representatives or Senators.
Fig. 5. US Congress – 103rd Session, 1993–94 (Senate/Finance/*). Axes: Americans for Democratic Action (reversed) and National Federation of Independent Businesses.
we see unexpected yes votes from Republican Senators Chafee and Danforth, only the former of whom was subject to substantial SIG lobbying, but each of whom may have been persuaded to vote yes by the changes in the bill made by Senator Moynihan. We also see an unexpected no vote from a Democrat, Kent Conrad, who was also heavily lobbied; but the other two most heavily lobbied Democrats, Senator Baucus and Senator Breaux, voted with their party. Senator Durenberger, who voted with his party and against the bill, would seem, given the final coalition, to have been a possible yes vote that was lost. Senator Durenberger was not up for immediate re-election, so, while he had an exceptionally high Shapley-Owen value, the fact that he was not lobbied vindicates Goldstein's claim that SIGs believe their (grassroots) lobbying efforts will have relatively little influence on Senators who will not soon face the voters, and so expend their resources elsewhere. We regard this as a mistake. Senator Conrad, on the other hand, was up for re-election, and was heavily targeted. Given that his location in the two-dimensional configuration would appear to predispose him to join his fellow Democrats, he is a real anomaly. 26 Another anomaly is that Boren and Bradley were
26 Oppositional lobbying seems remarkably efficacious in its effects on Senator Conrad's vote unless, for some reason, it was unidirectional and he was never lobbied by the pro-Clinton side. Because, for confidentiality reasons, we do not know which lobby groups lobbied which legislators, we cannot resolve that issue.
Democratic senators who were subject to comparable levels of lobbying, despite the fact that their Shapley-Owen scores suggested that only one was likely to prove influential; both, however, voted with the vast bulk of their fellow party members to accept the Moynihan compromise. Given the observed votes, the maximum likelihood axis of cleavage is one that runs from around (0, 90) to around (100, 70). Such an axis is the line that minimizes the number of mistaken predictions. Only Senator Conrad would be mispredicted, although both Senators Chafee and Durenberger would be very close to the edge of our hypothesized line of cleavage – and thus natural targets for lobbying if this were realized. If we draw in such a line of cleavage, given our estimate of the location of the amended version of the bill, we can identify the hypothetical location of the status quo as the reflection of that point through the axis of cleavage: we drop a perpendicular from our estimate of the location of the amended version of the bill to our estimated line of cleavage and extend the perpendicular the same distance above the line as the amended bill is below the line. We would also observe that the location of the Copeland winner that we have calculated from the legislators' positions and Shapley-Owen values is not very far from where we believe (impressionistically) the final amended bill in the committee to lie.
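To make the reflection construction explicit, here is a minimal Python sketch; the cleavage-axis endpoints are the rough values quoted above, while the bill location is a hypothetical stand-in for our impressionistic estimate.

```python
# Reflect the estimated bill location across the axis of cleavage to obtain
# the implied status quo. Axis endpoints from the text; bill is hypothetical.
def reflect(point, a, b):
    """Reflect `point` across the line through points `a` and `b`."""
    (px, py), (ax, ay), (bx, by) = point, a, b
    dx, dy = bx - ax, by - ay
    # parameter of the foot of the perpendicular dropped from `point`
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy      # foot of the perpendicular
    return 2 * fx - px, 2 * fy - py        # equal distance on the other side

axis_a, axis_b = (0.0, 90.0), (100.0, 70.0)   # estimated axis of cleavage
amended_bill = (40.0, 55.0)                   # hypothetical bill location
print(reflect(amended_bill, axis_a, axis_b))  # implied status quo location
```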
3.2 House Ways and Means Committee
Next we consider the House Ways and Means Committee. This committee experienced some turnover during the markup process due to legal difficulties faced by its chairman, Democratic Congressman Dan Rostenkowski, that led to his resignation as chair, but nothing that fundamentally affects our analysis. The vote on final passage was along party lines, with all but four Democrats voting YES and all Republicans voting NO. Looking at Fig. 6, we see that the noticeable anomalies among the Democrats are Reps. Andrews, Hoagland and McDermott, who voted no, and Lewis, who voted yes. Andrews and Hoagland would seem to be Democrats on the cusp, the first of whom the Democrats may have lost to heavy lobbying, but the other of whom was not heavily lobbied despite his substantial Shapley-Owen score. Rep. Brewer, the other Democratic no vote, is a relatively conservative legislator who looks only somewhat more moderate than a Republican. 27 Thus, perhaps, his vote is not that inexplicable. He, too, was subject to heavy lobbying (see Fig. 3). Rep. Lewis voted yes from a position that would seem to augur for a no vote, but he is African-American, and a well-known liberal and loyal Democrat, so his vote may not be that surprising.
27 Rep. Payne, a Democrat, with a similar location, voted yes instead of no.
Fig. 6. US Congress – 103rd Session, 1993–94 (House/Ways and Means/*). Axes: Americans for Democratic Action (reversed) and National Federation of Independent Businesses.
Rep. McDermott, however, voted no from a position that would seem to augur for a yes vote. Trying to make sense of the voting choice of Rep. McDermott suggests that the assumption of ordinary (Euclidean) distance may not always be appropriate. According to Goldstein, McDermott insisted on a heavily subsidized public health insurance plan and would settle for absolutely nothing less. However, he did not get his desired changes, something not that surprising given what would appear to be his limited bargaining power, with a Shapley-Owen value of only 0.007. Insistent on his position, he voted NO. 28 Perhaps, however, the most peculiar vote of all is that of Rep. Coyne, a Republican, who, although a co-sponsor of the original bill, voted against the committee version. Still, based on his spatial location and his party, it would seem that the anomaly is not having him vote no on final passage, but having him as a sponsor of the bill in the first place. Another anomaly has to do with lobbying activities. Rep. Coyne and Rep.
28 We do not attempt to account for the apparent special nature of Rep. McDermott's utility function in the present analysis. We might also note that a bargaining game that went the other way, despite the relatively low Shapley-Owen bargaining power of the representative (0.041), was that involving Rep. Payne. He was from Virginia, a major tobacco-growing state, and objected to certain tax provisions against tobacco in the bill. In exchange for removing those provisions he supported the amended bill.
Lewis each had very high Shapley-Owen scores relative to the rest of the committee, yet neither was lobbied. Both were cross-pressured in terms of NFIB and ADA score locations (high on each). Representative Lewis voted yes; Representative Coyne voted no. If we assume that a YES vote is the default option for Democrats, then the failure by either proponents or opponents to lobby him to any great degree might not appear to matter that much. Yet, certainly, it would seem from our analysis that there was at least some potential for Representative Lewis to vote NO. Moreover, given Lewis' Shapley-Owen score, it would seem that he should have been an influential player in working out compromises, yet the failure to lobby him would seem to have made that less likely. As for Representative Coyne, the failure to lobby him seems quite clearly a mistake in that, while he co-sponsored the original bill, a signal of interest in finding a solution to health care problems, he ultimately voted NO! When we look at the Copeland winner in the House Ways and Means Committee and compare it to our (impressionistically) estimated location of the final amended bill reported out of the committee, we find the two to be quite close. However, if we now try to locate an axis of cleavage that will (optimally) separate the legislators into YES and NO voters, we find ourselves making more mistakes in absolute numbers, but not in percentage terms, than we did when we considered the Senate Finance Committee. The most plausible possibility is a slightly upward sloping line, from roughly (0, 30) to roughly (100, 55). With that hypothesized axis of cleavage, Reps. Lewis, Payne and McDermott would be erroneously classified and Rep. Jackson would be very close to the cusp.
3.3 House Energy and Commerce Committee
The Energy and Commerce Committee was one in which the chairman, Rep. Dingell, wanting to push through the health bill with minimal markup, was unable to gain much cooperation. An analysis based on Shapley-Owen values makes the challenge facing Dingell particularly clear. We see from Fig. 4 that Reps. Boucher and Lehman have the same ideal point, each receiving a Shapley-Owen value of 0.306. Between the two of them they have 0.612 of the total value. 29 To report the bill unchanged, Dingell would have had to ask Boucher and Lehman to give up a considerable amount of utility, i.e., to support a proposal far from their preferences. And he would still have had Reps. Slattery and Cooper to persuade, whose pivotalness would be strengthened if Reps. Boucher and Lehman shifted position.
29 There are some problems applying our computer algorithm to this committee, because it is an even-numbered committee with many members sharing the same ideal point. However, after experimenting with minor adjustments of ideal points to avoid the compounding of coincidence errors, Boucher and Lehman continue to command well over 60% of the game value. So, qualitatively at least, the representation given in the text is accurate.
According to our analyses of Shapley-Owen values and policy locations, for Rep. Dingell to insist on reporting out the original bill from his committee with minimal markup was a doomed strategy. The best Dingell could have hoped to accomplish was to ask Reps. Boucher and Lehman to mark up the bill to their satisfaction (perhaps asking them to consult with Slattery and Schenk to draw the location of the proposal more toward the Democratic corner). Had Rep. Dingell, in essence, delegated mark-up to Reps. Boucher and Lehman, the result would have been a bill whose location matched fairly closely that of the corresponding bills reported by the Senate Finance and House Ways and Means Committees. Instead, not surprisingly in light of our analysis, due to internal disagreement, no bill was reported out of the House Energy and Commerce Committee.
3.4 Congress as a Whole
The impasse in the House Energy and Commerce Committee was a harbinger of a breakdown of the usual legislative process, since the House Energy and Commerce Committee was widely (and correctly) regarded as a good mirror of the House. In part because of the failure of the House Energy and Commerce Committee to report out a bill, the House leadership asked the Senate to work on the health care reform issue before ordering a floor vote on any health care reform proposal in the House. The Senate Democratic leadership, presumably operating at the behest of President Clinton, chose to disregard the bill reported by the Senate Finance Committee and restored the provisions Senator Moynihan had, through compromise, removed. But the Senate Finance Committee was a very good mirror of the Senate as a whole. Thus, ignoring the political information encoded in the Finance Committee's bill was to generate a bill that lacked adequate support. The Senate Democratic leadership, in effect, followed Rep. Dingell in refusing to compromise, and then decided not to bring forward a proposal that they knew would fail to pass on the floor. The result was a stillborn bill in the Senate that, combined with dissensus in the House, ensured the defeat of the Clinton health initiative.
4. Discussion: Pivotal Power Matters
There are two issues raised by our work. The first has to do with how well the Shapley-Owen value matches interest group lobbying behavior. Here we find (using regression analysis) that the Shapley-Owen value calculations do about as well in predicting who is lobbied as the two types of roll-call scores do when each is taken separately. However, there is also a strong normative component to our work. Here, deviations from the expectations generated by Shapley-Owen scores suggest sub-optimal lobbying choices by special interest
groups. For example, Rep. Dingell's disregard of the pivotal power of some members of his committee is caught by our analysis. This disregard arguably made political compromise about health care impossible in his committee, which in turn was a factor in the breakdown of the political process in the House. Similarly, failing to recognize the need for political compromises about health care reform, the Senate Democratic leadership threw out the information from the bargaining done in the Senate Finance Committee. That, too, was a committee that well mirrored the overall Senate, and thus one whose negotiated compromises might have been the basis for majority agreement. In sum, the failure to recognize political realities of the kind that Shapley-Owen analyses can capture, and the concomitant inability to compromise, led the Clinton health care initiative to a defeat that had far-reaching political implications for the Democratic Party.
Acknowledgements
Research support to pursue this research came from the Center for the Study of Democracy at the University of California, Irvine.
References
Austen-Smith, D. and Wright, J. (1994) Counteractive Lobbying, American Journal of Political Science 38: 25–44.
Austen-Smith, D. and Wright, J. (1996) Theory and Evidence for Counteractive Lobbying, American Journal of Political Science 40: 543–564.
Baumgartner, F.R. and Leech, B.L. (1996a) The Multiple Ambiguities of 'Counteractive Lobbying', American Journal of Political Science 40: 521–542.
Baumgartner, F.R. and Leech, B.L. (1996b) Good Theories Deserve Good Data, American Journal of Political Science 40: 565–569.
Black, D. (1958) The Theory of Committees and Elections, Cambridge University Press.
Feld, S.L. and Grofman, B. (1987) Necessary and Sufficient Conditions for a Majority Winner in n-Dimensional Spatial Voting Games: An Intuitive Geometric Approach, American Journal of Political Science 32: 709–728.
Feld, S.L. and Grofman, B. (1990) A Theorem Connecting Shapley-Owen Power Scores and the Radius of the Yolk in Two Dimensions, Social Choice and Welfare 7: 71–74.
Goldstein, K.M. (1999) Interest Groups, Lobbying and Participation in America, Cambridge University Press.
Grofman, B. et al. (1987) Stability and Centrality of Legislative Choice in the Spatial Context, American Political Science Review 81: 539–553.
Hacker, J.S. (1999) The Road to Nowhere: The Genesis of President Clinton's Plan for Health Security, Princeton University Press.
Krehbiel, K. (1998) Pivotal Politics: A Theory of U.S. Lawmaking, University of Chicago Press.
McKelvey, R.D. (1986) Covering, Dominance, and Institution Free Properties of Social Choice, American Journal of Political Science 30: 283–314.
Owen, G. (1995) Game Theory (3rd ed.), Academic Press.
Owen, G. and Shapley, L.S. (1989) Optimal Location of Candidates in Ideological Space, International Journal of Game Theory 18: 125–142, 339–356.
Shapley, L.S. (1953) A Value for n-Person Games, in H.W. Kuhn and A.W. Tucker (eds.) Contributions to the Theory of Games, II, Annals of Mathematics Studies 28, Princeton University Press.
Skocpol, T. (1996) Boomerang, Norton.
Straffin, P.D. (1980) Topics in the Theory of Voting, Birkhäuser.
9. Coalition Formation Theories Revisited: An Empirical Investigation of Aumann's Hypothesis
Vincent C.H. Chua
School of Economics, Singapore Management University, Singapore
Dan S. Felsenthal
Department of Political Science, University of Haifa, Israel
1. Introduction
In one of the earliest attempts to examine the effect of a priori voting power on actual political phenomena, Riker (1959) looked at changes in party affiliation in the French National Assembly in 1953–54, and used these data to test the hypothesis that deputies who switched parties were seeking thereby to increase their a priori voting power. His findings were negative, or at best inconclusive. In his paper Riker used the voting power index proposed by Shapley and Shubik (1954) – which was the only measure of a priori voting power known to him at that time. 1 By now there is a large body of literature applying considerations of a priori voting power to political institutions such as the UN, the US Congress, the US Presidential Electoral College, the US Supreme Court and its rulings on the implementation of the 'Equal Protection' clause in the 14th Amendment to the Constitution; and of course numerous writings on voting-power considerations in the European Union. But, as far as we know, it took 36 years after Riker's paper was published before someone followed Riker's lead in suggesting that the formation (or dissolution) of political coalitions should be examined from the viewpoint of a priori voting power, and using for this purpose the Shapley-Shubik (S-S) index. 2 That
1 Indeed, Riker refers to it throughout – beginning with the paper's title – as 'the power index'. As a matter of fact, L.S. Penrose (1946) had proposed another measure of a priori voting power; but it did not become widely known until it was reinvented by Banzhaf (1965), after whom it is generally named. For further details see Felsenthal and Machover (1998).
2 Felsenthal and Machover (2000) argue that Riker's (1959) use of the S-S index is unsuitable in this context. However, they agreed that for the purpose of testing Aumann's hypothesis the employment of the S-S index was probably more suitable than using the Penrose measure. See footnote 4.
someone was the renowned game theorist (and the 2005 Nobel laureate in Economics), Robert J. Aumann. On 30 June 1995 Eric van Damme conducted an interview with Aumann on the state of the art in game theory. One of the practical applications of game theory mentioned by Aumann in this interview was the use of the Shapley value (Shapley, 1953; Shapley and Shubik, 1954) as an algorithm for predicting which governmental coalition is likely to form. Here are some relevant excerpts from this interview (cf. van Damme, 1997: 11–13; van Damme 1998: 184–87):
Q: From these examples, can one draw some lessons about the type of situations in which one can expect game theory to work in applications?
A: What one needs for game theory to work, in the sense of making verifiable (and falsifiable!) predictions, is that the situation be structured. … For years I have been predicting the government that will form in Israel once the composition of the Israeli parliament is known after an election. That is a structured situation, with set rules. The parliament has 120 seats. Each party gets a number of seats in proportion to the votes it got in the election. To form a government a coalition of 61 members of parliament is required. The president starts by choosing someone to initiate the coalition formation process. (Usually, but not necessarily, this 'leader' is the representative of the largest party in parliament.) The important point is that the process of government formation is a structured situation to which you can apply a theory. … For instance, in the governmental majority matter, one can set up a parliament as a simple game in the sense of Von Neumann and Morgenstern's cooperative game theory, where we model the party as a player; we get a coalitional worth function that attributes to a coalition the worth 1 when it has a majority and the worth 0 otherwise. And then one can work out what the Shapley values are; 3 the structure is there, it is clear, and one can make predictions. Now there are all kinds of things that are ignored by this kind of procedure, but one can go out and make predictions. Then, if the predictions turn out correct, you know that you were right to ignore what you ignored. …
Q: In this example of coalition formation, you make predictions using an algorithm that involves the Shapley value. Suppose you show me the data and your prediction comes out correct. I might respond by saying that I don't understand what is going on. Why does it work? Why is the Shapley value related to coalition formation? Is it by accident or is it your intuition, or is it suggested by theory?
A: There are two answers to that. First, certainly this is an intuition that arises from
3 In the context of simple voting games the 'Shapley value' is usually referred to as the 'Shapley–Shubik (S-S) index'.
understanding the theory. The idea that the Shapley value does represent power comes from theory. Second, for good science it is not important that you understand it right off the bat. What is important, in the first instance, is that it is correct, that it works. If it works, then that in itself tells us that the Shapley value is relevant. Let me explain this a little more precisely. The theory that I am testing is very simple, almost naïve. It is that the leader – the one with the initiative – tries to maximize the influence of his party within the government. So, one takes each possible government that he can form and one looks at the Shapley value of his party within the government; the intuition is that this is a measure of the power of his party within the government. 4 This maximization is a nontrivial exercise. If you make the government too small, let's say you put together a coalition government that is just a bare majority with 61 members of [the Israeli] parliament – a minimal winning coalition – then it is clear that any party in the coalition can bring down the government by leaving. Therefore, all the parties in the government have the same Shapley value. So the hypothesis is that a wise leader won't do that. That is also quite intuitive, that such a government is unstable, and it finds its expression in a low Shapley value for the leader. On the other hand, too large a coalition is also not good, since then the leader doesn't have sufficient punch in the government; that also finds its expression in the Shapley value. Consequently, the hypothesis that the leader aims to maximize his Shapley value seems a reasonable hypothesis to test, and it works not badly. It works not badly, but by no means a hundred percent. For example, the current (June 1995) government of Israel is very far off from that, it is basically a minimal winning coalition. In fact, they don't even have a majority in parliament, but there are some parties outside the government that support it; though it is really very unstable, somehow it has managed to maintain itself over the past 3 years. But I have been looking at these things [in Israel] since 1977, and on the whole, the predictions based on the Shapley value have done quite well. I think there is something there that is significant. It is not something that works 100% of the time, but you should know that nothing in science works 100% of the time. In physics also not. In physics they are glad if things work more than half the time.
Q: Would you like to see more extensive empirical testing of this theory?
A: Absolutely. We have tried it in one election in the Netherlands where it seems to work not badly; but we haven't examined that situation too closely. The idea of using the Shapley value is not just an accident, the Shapley value has an intuitive content and this hypothesis matches that intuitive content.
4
Note that Aumann regards the Shapley value, interchangeably, as ‘measure of influence’ and as ‘measure of power’. As argued by Felsenthal and Machover (1998:17–19, 35–36; 2005), one ought to distinguish between two types of a priori voting power: power as influence (Ipower) – which is the a priori probability that a player’s vote will be decisive, and power as prize (P-power) – which is the share a player can expect to receive in the prize distributed by a winning coalition among its members. Inasmuch as the Shapley value is a coherent notion, it measures the latter type of power; in the context of political coalitions the ‘prize’ is usually regarded as cabinet portfolios. In this context a similar distinction (between the concepts `officeseeking' and `policy seeking' behavior) is made by Laver and Schofield (1991: 91ff ).
Although more than a decade has elapsed since Aumann first phrased his above-mentioned hypothesis, it has not, as far as we know, been subjected to extensive empirical verification. The purpose of this paper is to conduct such an investigation. The way Aumann's hypothesis is phrased makes it easy to verify: it focuses only on the party charged with forming a (winning) governmental coalition,5 hence one only needs to know the identity of this party and the distribution of seats of all the parties represented in parliament, or in Aumann's words: 'So, one takes each possible government that he can form and one looks at the Shapley value of his party within the government'.6

In investigating the predictive performance of Aumann's hypothesis we shall compare its performance with three modified versions of the hypothesis while benchmarking against two selected established theories of coalition formation. In carrying out this exercise, we make use of historical election data from eight European countries and Israel as tabulated in de Swaan's (1973) book. Motivated by Aumann's observation that his hypothesis appears to have worked quite well in the case of Israel between 1977 and 1995, as well as by Felsenthal and Machover's (1998: 205, fn. 32) refutation of this observation,7 we carried out a further check using detailed election results for Israel from 1949 until 2006. The statistical results do not support Aumann's hypothesis, but three variations of this hypothesis appear to perform somewhat better: restricting the maximization process to the set of closed majority coalitions, or likewise but with a further requirement that the coalition selected be of minimal size or of minimal range.
Note that Aumann’s hypothesis is not concerned with situations in which one party has an absolute majority of seats in parliament, nor with situations in which a minority governmental coalition is supported by one or more parties outside the coalition. 6 The second-named author suggested to Aumann in an email message in November 2003 that in practice the party charged with forming a governmental coalition must gain the consent of each of the parties it wishes to incorporate in the coalition – and that some parties may refuse to join a proposed coalition for various political reasons. Hence he proposed that the phrase ‘each possible government that he can form’ should perhaps be replaced with ‘each possible politically feasible government that he can form’. Aumann responded (on 26 November 2003) that ‘[My] original hypothesis did not include the caveat ‘politically feasible’, which may be a little difficult to formulate for the purposes of empirical testing. However, in practice it may indeed be necessary to add such a caveat; or, perhaps, it may not be. It’s a point worth thinking about.’ As is described in the sequel, given the positions of the various parties on a left-right ideological continuum, we defined operationally the concept ‘politically feasible coalition’ as ‘ideologically closed coalition’. 7 Felsenthal and Machover (1998: 205, fn. 32) state that their calculations do not bear out Aumann’s claim: ‘We analyzed all governments formed in Israel during the period 1977–1996 and found that in all these cases the party charged with forming the government could have increased its S-S index within the government by either narrowing or widening the government that it actually formed.’ Our own analysis (see Table A2 and Table A3) supports this conclusion.
However, none of these variations achieves a level of predictive performance comparable to that of the closed minimal range theory of Leiserson (1966) and Axelrod (1970: 170ff.) or of Riker's (1962) minimum size principle when confronted with the data.

The remainder of the paper is organized as follows. In the ensuing section, we discuss the hypothesis further and introduce a number of variations of Aumann's hypothesis. In Section 3, we describe the data used in the analysis and explain the statistical tests used in our evaluation of each of the hypotheses considered. The results of our empirical analysis are summarized in Table 1 and discussed in Section 4; some concluding remarks follow.
2. Aumann’s Hypothesis and Modifications Aumann’s hypothesis asserts that a party charged with forming a majority government will select that majority coalition that will maximize its ShapleyShubik index. However, before embarking on an empirical investigation of this hypothesis, one key point of clarification requires highlighting. Let us return to the passage where Aumann says: For instance, in the governmental majority matter, one can set up a parliament as a simple game in the sense of Von Neumann and Morgenstern’s cooperative game theory, where we model the party as a player; we get a coalitional worth function that attributes to a coalition the worth 1 when it has a majority and the worth 0 otherwise. And then one can work out what the Shapley values are; …
From the loose way Aumann speaks in this passage (it is, after all, an interview, not a scholarly paper), it is at first unclear whether, for the purpose of calculating the S-S index, the term 'majority' refers to (i) a simple majority within the government (coalition), or (ii) a simple majority in the parliament as a whole. But what Aumann says in a subsequent passage makes it certain that he means a simple majority in the parliament as a whole. In that passage he says:

This maximization is a nontrivial exercise. If you make the government too small, let's say you put together a coalition government that is just a bare majority with 61 members of [the Israeli] parliament – a minimal winning coalition – then it is clear that any party in the coalition can bring down the government by leaving. Therefore, all the parties in the government have the same Shapley value.
From this passage two conclusions can be reached. First, a government with 61 seats in the Israeli parliament is a minimal winning coalition if the required majority is a simple majority of the parliament as a whole but not
necessarily if the required majority is a simple majority of the coalition members. Second, the conclusion that 'all the parties in the government have the same Shapley value' follows only if one assumes that the required majority is a simple majority of all members of parliament. We therefore interpreted Aumann's description of the maximization process as implying the same threshold (quota) for each winning coalition considered, a threshold set equal to a simple majority of the sum of votes of all members of parliament. We shall henceforth refer to this interpretation of Aumann's hypothesis as the maximal SSQ hypothesis.

However, from the perspective of developing a predictive theory of coalition formation, it is important to have a theory that provides a reasonably sharp or precise prediction of the likely outcome. Parsimony – in terms of the size of the predicted set of likely coalitions – is an important attribute of a good predictive theory. Just like the von Neumann–Morgenstern stable set, however, Aumann's hypothesis stands a good chance of producing an unreasonably large number of predicted coalitions.8 One route to avoiding this profusion of possibilities under the maximal SSQ hypothesis is to resort to ad hoc restrictions that decrease the predicted set of coalitions by introducing social standards which, presumably, make some winning coalitions more reasonable than others.9 To our minds, as long as these restrictions are sufficiently persuasive, due consideration should be given to them. In this spirit, we consider three variations of the maximal SSQ hypothesis.
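Before turning to the variations, it may help to make the maximal SSQ hypothesis computationally concrete. The following minimal Python sketch (our own illustration, not code from the paper; the four-party seat distribution is hypothetical, not taken from the data analysed below) enumerates the winning coalitions containing the leader and computes the leader's Shapley–Shubik index in each, holding the quota at a simple majority of the full parliament:

```python
from itertools import combinations, permutations
from math import factorial

def ss_index(weights, quota, player):
    """Shapley-Shubik index of `player`: the fraction of orderings of the
    voters in which `player` is pivotal, i.e. the first to push the
    cumulative weight up to the quota."""
    pivotal = 0
    for order in permutations(weights):
        total = 0
        for p in order:
            total += weights[p]
            if total >= quota:
                if p == player:
                    pivotal += 1
                break
    return pivotal / factorial(len(weights))

def maximal_ssq(seats, leader):
    """Winning coalitions containing `leader` that maximize the leader's
    S-S index, with the quota fixed at a simple majority of the full
    parliament (the 'maximal SSQ' reading of Aumann's hypothesis)."""
    quota = sum(seats.values()) // 2 + 1
    others = [p for p in seats if p != leader]
    best_val, best = -1.0, []
    for r in range(len(others) + 1):
        for combo in combinations(others, r):
            coalition = {p: seats[p] for p in (leader,) + combo}
            if sum(coalition.values()) < quota:
                continue  # skip losing coalitions
            val = ss_index(coalition, quota, leader)
            if val > best_val + 1e-12:
                best_val, best = val, [sorted(coalition)]
            elif abs(val - best_val) <= 1e-12:
                best.append(sorted(coalition))
    return best_val, best

# Hypothetical 10-seat toy parliament (not actual election data):
seats = {'A': 4, 'B': 3, 'C': 2, 'D': 1}
print(maximal_ssq(seats, 'A'))   # -> (0.666..., [['A', 'B', 'C']])
```

On this toy parliament the routine reproduces Aumann's intuition: in a bare-majority coalition such as {A, C} every member's index is 1/2, whereas the leader's index is maximized (at 2/3) in the somewhat larger coalition {A, B, C}.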
2.1 Variation I: Restriction to Closed Coalitions

In this variation, we restrict the domain to the set of closed or ideologically connected coalitions.10 The motivation for introducing this restriction is two-fold. First, as de Swaan (1973: 148) has noted, of the 108 majority coalitions in the nine countries investigated in his study, 85 (or 79 percent) were closed coalitions.
Thus, empirically, the evidence appears to indicate a preference by political parties for closed coalitions.11 A second and related point is that in a multi-party system a coalition leader located at one extreme of the ideological continuum is highly unlikely to enter into a political union with a party located at the other extreme, even though such a union might yield the highest S-S index for the coalition leader. Restricting attention to closed coalitions in which the S-S index of the party charged with forming the coalition is maximized helps make the predicted set more parsimonious. We will refer to Aumann's hypothesis under this domain restriction as the Closed Maximal SSQ hypothesis.

8 As can be seen in the 5th column of Tables A2 and A3, more than one coalition – and sometimes considerably more than one – is predicted to form according to Aumann's hypothesis in almost all investigated elections. Thus, for example, in the 1996 Israeli election (cf. Table A3) fully one-third (246 out of 739) of the winning coalitions in which the leader is a member are predicted to form according to Aumann's hypothesis. There is a second point. When thinking of the coalition formation process along the lines of an n-person non-transferable utility game, it is important to realize that this is reasonable only to the extent that members of the same party in the assembly act in unison, that is, legislators operate strictly along party lines. Thus, Aumann's hypothesis, if it works at all, should work best in a context where party discipline is strict or tight. In political cultures where this is not the case, it would be unreasonable to expect the hypothesis to perform well.
9 This seems to be the usual practice of coalition theorists, as noted, for instance, by Luce and Raiffa (1957: 213) and by Gamson (1961: 380).
10 A coalition is 'closed' if all its members are adjacent to one another along the ideological continuum; otherwise the coalition is 'open'.
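Footnote 10's definition is mechanical once the parties are rank-ordered; a minimal Python sketch (hypothetical party labels, our own illustration, not code from the paper):

```python
def is_closed(coalition, ordering):
    """A coalition is 'closed' (ideologically connected) if its members
    occupy consecutive positions in the left-right ordering."""
    pos = sorted(ordering.index(p) for p in coalition)
    return pos == list(range(pos[0], pos[-1] + 1))

# Hypothetical five-party left-right ordering:
order = ['Left', 'CentreLeft', 'Centre', 'CentreRight', 'Right']
print(is_closed({'CentreLeft', 'Centre'}, order))  # True
print(is_closed({'Left', 'Centre'}, order))        # False: CentreLeft is skipped
```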
2.2 Variation II: Restricting the Predicted Set of Closed Coalitions to the One with Minimum Size

Even with the domain restriction introduced in Variation I, the size of the predicted set may still be quite large. Since the stability issue arising from the defection of coalition members has already been factored into the maximization process, it appears reasonable to argue, along the lines of the minimum size principle, that only the smallest coalition in the closed maximal set, in terms of the number of votes it controls, will form. This is plausible because the actual distribution of cabinet positions among coalition members is commonly expected to be approximately proportional to the number of votes they control within the coalition. Like the minimum size principle advocated by Gamson (1961) and Riker (1962),12 this refinement will almost always yield a prediction that is a singleton or involves a relatively small number of coalitions. The approach is nevertheless distinct from the minimum size principle, and it also overcomes the objection raised by Aumann concerning the stability of the coalition when the minimum size principle (simpliciter) is invoked.13 We shall refer to this variation as the Closed Maximal SSQ Minimum Size hypothesis. Of course, under this variation the predicted sets will be subsets, not necessarily proper subsets, of those obtained under Variation I, the Closed Maximal SSQ hypothesis.
11 It is tempting to attribute this to the notion that ideologically closed coalitions have lower levels of conflict of interest than those that are not ideologically closed and are thus preferred; but the evidence reported by Browne et al. (1984) does not appear to support this position. Of the coalitions investigated in this study, France during the Fourth Republic is perhaps one exception to the case, which, in turn, serves to highlight that it is probably not reasonable to expect one single theory to do well in all instances.
12 Gamson calls the winning coalition controlling the smallest total number of seats (or votes) the cheapest winning coalition, while Riker calls it the coalition of minimal size.
13 This is so because a coalition of minimum size is often also a minimal winning coalition, in which the defection of any member renders it losing and the Shapley value of all members is equal.
2.3 Variation III: Restricting the Predicted Set of Closed Coalitions to the One with Minimal Range

As an alternative to restricting the predicted set to those closed maximal coalitions that are of minimal size or weight, we consider restricting it to the subset of closed maximal coalitions that are of minimal range.14 Consideration of this variation is largely motivated by de Swaan's finding that the Axelrod–Leiserson closed minimal range hypothesis appears to fit the historical data rather well.15 It also adds a further dimension to the optimization process, emphasizing, in addition to power, the desirability of homogeneity in the ideological positions of coalition members – a rather intuitive idea.

Summarizing, in addition to the version of Aumann's hypothesis that we have referred to as the maximal SSQ hypothesis, our investigation considers the following three variations: (a) the variation that restricts the maximization process to those majority coalitions that are ideologically closed; (b) the variation that further restricts the prediction of the Closed Maximal SSQ version to the subset of minimal size; and (c) the variation that further restricts the prediction of the Closed Maximal SSQ version to the subset of minimal range. While (b) is an analogue of the Riker–Gamson minimum size principle, (c) may be viewed as an analogue of the Axelrod–Leiserson closed minimal range theory. We note that both of these ideas are deeply entrenched in the political science literature.

14 Briefly, on a policy scale in which any two adjacent parties are regarded as one unit of distance apart, the range of a coalition is the distance between the two parties in the coalition whose positions on the policy scale are furthest apart.
15 Of the 85 closed coalitions in de Swaan's study, 55 were of minimal range. See also de Swaan (1973: 148).
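Footnote 14's convention (adjacent parties one unit apart) makes the range criterion as easy to operationalize as the closedness test above; a minimal companion sketch, again with hypothetical party labels:

```python
def coalition_range(coalition, ordering):
    """Range of a coalition: the distance, in unit steps on the policy
    scale, between its two ideologically most distant members."""
    pos = [ordering.index(p) for p in coalition]
    return max(pos) - min(pos)

order = ['Left', 'CentreLeft', 'Centre', 'CentreRight', 'Right']
print(coalition_range({'CentreLeft', 'Centre', 'CentreRight'}, order))  # 2
```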
3. Data and Analysis

3.1 Data Sources

In testing Aumann's hypothesis and its three variants, our primary source of data is the tabulation of election outcomes provided in de Swaan's (1973) book Coalition Theories and Cabinet Formations. This tabulation covers nine countries and selected historical parliamentary elections; for some of these countries, elections from the end of the First World War up to the early 1970s were included.16
de Swaan's data are particularly useful because, for each election, in addition to listing the number of seats controlled by parties which gained more than 2.5% of the seats in an assembly,17 the parties have also been ordered along the left–right ideological continuum 'according to the share of national income they wish to see redistributed by means of the government budget, military and police expenditures excepted. When this [did] not affect the ranking of the other actors, the criterion of nationalism [was] added to place the Fascist parties. The preference of the party's cadre [was] taken as indicative of the party's stand and, in the absence of such information, the judgment of parliamentary historians and other expert observers [were] accepted instead.' (1973: 142). The rank-ordering of parties rendered the task of identifying closed majority coalitions in each election considerably less onerous.18

However, whereas the coalitions studied by de Swaan included many interim coalitions,19 it seems to us that Aumann's hypothesis – and perhaps all other coalition-formation theories – is concerned only with original coalitions, because the formation of interim coalitions cannot reasonably be considered independent of the formation of the original coalition when the leading party remains the same. Hence, except on one occasion,20 we limited our investigation to original coalitions. Our second departure from de Swaan's approach is in the definition of an admissible majority coalition. For each given election, de Swaan considers every possible winning coalition as a potential candidate for the formation of a government. However, in reality the party charged with forming a governing coalition is normally selected by the head of state, and is usually the party that, if successful in forming the governmental coalition, assumes the premiership. This is exactly why Aumann's hypothesis is concerned only with the Shapley value of this party. Hence in our investigation of this hypothesis we consider only those winning coalitions that include the coalition leader, which we identify ex post as the party of the prime minister. The above considerations mean that whereas de Swaan's conclusions are based on 108 original and interim coalitions formed in nine countries, ours are based, except for Israel, on only 65 original coalitions formed in these countries during the same period.

The case of Israel warrants special attention in this study. As already mentioned, the findings of Felsenthal and Machover (1998) did not corroborate Aumann's statement that 'predictions based on the Shapley value have done quite well' regarding Israeli elections between 1977 and 1995. de Swaan (1973: 237), too, noted that 'Israel is a difficult country for the theories'. However, as de Swaan's and Aumann's observations regarding Israel relate to different periods, we decided to investigate all 18 elections conducted in Israel during the period 1949–2006.21 Moreover, because in Israel very small parties were sometimes included in governmental coalitions, we included in our analysis all parties that gained representation in Israel's parliament; consequently, our classification of some Israeli governmental coalitions diverges from de Swaan's. (As noted above, de Swaan, in contrast, considered as admissible members of winning coalitions only parties controlling more than 2.5% of the seats in the various parliaments that he investigated.)22

16 Specifically, the nine countries and the periods investigated by de Swaan were as follows: Denmark (1918–71), Sweden (1917–70), Norway (1933–36; 1965–69), Israel (1949–69), Italy (1946–72), The Netherlands (1918–72), Finland (1919–72), Germany's Weimar Republic (1919–32), and France's Fourth Republic (1945–57).
17 de Swaan (1973: 131) justifies the 2.5% restriction by arguing that parties controlling no more than 2.5% of the seats in an assembly were almost never included in a winning governmental coalition. Nevertheless, he took the seats controlled by such small parties into account when calculating the majority needed to pass decisions in an assembly.
18 Two other studies of coalition formation were conducted more or less simultaneously with (and independently of) de Swaan's study: Taylor and Laver (1973), which dealt with western European coalitions during the period 1945–71, and Dodd (1976), which dealt with coalitions in the early 1970s. Laver and Schofield (1991: 249–50) state that the ideological rankings of parties in these two studies correspond closely to each other and to de Swaan's ranking.
19 These are coalitions formed not immediately following a general election (which we call 'original coalitions') but as a result of change due to defections from, or broadening of, the original coalition.
20 The exception is the 2001 Israeli direct election of a prime minister (but not of the Knesset). Although the composition of the Knesset in 2001 remained the same as it was following the 1999 general elections, the new prime minister (Ariel Sharon) belonged to a different party than his predecessor (Ehud Barak), and hence decided to form a new governing coalition.
21 As mentioned in the previous footnote, of these 18 elections 17 were general elections to the Knesset and one (conducted in 2001) was a direct election of (only) the prime minister.
22 Because of space considerations we do not list here the parties' ideological ordering along the left–right continuum, nor the composition of the 65 (original majority) governmental coalitions surveyed by de Swaan and the 18 Israeli elections that have occurred to date. These can be found, inter alia, in Appendix Tables 5a and 5b of an earlier version of this paper, downloadable from: http://eprints.lse.ac.uk/archive/00000767.

3.2 Evaluating the Worth of a Theory

No theory is expected to predict the outcome correctly all of the time. In some contexts, a theory that performs marginally better than chance may be considered reasonably good if no competing theory can do better. Intuitively, when a restriction placed on the set of admissible coalitions results in more precise predicted sets, the frequency of correct prediction is likely to be lower than when the domain is unrestricted. This is certainly the case when the restriction results in predicted sets that are proper subsets of the unrestricted predicted sets. This does not, however, automatically render the restricted theory a poorer predictive theory. Conversely, for the same
election, different theories may give rise to predicted sets that differ considerably in size, and the theory with the larger predicted set will naturally have a better chance of correctly predicting the outcome. But such a theory is not necessarily a better predictive theory. Somehow, the trade-off between the probability that the actual outcome is included in the predicted set and the parsimony of the predicted set has to enter the calculus in determining which theory should be preferred.

In determining the worth of each theory in our analysis, we have kept this trade-off in mind. Instead of attempting a direct comparison of the competing theories, our approach is to compare each theory with its randomized counterpart, so that the evaluation of each theory is carried out on a level playing field. For the de Swaan data set, given a reasonably large sample of 65 elections, we carry out our evaluation by invoking the Central Limit Theorem. For the more detailed investigation of Israel, with a sample of only 18 elections, it would be grossly inappropriate to invoke the Central Limit Theorem; in this case, we computed the exact probability mass function under each theory or hypothesis. A brief outline of our approach is provided below.

Let $N_i$ denote the size of the set of winning coalitions given the distribution of seats secured by the various political parties in the $i$-th election, $i = 1, 2, \ldots, n$. Let $S_{ij}$ denote the corresponding size of the predicted set under the $j$-th theory, $j = 1, 2, \ldots, m$. Then, for any theory, each election may be regarded as an independent Bernoulli trial with probability of success $P_{ij} = S_{ij}/N_i$, where 'success' means that the prediction of the theory is consistent with the actual outcome. Since $N_i$ and $S_{ij}$ vary across elections, the probability of success $P_{ij}$ also varies across elections. For each theory, therefore, the set of elections included in the analysis may be regarded as a sequence of independent Bernoulli trials with unequal probabilities of success across trials. Given these probabilities, the expected number of consistent predictions under each theory is $\sum_i P_{ij}$, with associated standard deviation $\left[\sum_i P_{ij}(1 - P_{ij})\right]^{1/2}$. The expected number $\sum_i P_{ij}$ is what one would expect if the theory in question were no better than a pure chance mechanism. For sufficiently large $n$, the Central Limit Theorem for independent random variables implies that the number of consistent predictions will be approximately normally distributed with mean $\sum_i P_{ij}$ and standard deviation $\left[\sum_i P_{ij}(1 - P_{ij})\right]^{1/2}$. In our analysis of de Swaan's data, we use this result to evaluate the worth of each theory. Specifically, a theory whose predictive outcome departs significantly in the positive (negative) direction, measured in standard deviation units, from the mean of its randomized counterpart is regarded as better (worse) than a pure chance mechanism and thus worthy
(unworthy) of further consideration as a predictive theory of coalition formation.

In our supplemental analysis of the case of Israel, we have only 18 elections between 1949 and 2006. Because of the rather small sample it would be inappropriate to invoke the Central Limit Theorem here. This is particularly so when the probability of success $P_{ij}$ in each of the independent Bernoulli trials is close to either end of the unit interval, as this causes the probability mass function of the sum of the independent Bernoulli variables to be severely skewed. While there are a number of alternative approaches one may employ to evaluate the worth of each theory, we have opted to compute the exact probability mass function of the randomized scheme under each of the theories considered. The probability mass at $k$, the number of successes under the $j$-th theory, is given by

$$P_j(x = k) = \sum_{a_k \in A_k} \left( \prod_{i \in a_k} P_{ij} \prod_{i \notin a_k} (1 - P_{ij}) \right). \qquad (1)$$

In this expression, $A_k$ denotes the set of situations that give rise to $k$ successes and $a_k$ is an element of this set. The cardinality of $A_k$ is $\binom{n}{k}$ (the number of combinations of size $k$ one can extract from a population of size $n$), and the summation in the expression is over each of these $\binom{n}{k}$ situations. As an illustration, consider a sequence of three elections, that is, $n = 3$. For the $j$-th theory, suppose $P_{1j}$, $P_{2j}$, and $P_{3j}$ are respectively $\tfrac{1}{2}$, $\tfrac{1}{3}$, and $\tfrac{1}{4}$. If this theory is no better than a chance mechanism, there is $\binom{3}{0} = 1$ instance in which it fails to produce any consistent prediction. This is the case $k = 0$; denoting a consistent prediction by the letter s and an inconsistent prediction by the letter f, the sequence of outcomes referred to in this instance is fff. The probability that this occurs is

$$P_j(x = 0) = (1 - P_{1j})(1 - P_{2j})(1 - P_{3j}) = \tfrac{1}{2} \cdot \tfrac{2}{3} \cdot \tfrac{3}{4} = \tfrac{1}{4}.$$

Similarly, there are $\binom{3}{1} = 3$ instances in which the theory produces exactly one consistent prediction, associated with the outcome sequences sff, fsf, and ffs. Thus the probability that the theory achieves exactly one consistent prediction is

$$P_j(x = 1) = P_{1j}(1 - P_{2j})(1 - P_{3j}) + (1 - P_{1j})P_{2j}(1 - P_{3j}) + (1 - P_{1j})(1 - P_{2j})P_{3j} = \tfrac{1}{4} + \tfrac{1}{8} + \tfrac{1}{12} = \tfrac{11}{24}.$$

It can similarly be verified that $P_j(x = 2) = \tfrac{1}{4}$ and $P_j(x = 3) = \tfrac{1}{24}$. As always, these probabilities sum to unity, as the different scenarios are mutually exclusive and exhaustive.
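The computations behind equation (1) are easy to mechanize. The following minimal Python sketch (our own illustration, not code from the paper) computes the exact probability mass function of the number of successes in independent Bernoulli trials with unequal success probabilities by dynamic programming (equivalent to, but much faster than, enumerating the $\binom{n}{k}$ situations of equation (1)), together with the tail probability and the CLT standardization used for the de Swaan data:

```python
from math import sqrt

def exact_pmf(probs):
    """Exact distribution of the number of successes in independent
    Bernoulli trials with success probabilities `probs`.
    Returns a list pmf with pmf[k] = P(x = k)."""
    pmf = [1.0]                          # with zero trials, P(x = 0) = 1
    for p in probs:
        new = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            new[k] += mass * (1 - p)     # this trial fails ...
            new[k + 1] += mass * p       # ... or adds one success
        pmf = new
    return pmf

def tail(pmf, k_star):
    """P(x >= k*): the exact p-value used for the Israeli data."""
    return sum(pmf[k_star:])

def clt_z(probs, successes):
    """Standardized deviation from the randomized mean (de Swaan data)."""
    mean = sum(probs)
    sd = sqrt(sum(p * (1 - p) for p in probs))
    return (successes - mean) / sd

# The three-election example from the text, with P = 1/2, 1/3, 1/4:
pmf = exact_pmf([1/2, 1/3, 1/4])
print(pmf)           # [0.25, 0.4583..., 0.25, 0.0416...] = 1/4, 11/24, 1/4, 1/24
print(tail(pmf, 2))  # P(x >= 2) = 1/4 + 1/24 = 7/24
```

Applied to the 18 election-specific ratios $S_{ij}/N_i$ underlying Table A3, the same routine should reproduce, up to rounding, the Table A4 columns and the tail values reported in Table 1 (e.g. $P_j(x \geq 2) \approx 0.000159$ for the closed minimal range theory).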
With the exact probability mass function for each theory in hand, and supposing the number of successes obtained under the $j$-th theory is $k_j^*$, we can calculate the exact probability that this number of successes or higher would be observed under the theory's randomized counterpart scheme, i.e. $P_j(x \geq k_j^*)$. This, in turn, allows us to test the hypothesis that the theory in question is no better than the randomized scheme. A large value of $P_j(x \geq k_j^*)$ indicates that this is indeed the case, whereas a very small value provides strong evidence against the null hypothesis, indicating that the predictive performance of the theory is unlikely to be the result of pure chance. In the latter situation, the theory in question deserves further consideration. We now turn to a discussion of our results based on the tests we have described.
4. Main Results

As is evident from a cursory review of the summary statistics presented in Table 1, Aumann's SSQ hypothesis resulted in predicted sets that performed poorly both for the de Swaan data and for the more detailed Israeli data. For the 65 elections we considered in de Swaan's data set, the expected number of consistent predictions under the randomized counterpart of the Maximal SSQ hypothesis is 13.2115, but the hypothesis itself produced only 8 consistent predictions, a performance 1.8269 standard errors below its randomized mean. In fact, it is the only one of the six theories considered that performed worse than its randomized counterpart. For the Israeli 1949–2006 data set, it is also one of five hypotheses that did not achieve a single success, while at the same time having the highest randomized mean of the five, making it the least attractive theory. These considerations lead us to reject the Maximal SSQ hypothesis categorically.

Detailed computational results appear in the Appendix: Table A1 and Table A2 for the de Swaan data set, and Table A3 for the Israeli elections from 1949 through 2006. In all these tables there are entries of the form x/y in the columns of the various tested theories. Thus, for example, in the first row of Table A1 under the Maximal SSQ column appears the entry 3/48, to be interpreted thus: over all 17 elections considered in Denmark there were 48 possible coalitions in which the leader's Shapley–Shubik index is maximized, and only 3 of them were actually formed. The same interpretation applies, mutatis mutandis, to all other entries of this form. Table A4 tabulates the exact probability mass functions employed in our evaluation of the detailed Israeli data set.
Table 1. Summary Test Statistics on the Predictive Accuracy of the Theories

| | Maximal SSQ | Closed and Maximal SSQ | Closed Maximal SSQ and Minimum Size | Minimum Size | Closed Maximal SSQ and Minimal Range (Interval) | Closed Minimal Range (Interval) |
|---|---|---|---|---|---|---|
| De Swaan's data | | | | | | |
| Mean number of successes (randomized scheme) | 13.2115 | 7.2973 | 4.2307 | 4.6451 | 4.7588 | 5.5605 |
| Standard deviation (randomized scheme) | 2.8527 | 2.3299 | 1.9454 | 2.0370 | 2.0411 | 2.1892 |
| Number of consistent predictions | 8 | 13 | 12 | 26 | 13 | 34 |
| Standardized deviation from randomized mean | –1.8269 | 2.4477 | 3.9937 | 10.4836 | 4.0376 | 12.9910 |
| Israel (1949–2006 data) | | | | | | |
| Mean number of successes (randomized scheme) | 0.3790 | 0.0133 | 0.0132 | 0.4013 | 0.0132 | 0.0195 |
| Number of consistent predictions | 0 | 0 | 0 | 1 | 0 | 2 |
| $P_j(x \geq k_j^*)$ | 1.0000 | 1.0000 | 1.0000 | 0.3346 | 1.0000 | 0.000159 |
All three variations of the Maximal SSQ version considered, namely the Closed Maximal SSQ variation, the Closed Maximal SSQ and Minimum Size variation, and the Closed Maximal SSQ and Minimal Range variation, performed better than the (pure) Maximal SSQ version, both in being more accurate in their predictions and in their predictions being more parsimonious. However, these variations, too, are considerably inferior to both the Gamson–Riker minimum size theory and the Axelrod–Leiserson minimal range theory.

Like Aumann's hypothesis, the Gamson–Riker cheapest coalition or minimum size principle is simple, and its informational requirements are similar. Unlike Aumann's hypothesis, however, it is extremely parsimonious and often produces predicted sets that are very small relative to the number of winning coalitions. As can be seen from Table 1, this principle works reasonably well for de Swaan's data set, with 26 consistent predictions in 65 elections, or 10.484 standard errors above its randomized mean. As can also be seen from Table 1, this principle was more parsimonious and produced more accurate predictions than any of the three variations of Aumann's hypothesis that we examined. But its performance is inferior to that achieved by the Axelrod–Leiserson minimal range theory. As pointed out by Aumann, as well as by others before him, an apparent key weakness of forming a cheapest coalition (or a coalition of minimum size) is that the defection of even its smallest member turns it into a losing coalition, thereby apparently making the prospect of forming such a coalition very unattractive for a coalition leader. Nevertheless, there seem to be considerable merits in forming coalitions of minimal size, as 40% of the examined coalitions in de Swaan's dataset were of this type.

As can be seen from Table 1, Table A1, and Table A3, the Axelrod–Leiserson closed minimal range theory was best both in predictive performance and in parsimony: of the 65 elections in de Swaan's data set, it correctly predicted the (original) governmental coalition formed immediately following 34 (52%) of these elections, and it was the only theory investigated that scored two correct predictions in the detailed 18-election Israeli data set.
5. Concluding Remarks

According to the Social Science Citation Index, the Shapley–Shubik (1954) paper is still, more than 50 years after it was first published, one of the most cited papers in social science research. Nevertheless, it is very doubtful whether many politicians have read this paper, or even heard about the Shapley–Shubik voting-power index, let alone use it as a tool for deciding which governing coalition they should form, join, or defect from. Aumann, in proposing his hypothesis, was of course aware of this when he responded to his interviewer that
'… for good science it is not important that you understand it right off the bat. What is important, in the first instance is that it is correct, that it works. If it works, then that in itself tells us that the Shapley value is relevant.'

So, does it work? As can be seen from Table 1, as well as from Tables A1, A2, and A3, the answer is clearly 'no'. Aumann's hypothesis in its (pure) SSQ version achieved the smallest number of consistent predictions for both data sets considered, and it lacks the parsimony required of a good predictive theory; hence it should be rejected. However, we wish to note that, based on a set of additional calculations that we have performed and intend to report in a separate paper, the number of consistent predictions under the (pure) SSQ version of Aumann's hypothesis becomes much larger if one calculates the S-S index or Penrose measure of the leading party by setting the quota within each prospective winning coalition it can form equal to a simple majority of the coalition's members, instead of setting it equal to a simple majority of the entire parliament.23 This increase in the number of consistent predictions is due to the fact that in a large proportion of the coalitions actually formed, the party forming the coalition controlled an absolute majority of the votes within the coalition, which made it a dictator with a (maximal) S-S index of 1. But in this case Aumann's SSQ hypothesis suffers a further substantial decrease in parsimony, because there usually existed many possible winning coalitions in which the leading party controlled an absolute majority of the votes within the coalition, making it, presumably, indifferent among them.

In conclusion, it appears that maximization of a priori voting (P-)power by the party charged with forming a governmental coalition – which seems quite reasonable from the point of view of game theory – does not account for which governing coalitions are formed in practice following general elections. This conclusion corroborates the (negative) findings of Felsenthal and Machover (1998) regarding Israeli governing coalitions during the period 1977–96. It is also in line with the (negative) findings of Riker (1959) and of Felsenthal and Machover (2000), who discovered that defections from governing coalitions cannot be accounted for by considerations of a priori voting power.
23 As explained by Felsenthal and Machover (2002), the overall Penrose voting power of a member of an alliance is equal to his voting power within the alliance multiplied by the alliance's voting power in the entire assembly. When one considers only winning governmental alliances (coalitions) – as is the case in this paper – the voting power of each prospective winning coalition within the entire assembly (parliament) is, by definition, equal to 1 according to any reasonable voting power index; and the (Penrose) power of each coalition member within the coalition should be calculated by setting the quota equal to the decision rule existing within the coalition (usually a simple majority of the coalition's members), rather than equal to the quota existing in the entire assembly (the manner in which Aumann's SSQ hypothesis must be interpreted; see the explanation at the beginning of Section 2).
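Footnote 23's point can be made concrete with a small computation. The following minimal Python sketch (our own illustration; the seat numbers are hypothetical) computes the Penrose measure of a leading party within a 61-seat coalition of a 120-seat parliament, once with the whole-house quota and once with the intra-coalition majority quota:

```python
from itertools import combinations

def penrose(weights, quota, player):
    """Penrose measure of `player`: the fraction of the 2^(n-1) subsets of
    the other voters for which `player`'s vote is decisive."""
    others = [p for p in weights if p != player]
    decisive = 0
    for r in range(len(others) + 1):
        for combo in combinations(others, r):
            s = sum(weights[p] for p in combo)
            if s < quota <= s + weights[player]:
                decisive += 1
    return decisive / 2 ** len(others)

# Hypothetical 61-seat coalition in a 120-seat house: leader with 40 seats,
# partners with 11 and 10.
w = {'Leader': 40, 'P1': 11, 'P2': 10}
print(penrose(w, 61, 'Leader'))  # whole-house quota: 0.25 (same for all members)
print(penrose(w, 31, 'Leader'))  # intra-coalition majority quota: 1.0 (dictator)
```

With the intra-coalition quota, the leader, controlling an absolute majority of the coalition's votes, is decisive in every division: exactly the dictator effect described in footnote 23.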
However, recently Andjiga et al. (2006) examined the outcomes of four elections24 in order to ascertain whether the Shapley–Shubik or the Penrose–Banzhaf power indices, as modified by Owen (1977, 1982) for voting games 'with a priori coalitions', can be used to predict which governmental coalitions are likely to form. Using these modified indices, they examined two alternative hypotheses: (1) that the coalition formed will belong to the core; (2) that the largest party forms the coalition, and that it will choose the coalition in which its modified Shapley–Shubik or Penrose–Banzhaf index is maximized, subject to several additional constraints. The conclusions of this study are that the core does not always include a coalition in which the largest party is a member, and that the optimal coalition in which the largest party is a member is sensitive to the power index used, to the assumption as to whether the opposition parties act independently of one another, and to the ideological ordering of the parties along a left–right continuum. So it seems that the use of power indices to predict the formation of governmental coalitions according to these alternative approaches is also not very promising.
24 The elections to the Catalan (regional) parliament held in 1980, and the parliamentary elections held in the Republic of Cameroon, the Czech Republic, and the Federal Republic of Germany in 1993, 2002, and 2005, respectively.
Appendix

Table A1. Summary of Predictive Accuracy (de Swaan's Data)

| No. | Country | Number of Elections | Maximal SSQ | Closed and Maximal SSQ | Closed Maximal SSQ and Minimum Size | Minimum Size | Closed Maximal SSQ and Minimal Range (Interval) | Closed Minimal Range (Interval) |
|---|---|---|---|---|---|---|---|---|
| 1 | Denmark | 17 | 3/48 | 6/30 | 6/17 | 16/18 | 6/21 | 16/22 |
| 2 | Finland | 10 | 3/38 | 4/16 | 4/10 | 1/13 | 4/10 | 2/13 |
| 3 | France | 1 | 0/16 | 0/1 | 0/1 | 0/1 | 0/1 | 0/2 |
| 4 | Israel | 6 | 0/18 | 0/7 | 0/7 | 0/13 | 0/6 | 1/10 |
| 5 | Italy | 2 | 0/10 | 0/3 | 0/2 | 1/2 | 0/3 | 1/4 |
| 6 | Norway | 4 | 0/10 | 2/5 | 2/4 | 4/4 | 2/5 | 3/7 |
| 7 | Sweden | 7 | 1/29 | 0/13 | 0/7 | 3/7 | 0/8 | 5/9 |
| 8 | The Netherlands | 15 | 1/57 | 1/29 | 0/15 | 1/29 | 1/17 | 6/21 |
| 9 | Weimar Republic | 3 | 0/14 | 0/4 | 0/3 | 0/5 | 0/3 | 0/5 |
| | Total | 65 | 8/243 | 13/108 | 12/66 | 26/92 | 13/74 | 34/93 |
| | Mean no. of successes (randomized scheme) | * | 13.2115 | 7.2973 | 4.2307 | 4.6451 | 4.7588 | 5.5605 |
| | Standard deviation (randomized scheme) | * | 2.8527 | 2.3299 | 1.9454 | 2.0370 | 2.0411 | 2.1892 |
| | Standardized deviation from randomized mean | * | –1.8269 | 2.4477 | 3.9937 | 10.4836 | 4.0376 | 12.9910 |
Table A2. Detailed Tabulation of Predictive Performance by Country

| No. | Election Year | # Winning Coalitions | # Closed Winning Coalitions | Maximal SSQ | Closed and Maximal SSQ | Closed Maximal SSQ and Minimum Size | Minimum Size | Closed Maximal SSQ and Minimal Range (Interval) | Closed Minimal Range (Interval) |
|---|---|---|---|---|---|---|---|---|---|
| (a) Denmark | | | | | | | | | |
| 1 | 1918 | 6 | 5 | 1/4 | 1/3 | 1/1 | 1/1 | 1/2 | 1/2 |
| 2 | 1920 | 6 | 4 | 0/1 | 1/3 | 1/1 | 1/1 | 1/1 | 1/1 |
| 3 | 1920A | 6 | 4 | 0/1 | 1/3 | 1/1 | 1/1 | 1/1 | 1/1 |
| 4 | 1920B | 6 | 4 | 1/4 | 1/3 | 1/1 | 1/1 | 1/1 | 1/1 |
| 5 | 1924 | 7 | 3 | 0/3 | 0/1 | 0/1 | 1/1 | 0/1 | 1/1 |
| 6 | 1926 | 6 | 4 | 1/4 | 1/3 | 1/1 | 1/1 | 1/1 | 1/1 |
| 7 | 1929 | 7 | 3 | 0/3 | 0/1 | 0/1 | 1/1 | 0/1 | 1/1 |
| 8 | 1932 | 7 | 3 | 0/3 | 0/1 | 0/1 | 1/1 | 0/1 | 1/1 |
| 9 | 1935 | 7 | 3 | 0/1 | 0/1 | 0/1 | 1/1 | 0/1 | 1/1 |
| 10 | 1939 | 7 | 3 | 0/1 | 0/1 | 0/1 | 1/1 | 0/1 | 1/1 |
| 11 | 1945 | 11 | 5 | 0/4 | 0/1 | 0/1 | 1/1 | 0/1 | 1/2 |
| 12 | 1953 | 14 | 6 | 0/1 | 0/1 | 0/1 | 1/1 | 0/1 | 1/1 |
| 13 | 1957 | 13 | 3 | 0/2 | 0/1 | 0/1 | 1/1 | 0/1 | 0/1 |
| 14 | 1960 | 15 | 7 | 0/4 | 0/2 | 0/1 | 1/2 | 0/2 | 1/2 |
| 15 | 1966 | 14 | 6 | 0/6 | 0/2 | 0/1 | 1/1 | 0/2 | 1/1 |
| 16 | 1968 | 10 | 5 | 0/2 | 1/1 | 1/1 | 0/1 | 1/1 | 1/2 |
| 17 | 1971 | 15 | 7 | 0/4 | 0/2 | 0/1 | 1/1 | 0/2 | 1/2 |
| | Total | * | * | 3/48 | 6/30 | 6/17 | 16/18 | 6/21 | 16/22 |
| (b) Finland | | | | | | | | | |
| 1 | 1924 | 20 | 3 | 0/2 | 1/1 | 1/1 | 0/1 | 1/1 | 1/1 |
| 2 | 1930 | 12 | 2 | 1/8 | 1/1 | 1/1 | 0/1 | 1/1 | 1/1 |
| 3 | 1933 | 17 | 6 | 0/3 | 1/1 | 1/1 | 1/2 | 1/1 | 0/1 |
| 4 | 1939 | 16 | 6 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 |
| 5 | 1951 | 24 | 9 | 1/9 | 0/4 | 0/1 | 0/3 | 0/1 | 0/1 |
| 6 | 1954 | 18 | 5 | 0/2 | 0/1 | 0/1 | 0/1 | 0/1 | 0/2 |
| 7 | 1958 | 43 | 8 | 0/5 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 |
| 8 | 1962 | 23 | 8 | 0/3 | 0/2 | 0/1 | 0/1 | 0/1 | 0/2 |
| 9 | 1966 | 49 | 9 | 1/2 | 1/1 | 1/1 | 0/1 | 1/1 | 0/2 |
| 10 | 1970 | 42 | 9 | 0/3 | 0/3 | 0/1 | 0/1 | 0/1 | 0/1 |
| | Total | * | * | 3/38 | 4/16 | 4/10 | 1/13 | 4/10 | 2/13 |
| (c) France | | | | | | | | | |
| 1 | 1947 | 44 | 9 | 0/16 | 0/1 | 0/1 | 0/1 | 0/1 | 0/2 |
| | Total | * | * | 0/16 | 0/1 | 0/1 | 0/1 | 0/1 | 0/2 |
| (d) Israel | | | | | | | | | |
| 1 | 1949 | 115 | 16 | 0/5 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 |
| 2 | 1955 | 226 | 19 | 0/5 | 0/2 | 0/2 | 0/6 | 0/1 | 0/1 |
| 3 | 1959 | 118 | 15 | 0/3 | 0/1 | 0/1 | 0/2 | 0/1 | 0/3 |
| 4 | 1961 | 112 | 15 | 0/2 | 0/1 | 0/1 | 0/1 | 0/1 | 1/1 |
| 5 | 1965 | 54 | 9 | 0/2 | 0/1 | 0/1 | 0/2 | 0/1 | 0/2 |
| 6 | 1969 | 60 | 9 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 | 0/2 |
| | Total | * | * | 0/18 | 0/7 | 0/7 | 0/13 | 0/6 | 1/10 |
| (e) Italy | | | | | | | | | |
| 1 | 1946 | 52 | 11 | 0/9 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 |
| 2 | 1972 | 29 | 9 | 0/1 | 0/2 | 0/1 | 1/1 | 0/2 | 1/3 |
| | Total | * | * | 0/10 | 0/3 | 0/2 | 1/2 | 0/3 | 1/4 |
| (f) Norway | | | | | | | | | |
| 1 | 1936 | 7 | 3 | 0/3 | 0/1 | 0/1 | 1/1 | 0/1 | 0/1 |
| 2 | 1961 | 31 | 9 | 0/5 | 0/2 | 0/1 | 1/1 | 0/2 | 1/2 |
| 3 | 1965 | 9 | 3 | 0/1 | 1/1 | 1/1 | 1/1 | 1/1 | 1/2 |
| 4 | 1969 | 9 | 3 | 0/1 | 1/1 | 1/1 | 1/1 | 1/1 | 1/2 |
| | Total | * | * | 0/10 | 2/5 | 2/4 | 4/4 | 2/5 | 3/7 |
| (g) Sweden | | | | | | | | | |
| 1 | 1917 | 12 | 6 | 1/8 | 0/4 | 0/1 | 0/1 | 0/1 | 0/1 |
| 2 | 1924 | 7 | 3 | 0/3 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 |
| 3 | 1932 | 14 | 6 | 0/6 | 0/2 | 0/1 | 0/1 | 0/1 | 1/1 |
| 4 | 1936 | 15 | 7 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 | 1/2 |
| 5 | 1952 | 7 | 3 | 0/1 | 0/1 | 0/1 | 1/1 | 0/1 | 1/1 |
| 6 | 1956 | 14 | 6 | 0/6 | 0/2 | 0/1 | 1/1 | 0/1 | 1/1 |
| 7 | 1970 | 15 | 7 | 0/4 | 0/2 | 0/1 | 1/1 | 0/2 | 1/2 |
| | Total | * | * | 1/29 | 0/13 | 0/7 | 3/7 | 0/8 | 5/9 |
| (h) The Netherlands | | | | | | | | | |
| 1 | 1918 | 95 | 14 | 0/1 | 0/1 | 0/1 | 1/3 | 0/1 | 1/2 |
| 2 | 1922 | 25 | 9 | 0/5 | 0/1 | 0/1 | 0/1 | 0/1 | 1/3 |
| 3 | 1925 | 19 | 8 | 0/2 | 0/1 | 0/1 | 0/1 | 0/1 | 1/2 |
| 4 | 1929 | 48 | 11 | 0/2 | 0/3 | 0/1 | 0/1 | 0/1 | 1/2 |
| 5 | 1933 | 79 | 14 | 0/5 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 |
| 6 | 1937 | 83 | 15 | 0/2 | 0/2 | 0/1 | 0/3 | 0/2 | 1/2 |
| 7 | 1946 | 24 | 10 | 0/2 | 0/1 | 0/1 | 0/1 | 0/1 | 1/1 |
| 8 | 1948 | 21 | 8 | 1/8 | 0/3 | 0/1 | 0/1 | 0/1 | 0/1 |
| 9 | 1952 | 23 | 8 | 0/9 | 0/4 | 0/1 | 0/2 | 0/1 | 0/1 |
| 10 | 1956 | 24 | 8 | 0/1 | 0/4 | 0/1 | 0/1 | 0/1 | 0/1 |
| 11 | 1959 | 12 | 6 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 |
| 12 | 1963 | 48 | 14 | 0/5 | 0/2 | 0/1 | 0/1 | 0/1 | 0/1 |
| 13 | 1967 | 192 | 20 | 0/10 | 0/2 | 0/1 | 0/4 | 0/1 | 0/1 |
| 14 | 1971 | 64 | 11 | 0/1 | 1/2 | 0/1 | 0/2 | 1/2 | 0/1 |
| 15 | 1973 | 202 | 15 | 0/3 | 0/1 | 0/1 | 0/6 | 0/1 | 0/1 |
| | Total | * | * | 1/57 | 1/29 | 0/15 | 1/29 | 1/17 | 6/21 |
| (i) Weimar Republic | | | | | | | | | |
| 1 | 1919 | 56 | 10 | 0/2 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 |
| 2 | 1925 | 150 | 12 | 0/11 | 0/2 | 0/1 | 0/2 | 0/1 | 0/2 |
| 3 | 1928 | 105 | 9 | 0/1 | 0/1 | 0/1 | 0/2 | 0/1 | 0/2 |
| | Total | * | * | 0/14 | 0/4 | 0/3 | 0/5 | 0/3 | 0/5 |
Table A3. Summary of Predictive Accuracy (Israel 1949–2006)

| No. | Election Year | # Winning Coalitions | # Closed Winning Coalitions | Maximal SSQ | Closed and Maximal SSQ | Closed Maximal SSQ and Minimum Size | Minimum Size | Closed Maximal SSQ and Minimal Range (Interval) | Closed Minimal Range (Interval) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 1949 | 1883 | 30 | 0/4 | 0/1 | 0/1 | 0/18 | 0/1 | 0/1 |
| 2 | 1951 | 15262 | 42 | 0/21 | 0/1 | 0/1 | 0/193 | 0/1 | 0/1 |
| 3 | 1955 | 1867 | 30 | 0/2 | 0/1 | 0/1 | 0/24 | 0/1 | 0/1 |
| 4 | 1959 | 1961 | 35 | 0/2 | 0/1 | 0/1 | 0/16 | 0/1 | 0/1 |
| 5 | 1961 | 929 | 25 | 0/1 | 0/1 | 0/1 | 0/17 | 0/1 | 0/4 |
| 6 | 1965 | 3733 | 41 | 0/3 | 0/1 | 0/1 | 0/60 | 0/1 | 0/1 |
| 7 | 1969 | 4069 | 44 | 0/3 | 0/1 | 0/1 | 0/18 | 0/1 | 0/2 |
| 8 | 1973 | 454 | 25 | 0/2 | 0/1 | 0/1 | 0/13 | 0/1 | 0/1 |
| 9 | 1977 | 3470 | 14 | 0/1 | 0/1 | 0/1 | 0/58 | 0/1 | 0/1 |
| 10 | 1981 | 384 | 13 | 0/4 | 0/1 | 0/1 | 0/20 | 0/1 | 0/2 |
| 11 | 1984 | 12875 | 27 | 0/3 | 0/1 | 0/1 | 0/587 | 0/2 | 0/1 |
| 12 | 1988 | 12288 | 30 | 0/1 | 0/3 | 0/1 | 0/492 | 0/1 | 0/1 |
| 13 | 1992 | 440 | 22 | 0/9 | 0/1 | 0/1 | 0/10 | 0/1 | 1/1 |
| 14 | 1996 | 739 | 12 | 0/246 | 0/1 | 0/1 | 0/20 | 0/1 | 0/1 |
| 15 | 1999 | 12703 | 39 | 0/6 | 0/1 | 0/1 | 0/301 | 0/1 | 0/3 |
| 16 | 2001 | 11059 | 23 | 0/1 | 0/1 | 0/1 | 0/297 | 0/1 | 0/2 |
| 17 | 2003 | 3618 | 16 | 0/3 | 0/1 | 0/1 | 0/57 | 0/1 | 0/1 |
| 18 | 2006 | 1641 | 33 | 0/1 | 0/1 | 0/1 | 1/33 | 0/1 | 1/1 |
| | Total | 89375 | 501 | 0/313 | 0/20 | 0/18 | 1/2234 | 0/19 | 2/26 |
| | Mean no. of successes (randomized scheme) | * | * | 0.3790 | 0.0133 | 0.0132 | 0.4013 | 0.0132 | 0.0195 |
| | No. of consistent predictions | * | * | 0 | 0 | 0 | 1 | 0 | 2 |
| | $P_j(x \geq k_j^*)$ | * | * | 1.0000 | 1.0000 | 1.0000 | 0.3346 | 1.0000 | 0.000159 |
Table A4. Exact Probability Mass Function $P_j(x = k)$ for the Randomized Scheme under the Minimum Size and Closed Minimal Range Theories

| Number of Successes (k) | nCk | Minimum Size | Closed Minimal Range (Interval) |
|---|---|---|---|
| 0 | 1 | 0.66543 | 0.98067 |
| 1 | 18 | 0.27515 | 0.019176 |
| 2 | 153 | 0.052672 | 0.000158 |
| 3 | 816 | 0.0062037 | 7.31E-07 |
| 4 | 3060 | 0.00050386 | 2.12E-09 |
| 5 | 8568 | 3.00E-05 | 4.15E-12 |
| 6 | 18564 | 1.35E-06 | 5.68E-15 |
| 7 | 31824 | 4.74E-08 | 5.60E-18 |
| 8 | 43758 | 1.31E-09 | 4.07E-21 |
| 9 | 48620 | 2.85E-11 | 2.21E-24 |
| 10 | 43758 | 4.95E-13 | 8.99E-28 |
| 11 | 31824 | 6.79E-15 | 2.74E-31 |
| 12 | 18564 | 7.32E-17 | 6.25E-35 |
| 13 | 8568 | 6.11E-19 | 1.05E-38 |
| 14 | 3060 | 3.85E-21 | 1.26E-42 |
| 15 | 816 | 1.77E-23 | 1.05E-46 |
| 16 | 153 | 5.56E-26 | 5.72E-51 |
| 17 | 18 | 1.06E-28 | 1.80E-55 |
| 18 | 1 | 9.31E-32 | 2.49E-60 |
| Sum | | 1.00000 | 1.00000 |
Acknowledgements

The authors are grateful to Matthew Braham, Aurobindo Ghosh, and Moshé Machover for their helpful comments.
References

Andjiga, N.G., Badirou, D., and Mbih, B. (2006) On the Evaluation of Power in Parliaments and Government Formation, University of Rennes 1 and University of Caen Economics Working Papers (www Archive).
Axelrod, R. (1970) Conflict of Interest: A Theory of Divergent Goals with Applications to Politics, Markham.
Banzhaf, J.F. (1965) Weighted Voting Doesn't Work: A Mathematical Analysis, Rutgers Law Review 19: 317–343.
Browne, E., Gleiber, D., and Mashoba, C. (1984) Evaluating Conflict of Interest Theory: Western European Cabinet Coalitions, 1945–80, British Journal of Political Science 14: 1–32.
Dodd, L.C. (1976) Coalitions in Parliamentary Government, Princeton University Press.
van Damme, E. (Interviewer) (1997) On the State of the Art in Game Theory: An Interview with Robert Aumann, in W. Albers et al. (eds) Understanding Strategic Interaction: Essays in Honor of Reinhard Selten, Springer. Reprinted in Games and Economic Behavior 24 (1998): 181–210.
Felsenthal, D.S. and Machover, M. (1998) The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes, Edward Elgar.
Felsenthal, D.S. and Machover, M. (2000) Voting Power and Parliamentary Defections: The 1953–54 French National Assembly Revisited, paper presented at the Workshop on Game Theoretic Approaches to Cooperation and Exchange of Information with Economic Application, University of Caen, France, May 25–27, 2000.
Felsenthal, D.S. and Machover, M. (2002) Annexations and Alliances: When Are Blocs Advantageous a Priori? Social Choice and Welfare 19: 295–312.
Felsenthal, D.S. and Machover, M. (2005) Voting Power Measurement: A Story of Misreinvention, Social Choice and Welfare 25: 485–506.
Gamson, W.A. (1961) A Theory of Coalition Formation, American Sociological Review 26: 373–382.
Laver, M. and Schofield, N. (1991) Multiparty Government: The Politics of Coalition in Europe, Oxford University Press.
Leiserson, M.A. (1966) Coalitions in Politics: A Theoretical and Empirical Study, PhD dissertation, Yale University.
Luce, R.D. and Raiffa, H. (1957) Games and Decisions: Introduction and Critical Survey, John Wiley.
Owen, G. (1977) Values of Games with a Priori Unions, in R. Henn and O. Moeschlin (eds) Mathematical Economics and Game Theory: Essays in Honor of Oskar Morgenstern, Springer, 76–87.
Owen, G. (1982) Modification of the Banzhaf–Coleman Index for Games with a Priori Unions, in M.J. Holler (ed.) Power, Voting, and Voting Power, Physica-Verlag.
Penrose, L.S. (1946) The Elementary Statistics of Majority Voting, Journal of the Royal Statistical Society 109: 53–57.
Riker, W.H. (1959) A Test of the Adequacy of the Power Index, Behavioral Science 4: 120–131.
Riker, W.H. (1962) The Theory of Political Coalitions, Yale University Press.
Shapley, L.S. (1953) A Value for n-Person Games, in H.W. Kuhn and A.W. Tucker (eds) Contributions to the Theory of Games II, Princeton University Press, 307–317.
Shapley, L.S. and Shubik, M. (1954) A Method for Evaluating the Distribution of Power in a Committee System, American Political Science Review 48: 787–792.
de Swaan, A. (1973) Coalition Theories and Cabinet Formation: A Study of Formal Theories of Coalition Formation Applied to Nine European Parliaments After 1918, Elsevier.
Taylor, M. and Laver, M. (1973) Government Coalitions in Western Europe, European Journal of Political Research 1: 205–248.
10. Coalition Formation, Agenda Selection, and Power

Friedel Bolle
Department of Economics, Europa University Viadrina, Frankfurt/Oder, Germany

Yves Breitmoser
Department of Economics, Europa University Viadrina, Frankfurt/Oder, Germany
1. Introduction

In a wide range of political systems, decision making requires the support of a majority. When majorities are backed by coalitions, these are organized either for a given period of time or on a case-by-case basis (i.e. for single proposals). In Germany, for instance, minority governments are rare and the first case prevails. In either case, there may be a unique party that is entitled to form a coalition (called the formateur), or any party may be entitled to do so. We are interested in a non-cooperative treatment of a formateur model and, based on this, we want to discuss bargaining power.

The model that we analyze deviates from the models discussed in the literature, so let us first outline and discuss it. Briefly, the following applies. The party with the largest voting share is assigned the task of the formateur and asks the other parties about their aspirations with respect to government participation. Given a profile of reported aspirations, he chooses a coalition and an agenda subject to the requirement that the aspirations of all participating parties are satisfied. If the formateur fails to form a coalition, a fixed outside option applies (e.g. new elections). Thus our approach to coalitional bargaining resembles 'menu auctions' (Bernheim and Whinston 1986) and 'team selection' (Bolle 1995). The model differs in that we rule out side payments, and in that our context may allow us to assume specific valuation functions. For instance, it may be appropriate to assume spatial preferences to model the valuation of political agendas (see also Steunenberg et al. 1999), or to relate the valuations of alternative coalitions to some scalar measure of their respective stabilities (see below).

Our main innovation is to include a public element in the model of the communication between the formateur and the coalition candidates.
Table 1. Valuations in Example 1

| Agenda, Coalition Partner | Party 0 | Party 1 | Party 2 |
|---|---|---|---|
| (x, 2) | 5 | 4 | 2 |
| (x, 1) | 4 | 5 | 1 |
| (y, 1) | 3 | 2 | 5 |
| (y, 2) | 2 | 1 | 4 |
| ∅ (outside option) | 1 | 3 | 3 |
Presumably, the public (the electorate) is interested in the parties' aspirations whenever they bargain about agendas and coalitions, and it requires the parties to outline their aspirations. In addition, the parties themselves are interested in communicating aspirations, as the formateur would have to offer them contracts where their aspirations are satisfied (if possible). Thus, unilaterally, each party is best off announcing aspirations. This positive effect of communicating aspirations comes at a price that appears at first glance negligible: the public expects the parties to accept proposals where their aspirations are satisfied. The aspiration levels thus resemble auction bids, and the coalitional bargaining game approximates an auction. As a result, the parties can find it optimal to announce very low aspiration levels.

The models discussed in the literature typically assume that the formateur offers a contract to a coalition of his choice, and the parties decide whether to accept or reject it. There is no binding communication between the players, but the results are typically robust to assuming cheap talk. Examples of such models are Baron and Ferejohn (1989), Baron (1991), Austen-Smith and Banks (1988), and Eraslan (2002). It may appear that simultaneous announcements of aspirations are strategically equivalent to simultaneous decisions on accepting or rejecting a proposal. This is not true: rejecting a specific proposal invokes the outside option, whereas announcing unrealistically high aspirations only leads to coalitions that do not include oneself; it does not invoke the outside option.

We illustrate this in a small example, with valuations as given in Table 1. There is the formateur (party 0), who can form a coalition with either of two parties (1 and 2). In addition, the formateur has to decide about the agenda, i.e. about x or y. He prefers x over y, he prefers to implement x with party 2, and if it should be y, then preferably with party 1. In case of failure, the outside option (denoted ∅) would apply. Party 1 prefers agenda x to the outside option, and the outside option over y; the opposite applies to party 2. Without communication, the formateur would propose to implement x with party 1. Since 1 prefers this to the outside option, it would accept the proposal, whereas 2 would not accept to implement x (it would reject the respective proposal to invoke the outside option). In these circumstances, the parties would never accept proposals where they are worse off than
under the outside option. This changes in a model with public communication. We assume that the players have to set their aspirations simultaneously, i.e. they are set by the parties' executive boards uninformed of the others' aspirations. In our example, if 2 expects that 1 sets aspirations of 5, corresponding with requiring (x, 1), then its best response would be aspirations of 2. In the eyes of 2, this is worse than the outside option, but these aspirations constitute the unique equilibrium. Thus, public communication matters, and our point of departure is that the public requires the parties to outline their aspirations before the negotiations start.
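The mechanics of the example can be checked by brute force. The following minimal Python sketch (our own illustration of the example, not the paper's formal model) lets the formateur pick the feasible (agenda, partner) pair that maximizes his own valuation, and enumerates party 2's payoff for each announcement it could make when party 1 announces aspirations of 5:

```python
# Valuations from Table 1: outcomes are (agenda, partner) pairs or the
# outside option 'out'; v[party][outcome] is that party's valuation.
OUTCOMES = [('x', 2), ('x', 1), ('y', 1), ('y', 2)]
v = {
    0: {('x', 2): 5, ('x', 1): 4, ('y', 1): 3, ('y', 2): 2, 'out': 1},
    1: {('x', 2): 4, ('x', 1): 5, ('y', 1): 2, ('y', 2): 1, 'out': 3},
    2: {('x', 2): 2, ('x', 1): 1, ('y', 1): 5, ('y', 2): 4, 'out': 3},
}

def formateur_choice(asp):
    """Outcome chosen by party 0 given aspirations asp = {1: a1, 2: a2}:
    the own-valuation-maximal outcome whose partner's valuation meets that
    partner's announced aspiration; the outside option if none does."""
    feasible = [o for o in OUTCOMES if v[o[1]][o] >= asp[o[1]]]
    return max(feasible, key=lambda o: v[0][o]) if feasible else 'out'

# Party 2's payoff for each announcement, when party 1 announces 5:
for a2 in range(1, 6):
    outcome = formateur_choice({1: 5, 2: a2})
    print(a2, outcome, 'payoff to party 2:', v[2][outcome])
```

Announcing 2 (or less) yields party 2 the outcome (x, 2) and a payoff of 2; any higher announcement leads the formateur to form (x, 1), leaving party 2 with a payoff of 1. Hence aspirations of 2 are optimal for party 2, even though the resulting payoff falls short of its outside option (worth 3).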
(amongst others) Garrett and Tsebelis (1999a,b), Steunenberg et al. (1999), and Napel and Widgrén (2004). Traditionally, power indices are based solely on the parties' voting shares and the decision rules of the voting body. The aim is to characterize power abstracting from preferences (political programs), strategies, and institutions. The critics argue that this approach is problematic, and we largely agree with them – in particular, we agree with the arguments and approach of Napel and Widgrén (2004). In our eyes, whenever coalitional bargaining takes place, the parties bargain in specific non-cooperative frameworks, they have specific preferences, and probably also detailed knowledge of each other's preferences (platforms). The framework and the preferences are, of course, of significant relevance for the bargaining outcome. Following Napel and Widgrén (2004), we suggest that if the power of players is to be measured abstracting from preferences (ex ante), then this should be done by taking expectations of power in generic non-cooperative games. In Section 2, the valuation functions are discussed, the coalitional bargaining game is defined formally, and a second example is provided. In Section 3, we define the equilibria that we find most plausible and discuss their general characteristics (formally). In Section 4, we discuss the existence of equilibria, with an eye on an apparently straightforward case: all parties value participation in any government positively. In Section 5, we characterize the set of equilibrium coalitions in a model allowing for negative valuations of government participation. In the concluding Section 6, we discuss the model and the application to power indices.
2. Motivation and Definitions

2.1 Preferences and Issues

What would the parties negotiate about? Three issues arise: whether to join a given coalition, the political agenda for the legislative period, and the allocation of governmental positions. Clearly, a party's willingness to participate in a government depends on the assumed agenda and the allocated positions. Yet it may also depend on the other parties in the coalition, independently of the agenda and the allocated positions. In particular, this is the case if the parties committed to specific coalitions in the election campaign (their reputation would suffer if such commitments were violated), or, from a long-term perspective, if the parties want to distinguish themselves from other parties (with an eye on forthcoming elections). Our model recognizes the possibility of such valuations. It also allows for political positioning in arbitrary spaces. The primitive of our model is a 'valuation function' that maps combinations of 'coalitions' and 'agendas' to valuations. The agenda can be understood literally, i.e. as an outline of forthcoming decisions, but it can also describe the allocation of governmental positions.
Fig. 1. Share of governmental positions allocated to the small party in relation to its voting share (normalized by the voting share of the coalition). [Figure: scatter plot of the small party's share of coalition seats (horizontal axis, 0–0.30) against its share of cabinet ministers (vertical axis, 0–0.35).]
In the following, we illustrate that the allocation of positions appears to be governed by fixed rules rather often. Thus, assuming that the allocation of positions is independent of the original bargaining process need not be restrictive. While this paper was being written, the two big German parties (SPD and CDU) were negotiating a 'grand' coalition. (In the literature, a grand coalition is understood to include all parties, but in Germany, it refers to a coalition of these two parties.) The last grand coalition was formed in 1966 and was in office until 1969. Between these grand coalitions, there were 11 legislative periods in which 'small' coalitions were formed (or 12, if one counts Willy Brandt's resignation as the start of a new government). In each of these, one of the big parties coalesced with a small party, either the FDP or the Greens. For these coalitions, we can count the aggregate number of cabinet positions that the small party was allocated and plot these numbers against the parties' voting shares. This is done in Fig. 1. Note that the voting share is defined as the number of votes for the small party in relation to the combined number of votes for the whole coalition. While the small number of observations does not allow precise statistical inference, it suggests that the relationship may be captured well by simple functions. An even stricter formulation of this phenomenon is known as Gamson's Law (Gamson 1961): a party's share of portfolios is proportional to the amount of resources (seats in the assembly) that it contributed to the coalition. Subsequent empirical analyses strongly supported modified versions of this hypothesis (e.g. Browne and Franklin 1973; Schofield and Laver
1985; Warwick and Druckman 2001). With respect to Germany, let us finally note that the fixed rules describing the allocation of governmental positions can be refined even further. For instance, the big party always named the chancellor, and the small party named the foreign minister and the vice chancellor. For these reasons, it appears reasonable to assume that the resulting allocations of governmental positions depend only on the resulting coalitions and are not explicitly negotiated. Thus, the valuation of the allocated positions is part of the valuation of a given coalition. Accordingly, when we consider simplified models below, we will structure the preferences to reflect political positioning (ex post) and valuations of coalitions, excluding a specific valuation of governmental positions.
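Gamson's proportionality hypothesis is easy to examine numerically. The sketch below (with entirely hypothetical data points, since the numbers underlying Fig. 1 are not reproduced here) estimates the proportionality factor as a least-squares slope through the origin; Gamson's Law predicts a factor of one, while empirical studies often report values above one for small parties.

```python
# Hypothetical data: the small party's seat share within the coalition (x)
# and its share of cabinet posts (y). These numbers are illustrative only.
seat_share    = [0.12, 0.15, 0.18, 0.20, 0.22, 0.25]
cabinet_share = [0.15, 0.17, 0.21, 0.22, 0.26, 0.28]

# Least-squares slope of a line through the origin: b = sum(x*y) / sum(x*x).
b = (sum(x * y for x, y in zip(seat_share, cabinet_share))
     / sum(x * x for x in seat_share))
print(f"estimated proportionality factor: {b:.2f}")  # Gamson's Law predicts 1
```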
2.2 The Model

Let $i \in \{0,\dots,n\}$ denote the political parties, with voting shares $\theta_i$. Without loss of generality, let $\tfrac{1}{2} > \theta_0 \ge \theta_1 \ge \dots \ge \theta_n$. The president, or some other institution, invites a party to form a majority coalition. We assume that party 0 is assigned this task. The task may turn out to be impossible; in this case, party 0 has to choose its outside option, and all parties $i$ realize outside values $\bar v_i$. We denote coalitions by the set of parties taking part in addition to party 0. For instance, the outside option does not require party 0 to coalesce with any other party, and we refer to the underlying (non-majority) coalition as the empty set $\emptyset$. The set of coalitions that 0 may form contains $\emptyset$ and all subsets of $\{1,\dots,n\}$ that lead to a majority coalition (in addition to party 0). We refer to the set of majority coalitions as $C$ and to its elements as $c \in C$; the set $C_0 = C \cup \{\emptyset\}$ also includes the outside option.

$C = \{\, c \subseteq \{1,\dots,n\} \mid \theta_0 + \sum_{i \in c} \theta_i > \tfrac{1}{2} \,\}$   (1)
A political agenda $s \in S$ describes a set of consistent parliamentary decisions that a majority could enforce. $S$ is assumed to be finite. The preferences of party $i$ are described by a valuation function $v_i$ that maps each combination of agenda and coalition to a real number, i.e. $v_i : S \times C_0 \to \mathbb{R}$. Note the generality of this definition: it allows the valuation of a party outside the governing coalition to depend on which parties are in the coalition and on which agenda the coalition would enforce. Similarly, it allows the valuation of a member of the governing coalition to depend on the agenda as well as on the coalition that formed. The coalition formation, as we analyze it, has two stages. First, all parties except 0 announce aspiration levels $a_i$; they do so simultaneously and independently. The idea of bargaining through aspiration levels corresponds to assumptions in related auction models: 'truthful bidding' in menu auctions (Bernheim and Whinston 1986) and 'semisincere strategies' in proxy auctions (Ausubel and Milgrom 2002).
For player $i$, the set of possible aspiration levels is denoted $A_i \subseteq \mathbb{R}$. We assume that there are feasible aspiration levels for all possible valuations of $i$, i.e.

$A_i \supseteq \{\, v_i(s,c) \mid (s,c) \in S \times C_0 \,\}.$   (2)
The aspiration levels constitute two-sided commitments (as discussed in the introduction). On the one hand, party $i$ will not sign a coalition agreement unless participation in the governing coalition under the announced agenda yields at least $a_i$. On the other hand, party $i$ will join every government that yields a valuation of at least $a_i$. Thus, for given aspirations $a_i$ of party $i$, the set of implementable coalitions and agendas is

$G_i(a_i) = \{\, (s,c) \in S \times C_0 \mid i \notin c \text{ or } v_i(s,c) \ge a_i \,\}.$   (3)
We refer to a given combination of coalition and agenda, $(s,c) \in S \times C_0$, as a government. In the second stage of the game, party 0 chooses an agenda $s$ and a coalition $c$ to maximize its own valuation $v_0(s,c)$. Thanks to the above commitments, party $i$ will sign a coalition contract establishing $(s,c)$ if and only if $v_i(s,c) \ge a_i$; that is, there is no renegotiation. Thus, under subgame perfectness, party 0 simply chooses the optimal element of the set of feasible coalition–agenda combinations (governments). For a given aspiration profile $a = (a_i)$, this set is $G(a) = \bigcap_{i \ge 1} G_i(a_i)$, and party 0 maximizes $v_0(s,c)$ subject to $(s,c) \in G(a)$. Notably, $G(a)$ also contains the outside option of party 0 ($c = \emptyset$), as this option does not require the agreement of any other party. The optimal choice of 0 is denoted $(s^*, c^*)$. We assume that there exists at least one majority coalition that all required parties prefer to the outside option: formally, given the outside valuations $\bar v_i$, there exists $(s,c) \in S \times C$ such that $v_i(s,c) \ge \bar v_i$ for all $i \in c \cup \{0\}$. As shown in Example 1 in the introduction, this relation need not apply to equilibrium outcomes. We will characterize the subgame-perfect equilibria (SPEs) of this game. An SPE is described by a profile of aspiration levels $(a_i)_{i \in \{1,\dots,n\}}$ and a function mapping each such profile to the choice of party 0. Notably, the choice of 0 need not be unique for a given profile of aspiration levels, and thus the actual choice is an integral part of any equilibrium.
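To make this concrete, the following minimal Python sketch implements the second stage: it enumerates the majority coalitions of Eq. (1), builds the feasible set of Eq. (3) from announced aspirations, and returns party 0's optimal choice. All voting shares and valuations are hypothetical, and tie-breaking is left to list order.

```python
from itertools import chain, combinations

parties = [1, 2]
agendas = ["x", "y"]
shares = {0: 0.40, 1: 0.35, 2: 0.25}  # hypothetical voting shares theta_i

def majority_coalitions():
    # all c subseteq {1,...,n} with theta_0 + sum_{i in c} theta_i > 1/2, cf. Eq. (1)
    subs = chain.from_iterable(combinations(parties, k) for k in range(1, len(parties) + 1))
    return [frozenset(c) for c in subs if shares[0] + sum(shares[i] for i in c) > 0.5]

OUTSIDE = (None, frozenset())  # the outside option (agenda irrelevant, c = empty set)
R = [(s, c) for s in agendas for c in majority_coalitions()] + [OUTSIDE]

def v(i, g):
    # hypothetical valuations v_i(s, c); the outside option yields fixed values
    if g == OUTSIDE:
        return {0: 0, 1: 1, 2: 1}[i]
    s, c = g
    base = {("x", 0): 5, ("x", 1): 4, ("x", 2): 0,
            ("y", 0): 3, ("y", 1): 0, ("y", 2): 4}[(s, i)]
    return base + (1 if i in c or i == 0 else 0)  # small bonus for participating

def feasible(g, a):
    # g is implementable iff every coalition member's aspiration is met, cf. Eq. (3)
    return all(v(i, g) >= a[i] for i in g[1])

def choice(a):
    # party 0's subgame-perfect choice: the feasible government maximizing v_0
    return max((g for g in R if feasible(g, a)), key=lambda g: v(0, g))

print(choice({1: 4, 2: 2}))  # aspirations a_1 = 4, a_2 = 2 -> ('x', frozenset({1}))
```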
3. Realistic Aspiration Equilibrium

The existence of subgame-perfect equilibria of the above game follows from the existence of Nash equilibria. To illustrate this, let us fix any set of optimal choices by party 0 (as a function mapping aspiration profiles to 0's choices). Then, the above game can be reduced to a static, simultaneous-move
game where the parties $1,\dots,n$ set their aspiration levels. This game is effectively finite, as the set of possible governments, and thus the set of distinguishable aspiration levels, is finite. Hence, Nash equilibria must exist. This holds, of course, only as far as we allow for mixed equilibria. Mixed equilibria, in turn, appear rather vulnerable to (auction-like) dynamic definitions of the game. In the following, we therefore concentrate on pure equilibria; as far as those are concerned, existence is not trivial. In most cases, pure equilibria have to be constructed explicitly to prove existence. We will do so for a couple of special cases, and we characterize the set of governments sustainable in equilibrium through necessary and sufficient conditions. We indicated above that not all pure equilibria appear equally plausible. To illustrate this in more detail, let us distinguish equilibria where all parties have realistic aspiration levels from equilibria with unrealistic aspiration levels. A formal definition follows. Informally, the aspiration level of party $i$ is called realistic if $i$'s valuation of the government that party 0 chooses is equal to $i$'s aspiration level.

Definition 3.1 Fix a pure subgame-perfect equilibrium with aspirations $(a_i)$ and party 0's choice function $g : \prod_i A_i \to S \times C_0$. It is called a realistic aspiration equilibrium (RAE) iff

$\forall i \in \{1,\dots,n\}: \quad a_i = v_i(g(a_1,\dots,a_n)).$   (4)
We say that the aspirations of a player are unrealistically high if they exceed the valuation of the government chosen in response to the given aspiration profile, and unrealistically low if they fall short of the resulting valuation. The assumption of realistic aspirations is closely related to the exclusion of 'blocking bids' in Bolle (1995) and Ausubel and Milgrom (2002). It is also related to (but weaker than) 'coalition proofness' as defined by Bernheim and Whinston (1986).
3.1 Illustrations of the Definition

First, we consider equilibria with unrealistically high aspirations. We find (Lemma 3.1) that either all groups of parties are indifferent with respect to deviations to realistic aspirations, or there is a group of parties that can deviate to realistic aspirations such that all parties in this group benefit. In the latter case, the parties block each other – which would usually not happen in a dynamic game or under trembling hands. To simplify our arguments, we assume that the valuations of the formateur are generic in the sense that he is not indifferent between any pair of alternative options:

$\forall (s,c) \ne (s',c'): \quad v_0(s,c) \ne v_0(s',c').$   (5)
Lemma 3.1 Assume generic valuations, Eq. (5). Consider a government $(s,c)$ that is sustained in an equilibrium with unrealistically high aspirations, and let $(s',c')$ denote the (unique) choice of party 0 when all parties have aspirations that are realistic with respect to $(s,c)$. (1) If $(s',c') = (s,c)$, then all groups of parties with unrealistically high aspirations are indifferent with respect to deviations to realistic aspirations. (2) If $(s',c') \ne (s,c)$, then there exists a group of parties with unrealistically high aspirations that can deviate to realistic aspirations such that all of them are better off.
Proof Case 1: Party 0 does not adapt its choice when all unrealistic parties deviate to realistic aspirations. When only a subset of these parties deviates, the set of possible governments is a subset of the set that results when all of them deviate. The government $(s,c)$ is in both sets, and it constitutes the optimum in the superset. Hence, it also constitutes the (generically unique) optimum when only a subset of parties deviates to realistic aspirations. Consequently, it would still be chosen by 0 after any such deviation, and no valuation is affected by the deviation.

Case 2: Assume that all parties in $c' \setminus c$ with originally unrealistic aspirations deviate to realistic aspirations. As a result, the set of governments that party 0 can choose from is a subset of the options available when all unrealistically high parties deviate, and it includes $(s',c')$. Thus, party 0 optimally chooses $(s',c')$, and all parties in $c' \setminus c$ would be in the coalition after the simultaneous deviation. Hence their aspirations, which are realistic with respect to $(s,c)$, must at least be satisfied in $(s',c')$. Generically, indifference is ruled out, and thus all of the parties in $c' \setminus c$ are better off. ∎

With respect to equilibria with unrealistically low aspirations, we can say that no group of parties would be worse off deviating to realistic aspirations. However, the corresponding realistic strategy profile need not constitute an equilibrium anymore, and in such a case, there may exist an equilibrium in unrealistically low strategies that Pareto dominates all realistic equilibria. Nonetheless, we can conclude that simultaneous deviations to realistic aspirations are either payoff-irrelevant, or there exists a group of parties outside a given coalition that would benefit (because they originally blocked each other). Now, we consider the opposite case (Lemma 3.2): is there blocking in realistic aspiration equilibria? The answer is no.

Lemma 3.2 Consider an outcome $(s,c)$ that is sustained in an RAE. There is no group of parties outside the coalition that can deviate simultaneously from the equilibrium such that they are all better off.
Proof Let $\bar c$ denote a group of parties outside the chosen coalition $c$, i.e.
any $\bar c \subseteq N \setminus c$. When all of the parties in $\bar c$ increase their aspirations, the set of feasible governments shrinks, but the originally optimal $(s,c)$ remains feasible. Hence, it would still be chosen by party 0, and no valuation would be affected. Thus, in order to affect the valuations, at least one of the parties outside $c$ would have to reduce its aspirations, and if this leads to an adapted choice of party 0, then at least one of these parties suffers a loss. ∎

Finally, to conclude our illustrations of the assumption of realistic aspirations, we discuss an issue that concerns the 'degree of competitiveness' of the aspirations. In this context, we say that the lower a party's aspiration, the more competitive is its strategy, and also that parties become more competitive when they decrease their aspirations (in order to be taken into the coalition). Our discussion starts with an observation about the best responses of parties that have realistic aspirations: when they are left out of the coalition despite being realistic, they cannot improve their payoffs by decreasing their aspirations.

Lemma 3.3 Consider an arbitrary aspiration profile $(a_i)$ and let $(s,c)$ denote the corresponding choice of party 0. Any $i \notin c$ with realistic aspirations, $a_i = v_i(s,c)$, cannot gain by decreasing its aspiration level.
Proof If party $i \notin c$ decreases its aspiration, then there are (weakly) more governments that party 0 can choose from. In all of the additional options, party $i$ is worse off than in $(s,c)$, as they became feasible only because $i$ decreased its aspiration. Therefore, if party 0 deviates from $(s,c)$ to one of these additional options, party $i$ must be worse off. ∎

Thus, aspiration reductions cannot be profitable – when one is realistic. Does this mean that realism prevents competitive behavior? Rather the contrary: it shows that being realistic is already as competitive as is sensible, and unrealistically low strategies can never be a unique best response. By assuming realistic aspirations, we rule out cases where a group of parties would profit from simultaneous deviations to more competitive strategies while unilateral deviations are payoff irrelevant. That is, we rule out (rather implausible) uncompetitive equilibria.
3.2 A General Characterization

We now characterize the set of equilibrium outcomes. This characterization rests on two conditions, and those conditions, in turn, can be visualized easily when we rank the formateur's (i.e. party 0's) options based on his preferences. To do so, let us denote the set of possible governments, including the outside option, as $R = S \times C_0$. Typical elements are denoted $r \in R$.
The coalition underlying the government $r$ is denoted by $c(r)$. We now order ($\succeq$) the elements of $R$ in the following way:

$\forall r', r'' \in R: \quad r' \succeq r'' \iff v_0(r') \ge v_0(r'').$   (6)
This ordering is complete, transitive, and reflexive. Let us also define $r' \succ r''$ as being equivalent to $r' \succeq r''$ and not $r' \preceq r''$. Valuation profiles are related through the ordering $\succeq_{c(r)}$ in the following (componentwise) way: $v(r) \succeq_{c(r)} v(r')$ iff $v_i(r) \ge v_i(r')$ for all parties $i$ in $c(r)$. The characteristic conditions are defined next. On the one hand, given that all parties $i \ne 0$ have realistic aspirations with respect to a government $r$, party 0 should indeed be best off choosing $r$, i.e. 0 should not be able to deviate to a 'higher' government $r' \succ r$:

$\forall r' \succ r \;\; \exists i \in c(r'): \quad v_i(r') < v_i(r).$   (C1)
On the other hand, for any party $i$ in the coalition underlying government $r$, an increase in $i$'s aspirations should only lead to governments that $i$ values less than $r$ (it should not be profitable for $i$ to force party 0 to choose a 'lower' government $r' \prec r$). In turn, for each option $r' \prec r$ that one of the $i \in c(r)$ prefers to $r$, either (i) there exists an alternative $r'' \succ r'$ that the formateur would choose when $i$ deviates to higher aspirations, or (ii) $r'$ cannot be chosen given the other parties' aspirations under $r$. The following formulation combines these cases:

$\forall r' \prec r, \; \forall i \in c(r): \quad v_i(r') > v_i(r) \;\Rightarrow\; r' \ne \max\{\, r'' \mid v(r'') \succeq_{c(r'')} (v_{-i}(r), v_i(r')) \,\}.$   (C2)
In the next statement, we concentrate on generic valuations to reduce the number of cases to be distinguished. Let us note, however, that a similar statement holds for degenerate valuations.

Proposition 3.1 Assume generic valuations, Eq. (5). A government $r \in R$ can be sustained in an RAE if and only if $r$ satisfies (C1) and (C2).
Proof Fix a government $r \in R$. First, we show that the conditions are necessary. If (C1) is violated, then party 0 can deviate from choosing $r$ to a government $r'$ that it prefers (given realistic aspirations), and hence $r$ would not result in equilibrium. Assume now that (C1) is satisfied. If any party $i \in c(r)$ deviates to higher aspirations, then party 0 has to choose a government $r' \prec r$. If (C2) is violated, then there exist a party $i$ and a government $r'$ with the following characteristics: party $i$ is better off in $r'$, and $i$ can increase its aspirations such that the formateur's best choice given the aspirations is $r'$ (generically, this best choice is unique). Consequently, this aspiration increment is a profitable deviation for $i$ if (C2) is violated.
Secondly, we show that the conditions are sufficient. Suppose that $r$ satisfies these conditions without being sustained in an RAE. First, assume that party 0 would be better off deviating from choosing $r$. For all governments $r' \succ r$ that 0 prefers, we know that one of the required parties is worse off in $r'$ than in $r$. Given realistic aspirations with respect to $r$, party 0 cannot deviate to any such $r'$. Consequently, some other party, i.e. some $i \in c(r)$, must be better off deviating unilaterally from its (realistic) aspirations. Party $i$ can profit only from increasing its aspiration (Lemma 3.3). On the one hand, (C1) implies that, for all governments $r' \succ r$ that $i$ prefers to $r$, 0 cannot choose $r'$ if $i$ deviates to higher aspirations. On the other hand, (C2) implies that, for all governments $r' \prec r$ that $i$ prefers to $r$, 0 would not choose this government if $i$ deviates unilaterally to aspirations $v_i(r')$. Moreover, for any $r' \prec r$ that $i$ prefers, if $i$ deviates to aspirations that are less than $v_i(r')$, the set of options that are feasible for 0 equals the set of options feasible under $v_i(r')$ 'plus' options that $i$ values less than $r'$. Hence, the government that 0 would choose is still valued less than $r'$ by $i$. As a result, no government $r'$ that $i$ prefers would be chosen if $i$ deviates to aspirations $a_i \in (v_i(r), v_i(r'')]$, where $r'' = \arg\max_r v_i(r)$ denotes the government that $i$ prefers most. Implicitly, when $i$ deviates to $a_i \in (v_i(r), v_i(r'')]$, then no government that includes $i$ is chosen. As a result, the government chosen in these cases is the same as in those cases where $i$ deviates to aspirations $a_i > v_i(r'')$ that are higher than its maximal valuation (still, a government that $i$ values less than $r$). Thus, no party $i \in c(r)$ can profit by deviating unilaterally from aspirations $v_i(r)$, contradicting the initial assumption that $r$ cannot be sustained in an RAE. ∎
4. Government Participation is Not Valued Negatively

The existence of RAEs is difficult to establish in general. We want to illustrate this for particularly simple valuation functions: all parties associate non-negative valuations with participation in any coalition, under any agenda. This assumption appears strong, but as we see below, it is not sufficient to guarantee the mere existence of RAEs. Below, we show that an equilibrium exists if three specific conditions are satisfied; first, however, we want to motivate these conditions by illustrating their relevance. In the following, we assume that the valuation functions are additively separable:

$v_i(s,c) = \begin{cases} h_i(s) + p_i(c), & \text{if } i \in c \cup \{0\}, \\ h_i(s), & \text{else.} \end{cases}$   (7)
Here, $h_i(s)$ reflects $i$'s valuation of the enforced agenda, and $p_i(c)$ is $i$'s valuation of the governing coalition $c$ (provided $i$ participates). As indicated, we assume in this section that $p_i(c) \ge 0$ for all $i \in c$. Equation (7) may be taken as representing the idea that the main effect of participating in a government is given by the allocation of government positions, which (by assumption) is a function of the coalition. The government positions are valuable, for instance, because they simplify communication with the electorate.
Table 2. No RAE exists (alternative valuations in parentheses)

Agenda, Coalition    Party 0    Party 1    Party 2
(X, {1})             10         0 (2)      3 (3)
(X, {2})              7         0 (0)      3 (5)
(X, {1,2})            4         0 (1)      3 (4)
(Y, {1})              8         3 (5)      6 (6)
(Y, {2})              5         3 (3)      6 (8)
(Y, {1,2})            2         3 (4)      6 (7)
(Z, {1})              6         6 (8)      0 (0)
(Z, {2})              3         6 (6)      0 (2)
(Z, {1,2})            0         6 (7)      0 (1)
Outside option        0         0          0
4.1 RAE Existence Conditions

The first example that we give of a game without an RAE suggests that most problems lie with party 0. Here, we set $p_i(c) = 0$ for all $i$ except party 0. Notably, the non-existence of RAEs does not depend on this simplification; to underline this, we give a set of alternative valuations that are not thus simplified (in parentheses). The valuations are in Table 2. First, there cannot be an RAE leading to a coalition involving party 2, i.e. neither $\{2\}$ nor $\{1,2\}$. Whenever 1 and 2 have aspirations that are realistic with respect to such a government, party 0 would choose $(X,\{1\})$, $(Y,\{1\})$, or $(Z,\{1\})$. If 1 and 2 have aspirations that are realistic with respect to $(X,\{1\})$, then party 1 would increase its aspiration level to 3 (5), and thus push 0 to choose $(Y,\{1\})$. If the aspirations are realistic with respect to $(Y,\{1\})$, then party 1 would increase its aspiration level to 6 (8), and thus push 0 to choose $(Z,\{1\})$. Finally, if the aspirations are realistic with respect to $(Z,\{1\})$, then party 0 would rather choose $(X,\{2\})$. Hence, there is no pure-strategy equilibrium in this game. Mainly, this is due to the overlapping preferences of party 0. If party 0 has a weakly preferred agenda, i.e. an agenda that it would preferably enforce under any coalition, then most problems disappear. In the following, we shall therefore assume that there is a weakly preferred agenda, called $s^*$. That is, for all majority coalitions $c_1, c_2$ and all agendas $s \ne s^*$, party 0 does not value the implementation of $(s^*, c_1)$ less than the implementation of $(s, c_2)$. Formally,

$v_0(s^*, c_1) \ge v_0(s, c_2).$   (A1)
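The cycle just traced can be verified mechanically. The sketch below (ours, not the authors') enumerates, for each government r, the aspiration profile that is realistic with respect to r, checks whether party 0 would choose r, and then tests all unilateral deviations over the finite set of relevant aspiration levels. Run on the main valuations of Table 2 (ignoring the parenthesized alternatives), it confirms that no RAE exists.

```python
# Brute-force check that Table 2 admits no RAE (main valuations only).
# Governments are (agenda, coalition) pairs; OUT is the outside option.

OUT = ("-", frozenset())
R = [(s, c) for s in "XYZ"
     for c in (frozenset({1}), frozenset({2}), frozenset({1, 2}))] + [OUT]

v0 = {("X", frozenset({1})): 10, ("X", frozenset({2})): 7, ("X", frozenset({1, 2})): 4,
      ("Y", frozenset({1})): 8,  ("Y", frozenset({2})): 5, ("Y", frozenset({1, 2})): 2,
      ("Z", frozenset({1})): 6,  ("Z", frozenset({2})): 3, ("Z", frozenset({1, 2})): 0,
      OUT: 0}
h = {1: {"X": 0, "Y": 3, "Z": 6}, 2: {"X": 3, "Y": 6, "Z": 0}}  # p_i = 0 for i != 0

def v(i, g):
    return 0 if g == OUT else h[i][g[0]]   # outside values are 0 for parties 1, 2

def choice(a):
    # party 0 picks its best feasible government; v0 is generic, so the max is unique
    feas = [g for g in R if all(v(i, g) >= a[i] for i in g[1])]
    return max(feas, key=lambda g: v0[g])

def is_rae(r):
    a = {i: v(i, r) for i in (1, 2)}       # realistic aspirations w.r.t. r
    if choice(a) != r:
        return False                        # party 0 would not choose r
    levels = sorted({v(i, g) for i in (1, 2) for g in R}) + [99]
    for i in (1, 2):                        # check all unilateral deviations
        for ai in levels:
            b = dict(a); b[i] = ai
            if v(i, choice(b)) > v(i, r):
                return False
    return True

print([r for r in R if is_rae(r)])          # prints [] -- no RAE exists
```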
Table 3. No RAE exists

Agenda, Coalition    Party 0    Party 1    Party 2
(X, {1,2})           3          2          3
(X, {2})             2          1          2
(X, {1})             1          3          1
Outside option       0          0          0
Table 4. No RAE exists

Agenda, Coalition    Party 0    Party 1    Party 2    Party 3    Party 4
(X, {1,2})           3          3          2          1          1
(X, {2,3})           2          1          3          2          2
(X, {3,4})           1          2          1          3          3
Outside option       0          0          0          0          0
As the next example shows, this condition is not sufficient. In Table 3, there is only one agenda, and it turns out that a second problem may appear: the parties should not prefer larger coalitions to smaller ones. Before we discuss this example, let us note that there would appear to be an alternative (but less intuitive) assumption to guarantee existence in this example: we might assume that aspirations are realistic when they are less than or equal to the resulting valuations (instead of being precisely equal). As the next example (Table 4) shows, this does not guarantee existence in general. For this reason, we do not discuss it further. In Table 3, if the aspirations are realistic with respect to $(X, \{1\})$, then party 0 would choose $(X, \{2\})$. If the aspirations are realistic with respect to $(X, \{2\})$, then 0 would choose $(X, \{1,2\})$. Finally, if the aspirations are realistic with respect to $(X, \{1,2\})$, then 1 would increase its aspiration level to 3, and thus force 0 to choose $(X, \{1\})$. This problem seems not to arise if at most one of the parties prefers larger coalitions to smaller ones, but this case is (in general) difficult to treat. In turn, it appears rather natural to assume that no party prefers large coalitions, as the allocated governmental positions would be less advantageous in large coalitions. Formally,

$\forall i \;\; \forall c_1 \supseteq c_2 \ni i: \quad p_i(c_1) \le p_i(c_2).$   (A2)
Unfortunately, the above conditions are not sufficient even in combination. A third problem may appear, as shown in Table 4. Again, the example does not rely on degenerate valuations, and again there is only one agenda (which therefore is weakly preferred by all parties). The coalitions listed in the example are the unique minimal winning coalitions for the following
voting shares: $(\theta_0, \theta_1, \theta_2, \theta_3, \theta_4) = (\tfrac14, \tfrac14, \tfrac14, \tfrac18, \tfrac18)$. We neglect non-minimal coalitions; for any such $(s,c)$, the valuations are assumed to be in $(0,1)$ for party 0, in $(1,2)$ for all $i \in c$, and in $(0,1)$ for all $i \notin c$. Government $(X, \{1,2\})$ is not sustained in an RAE, as party 2 could increase its aspirations to 3, forcing 0 to choose $(X, \{2,3\})$. Government $(X, \{2,3\})$ is not sustained in an RAE, as party 3 could increase its aspirations to 3. Finally, $(X, \{3,4\})$ is not sustained in an RAE, as party 0 would choose $(X, \{1,2\})$ under realistic aspirations. The same argument applies to the outside option. Thus, there is no RAE. A formalization of a simple condition ruling out such cycles appears difficult. Namely, it appears that we would require conditions as strong as those underlying Prop. 3.1, applied to the coalition structure, to rule out these cases. Therefore, a general condition for the existence of RAEs would not be very instructive, and we do not pursue one. In the following, we concentrate on the existence of RAEs in specific circumstances.
4.2 Cases with Equilibria Sustaining Party 0's Preferred Government

Here, we are interested in sufficient conditions for the existence of an equilibrium sustaining the ideal government of party 0. We distinguish two cases. First, a degenerate one: party 0 does not care about the resulting coalition, i.e. $p_0(c) = 0$ for all $c$. In this case, party 0 is interested only in the resulting agenda, and we denote its most preferred one as $s^*$.

Proposition 4.1 Assume (A1), (A2), and $p_0(c) = 0$ for all $c$. There exists an RAE leading to $s^*$ in equilibrium.
Proof In the equilibrium that we describe, $s^*$ is sustained by an arbitrary minimal winning coalition $c^*$. Since the game is degenerate, the function mapping aspiration profiles to choices of party 0 is not defined uniquely through subgame perfectness. We refine the optimization through the following hierarchical criteria: (1) if party 0 is indifferent and $(s^*, c^*)$ is a candidate, then it is chosen; (2) else, if indifferent, then (if possible) party 0 chooses a minimal winning coalition; (3) if still indifferent, then 0 chooses a coalition $c$ to maximize the number of players $i \in c$ with aspirations equal to $h_i(s^*)$. As these criteria merely refine the optimization, the resulting choice of party 0 is still compatible with subgame perfectness. We have to show that no party $i \in \{1,\dots,n\}$ would be better off deviating from $(a_i)$. Parties $i \notin c^*$ cannot be better off changing their aspirations, as party 0 would not deviate from $(s^*, c^*)$. Parties $i \in c^*$ cannot be better off decreasing their aspirations, as the same coalition and agenda could (and would) be chosen by party 0.
Finally, parties $i \in c^*$ cannot be better off increasing their aspirations. For, given realistic aspirations of the others, a winning coalition to enforce $s^*$ can be formed even without $i$: since $c^*$ is a minimal winning coalition, leaving out $i$ destroys the majority; consequently, leaving out party 0 would destroy the majority, too, implying that a coalition of party 0 with the parties outside $c^*$ has a majority. Let $c'$ denote any minimal winning coalition of 0 with parties outside $c^*$. Choosing $c'$ would satisfy all of the refinement criteria. Hence, whatever coalition 0 actually chooses, it satisfies all criteria as well. In turn, when $i$ has increased its aspirations, no coalition with $i$ satisfies all criteria (at least criterion 3 is violated). Therefore, no coalition with $i$ would be chosen after $i$'s deviation, and $s^*$ would still be implemented. Thus, no $i$ profits from a deviation. ∎

The second case that we consider is non-degenerate in the following sense: for any pair of governments, parties that are involved in both of them value them differently:

$\forall (s_1, c_1) \ne (s_2, c_2) \;\; \forall i \in (c_1 \cap c_2) \cup \{0\}: \quad v_i(s_1, c_1) \ne v_i(s_2, c_2).$   (8)
Let c denote 0’s most preferred coalition. Also, let c(i ) denote the most preferred coalition (to party 0) that does not include i and that all required players prefer to c : c(i ) arg max p 0(c ) using C (i ) \c C 0 ] i C and j c p j (c ) p p j (c )^. (9) c C (i )
$c(i)$ is well defined, since $C(i)$ contains at least the outside option $\emptyset$. Let us say that $c \succ c(i)$ iff party 0 prefers $c$ to $c(i)$. The government $(s^*, c^*)$ can be sustained in an RAE if and only if

$\forall i \in c^*: \quad c^* \succ c(i) \;\text{ and }\; p_j(c^*) \ge p_j(c(i)) \;\;\forall j \in c^* \cap c(i).$   (10)
A rather interesting special case is where the coalition $c'$ that is the best alternative to $c^*$ has an empty intersection with $c^*$. Here, the coalition $c^*$ is substituted by $c'$ when any party in $c^*$ becomes too greedy (i.e. deviates unilaterally). This scenario applies, for instance, if the possible coalition partners of 0 are grouped in the sense that no party from one group would be willing to coalesce with a party from another group.

Proposition 4.2 Assume Eq. (10) and generic valuations, Eq. (8). There exists an RAE leading to 0's preferred choice $(s^*, c^*)$ in equilibrium.
Proof Party 0 would not deviate from the equilibrium (as it can choose the unique optimal government). Deviations of parties $i \notin c^*$ would not lead to different choices of party 0, and hence are payoff irrelevant. Similarly, deviations of parties $i \in c^*$ to lower aspirations are payoff irrelevant. Apart from this, it is easy to check that $(s^*, c^*)$ satisfies the conditions underlying Prop. 3.1. ∎
4.3 Cases with Equilibria Sustaining the Median Agenda

We show that if there is some 'median' agenda $\hat s$ such that for all alternatives $s \ne \hat s$ a weak majority prefers $\hat s$ over $s$, then there is an RAE where $\hat s$ is enforced. Formally, we call $\hat s$ a median agenda if

$\forall s \ne \hat s \;\; \exists \hat c \subseteq N \setminus \{0\}: \quad \sum_{i \in \hat c} \theta_i \ge 0.5 \;\text{ and }\; h_i(\hat s) \ge h_i(s) \;\forall i \in \hat c.$   (11)
To simplify the following argument, let us additionally assume that (for all parties $i$) the valuation of participating in the government is independent of the actual coalition, i.e.

$v_i(s,c) = \begin{cases} h_i(s) + p_i, & \text{if } i \in c \cup \{0\}, \\ h_i(s), & \text{else,} \end{cases} \quad \text{with } p_i \ge 0.$   (12)
Note that Eq. (12) constitutes a special case of (A1) and (A2). Thus, the following shows that in a wide range of cases where the formateur's preferred choice is sustained in equilibrium, an equilibrium coexists that sustains the median government.

Proposition 4.3 Assume (11) and (12) are satisfied. There exists an RAE where $\hat s$ is enforced.
Proof In the equilibrium that we describe, the aspiration levels are $a_i = h_i(\hat s) + p_i$ for all $i$; party 0 chooses the agenda $\hat s$ whenever it is optimal, and if it is indifferent between coalitions, it chooses the largest optimal one. In equilibrium, the grand coalition $\hat c = \{1,\dots,n\}$ is chosen. First, we show that choosing $(\hat s, \hat c)$ is optimal under the assumed aspiration profile. Consider any alternative agenda $s \ne \hat s$. On the one hand, suppose party 0 prefers $s$ over $\hat s$. Because of Eq. (11), there is a weak majority that prefers $\hat s$, and thus at most 0.5 of the voting share would agree to $s$; consequently, the formateur cannot deviate to $s$. On the other hand, if party 0 does not prefer $s$ over $\hat s$, deviating to $s$ is unattractive in the first place. As a result, $\hat s$ is an optimal choice of party 0, and since 0 does not care about the actual coalition, an optimal coalition is $\hat c$. Next, we show that the parties in $\hat c$ (i.e. all parties) are best off sticking to the presumed aspirations. First, if $i$ decreases its aspirations $a_i$ and 0 adapts its choice in response, then the adaptation must be to the disadvantage of $i$, i.e. it leads to a government that $i$ values less than $(\hat s, \hat c)$. Secondly, if $i$ deviates and party 0 still chooses $(\hat s, \hat c)$, then $i$'s deviation is payoff irrelevant. Finally, consider any party $i \in \hat c$ that deviates
by increasing its aspirations $a_i$. As a result, $(\hat s, \hat c)$ can no longer be chosen by 0. A feasible choice is $(\hat s, \hat c \setminus \{i\})$, and similarly to the arguments used above, we can show that it is also an optimal choice. Following the above restrictions concerning the choice of party 0, $(\hat s, \hat c \setminus \{i\})$ would therefore be chosen after $i$'s deviation. In $i$'s eyes, given that $\hat s$ is enforced, it is optimal to participate in the government. Hence, $i$ cannot profit from such a deviation. ∎

In the following, let us assume that the space of agendas $S$ is embedded in a metric space $\mathcal{R}$. The distance between two elements $s, r \in \mathcal{R}$ is denoted $d(s,r)$. We maintain the assumption of separable preferences as above, i.e. $v_i(s,c) = h_i(s) + p_i$ if $i \in c$ and $v_i(s,c) = h_i(s)$ otherwise, for some $p_i \ge 0$. We say that party $i$ has spatial preferences if there exists an $o_i \in \mathcal{R}$ such that $h_i$ satisfies

$h_i(s) = -d(s, o_i).$   (13)
The reference point $o_i$ can be interpreted as an ideal agenda for party $i$, though it need not be an element of $S$. In this model, party $i$'s valuation of agenda $s \in S$ is the negative of the distance from $s$ to $o_i$. In most circumstances, one would assume that $\mathcal{R}$ is a space of real vectors, i.e. $\mathcal{R} = \mathbb{R}^m$ for some $m$. A specific distance function is the Manhattan distance $d_M$. For any pair $s, r \in \mathbb{R}^m$, with $s = (s_1,\dots,s_m)$ and $r = (r_1,\dots,r_m)$, it is defined as

$d_M(s,r) = \sum_i |s_i - r_i|.$   (14)
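For concreteness, Eq. (14) in code; under spatial preferences, a party's agenda valuation is then $h_i(s) = -d_M(s, o_i)$:

```python
# Manhattan (L1) distance of Eq. (14).
def d_M(s, r):
    return sum(abs(si - ri) for si, ri in zip(s, r))

print(d_M((0.0, 2.0), (1.0, 0.5)))  # -> 2.5
```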
If $m = 1$, spatial preferences are unimodal under any distance function, which guarantees the existence of an equilibrium sustaining a median agenda. Similarly, $m = 2$ under the Manhattan distance guarantees existence. The underlying median agendas are defined first.

Definition 4.4 $\nu \in \mathcal{R}$ is a median agenda if for all dimensions $j \le m$ the following holds:

$\sum_{i : o_{ij} \ge \nu_j} \theta_i \ge \tfrac12 \quad \text{and} \quad \sum_{i : o_{ij} \le \nu_j} \theta_i \ge \tfrac12.$   (15)
Note that $o_{ij}$ refers to the value of $i$'s ideal agenda in dimension $j$. Thus, $\nu$ is the $m$-dimensional median of all ideal agendas.

Proposition 4.5 (1) For $\mathcal{R} = \mathbb{R}$, a median agenda $\nu$ exists and is preferred to all alternatives by a weak majority under any distance function. (2) For $\mathcal{R} = \mathbb{R}^2$, a median agenda $\nu$ exists and is preferred to all alternatives by a weak majority under the Manhattan distance.
Proof Part 1 is a standard result from social choice theory. For part 2, we refer to Kats and Nitzan (1977). ∎

As a result, Prop. 4.3 applies: it guarantees that there is an equilibrium leading to a government that enforces the median agenda (implying, in particular, that an equilibrium exists at all).
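Computationally, a median agenda in the sense of Definition 4.4 is just the weighted median of the parties' ideal points, taken dimension by dimension. A minimal sketch with hypothetical ideal points $o_i$ and voting shares $\theta_i$:

```python
# Dimension-wise weighted median of ideal points, cf. Definition 4.4.
# Ideal points o_i and voting shares theta_i below are hypothetical.
ideal = {1: (0.0, 2.0), 2: (1.0, 0.0), 3: (4.0, 3.0), 4: (2.0, 1.0)}
theta = {1: 0.30, 2: 0.30, 3: 0.25, 4: 0.15}

def weighted_median(values_weights):
    """Smallest v such that at least half the weight lies at or below v
    (by construction, at least half then also lies at or above it)."""
    total = sum(w for _, w in values_weights)
    acc = 0.0
    for v, w in sorted(values_weights):
        acc += w
        if acc >= total / 2:
            return v

m = 2
nu = tuple(weighted_median([(ideal[i][j], theta[i]) for i in ideal]) for j in range(m))
print(nu)  # -> (1.0, 2.0): the coordinate-wise weighted median
```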
5. Government Participation May be Valued Negatively

Generally, long-term considerations also play a role in coalitional bargaining. Parties, for instance, compete for votes in upcoming elections. In this case, some parties might not want to coalesce with one another, or at least one of the parties might associate a negative valuation with such a coalition. In this section, we allow for negative valuations. That is, we stick to additively separable valuations, Eq. (7), but allow for cases where $p_i(c) < 0$ holds for some $c$ and $i \in c$. Note that the following arguments hold independently of assumptions (A1) and (A2). Instead, we impose certain 'symmetry' assumptions and concentrate on the coalitions that may result in equilibrium. This sheds some light on the relation of bargaining power to the coalitions that can be sustained in equilibrium. We refer to coalitions that all members value non-negatively as stable coalitions; the remaining coalitions are called unstable. Thus, a coalition $c$ is stable iff $p_i(c) \ge 0$ for all $i \in c$. Note that in the context of additively separable valuations, the 'stability' of coalitions can be defined independently of the agenda. Occasionally, we also say that parties valuing membership in a given coalition negatively destabilize that coalition. Formally, the set of stable coalitions is denoted $C^s$:

$C^s = \{\, c \in C \mid p_i(c) \ge 0 \;\forall i \in c \cup \{0\} \,\}.$   (16)
We require that $C^s$ is not empty, i.e. that there are stable majority coalitions. Initially, we shall assume that stable coalitions are weakly preferred in the following sense:

$\forall c_1 \in C \setminus C^s \;\; \forall c_2 \in C^s \;\; \forall i \in (c_1 \cap c_2) \cup \{0\}: \quad p_i(c_1) \le p_i(c_2).$   (17)
We show that there are no RAEs leading to coalitions that one of the involved parties values negatively; this justifies calling such coalitions unstable.

Proposition 5.1 Fix an arbitrary RAE and let $(s^*, c^*)$ denote party 0's choice in equilibrium. Assume generic valuations, Eq. (8), satisfying Eq. (17). Then $c^* \in C^s$, i.e. $c^*$ is a stable coalition.
Proof We assumed that the set of stable coalitions $C^s$, and hence the set of potential governments $S \times C^s$, is non-empty.
Now, assume that there is an equilibrium $(s^*, c^*)$ that implies an unstable coalition, and let $\bar c \in C^s$ denote an alternative stable coalition. By assumption, all parties in $\bar c$ value participation under $\bar c$ non-negatively. Hence, the parties in $\bar c \setminus c^*$ are not worse off in $(s^*, \bar c)$ than in $(s^*, c^*)$. The remaining parties, $\bar c \cap c^*$, weakly prefer $\bar c$ over $c^*$ because of Eq. (17); this extends to party 0. Hence, they are also (weakly) better off in $(s^*, \bar c)$ than in $(s^*, c^*)$. Thus, when all parties are realistic with respect to $(s^*, c^*)$, then $(s^*, \bar c)$ is feasible, too, and preferred by party 0. As a result, it would be chosen by party 0, contradicting the initial assumption that $(s^*, c^*)$ is sustained in an RAE. ∎

The above finding is rather weak if there are several stable coalitions. To shed light on such cases, we shall strengthen assumption Eq. (17) by assuming that an ordering (a stability measure) $S(c) \in \mathbb{R}$ exists, refining the above bivariate stable/unstable classification. We understand the 'stability' of a coalition to measure the degree of competition between its members in communicating with the electorate, the degree of obstacles to implementing common agendas, and similar points. Thus, we would say that a coalition is more stable (than another one) if there is less competition between its members. Formally, we assume that $S(c)$ can be defined such that

$\forall c' \ne c'': \quad S(c') \ge S(c'') \iff p_i(c') \ge p_i(c'') \;\;\forall i \in (c' \cap c'') \cup \{0\}.$   (18)
As a result, $S(c') > S(c'')$ implies $p_i(c') > p_i(c'')$ for all relevant $i$. We show that only the most stable coalitions are sustained in equilibria, i.e. only coalitions $c$ that maximize $S(c)$.

Proposition 5.2 Assume generic and symmetric valuations, i.e. Eqs. (8), (17), and (18). The government $(s^*, c^*)$ is sustained in an RAE only if $c^*$ maximizes $S(c)$ over all $c \in C$ (i.e. only if it is a most stable coalition).
Proof First, we show that if $c$ is stable and $c'$ is more stable than $c$, then $c'$ is stable, too. Formally,

$c \in C^s \;\text{ and }\; S(c') \ge S(c) \;\Rightarrow\; c' \in C^s.$   (19)
Suppose to the contrary that there are $c, c'$ where $c \notin C^s$, $c' \in C^s$, and $S(c) \ge S(c')$. Since $c$ is weakly more stable, player 0 and all parties in $c \cap c'$ weakly prefer $c$ over $c'$. Because of the weak symmetry assumption, Eq. (17), they would not weakly prefer $c$ over a stable coalition if $c$ itself were not stable. Therefore, $c \notin C^s$ cannot hold if $c' \in C^s$ and $S(c) \ge S(c')$; hence $c \in C^s$ is implied. Now, we show that a most stable coalition must result in equilibrium (generically, it is unique). Suppose that a government $(s,c)$ is sustained in an equilibrium even though $c$ is not most stable. It must be stable (as shown above), and there must exist a strictly more stable coalition $c'$.
Since $c'$ is more stable, all parties in $c \cap c'$ prefer it (as does party 0), and all parties in $c' \setminus c$ associate non-negative valuations $p_i(c') \ge 0$ with participating in $(s, c')$. Hence, $(s, c')$ is feasible and preferable for party 0, contradicting the assumption that $(s, c)$ would be chosen along the equilibrium path. ∎

Thus, if the valuations are additively separable and if the coalitions can be ordered according to their stability (in the above sense), then only the most stable coalitions can result. Generically, the most stable coalition is unique. In such cases, $i$'s bargaining power may appear to be independent of the number of stable coalitions and of the number of stable coalitions that $i$ is a member of. This impression would be wrong, as all alternatives imply power with respect to the agenda bargaining.
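Proposition 5.2's selection of a most stable coalition is easy to illustrate in code. In the sketch below, the participation values $p_i(c)$ are hypothetical, and party 0's participation value serves as a stand-in for a stability measure $S$ consistent with Eq. (18):

```python
# With additively separable valuations and a stability ordering S, only a
# most stable coalition can result (Prop. 5.2). Values are hypothetical.
p = {  # p[c][i]: party i's valuation of participating in coalition c
    frozenset({1, 2}): {0: 2.0, 1: 1.5, 2: 1.0},
    frozenset({2, 3}): {0: 1.0, 2: 0.5, 3: 0.5},
    frozenset({1, 3}): {0: 0.5, 1: -1.0, 3: 2.0},  # party 1 destabilizes this one
}

def is_stable(c):
    return all(v >= 0 for v in p[c].values())  # p_i(c) >= 0 for all members and 0

stable = [c for c in p if is_stable(c)]

# Here party 0's value is used as a hypothetical proxy for the stability
# measure S(c); any ordering consistent with Eq. (18) would select the same set.
most_stable = max(stable, key=lambda c: p[c][0])
print(sorted(most_stable))  # -> [1, 2]
```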
6. Application to Power Indices

We considered a model of coalitional bargaining where the formateur communicates with the other parties to learn about the coalition contracts that they would accept. Based on the responses, he forms (proposes) a coalition. We assumed that the game is static rather than dynamic, and that the parties' responses are aspiration levels, i.e. lists of contracts compatible with an aspiration level, rather than unrestricted lists of acceptable coalition contracts. Neither assumption is critical with respect to the topic that we want to discuss next: the construction of power indices. In particular, altering these assumptions would not render the parties' preferences irrelevant. We also assumed that the players are completely informed, which appears to be a reasonable approximation of most instances of political bargaining. This framework allowed us to provide general characterizations of equilibrium outcomes and specific predictions for selected valuation structures. Our results underline the relevance of the valuations for the sustainability of specific outcomes in equilibrium. We also saw that comparatively minor variations in the valuation structure can have significant impacts on the set of equilibrium outcomes (while, on the other hand, even major variations may be without any consequences, e.g. when the formateur's most preferred choice would result in any case). This has implications for conducting power analyses, even if the power measure is to be abstract of the valuation structure (see e.g. Garrett and Tsebelis 1999a,b; Steunenberg et al. 1999). Namely, the estimated power will generally depend on whether we ignore valuations (and strategies) altogether, or instead take the expected power in non-cooperative games for some distribution of valuation structures. The latter approach would also allow one to differentiate the power with respect to the political positioning of the respective party. We follow Napel and Widgrén (2004), who argue strongly in favor of the second (non-cooperative) approach. They also define a general framework
to conduct power analyses. As the measure of power in non-cooperative games, they (mainly) propose the sensitivity of the equilibrium outcome to infinitesimal changes in the strategies (the marginal impact of a player on the solution). Applied to our model, players outside a given equilibrium coalition do not have any power in this sense: the outcome is not affected when they increase their aspirations, and infinitesimal aspiration reductions have no impact because of the discreteness of our model (apart from that, aspiration reductions would not reflect power in the sense of Napel and Widgrén). We have seen above (Prop. 5.2) that the equilibrium coalition is unique in a significant number of cases, and also that the presence of parties outside the equilibrium coalition does affect the set of equilibrium outcomes. Hence, we conclude that these parties have power, while their marginal impact does not capture it (in our model). An alternative measure would be the impact implied by the presence of a party. This relates to the proposal of Napel and Widgrén (2004: 529) to consider the maximal impact that a party could have on the outcome through changing its strategy. In our context, the equilibrium outcome appears to be affected most when the party does not take part in the bargaining process at all, and a slight generalization of the bargaining game would allow us to define this within the framework of Napel and Widgrén (2004). The required generalization introduces a first stage in which the parties signal whether they will take part in the second stage; informed of who takes part, the parties then announce their aspirations in the second stage. There appear to be two significant obstacles to this approach to defining power. On the one hand, the definition of valuations for games with several equilibria is generally difficult. In these cases, the ex ante power would depend on the assumed equilibrium selection concept, and thus be generally open to debate. Depending on the game form, it is possible to obtain unique predictions without equilibrium selection (see, for instance, Breitmoser 2006), but a consensus on the definition of the game would still be required. On the other hand, the ex ante power would depend on the assumed distribution of valuations. One might try to overcome this obstacle by endogenizing the preferences. The preferences reflect the political platforms of the parties, and the platforms are strategically chosen to attract votes in elections. In this sense, the power characteristics of given institutions would best be assessed in models of complete election cycles (e.g. following Austen-Smith and Banks 1988; Diermeier et al. 2003) with certain constraints to reflect the characteristics of the investigated electorate. But in such a general model a new obstacle may appear: tractability. It is a matter of further research whether such a general model would allow closed-form solutions or at least numerical approximations.
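As a rough illustration of the 'presence' measure discussed above, one can compare the bargaining outcome with and without a given party at the table. The sketch below makes two strong simplifications that are ours, not the authors': aspirations are fixed at the outside-option level instead of being determined in equilibrium, and majority requirements are ignored.

```python
# Crude 'presence power': does the outcome change when party i is absent?
# All valuations are hypothetical; party 0's valuations v0 are generic.
OUT = ("-", frozenset())
F1, F2, F12 = frozenset({1}), frozenset({2}), frozenset({1, 2})

v0 = {("x", F1): 6, ("x", F2): 5, ("x", F12): 4,
      ("y", F1): 3, ("y", F2): 2, ("y", F12): 1, OUT: 0}
h = {1: {"x": 3, "y": 0}, 2: {"x": 0, "y": 3}}   # hypothetical agenda valuations

def v(i, g):
    if g == OUT:
        return 0
    s, c = g
    return h[i][s] + (1 if i in c else 0)         # small participation bonus

def outcome(active):
    # party 0's best feasible government when only 'active' parties take part;
    # aspirations are fixed at the outside-option level (a deliberate shortcut)
    R = [g for g in v0 if g[1] <= frozenset(active)]
    feas = [g for g in R if all(v(i, g) >= 0 for i in g[1])]
    return max(feas, key=lambda g: v0[g])

full = outcome({1, 2})
for i in (1, 2):
    print(f"party {i} changes the outcome by being present:",
          full != outcome({1, 2} - {i}))
```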
References

Austen-Smith, D. and Banks, J. (1988) Elections, Coalitions, and Legislative Outcomes, American Political Science Review 82: 405–422.
Ausubel, L. M. and Milgrom, P. R. (2002) Ascending Auctions with Package Bidding, Frontiers of Theoretical Economics 1: 1–42.
Baron, D. and Ferejohn, J. (1989) Bargaining in Legislatures, American Political Science Review 83: 1181–1206.
Baron, D. P. (1991) A Spatial Bargaining Theory of Government Formation in Parliamentary Systems, American Political Science Review 85: 137–164.
Bernheim, D. and Whinston, M. (1986) Menu Auctions, Resource Allocation, and Economic Influence, Quarterly Journal of Economics 101: 1–31.
Bloch, F. (1996) Sequential Formation of Coalitions in Games with Externalities and Fixed Payoff Division, Games and Economic Behavior 14: 90–123.
Bolle, F. (1995) Team Selection: Factor Pricing with Discrete and Inhomogeneous Factors, Mathematical Social Sciences 29: 131–150.
Breitmoser, Y. (2006) A Theory of Coalitional Bargaining in Democratic Institutions, Europa-Universität Viadrina, Frankfurt/Oder, mimeo.
Browne, E. C. and Franklin, M. N. (1973) Aspects of Coalition Payoffs in European Parliamentary Democracies, American Political Science Review 67: 453–469.
Diermeier, D., Eraslan, H., and Merlo, A. (2002) Coalition Governments and Comparative Constitutional Design, European Economic Review 46: 893–907.
Diermeier, D., Eraslan, H., and Merlo, A. (2003) A Structural Model of Government Formation, Econometrica 71: 27–70.
Eraslan, H. (2002) Uniqueness of Stationary Equilibrium Payoffs in the Baron–Ferejohn Model, Journal of Economic Theory 103: 11–30.
Gamson, W. A. (1961) A Theory of Coalition Formation, American Sociological Review 26: 373–382.
Garrett, G. and Tsebelis, G. (1999a) More Reasons to Resist the Temptation to Apply Power Indices to the EU, Journal of Theoretical Politics 11: 331–338.
Garrett, G. and Tsebelis, G. (1999b) Why Resist the Temptation to Apply Power Indices to the EU, Journal of Theoretical Politics 11: 291–308.
Holler, M. and Illing, G. (1996) Einführung in die Spieltheorie, Springer.
Holler, M. and Owen, G. (2001) Power Indices and Coalition Formation, Kluwer.
Jackson, M. O. and Moselle, B. (2002) Coalition and Party Formation in a Legislative Voting Game, Journal of Economic Theory 103: 49–87.
Kats, A. and Nitzan, S. (1977) More on Decision Rules and Policy Outcomes, British Journal of Political Science 7: 419–422.
Merlo, A. (1997) Bargaining Over Governments in a Stochastic Environment, Journal of Political Economy 105: 101–131.
Napel, S. and Widgrén, M. (2004) Power Measurement as Sensitivity Analysis, Journal of Theoretical Politics 16: 517–538.
Ray, D. and Vohra, R. (1999) A Theory of Endogenous Coalition Structures, Games and Economic Behavior 26: 286–336.
Schofield, N. and Laver, M. (1985) Bargaining Theory and Portfolio Payoffs in European Coalition Governments 1945–83, British Journal of Political Science 15: 143–164.
Steunenberg, B., Schmidtchen, D., and Koboldt, C. (1999) Strategic Power in the European Union: Evaluating the Distribution of Power in Policy Games, Journal of Theoretical Politics 11: 339–366.
Warwick, P. V. and Druckman, J. N. (2001) Portfolio Salience and the Proportionality of Payoffs in Coalition Governments, British Journal of Political Science 31: 627–649.
11. Democratic Defences and (De-)Stabilisations

Werner Güth, Max Planck Institute of Economics, Jena, Germany
Hartmut Kliemt, Frankfurt School of Finance and Management, Frankfurt, Germany
Stefan Napel, Department of Economics, University of Bayreuth, Germany
1. Introduction [1]

Once the invention of the state is made, the question of controlling it arises. [2] Taking recourse to controllers, the ancient problem of controlling those who are in control emerges: 'Quis custodiet ipsos custodes?' (who will guard the guardians?). As far as this is concerned, democratic self-rule has been, and often still is, regarded as a way out: self-control seems to eliminate the need for control and thereby the need for controllers.

[1] One referee pointed out that a discussion of the Barbera–Jackson (2004) study 'Choosing How to Choose' was conspicuously missing in the first version of our paper. We are most grateful to the referee, since we were indeed shamefully unaware of this important item of the relevant literature. But we should like to remind our fellow economists, too, that in constitutional theory an economic approach – as more traditionally conceived – is not the only game in town. Our paper is an effort to bridge the gap between traditions at least to some extent. When digging a tunnel, starting from both sides is a good strategy provided that the efforts meet. The question of how constitutional preferences are endogenously created by the constitutional process and how they feed back on it by supporting or eroding the constitutional framework should be approached from different sides. Our analysis is clearly one-sided, too, but hopefully on its way to the other side.
[2] For an overview see Gordon (1999).

But taking a closer look, most of us will agree with John Stuart Mill (On Liberty, chap. 1): The 'people' who exercise the power are not always the same people with those over whom it is exercised; and the 'self-government' spoken of is not the government of each by himself, but of each by all the rest. The will of the people, moreover, practically means the will of the most numerous or the most active part of the people; the
majority, or those who succeed in making themselves accepted as the majority; the people, consequently, may desire to oppress a part of their number; and precautions are as much needed against this as against any other abuse of power. The limitation, therefore, of the power of government over individuals loses none of its importance when the holders of power are regularly accountable to the community, that is, to the strongest party therein. This view of things, recommending itself equally to the intelligence of thinkers and to the inclination of those important classes in European society to whose real or supposed interests democracy is adverse, has had no difficulty in establishing itself; and in political speculations ‘the tyranny of the majority’ is now generally included among the evils against which society requires to be on its guard.
The general acceptance of a democratic ‘rule of submission’ (de Jasay 1997) which amounts to the general opinion that acceptance by a majority carries a moral claim to legitimacy independently of substantive normative content makes the problem of arbitrary democratic power even more pressing. Backed by the opinion of its natural legitimacy majority rule may possibly overrun all constitutional checks and balances by means of voting. But participation in majority voting and having a share in the exercise of power may (psychologically) induce the foes of constitutional democracy to endorse the system and its basic rule of recognition. 3 Higher order rules constraining voting processes by requiring higher majorities may form an important stabilizing element. 4 But constitutional rules that constitute and constrain constitutional powers cannot be picked from the shelf so to say. They must be brought into existence endogenously within the very social process they regulate. A complex rule of recognition conferring powers to relevant authorities – including voting bodies – must be applied and sustained within the social process (voting criteria are just one element of that rule system). Rendering within rule choices of the rules too difficult to accomplish may lead to an erosion of the belief in the legitimacy of the system and thereby enhance the risk of abandoning the constitution as a whole by extra-constitutional measures. 5 Relying on the simple majority rule and letting diverse groups participate in power may psychologically induce the greatest extra-constitutional support for the constitutional rules but at the same time erosion of the constitutional constraints by within rule choices. In sum: on the one hand, a Constitution may be changed by abolishing it altogether and putting a new one in its place. Such an act of rule substitution does not arise within the Constitution but rather in ways external to it. Whether this may happen or not entirely depends on the
3 See, of course, Hart (1961) and, with respect to political stability, Garzon-Valdes (1994).
4 See Buchanan and Tullock (1962) and, for a discussion of central formal aspects, Barbera and Jackson (2004).
5 On the maximization of total commitment power of a constitution see Lutz (1994).
On the other hand, a Constitution may be changed by within-rule choices. For instance, the simple majority rule might be changed according to the majority rule itself or by some rule that requires higher majority parameters than simple majority. 6 Even though the basic rules of rule change of the system may be very complex in a constitutional democracy, simple majority rule will remain fundamental. It is such a basic rule, and there is nowadays such a fundamental and deeply entrenched prejudice in its favor, that no constitutional democracy could survive for long unless it decided most matters, including those of rule change, according to simple majority rule. Therefore, as far as the acceptance of the system as a whole and the belief in its legitimacy are concerned, we cannot but grant a central role to simple majority rule. But it is not only the rule that matters. Any Constitution claiming to be a democratic one would lose out to its critics if it kept larger groups of society out of the game of majority formation within the Constitution, even though this would be entirely constitutional. Vice versa, if there are groups in society which have adverse or at least skeptical attitudes towards the Constitution, letting or actively making them participate in the majoritarian game may – and we think will indeed – be an effective method of instilling a preference for constitutional democracy in the members of those groups. The risk that the foes of constitutional democracy might use the simple majority rule to abolish that rule must be incurred if granting them a real share in power is the only way to induce them to accept democracy. 7 In studying the problem of the effects and the requirements of democratic constitutionalism we will adopt a philosophy-of-law perspective first (section 2). Then we will approach the same class of problems in a somewhat more specific and more formal vein and illustrate by way of simplified examples how a cartel of otherwise competing democratic parties might deal with what we shall call 'dogmatic' parties (section 3). After having illustrated what philosophy of law and game-theoretic analyses can in principle contribute to our understanding of democratic defenses and destabilizations, we add somewhat speculative extensions (section 4) and conclude with final observations (section 5).
6 There may be changes of the rules of the game by other authorized actors, like for instance the courts.
7 This is similar to, but different from, Donald Lutz's problem of maximizing the commitment power of a constitution by balancing the risk that it may be too rigid and therefore abolished altogether against the risk that it may be too lenient and impose no or only insufficient constraints.
2. On Limiting the Power to Enact

2.1 The 'Logical' Problem

That the Pope could not tie his own hands was a commonly accepted insight of medieval political theory. As the 'canonists' observed – already noted in Ockham (1992) – the Pope could not 'today' enact a command that would commit him to do something 'tomorrow'. For, being the highest authority in church matters, the Pope could revoke today's command tomorrow. Therefore he could not, by a present command, commit himself to a future action. A revocation of present enactments of norms would follow automatically under the rule that later norms enacted by the Pope supersede norms enacted before. In fact, this power to create norms that are 'automatically' regarded as valid according to the basic 'rule of recognition' 8 of the internal rules of the church characterizes more than anything else what it means to be the Pope. But then, how could the Pope conceivably commit himself in church matters? To put it slightly otherwise, given the preceding characterization of the Pope, there seem to be no subgame-perfect ways of self-commitment for him. According to the same logic, it seems to follow more generally that a supreme rule-giver in any system of rules cannot commit. Like a rational actor who is defined such that he cannot give up his own rationality, or a Bodinian sovereign 9 who cannot restrict his own sovereignty, any supreme law-giver seems unable to restrict his own capacity to enact law by law. 10 Is it conceivable, therefore, that the rules of rule enactment could limit themselves? Is it meaningful to enact a constitutional clause like Article 88 of the Danish Constitution or Article 79(3) Grundgesetz (GG, German Constitution or German Basic Law), which both stipulate that certain other norms cannot be altered legally in the future? 11 It is unclear whether articles such as 88 of the Danish Constitution or 79(3) GG could themselves be changed constitutionally. 12 It all depends on how we interpret the rules of the social games we play. If we accept, as we should, that the semantics of rules are such that they cannot be changed in ordered and intentional ways unless there are secondary rules allowing for such changes, then a system of rules without a rule of rule enactment cannot be changed intentionally according to rules. 13
8 On this see, of course, Hart (1961).
9 On this and for additional literature see Garzon-Valdes (1983).
10 Being the highest authority in the church, the Pope could not find means within his rule enactment power to solve any 'political weakness of the will' problem he might face. See Ainslee (2002), Ainslee (1992), and Spitzley (2005).
11 Of course, the rules can always be abolished by simply not obeying them. But this is different from being able or unable to abolish rules within the system according to its own rules of legitimate rule change.
12 See Ross (1969), Raz (1972), and Hoerster (1972).
But if that is so, then introducing a rule of rule change that explicitly names the exceptions to the fall-back rule of no change is possible. The semantics of interpreting rules are such that all intentional alterations of norms of the basic system of rules by enacting new rules are ruled out unless such alterations are explicitly allowed. According to this interpretation, the German Constitution, by introducing 79(3) GG, explicitly allows 'e contrario' the change of all articles except for 79(3) GG. The latter remains unalterable in explicitly stating that articles 1 and 20 may not be altered. In this reading, article 79(3) GG is just an extreme form of a list of exceptions to the rule that the basic rules are unalterable unless the alteration – in the case at hand, for all articles except 1 and 20 GG – is explicitly admitted. There does not seem to be any 'logical' problem in principle with the preceding solution of the constitutional commitment problem (Kliemt 1993). We construe a game of rule enactment in which we start from a set of unalterable norms, including one unalterable rule which explicitly lists the rules that can be altered. All others remain beyond the reach of the rule of rule enactment, since rules in the sense that we use the term are 'unalterable by rules' entities unless the possibility of alteration is explicitly introduced by a secondary rule. As long as the consistency of the system of rules is enforced, the rule of non-alteration will restrict the enactment of new rules (Kliemt 1978). In short, constitutionalism may work if an appropriate 'rule of recognition' is in place. Applying this rule we can tell valid from invalid law and impose substantive constraints on what can be validly enacted according to the rules themselves. There is no logical problem involved in absolutely limiting the constitution by means of the constitution. As long as sufficiently many sufficiently influential individuals 14 adopt an internal point of view to its basic rule of recognition, the system will be changed only according to the rules of rule change implemented in it. But within the modern democratic mindset, as opposed for instance to that of the medieval Church, 15 any form of commitment must pay due respect to the requirements of democratic rule. This 'psychological' rather than logical problem renders the absolute commitments discussed before impossible. As far as within-rule choices are concerned, sets of voting rules that may be 'self-stable' – in that they will not lead to their own abolishment, in some kind of equilibrium – become of particular importance as means of constitutional commitment.
13 Of course, people can start to act differently and thereby change the established rules or conventions, but this is not of interest in a context in which we consider games in which the rules are changed only according to the rules of those games. To games in which there are other ways of rule change, obviously other considerations would apply.
14 This, of course, alludes to a famous phrase from the 'defensor pacis' of Marsilius of Padua in which he characterizes conditions for the existence of constitutional frames in general.
15 Even though, for instance according to Russell's fine discussion of the conciliar movement in the medieval church, there was a strong democratic element even among the reformist precursors of the much less democratic Lutheran reformation. See Russell (1975).
The analysis of Barbera and Jackson (2004) is in this regard very instructive. However, it should be noted that the analysis presupposes that rules manage to exist at all. For that to be the case they must be brought into existence by the application of the rule of recognition from an internal point of view. And this, in turn, will happen only if an appropriate belief system prevails.
2.2 The Paper Wall Problem and Democracy

Critics of constitutionalism have always objected that it is impossible to constrain real powers in a society by writing something 'on paper'. What matters according to this argument is 'real' power (guns and money, so to speak), 16 not words in a legal document and opinions about the legitimacy thereof. However, in the last resort hardly anything but opinion does matter. As the British Moralists were fond of saying, 'it is on opinion only that government is founded' (Hume 1985: I, iv). 17 The opinion that the words written on paper (or, in the case of unwritten constitutions, established in practices) go along with a legitimate claim to obedience – or at least, as Hume insists, with the belief that no alternative system can be established with acceptable transition costs (including transition risks) – renders the constitution 'stable'. The belief is at the same time limiting and constitutive for power. 18 Who is powerful is determined by rules. The powerful are powerful because certain individuals accept a 'rule of recognition' such that those thereby empowered are 'recognized' as those whose orders are to be obeyed;

and this maxim extends to the most despotic and most military governments, as well as to the most free and most popular. The soldan of EGYPT, or the emperor of ROME, might drive his harmless subjects, like brute beasts, against their sentiments and inclination: But he must, at least, have led his mamalukes, or praetorian bands, like men, by their opinion (Hume 1985: iv).
Some two hundred years later Hayek makes roughly the same point:

There is thus no logical necessity that an ultimate power must be omnipotent. In fact, what everywhere is the ultimate power, namely that opinion which produces allegiance, will be a limited power, although it in turn limits the power of all legislators. This ultimate power is thus a negative power, but as a power of withholding allegiance it limits all positive power. And in a free society in which all power rests on opinion, this ultimate power will be a power which determines nothing directly yet controls all positive power by tolerating only certain kinds of exercise of that power (Hayek 1973-79: I, 93).

16 We do not deny that the fact that Swiss constitutional democracy has been a rather stable one may be related to the fact that Switzerland always had a militia system of defence and no standing army.
17 Hobbes already said '... the power of the mighty hath no foundation but in the opinion and belief of the people ...' (Hobbes 1682/1990: 16).
18 On how power is constituted by secondary rules see Barry (1981).
Much depends on what is meant by the term 'free society' in this context. If we assume that we are talking of a non-despotic system in which secure individual rights and the rule of law prevail, then, clearly, it should be true that power is quite effectively constrained in a 'free society'. The power of the mighty is constrained by the people's unwillingness to follow too extreme orders of those who are – by the very willingness to follow their guidance – made powerful. But if we assume that a free society, on top of being subject to the rule of law, is characterized by democratic rules of law enactment, then things may be different. The claim to legitimacy that majority vote as such seems to command may be so strong that it can undermine any restrictions that may have been imposed. Therefore the fundamental problem of making constitutionalism work in practice – making the constitution an effective factual restraint on majority rule rather than a mere paper wall – applies with particular force to constitutional democracies or democratic games and their sub-games. The risk that the constitutional system may be eroded by the rule of simple majority applies to all forms of democratic rule. 19 But in democracies in which democratic rule is not yet firmly established as a guiding principle, it is a minor one compared to the risk that the legitimacy of the system at large is not accepted. In particular, after the initial phase in which a new constitutional democracy is established, there is a risky time in which those who are sceptical about the new political order must be induced to think more favourably of it. Clearly, citizens may be won over by such factors as economic success established within the new legal framework (as the German experience after World War II shows). But another factor crucial for creating a general opinion of legitimacy seems to be participation in the democratic political process. Even a democracy that is not 'self-stabilizing' in the aforementioned sense may be 'self-establishing' as far as the belief in its legitimacy is concerned. Participation in the exertion of democratic power might induce a gradual change of mind even in the foes of constitutional democracy. If, as we tend to believe, such a mechanism of endogenous preference change is in fact operative, this psychological mechanism suggests that democrats should consider granting the non-democratic a share in democratic power. In doing so they will incur the risk of losing out to the foes of democracy altogether. But chances are that the foes may be induced to become friends of constitutional democracy (or, so to say, be tamed). This might render constitutional democracy much safer against constitution substitution – by external or revolutionary means. 20
19 For this problem see, again, Barbera and Jackson (2004).
3. Sharing Democratic Power

Imagine you are in the position of a counsellor to democratic parties. Assume that the parties supporting democracy confront minority parties that are critical of democracy itself. The minority groups intend to compete within the rules of the democratic game for a majority. Their agenda is to gain a majority in order to abolish democracy eventually. Or, to put it slightly differently, they intend to use the majoritarian rule of rule change to rule out its future use. If the anti-democratic parties – to whom we will also refer, rather euphemistically, as 'dogmatic' – should succeed in abolishing constitutional democracy democratically, this would have devastating consequences from the point of view of adherents of constitutional democracy. However, as long as the parties with non-democratic aims comply with the rules of constitutional democracy, it seems very problematic from a democratic point of view not to admit them as players or competitors in general elections. The more seriously democratic parties take their own basic democratic convictions, the more difficult it is for them to erect barriers to entry to the 'political market'. In their dealings with dogmatic parties, democratic ones will therefore tend to seek remedies for the problem of dealing with constitutional democracy's foes that avoid disenfranchising these groups of the populace. For this reason, and since experience indicates that letting the enemies of democracy play along with its supporters and compete within the system can be rather subversive of the original anti-democratic impetus of the dogmatic parties, democratic parties may tend towards 'power sharing'. As far as participation in democratic life leads to accepting democracy, one could speak of an addiction hypothesis of democratic participation. Such a view would be very naïve if it assumed that being exposed to democracy would always and almost automatically induce a preference for democratic procedures and constitutional democracy. But this notwithstanding, the self-educational aspect of democratic participation should not be dismissed out of hand. In transitional societies, especially when the winners and the losers of fundamental changes can hardly be predicted (except for a few certain losers), many might tentatively be willing to accept democratic elections (Acemoglu and Robinson 2006). But to take hold, this tentative view presupposes that all groups participate not only in voting but also in the exertion of the powers thereby conferred on government and other (e.g. judicial) authorities. Prevalent groups of individuals who all endorse constitutional democracy will thus face the need to share governmental powers with others who may well remain enemies of constitutional democracy.
20 On other aspects of what makes democracy work see, of course, Putnam (1994).
Taking such a bet is in line with the democratic principles of the majority. It takes into account the democrats' resentment against disenfranchising any group of voters (and here not only as far as the right to vote is concerned but also as regards the actual exercise of the powers conferred on governments). Moreover, since it is based on quite plausible assumptions of political psychology, democrats may often hope that integrating democracy's enemies into the democratic process succeeds. This prospect seems worth taking some risk. Still, it does not justify exposing democracy to arbitrary risks. Since we need some kind of instrument to assess such risks in somewhat more precise terms, let us turn to some very simple models as a first step in this direction.
3.1 A Model of a Democratic Majority Cartel in a Democracy

To be more specific, assume that proven democratic parties compete according to the rules of the game of democracy but form a cartel as far as the admission of non-democratic parties to a share in democratic power is concerned. Members of the cartel believe that there can be two types of adversaries of democracy. One type is so dangerous that the risk of dealing with it is relatively high, while for the other that risk is somewhat lower. We refer to the latter as the low-danger or δ-type, and to the former as the high-danger or (1 − δ)-type. The parameters δ ∈ (0, ½) and 1 − δ can be interpreted as the probability of an abolishment of democracy if the democratic parties handed over all power to their non-democratic adversary. 21 The probability that co-operation with the anti-democratic party leads to an end of democracy itself will depend on the share of power c ∈ [0,1] conceded (the concession made) by the democratic parties to the anti-democratic ones. Though the concession of size c is made with the aim of assimilating the dogmatic parties into the democratic system, this effort may fail. The actual probability of failure of the 'appeasement policy' for given power share c is assumed to be ρ = cθ when the true type of the non-democratic party is θ ∈ {δ, 1 − δ}. Assume that, when offering the share c of power, the democratic parties cannot discriminate between the two types. But the democratic parties have beliefs about the non-democratic party's type. More specifically, we assume the democratic majorities to expect the low-danger δ-type with probability p ∈ (0,1) and the high-danger (1 − δ)-type with complementary probability 1 − p.

21 Somewhat more generally, one could introduce separate parameters δ_H, δ_L ∈ (0,1) with δ_H > δ_L for the two types; but the implicit restriction δ_H = 1 − δ_L implies no essential loss.
Let us assume we assess the value of successfully integrating the originally anti-democratic minority into the democratic process to be

u(c) = β + (1 − θ)c for some β ∈ [0,1).

The process will eventually succeed, and the benefits u(c) will actually be reaped, with the type-dependent success probability 1 − ρ. The payoff of having admitted the dogmatic minority to a share c of power in case of failure is assumed to be 0. This payoff will apply with the type-dependent complementary probability ρ. Note that if the inclusion strategy succeeds, its value will be higher for the δ-type than for the (1 − δ)-type, since for θ = δ we get β + (1 − δ)c, which in view of δ ∈ (0, ½) is larger than β + δc. This reflects that, during the time when the transformation into a fully democratic party – the aim of the process – has not yet succeeded, it can very plausibly be assumed that from the point of view of the democratic parties the co-operation in democratic government is the more fruitful the less anti-democratic the party admitted to power sharing is. Either the process ends in success and the party becomes fully democratic, or the party manages to abolish democracy. 22

The kind of counsel that a counselor should give a cartel of democratic parties depends, of course, on the expected benefits perceived for different policies. As far as that is concerned, the crucial policy variable is c. According to the assumptions made here, c can be fixed by the cartel of democratic majority parties as seems fit. Inviting the anti-democratic party in or not can be decided for alternative values of c by the majority. 23 For given values of δ and p, an optimal c can be determined by considering the expected value (or utility)

U(c) = p(1 − δc)[β + (1 − δ)c] + (1 − p)[1 − (1 − δ)c][β + δc],

where 1 − ρ_L = 1 − δc and 1 − ρ_H = 1 − (1 − δ)c indicate the probability that the process of power sharing at parameter c does not fail – to keep matters simple the payoff is assumed to be 0 if it fails (the generalization is obvious) – in the case of the low-danger and the high-danger anti-democratic party, respectively. The c* ∈ (0,1) which maximizes U(c) can be found by forming the first derivative (the second-order condition for an interior optimum, U″(c) = −2δ(1 − δ) < 0, is satisfied),
22 The two types of an originally dogmatic party may not imply equal payoffs for the democratic parties, since the latter may have lost all chances of democratically gaining power in the case of the high-danger type.
23 It should also be noted that in a constitutional democracy c need not coincide with anything like the number of ministers in the cabinet or any such simple measure.
U′(c) = p{−δ[β + (1 − δ)c] + (1 − δc)(1 − δ)} + (1 − p){−(1 − δ)[β + δc] + [1 − (1 − δ)c]δ}

and setting it to zero. Solving for c yields

c* = [p(1 − (1 + β)δ) + (1 − p)(δ(1 + β) − β)] / [2δ(1 − δ)]
   = 1/[2(1 − δ)] − β/(2δ) + [(1 − 2δ)(1 + β) / (2δ(1 − δ))] p.
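As a plausibility check on the algebra, the following sketch compares the closed-form c* with a brute-force maximization of U(c) on a grid; the code and the parameter values δ = 0.4, β = 0.3, p = 0.5 are our illustrative choices, not taken from the model itself.

```python
# Sketch: check the closed-form optimum c* against a brute-force grid search.
# The parameter values delta, beta, p are illustrative assumptions of ours.

def U(c, delta, beta, p):
    """Expected utility of conceding power share c to the dogmatic party."""
    low = (1 - delta * c) * (beta + (1 - delta) * c)      # low-danger type
    high = (1 - (1 - delta) * c) * (beta + delta * c)     # high-danger type
    return p * low + (1 - p) * high

def c_star(delta, beta, p):
    """Closed-form solution of the first-order condition U'(c) = 0."""
    return (p * (1 - (1 + beta) * delta)
            + (1 - p) * (delta * (1 + beta) - beta)) / (2 * delta * (1 - delta))

delta, beta, p = 0.4, 0.3, 0.5
grid = [i / 100000 for i in range(100001)]
c_numeric = max(grid, key=lambda c: U(c, delta, beta, p))
print(c_star(delta, beta, p), c_numeric)   # both are approximately 0.72917
```

Since U″(c) = −2δ(1 − δ) < 0, U is strictly concave, so the grid maximum and the first-order condition must agree whenever the latter yields a c* in (0,1).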
Therefore, whenever a value c* ∈ (0,1) with U′(c*) = 0 can be found, the optimal policy advice is: the enemies of democracy should be admitted in to have the positive share c* > 0 of democratic power. 24 If, however, there is no c* ∈ (0,1) with U′(c*) = 0, then the best policy is to keep the dogmatic party out at c* = 0. The latter case is rather likely when, as opposed to our simplification, failure to induce the dogmatic party to develop a belief in the legitimacy of constitutional democracy and a proclivity to play by its rules leads to dramatically negative payoffs for the democratic parties. Looking at the effect of a variation of β, i.e., of the baseline value of no integration (c = 0), one can check that β → 0 implies that c* converges to

lim_{β→0} c* = [δ + (1 − 2δ)p] / [2δ(1 − δ)] > 0.

Moreover, the marginal change of c* after a marginal increase of β is

dc*/dβ = [(1 − 2δ)p − (1 − δ)] / [2δ(1 − δ)],

which is negative for every p ≤ 1. So, the larger the baseline value of no integration, the smaller is the optimal power share given to the non-democratic party by the democratic cartel. If β is sufficiently large, c* = 0 will in fact become optimal. More generally, the requirement c* > 0 can be expressed as the requirement that p exceeds the lower bound

p > [β − δ(1 + β)] / [(1 − 2δ)(1 + β)].
It may be useful to consider 'type' convergence, too. If the types converge, with δ → ½, this implies

lim_{δ→½} c* = 2[(p/2)(1 − β) + ((1 − p)/2)(1 − β)] = 1 − β,
24 The smaller p, the smaller is the optimal inclusion level c* which the democratic cartel will offer to the non-democratic party (recall δ ∈ (0, ½) and β ∈ [0,1)). This would naturally be expected, since p is the probability of the less dangerous dogmatic type.
according to which an interior solution c* ∈ (0,1) requires β ∈ (0,1). Finally, for p = ½ we get

c* = [1 − (1 + β)δ + δ(1 + β) − β] / [4δ(1 − δ)] = (1 − β) / [4δ(1 − δ)].
The preceding considerations are applicable only if the democratic parties in a democracy face their adversaries as a unitary actor. As long as democratic parties manage to act as a kind of 'power cartel', they should be able to implement the optimal strategy choice c*. They will not unwisely incur the risk of admitting non-democratic parties to a share in power if that does not maximize their expectations. If, however, competition among democratic parties becomes a factor, it may well be that the democratic cartel, desirable as it may be otherwise, breaks down. The following model captures some of the effects of competition among the democratic parties and corroborates in more precise terms some of the intuitive views on the potentially harmful effects of such competition in a setting in which a dogmatic party is present.
3.2 Democratic Competition for Dogmatic Parties

Assume that there are two democratic parties and a single dogmatic one. The democratic parties could share power, which in total amounts to c = 1. Neither of them has a majority on its own, but the two together could form a majority that jointly would command power c = 1. Let c₁ ∈ (0,1) be the power that democratic party 1 is offering to concede to its preferred coalition partner, democratic party 2, and let c₂ ∈ (0,1) be the concession that party 2 is willing to make to democratic party 1. Accordingly, democratic party 1 demands the share (1 − c₁) ∈ (0,1) while democratic party 2 demands (1 − c₂) ∈ (0,1). If in the bargaining process among the two democratic parties the demands are incompatible, in the sense of (1 − c₁) + (1 − c₂) > 1, then insufficient concessions to the other democratic party, respectively, have been made, since then c₁ + c₂ < 1. 25 Assume that in this case the single non-democratic party, 3, approaches a democratic party, possibly the one which was willing to make the larger concession c = max{c₁, c₂} to the intended democratic partner. We assume 26 that the (by intention) non-democratic party expects the same concession as was offered to the democratic competitor.
25 On bargaining see Holler (1992).
26 Openly discriminating against a suspicious party while nevertheless inviting it to participate in government may as such preclude making it truly democratic (the coalition agreement between Christian Democrats and Communists in Italy did, for instance, avoid any obvious discrimination).
To make the same demand is plausible, since the dogmatic party claims to be democratic. To concede it may be wise in view of the aim of 'seducing' the foes of democracy, since that would hardly be possible if the latter were discriminated against. If the party i ∈ {1,2} with cᵢ = max{c₁, c₂} evaluates sharing power with the dogmatic party at Uᵢ(cᵢ), while a democratic party that is not sharing in power at all evaluates the result at 0, then the party approached should in principle be willing to share power if

Uᵢ(cᵢ) = p(1 − δcᵢ)[β + (1 − δ)cᵢ] + (1 − p)[1 − (1 − δ)cᵢ][β + δcᵢ] > 0.
When, after insufficient concessions, a purely democratic coalition ceases to be an option, a coalition with the dogmatic party may be formed even if the expectation is not maximal from the point of view of the democratic parties. Had they been able to form a cartel, they would have been better off, at least potentially. But the competition among democrats drives them towards a more risky course. The preceding comparison of Uᵢ(cᵢ) with the 0-payoff assumes, of course, that the democratic competitors 1 and 2 cannot agree on sharing power (after insufficient concessions in the sense of c₁ + c₂ < 1). But even in the case of insufficient concessions, a party i ∈ {1,2} may approach its democratic counterpart and try to avoid the impasse by a further concession cᵢ = 1 − cⱼ with j ≠ i. Assume that the payoff from conceding cᵢ = 1 − cⱼ to one's democratic competitor is Uᵢ(cᵢ) = 1 − cᵢ = cⱼ. Then we only have to expect a democratic party to co-operate with the non-democratic party if

p(1 − δcᵢ)[β + (1 − δ)cᵢ] + (1 − p)[1 − (1 − δ)cᵢ][β + δcᵢ] > cⱼ    (*)
holds for at least one party i ∈ {1,2} with j ≠ i. Otherwise, one could expect the two democratic competitors to reach an agreement in spite of their initial impasse. If, however, Uᵢ(cᵢ) is larger than cⱼ for at least one party i, chances are that the dogmatic party will be admitted into government. The crucial condition depends on the insufficient concessions cᵢ + cⱼ < 1 of the two democratic parties. In slightly more precise terms, one could imagine the following elementary procedure among two democratic parties, 1 and 2, and a dogmatic party, 3 (a computational sketch follows the summary list below):

Stage 1. Both democratic parties 1 and 2 choose a concession c₁, c₂ ∈ [0,1], respectively. If c₁ + c₂ ≥ 1, the game ends with a coalition of the two democratic parties {1,2} and payoffs

Uᵢ = (1 − cᵢ) + (c₁ + c₂ − 1)/2 for i = 1,2 and c₁ + c₂ < 2, and U₁ = U₂ = ½ for c₁ + c₂ = 2.
If c₁ + c₂ < 1, the process proceeds to the next stage.

Stage 2. The democratic party i ∈ {1,2} with cᵢ > cⱼ (if c₁ = c₂, equal probability is assumed) decides between conceding cᵢ = 1 − cⱼ, thereby ending the game with coalition {1,2} receiving payoffs Uᵢ = cⱼ, Uⱼ = 1 − cⱼ; or forming a coalition {i,3} with the dogmatic party, 3, leading to payoff

Uᵢ(cᵢ) = p(1 − δcᵢ)[β + (1 − δ)cᵢ] + (1 − p)[1 − (1 − δ)cᵢ][β + δcᵢ].
In sum, the initial concessions are crucial along three dimensions:
• the power shares of the democratic parties in the case of feasibility, c₁ + c₂ ≥ 1;
• the role of becoming the natural target of undemocratic dogmatism (in the sense that the party i with cᵢ > cⱼ will be approached, since party 3 finds it more likely that this one offers a higher power share);
• the effect on the crucial condition (*).
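As announced above, here is a minimal computational sketch of the two-stage procedure. It is our own illustration: the parameter values and the sample concessions are assumptions, U_coop implements the expected payoff Uᵢ(cᵢ) from section 3.1, and Stage 1 splits any surplus power equally, following the payoff rule as reconstructed above.

```python
import random

def U_coop(ci, delta, beta, p):
    """Party i's expected payoff from coalition {i, 3} at concession ci."""
    return (p * (1 - delta * ci) * (beta + (1 - delta) * ci)
            + (1 - p) * (1 - (1 - delta) * ci) * (beta + delta * ci))

def play(c1, c2, delta=0.4, beta=0.3, p=0.5):
    """One pass through the two-stage concession game of section 3.2."""
    if c1 + c2 >= 1:                        # Stage 1: concessions compatible
        bonus = (c1 + c2 - 1) / 2           # surplus power split equally
        return f"coalition {{1,2}}, payoffs ({1 - c1 + bonus:.3f}, {1 - c2 + bonus:.3f})"
    # Stage 2: the more conceding party moves (ties broken by a coin flip).
    i = 1 if c1 > c2 else 2 if c2 > c1 else random.choice([1, 2])
    ci, cj = (c1, c2) if i == 1 else (c2, c1)
    if U_coop(ci, delta, beta, p) > cj:     # condition (*): defect to party 3
        return f"party {i} admits dogmatic party 3 at share {ci:.3f}"
    return f"coalition {{1,2}} after party {i} concedes {1 - cj:.3f}"

print(play(0.45, 0.40))   # party 1 turns to party 3
```

With these numbers U₁(0.45) ≈ 0.409 > c₂ = 0.40, so condition (*) holds: even a small shortfall in mutual concessions pushes the more conceding democrat into a coalition with the dogmatic party, which is exactly the mechanism described above.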
4. Extensions

Assume that the above condition (*) for expecting that a democratic party could be willing to co-operate with the non-democratic party applies. The result may be a coalition with a comparatively high power share cᵢ > cⱼ for the non-democratic party, even though an operational cartel of the democratic parties would optimally choose only a low involvement of the non-democrats, or none at all (see section 3.1). One might object to this finding that we considered a fairly inflexible bargaining protocol (essentially simultaneous concessions), but there are good practical and theoretical reasons to expect significant mis-co-ordination even under more sophisticated bargaining procedures. 27 The public-good character of democracy implies that competition between democratic parties – no matter which precise form it takes – can produce a sub-optimal level of inclusion of dogmatic parties (which in fact may prove 'lethal' for democracy itself). Democratic competition fares worse in that regard than a cartel of democratic parties entirely focused on (and restricted to) democracy's preservation as well as on democratizing dogmatism. Inefficiency in fulfilling democratic values is the result of democratic agents' efforts to maximize individual party-oriented payoffs rather than being first of all interested in preserving democracy. Suppose that the two democratic parties independently choose levels c₁ and c₂ which jointly determine the degree of acknowledgement or legitimacy bestowed on the non-democratic party. The non-democratic party's total share of power, c, which results from the individual choices c₁ and c₂, could be determined in various ways depending on the institutional structure. In terms of our simple model this structure may lead to different conditions of power sharing with the non-democratic party.
27 See, for example, the general impossibility result in Myerson and Satterthwaite (1983).
For example, c ≡ η(c₁ + c₂) for η ∈ (0,1) would reflect that both democratic parties' acknowledgements are perfect substitutes regarding the non-democratic party's overall role in society and potentially harmful access to power. Alternatively, c ≡ η · c₁ · c₂ would formalize that more acknowledgement by one of the democratic parties can compensate for less acknowledgement by the other, but with an increasing rate of substitution and, more critically, that both democratic parties can veto any positive power share for the non-democrats, since if either party chooses cᵢ = 0 then total power c is zero. The same is true if, for instance, c ≡ η · min{c₁, c₂} (capturing a very high degree of complementarity between c₁ and c₂ over their full range). Here, for the time being, we do not specify how c is linked to c₁ and c₂. Of course, whether in fact there will be institutional arrangements corresponding to the preceding examples of functional forms is an empirical issue. Likewise, it depends on the factual preferences of the decision-making entities which kinds of decision they would reach under alternative institutional arrangements. Assume, for instance, that the democratic parties are concerned with, first, the common expected utility U(c) considered in section 3.1 but also, second, a private utility component Uᵢ(cᵢ) which depends only on the acknowledgement cᵢ that they themselves bestow on the dogmatic party. This term may increase or decrease in cᵢ, depending on whether it primarily reflects costs of acknowledging the non-democrats or (private) benefits from doing so. The former may result, e.g., from the need to muster internal support for the implied gamble amongst risk-averse party members, or because acknowledgement of an extreme leftist/rightist party diminishes the moderate leftist/rightist party's voter base. Private benefits might take the form of an expected 'preferential treatment' in case the non-democrats successfully acquire dictatorial power, or could account for diminished electoral chances of the democratic competitor (reminiscent of the implicit endorsement of the Ross Perot or Ralph Nader candidacies in US presidential elections by Democrats and Republicans, respectively, which, of course, did not threaten US democracy but potentially the established two-party system). Independently of whether Uᵢ(cᵢ) increases or decreases in cᵢ, party i will choose cᵢ to maximise the sum of the expected net social benefits of inclusion, U(c), and net private benefits Uᵢ(cᵢ). Except for special cases, whatever the expected cⱼ and the connection between c, c₁, and c₂ may be, 28 the maximum of U(c) + Uᵢ(cᵢ) is achieved at a different level cᵢ* than the maximum of U(c). That is, individual decisions c₁* and c₂* will typically fail to result in the optimal level of inclusion c* (as identified in section 3.1). The right level of comparison may actually no longer be c*, since private
28 Even, for instance, c ≡ cᵢ could be justified either by false consensus (party i thinks that party j will reason in the same way, so that c = cᵢ = cⱼ) or by dictatorial illusion (party i thinks that her choice cᵢ will determine cⱼ).
costs or benefits to the democratic parties, Uᵢ(cᵢ), were not considered in section 3.1. But even if one looks at the level c** which maximizes U(c) + U₁(c₁) + U₂(c₂), e.g. when assuming c ≡ (c₁ + c₂)/2, the equilibrium (c₁*, c₂*) produced by the strategic interaction of the democratic parties will typically fail to be socially optimal. The reason is that both democratic parties impose an externality on each other (via U(c)) which is ignored in their particular optimization problems. The result could be too little integration of the non-democratic party, namely if private costs dominate: both parties try to free-ride, i.e., enjoy the expected net benefits from successful integration but bear a less than equal share of its up-front costs. It seems more likely, though, that there will be too much integration, in analogy to the analysis of section 3.2. This is the case if private benefits are considerable: both parties try to feather their own nest and spoil the other's, but fail to fully account for the shared consequences of unsuccessful integration. The intermediate case in which positive and negative externalities cancel out is a theoretical possibility (then both parties' interests would be fully aligned with the common goal of preserving democracy), but it can be expected to arise only by great coincidence. To admit party competition, sub-optimal choices (c₁*, c₂*) may well be a price worth paying. In particular, the gap between the socially optimal level c* and the one resulting from (c₁*, c₂*) can be tolerably small. This would be the case if the term U(c) is of considerably greater magnitude and variation than the corresponding private terms. Both democratic parties would then value democracy per se more highly than winning a particular election or pursuing other private goals such as increasing their party's membership. Production of the 'ideal level' of inclusion of the dogmatic party by competition among democratic parties can be expected to be a rare event, though. Still, the more all democrats are committed to democracy, the less likely a fatal deviation from the ideal level becomes. This is how it should be.
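The externality argument can be illustrated numerically. The sketch below is entirely our own construction and rests on stated assumptions: the aggregation rule c ≡ (c₁ + c₂)/2, a linear private benefit Uᵢ(cᵢ) = γcᵢ with γ > 0, and a joint objective that counts U(c) once for each party (a slight variant of the c** criterion above, since U(c) enters both parties' payoffs). Equilibrium is approximated by damped best-response iteration on a grid.

```python
# Sketch: equilibrium vs. jointly optimal inclusion of the dogmatic party.
# Assumptions (ours): c = (c1 + c2)/2, private benefit U_i(ci) = gamma * ci,
# joint objective = sum of both parties' payoffs; all parameters illustrative.

GRID = [i / 1000 for i in range(1001)]

def U(c, delta=0.4, beta=0.3, p=0.5):
    return (p * (1 - delta * c) * (beta + (1 - delta) * c)
            + (1 - p) * (1 - (1 - delta) * c) * (beta + delta * c))

def best_response(cj, gamma):
    """Party i maximizes U((ci + cj)/2) + gamma * ci over the grid."""
    return max(GRID, key=lambda ci: U((ci + cj) / 2) + gamma * ci)

def nash(gamma, iters=100):
    """Damped best-response dynamics, for convergence of the iteration."""
    c1 = c2 = 0.5
    for _ in range(iters):
        c1 = (c1 + best_response(c2, gamma)) / 2
        c2 = (c2 + best_response(c1, gamma)) / 2
    return c1, c2

gamma = 0.05
c1, c2 = nash(gamma)
c_joint = max(GRID, key=lambda c: 2 * U(c) + 2 * gamma * c)
print("equilibrium c  =", round((c1 + c2) / 2, 3))   # ~0.938: over-inclusion
print("joint optimum  =", c_joint)                   # ~0.833
```

With positive private benefits, each party ignores the effect of its own cᵢ on the other party's enjoyment of U(c), so the equilibrium level of inclusion (about 0.94 here) exceeds the jointly optimal one (about 0.83) – the 'too much integration' case; with γ < 0 the inequality reverses and too little integration results.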
5. Conclusions

Without allowing quite unrestrained democratic voting and open competition for power, the legitimacy of democracy will be undermined. In particular in young or transitional democratic legal orders, supporters of democracy may deem it advantageous not to side-line democracy's foes. This will apply in particular if they endorse the quite plausible empirical claim that participation in democratic practices and having a real share in power will tend to induce a preference for constitutional democracy itself. Taking the risk of letting the foes of democracy participate in earnest will be good policy only if the 'enemy' is not 'too dangerous'. As long as democratic parties can form a cartel which keeps out the more dangerous dogmatic foes of democracy, one might basically trust that the risk will be taken only if it is worthwhile. But since competition among democratic parties
may induce them to admit their dogmatic competitors to a positive share in power when it is unwise to do so (from the point of view of the common weal of the parties), trust in the good behaviour of democratic parties in dealing with non-democratic ones may not be warranted. One might want to resort here to constitutional rules that simply prevent non-democrats from competing. But keeping competition open for all competitors is such a basic ingredient of the process in which democratic legitimacy is built up that it may not be a particularly good policy to erect barriers to entry. Such a policy may be unwise because under official prohibition the forbidden parties may engage in all sorts of conspiracies and clandestine operations. It may be much better to 'have them in the open' or to admit them to the democratic power game as legitimate competitors. But then the exclusion of parties from true participation in the game of power may become more conspicuous and in the end even more harmful. Nevertheless, in particular in a transitional state of affairs in which constitutional democracy has not yet taken hold, it may be relatively best to control the democratic game by certain constitutional rules of a non-democratic character. In this vein, one could imagine that a constitution would allow for a 'path-dependent' cartel formation rule among democratic parties of the following kind: any of the established democratic parties that form the cartel has a veto against the formation of any coalition that contains one of the non-democratic parties. Such a measure would fall short of a full-fledged prohibition of dogmatic parties. They would be allowed to compete for a majority of votes, while the rule might prevent the race to the bottom. If under such circumstances the adversaries of democracy failed to win votes, the rule would be helpful. Should they, however, win great shares of the vote, it would be hard to imagine that constitutional democracy could persist. Still, letting the foes of constitutional democracy participate in government so as to induce them 'to develop a taste' for constitutional democracy, though a rather desperate strategy in the first place, may nevertheless be the best option within the realm of the feasible.
Acknowledgements We thank two anonymous referees for their challenging and constructive comments, as well as the participants at the conference on 'Power: Conceptual, Formal, and Applied Dimensions', 17–20 August 2006, Hamburg, Germany.
References

Acemoglu, D. and Robinson, J.A. (2006) Economic Origins of Dictatorship and Democracy, Cambridge University Press.
Ainslee, G. (1992) Picoeconomics, Cambridge University Press.
Ainslee, G. (2002) Break Down of the Will, Princeton University Press.
Barbera, S. and Jackson, M.O. (2004) Choosing How to Choose: Self-Stable Majority Rules and Constitutions, Quarterly Journal of Economics 1011–1048.
Barry, N. (1981) An Introduction to Modern Political Theory, Macmillan.
Buchanan, J.M. and Tullock, G. (1962) The Calculus of Consent, University of Michigan Press.
Garzon-Valdes, E. (1983) Die gesetzliche Begrenzung des staatlichen Souveräns, Archiv für Rechts- und Sozialphilosophie LXVIII/4: 431ff.
Garzon-Valdes, E. (1994) Constitution and Political Stability in Latin America, in W. Krawietz, N. MacCormick, and G.H.v. Wright (eds) Prescriptive Formality and Normative Rationality in Modern Legal Systems, Duncker und Humblot.
Gordon, S. (1999) Controlling the State, Harvard University Press.
Hart, H.L.A. (1961) The Concept of Law, Clarendon Press.
Hayek, F.A.v. (1973-79) Law, Legislation and Liberty: A New Statement of the Liberal Principles of Justice and Political Economy, Routledge & Kegan Paul.
Hobbes, T. (1682/1990) Behemoth or The Long Parliament, Chicago University Press.
Hoerster, N. (1972) On Alf Ross's Alleged Puzzle in Constitutional Law, Mind 81: 422–426.
Holler, M.J. (1992) Ökonomische Theorie der Verhandlungen, R. Oldenbourg Verlag.
Hume, D. (1985) Essays: Moral, Political and Literary, Liberty Fund.
Jasay, A. de (1997) Against Politics: On Government, Anarchy and Order, Routledge.
Kliemt, H. (1978) Can There be Any Constitutional Limits to Constitutional Powers, Munich Social Science Review 4: 99–108.
Kliemt, H. (1993) Constitutional Commitments, in P. Herder-Dorneich et al. (eds) Jahrbuch für Neuere Politische Ökonomie 12, Mohr und Siebeck.
Lutz, D. (1994) Toward a Theory of Constitutional Amendment, American Political Science Review 88: 355–370.
Myerson, R.B. and Satterthwaite, M.A. (1983) Efficient Mechanisms for Bilateral Trading, Journal of Economic Theory 28: 265–281.
Ockham, W.v. (1992) Dialogus: Auszüge zur politischen Theorie, Wissenschaftliche Buchgesellschaft.
Putnam, R.D. (1994) Making Democracy Work, Princeton University Press.
Raz, J. (1972) Professor A. Ross and Some Legal Puzzles, Mind 81: 415–421.
Ross, A. (1969) On Self-Reference and a Puzzle in Constitutional Law, Mind 78: 1–24.
Russell, B. (1975) A History of Western Philosophy, Routledge and Kegan Paul.
Spitzley, T. (ed.) (2005) Willensschwäche, Mentis.
12. The Instability of Power Sharing

Steven J. Brams
Department of Politics, New York University, USA
D. Marc Kilgour
Department of Mathematics, Wilfrid Laurier University, Waterloo, Canada
1. Introduction

When two factions clash within a committee, a company, or a country, a natural question to ask is: Why don't they share power or responsibilities and try to reach a compromise that, while leaving neither faction in control, at least leaves neither too aggrieved? At the interpersonal level, couples, sometimes aided by marriage counselling or couples therapy, resolve their differences and avoid divorce. In some families, the torch is passed on smoothly to new generations, especially in the creation and development of businesses. Father, son, and grandson shared responsibilities in the hugely successful IBM. Father and daughter now run Playboy Enterprises, and Rupert Murdoch's family has successfully divided responsibilities in its vast multimedia empire. In organizations, conflicting parties frequently do find room to maneuver, make deals, and save face. At the national level, countries like Belgium and Switzerland have managed to avoid break-up despite their language and ethnic differences. And the most diverse large country in the world today, the United States, has remained one entity for over two hundred years. Indeed, the fact that it suffered, but survived, a major civil war may have contributed to its subsequent stability, as we will suggest later. But these successes are probably the exception. Family quarrels, especially, are frequent and bitter. As a case in point, three Koch brothers split into two factions, in which identical twins took different sides in a fight over control of Koch Industries, Inc., an energy-exploration and trading company that is the largest private company in the United States. And while some countries have split peacefully, like the former Soviet Union into several republics in 1991 (with Chechnya the notable exception), and Czechoslovakia into the Czech Republic and Slovakia in 1993, it is civil wars that are
the norm. 1 Since World War II, they have led to far more deaths and destruction than international wars and have a mixed record of attaining a sustainable peace. 2 What is possibly most surprising is the number of voluntary mergers that go sour. At the international level, Egypt and Syria formed the United Arab Republic in 1958, only to see it dissolve in acrimony three years later. In business, voluntary 'mergers of equals' are frequent, but in reality they hardly ever turn out that way. For example, while DaimlerChrysler and Citigroup started out as such mergers, one of the principals in each quickly became the dominant figure and ousted its former partner. To try to understand the difficulties of power sharing, we begin with Model I (section 2), in which two players agree, initially, on how they will share the assets of their merged enterprise. Either player may then choose to break the agreement by 'shooting' at its erstwhile partner – now its opponent – as in a duel. If this shot hits its mark, which we assume occurs with a specific probability, the shooter eliminates its opponent and acquires all the assets. If neither player is eliminated, the original sharing agreement stays in place. Each player must worry that its opponent will fire first. Even if it is not rational for, say, player P to fire first, we show that it will be rational for player Q to do so. But anticipating a shot by Q, which may or may not be successful, P, in turn, can do better by getting in the first shot. This result is extremely robust – it does not depend on the sharing agreement or the probability of success of either player. We show that neither player will be deterred from shooting, however large its initial share of the assets and however small its probability of eliminating its opponent. 3 This is true even with the discounting of future payoffs in a multiperiod game, as we show in Model II (section 3).
1 Conditions that lead to civil wars are analyzed in, among other places, Collier et al. (2003) and Fearon and Laitin (2003).
2 Conditions that produce sustainable peace after civil wars are discussed in, among other places, Rothchild and Roeder (2005) and Walter (2002).
3 There is a huge literature on modeling deterrence, but for the most part it ignores sharing agreements and an analysis of their stability. In international relations, see, for example, Zagare and Kilgour (2000), Powell (1999), and Brams and Kilgour (1988). Perhaps the models most relevant to our analysis are those of 'predation and conflict' developed by Jack Hirshleifer and his associates, in which one side attempts to appropriate what others have produced. See, for example, Hirshleifer (2001), the articles in the edited collection of Garfinkel and Skaperdas (1996), and Garfinkel and Skaperdas (2007). Whereas these models are rooted in economic comparisons of cost and gain, our game-theoretic models of duels abstract from those details to focus on the simple question of whether or not to attack an opponent in a 2-person game. Thus, we do not address the question of the stability of coalitions in n-person games, nor do we consider the possibility of there being a higher authority or institution to enforce an agreement. We believe our main finding – that the damage caused by shooting is essential in halting it, but it is hard to make this factor salient – justifies our using 'instability' rather than 'stability' in the title. The other justification is empirical: power struggles that provoke violent conflict are ubiquitous (Kadera 2001).
No matter how small the discount factor is and, therefore, how large the shadow of the future looms, each player will try to preempt its opponent in every period until one player or both are eliminated. In Model III (section 4), we add a damage factor to Model II that renders the strategic situation somewhat more auspicious. In particular, we posit that when a shot is fired – whether it hits its mark or not – the assets of the players will be reduced. Put another way, there is a cost, in reduced assets, when one or both players attack their opponents. This cost must be weighed against the benefit of acquiring, with some probability, all the assets, which, of course, would be reduced by the shooting. (If both players succeed in eliminating their opponents in the same period, this factor does not come into play, because both players receive nothing.) When the assets are reduced by the act of shooting as well as by the passage of time, the players will be deterred if the probabilities of eliminating their opponents are sufficiently low. But if, say, player P has a high probability of eliminating player Q, it will shoot, and so will Q, even if Q is not such a good shot. This is perhaps why gunfighters in westerns invariably try to shoot each other. Either they are too good to do otherwise, or they worry that their rival will beat them to the draw. Consequently, they try to get in the first shot before their rival does. This metaphor, however, is not so apt today, because gunfighters usually do not affect the lives of others in the way that, for example, duelists in a multinational corporation or a country can ruin the lives of thousands or even millions of people when they battle for control. We conclude (section 5) that the instability of power sharing, at least as modeled by the duel we analyze, cannot be easily overcome. Evidence from the recent deadly civil wars in Lebanon and the former Yugoslavia, in which competing factions vied for power at a cost of hundreds of thousands of lives, reinforces this conclusion. But the damage factor in Model III suggests that there may be a brighter side – if not now, then later. The destruction wrought by the combatants in the Lebanese and Yugoslavian civil wars was horrific, but, ironically, one benefit was to make the combatants reluctant to renew hostilities. In addition, realizing that a quick and decisive blow against the enemy is probably not in the cards and that more assets, instead, will be destroyed if there is further shooting, the duelists have been deterred. We briefly discuss what affects the strength of this damage factor in different settings.
2. Model I: One-Period Play

Assume there are two players, P and Q, who have probabilities p and q of eliminating their opponents when they fire at them. We assume 0 < p < 1
and 0 < q < 1, so while P and Q are not perfect shots, they each have some positive probability of eliminating their opponents when they shoot. While we assume that the players know p and q, they do not know when their shots will eliminate their opponents, making the outcomes resulting from their actions probabilistic rather than deterministic. But we assume that the players do know when a shot has been fired, rendering the duel 'noisy' and the games to be described next ones of perfect information. 4

The conflict between P and Q is over how they will divide their total assets, which we assume sum to 1. Suppose, initially, that both players agree to divide their assets so that P receives a units and Q receives 1 − a units. 5 After reaching this agreement, each player then must decide whether to fire (F) or not fire (F̄) at its opponent to try to gain all the assets and thereby not have to share power. There are four possible outcomes:

1. F̄F̄. Both P and Q hold their fire. The distribution of the assets to (P, Q) remains (a, 1 − a).
2. FF̄. P fires first. Q is eliminated with probability p, in which case P receives all the assets and the distribution is (1, 0). But Q, if not eliminated, then fires at P, eliminating P with probability q, in which case the distribution is (0, 1). 6 If neither P nor Q is eliminated, the distribution of assets remains (a, 1 − a).
3. F̄F. Q fires first. P is eliminated with probability q, in which case Q receives all the assets and the distribution is (0, 1). But P, if not eliminated, then fires at Q, eliminating Q with probability p, in which case the distribution is (1, 0). If neither P nor Q is eliminated, the distribution of assets remains (a, 1 − a).
4. FF. P and Q fire simultaneously. Q is eliminated with probability p and, independently, P is eliminated with probability q. If there is one survivor, that survivor receives all the assets. If both players survive, the distribution of assets remains (a, 1 − a). If neither player survives, both players receive 0, in which case the distribution is (0, 0).

In a standard (noisy) duel, the players can observe (or hear) each other's actions. Consequently, each knows if its opponent gets off the first shot (in which case it may be too late for the second player to respond). If the players cannot directly observe each other's actions, which will be true in many
4 By contrast, in a 'silent' duel, the shots are no less deadly, but the players do not know when they have been fired upon unless the shot is successful. For a general analysis of noisy and silent duels, see Karlin (1959, chs. 5 and 6).
5 We assume that assets are a surrogate for power, although we recognize that assets in the form of money or other resources are only one component of what is normally considered to be power. Note that the division of the assets into a and 1 − a might be an expectation of how the assets will be divided, not necessarily a formal agreement, and might be proposed, or even imposed, by some outside power. Whether there is a formal agreement or not, we assume that P and Q can make the choices we describe next.
6 This is an assumption of the model, but it will be justified later as always a rational choice in Models I, II, and III whenever one player (P or Q) fires first.
situations that we seek to model, we need only assume that the players act independently, without knowledge of each other's actions.

We suppose throughout that P and Q are rational and make choices to maximize their expected share of assets, which we use as a surrogate measure for their expected share of power. When will the players be deterred from shooting? Assume that Q is docile and does not attempt to eliminate P – it chooses F̄. We compare P's expected share of power, a, when it decides to share power with Q at F̄F̄ with its expected share, EP, when it attempts to eliminate Q at FF̄. If Q is docile and does not fire at P, even after surviving P's attempt to eliminate it, P's expected share of power will be

EP^I = p[1] + (1 − p)[a] = a + p(1 − a) > a
for all values of p (the superscript 'I' signifies Model I). Therefore, P is always better off trying to eliminate a docile Q.

More realistically, suppose that Q is not docile but merely slower than P. Then if P attempts to eliminate Q and does not succeed, Q responds by trying to eliminate P. If neither player's attempt succeeds, the power-sharing arrangement stays in place, with no loss to either player (we modify this assumption in Model III). In this case, P's expected share of power⁷ is

EP_1^I = p[1] + q(1 − p)[0] + (1 − p)(1 − q)[a] = p + a(1 − p)(1 − q).
P will accept the power-sharing arrangement iff a ≥ EP_1^I, or a ≥ p + a(1 − p)(1 − q),
which is equivalent to

p ≤ aq / (1 − a + aq).  (1)
The region in which power sharing is rational for P is labeled 1 in Fig. 1; this figure is drawn under the assumption that a = 0.6. For all values of (q, p) in region 1, P will not attack Q and will accept power sharing; outside this region, P does better firing at Q. A similar analysis shows that if P does not attack Q first but retaliates if Q attacks first and P survives – the 'realistic' scenario postulated above – then Q maximizes its expected share of power by not attacking P iff
7 Here and later in the text we use subscripts on EP to indicate the conditions under which P's expected value is calculated: subscript '1' means that P fires first; subscript '2' that P fires second; and subscript '0' that both players fire simultaneously.
Fig. 1. Models I and II. [Figure: the (q, p) unit square, with regions 1, 2, and 3 bounded by the curves defined by (1) and (2); drawn for a = 0.6.]
p ≥ aq / ((1 − a)(1 − q)).  (2)
For all values of (q, p) in the region labeled 2 in Fig. 1, Q will not attack P and will accept power sharing; outside this region, Q does better firing at P.

To summarize, given that P is assured that Q will not attack first but will retaliate if attacked, P will be deterred from attacking first only if (q, p) lies in region 1. Similarly, if Q is assured that P will not attack first but will retaliate, Q will be deterred from attacking first only if (q, p) lies in region 2. It follows that both P and Q will accept the power-sharing arrangement iff the point (q, p) lies in both regions. But as suggested by Fig. 1, these regions do not overlap. To prove this, suppose that (q, p) lies in both regions. Then

(aq / (1 − a + aq) − p) + (p − aq / ((1 − a)(1 − q))) ≥ 0

because, by (1) and (2), the left side is the sum of two non-negative terms. But this sum equals

aq / (1 − a + aq) − aq / ((1 − a)(1 − q)) = −aq² / ((1 − a + aq)(1 − a)(1 − q)) < 0.
This contradiction implies that (q, p) cannot satisfy both (1) and (2) simultaneously.⁸

8 In fact, it can be verified that the curves defining (1) and (2) have a common tangent at the origin. The fact that the deterrence regions are disjoint means that the players do not need to know the values of any parameters of the game: knowing that both can never, simultaneously, be deterred is sufficient to incite them to try to preempt each other. But the specific values of the probabilities, p and q, of eliminating an opponent – and new parameters to be introduced – do become important in Model III, wherein a region exists in which both players can benefit from power-sharing.
Thus, it is never rational for both players to accept power-sharing. Note that this conclusion does not depend on the specific values of p, q, or a.

We can say more about what will happen by determining P's best course of action if Q attempts to eliminate P initially. If Q attempts to eliminate P and P, if not eliminated, attacks Q, then P's expected share of power is

EP_2^I = q[0] + (1 − q)p[1] + (1 − q)(1 − p)[a] = p − pq + a(1 − p)(1 − q).
If P and Q attack each other simultaneously, P's expected share of power is

EP_0^I = p(1 − q)[1] + q[0] + (1 − p)(1 − q)[a] = p − pq + a(1 − p)(1 − q).
Because EP_0^I = EP_2^I, P will be indifferent between attacking Q at the same time that Q attacks P, and attacking Q only if Q attacks P unsuccessfully. This means that shooting simultaneously with Q, and shooting after Q, are equally good for P.⁹ It is easy to verify that

EP_1^I − EP_2^I = EP_1^I − EP_0^I = pq > 0.
Therefore, for any values of p and q, P prefers to attack Q before Q attacks P; P is indifferent between attacking simultaneously and allowing Q to attack first (with P firing in return if it is not eliminated). Of course, exactly the same conclusions apply to Q. Even when P would prefer to accept the power-sharing arrangement if it knew that Q would accept it too, P knows that Q maximizes its expected share of power by attacking P first. Exactly the same conclusion applies to Q. Ineluctably, P and Q are led to try to be the first to attack in a preemptive race that applies the anti-golden rule: 'Do to your opponent what he would do to you, but do it first.'

9 Alternatively, we might assume that if P is eliminated and therefore gets nothing, it would prefer to do so when it eliminates Q (who would also get nothing) than when Q survives and gets everything. In other words, dying alone is worse than dying with one's opponent. The difficulty with this assumption is that we are not modeling literally dying in a duel but, instead, being prevented from sharing power with another player. If P, because of Schadenfreude or other reasons, prefers that Q also be cut out of power if P is, what payoff greater than 0 should P receive? And aren't there P's that would have the opposite preference, wanting an opponent to win control rather than see a new player take over? Because the preferences of the eliminated players are by no means apparent in this kind of situation, we have not included them as parameters in the game. Note, incidentally, that the game is not constant-sum, because when P and Q shoot simultaneously and succeed in eliminating each other, they both end up losers with 0 – as compared with other outcomes at which their power shares sum to 1.
Earlier we argued that power-sharing would not be accepted by both sides in any region of the (q, p) unit square of Fig. 1. Now we claim, more ominously, that both players will attempt to preempt throughout the unit square. More specifically, in region 1 P prefers not to attack (on the assumption that if P does not attack, the power-sharing arrangement will be implemented), whereas Q prefers to attack. As we have just seen, the knowledge that Q prefers to attack will induce P to try to attack before Q. Because P is indifferent between attacking simultaneously with Q and allowing Q to attack first, P has good reason to preempt in region 1. By a similar argument, we can also expect a 'race to preempt' in region 2. Finally, in region 3, neither side will be willing to wait under any conditions, because each player is always better off attacking first. In this sense, region 3 is the most unstable region, because each player has a dominant strategy of attacking. Because of the rationality of preemption by one player in regions 1 and 2, and by the other player in anticipation of this preemption, these regions are hardly more stable than region 3. We conclude that no matter what the values of p and q are, no matter what the value of a is – including a = ½, when power is shared equally – and no matter what knowledge the players have of each other's intentions, each player maximizes its expected share of power by racing to preempt.
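Readers who want to check this no-overlap result numerically can do so in a few lines. The following Python sketch – our illustration, not part of the original analysis – samples the interior of the parameter space and confirms that conditions (1) and (2) are never satisfied simultaneously (they can meet only on the degenerate boundary q = 0).

```python
import random

def p_deterred(p, q, a):
    # Condition (1): P accepts power sharing, given that Q will not attack first
    return p <= a * q / (1 - a + a * q)

def q_deterred(p, q, a):
    # Condition (2): Q accepts power sharing, given that P will not attack first
    return p >= a * q / ((1 - a) * (1 - q))

random.seed(1)
for _ in range(100_000):
    a, p, q = (random.uniform(0.01, 0.99) for _ in range(3))
    assert not (p_deterred(p, q, a) and q_deterred(p, q, a))
print("Regions 1 and 2 never overlap at the sampled interior points.")
```

This is only a spot check, of course; the contradiction derived above establishes the result for all parameter values.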
3. Model II: Multiple-Period Play with Discounting of Assets

In Model II, Model I is played repeatedly: in discrete time periods 0, 1, 2, 3, …, each player may fire or not fire at its opponent. The players discount their future payoffs, using discount factor r. Thus, receiving k units of assets t periods in the future is worth the same as kr^t units received now. As in Model I, P and Q are assumed to make choices so as to maximize their expected total assets, but now over all time in which one or both players survive. We search for stationary (or Markov) equilibrium behaviour, whereby a player's choice of a strategy depends only on the situation that the player faces at the moment, not on the history of play.¹⁰ With this understanding, we analyze the situation at time 0.

10 This restriction, which facilitates calculations, does limit the players' behaviour in that it rules out history-dependent strategies like tit-for-tat. We assess next in the text whether an analogue of a trigger strategy can stabilize power-sharing by assuming that, if either player attempts to eliminate an opponent, the players enter into a duel, firing alternately until one of them is eliminated. We note that a trigger strategy constitutes a greater threat than tit-for-tat, implying that if tit-for-tat can stabilize power-sharing, so can a suitable trigger strategy. Conversely, if a trigger strategy cannot stabilize power-sharing, then the analogous tit-for-tat strategy cannot stabilize it either.
Suppose P believes that Q will not fire first at time 0 and, because of stationarity, in all subsequent periods. If P also does not fire, so both players survive into the infinite future, P's expected current value of its stream of revenues is

a + ar + ar² + ar³ + … = a / (1 − r).  (3)
On the other hand, if P tries to eliminate Q in every period, and if Q responds by trying to eliminate P, then the expected value of P's revenue stream at time t is

EP_1^II = p[1 + r + r² + …] + q(1 − p)[0] + (1 − p)(1 − q)[a + r·EP_1^II],
where the first term on the right reflects the possibility that P succeeds in period 0 by firing first and eliminating Q; the second term that Q, having survived P's shot, eliminates P; and the third term that both players survive period t and the game continues, giving P a payoff of a in period t plus its expectation in period t + 1, discounted by factor r. Substituting the stationarity condition EP_1^II(t) = EP_1^II(t + 1), the recursion can be solved directly; at any time, the expected value of P's future revenue stream is
EP_1^II = (p + (1 − p)(1 − q)(1 − r)a) / ((1 − r)[1 − (1 − p)(1 − q)r]).  (4)
Note that EP_1^II is analogous to EP_1^I in Model I. Subtracting (4) from (3), P will be at least as well off not firing at Q as firing at it, and thereby sharing in the income stream, iff

a / (1 − r) − (p + (1 − p)(1 − q)(1 − r)a) / ((1 − r)[1 − (1 − p)(1 − q)r]) = (a − p − a(1 − p)(1 − q)) / ((1 − r)[1 − (1 − p)(1 − q)r]) ≥ 0.  (5)
Because the denominator on the right side of equation (5) must be positive, the inequality will be satisfied iff the numerator is non-negative, which is equivalent to inequality (1). In other words, the condition under which P will share power – assuming Q is also willing to share power – is the same as in Model I. Moreover, while the expected current values of the two courses of action depend on the parameter r in Model II, whether or not inequality (5) holds is in fact independent of the value of r. As in Model I, it is easy to verify that it is in Q's interest to respond to P's firing first by firing at P if it survives. If Q does not respond, then P should still attempt to eliminate Q subsequently, regardless of the values of the parameters p, q, a, and r. It can also be shown that if Q attempts to eliminate P, as in Model I, then P should respond by firing itself. Furthermore, P's expected payoff is the same whether, in each period, P fires after Q or at the same time.
Finally, Q will be willing to share the revenue stream, rather than try to eliminate P, iff condition (2) holds, exactly as in Model I. But then it will not be in P's interest to refrain from firing at Q. Thus, Fig. 1 applies to Model II as well as to Model I: at every point (q, p) in the unit square, either (1), (2), or both fail. It follows that, whatever the values of the parameters, preemption is optimal in every period in which both players survive.
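As an arithmetic check on the claim that r drops out – again our own sketch, with made-up parameter values – the following Python fragment iterates the recursion defining EP_1^II, compares it with the closed form (4), and verifies that the sign of a/(1 − r) − EP_1^II agrees with condition (1) for several values of r.

```python
def ep1_closed(p, q, a, r):
    # Closed form (4) for P's expected stream when both players fire each period
    beta = (1 - p) * (1 - q)
    return (p + beta * (1 - r) * a) / ((1 - r) * (1 - beta * r))

def ep1_iterated(p, q, a, r, n=200):
    # Iterate EP = p/(1 - r) + beta*(a + r*EP), starting from EP = 0
    beta, ep = (1 - p) * (1 - q), 0.0
    for _ in range(n):
        ep = p / (1 - r) + beta * (a + r * ep)
    return ep

p, q, a = 0.3, 0.5, 0.6
for r in (0.1, 0.5, 0.9):
    share = a / (1 - r)                    # value of the sharing stream, eq. (3)
    fire = ep1_closed(p, q, a, r)
    assert abs(fire - ep1_iterated(p, q, a, r)) < 1e-9
    assert (share >= fire) == (p <= a * q / (1 - a + a * q))  # matches (1) for every r
print("Condition (5) agrees with condition (1), independently of r.")
```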
4. Model III: Multiple-Period Play with Discounting of, and Damage to, Assets

In Model III, Model II is modified to include a damage factor. In the event that a player fires at but does not eliminate its opponent in any period, the total assets of that period are reduced by damage factor s, where 0 ≤ s ≤ 1. The reduced assets are shared in the ratio a : 1 − a if nobody is eliminated, regardless of which player or players fired a shot.¹¹ In other words, there is a cost to the players of shooting, which, of course, is most harmful if a player is eliminated (it gets 0) but still hurts the player or players that survive. Besides the damage factor, the players' assets continue to be reduced by the discount factor r from period to period.

Does the damage factor change the preemption behaviour that we found in Models I and II? As in Model II, we search for stationary (or Markov) equilibria and begin by analyzing the situation at time 0. Suppose P believes that Q will not fire first at time 0. If P does not fire and both players survive, P's expected current value of its stream of revenues is

a + ar + ar² + ar³ + … = a / (1 − r).  (6)
On the other hand, if P tries to eliminate Q in every period, and if Q responds by trying to eliminate P, then the expected current value of P's revenue stream will be

EP_1^III = p[1 + r + r² + …] + q(1 − p)[0] + (1 − p)(1 − q)[a + rs·EP_1^III],

which, after applying stationarity, can be solved to obtain

EP_1^III = (p + (1 − p)(1 − q)(1 − r)a) / ((1 − r)[1 − (1 − p)(1 − q)rs]).  (7)

11 To be sure, P and Q will seek to destroy their opponents' assets rather than their own. But the fact, as we will show once again, that if shooting by one player is rational, it will lead to mutual preemption means that the assets of both sides will be damaged, whether or not the shooting eliminates an opponent. Thus, the assumption of Model III that shooting will be damaging to both sides seems reasonable, because both players will always shoot.
Note that EP_1^III is analogous to EP_1^II in Model II. Subtracting (7) from (6), P will be at least as well off not firing at Q as firing at it, and thereby sharing in the income stream, iff

a / (1 − r) − (p + (1 − p)(1 − q)(1 − r)a) / ((1 − r)[1 − (1 − p)(1 − q)rs]) = (a − p − a(1 − p)(1 − q)[1 − r(1 − s)]) / ((1 − r)[1 − (1 − p)(1 − q)rs]) ≥ 0.  (8)
As previously, because the denominator on the right side of equation (8) must be positive, the inequality will be true iff the numerator is non-negative. Define R = r(1 − s). This parameter represents the current value of the reduction in the next period's income if the players attempt to eliminate each other during the first time period, 0. We call R the discounted damage. Note that 0 ≤ R ≤ 1. The parameters r and s affect the stability of power sharing only through the discounted damage. The condition for P to accept power sharing – that the numerator of the right side of (8) be non-negative – is

a − p − a(1 − p)(1 − q)(1 − R) ≥ 0,
which is equivalent to

p ≤ (aq + a(1 − q)R) / (1 − a(1 − q)(1 − R)).  (9)
If R = 0, the fraction on the right side of (9) is identical to the fraction on the right side of (1). Thus, if the discounted damage is 0, the condition for power sharing is identical to that in the previous models. But if R > 0, condition (9) is a new constraint on power sharing. It is easy to verify that the fraction on the right side of (9) is a strictly increasing function of q. The value of this function at q = 0 is

p_0 = aR / (1 − a + aR).
Note that 0 < p_0 < a provided that R > 0. The graph of this fraction is the curved line passing from (0, p_0) to (1, a) in Fig. 2; this figure is drawn under the assumption that a = 0.6, the same value as in Fig. 1, and R = 0.2. An analogous calculation for Q shows that it will be willing to accept power sharing iff
q ≤ ((1 − a)p + (1 − a)(1 − p)R) / (1 − (1 − a)(1 − p)(1 − R)),
Fig. 2. Model III. [Figure: the (q, p) unit square, with the stable power-sharing region (shaded) bounded by the curves (9) and (10) and the points p_0, p_1, q_0, q_1; drawn for a = 0.6 and R = 0.2.]
which is equivalent to

p ≥ (aq − (1 − a)(1 − q)R) / ((1 − a)(1 − q)(1 − R)).  (10)
It is easy to verify that the fraction on the right side of (10) is a strictly increasing function of q. It equals 0 when q = q_0, where

q_0 = (1 − a)R / (a + (1 − a)R),
and it equals 1 when q = 1 − a. The graph of this fraction is the curved line passing from (q_0, 0) to (1 − a, 1) in Fig. 2. As shown in Fig. 2, p_0 and q_0 help to define the region (shaded) in the (q, p) unit square where both P and Q are willing to accept power sharing in the ratio a : 1 − a. Considered as functions of R = r(1 − s), p_0 and q_0 are strictly increasing and satisfy p_0 → 0 and q_0 → 0 as R → 0, and p_0 → a and q_0 → 1 − a as R → 1.

A sufficient condition for (q, p) to fall in the stable power-sharing region is q ≤ q_0 and p ≤ p_0. But it is not necessary; a portion of the shaded region in Fig. 2 lies above and to the right of the point (q_0, p_0). To analyze this region further and, in particular, to find a necessary condition for stable power sharing, we study the coordinates of the point (q_1, p_1), shown in Fig. 2. Formally, (q_1, p_1) is defined as the value of (q, p) achieving equality in both (9) and (10). Equating the right sides of (9) and (10) and solving for R gives
R = aq² / (1 − a − q + aq²).  (11)
To invert (11) to obtain a solution for q_1 in terms of R, note first that (11) is equivalent to the quadratic equation

a(1 − R)q² + Rq − (1 − a)R = 0.
Because the coefficient of q² is positive and the left side of the equation is negative at q = 0, it follows that q = q_1 is the unique positive root of this equation, which is

q_1 = (−R + √(R² + 4aR(1 − a)(1 − R))) / (2a(1 − R)).  (12)
An analogous calculation shows that

p_1 = (−R + √(R² + 4aR(1 − a)(1 − R))) / (2(1 − a)(1 − R)).  (13)
As illustrated in Fig. 2, a necessary condition for power sharing to be stable is q ≤ q_1 and p ≤ p_1. Note that p_1 : q_1 = a : 1 − a, so the probabilities at the upper right-hand corner of the stable power-sharing region are in the same ratio as the power shares. Moreover, it can be shown that as R → 0, p_1 → 0 and q_1 → 0, whereas as R → 1, p_1 → a and q_1 → 1 − a. Finally, for any R satisfying 0 < R < 1, it can be verified that 0 < p_0 < p_1 < a and 0 < q_0 < q_1 < 1 − a. Thus, when the discounted damage, R, is close to 0, the stable power-sharing region is very close to the point (q, p) = (0, 0).

We interpret these conditions as follows. When R is close to zero – that is, the discounted damage is low – the stable power-sharing region is small, because the assets retain much of their value over time even when there is shooting. On the other hand, when R is close to 1 – that is, the discounted damage is high – the stable power-sharing region almost fills the rectangle defined by 0 ≤ q ≤ 1 − a and 0 ≤ p ≤ a. Because much can be lost by shooting in this case, the players have good reason to refrain, especially if p and q, respectively, fall below their shares of power, a and 1 − a. A diminution of p and q could well be facilitated by an arms-control agreement.

The situation when a = ½, and power is shared equally, is noteworthy. First, equal power-sharing maximizes the area of the power-sharing region, which might help to persuade the players to renegotiate an unequal sharing agreement, especially should they be uncertain about the values of p and q. Second, all points, lines, and regions are symmetric with respect to the 45°-line joining (0, 0) and (1, 1). The endpoints of the stable region extending away from the origin are given by
p_0 = q_0 = R / (1 + R);  p_1 = q_1 = (√R − R) / (1 − R).
In this situation, it is not difficult to show that if 0 < R < 1, then (i) 0 < p_0 < p_1 < ½, (ii) p_1 → 0 as R → 0, and (iii) p_0 → ½ as R → 1. Thus, when R is close to 1 and shooting at an opponent quickly damages or destroys assets, the stable power-sharing region approaches the square with corners (0, 0), (0, ½), (½, ½), (½, 0). Then the players should never shoot, provided neither can eliminate its opponent with probability greater than ½. Of course, the dark side of Model III is that if the players are outside the stable power-sharing region, they have exactly the same motivation to shoot as they do throughout the unit square in Models I and II. At least one player will find it rational to fire at its opponent, propelling both players into a race to preempt. Arresting this race in Model III requires that the players' assets deteriorate rapidly over time and that the damage caused by shooting be significant, especially if the marksmanship of the players is relatively good.
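The corner points of the stable region are straightforward to compute. The sketch below (our illustration; the function and variable names are ours) evaluates p_0, q_0, p_1, and q_1 from a and R using (12) and (13), and checks the properties just stated.

```python
from math import sqrt, isclose

def thresholds(a, R):
    # p_0 and q_0: intercepts of the boundary curves (9) and (10)
    p0 = a * R / (1 - a + a * R)
    q0 = (1 - a) * R / (a + (1 - a) * R)
    # q_1 and p_1: from the unique positive root of a(1-R)q^2 + Rq - (1-a)R = 0
    root = sqrt(R * R + 4 * a * R * (1 - a) * (1 - R))
    q1 = (-R + root) / (2 * a * (1 - R))
    p1 = (-R + root) / (2 * (1 - a) * (1 - R))
    return p0, q0, p1, q1

a, R = 0.6, 0.2                                   # the values used in Fig. 2
p0, q0, p1, q1 = thresholds(a, R)
assert isclose(p1 * (1 - a), q1 * a)              # p_1 : q_1 = a : 1 - a
assert 0 < p0 < p1 < a and 0 < q0 < q1 < 1 - a
print(p0, q0, p1, q1)                             # approx. 0.231, 0.118, 0.375, 0.25
```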
5. Conclusions

While our results may be viewed as profoundly pessimistic, we believe there is some basis for optimism. The pessimism stems from our first two models, which say that no matter what the values of any of the parameters are, rational players will be impelled to try to eliminate a partner in a power-sharing agreement. This incentive to shoot is not mitigated by a player's having a low probability of eliminating an opponent (p or q) or getting the lion's share of the power in the beginning (a or 1 − a). At least one player, and sometimes both, will have an immediate incentive to try to aggrandize all the power by firing at its opponent in both Models I and II. True, one player (say, P) may be deterred under some conditions if Q is cooperative, but Q never has this incentive at the same time that P does. Because their deterrence regions never overlap – a necessary condition for power sharing – the players will race to preempt, independent of the parameter values in the model.

Surprisingly, this conclusion is not altered by Model II, in which the Model I game is repeated, but with assets reduced by discount factor r in each period of play. While repeated play has worked to inhibit noncooperative behaviour in games like Prisoners' Dilemma (Axelrod, 1984), it does not work in the game underlying Model II. Exactly the same incentives to preempt exist as in Model I, except that now they apply in each period of play.

One possible way to foster more cooperation would be to impress on the two players that at least one, and possibly both, will end up with nothing if they fire at each other in every period of play. If they are maximin players who wish to avoid their worst outcomes – or if they are risk-averse – it would
behoove them to sign a binding and enforceable agreement to desist from shooting at each other at any time. The problem with such an agreement, of course, is finding a way to enforce it when it is always in the interest of the players to break it, perhaps ambiguously or surreptitiously in order to evade detection.

Model III, wherein shooting reduces the assets through damage factor s, provides some grounds for optimism. If the players know they will suffer not only a discounting of assets with the passage of time (introduced in Model II) but also additional damage or destruction of these assets (Model III), they have a more compelling reason to refrain from shooting. But they will not always refrain. The discounted damage R, which reflects both the discount factor, r, and the reduction in assets, 1 − s, must be sufficiently high. Only then may the passage of time, and the damage inflicted by shooting, render attempts to eliminate an opponent less appealing, especially when the marksmanship of the players is relatively poor. Note that as long as the shares of the players are equal (a = 1 − a = ½), any elimination probability greater than ½ (for either player) will induce rational players to preempt.

How can we explain that as the accuracy of ballistic and, later, cruise missiles increased from the 1960s on, the superpowers reached more and more arms-control agreements? One reason was the growing realization that a nuclear exchange could cause tremendous damage, possibly resulting in a 'nuclear winter', which echoes the damage factor in Model III.¹² But at least as important were improvements in satellite reconnaissance and other verification techniques, which facilitated the detection of treaty violations and eased the enforcement problem. In addition, both sides developed a nearly invulnerable second-strike capability, primarily through submarine-launched missiles, which meant that a first strike could not wipe out an opponent's ability to respond. Devastating as the damage might be, the enemy could strike back with almost the same fury.

Memories of earlier devastation caused by war also matter. The American Civil War, the bloodiest war in U.S. history, was an event so searing that no serious challenge to the unity of the country has been mounted since. By contrast, when a faction of a divided country believes it can strike a lethal blow against another faction that might challenge it, war becomes more likely. This happened in Lebanon and the former Yugoslavia in the 1980s and 1990s, with gruesome results for each country. Other countries have suffered greatly from civil wars in recent times.
12 In Brams and Kilgour (2007), we show that if payoffs are not accumulated in repeated play but occur only when play terminates – and are diminished if there has been shooting that does not eliminate a player, as in Model III – then the players are more likely to be deterred from shooting. In fact, they will always be deterred if power-sharing is equal (a = 1 − a = ½) and their shooting is simultaneous.
Often it takes a generation or more of conflict before a situation reaches a point at which leaders say 'no more', or outside parties intervene. Unfortunately, this can be a slow and costly way to learn, as has recently been demonstrated in, among other places, Angola, Burundi, the Democratic Republic of the Congo, Liberia, Rwanda, Sierra Leone, Somalia, Sri Lanka, and Sudan, wherein various factions have attempted, through armed conflict, to usurp power. We see no easy way to speed up the learning process. Sometimes damage must be inflicted before it sinks into collective memories and brings people to their senses, whether the conflict is interpersonal, international, or something in between.

To be sure, not all conflicts are unwarranted. Some marriages turn bad, and divorce is the best option for both spouses. Some corporate battles can be settled only by the ouster of one side if a company is to survive. And some wars, such as World War II, seem morally necessary to rid the world of an evil dictator or an inhumane movement. But many wars, especially civil wars, are morally suspect if not bankrupt. They may be fueled by the personal whims of an autocrat, or by ethnic, racial, or religious differences in the populace that become excuses for ferocious fighting that, on occasion, turns into genocide. It would be desirable if one could demonstrate, in advance, the untoward damage that such wars can cause so that the disputants, anticipating dire consequences, could pull back in time. While Model III shows that such damage can be an important deterrent to fighting, it does not show how to make it felt quickly or decisively.
Acknowledgements

We thank Matthew Braham, Frank Steffen, Stephen Haptonstahl, and an anonymous referee for valuable comments. We also benefited from discussions at the conference on 'Power: Conceptual, Formal, and Applied Dimensions', Hamburg, Germany, 17–20 August 2006, and the Workshop on Power-Sharing and Democratic Governance in Divided Societies, Centre for the Study of Civil War (CSCW), International Peace Research Institute, Oslo (PRIO), 21–22 August 2006, and other conferences where earlier versions of the paper were presented.
References

Axelrod, R. (1984) The Evolution of Cooperation, Basic Books.
Brams, S.J. and Kilgour, D.M. (1988) Game Theory and National Security, Blackwell.
Brams, S.J. and Kilgour, D.M. (2007) Stabilizing Power-Sharing, Preprint.
Collier, P. et al. (2003) Breaking the Conflict Trap: Civil War and Development Policy, Oxford University Press.
Fearon, J.D. and Laitin, D.D. (2003) Ethnicity, Insurgency, and Civil War, American Political Science Review 97: 75–90.
Garfinkel, M.R. and Skaperdas, S. (2007) Economics of Conflict: An Overview, in T. Sandler and K. Hartley (eds) Handbook of Defense Economics, vol. 2, Elsevier, 649–710.
Garfinkel, M.R. and Skaperdas, S. (eds) (1996) The Political Economy of Conflict and Appropriation, Cambridge University Press.
Hirshleifer, J. (2001) The Dark Side of the Force: Economic Foundations of Conflict Theory, Cambridge University Press.
Kadera, K.M. (2001) The Power-Conflict Story, University of Michigan Press.
Karlin, S. (1959) Mathematical Methods and Theory in Games, Programming, and Economics, vol. 2, Addison-Wesley.
Powell, R. (1999) In the Shadow of Power: States and Strategies in International Politics, Princeton University Press.
Rothchild, D. and Roeder, P.G. (eds) (2005) Sustainable Peace: Power and Democracy After Civil War, Cornell University Press.
Walter, B. (2002) Committing to Peace: The Successful Settlement of Civil Wars, Princeton University Press.
Zagare, F.C. and Kilgour, D.M. (2000) Perfect Deterrence, Cambridge University Press.
13. The Power to Propose versus the Power to Oppose

Donald A. Wittman
Department of Economics, University of California, Santa Cruz, USA
1. Introduction

In this paper we compare proposal power to veto power within the context of a majority-rule voting system, such as a legislature. To illustrate the issues involved, consider the following scenario: P can propose a bill, which is then enacted if a majority (M) of legislators vote in favour of the bill and V (for veto player) agrees to its passage. P could stand for the committee system in the US House of Representatives, where a committee brings a bill to the floor of the House. V would then stand for the president, who can veto a bill.¹ And even if the president does not veto the bill, the Supreme Court might find the bill unconstitutional. So the Supreme Court can be seen as a veto player, as well.

When does the proposer have more power than the veto player? More generally, what is the outcome if there are several proposers and/or veto players?

We will assume a unidimensional set of preferences, with each individual i having the following utility function: B_i − (p* − x̂_i)², where p* is the position chosen if the bill is enacted into law and x̂_i is i's most preferred position.² If the bill is not enacted, then the outcome will revert to the status quo, s*, so that individual i's utility will be B_i − (s* − x̂_i)². For convenience, we set s* equal to 0 and assume that the median legislator's most preferred position, m̂, is to the right of s*. We then switch the preferences of P and V and determine whether the outcome changes. If the outcome more closely tracks P's preferences than V's preferences, then we say that P is more powerful.

In all cases that we consider, the following sequence holds: (1) the proposer offers a bill to replace the status quo (if there are several proposers, then the sequence of bills is predetermined);

1 Of course, even this example leaves out the role of the Senate. Note that, with the exception of Proposition 10, those players who can propose are different from those players who can veto.
2 Most of the proofs only require single-peakedness (quasiconcavity).
(2) the legislature votes; and (3) if the bill passes, the veto player(s) then decide whether to veto the bill. If the bill is not vetoed, it becomes law. Thus we have a sequential game where subgame perfection is imposed.

The model follows along the path started by Romer and Rosenthal (1978, 1979). In their model, the proposer chooses a position that maximizes his utility subject to the constraint that a majority prefers the position to the status quo. Here the proposer is constrained by both the majority and the veto player. The model also builds on more recent work by Ferejohn and Shipan (1990), who treat the bureaucracy as the agenda setter constrained by the presidency and legislature; Krehbiel (1998), who focuses on supermajoritarian rules; Cameron (2000) and Tsebelis (2002), who focus on veto players; and Cox and McCubbins (2005), who consider a party cartel model. This paper differs from previous research in that it is more abstract and general. For example, we consider the following possibilities: (1) that there are N proposers; (2) that k out of N proposers must make the same proposal before the proposal reaches the legislature; and (3) that j of the M vetoers must agree to veto if the veto is to be effective.³

Because we are dealing with single-peaked preferences on a single dimension, the median is well defined.⁴ Consequently, we can deal with just one legislator, M, with most preferred position m̂. To avoid dealing with epsilons, we will assume that if M is indifferent between s* and the proposed bill, p, then M will vote for p. Similarly, we will assume that if V is indifferent between s* and p, then V will not veto p. The proposal that is implemented will be denoted by p*.

3 McCarty (2000) considers possibility 3 in the context of a divide-the-dollar game with random proposers.
4 Single-peaked preferences enable us to generate more positive results than if we just restricted our analysis to compound-simple games. For a recent paper dealing with some of the same issues in the context of compound-simple games, see O'Neill and Peleg (forthcoming). Because of the complexity of the game, most of their results are non-results.
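Before turning to the propositions, note that the acceptance rule just described – quadratic loss, with indifference resolved in favour of the bill – is easy to state in code. The fragment below is our illustrative sketch, not part of the original chapter.

```python
def utility(x, ideal):
    # Quadratic loss around a player's most preferred position
    return -(x - ideal) ** 2

def accepts(ideal, bill, status_quo=0.0):
    # A player votes for (or declines to veto) the bill even when indifferent
    return utility(bill, ideal) >= utility(status_quo, ideal)

assert accepts(ideal=1.0, bill=2.0)       # indifferent between 0 and 2, so accepts
assert not accepts(ideal=1.0, bill=2.5)   # 2.5 is farther from 1 than the status quo
```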
2. Propositions

To set the stage and to enable us to grasp the logic of the more complex cases, we first consider the simple and well-known case where there is only one agenda setter and one veto player. Let p̂ be the most preferred position of P and let v̂ be the most preferred position of V.

Proposition 1A  If p̂ and/or v̂ ≤ 0, then s* will not be overturned.
Proof. First suppose that p̂ ≤ s* = 0. Any proposal, p, such that p < s* will be rejected by M. Next suppose that v̂ ≤ s*. Then V will veto any p > s*. □

Proposition 1B  If p̂, v̂ ≥ 2m̂, then p* = 2m̂ will be implemented.
Fig. 1. If the proposer or vetoer is to the left of s*, then the outcome will remain at s*. [Three number-line diagrams over s* = 0, M, and 2M. Note that for visual clarity m̂ = M, v̂ = V̂, and p̂ = P̂; M̂_DEM is the most preferred position of the median Democratic legislator.]
Fig. 2. The proposer is constrained by 2M and 2V̂. [Four number-line diagrams showing p* for different orderings of P̂, V̂, M, 2V̂, and 2M relative to s*.]
Proof. M will reject any p such that p > 2m̂, because M prefers s* to any p > 2m̂. Both P and V prefer p* = 2m̂ over any other p < 2m̂, including s*, by assumption. Therefore p* = 2m̂ will be implemented. □
Proposition 1C  If s* < p̂ < 2m̂ and p̂ ≤ v̂, then p* = p̂ is implemented.
Proof. P cannot do better by proposing an alternative, and both M and V prefer p̂ to s*. □

Proposition 1D  If s* < v̂ < 2m̂ and v̂ < p̂, then (i) p* = p̂ will be implemented if p̂ ≤ 2v̂, 2m̂; (ii) p* = 2v̂ will be implemented if v̂ ≤ m̂ and p̂ > 2v̂; and (iii) p* = 2m̂ will be implemented if m̂ < v̂ and p̂ > 2m̂.
Proof. P will choose a position as close to p̂ as possible as long as both V and M prefer the position to s*. □

All of this is illustrated in Figs. 1 and 2, where for visual clarity we denote m̂ by M and use the uppercase letters V̂ and P̂ for v̂ and p̂. Fig. 1 illustrates Proposition 1A. The bottom diagram in Fig. 1 captures the party cartel model of Cox and McCubbins (2005). Suppose that the Democratic Party is the majority party in Congress and that the median Democratic representative prefers the status quo to the median position in the House of Representatives. Because the median of the Democratic Party prefers the status quo and the party has the majority on each committee, the party cartel model suggests that no committee will propose a policy preferred to the status quo. Furthermore, the party will not allow a proposal to go to the floor under an open rule (where amendments to the proposed bill can be made from the floor) when the median of the party prefers the status quo to the median of the legislature. In this latter case the majority-party median may prefer a position to the right of the status quo, but under an open rule the median of the legislature would be enacted; therefore, the party does not allow the proposal to reach the floor in the first place. Fig. 2 illustrates Propositions 1B–1D.

So Proposition 1 says that when there is one proposer and one veto player, the proposer has more power in the middling situations where both of their most preferred positions are to the right of the status quo but not to the right of 2m̂. We note that these results hold when the veto player acts before M does (but still after P). This might occur, for example, when a bill from a specialized committee must pass muster with a general committee (e.g., a budget committee) before it goes to the floor of the legislature.
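Propositions 1A–1D jointly determine p* as a simple function of p̂, v̂, and m̂. The following Python sketch – our own summary of the one-proposer, one-vetoer case, with hypothetical parameter values – reproduces the four cases.

```python
def outcome(p_hat, v_hat, m_hat, s=0.0):
    """Subgame-perfect outcome with one proposer P, one vetoer V, and median M.

    M accepts p iff s <= p <= 2*m_hat; V accepts iff s <= p <= 2*v_hat
    (quadratic loss, with indifference resolved in favour of the bill).
    """
    if p_hat <= s or v_hat <= s:
        return s                        # Proposition 1A: the status quo survives
    ceiling = min(2 * m_hat, 2 * v_hat)
    return min(p_hat, ceiling)          # P proposes the enactable point nearest p_hat

assert outcome(p_hat=-1.0, v_hat=0.8, m_hat=1.0) == 0.0  # 1A: status quo
assert outcome(p_hat=3.0, v_hat=2.5, m_hat=1.0) == 2.0   # 1B: p* = 2*m_hat
assert outcome(p_hat=0.5, v_hat=0.8, m_hat=1.0) == 0.5   # 1C: p* = p_hat
assert outcome(p_hat=3.0, v_hat=0.7, m_hat=1.0) == 1.4   # 1D(ii): p* = 2*v_hat
```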
Fig. 3. When there is one proposer and N vetoers, only the leftmost vetoer is relevant. [Two number-line diagrams with vetoers V̂_1, …, V̂_5, the median M, the proposer P̂, and the bounds 2V̂_1 and 2M. Note that M is also a vetoer; if M were to the left of the other vetoers, then M would be the leftmost vetoer instead of V̂_1.]
We next consider the situation where there is more than one veto player (V_i) and only one veto is needed to prevent a bill from passing. One could treat the arrangement for passing revenue bills in the US as a situation where the Budget Committee of the House of Representatives is the agenda setter, the House itself is the majority-rule legislature, and the Senate and the president are the two veto players. It is convenient to treat M as another veto player.

Proposition 2  If there are two or more veto players, each with the power to veto on its own, and only one proposer, then the leftmost veto player is the only relevant veto player for any p* > s*. (See Fig. 3.)
Proof. Any position p* > s* that is preferred by the leftmost veto player to s* will also be preferred by the veto players to his right. Therefore, if the leftmost veto player prefers p* to s*, so will the other veto players. Because the leftmost veto player can veto bills on his own, the actions of the veto players to his right are irrelevant when the leftmost veto player prefers s* to p. □

We next consider the situation where there is more than one proposer, where each proposer has one turn at proposing. We first consider the situation where there are no veto players besides M. We will denote the leftmost proposer's most preferred position as p̂_(L) and the rightmost proposer's most preferred position as p̂_R.⁵ The actual proposals by these two are p_(L) and p_R, respectively. When p_i = p̂_i, we will refer to the actual proposal as p̂_i, so that the notation p_i will be used only to connote those situations where p_i does not equal p̂_i.

Proposition 3  Suppose that there is no veto player besides M: (i) If p̂_R ≤ s*, then the status quo will remain in place. (ii) If s* < p̂_(L) ≤ p̂_R ≤ m̂, then p̂_R will be enacted. (iii) If m̂ < p̂_(L) ≤ p̂_R < 2m̂, then p̂_(L) will be enacted.

5 The term is in parentheses to connote when we are counting proposers from left to right instead of right to left.
(iv) If 2m̂ ≤ p̂_(L), then 2m̂ will be enacted. (v) If p̂_(L) ≤ m̂ ≤ p̂_R, then m̂ will be enacted. (See Fig. 4.)

Proof. (i) Follows from earlier arguments. (ii) s* < p̂_(L) ≤ p̂_R ≤ m̂: M will vote for the position that is closest to m̂ regardless of the order of voting. If p̂_(L) is offered first, then M will vote for p̂_(L) over s*. If p̂_R is offered second, then M will vote for p̂_R over p̂_(L). Of course, if the order were reversed and p̂_R had won over s*, then p̂_(L) would be voted down when put up against p̂_R. In practice, these might not be separate votes, but amendments and/or just one platform, p̂_R, being offered. (iii) The proof is symmetric to (ii). (iv) The proof follows from earlier arguments. (v) Suppose that the last proposer is to the right of m̂, and suppose that the winning position until then is p < m̂. Then the last proposer will propose 2m̂ − p > m̂, and M will vote for this position over the previous position, p. Any proposer to the left of m̂ would prefer m̂ to 2m̂ − p > m̂; therefore, proposers to the left would have proposed m̂ in the first place to forestall such a bad outcome. A similar logic holds when a player to the left of M is the last proposer. Note that this result relies on the fact that a position to the right (left) of m̂ has been passed before some proposer on the left (right) gets to propose. □

We next consider the situation where there are n proposers and n vetoers. We keep the numbers the same so that we can make useful comparisons; however, the analysis quickly carries over to the situation where the numbers differ. We will consider two possible orderings: (1) the order of proposal presentation is from right to left; (2) the order of proposal presentation is from left to right. When we considered one proposer and one veto player, we saw that the power to propose was greater than the power to oppose. As we will now see, this relationship is in general reversed when there are many proposers and many vetoers. Unfortunately, the metric of relative power is not straightforward when there are many proposers and vetoers, and relative power cannot be defined independent of the other players' preferences. We start with an easy case.

Proposition 4A  If all proposers are to the left of s* and/or at least one vetoer is to the left of s*, then s* will remain the status quo.
Fig. 4. N proposers who propose in some order and no vetoers besides the median. [Three number-line diagrams over s*, M, and 2M. (i) If the proposers are all to the left of the median, then the rightmost proposer's most preferred position will be implemented. (ii) If the proposers are all to the right of the median, then the leftmost proposer's most preferred position will be implemented. (iii) If the proposers are on both sides of the median, then the median's most preferred position will be implemented.]
Proof. Follows from our earlier arguments. □
Proposition 4B  If all proposers and vetoers (not including M) are to the right of 2m̂, then 2m̂ will be implemented.
Proof. Follows immediately from our earlier arguments. □
Proposition 4 considered easy cases where vetoers employ simple strategies. However, in other situations the veto players must consider the behaviour of other veto players, resulting in more complex strategizing. This is illustrated in the next proposition.

Proposition 5A  Suppose that the proposal order is from right to left. If p is the new status quo, then the optimal strategy for every vetoer with a most preferred position to the right of p is to veto any substitute bill p′ such that p′ is strictly to the left of p. In response, the optimal strategy of the leftmost vetoer is to veto all proposals that are strictly to the right of the vetoer's most preferred position from the set of positions that will be proposed.
Proof. Obvious. □
Proposition 5A allows us to concentrate our attention on the leftmost veto player (including M ). For the remainder of the paper we will not distinguish M from the other vetoers.
Fig. 5. N proposers (who propose in order from right to left) and N vetoers. [Two number-line diagrams with P̂_5, …, P̂_1, the leftmost vetoer V̂_L, and 2V̂_L. (i) The leftmost vetoer will veto all proposals until his most preferred proposal is proposed. (ii) Thereafter, all vetoers to the right of the leftmost vetoer will veto any proposal to the left. (iii) In the top diagram, p* will be at the most preferred position of the third proposer. (iv) In the bottom diagram, proposer 3 will offer a position epsilon closer to V̂_L than P̂_4 is to V̂_L. (v) The median is treated as just one of the vetoers and is therefore not specifically identified.]
Fig. 6. Legislation passes unless there are K vetoes (here K = 2). [Two number-line diagrams with P̂_5, …, P̂_1, the vetoers V̂_L and V̂_{L+1}, and 2V̂_L. (i) In the top diagram, p* will be at P̂_3; in the bottom diagram, p* will be epsilon closer to V̂_{L+1} than P̂_4 is to V̂_{L+1}. (ii) Note again that the median is treated as just one of the vetoers and that the order of proposals is from right to left.]
Define P⁻_{L.V} as the proposer whose most preferred position is closest on the left to the leftmost vetoer, and P⁺_{L.V} as the proposer whose most preferred position is closest on the right to the leftmost vetoer. More generally, define P⁻_{L+k−1.V} as the proposer whose most preferred position is closest on the left to the k-th leftmost vetoer, and P⁺_{L+k−1.V} as the proposer whose most preferred position is closest on the right to the k-th leftmost vetoer.

Proposition 5B  Suppose that positions are offered from right to left: (i) If s* ≤ p̂_R ≤ v̂_L, then p* = p̂_R will be enacted. (ii) If s* ≤ v̂_R ≤ p̂_(L) ≤ 2v̂_L, then p* = p̂_(L) will be enacted. (iii) If s* ≤ p̂_(L) ≤ v̂_L ≤ p̂_R and p̂⁺_{L.V} is closer to v̂_L than p̂⁻_{L.V} is to v̂_L, then p* = p̂⁺_{L.V}. (iv) If s* ≤ p̂_(L) ≤ v̂_L ≤ p̂_R and p̂⁻_{L.V} is closer to v̂_L than p̂⁺_{L.V} is to v̂_L, then the winning position will be 2v̂_L − p̂⁻_{L.V} − ε.
Proof. (i) P_R will offer p̂_R. No proposer will offer more than p̂_R, and no veto player will reject this offer, as all other proposals (as well as s*) would make the vetoers worse off. (ii) Symmetric argument to (i). Note that the proofs for (i) and (ii) do not require any particular ordering of the proposals. (iii) By Proposition 5A, the leftmost veto player will reject all proposals strictly greater than p̂⁺_{L.V}. No veto player would want to reject p̂⁺_{L.V}, because they would then be faced with a worse proposal, p̂⁻_{L.V}. P⁻_{L.V} cannot credibly commit to offering more than his preferred position. (iv) By Proposition 5A, the leftmost veto player will reject all proposals strictly greater than p̂⁺_{L.V}. Proposer P⁺_{L.V} wants to forestall p̂⁻_{L.V} from being implemented. P⁺_{L.V} can do this by offering p⁺_{L.V} = 2v̂_L − p̂⁻_{L.V} − ε. Hence, V_L strictly prefers p⁺_{L.V} to the only credible offer of p̂⁻_{L.V} by P⁻_{L.V}. None of the other vetoers will want to reject this proposal. Note that so far, only the proofs for 5(iii) and 5(iv) require a quadratic loss function. □

Fig. 5 illustrates points (iii) and (iv). If the order of proposing is reversed, then the outcome will shift toward the rightmost vetoer, but the shift may not be complete and the analysis is not completely symmetric. This is because the leftmost vetoer will veto any proposal that is greater than 2v̂_L. More generally, if the order of proposals is unknown, only the boundary can be specified, which is again 2v̂_L.

Suppose next that a bill will pass if there are not more than K vetoes.

Proposition 6  No proposal beyond 2v̂_{L+K−1} will pass. (See Fig. 6.)
Proof. K vetoers prefer the status quo over any proposal beyond 2v̂_{L+K−1}. □

This is the essence of Krehbiel's filibuster model: a bill is not passed if there are not enough votes to override the filibuster.

Proposition 7  Suppose that positions are offered from right to left and that K vetoes are required for a veto to be successful: (i) If s* ≤ p̂_R ≤ 2v̂_{L+K−1}, then p* = p̂_R will be enacted. (ii) If s* ≤ v̂_R ≤ p̂_(L) ≤ 2v̂_{L+K−1}, then p* = p̂_(L) will be enacted. (iii) If s* ≤ p̂_(L) ≤ v̂_{L+K−1} ≤ p̂_R and p̂⁺_{L+K−1.V} is closer to v̂_{L+K−1} than p̂⁻_{L+K−1.V} is to v̂_{L+K−1}, then the winning position will be p̂⁺_{L+K−1.V}. (iv) If s* ≤ p̂_(L) ≤ v̂_{L+K−1} ≤ p̂_R and p̂⁻_{L+K−1.V} is closer to v̂_{L+K−1} than p̂⁺_{L+K−1.V} is to v̂_{L+K−1}, then the winning position will be 2v̂_{L+K−1} − p̂⁻_{L+K−1.V} − ε.
Proof. A straightforward generalization of Proposition 5B. Note that the winning proposal moves weakly to the right as the number of vetoes required for a defeat increases. □
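The role of the pivotal vetoer in Propositions 5B and 7 can be illustrated with a small helper that computes the predicted outcome for the case s* ≤ p̂_(L) ≤ v̂_L ≤ p̂_R. The code below is our sketch; the function name, the eps parameter (standing in for ε), and the numerical ideal points are ours.

```python
def pivotal_outcome(proposer_ideals, v_pivot, eps=1e-3):
    """Predicted p* when proposals arrive right to left, per Proposition 5B(iii)-(iv).

    v_pivot is the leftmost vetoer's ideal point (or, for Proposition 7,
    that of the K-th leftmost vetoer).
    """
    left = max((x for x in proposer_ideals if x <= v_pivot), default=None)
    right = min((x for x in proposer_ideals if x > v_pivot), default=None)
    if left is None:
        return right
    if right is None:
        return left
    if right - v_pivot < v_pivot - left:
        return right                      # part (iii): closest right-hand proposer wins
    return 2 * v_pivot - left - eps       # part (iv): epsilon closer than the left ideal

# Bottom diagram of Fig. 5: proposer 3 offers a point epsilon closer to V_L than P_4
print(pivotal_outcome([0.2, 0.35, 0.6, 0.8, 0.9], v_pivot=0.4))  # 0.449
```

For Proposition 7, the same computation applies with the leftmost vetoer replaced by v̂_{L+K−1}.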
Propositions 7(iii) and 7(iv) are illustrated in Fig. 6. We next consider the situation where K of N proposers must agree on the same bill before it can become a proposal. Because the logic is more opaque in this case, we will demonstrate this proposition more formally and in steps. Let P_N be the leftmost proposer, P_(L); let P_{N−1} = P_(L+1) be the next leftmost proposer, etc. In this way, the rightmost proposer is P_R = P_1 = P_(L+N−1). Note that we use the parentheses when we are counting proposers from left to right, and we do not use the parentheses when we are counting proposers from right to left.

Proposition 8  Suppose that positions are offered from right to left, K proposers must propose the same bill for the bill to pass, and only one veto or a majority vote against is needed for the proposal not to pass. If s* < v̂_L, m̂ < p̂_(L) ≤ p̂_(L+K−1) ≤ 2v̂_L, 2m̂, then p* = p̂_(L+K−1) = p̂_{N−K+1}.
Proof. All proposals greater than p̂_(L+K−1) will be vetoed (or beaten by p̂_(L+K−1)). P_(L+K−1) will propose p̂_(L+K−1) = p̂_{N−K+1}, and the other K − 1 proposers to the left will propose the same, as they prefer p̂_(L+K−1) to both s* and any p > p̂_(L+K−1). All the vetoers prefer this to s*. Given this outcome, it is possible that earlier proposers would propose the same p̂_(L+K−1); in any event the outcome would be p̂_(L+K−1). □

Note how close the result of Proposition 8 is to Proposition 7(iii). Fig. 7 illustrates Proposition 8.

For our next case, we will assume that |p̂_i − p̂_{i+1}| = |p̂_{i+1} − p̂_{i+2}|, and that, if indifferent, the proposer will go with the proposal on the floor rather than offer a different proposal that yields the same utility. We make this assumption so that proposals are confined to the set of most preferred positions, thereby greatly simplifying the logic.

Proposition 9  Suppose that positions are offered from right to left. Assume that K = 2. If s* < p̂_(L) ≤ p̂_R ≡ p̂_1 ≤ v̂_L and p̂_(L) − s* > p̂_(L+1) − p̂_(L), then p* = p̂_R ≡ p̂_1 when N is even and p* = p̂_{R−1} ≡ p̂_2 when N is odd. (See Fig. 8.)
Proof. We work by backwards induction. If there are only two proposers, then P_1 = P_R = P_(L+1) will offer p̂_1 ≡ p̂_(L+1), and P_2 = P_(L) will do likewise, since P_(L) prefers p̂_(L+1) to s* by assumption. Also, all the veto players prefer p̂_(L+1) to s*, so there will be no veto. Therefore p* = p̂_1 ≡ p̂_R ≡ p̂_(L+1).
Fig. 7. Legislation needs K out of N proposers to propose the same proposal. [Number-line diagram with V̂_L, M, P̂_5, …, P̂_1, 2V̂_L, and 2M. (i) Let p_i* be the outcome when i identical proposals are required. (ii) Assumptions: proposers propose from right to left; all proposers are to the right of the median voter and the leftmost vetoer. (iii) Conclusions: when three (four) identical proposals are needed for a proposal to pass, the outcome is p_3* = P̂_3 (p_4* = P̂_2).]
Fig. 8. Two identical proposals needed for legislation to pass. [Three number-line diagrams, for 3, 4, and 5 proposers, over s*, V̂_L, M, and 2M. Assumptions: 2 of N proposers must propose the same policy position for the policy to pass; only one vetoer is needed to veto legislation; s* < P̂_1, …, P̂_N < M, V̂_L; the most preferred positions of the proposers are equally spaced; if indifferent, proposers will propose the same proposal as already proposed over proposing a different proposal with equal utility. In the second diagram from the top, if P̂_2 does not accept P̂_1, then P̂_3 will be implemented.]
If there are three proposers, then again p* = p̂_(L+1).⁶ P_2 = P_{R−1} = P_(L+1) can guarantee p* = p̂_2 ≡ p̂_{R−1} ≡ p̂_(L+1) by offering p̂_(L+1), thereby rejecting any proposal by P_1 = P_R = P_(L+2) such that p_1 = p_R = p_(L+2) > p̂_2 ≡ p̂_{R−1} ≡ p̂_(L+1). If there are four proposers, then P_R = P_1 = P_(L+3) will offer p̂_1 ≡ p̂_(L+3). P_2 = P_(L+2) will also offer p̂_(L+3), as P_2 = P_(L+2) is indifferent between p̂_(L+3)

6 Of course, L + 1 stands for the proposer farthest to the right when there are two proposers and the proposer second to the right when there are three proposers.
and p̂_(L+1) (which would be the outcome if P_2 had offered a different proposal) and therefore, by assumption, P_2 will choose to go with p̂_(L+3) = p̂_R = p̂_1. And again none of the vetoers will veto the proposal. □

More generally, if P_(L+K) knows that P_(L+K−1) will not propose p̂_(L+K), then P_(L+K) will offer p̂_(L+K+1) (if P_(L+K+1) has offered p̂_(L+K+1)). And if P_(L+K) knows that p̂_(L+K) will be offered by P_(L+K−1), then P_(L+K) will offer p̂_(L+K). But this means that if there are N proposers and N is even, then p* = p̂_R = p̂_1, and if N is odd, then p* = p̂_{R−1} = p̂_2.

The basic concept easily generalizes to K > 2. Assuming that the remaining conditions in Proposition 9 hold, then P_(L+K−1) will choose p̂_(L+K−1) and all of the proposers to the left will propose p̂_(L+K−1) as well. If N < 2K, then p* = p̂_(L+K−1) will be implemented. If the number of proposers is at least 2K but strictly less than 3K, then p* = p̂_(L+2K−1) will be implemented. More generally, if the number of proposers is at least TK but strictly less than (T + 1)K, then p* = p̂_(L+TK−1) will be implemented.

Note the difference between Propositions 8 and 9. In Proposition 8, the leftmost vetoer is to the left of the proposers and vetoes all proposals that are to the right of the leftmost viable proposal. In Proposition 9, the vetoers are out of the picture, so the outcome depends on whether the second proposer can get the third proposer to accept his proposal.

We now turn our attention to plenary sessions, where all players are both proposers and vetoers, and there are no earlier or later stages. We assume that there is some finite order of proposals by each of the proposers.

Proposition 10  Suppose that the proposers are also the voters and vetoers, as would be the case in a legislative body in plenary session without special agenda-setting committees. Then under majority rule, the outcome will be at the median voter's most preferred outcome.⁷
Proof. Under majority rule, (N + 1)/2 vetoers are needed to defeat a proposal. Suppose instead that some p ≠ m̂ is implemented. But a majority of voters prefer m̂, a position that would be proposed by M; therefore a majority of voters would have defeated p. Suppose in the absence of p, some p′ ≠ m̂ is implemented. Again a majority of voters prefer m̂; therefore a majority of voters would have defeated p′. This process continues until m̂ is guaranteed to win. □

As Cox (2006) has noted, legislatures rarely have plenary sessions; instead, the power to propose is almost always given to a subset of the members.⁸
7 This can be seen as a special case of Banks and Duggan (2000).
8 In the United Kingdom, 'no member of the House of Commons can introduce a bill the main purpose of which is to increase expenditure or taxation.' Many countries have committees that play a prominent role in setting the legislative agenda. See Laver and Shepsle (1996), who show that particular ministers have proposal power in their particular areas.
3. Concluding Remarks

Although there may be many proposers and vetoers, we need only concentrate on a few of the players to determine the outcome of the legislative game. For example, when proposers propose from right to left and there are many vetoers, the only critical vetoer is the leftmost vetoer. By acting strategically, the leftmost vetoer can render the vetoers to its right powerless. Proposers, too, must act strategically with respect to other proposers.

We concentrated on the case where proposals go from right to left and, as a consequence, the leftmost vetoer was the most powerful of the vetoers. One might ask whether we can deal with the proposal sequence in another way. One possibility is that proposers are drawn at random. This is the approach undertaken by Baron and Ferejohn (1989) and McCarty (2000) for divide-the-dollar games. In this situation a proposer offers an allocation of an income pie, knowing that if he is too stingy, his proposal will be rejected and another proposer may be drawn the next time. The only advantage to the present proposer is that the amount of money shrinks each time. To illustrate, suppose that there are only two proposers in plenary session with no other players, and both must agree on the allocation for the allocation to take place. Assume further that the amount of money decreases by half if the allocation is not agreed on. Then the first person to get the proposal will offer the other person ¼ of the pie. The other person will take it because, if he refuses, he has a half chance of getting ¼ of the half pie (if he is not chosen as a proposer on the next round) and a half chance of getting ¾ of the half pie (if he is), which adds up to ¼ of the pie.

It is much harder to apply the random-proposer methodology to the one-dimensional game that we consider here, because the cost of delay to each player depends on where the player's most preferred position is, while in the divide-the-dollar game everyone is identical (each wants more pie, and utility is linear in dollars). Nevertheless, we can gain some insight even though the exact calculations are difficult to determine. In this game, delay means that the status quo remains in place. This is most costly for those players farthest away from the status quo, which in our diagrams means those who are furthest to the right. This creates an advantage for those players to the left, as they are more patient. So while the results are not exactly the same in a random-proposer model, the basic thrust of the argument still holds.
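The ¼-of-the-pie claim in the divide-the-dollar illustration can be verified by iterating the responder's stationary value. The snippet below is our own check, not part of the original chapter.

```python
def responder_share(shrink=0.5, iters=50):
    # x: share of the current pie that makes the responder willing to accept
    x = 0.0
    for _ in range(iters):
        # Rejecting shrinks the pie and gives an equal chance of either role next round
        x = shrink * (0.5 * (1 - x) + 0.5 * x)
    return x

print(responder_share())  # 0.25: the first proposer offers 1/4 of the pie
```

The coin flip over roles averages the two continuation payoffs to half the shrunken pie, so the fixed point is shrink/2 = ¼, exactly as in the text.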
Acknowledgements I would like to thank the participants in the conference on 'Power: Conceptual, Formal, and Applied Dimensions', 17–20 August 2006, Hamburg, and two anonymous referees for helpful suggestions.
References

Banks, J. and J. Duggan (2000) Stationary Equilibria in a Bargaining Model of Social Choice, American Political Science Review 94: 73–88.
Baron, D. and J. Ferejohn (1989) Bargaining in Legislatures, American Political Science Review 83: 1181–1206.
Cameron, C. (2000) Veto Bargaining, Cambridge University Press.
Cox, G. (2006) The Organization of Democratic Legislatures, in B. Weingast and D. Wittman (eds) Oxford Handbook of Political Economy, Oxford University Press, 141–161.
Cox, G. and M. McCubbins (2005) Setting the Agenda: Responsible Party Government in the U.S. House of Representatives, Cambridge University Press.
Ferejohn, J. and C. Shipan (1990) Congressional Influence on the Bureaucracy, Journal of Law, Economics, and Organization 6: 1–20.
Krehbiel, K. (1998) Pivotal Politics: A Theory of US Lawmaking, University of Chicago Press.
Laver, M. and K. Shepsle (1996) Making and Breaking Governments: Cabinets and Legislatures in Parliamentary Democracies, Cambridge University Press.
McCarty, N. (2000) Proposal Rights, Veto Rights, and Political Bargaining, American Journal of Political Science 44: 506–522.
O'Neill, B. and B. Peleg (forthcoming) Lexicographic Composition of Simple Games, Games and Economic Behavior.
Romer, T. and H. Rosenthal (1978) Political Resource Allocation, Controlled Agendas, and the Status Quo, Public Choice 33: 27–43.
Romer, T. and H. Rosenthal (1979) Bureaucrats versus Voters: On the Political Economy of Resource Allocation by Direct Democracy, Quarterly Journal of Economics 93: 563–587.
Tsebelis, G. (2002) Veto Players, Princeton University Press.
14. Divergence in the Spatial Stochastic Model of Voting

Norman Schofield
Center in Political Economy, Washington University in St. Louis, USA
[I]t may be concluded that a pure democracy, by which I mean a society, consisting of a small number of citizens, who assemble and administer the government in person, can admit of no cure for the mischiefs of faction … Hence it is that democracies have been spectacles of turbulence and contention; have ever been found incompatible with personal security … and have in general been as short in their lives as they have been violent in their deaths. A republic, by which I mean a government in which the scheme of representation takes place, opens a different prospect... [I]f the proportion of fit characters be not less in the large than in the small republic, the former will present a greater option, and consequently a greater probability of a fit choice. — James Madison, 1787 (quoted in Rakove 1999).
1. Introduction

Much research over the last few decades has been devoted to constructing formal models of political choice in electoral systems based on proportional representation (PR). To a large degree these models have been most successful in studying the post-election phase of coalition bargaining (Laver and Schofield 1990; Laver and Shepsle 1996; Banks and Duggan 2000). Such models can take the locations and strengths of the parties as given. Attempts at modelling the electoral phase have met with limited success, since they have usually assumed that the policy space is restricted to one dimension, or that there are at most two parties. The extensive formal literature on two party electoral competition was typically based on the assumption that parties or candidates adopted positions in order to win (Calvert 1985). In PR systems it is unlikely that a single party can gain enough votes to win outright (Cox 1990, 1997; Strom 1990). However, there is a well developed empirical modelling technique which studies the relationship between party positions and electoral response.
Dating back to empirical models of U.S. elections (Poole and Rosenthal 1984), and continuing more recently with Quinn, Martin and Whitford (1999), Alvarez, Nagler and Bowler (2000), and Alvarez, Nagler and Willette (2000), such models elucidate the relationship between voter preferred positions, party positions, sociodemographic variables and voter choice. Given such an empirical model, it is possible to perform a 'counter-factual experiment' to determine the voter response if economic conditions or party positions had been different (Glasgow and Alvarez 2003). It is then natural to seek the existence of 'Nash equilibria' in the empirical model – a set of party positions from which no party may deviate to gain advantage. Since the 'utility functions' of parties are, in fact, unknown, I can use the 'counter-factual experiment' to make inferences about the political game. That is, knowing the relationship between positions and electoral outcome, I make the hypothesis that the actual positions are indeed Nash equilibria in some formal political game. In principle, this could give information about the unknown utility functions of political leaders.

The most obvious assumption to make about party utility functions is that they can be identified with the vote shares of the parties. In PR systems, in particular, vote shares and seat shares are approximately identical. At least for some parties, increasing seat shares will increase the probability of membership of a governing coalition, thus giving access to government perquisites and the opportunity to affect policy. Since empirical models necessarily have a stochastic component, the natural formal tool to use in seeking Nash equilibria is the so-called probabilistic spatial model of voting. Such models have an inherent stochastic component associated with voter choice. Developing the earlier argument of Hinich (1977), Lin, Enelow and Dorussen (1999) have asserted the 'mean voter theorem' that all vote maximizing parties should converge to the electoral mean (Hinich 1982, 1984a,b, 1989a,b; Coughlin 1994; Adams 1999a,b, 2001; Adams and Merrill 2000, 2001; Banks and Duggan 2005; McKelvey and Patty 2006). Since the early exposition of Duverger (1954), ample evidence has been collected indicating that parties do not converge in this way (Daalder 1984; Budge, Robertson, and Hearl 1987; Rabinowitz, MacDonald and Listhaug 1991). The contradiction between the conclusions of the formal model and observation has led to efforts to modify the voter model by adding 'party identification' or 'directional voting' (Merrill, Grofman, and Feld 1999). However, such attempts have not made it obvious why small parties apparently adopt 'radical' positions (Adams and Merrill 1999, 2000).

An alternative way to modify the basic formal model is to add the effect of 'valence'. Stokes (1963, 1992) first introduced this concept many years ago. 'Valence' relates to voters' judgements about positively or negatively evaluated conditions which they associate with particular parties or candidates. These judgements could refer to party leaders' competence, integrity, moral stance or 'charisma' over issues such as the ability to deal with the
economy, foreign threat, etc. The important point to note is that these individual judgements are independent of the positions of the voter and the party. Estimates of these judgements can be obtained from survey data – see, for example, the work on Britain by Clarke, Stewart and Whiteley (1997, 1998) and Clarke et al. (2004). However, from such surveys it is difficult to determine the 'weight' that an individual voter attaches to the judgement in comparison to the policy difference. As a consequence, the empirical models usually estimate valence for a party or party leader as a constant or intercept term in the voter utility function. The party valence variable can then be assumed to be distributed throughout the electorate in some appropriate fashion. This stochastic variation is expressed in terms of 'disturbances', which, in the most general models, can be assumed to be distributed multivariate normal with a non-diagonal covariance matrix. This formal assumption parallels that of multinomial probit estimation (MNP) in empirical models (Quinn, Martin, and Whitford 1999). A more restrictive assumption is that the errors are independently and identically distributed by the Type I extreme value, or log-Weibull, distribution (Quinn and Martin 2002; Dow and Endersby 2004). This parallels multinomial logit (MNL) estimation in empirical work.

This conception of voting as judgement, as well as the expression of voter interest or preference, is clearly somewhat similar to Madison's understanding of the nature of the choice of Chief Magistrate in the Republic. Madison's argument may well have been influenced by Condorcet's work on the so-called 'Jury Theorem' (Condorcet 1785; Schofield 2006). However, Madison's conclusion about the 'probability of a fit choice' depended on assumptions about the independence of judgements from interests, and these assumptions are unlikely to be valid. Condorcet's work has recently received renewed attention (McLennan 1998), and formal models have been presented based on the notion of valence (Ansolabehere and Snyder 2000; Groseclose 2001; Aragones and Palfrey 2002, 2005). However, little work has been done on developing the stochastic voter model when judgements, or valences, and preferences are both involved. (An important recent exception is Adams and Merrill (2005).) In a sense, the above work and the results presented here can be seen as a contribution to the development of Madison's conception of elections in representative democracies as methods of aggregation of both preferences and judgements. Unlike Madison, however, these models also concern themselves with the response of elected representatives to electoral perceptions. Indeed, the most important contribution of such models is the determination of conditions under which heterogeneity can be maintained. Although the formal model is applied to electoral systems based on proportional representation, it could in principle be applied to electoral systems based on plurality, such as in Britain or the U.S.
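The practical content of the extreme value assumption can be seen in a short simulation: adding independent Type I extreme value disturbances to fixed observable utilities and letting each voter pick the best alternative reproduces the closed-form logit probabilities. A minimal sketch (all utility values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.array([0.5, 0.0, -1.0])        # observable utilities for three parties

# Draw Type I extreme value ('Gumbel') errors and choose the best party.
draws = u + rng.gumbel(size=(200_000, 3))
freq = np.bincount(draws.argmax(axis=1), minlength=3) / len(draws)

logit = np.exp(u) / np.exp(u).sum()   # closed-form MNL probabilities
print(freq.round(3), logit.round(3))  # both approximately [0.55, 0.33, 0.12]
```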
2. Modelling Elections

Formal models of voting usually make the assumption that political agents, whether parties or candidates, attempt to maximize expected vote shares (or, in two party contests, the plurality). The 'stochastic' version of the model typically derives the 'mean voter theorem' that each agent will adopt a 'convergent' policy strategy at the mean of the electoral distribution. This conclusion is subject to the constraint that the stochastic component is 'sufficiently' important (Lin, Enelow, and Dorussen 1999). Because of the apparent inconsistency between the theory and empirical evidence, I shall re-examine the implications of the formal stochastic model when there are significant valence differences between the candidates or party leaders. The 'valence' of each party derives from the average weight given, by members of the electorate, to the overall competence of the particular party leader. In empirical models, a party's valence is independent of its current policy declaration, and can be shown to be statistically significant in the estimation. Because valence may comprise all those non-policy aspects of the competition, I attempt to account for as many such factors as possible by including sociodemographic features in the model. As the analysis shows, when valence terms are incorporated in the model, the convergent vote maximizing equilibrium may fail to exist. I contend that the empirical evidence is consistent with a formal stochastic model of voting in which valence terms are included. Low valence parties, in equilibrium, will tend to adopt positions at the electoral periphery. High valence parties will contest the electoral center, but will not, in fact, occupy the electoral mean. I use evidence from elections in Israel (Schofield et al. 1998; Schofield and Sened 2005a) to support and illustrate this argument.

For the discussion and analysis of the case of Israel I combine available and original survey data for Israel for 1988 to 1996, which allows me to construct an empirical model of voter choice in Knesset elections. I use expert evaluations to estimate party positions and then construct an empirical vote model that I show is statistically significant. Using the parameter estimates of this model, I develop a 'hill climbing' algorithm to determine the empirical equilibria of the vote-maximizing political game. Contrary to the conclusions of the formal stochastic vote model, the 'mean voter' equilibrium, where all parties adopt the same position at the electoral mean, did not appear as one of the simulated equilibria. Since the voter model that I develop predicts voter choice in a statistically significant fashion, I infer that the assumptions of the formal stochastic vote model are compatible with actual voter choice. Moreover, the equilibria determined by the simulation were 'close' to the estimated configuration of party positions for the three elections of 1988, 1992 and 1996. I infer from this that vote share maximization is a realistic assumption to make about party motivation.
To evaluate the validity of the 'mean voter theorem' I consider a formal vote model. The usual way to ensure existence of a 'Nash equilibrium' at the mean voter position depends on showing that all party vote share functions are 'concave' in some domain of the party strategy spaces (Banks and Duggan 2005). Concavity of these functions depends on the parameters of the model. Because the appropriate empirical model for Israel incorporated valence parameters, these were part of the concavity condition for the baseline formal model. Concavity is a global property of the vote share functions, and is generally difficult to test empirically. I focus on a weaker equilibrium property, that of 'local Nash equilibrium', or LNE. Necessary and sufficient conditions for existence of an LNE at the electoral mean can be obtained from 'local concavity' conditions on the second derivative (the Hessian) of the vote share functions. If local concavity fails, then so must concavity. The necessary condition required for existence of an LNE at the origin in the formal vote model is violated by the estimated values of the parameters in the empirical model in these elections in Israel. Consequently, the empirical model of vote maximizing parties could not lead us to expect convergent strategies at the mean electoral position.

The electoral theorem presented below is valid in a policy space of unrestricted dimension, but has a particularly simple expression in the two-dimensional case. The theorem allows me to determine whether the mean voter position is a best response for a low valence party when all other parties are at the mean. In the empirical model I estimate that low valence parties would, in fact, minimize their vote share if they chose the mean electoral position. This inference leads me to the following conclusions: (i) some of the low valence parties, in maximizing vote shares, should adopt positions at the periphery of the electoral distribution; (ii) if this does occur, then the first order conditions for equilibrium, associated with high valence parties at the electoral mean, will be violated. Consequently, for the sequence of elections in Israel, we should expect that it is a non-generic property for any party to occupy the electoral mean in any vote maximizing equilibrium.

Clearly, optimal party location depends on the valence by which the electorate, on average, judges party competence. The simulations suggest that if a single party has a significantly high valence, for whatever reason, then it has the opportunity to locate itself near the electoral center. On the other hand, if two parties have high, but comparable, valence, then the simulation suggests that neither will closely contest the center. We can observe that the estimated positions of the two high valence parties, Labor and Likud, are almost precisely identical to the simulated positions under expected vote maximization. The positions of the low valence parties are, as predicted, close to the periphery of the electoral distribution. However, they are not identical to the simulated vote maximizing positions. This suggests that the perturbation away from vote maximizing equilibria is due either to policy
preferences on the part of party principals or to the effect of party activists (Aldrich 1983a,b; Miller and Schofield 2003).

The formal and empirical analyses presented here are applicable to any polity using an electoral system based on proportional representation. The underlying formal model is compatible with a wide variety of different theoretical political equilibria. The theory is also compatible with the considerable variation of party political configurations in multiparty systems (Laver and Schofield 1998). The analysis of the formal vote model emphasizes the notion of 'local' Nash equilibrium in contrast to the notion of a 'global' Nash equilibrium usually employed in the technical literature. One reason for this emphasis is that I deploy the tools of calculus and simulation via hill climbing algorithms to locate equilibria. As in calculus, the set of local equilibria must include the set of global Nash equilibria. Sufficient conditions for existence of a global Nash equilibrium are therefore more stringent than for a local equilibrium. In fact, the necessary and sufficient condition for a local equilibrium at the electoral center, in the vote maximizing game with valence, is so stringent that I regard it as unlikely to obtain in polities with numerous parties and varied valences. I therefore infer that existence of a global Nash equilibrium at the electoral center is very unlikely in such polities. In contrast, the sufficient condition for a local, non-centrist equilibrium is much less stringent. Indeed, in each polity there may well be multiple local equilibria. This suggests that the particular configuration of party positions in any polity can be a matter of historical contingency.
3. Empirical Estimation of Elections in Israel 1988–1996

I assume that the political preferences (or beliefs) of voter $i$ can be described by a 'latent' utility vector of the form

$$u_i(x_i, z) = (u_{i1}(x_i, z_1), \ldots, u_{ip}(x_i, z_p)) \in \mathbb{R}^p. \qquad (1)$$

Here $z = (z_1, \ldots, z_p)$ is the vector of strategies of the set, $P$, of political agents (candidates, parties, etc.). The point $z_j$ is a vector in a policy space $X$ that I use to characterize party $j$. (For the formal theory it is convenient to assume $X$ is a compact convex subset of Euclidean space of dimension $w$, but this is not an absolutely necessary assumption. I make no prior assumption that $w = 1$.) Each voter, $i$, is also described by a vector $x_i$ in the same space $X$, where $x_i$ is used to denote the beliefs or 'ideal point' of the voter. I assume

$$u_{ij}(x_i, z_j) = M_j - A_{ij}(x_i, z_j) + R_j^T I_i + F_j. \qquad (2)$$

I use $A_{ij}(x_i, z_j)$ to denote some measure of the distance between the vectors $x_i$ and $z_j$. I follow the assumption of the usual 'Euclidean' model and
assume that $A_{ij}(x_i, z_j) = C\|x_i - z_j\|^2$, where $\|\cdot\|$ is the Euclidean norm on $X$ and $C$ is a positive constant. It is also possible to use an ellipsoidal distance function for $A_{ij}$, although in this case the necessary and sufficient condition for equilibrium has a more complex expression. The term $M_j$ is called valence and has been discussed above. The $k$-vector $R_j$ represents the effect of the $k$ different sociodemographic parameters (class, domicile, education, income, etc.) on voting for party $j$, while $I_i$ is a $k$-vector denoting the $i$th individual's relevant 'sociodemographic' characteristics. I use $R_j^T$ to denote the transpose of $R_j$, so $R_j^T I_i$ is a scalar. The abbreviation SD is used throughout to refer to models involving sociodemographic characteristics. The vector $F = (F_1, \ldots, F_j, \ldots, F_p)$ is the 'stochastic' error, where $F_j$ is associated with the $j$th party. Some recent analyses have assumed that $F$ is multivariate normal, and used Markov Chain Monte Carlo (MCMC) methods, allowing for multinomial probit (MNP) estimation when the errors are covariant (Chib and Greenberg 1996; Schofield et al. 1998; Quinn, Martin, and Whitford 1999). Assuming that the errors are independently and identically distributed via the Type I extreme value (or log-Weibull) distribution gives a multinomial logit (MNL) model. The details of this modelling assumption are given in the next section, where I present formal results based on this assumption. MNP models are generally preferable because they do not require the restrictive assumption of 'independence of irrelevant alternatives' (Alvarez and Nagler 1998). I use an MNL model in this paper because comparison of MNL and MNP models suggests that the simpler MNL model gives an adequate account of voter choice (Dow and Endersby 2004: 111). It is also much easier to use the MNL empirical model to simulate vote-maximizing strategies by parties (Quinn and Martin 2002). I shall now show that divergence results simply from the hypothesis of vote maximizing in the context of the empirical stochastic model, based on the Type I extreme value distribution, when valence is important.

Table 1 gives the results for the five elections in Israel from 1988 to 2003, while Table 2 gives details of the factor model for 1996. Table 3 gives the estimation details for the election of 1996, showing the spatial, valence and sociodemographic coefficients. Similar estimations were obtained for the elections of 1992 and 1988. These estimations are based on factor analysis of the surveys conducted by Arian and Shamir (1990, 1995, 1999) for the three elections. Party positions for the three election years 1988, 1992 and 1996 were estimated by expert analysis of party manifestos, using the same survey questionnaires, and these are shown in Figures 1, 2 and 3. These figures also show the 'smoothed' distributions of voter ideal points for 1996, 1992 and 1988. (The outer contour line in each figure contains 95% of the voter ideal points.)
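Before turning to the estimates, the voter model in (2) under the MNL assumption can be sketched in a few lines. All numerical values below are illustrative placeholders, not the Table 3 estimates:

```python
import numpy as np

def choice_probabilities(x_i, I_i, z, M, R, C):
    """Logit choice probabilities for one voter under equation (2).

    x_i : voter ideal point in the policy space X
    I_i : the voter's sociodemographic characteristics (k-vector)
    z   : p x w array of party positions
    M   : p-vector of party valences
    R   : p x k array of sociodemographic coefficients
    C   : positive spatial coefficient
    """
    # Observable utility: valence - spatial loss + sociodemographic term.
    u_star = M - C * np.sum((x_i - z) ** 2, axis=1) + R @ I_i
    e = np.exp(u_star - u_star.max())       # stabilized exponentials
    return e / e.sum()                      # softmax; implies the IIA property

probs = choice_probabilities(
    x_i=np.array([0.2, -0.1]), I_i=np.array([1.0, 0.0]),
    z=np.array([[0.0, 0.0], [1.0, 0.5], [-0.8, 0.3]]),
    M=np.array([1.0, 0.4, -0.5]),
    R=np.array([[0.2, 0.0], [0.0, 0.1], [-0.3, 0.2]]), C=1.12)
print(probs, probs.sum())                   # probabilities sum to one
```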
Table 1. Israeli elections 1988–2003 (Knesset seats)

                          1988   1992   1996   1999   2003
Left
  Labor (LAB)               39     44     34     28    19a
  Democrat (ADL)             1      2      4      5     2a
  Meretz, Ratz (MZ)          –     12      9     10      6
  CRM, MPM, PLP              9      –      –      –      3
  Communist (HS)             4      3      5      3      3
  Balad                      –      –      –      2      3
  Subtotal                  53     61     52     48     36
Center
  Olim                       –      –      7      6     2b
  III Way                    –      –      4      –      –
  Center                     –      –      –      6      –
  Shinui (S)                 2      –      –      6     15
  Subtotal                   2      –     11     18    17b
Right
  Likud (LIK)               40     32     30     19    38b
  Gesher                     –      –      2      –      –
  Tzomet (TZ)                2      8      –      –      –
  Yisrael Beiteinu           –      –      –      4      7
  Subtotal                  42     40     32     23     45
Religious
  Shas (SHAS)                6      6     10     17     11
  Yahadut (AI, DH)           7      4      4      5      5
  NRP (Mafdal)               5      6      9      5      6
  Moledet (MO)               2      3      2      4      –
  Techiya, Thia (TY)         3      –      –      –      –
  Subtotal                  23     19     25     31     22
Total                      120    120    120    120    120

a Am Ehad (or ADL) under Peretz combined with Labor, to give the party 19 + 2 = 21 seats.
b Olim joined Likud to form one party, giving Likud 38 + 2 = 40 seats, and the right 40 + 7 = 47 seats.
Table 2. Factor analysis for the 1996 Israeli election

Issue question                                      Factor weights
                                                    Security          Religion
Chance for peace                                     0.494 (0.024)    –
Land for peace                                       0.867 (0.013)    –
Religious law vs. Democracy                          0.287 (0.038)    0.613 (0.054)
Must stop peace process                             –0.656 (0.020)    –
Agreement with Oslo accord                          –0.843            –
Oslo accord contributed to safety                   –0.798 (0.016)    –
Personal safety after Oslo                          –0.761 (0.020)    –
Israel should talk with PLO                         –0.853 (0.016)    –
Opinion of settlers                                  0.885 (0.015)    –
Agree that Palestinians want peace                  –0.745 (0.016)    –
Peace agreement will end Arab–Israeli conflict      –0.748 (0.018)    –
Agreement with Palestinian state                    –0.789 (0.016)    –
Should encourage Arabs to emigrate                   0.618 (0.022)    –
Israel can prevent war                              –0.843 (0.019)    –
Settlements 1                                        0.712 (0.014)    –
Settlements 2                                        0.856 (0.014)    –
National security                                    0.552 (0.023)    –
Equal rights for Arabs and Jews in Israel           –0.766 (0.018)    –
More government spending towards religious
  institutions                                       –                0.890 (0.035)
More government spending towards security            0.528 (0.049)    0.214 (0.065)
More government spending on immigrant absorption     0.342 (0.052)    0.470 (0.065)
More government spending on settlements              0.597 (0.040)    0.234 (0.055)
More government spending in Arab sector             –0.680 (0.019)    –
Public life should be in accordance with Jewish
  tradition                                          –                1.000 (constant)
Views toward an Arab minister                       –0.747 (0.019)    –

Var (security)                1.000
Var (religion)                0.732
Covar (security, religion)    0.591
Table 3. Multinomial logit analysis, 1996 Israeli election

Variable                Party          Posterior mean    95% confidence interval
                                                         Lower bound   Upper bound
Spatial distance        C                   1.117            0.974          1.278
Constant                Shas               –2.960           –7.736          1.018
                        Likud               3.140            0.709          5.800
                        Labor               4.153            1.972          6.640
                        NRP                –4.519           –8.132         –1.062
                        Moledet            –0.893           –4.284          2.706
                        III Way            –2.340           –4.998          0.411
Ashkenazi               Shas                0.066           –1.831          1.678
                        Likud              –0.625           –1.510          0.271
                        Labor              –0.219           –0.938          0.492
                        NRP                 1.055           –0.206          2.242
                        Moledet             0.819           –0.560          2.185
                        III Way            –0.283           –1.594          1.134
Age                     Shas                0.014           –0.058          0.086
                        Likud              –0.024           –0.063          0.008
                        Labor              –0.040           –0.077         –0.012
                        NRP                –0.064           –0.111         –0.020
                        Moledet            –0.026           –0.088          0.026
                        III Way             0.014           –0.034          0.063
Education               Shas               –0.377           –0.693         –0.063
                        Likud              –0.032           –0.180          0.115
                        Labor               0.011           –0.099          0.120
                        NRP                 0.386            0.180          0.599
                        Moledet             0.049           –0.219          0.305
                        III Way            –0.067           –0.298          0.150
Religious observation   Shas                3.022            1.737          4.308
                        Likud               0.930            0.270          1.629
                        Labor               0.645            0.077          1.272
                        NRP                 2.161            1.299          3.103
                        Moledet             0.897           –0.051          1.827
                        III Way             0.954            0.031          1.869
Correctly predicted     Shas                0.309            0.210          0.414
                        Likud               0.707            0.672          0.740
                        Labor               0.717            0.681          0.752
                        NRP                 0.408            0.324          0.493
                        Moledet             0.078            0.046          0.115
                        III Way             0.029            0.017          0.043
                        Meretz              0.286            0.226          0.349
                        Entire model        0.638            0.623          0.654

n = 791. Log marginal likelihood: –465.
[Fig. 1 (figure not reproduced). Voter distribution and estimated party positions in the Knesset at the 1996 election. Axes: Security (horizontal) and Religion (vertical), each running from –2 to 2.]
[Fig. 2 (figure not reproduced). Party positions and electoral distribution (at the 95%, 75%, 50% and 10% levels) in the Knesset at the 1992 election. Axes: Security and Religion.]
[Fig. 3 (figure not reproduced). Party positions and electoral distribution (at the 95%, 75%, 50% and 10% levels) in the Knesset at the 1988 election. Axes: Security and Religion.]
Each respondent for the survey is characterized by a point in the resulting two-dimensional policy space, $X$. Thus the smoothed electoral distribution can be taken as an estimate of the underlying probability density function for the voter ideal points. In these figures, 'Security' refers to attitudes to peace initiatives. 'Religion' refers to the significance of religious considerations in government policy. The axes of the figures are oriented so that 'left' on the security axis can be interpreted as supportive of negotiations with the PLO, while 'north' on the vertical or religious axis is indicative of support for the importance of the Jewish faith in Israel (see Arian (1998) for discussion of these themes in the politics of Israel). Comparing Fig. 3 for 1988 with Fig. 1 for 1996 suggests that the covariance between the two factors has declined over time.

Since the competition between the two major parties, Labor and Likud, is pronounced, it is surprising that these parties do not move to the electoral mean (as suggested by the formal vote model) in order to increase vote and seat shares. The data on seats in the Knesset given in Table 1 suggest that the vote share of the small Sephardic orthodox party, Shas, increased significantly between 1992 and 1996. As Figs. 1 and 2 illustrate, however, there was no significant move by Shas to the electoral center. The inference is that the shifts of electoral support are the result of changes in party
valence. To be more explicit, I contend that prior to an election each voter, $i$, forms a judgment about the relative capability of each party leader. Let $M_{ij}$ denote the weight given by voter $i$ to party $j$ in the voter's utility calculation. The voter utility is then given by the expression:

$$u_{ij}(x_i, z_j) = M_{ij} - C\|x_i - z_j\|^2 + R_j^T I_i. \qquad (3)$$

However, these weights are subjective, and may well be influenced by idiosyncratic characteristics of voters and parties. For empirical analysis, I assume that $M_{ij} = M_j + \varepsilon_{ij}$, where $\varepsilon_{ij}$ is drawn at random from the Type I extreme value distribution, $\Psi$, for the variate $F_j$, with expected value zero. The expected value, $\mathrm{Exp}(M_{ij})$, of $M_{ij}$ is $M_j$, and so I write $M_{ij} = M_j + F_j$, giving (2). Since I am mainly concerned with the voter's choice, I shall assume here that $M_j$ is exogenously determined. The details of the estimations of the parameters $[C; M_j, j = 1, \ldots, p]$ and of the $k \times p$ matrix $(R_1, \ldots, R_p)$ for 1996 are given in Table 3.

Estimating the voter model requires information about sample voter behavior. It is assumed that data is available about voter intentions: this information is encoded, for each sample voter $i$, by the vector $c_i = (c_{i1}, \ldots, c_{ip})$, where $c_{ij} = 1$ if and only if $i$ intends to vote (or did indeed vote) for agent $j$. Given the data set $\{x_i, I_i, c_i\}_{i \in N}$ for the sample $N$ (of size $n$) and $\{z_j\}_{j \in P}$ for the political agents, a set $\{S_i\}_{i \in N}$ of stochastic variables is estimated. The first moment of $S_i$ is the probability vector $(S_{i1}, \ldots, S_{ip})$. Here $S_{ij}$ is the probability that voter $i$ chooses agent $j$. There are standard procedures for estimating the empirical model. The technique is to choose estimators for the coefficients so that the estimated probability takes the form:

$$S_{ij} = \Pr[u_{ij}(x_i, z_j) > u_{il}(x_i, z_l), \text{ for all } l \neq j]. \qquad (4)$$

Here, $u_{ij}$ is the $j$th component of the estimated latent utility function for $i$. The estimator for the choice is $\hat{c}_{ij} = 1$ if and only if $S_{ij} > S_{il}$ for all $l \in P \setminus \{j\}$. The procedure minimizes the errors between the $n \times p$ matrix $[c]$ and the $n \times p$ estimated matrix $[\hat{c}]$. The vote share, $V_j(z)$, of agent $j$, given the vector $z$ of strategies, is defined to be:

$$V_j(z) = \frac{1}{n} \sum_i S_{ij}(z). \qquad (5)$$

Note that since $V_j(z)$ is a stochastic variable, it is characterized by its first moment (its expectation), as well as higher moments (its variance, etc.). In the interpretation of the model I shall follow the usual assumption of the formal model and focus on the expectation $\mathrm{Exp}(V_j(z))$. The estimate of this expectation, denoted $E_j(z)$, is given by:

$$E_j(z) = \frac{1}{n} \sum_i S_{ij}(z). \qquad (6)$$
Table 4. Bayes factor [ln(B_jk)] for model j vis-à-vis model k for the 1996 election

Model j \ Model k    Spatial MNP   Spatial MNL1   Spatial MNL2   Joint MNL
Spatial MNP              n.a.          239***         –17            –49
Spatial MNL1            –239           n.a.          –255           –288
Spatial MNL2              17***        255***         n.a.           –33
Joint MNL                 49***        288***          33***         n.a.

*** = very strong support for M_j.
A virtue of using the general voting model is that the Bayes factors (or differences in log likelihoods) can be used to determine which of various possible models is statistically superior (Quinn, Martin and Whitford 1999; Kass and Raftery 1995). I compared a variety of different MNL models against a pure MNP model for each election. The models were: (i) MNP: a pure spatial multinomial probit model with $C \neq 0$ but $R \equiv 0$ and $M = 0$; (ii) MNLSD: a pure logit sociodemographic (SD) model, with $C = 0$, involving only the sociodemographic component, based on respondent age, education, religious observance and origin (whether Sephardic, etc.); (iii) MNL1: a pure multinomial logit spatial model with $C \neq 0$, but $R \equiv 0$ and $M = 0$; (iv) MNL2: a multinomial logit model with $C \neq 0$, $R \neq 0$ and $M = 0$; (v) Joint MNL: a multinomial logit model with $C \neq 0$, $R \neq 0$ and $M \neq 0$. The pure sociodemographic model MNLSD gave poor results and was not considered further. Table 4 gives the comparisons for MNP, MNL1, MNL2 and Joint MNL for 1996. Note that the MNP model had no valence terms. Observe, for the 1996 election, that the Bayes factor for the comparison of the Joint MNL model with MNL1 was of order 288, so clearly sociodemographic variables add to predictive power. However, the valence constants add further to the power of the model.

The spatial distance, as expected, exerts a very strong negative effect on the propensity of a voter to choose a given party. To illustrate, Table 3 shows that, in 1996, the $C$ coefficient was estimated to be approximately 1.12. In short, Israeli voters cast ballots, to a very large extent, on the basis of the issue positions of the parties. This is true even after taking the demographic and religious factors into account. The coefficients on 'religious observation' for Shas and the NRP (both religious parties) were estimated to be 3.022 and 2.161, respectively. Consequently, a voter who is observant has a high probability of voting for one of these parties, but this probability appears to fall off rapidly the further the voter's ideal position is from the party position. In each election, factors such as age, education, and religious observance play a role in determining voter choice.
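Returning briefly to the model comparisons in Table 4: each entry is just a difference in log marginal likelihoods. A minimal sketch, where the Joint MNL value is the –465 reported in Table 3 and the MNL1 value is inferred for illustration from the factor of order 288:

```python
# log marginal likelihoods; the MNL1 figure is inferred, for illustration only
log_ml = {"Joint MNL": -465.0, "MNL1": -753.0}

ln_B = log_ml["Joint MNL"] - log_ml["MNL1"]   # ln B_jk for j = Joint MNL
print(ln_B)   # 288.0: very strong support for the joint model over MNL1
```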
Obviously this suggests that some parties are more successful among some groups in the electorate than would be implied by a simple estimation based only on policy positions. These tables indicate that the best model is the Joint MNL, which includes valence and the sociodemographic factors along with the spatial coefficient $C$. In particular, there is strong support, in all three elections, for the inclusion of valence. This model provides the best estimates of the vote shares of parties and predicts the vote choices of the individual voters remarkably well. Therefore this is clearly the model of choice to use as the best estimator for what I refer to as the stochastic electoral response function. Adding valence to the MNL model makes it superior to both MNL and MNP models without valence. Adding the sociological factors increases the statistical validity of the model.

Because the sociodemographic component of the model was assumed independent of party strategies, I could use the estimated parameters of the model to simulate party movement in order to increase the expected vote share of each party. 'Hill climbing' algorithms were used for this purpose. Such algorithms involve small changes in party position, and are therefore only capable of obtaining 'local' optima for each party. Consequently, a vector $z^* = (z_1^*, \ldots, z_p^*)$ of party positions that results from such a search is what I call a 'local pure strategy Nash equilibrium' or LNE. I now present the definition of local equilibria in the context of the empirical vote maximizing game defined by $E: X^p \to \mathbb{R}^p$.

Definition 1 (i) A strategy vector $z^* = (z_1^*, \ldots, z_{j-1}^*, z_j^*, z_{j+1}^*, \ldots, z_p^*) \in X^p$ is a local strict Nash equilibrium (LSNE) for the profile function $E: X^p \to \mathbb{R}^p$ iff, for each agent $j \in P$, there exists a neighborhood $X_j$ of $z_j^*$ in $X$ such that:
$$E_j(z_1^*, \ldots, z_{j-1}^*, z_j^*, z_{j+1}^*, \ldots, z_p^*) > E_j(z_1^*, \ldots, z_j, \ldots, z_p^*) \text{ for all } z_j \in X_j \setminus \{z_j^*\}.$$
(ii) A strategy vector $z^* = (z_1^*, \ldots, z_{j-1}^*, z_j^*, z_{j+1}^*, \ldots, z_p^*)$ is a local weak Nash equilibrium (LNE) for $E$ iff, for each agent $j$, there exists a neighborhood $X_j$ of $z_j^*$ in $X$ such that: $E_j(z_1^*, \ldots, z_{j-1}^*, z_j^*, z_{j+1}^*, \ldots, z_p^*) \geq E_j(z_1^*, \ldots, z_j, \ldots, z_p^*)$ for all $z_j \in X_j$.
(iii) A strategy vector $z^*$ is a strict, respectively weak, pure strategy Nash equilibrium (PSNE, respectively PNE) for $E$ iff $X_j$ can be replaced by $X$ in (i), (ii) respectively. (iv) The strategy $z_j^*$ is termed a 'local strict best response', a 'local weak best response', a 'global weak best response', or a 'global strict best response', respectively, to $z_{-j}^* = (z_1^*, \ldots, z_{j-1}^*, z_{j+1}^*, \ldots, z_p^*)$.

In these definitions 'weak' refers to the condition that $z_j^*$ is no worse than any other strategy. Clearly, a LSNE must be a LNE, and a PNE must be a LNE. Obviously a LNE need not be a PNE. If a necessary condition for LNE fails, then so does the necessary condition for PNE. One condition that
is sufficient to guarantee that a LNE is a PNE for the electoral game is concavity of the vote functions.

Definition 2 The profile $E: X^p \to \mathbb{R}^p$ is concave iff, for each $j$, any real $\alpha \in [0, 1]$ and any $x, y \in X$, $E_j(\alpha x + (1 - \alpha)y) \geq \alpha E_j(x) + (1 - \alpha)E_j(y)$.
Concavity of the payoff functions $\{E_j\}$ in the $j$th strategy $z_j$, together with continuity in $z_j$ and compactness and convexity of $X$, is sufficient for existence of a PNE (Banks and Duggan 2005). The much weaker condition for $z^*$ to be a LNE is local concavity, namely that concavity holds in a neighborhood of $z^*$. We use this in the following section. There, I discuss the 'mean voter theorem' of the formal model. As mentioned above, this theorem asserts that the vector $z^* = (x^*, \ldots, x^*)$ (where $x^*$ is the mean of the distribution of voter ideal points) is a PNE for the vote maximizing electoral game (Hinich 1977; Enelow and Hinich 1984; Lin, Enelow, and Dorussen 1999). I call $(x^*, \ldots, x^*)$ the joint electoral mean. Since the electoral distribution can readily be normalized so that $x^* = 0$, I shall also use the term joint electoral origin.

I used a hill climbing algorithm to determine the LSNE of the empirical vote models for the three elections. The simulation of the empirical models found five distinct LNE for the 1996 election in Israel. A representative LNE is given in Fig. 4. Notice that the locations of the two high valence parties, Labor and Likud, in Fig. 1 closely match their simulated positions in Fig. 4. Obviously, none of the estimated equilibrium vectors in Fig. 4 corresponds to the convergent situation at the electoral mean. Figs. 5 and 6 give representative LNE for 1992 and 1988.

Before I begin the theoretical discussion of the results just presented, several preliminary conclusions appear to be of interest. (i) The empirical MNL model and the formal model discussed in the next section are both based on the extreme value distribution $\Psi$ and are mutually compatible. (ii) The set of LNE obtained by simulation of the empirical model must contain any PNE for this model (if any exist). Since no LSNE was found at the joint mean position, it follows that the mean voter theorem is invalid, given the estimated parameter values of the empirical model. This conclusion is not susceptible to the counter-argument that the parties may have utilized evaluation functions other than expected vote shares, because only vote share maximization was allowed to count in the 'hill climbing' algorithm used to generate the LNE. (iii) A comparison of Figs. 1, 2 and 3 with the simulation Figs. 4, 5 and 6 makes it clear that there are marked similarities between estimated and simulated positions. This is most obvious for the high valence parties, Labor and Likud, but also for the low valence party Meretz. This suggests that the expected vote share functions $\{E_j\}$ are a close proxy to the actual, but unknown, utility functions $\{U_j\}$ deployed by the party leaders.
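The hill climbing search can be sketched as a coordinate-ascent loop: each party in turn takes a small step in $X$ that raises its own expected vote share $E_j$, holding the others fixed; a rest point of the sweep is a candidate LNE. This is a sketch of the kind of search described above, not the original implementation; the voter sample, valences and spatial coefficient are illustrative stand-ins for the estimated model:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))      # illustrative voter ideal points
M = np.array([0.0, 0.0, -1.0])     # valences: party 3 has low valence
C = 1.0                            # spatial coefficient

def expected_shares(z):
    """E_j(z) = (1/n) sum_i S_ij(z) under the logit model."""
    u = M - C * ((X[:, None, :] - z[None, :, :]) ** 2).sum(axis=-1)
    e = np.exp(u - u.max(axis=1, keepdims=True))
    return (e / e.sum(axis=1, keepdims=True)).mean(axis=0)

def hill_climb(z, step=0.05, sweeps=200):
    for _ in range(sweeps):
        moved = False
        for j in range(len(z)):                      # each party in turn
            for d in step * np.vstack([np.eye(2), -np.eye(2)]):
                trial = z.copy()
                trial[j] += d
                if expected_shares(trial)[j] > expected_shares(z)[j]:
                    z, moved = trial, True
        if not moved:
            break                  # no party gains locally: candidate LNE
    return z

print(hill_climb(np.zeros((3, 2))))  # the low valence party drifts from 0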
[Fig. 4 (figure not reproduced). Simulated Local Nash Equilibrium for Israel in 1996. Axes: Security and Religion. Key: 1 = Shas, 2 = Likud, 3 = Labor, 4 = NRP, 5 = Moledet, 6 = III Way, 7 = Meretz.]
[Fig. 5 (figure not reproduced). A representative Local Nash Equilibrium in the Knesset, 1992 election. Axes: Security and Religion. Key: 1 = Shas, 2 = Likud, 3 = Labor, 4 = Meretz, 5 = NRP, 6 = Moledet, 7 = Tzomet.]
[Fig. 6 (figure not reproduced). A representative Local Nash Equilibrium in the Knesset, 1988 election. Axes: Security and Religion. Key: 1 = Shas, 2 = Likud, 3 = Labor, 4 = Tzomet, 5 = NRP, 6 = Ratz, 7 = Thia.]
(iv) Although the equilibrium notion of LSNE that I deploy is not utilized in the game theoretic literature, it has a number of virtues. In particular, the simulation performed here shows that local equilibria do indeed exist. Other formal results (Schofield 2001) show that such equilibria typically exist, as long as the game form is 'smooth'. Moreover, the definition of $\{E_j\}$ presented above makes it obvious that the vote maximization game considered here is indeed smooth. On the other hand, existence of PNE is problematic when concavity fails. (v) Although the 'local' equilibrium concept is indeed 'local', there is no formal reason why the various LNE that I obtain should be, in fact, 'close' to one another. It is noticeable in Figs. 4, 5 and 6 that the LNE for each election are approximately permutations of one another, with low valence parties strung along what I shall call the electoral principal axis.

In the following section, I examine the formal vote model in order to determine why the mean voter theorem appears to be invalid for the estimated model of Israel. The formal result will explain why low valence parties in the simulations are far from the electoral mean, and why all parties lie on a single electoral axis.
4. The Formal Model of Elections

Given the vector of agent policy positions $z = (z_1, \ldots, z_p) \in X^p$, each voter, $i$, is described by a vector $u_i(x_i, z) = (u_{i1}(x_i, z_1), \ldots, u_{ip}(x_i, z_p))$, where

$$u_{ij}(x_i, z_j) = u_{ij}^*(x_i, z_j) + F_j \quad \text{and} \quad u_{ij}^*(x_i, z_j) = M_j - C\|x_i - z_j\|^2.$$

Here, $u_{ij}^*(x_i, z_j)$ is the 'observable' utility for $i$ associated with party $j$. The valence $M_j$ of agent $j$ is exogenously determined. The terms $\{F_j\}$ are the stochastic errors, whose cumulative distribution is denoted by $\Psi$. Again, the probability that a voter $i$ chooses party $j$ is:

$$S_{ij}(z) = \Pr[u_{ij}(x_i, z_j) > u_{il}(x_i, z_l), \text{ for all } l \neq j].$$

Here $\Pr$ stands for the probability operator associated with $\Psi$. The expected vote share of agent $j$ is:

$$V_j(z) = \frac{1}{n} \sum_{i \in N} S_{ij}(z).$$

In the vote model it is assumed that each agent $j$ chooses $z_j$ to maximize $V_j$, conditional on $z_{-j} = (z_1, \ldots, z_{j-1}, z_{j+1}, \ldots, z_p)$. The theorem presented here assumes that the exogenous valences are given by the vector $M = (M_p, M_{p-1}, \ldots, M_2, M_1)$ and that the valences are ranked $M_p \geq M_{p-1} \geq \cdots \geq M_2 \geq M_1$. The model is denoted $\mathcal{M}(M, C; \Psi)$. In this model it is natural to regard $M_j$ as the 'average' weight given by a member of the electorate to the perceived competence or quality of agent $j$. The 'weight' will in fact vary throughout the electorate, in a way which is described by the stochastic distribution.

In these models, the $C^2$-differentiability of the cumulative distribution, $\Psi$, is usually assumed, so that the individual probability functions $\{S_{ij}\}$ will be $C^2$-differentiable in the strategies $\{z_j\}$. Thus, the vote share functions will also be $C^2$-differentiable and Hessians can be calculated. Let $x^* = \frac{1}{n}\sum_i x_i$. Then the mean voter theorem for the stochastic model asserts that the 'joint mean vector' $z_0 = (x^*, \ldots, x^*)$ is a pure strategy Nash equilibrium (PNE). Adapting Definition 1, we say a strategy vector $z^*$ is a LSNE for the formal model iff, for each $j$, $z_j^*$ is a critical point of the vote function $V_j(z_1^*, \ldots, z_{j-1}^*, \cdot, z_{j+1}^*, \ldots, z_p^*)$ and the eigenvalues of the Hessian of this function (with respect to $z_j$) are negative. The definitions of LNE, PSNE and PNE for the profile $V: X^p \to \mathbb{R}^p$ for the formal model follow Definition 1 above.

The theorem below gives the necessary and sufficient conditions for the joint mean vector $z_0$ to be an LSNE. A corollary of the theorem shows, in situations where the valences differ, that the necessary condition is likely to fail. In dimension $w$, the theorem can be used to show that, for $z_0$ to be an
LSNE, the necessary condition is that a 'convergence coefficient', defined in terms of the parameters of the model, must be strictly bounded above by $w$. When this condition fails, the joint mean vector $z_0$ cannot be a LNE and therefore cannot be a PNE. Of course, even if the sufficient condition is satisfied, and $z_0 = (x^*, \ldots, x^*)$ is an LSNE, it need not be a PNE.

As with the empirical analysis, I assume a Type I extreme value distribution (Train 2003) for the errors. The cumulative distribution, $\Psi$, takes the closed form

$$\Psi(h) = \exp[-\exp(-h)],$$

with variance $\frac{1}{6}\pi^2$. It readily follows (Train 2003: 79) for the choice model given above that, for each $i$,

$$S_{ij}(z) = \frac{\exp[u_{ij}^*(x_i, z_j)]}{\sum_{k=1}^{p} \exp[u_{ik}^*(x_i, z_k)]}.$$
This implies that the model satisfies the independence of irrelevant alternatives property (IIA): for each voter $i$, and for each pair $j, k$, the ratio $S_{ij}(z)/S_{ik}(z)$ is independent of a third candidate $l$. To state the theorem, I first transform coordinates so that in the new coordinates $x^* = 0$. I shall refer to $z_0 = (0, \ldots, 0)$ as the joint origin in this new coordinate system. Whether the joint origin is an equilibrium depends on the distribution of voter ideal points. These are encoded in the voter variance/covariance matrix. I first define this, and then use it to characterize the vote share Hessians.

Definition 3 The voter variance-covariance matrix $\nabla$. To characterize the variation in voter preferences, I represent in a simple form the variance-covariance matrix of the distribution of voter ideal points. Let $X$ have dimension $w$ and be endowed with a system of coordinate axes $(1, \ldots, s, t, \ldots, w)$. For each coordinate axis let $Y_t = (x_{1t}, x_{2t}, \ldots, x_{nt})$ be the vector of the $t$th coordinates of the set of $n$ voter ideal points. I use $(Y_s, Y_t)$ to denote scalar product. The symmetric $w \times w$ voter covariation or data matrix is then defined to be:

$$\nabla^* = \begin{pmatrix} (Y_1, Y_1) & \cdots & (Y_1, Y_w) \\ \vdots & (Y_s, Y_t) & \vdots \\ (Y_w, Y_1) & \cdots & (Y_w, Y_w) \end{pmatrix}.$$
The covariance matrix is $\nabla = \frac{1}{n}\nabla^*$. I write $v_s^2 = \frac{1}{n}(Y_s, Y_s)$ for the electoral variance on the $s$th axis and

$$v^2 = \sum_{r=1}^{w} v_r^2 = \frac{1}{n}\sum_{r=1}^{w}(Y_r, Y_r) = \mathrm{trace}(\nabla)$$

for the total electoral variance. The normalized covariance between the $s$th and $t$th axes is $(v_s, v_t) = \frac{1}{n}(Y_s, Y_t)$. The formal model is denoted $\mathcal{M}(M, C; \Psi, \nabla)$, though I shall usually suppress the reference to $\nabla$.

Definition 4 The convergence coefficient of the model $\mathcal{M}(M, C; \Psi)$. (i) At the vector $z_0 = (0, \ldots, 0)$ the probability $S_k(z_0)$ that $i$ votes for party $k$ is
$$S_k = \left[1 + \sum_{j \neq k} \exp(M_j - M_k)\right]^{-1}.$$

(ii) The coefficient $A_k$ for party $k$ is $A_k = C(1 - 2S_k)$.

(iii) The characteristic matrix for party $k$ at $z_0$ is

$$C_k = 2A_k \nabla - I,$$

where $I$ is the $w \times w$ identity matrix.

(iv) The convergence coefficient of the model $\mathcal{M}(M, C; \Psi)$ is

$$c(M, C; \Psi) = 2C[1 - 2S_1]v^2 = 2A_1 v^2.$$
The definition of $S_k$ follows directly from the definition of the extreme value distribution. Obviously, if all valences are identical, then $S_1 = 1/p$, as expected. The effect of increasing $M_j$, for $j \neq 1$, is clearly to decrease $S_1$, and therefore to increase $A_1$, and thus $c(M, C; \Psi)$.

Theorem 3.1 (The electoral theorem; Schofield 2006a, 2007a,b) The condition for the joint origin to be a LSNE in the model $\mathcal{M}(M, C; \Psi)$ is that the characteristic matrix

$$C_1 = 2A_1 \nabla - I$$
of party 1, with lowest valence, has negative eigenvalues.

As usual, conditions on $C_1$ for the eigenvalues to be negative depend on the trace, $\mathrm{trace}(C_1)$, and determinant, $\det(C_1)$, of $C_1$ (see Schofield (2003) for these standard results). These depend on the value of $A_1$ and on the electoral covariance matrix $\nabla$. Using the determinant of $C_1$, it is easy to show, in two dimensions, that $2A_1 v^2 < 1$ is a sufficient condition for the eigenvalues to be negative. In terms of the 'convergence coefficient'
$c(M, C; \Psi)$, I can write this as $c(M, C; \Psi) < 1$. In a policy space of dimension $w$, the necessary condition on $C_1$, induced from the condition on the Hessian of $V_1$, is that $c(M, C; \Psi) \leq w$. If this necessary condition for $V_1$ fails, then $z_0$ can be neither a LNE nor a LSNE. Ceteris paribus, a LNE at the joint origin is 'less likely' the greater are the parameters $C$, $M_p - M_1$ and $v^2$. The theorem gives the following corollaries.

Corollary 1 Suppose the dimension of $X$ is $w$. Then, in the model $\mathcal{M}(M, C; \Psi)$, the necessary condition for the joint origin to be a LNE is that $c(M, C; \Psi)$ be no greater than $w$.

Notice that the case with two parties of equal valence immediately gives a situation with $2C[1 - 2S_1]v^2 = 0$, irrespective of the other parameters. However, if $M_2 > M_1$, then the joint origin may fail to be a LNE if $Cv^2$ is sufficiently large. Note also that for the multiparty case $S_1$ is a decreasing function of $(M_p - M_1)$, so the necessary condition is more difficult to satisfy as $(M_p - M_1)$ increases.

Corollary 2 In the two-dimensional case, the sufficient condition for the joint origin to be a LSNE is that $c(M, C; \Psi)$ be strictly less than 1. Furthermore, if $v_t^2 = \frac{1}{n}(Y_t, Y_t)$ are the electoral variances on the two axes, $t = 1, 2$, then the two eigenvalues of $C_1$ are:
$$a_1 = A_1\left[v_1^2 + v_2^2 + \left[(v_1^2 - v_2^2)^2 + 4(v_1, v_2)^2\right]^{1/2}\right] - 1,$$
$$a_2 = A_1\left[v_1^2 + v_2^2 - \left[(v_1^2 - v_2^2)^2 + 4(v_1, v_2)^2\right]^{1/2}\right] - 1.$$

When the covariance term $(v_1, v_2) = 0$, the eigenvalues are obviously $a_t = 2A_1 v_t^2 - 1$, $t = 1, 2$. The more interesting case is when the covariance $(v_1, v_2)$ is significant. By a transformation of coordinates, I can choose $v_t, v_s$ to be the eigenvectors of the Hessian matrix for agent 1, and let these be the new 'principal components' of the electoral covariance matrix. If $v_t^2 \leq v_s^2$, then the $s$th coordinate can be termed 'the principal electoral axis.'
5. Empirical Analysis for Israel

Consider first the election for the Israeli Knesset in 1996. Using the formal analysis, I can readily show that one of the eigenvalues of the low valence party, the NRP, is positive. Indeed, it is obvious that there is a principal component of the electoral distribution, and this axis is the eigenspace of the positive eigenvalue. It follows that low valence parties should then
position themselves on this eigenspace, as illustrated in the simulation given below in Fig. 4. In 1996, the lowest valence party was the NRP, with valence –4.52. The spatial coefficient is $C = 1.12$, so for the extreme value model $\mathcal{M}(\Psi)$ I compute $S_{NRP} \approx 0$ and $A_{NRP} = 1.12$. Now $v_1^2 = 1.0$, $v_2^2 = 0.732$ and $(v_1, v_2) = 0.591$ (see Table 2). Thus

$$S_{NRP} = \left[1 + e^{4.15 + 4.52} + e^{3.14 + 4.52} + \cdots\right]^{-1} \approx 0, \qquad A_{NRP} = C = 1.12,$$

$$C_{NRP} = 2(1.12)\begin{pmatrix} 1.0 & 0.591 \\ 0.591 & 0.732 \end{pmatrix} - I = \begin{pmatrix} 1.24 & 1.32 \\ 1.32 & 0.64 \end{pmatrix}, \qquad c(\Psi) = 3.88.$$

Then the eigenvalues are 2.28 and –0.40, giving a saddlepoint, and a value for the convergence coefficient of 3.88. The major eigenvector for the NRP is $(1.0, 0.8)$, and along this axis the NRP vote share function increases as the party moves away from the origin. The minor, perpendicular axis is given by the vector $(1.0, -1.25)$, and on this axis the NRP vote share decreases. Fig. 4 gives one of the local equilibria in 1996, obtained by simulation of the model. The figure makes it clear that the vote maximizing positions lie on the principal axis through the origin and the point $(1.0, 0.8)$. In all, five different LSNE were located. However, in all the equilibria the two high valence parties, Labor and Likud, were located at positions similar to those given in Fig. 4. The only difference between the various equilibria was that the positions of the low valence parties were perturbations of each other.
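This computation is easy to reproduce. A short verification, assuming the Table 2 covariance entries and the Table 3 valences (the eigenvalues come out as roughly 2.30 and –0.42, matching the reported 2.28 and –0.40 up to rounding):

```python
import numpy as np

C = 1.12                                      # spatial coefficient, 1996
nabla = np.array([[1.0, 0.591],               # electoral covariance matrix
                  [0.591, 0.732]])

S_nrp = 1 / (1 + np.exp(4.15 + 4.52) + np.exp(3.14 + 4.52))  # ~ 0
A_nrp = C * (1 - 2 * S_nrp)                                  # ~ 1.12

C_nrp = 2 * A_nrp * nabla - np.eye(2)         # characteristic matrix
conv = 2 * A_nrp * nabla.trace()              # convergence coefficient

print(np.linalg.eigvalsh(C_nrp))              # one positive: a saddlepoint
print(conv)                                   # ~ 3.88 > w = 2: no LNE at mean
```

The 1992 and 1988 figures reported below follow from the same few lines, with the corresponding valences, spatial coefficient and covariance entries substituted.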
I next analyze the situation for 1992 by computing the eigenvalues for the Type I extreme value distribution, $\Psi$. From the empirical model, I obtain $M_{shas} = -4.67$, $M_{likud} = 2.73$, $M_{labor} = 0.91$ and $C = 1.25$. When all parties are at the origin, the probability that a voter chooses Shas is

$$S_{shas} = \left[1 + e^{2.73 + 4.67} + e^{0.91 + 4.67}\right]^{-1} \approx 0.$$

Thus $A_{shas} \approx C = 1.25$, and

$$C_{shas} = 2(1.25)\begin{pmatrix} 1.0 & 0.453 \\ 0.453 & 0.435 \end{pmatrix} - I = \begin{pmatrix} 1.5 & 1.13 \\ 1.13 & 0.08 \end{pmatrix}, \qquad c(\Psi) = 3.6.$$
Then the two eigenvalues for Shas can be calculated to be 2.12 and –0.52, with a convergence coefficient for the model of 3.6. Thus I find that the origin is a saddlepoint for the Shas Hessian. The eigenvector for the
large, positive eigenvalue is the vector $(1.0, 0.55)$. Again, this vector coincides with the principal electoral axis. The eigenvector for the negative eigenvalue is perpendicular to the principal axis. To maximize vote share, Shas should adjust its position, but only on the principal axis. This is exactly what the simulation found. Notice that the probability of voting for Labor is $[1 + e^{1.82}]^{-1} \approx 0.14$, and $A_{labor} = 0.9$, so even Labor will have a positive eigenvalue at the origin. Clearly, if Likud occupies the mean voter position, then Labor, as well as all low valence parties, would find this same position to be a saddlepoint. In seeking local maxima of vote shares, all parties other than Likud should vacate the electoral center. Then, however, the first order condition for Likud to occupy the electoral center would not be satisfied. Even though Likud's vote share will be little affected by the other parties, it too should move from the center. This analysis predicts that the lower a party's valence, the further its equilibrium position will be from the electoral mean. This is illustrated in Fig. 5.

Calculation for the model $\mathcal{M}(\Psi)$ for 1988 gives eigenvalues for Shas of 2.0 and –0.83, with a convergence coefficient of 3.16 and a principal axis through $(1.0, 0.5)$. Again, vote maximizing behavior by Shas should oblige it to keep strictly to the principal electoral axis. The simulated vote maximizing party positions indicated that there was no deviation by parties off the principal axis or eigenspace associated with the positive eigenvalue. Thus the simulation was compatible with the predictions of the formal model based on the extreme value distribution. All parties were able to increase vote shares by moving away from the origin, along the principal axis, as determined by the large, positive principal eigenvalue.

In particular, the simulation confirms the logic of the formal analysis. Low valence parties, such as the NRP and Shas, must move far from the electoral center in order to maximize vote shares. Their optimal positions will lie either in the 'north east' quadrant or the 'south west' quadrant. The vote maximizing model, without any additional information, cannot determine which way the low valence parties should move. As noted above, the simulations of the empirical models found multiple LSNE, essentially differing only in permutations of the low valence party positions. In contrast, since the valence difference between Labor and Likud was relatively low in all three elections, their optimal positions would be relatively close to, but not identical to, the electoral mean. The simulation figures for all three elections are also compatible with this theoretical inference.

The figures also suggest that every party, in local equilibrium, should adopt a position that maintains a minimum distance from every other party. The formal analysis, as well as the simulation exercise, suggests that this minimum distance depends on the valences of the neighboring parties. Intuitively it is clear that once the low valence parties vacate the origin, high valence parties like Likud and Labor will position themselves almost symmetrically about the origin, along the major axis. It should be noted
that the positions of Labor and Likud, in particular, closely match their positions in the simulated vote maximizing equilibria.

Clearly, the configuration of equilibrium party positions will fluctuate as the valences of the large parties change in response to exogenous shocks. The logic of the model remains valid, however, since the low valence parties will be obliged to adopt relatively 'radical' positions in order to maximize their vote shares. The correlation between the two electoral axes was much higher in 1988 ($r^2 = 0.70$) than in 1992 or 1996 (when $r^2 \approx 0.47$). It is worth observing that as $r^2$ falls from 1988 to 1996, a counter-clockwise rotation of the principal axis can be observed. This can be seen in the change of the principal eigenvector from $(1.0, 0.5)$ in 1988, to $(1.0, 0.55)$ in 1992, and then to $(1.0, 0.8)$ in 1996. Notice also that the total electoral variance increased from 1988 to 1992 and again from 1992 to 1996. Indeed, Fig. 1 indicates that there is evidence of bifurcation in the electoral distribution in 1996.

In comparing Fig. 1, of the estimated party positions, and Fig. 4, of simulated equilibrium positions, there is a notable disparity, particularly in the position of Shas. In 1996, Shas was pivotal between Labor and Likud, in the sense that to form a winning coalition government either of the two larger parties required the support of Shas. The location of Shas in Fig. 1 suggests that it was able to bargain effectively over policy, and presumably perquisites. Indeed, it is plausible that the leader of Shas was aware of this situation, and incorporated this awareness in the utility function of the party. The relationship between the empirical work and the formal model, together with the possibility of strategic reasoning of this kind, suggests that the true but unknown utility functions for the political game are perturbations of the expected vote share functions. These perturbations may be modelled by taking into account the post-election coalition possibilities, as well as the effect of activist support for the party.
6. Concluding Remarks
The purpose of this paper has been to argue that the 'centripetal' tendency of political strategy, inferred from the spatial voting model, is contradicted by the empirical evidence. It is suggested, instead, that a valence model can be utilized both to predict individual vote choice and to interpret the political game. The necessary condition for local Nash equilibrium at the electoral mean also gives a necessary condition for a vote-maximizing Nash equilibrium at the mean. The validity of the 'mean voter theorem' depends essentially on a limit to the electoral variance, as well as on bounds on the variation of valence between the parties. The evidence from Israel, with its highly proportional electoral system, suggests that the mean voter theorem can generally be expected to fail.
Empirical models for other polities can be used to examine the variation between predicted and estimated positions, and these differences would, in principle, allow us to infer the true nature of the utility functions of party leaders. It is plausible that party calculations are affected by considerations of activist support, as suggested by Aldrich (1983a,b). In principle, the model of exogenous valence proposed here can be extended by allowing valence to be affected by activist support in the electorate. The degree to which parties will locate themselves far from the center will then depend on the nature of the activist calculus. Under proportional electoral rules, it is likely that there will be many potential activist coalitions, and their interaction will determine the degree of political fragmentation, as well as the configuration of small radical parties. Refining the formal model in this way may suggest how the simple vote-maximization model can be extended to provide a better account of party position taking.
Acknowledgements
This paper is based on research supported by NSF Grant SES 024173. The empirical and computational work mentioned here was undertaken in collaboration with Itai Sened. The tables and figures are taken from Schofield and Sened (2006) with permission of Cambridge University Press.
References
Adams, J. (1999a) Multiparty Spatial Competition with Probabilistic Voting, Public Choice 99: 259–274.
Adams, J. (1999b) Policy Divergence in Multicandidate Probabilistic Spatial Voting, Public Choice 100: 103–122.
Adams, J. (2001) Party Competition and Responsible Party Government, University of Michigan Press.
Adams, J. and Merrill, S. (1999) Modeling Party Strategies and Policy Representation in Multiparty Elections: Why are Strategies so Extreme? American Journal of Political Science 43: 765–791.
Adams, J. and Merrill, S. (2000) Spatial Models of Candidate Competition and the 1988 French Presidential Election: Are Presidential Candidates Vote Maximizers? Journal of Politics 62: 729–756.
Adams, J. and Merrill, S. (2001) Computing Nash Equilibria in Probabilistic Multiparty Spatial Models with Non-policy Components, Political Analysis 9: 347–361.
Adams, J. and Merrill, S. (2005) Policy Seeking Parties in a Parliamentary Democracy with Proportional Representation: A Valence-Uncertainty Model, mimeo, University of California at Davis.
Adams, J., Merrill, S. and Grofman, B. (2005) A Unified Theory of Party Competition, Cambridge University Press.
Aldrich, J. (1983a) A Spatial Model with Party Activists: Implications for Electoral Dynamics, Public Choice 41: 63–100.
Aldrich, J. (1983b) A Downsian Spatial Model with Party Activists, American Political Science Review 77: 974–990.
Aldrich, J. (1995) Why Parties? Chicago University Press.
Aldrich, J. and McGinnis, M. (1989) A Model of Party Constraints on Optimal Candidate Positions, Mathematical and Computer Modelling 12: 437–450.
Alvarez, M. and Nagler, J. (1998) When Politics and Models Collide: Estimating Models of Multi-candidate Elections, American Journal of Political Science 42: 55–96.
Alvarez, M., Nagler, J. and Bowler, S. (2000) Issues, Economics, and the Dynamics of Multiparty Elections: The British 1987 General Election, American Political Science Review 94: 131–150.
Alvarez, M., Nagler, J. and Willette, J. (2000) Measuring the Relative Impact of Issues and the Economy in Democratic Elections, Electoral Studies 19: 237–253.
Ansolabehere, S. and Snyder, J. (2000) Valence Politics and Equilibrium in Spatial Election Models, Public Choice 103: 327–336.
Aragones, E. and Palfrey, T. (2002) Mixed Equilibrium in a Downsian Model with a Favored Candidate, Journal of Economic Theory 103: 131–161.
Aragones, E. and Palfrey, T. (2005) Spatial Competition between Two Candidates of Different Quality: The Effects of Candidate Ideology and Private Information, in D. Austen-Smith and J. Duggan (eds) Social Choice and Strategic Decisions, Springer.
Arian, A. (1998) The Second Republic: Politics in Israel, Chatham House.
Arian, A. and Shamir, M. (1990) The Election in Israel: 1988, SUNY Press.
Arian, A. and Shamir, M. (1995) The Election in Israel: 1992, SUNY Press.
Arian, A. and Shamir, M. (1999) The Election in Israel: 1996, SUNY Press.
Austen-Smith, D. and Banks, J. (1998) Social Choice Theory, Game Theory and Positive Political Theory, Annual Review of Political Science 1: 259–287.
Banks, J. and Duggan, J. (2000) A Bargaining Model of Collective Choice, American Political Science Review 94: 73–88.
Banks, J. and Duggan, J. (2005) The Theory of Probabilistic Voting in the Spatial Model of Elections, in D. Austen-Smith and J. Duggan (eds) Social Choice and Strategic Decisions, Springer.
Budge, I., Robertson, D. and Hearl, D. (eds) (1987) Ideology, Strategy and Party Change: A Spatial Analysis of Post-War Election Programmes in Nineteen Democracies, Cambridge University Press.
Calvert, R. (1985) Robustness of the Multidimensional Voting Model: Candidates, Motivations, Uncertainty and Convergence, American Journal of Political Science 29: 69–85.
Chib, S. and Greenberg, E. (1996) Markov Chain Monte Carlo Simulation Methods in Econometrics, Econometric Theory 12: 409–431.
Clarke, H., Stewart, M. and Whiteley, P. (1997) Tory Trends, Party Identification and the Dynamics of Conservative Support since 1992, British Journal of Political Science 26: 299–318.
Clarke, H., Stewart, M. and Whiteley, P. (1998) New Models for New Labour: The Political Economy of Labour Support, American Political Science Review 92: 559–575.
Clarke, H. et al. (2004) Political Choice in Britain, Oxford University Press.
Condorcet, N. (1795) Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix, Imprimerie Royale.
Coughlin, P. (1992) Probabilistic Voting Theory, Cambridge University Press.
Cox, G. (1990) Centripetal and Centrifugal Incentives in Alternative Voting Institutions, American Journal of Political Science 34: 903–935.
Cox, G. (1997) Making Votes Count, Cambridge University Press.
Daalder, H. (1984) In Search of the Center of European Party Systems, American Political Science Review 78: 92–109.
Dow, J. K. and Endersby, J. W. (2004) Multinomial Probit and Multinomial Logit: A Comparison of Choice Models for Voting Research, Electoral Studies 23: 107–122.
Downs, A. (1957) An Economic Theory of Democracy, Harper and Row.
Duverger, M. (1954) Political Parties: Their Organization and Activity in the Modern State, Wiley.
Enelow, J. and Hinich, M. (1982) Nonspatial Candidate Characteristics and Electoral Competition, Journal of Politics 44: 115–130.
Enelow, J. and Hinich, M. (1984a) Probabilistic Voting and the Importance of Centrist Ideologies in Democratic Elections, Journal of Politics 46: 459–478.
Enelow, J. and Hinich, M. (1984b) The Spatial Theory of Voting, Cambridge University Press.
Enelow, J. and Hinich, M. (1989a) A General Probabilistic Spatial Theory of Elections, Public Choice 61: 101–114.
Enelow, J. and Hinich, M. (1989b) The Location of American Presidential Candidates: An Empirical Test of a New Spatial Model of Elections, Mathematical and Computer Modelling 12: 461–470.
Glasgow, G. and Alvarez, M. (2003) Voting Behavior and the Electoral Context of Government Formation, Electoral Studies 24: 245–264.
Groseclose, T. (2001) A Model of Candidate Location when One Candidate has a Valence Advantage, American Journal of Political Science 45: 862–886.
Hinich, M. (1977) Equilibrium in Spatial Voting: The Median Voter Result is an Artifact, Journal of Economic Theory 16: 208–219.
ISEIUM (1983) European Elections Study: European Political Parties Middle Level Elites, Europa Institut, Mannheim.
Kass, R. and Raftery, A. (1995) Bayes Factors, Journal of the American Statistical Association 90: 773–795.
Laver, M. and Schofield, N. (1990) Multiparty Government, Oxford University Press.
Laver, M. and Shepsle, K. (1996) Making and Breaking Governments, Cambridge University Press.
Lin, T.-M., Enelow, J. and Dorussen, H. (1999) Equilibrium in Multicandidate Probabilistic Spatial Voting, Public Choice 98: 59–82.
Madison, J. (1787) The Federalist No. 10, in J. Rakove (ed.) James Madison: Writings, The Library of America.
McKelvey, R. and Patty, J. (2006) A Theory of Voting in Large Elections, Games and Economic Behavior 57: 155–180.
McLennan, A. (1998) Consequences of the Condorcet Jury Theorem for Beneficial Information Aggregation by Rational Agents, American Political Science Review 92: 413–418.
Merrill, S. and Grofman, B. (1999) A Unified Theory of Voting, Cambridge University Press.
Merrill, S., Grofman, B. and Feld, S. (1999) Nash Equilibrium Strategies in Directional Models of Two-Candidate Spatial Competition, Public Choice 98: 369–383.
Miller, G. and Schofield, N. (2003) Activists and Partisan Realignment in the U.S., American Political Science Review 97: 245–260.
Poole, K. and Rosenthal, H. (1984) U.S. Presidential Elections 1968–1980: A Spatial Analysis, American Journal of Political Science 28: 283–312.
Quinn, K., Martin, A. and Whitford, A. (1999) Voter Choice in Multiparty Democracies, American Journal of Political Science 43: 1231–1247.
Quinn, K. and Martin, A. (2002) An Integrated Computational Model of Multiparty Electoral Competition, Statistical Science 17: 405–419.
Rabinowitz, G., Macdonald, S. and Listhaug, O. (1991) New Players in an Old Game: Party Strategy in Multiparty Systems, Comparative Political Studies 24: 147–185.
Rakove, J. (ed.) (1999) James Madison: Writings, Library of America.
Schofield, N. (2001) Generic Existence of Local Political Equilibria, in M. Lasonde (ed.) Approximation, Optimization and Mathematical Economics, Springer, 297–308.
Schofield, N. (2003) Mathematical Methods in Economics and Social Choice, Springer.
Schofield, N. (2006a) Equilibria in the Spatial Stochastic Model of Voting with Party Activists, The Review of Economic Design 10: 183–203.
Schofield, N. (2006b) Architects of Political Change: Constitutional Quandaries and Social Choice Theory, Cambridge University Press.
Schofield, N. (2007a) The Mean Voter Theorem: Necessary and Sufficient Conditions for Convergent Equilibrium, Review of Economic Studies 74: 965–980.
Schofield, N. (2007b) Political Equilibrium with Electoral Uncertainty, Social Choice and Welfare 28: 461–490.
Schofield, N. et al. (1998) Multiparty Electoral Competition in the Netherlands and Germany: A Model based on Multinomial Probit, Public Choice 97: 257–293.
Schofield, N. and Sened, I. (2005a) Multiparty Competition in Israel: 1988–1996, British Journal of Political Science 36: 635–663.
Schofield, N. and Sened, I. (2005b) Modeling the Interaction of Parties, Activists and Voters: Why is the Political Center so Empty? The European Journal of Political Research 44: 355–390.
Schofield, N. and Sened, I. (2006) Multiparty Democracy: Elections and Legislative Politics, Cambridge University Press.
Stokes, D. (1963) Spatial Models and Party Competition, American Political Science Review 57: 368–377.
Stokes, D. (1992) Valence Politics, in D. Kavanagh (ed.) Electoral Politics, Oxford University Press.
Strom, K. (1990) A Behavioral Theory of Competitive Political Parties, American Journal of Political Science 34(3): 565–598.
Train, K. (2003) Discrete Choice Methods with Simulation, Cambridge University Press.
15. Closeness Counts in Social Choice
Tommi Meskanen
Department of Mathematics, University of Turku, Finland
Hannu Nurmi
Department of Political Science, University of Turku, Finland
1. Introduction
A working democratic system of government is based on voting and elections. A glance at the literature reveals an astonishing variety of systems used for electing persons to political office, for choosing policy alternatives, and for enacting legislation. While it is undoubtedly true that voting is only a necessary condition for democracy, it is remarkable how many and how different the systems used for apparently the same purpose are, viz. to find out the 'will of the people'. In the early 1980s Riker called attention to the fact that not only are the voting systems apparently different, but the voting outcomes may also vary a great deal depending on the voting system used (Riker 1982). In extreme cases, any alternative under consideration can be rendered the winner by applying a suitable procedure, without changing anyone's opinion about the alternatives. Although extreme and theoretical, these examples show that the choice of voting system is an important aspect of democratic rule. All voting systems seem, however, to satisfy the minimal requirement of democratic choice, viz. that the outcome depends on the votes cast. So, why all this variety? This article dwells on this question. We shall first trace the historical development of voting procedures, giving a brief tour through recent history and mentioning the authors who have brought new voting systems into the discussion. Thereafter, we argue that most systems can be characterized by a goal state, i.e. a consensus, and that each procedure measures the distance from this consensus by a metric.
2. A Sketch of the History of Voting Systems
No historical record of the invention of the first voting methods exists, but a version of plurality voting was commonly used in one of the city-states of ancient Greece, Sparta.
Aleskerov (1999: 1) cites the account of the Greek historian Plutarch on the election of the Council of Elders: 'Whoever [of the candidates] was greeted with the most and loudest shouting, him they declared elected'. Sparta was probably not the first place where this method was used, but earlier reports of its use have not come down to us. The next record emerges from the first century A.D., in a letter written by Pliny the Younger to Titus Aristo. This letter was brought to wider scholarly attention by R. Farquharson (1969). It reveals that the Roman senate resorted to several voting systems, plurality voting being one of them. The letter hints that the voting systems used today in legislative decision making in parliaments, notably the amendment and successive procedures, were also known in the Roman senate.
The first systematic treatises that include voting system proposals appeared in the late 18th century. In the middle ages there were a few scattered accounts suggesting voting rules (McLean and Urken 1995), but these did not typically include systematic comparisons with existing systems, let alone theoretical discussion of their properties. But in 1770 Jean-Charles de Borda presented to the Royal French Academy a strong criticism of the prevailing plurality voting system, augmented with a proposal for a new system, currently known as the Borda count (BC, for brevity) (McLean and Urken 1995: 81–89). Given a set of individual complete and transitive preference relations (rankings) over the alternatives, BC works as follows. Each voter gives $k-1$ points to his/her (hereafter his) first-ranked alternative, $k-2$ to the second-ranked one, ..., and zero points to the last-ranked one. The Borda score of each alternative is the sum of the points it has received from all voters. The collective Borda ranking is the order of the Borda scores: the larger the score, the higher the alternative in the collective ranking.
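A minimal sketch of the Borda count just described, with k alternatives and rankings listed from best to worst; the five-voter profile is illustrative only.

```python
def borda_scores(profile):
    """profile: list of rankings, each a list of alternatives from best to worst."""
    k = len(profile[0])
    scores = {x: 0 for x in profile[0]}
    for ranking in profile:
        for position, x in enumerate(ranking):
            scores[x] += (k - 1) - position  # k-1 points for first place, ... , 0 for last
    return scores

profile = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
print(borda_scores(profile))  # {'A': 6, 'B': 7, 'C': 2}: Borda ranking B, A, C
```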
Borda's contemporary, Marquis de Condorcet, proposed some fifteen years later several voting systems, of which one, today called Condorcet's practical method, seems to have had practical applications. This method calls for each voter to indicate his first- and second-ranked alternatives. Should one of the alternatives be ranked first by more than 50% of the voters, it is elected. Otherwise, the alternative which has been ranked first or second by more voters than any other alternative is elected. Nanson (1882) calls this Condorcet's practical method and reports that it was used in Geneva.
Yet another system, called the least reversal method, is associated with the name of Condorcet. Suppose that pairwise majority comparisons yield a cycle. Then one reverses the majority opinion with smallest support and checks whether the resulting order contains other cycles. If it does, one reverses the pair with the next smallest support, and so on. Eventually an acyclic majority relation emerges. This is adopted as the collective decision.
Nearly a hundred years later C. L. Dodgson wrote a series of pamphlets analyzing the voting systems used in the colleges of Oxford (McLean and Urken 1995).
At least the following methods are mentioned in his works: methods of simple or absolute majority, method of pairwise elimination, method of elimination, method of marks (i.e. cumulative voting or range voting), method of nomination (i.e. successive procedure), method of procedure (i.e. Borda count), and a method today called Dodgson's method. The expressions in parentheses are the terms currently used in discussing the methods in question. Dodgson's own proposal was presented in the pamphlet dated March 1876.¹ It amounts to electing a Condorcet winner whenever one exists, and otherwise looking for the alternative that is closest to being a Condorcet winner, closest here meaning that the alternative needs a smaller number of pairwise preference changes by voters than any other alternative.
¹ In fact, it might be slightly unfair to attribute this proposal to Dodgson since he explicitly states that in the absence of a Condorcet winner the outcome should be 'no Election'. However, having said this he proceeds to show that the two ways of making a choice from a majority cycle, the plurality and runoff systems, are erroneous in not electing the candidate that can become the Condorcet winner after a smaller number of pairwise preference changes than any other candidate. Hence, calling his proposal Dodgson's method has, after all, a modicum of plausibility.
In contrast to Dodgson, his contemporary E. J. Nanson (1882) was well aware of the preceding developments in the theory of voting procedures. He discussed voting systems in three categories. Class one (single scrutiny): single vote method, double vote method, Borda count. Class two (voters have to vote more than once): French method of double elections. Class three (more than one scrutiny, but the voters vote only once, preferentially): Ware's method, the Venetian method (two ballots: on the first the voters vote for two candidates, on the second for one), Condorcet's practical method (two ballots: on the first each voter has one vote, on the second two votes; no second ballot if a candidate receives an absolute majority on the first ballot), and the proposed method, i.e. Nanson's method. The last-mentioned is a clever Borda elimination method based on the observation that, although a Condorcet winner is not necessarily the Borda winner, the former necessarily receives a higher-than-average Borda score. Hence, by eliminating the alternatives that get an average or smaller Borda score, one can rest assured that an eventual Condorcet winner is not eliminated. By continuing the elimination one finally ends up with the Nanson winner (or a tie). There is a version of Nanson's method in which candidates are eliminated one by one, so that at each stage of computation the candidate with the lowest Borda score is deleted. This version is known as Baldwin's method.
After Nanson's article the study of voting procedures became eclipsed by increasing interest in proportional election systems. Of particular interest is the work of Thomas Hare (1865). The system he devised is nowadays called
the Hare system.² It is an elimination method whereby one looks for a candidate that is ranked first by more than 50% of the voters. If there is no such candidate, the one whom the smallest number of voters rank first is eliminated. The elimination is continued until one of the non-eliminated candidates is top-ranked by more than half of the electorate.
The single-winner voting systems were not the topic of a major treatise until the 1940s, when Duncan Black published some articles on the subject. Black's magnum opus was published in 1958. His suggestion for a voting procedure is preferential balloting with the eventual Condorcet winner elected; otherwise the Borda winner is elected. The method thus combines the decisiveness of the Borda count with the binary winning intuition of Condorcet.
The voting system devised by Kemeny (1959) is explicitly based on the search for the consensus that is closest to the prevailing distribution of opinions. Once individual preference rankings are given, one looks for the (consensus) preference ranking that would result from the given ones after the smallest number of binary changes of opinion by individual voters.
Slater (1961) suggests a method that is very similar to Kemeny's. Given a preference profile, one constructs the corresponding tournament matrix, i.e. a 0-1 matrix where a 1 in the i-th row and j-th column indicates that the i-th alternative defeats the j-th one by a majority; otherwise the (i, j) entry is 0. This is the observed tournament matrix over the alternatives, which may or may not be cyclic. One then compares this matrix with all those tournaments that represent complete and transitive relations over the same alternatives. Finally, one computes the distance of the observed matrix to all those representing complete and transitive rankings. The distance is measured by the number of binary switches of preferences needed to transform the observed matrix into one representing a complete and transitive relation (Nurmi 2002).
In the 1970s the approval voting system was introduced by Steven Brams (1976).³ In this system every voter can give each alternative either one or zero votes. The alternative with the largest vote sum is elected.
Perhaps the first systematic comparison of voting systems was published by Straffin (1980). This study introduces several performance criteria for voting systems. The systems attributed to Hare and Coombs are also analyzed in the book. Coombs's system works exactly like Hare's except that the candidates with the largest number of last ranks are eliminated.
² There are several minor variations of the system, the best known being the single transferable vote, which is designed for multi-member constituencies. In Australia, the Hare system is known as the Hare-Spence system and in Scandinavia as the Andre system. See Lakeman and Lambert (1955) and Doron (1979). The history of the system is briefly described in Hill (1988).
³ An account of how several groups of scholars came up with the same idea roughly simultaneously is given in Brams and Fishburn (1983).
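A rough sketch of the Hare and Coombs elimination systems described above, assuming strict rankings and leaving ties unhandled; the nine-voter profile is illustrative only.

```python
def eliminate(profile, coombs=False):
    """Return the winner under Hare (default) or Coombs elimination."""
    profile = [list(r) for r in profile]
    while True:
        firsts = {x: 0 for x in profile[0]}
        for r in profile:
            firsts[r[0]] += 1
        for x, f in firsts.items():
            if f > len(profile) / 2:      # top-ranked by more than half: elected
                return x
        if coombs:
            lasts = {x: 0 for x in profile[0]}
            for r in profile:
                lasts[r[-1]] += 1
            loser = max(lasts, key=lasts.get)   # most last places
        else:
            loser = min(firsts, key=firsts.get)  # fewest first places
        profile = [[x for x in r if x != loser] for r in profile]

votes = [["A","B","C"]]*6 + [["C","B","A"]]*5 + [["B","C","A"]]*4
print(eliminate(votes), eliminate(votes, coombs=True))  # C B: the rules disagree
```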
In late 1970’s Richelson (1975, 1978a, 1978b, 1979, 1981) conducted an extensive survey and analytic comparison of voting systems introducing, i.a. reverse plurality, nowadays better known as antiplurality system. In this system each voter’s last ranked alternative gets zero votes, while all the others receive one vote from the voter. The alternative with the largest vote sum wins. The above mentioned are all voting systems where each voter casts a ballot containing either ranking of candidates, a list of approved candidates or the name (or number) of a single (most-preferred) candidate. In his widely cited text Riker (1982) discusses two other systems both of which require utility values of alternatives. Bentham’s method defines the winner as the alternative with the largest sum of utility values, while the Nash winner is the alternative with the largest product of utility values. The utility scales are fixed so that 1 is the value of the first ranked and 0.5 the value of the last ranked alternative. 4 An explicitly consensus oriented system is due to Litvak (1982). Given a preference profile over k alternatives, one first agrees on the numbering of alternatives 1,…, k . Then each individual preference ranking is transformed into a vector of k components so that component i indicates how many alternatives are placed ahead of the i-th one in the ranking under consideration. Each such vector thus consists of numbers 0,…, k 1. Litvak’s rule looks for the k-component vector V that is closest to the observed preferences in the sense that the sum of component-wise absolute differences between the reported preference vectors and V is minimal. For example, in a 5-person preference profile where three persons have the preference A ; B ; C and two the preference B ; C ; A , the Litvak sums of all six rankings are: A ; B ;C 8
$B \succ A \succ C$: 10
$C \succ A \succ B$: 20
$A \succ C \succ B$: 14
$B \succ C \succ A$: 12
$C \succ B \succ A$: 16
Thus, the Litvak ranking is $A \succ B \succ C$.
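The small sketch below reproduces the Litvak sums computed above for this profile; function names are ours. Note that for the same profile the Borda ranking is $B \succ A \succ C$ (scores 7, 6, 2), which illustrates the point made later that Litvak's rule differs from the Borda count.

```python
from itertools import permutations

def positions(ranking):
    # component for x: number of alternatives placed ahead of x
    return {x: i for i, x in enumerate(ranking)}

def litvak_sum(candidate, profile):
    c = positions(candidate)
    return sum(abs(c[x] - positions(r)[x]) for r in profile for x in r)

profile = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2
for r in sorted(permutations("ABC"), key=lambda r: litvak_sum(r, profile)):
    print("".join(r), litvak_sum(r, profile))
# ABC 8, BAC 10, BCA 12, ACB 14, CBA 16, CAB 20: the Litvak ranking is ABC
```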
Tideman's (1987) method is a sequential one, proceeding from the largest pairwise victory of an alternative against another. Assume that $x_1$ and $x_2$ form such a pair. Denoting by $v_i$ the number of votes $x_i$ receives, this means that $v_1 - v_2$ is the largest difference of votes. We preserve this pair by drawing an arrow from $x_1$ to $x_2$. We then look for the next largest victory margin and preserve it similarly. Proceeding in this manner we may encounter a situation where preserving a pair would create a cycle. In such situations the respective relation between the alternatives is ignored, i.e. no arrow is drawn. Otherwise we proceed in decreasing order of victory margins. Eventually all alternatives are located in a directed graph with no cycles.
The root of the graph, that is, the starting point, is the winning alternative. Since by construction cycles are excluded, Tideman's method yields a ranking over the set of alternatives.
In the late 1980s I. D. Hill (1988) proposed a variation of Hare's system. In single-winner elections it proceeds exactly as Hare's method, but exempts the eventual Condorcet winner from elimination. In other words, in case none of the remaining candidates is ranked first by more than 50% of the voters, the candidate with the smallest number of first ranks is eliminated, unless this candidate happens to be the Condorcet winner among the remaining candidates. Since the Condorcet winner is not eliminated, it wins.
Schulze's (2003) method, in turn, determines the 'beatpaths' from each alternative x to every other alternative y. A path consists of the ordered alternative pairs in a chain $x, w_1, \ldots, w_j, y$ (where $w_1, \ldots, w_j$ are alternatives) that leads from x to y. The strength of a beatpath is the smallest margin of victory in the chain. There are typically several beatpaths leading from one alternative to another; each of them, thus, has the strength of its weakest link. Denote now by $S_{xy}$ the strength of the strongest path from x to y. In other words, all but the strongest path between any two alternatives are ignored. Schulze's (potential) winner is the alternative $x_S$ for which $S_{x_S y} \geq S_{y x_S}$ for all alternatives y.⁵ Schulze proves that at least one such candidate always exists. Moreover, the relation $S_{xy} \geq S_{yx}$ is transitive: if $S_{xy} \geq S_{yx}$ and $S_{yz} \geq S_{zy}$, then also $S_{xz} \geq S_{zx}$, for all alternatives x, y, z.
⁵ In case there are several potential winners, Schulze (2003) suggests a tie-breaking procedure which, however, will not be discussed here.
The above list of systems is fairly long, but not exhaustive. The existence of such a variety of systems poses the question of why we have them. Each system is minimally democratic in the sense that the outcome depends, inter alia, on the opinions expressed by the voters. Surely there must then be some more profound motivation for these systems, a motivation that accounts for the plausibility of each system as distinguished from the others. In the following we shall approach some of the above systems by looking at the consensus state at which they are aimed. It will be seen that the differences between many systems can be accounted for by differences in the consensus state they are based on and by the method of measuring the distance from that state.
3. Consensus States
Apart from Dodgson's, Slater's and Kemeny's rules, the above systems do not explicitly contain the idea that the voting outcome should be the consensus state nearest to the reported preferences. Yet, upon closer scrutiny this idea can be associated with most of the systems outlined above. To focus on Kemeny, the state from which the distance to the observed profile is
measured is one of unanimity regarding all positions of the ranking of alternatives, i.e. the voters are in agreement about which alternative is placed first, which second, etc., throughout all positions.
Another possible consensus state is one where unanimity prevails concerning which alternative is the best. For example, the plurality voting system can be viewed in the light of this consensus state: the observed preference profile is the further away from the consensus, the more opinions have to be changed in order to make one alternative unanimously first-ranked. In the context of Dodgson's method, the consensus is reached when there is a Condorcet winner in the profile. If there is one in the observed profile, we are already there; in case there is not, one looks for the number of opinion changes needed to make each alternative the Condorcet winner. Yet another possible idea of consensus is one where there is a Condorcet ordering. A Condorcet ordering over the alternatives exists if for every subset of alternatives there is a Condorcet winner. In other words, there is an alternative, say $x_1$, that is a Condorcet winner in the entire candidate set X (and consequently in every subset of X to which $x_1$ belongs), there is a Condorcet winner, say $x_2$, in $X \setminus \{x_1\}$, and so on.
It turns out that with these interpretations of the consensus state and the associated distance measures we can characterize many of the above voting systems. Obviously, the three different goal consensus states alone are not adequate to explain the variety of procedures. Hence, we focus next on the distance measures.
4. Metrics
We define a distance function over a set P (of points, for example) as any function $d: P \times P \to \mathbb{R}_+$, where $\mathbb{R}_+$ is the set of non-negative real numbers. A distance function $d_m$ is called a metric if the following conditions are met for all elements x, y, z of P:
1. $d_m(x, x) = 0$,
2. if $x \neq y$, then $d_m(x, y) > 0$,
3. $d_m(x, y) = d_m(y, x)$, and
4. $d_m(x, z) \leq d_m(x, y) + d_m(y, z)$.
Substituting preference relations for the elements in the above conditions, we can extend the concept of a distance function to preference relations. But these conditions leave open the way in which numerical values consistent with them are assigned to distances between two relations. Kemeny's proposal is the following (Baigent 1987a,b; Kemeny 1959). Let R and R′ be two rankings. Then their distance is:
$d_K(R, R') = |\{(x, y) \in X^2 : R(x) > R(y) \text{ and } R'(y) > R'(x)\}|$
Here we denote by R(x) the number of alternatives worse than x in a ranking R. This is called the inversion metric. The distance between two rankings is the number of inversions of consecutive alternatives needed to transform one ranking into the other. This can easily be seen because:
• if two consecutive options are in a different order, they must be inverted at some point;
• if two consecutive options are in an identical order, they need not be inverted.
The distance between two preference profiles of the same size, i.e. involving an equal number of voters and the same alternatives, is then the sum of the distances between the individual rankings. That is,
$d_K(P, P') = \sum_{i=1}^{n} d_K(R_i, R_i')$
where the profile P consists of rankings $R_1, \ldots, R_n$ and the profile P′ of rankings $R'_1, \ldots, R'_n$. Similarly, we can measure the distance between a profile P and a set of profiles S:
$d_K(P, S) = \min_{P' \in S} d_K(P, P')$
We can generalize any distance function of rankings in this way. Let U(R) denote a unanimous profile where every voter's ranking is R. Kemeny's rule results in the ranking $R^*$ such that
$d_K(P, U(R^*)) \leq d_K(P, U(R)) \quad \forall R \in \mathcal{R} \setminus \{R^*\}$
where P is the observed profile and $\mathcal{R}$ denotes the set of all possible rankings. If all the inequalities above are strict, then $R^*$ is the only winner. The Kemeny winning profile is sometimes called the Kemeny median. Barthélemy and Monjardet (1981) show that this is, indeed, a natural way to look at Kemeny's rule. Let $R_1$ and $R_2$ be two binary relations over the same set of alternatives with $R_1 \subseteq R_2$. In other words, all alternative pairs that are in relation $R_1$ with each other are also in relation $R_2$ with each other, but not necessarily the other way around. Denote by $[R_1, R_2]$ the interval between $R_1$ and $R_2$, i.e. the set of all relations S such that $R_1 \subseteq S \subseteq R_2$. Let now i range over the individuals 1 to n and let $(x, y) \in R_i$ denote i's preference for x over y. We define $R_m$ as follows: $(x, y) \in R_m$ if and only if $(x, y) \in R_i$ for at least $[(n+2)/2]$ individuals. Similarly, let $R_e$ be defined so that $(x, y) \in R_e$ if and only if $(x, y) \in R_i$ for at least $[(n+1)/2]$ individuals. Here [x] denotes the integer part of the number x. It can be seen immediately that $R_m \subseteq R_e$.
Barthélemy and Monjardet define the median interval in the profile as the interval $[R_m, R_e]$, and a relation is called a median relation if and only if it is in the median interval. They then show that a relation is a median relation in a profile if and only if it minimizes the distance between itself and the profile when the distance (or, in Barthélemy and Monjardet's terminology, remoteness) is measured with $d_K$. So the Kemeny winning relation is in a natural sense the median relation.
To turn now to the Borda count, let us consider an observed profile P. For a candidate x we denote by W(x) the set of all profiles where x is first-ranked in every voter's ranking. Clearly, in all these profiles x gets the maximum Borda points. We consider these as the consensus states for the Borda count (Farkas and Nitzan 1979; Nitzan 1981). For a candidate x, the number of alternatives above it in any ranking of P equals the number of points deducted from the maximum; this is also the number of inversions needed to get x into the winning position in every ranking. Thus, using the metric above, $w_B$ is the Borda winner if
$d_K(P, W(w_B)) \leq d_K(P, W(x)) \quad \forall x \in X \setminus \{w_B\}$
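As a brute-force illustration (profile and names are ours, not from the chapter), the sketch below computes both distance-based outcomes just defined: the Kemeny ranking, which minimizes the summed inversion distance to a unanimous profile U(R), and the Borda winner, which minimizes $d_K(P, W(x))$, i.e. the total number of alternatives ranked above x.

```python
from itertools import combinations, permutations

def d_K(r1, r2):
    """Inversion (Kendall tau) distance between two strict rankings."""
    p1 = {x: i for i, x in enumerate(r1)}
    p2 = {x: i for i, x in enumerate(r2)}
    return sum(1 for x, y in combinations(r1, 2)
               if (p1[x] - p1[y]) * (p2[x] - p2[y]) < 0)

profile = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2

# Kemeny: the ranking whose unanimous profile U(R) is closest to P.
kemeny = min(permutations("ABC"),
             key=lambda R: sum(d_K(R, r) for r in profile))

# Borda: d_K(P, W(x)) equals the total number of alternatives above x.
borda_winner = min("ABC", key=lambda x: sum(r.index(x) for r in profile))

print(kemeny, borda_winner)  # ('A', 'B', 'C') B
```

With the same metric but different goal states, U(R) versus W(x), the Kemeny ranking and the Borda winner can differ, as they do on this profile.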
The plurality system is obviously also directed at the same consensus state as the Borda count, but its metric is different. Rather than counting the number of pairwise preference changes needed to make a given alternative unanimously first-ranked, it minimizes the number of individuals having different alternatives ranked first. To make this intuition precise, we define a discrete metric as follows:
$d_d(R, R') = \begin{cases} 0 & \text{if } R = R' \\ 1 & \text{otherwise.} \end{cases}$
As above, the distance between two preference profiles of the same size is
$d_d(P, P') = \sum_{i=1}^{n} d_d(R_i, R_i')$
where the profile P consists of rankings $R_1, \ldots, R_n$ and the profile P′ of rankings $R'_1, \ldots, R'_n$. That is, the distance indicates the number of rankings that differ in the two profiles. Note that we always consider profiles in such a way that the order of the rankings within a profile is irrelevant. Also, the distance between a profile P and a set of profiles S is, similarly,
$d_d(P, S) = \min_{P' \in S} d_d(P, P')$
The unanimous consensus state in plurality voting is one where all voters have the same alternative ranked first. With the metric, in turn, we tally, for each alternative, the number of voters in the observed profile who do not have this alternative as their first-ranked one. The alternative for which this
number is smallest is the plurality winner. The plurality ranking coincides with the order of these numbers. Using this metric we have, for the plurality winner $w_p$,
$d_d(P, W(w_p)) \leq d_d(P, W(x)) \quad \forall x \in X \setminus \{w_p\}$
The only difference from the Borda winner is in the metric used.
Dodgson's system is based on a different idea of the goal state, viz. one where there is a Condorcet winner. For any candidate x we denote by C(x) the set of all profiles where x is the Condorcet winner. Provided that a Condorcet winner exists in the observed profile, the Dodgson winner is this alternative. Otherwise, one constructs, for each alternative x, a profile in C(x) which is obtained from the observed profile P by moving x up in one or several voters' preference orders so that x emerges as the Condorcet winner. Obviously, any alternative can thus be rendered a Condorcet winner. It is also clear that for each alternative there is a minimum number of such preference changes, each improving x's position vis-à-vis other alternatives, that are needed to make x the Condorcet winner. The Dodgson winner $w_D$ is then the alternative which is closest, in the sense of Kemeny's metric, to being the Condorcet winner. That is,
$d_K(P, C(w_D)) \leq d_K(P, C(x)) \quad \forall x \in X \setminus \{w_D\}$
Dodgson’s method is thus characterized by Kemeny’s inversion metric combined with a goal state where a Condorcet winner exists. Litvak’s procedure results in the ranking that is nearest to the observed one in terms of minimizing the sum (over individuals) of absolute rank position differences of alternatives in the former and the latter. As the position numbers play an important role in the Borda count, one would expect that Litvak’s method is similar to the Borda count. This is, however, not the case (Nurmi 2004). For example, Litvak’s procedure may end up with a Condorcet loser being ranked first which is never the case under the Borda count. Litvak’s procedure also differs from Kemeny’s. To see this, consider the distance of A ; B ; C ; D to C ; D ; A ; B , on the one hand, and to D ; C ; B ; A , on the other. In Kemeny’s sense, the latter difference is larger than the former, while in Litvak’s sense they are equidistant from A ; B ; C ; D . We define the Litvak metric formally as follows: d L (R , R a) ] R(x ) R a(x ) ] x X
This metric is sometimes called the Manhattan metric. The Litvak winning ranking $R_L$ has the property
$d_L(P, U(R_L)) \leq d_L(P, U(R)) \quad \forall R \in \mathcal{R} \setminus \{R_L\}$
exactly like the Kemeny winner except for the different metric.
The Litvak and Kemeny metrics are related in the following way:
• If a ranking R′ is derived from ranking R by only moving one candidate up (or down) by n steps, then $2d_K(R, R') = 2n = d_L(R, R')$.
• If a ranking R cannot be turned into ranking R′ by switching adjacent candidates without moving at least one candidate at some point first up and then down (or first down and then up), then $d_K(R, R') < d_L(R, R') < 2d_K(R, R')$.
Let us consider an example: the ranking $A \succ B \succ C$ and its reversal $C \succ B \succ A$. The distance between these orders is 3 using the Kemeny metric and 4 using the Litvak metric: $3 < 4 < 6$. Thus, because of the first property, we have for any P and x
$d_L(P, W(x)) = 2d_K(P, W(x))$
and
$d_L(P, C(x)) = 2d_K(P, C(x))$
In words, we find the same Borda count and Dodgson winners using either of the metrics.
For the sake of completeness, we briefly consider what we get if we combine the discrete metric with the consensus states U(R) or C(x). In the former case, simply the most popular ranking is selected, that is, the ranking with the least opposition. Although nowhere in use to our knowledge, this system would make sense in situations where there are strong dependencies between the alternatives, so that each ranking is a policy program. On this interpretation, the policy program with the most support would seem a plausible winner. The latter case, i.e. the one where C(x) is the consensus state, is more interesting. We can formally define the winner $w_Y$ as the option with the property
$d_d(P, C(w_Y)) \leq d_d(P, C(x)) \quad \forall x \in X \setminus \{w_Y\}$
In other words, we find the largest set of voters for which a Condorcet winner exists. This system has been attributed to H. P. Young (Smith 2005).
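The sketch below implements the two metrics side by side and checks both relations stated above; rankings are given best to worst, and the representation is ours.

```python
from itertools import combinations

def below(r):
    """R(x): the number of alternatives worse than x in ranking r."""
    k = len(r)
    return {x: k - 1 - i for i, x in enumerate(r)}

def d_K(r1, r2):  # inversion metric
    b1, b2 = below(r1), below(r2)
    return sum(1 for x, y in combinations(r1, 2)
               if (b1[x] - b1[y]) * (b2[x] - b2[y]) < 0)

def d_L(r1, r2):  # Litvak / Manhattan metric
    b1, b2 = below(r1), below(r2)
    return sum(abs(b1[x] - b2[x]) for x in r1)

print(d_K("ABC", "CBA"), d_L("ABC", "CBA"))  # 3 4: the reversal example, 3 < 4 < 6
print(d_K("ABC", "BCA"), d_L("ABC", "BCA"))  # 2 4: moving A down two steps, d_L = 2 d_K
```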
5. From Rankings to Matrices
Preference profiles provide relatively rich information about individual opinions. In some cases this information is substantially reduced. For instance, the winner may be determined on the basis of pairwise comparisons of alternatives. Sometimes only the winner of each comparison is recognized,
while in other systems the victory margin also plays a role in determining the overall winner. We denote by V the outranking matrix whose entry $V_{xy}$ indicates the number of voters in profile P preferring candidate x to candidate y. The diagonal entries are left blank. A metric on outranking matrices can now be defined as follows: if V and V′ are the outranking matrices of profiles P and P′, then
$d_V(P, P') = \frac{1}{2} \sum_{x, y \in X} |V_{xy} - V'_{xy}|$
In other words, the distance tells us how much the pairwise comparisons differ in the corresponding outranking matrices. This metric is very similar to the inversion metric and, indeed, we find the same Borda count (or, alternatively, Kemeny) winners using this metric instead.
Let us now turn to systems where the goal is a Condorcet winner. As pointed out above, in the Condorcet least-reversal system the winner is the candidate which can be turned into a Condorcet winner with the minimum number of reversals of pairwise comparisons. That is, the Condorcet least-reversal winner $w_{lr}$ is the candidate with the property
$d_V(P, C(w_{lr})) \leq d_V(P, C(x)) \quad \forall x \in X \setminus \{w_{lr}\}$
Copeland’s procedure has also a similar goal state as Dodgson’s and Condorcet’s least reversal one, namely, one with a Condorcet winner. Given an observed profile P one considers the corresponding tournament matrix T where entry Txy 1 if majority of the voters in profile P are preferring candidate x to candidate y. Otherwise Txy 0. Now, the Condorcet winner is seen as a row in T where all k 1 nondiagonal entries are ones. We define yet another, viz. tournament, distance between profiles P and P a with tournament matrices T and T a as d T (P , P a)
1 ]Txy T axy ] 2 x y X
The Copeland winner $w_C$ is the alternative that wins the largest number of comparisons with other candidates, i.e. has the smallest number of zeros in its row of the tournament matrix. Thus the winner $w_C$ is the candidate that comes closest to beating every other candidate; that is, using the distance above,
$d_T(P, C(w_C)) \leq d_T(P, C(x)) \quad \forall x \in X \setminus \{w_C\}$
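A compact sketch of the two matrices just used: V counts pairwise support, T records strict-majority wins, and the Copeland winner has the most ones in its row of T. The seven-voter profile is illustrative only.

```python
def matrices(profile, alts):
    n = len(profile)
    V = {(x, y): 0 for x in alts for y in alts if x != y}
    for r in profile:
        pos = {x: i for i, x in enumerate(r)}
        for x, y in V:
            if pos[x] < pos[y]:
                V[(x, y)] += 1           # one more voter prefers x to y
    T = {xy: int(V[xy] > n / 2) for xy in V}  # strict majority wins
    return V, T

profile = [("A","B","C")] * 4 + [("B","C","A")] * 2 + [("C","A","B")] * 1
V, T = matrices(profile, "ABC")
copeland = max("ABC", key=lambda x: sum(T[(x, y)] for y in "ABC" if y != x))
print(copeland)  # A: a row of all ones, so A is also the Condorcet winner here
```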
Obviously, the goal states of the Condorcet least-reversal system and Copeland's system are the same, but the metrics differ. The latter pays no attention to majority margins, while the former depends on them.⁶
⁶ See Klamler (2005) for another distance-based characterization of the Copeland rule.
The idea of the Condorcet winner can be extended to a Condorcet ordering. Let P be a profile and X′ a subset of the set of candidates X. We denote by $P|_{X'}$ the profile obtained when we consider only the candidates in X′ and dismiss all other candidates. We say that a profile P has a Condorcet ordering if for every $X' \subseteq X$ the profile $P|_{X'}$ has a Condorcet winner. The fact that there is a Condorcet ordering in a profile P over X means that there is a candidate, say $w_1 \in X$, that beats every other candidate in pairwise comparisons with a majority of votes. Moreover, there is a candidate, say $w_2$, that beats every other candidate except $w_1$ in pairwise comparisons with a majority of votes. In addition, there is a candidate, say $w_3$, beating every other candidate except $w_1$ and $w_2$ in pairwise comparisons with a majority of votes. And so on. The Condorcet ordering is $w_1 \succ w_2 \succ \cdots \succ w_k$, where $k = |X|$. It can easily be seen that P has a Condorcet ordering if and only if the tournament matrix of P does not have any cycles, i.e. there are no candidates $x_1, x_2, \ldots, x_i$ such that $T_{x_1 x_2} = T_{x_2 x_3} = \cdots = T_{x_{i-1} x_i} = T_{x_i x_1} = 1$.
We denote by Co(R) the set of all profiles that have the Condorcet ordering R. This set can be viewed as a natural goal state for a decision rule. But, as above, there are several different methods of determining which ordering is at the smallest distance from an observed profile that does not have a Condorcet ordering. Two candidates for such measures are (i) the number of pairwise comparisons that need to be reversed and (ii) the number of pairwise losses by majority that need to be turned into victories. That is, we could measure the differences in either the outranking or the tournament matrices. The latter of these methods is connected to the work of Slater (1961):
$d_T(P, Co(R_{Sl})) \leq d_T(P, Co(R)) \quad \forall R \in \mathcal{R} \setminus \{R_{Sl}\}$
The former method does not have a name:
$d_V(P, Co(R_{U1})) \leq d_V(P, Co(R)) \quad \forall R \in \mathcal{R} \setminus \{R_{U1}\}$
This method is thus based on minimizing the distance between the observed profile and one that has a Condorcet ordering, the crucial feature being that the distance is measured in terms of the outranking metric. At first sight this method resembles Dodgson's, but it differs from the latter in tallying only differences in the outranking matrices. The other three winners we can define with the metrics mentioned above are also yet to be named:
$d_K(P, Co(R_{U2})) \leq d_K(P, Co(R)) \quad \forall R \in \mathcal{R} \setminus \{R_{U2}\}$
$d_L(P, Co(R_{U3})) \leq d_L(P, Co(R)) \quad \forall R \in \mathcal{R} \setminus \{R_{U3}\}$
$d_d(P, Co(R_{U4})) \leq d_d(P, Co(R)) \quad \forall R \in \mathcal{R} \setminus \{R_{U4}\}$
6. Elimination Metrics
Above we have defined metrics counting the differences in the individual rankings ($d_K$, $d_L$, $d_V$), the pairwise defeats ($d_T$), and the nonidentical rankings ($d_d$). Another approach is to consider the candidates that vary between two preference profiles. There are several recursive elimination systems that are based on the order of elimination of the candidates. Let R be some ranking of the candidates, $x_1 \succ x_2 \succ \cdots \succ x_k$, and P the voting profile under consideration. We begin by comparing the Hare system with the function
$F_P(R) = \sum_{i=1}^{k} \big(n + 1 - d_d(P_{i-1}, W(x_i))\big)(n+1)^{k-i}$
where $P_{i-1} = P|_{X \setminus \bigcup_{j=1}^{i-1} \{x_j\}}$.
As was pointed out above, on the first round of the Hare system we eliminate the candidate with the smallest number of first places. The function $F_P$ attains its smallest value when the function
$F'_P(x_1) = \big(n + 1 - d_d(P, W(x_1))\big)(n+1)^{k-1}$
attains its smallest value. This happens when $x_1$ is the candidate with the smallest number of first places in P, so that $d_d(P, W(x_1))$ is maximal. Note that the part
$\sum_{i=2}^{k} \big(n + 1 - d_d(P_{i-1}, W(x_i))\big)(n+1)^{k-i}$
of $F_P(R)$ is always smaller than $(n+1)^{k-1}$. That is why we do not need to care about candidates $x_2, x_3, \ldots, x_k$ at this point. On the second round of the Hare system we eliminate the candidate with the smallest number of first places after we have removed the candidate that was eliminated in the previous round. The function $F_P$ attains its smallest value when the function
$F''_P(x_1, x_2) = \big(n + 1 - d_d(P, W(x_1))\big)(n+1)^{k-1} + \big(n + 1 - d_d(P|_{X \setminus \{x_1\}}, W(x_2))\big)(n+1)^{k-2}$
attains its smallest value. This happens when $x_1$ is the candidate with the smallest number of first places and $x_2$ is the candidate with the smallest number of first places after $x_1$ is eliminated from P. Note again that the part
$\sum_{i=3}^{k} \big(n + 1 - d_d(P_{i-1}, W(x_i))\big)(n+1)^{k-i}$
of $F_P(R)$ is always smaller than $(n+1)^{k-2}$. That is why we do not need to care about candidates $x_3, x_4, \ldots, x_k$ at this point. Continuing these considerations, we find that the function $F_P$ attains its smallest value when $x_1, x_2, \ldots$ is the order of the eliminations under the Hare system.
We now turn this function into a metric. A metric is always symmetric. Let P and P′ be two preference profiles of n rankings whose sets of candidates X and X′ have k and k′ elements, respectively. We write, for short, $P_{i-1} = P|_{X \setminus \bigcup_{j=1}^{i-1} \{x_j\}}$. Let the metric for the Hare system be
$d_H(P, P') = \min\Big\{ \sum_{i=1}^{m} \big(n + 1 - d_d(P_{i-1}, W(x_i))\big)(n+1)^{k-i} + \sum_{i=1}^{m'} \big(n + 1 - d_d(P'_{i-1}, W(x'_i))\big)(n+1)^{k'-i} \;\Big|\; x_i \in X,\ x'_i \in X',\ P_m = P'_{m'} \Big\}$
Because $d_H(P, P)$ must be zero for any P, we only calculate the distance until $P_m = P'_{m'}$ for some m and m′. Note that when we calculate the distance between a profile P and the sets W(x) (or, alternatively, C(x)), there is always a profile in the closest set W(x) (or C(x)) such that the second part of the distance function is zero. When we combine this metric with the goal states W(x) we get the Hare system. If we instead use the goal states C(x), we get the system named after Hill. For the Coombs method we have the metric
$d_C(P, P') = \min\Big\{ \sum_{i=1}^{m} \big(d_d(P_{i-1}, L(x_i)) + 1\big)(n+1)^{k-i} + \sum_{i=1}^{m'} \big(d_d(P'_{i-1}, L(x'_i)) + 1\big)(n+1)^{k'-i} \;\Big|\; x_i \in X,\ x'_i \in X',\ P_m = P'_{m'} \Big\}$
where we denote by L(x) the set of all profiles where x is last-ranked in every voter's ranking. Finally, for the Baldwin, also called Borda runoff, method we have
$d_B(P, P') = \min\Big\{ \sum_{i=1}^{m} \big(kn - d_K(P_{i-1}, W(x_i))\big)(kn)^{k-i} + \sum_{i=1}^{m'} \big(k'n - d_K(P'_{i-1}, W(x'_i))\big)(k'n)^{k'-i} \;\Big|\; x_i \in X,\ x'_i \in X',\ P_m = P'_{m'} \Big\}$
7. Two More Systems
Turning now to somewhat more recent systems, Tideman's (1987) procedure operates on pairwise majority margins. At every step the algorithm fixes one comparison between two candidates such that a minimum number of voters are disappointed. The result is a complete directed graph without cycles, i.e. a ranking. Clearly, the idea of this procedure is to generate a ranking that contradicts as few pairwise comparisons as possible in the outranking matrix. This is exactly what Kemeny's rule does. While Kemeny's rule always chooses the ranking that is closest to the profile, Tideman's procedure tries to find that ranking using a greedy algorithm. From the computational point of view, the Tideman winner is always fast to find, while finding the Kemeny winner can be very slow if the number of candidates is large.
The last system considered here is Schulze's method. Let us denote by $S_{xy}$ the maximum strength of all beatpaths from x to y. Using these we generate the 'beatpath tournament matrix' B of the profile P such that the entry $B_{xy} = 1$ if $S_{xy} \geq S_{yx}$; otherwise $B_{xy} = 0$. We say that the ranking R where $R(x) > R(y)$ iff $B_{xy} = 1$ corresponds to this beatpath tournament matrix. As above, we could define the Schulze winning ranking $R_{Sc}$ in profile P with the help of a profile $P_{Sc}$ that is closest to P (with respect to some metric d) and has a beatpath tournament matrix corresponding to some complete ordering $R_{Sc}$: let $P_{Sc}$ be a profile whose beatpath tournament matrix corresponds to the complete ordering $R_{Sc}$ and
$d(P, P_{Sc}) \leq d(P, P')$
for all profiles P′ that have a beatpath tournament matrix corresponding to some complete ordering. However, Schulze shows that every profile has a beatpath tournament matrix that corresponds to a complete ordering. Thus, regardless of the metric, $d(P, P_{Sc}) = 0$ and the winning ranking $R_{Sc}$ is the ordering corresponding to the beatpath tournament matrix of P.
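The strongest-beatpath strengths $S_{xy}$ can be computed with a standard Floyd-Warshall-style max-min recursion; the sketch below uses margins of victory as the strength measure, as in the description above, and the pairwise tallies are illustrative only.

```python
def schulze_strengths(margins, alts):
    # Initial strength: the margin of victory in the direct comparison
    # (zero if x does not beat y head-to-head).
    S = {x: {y: max(margins[x][y] - margins[y][x], 0)
             for y in alts if y != x} for x in alts}
    # Relaxation: a path through z is only as strong as its weakest link;
    # keep the strongest path found so far.
    for z in alts:
        for x in alts:
            for y in alts:
                if len({x, y, z}) == 3:
                    S[x][y] = max(S[x][y], min(S[x][z], S[z][y]))
    return S

alts = "ABC"
# A majority cycle: A beats B 5-4, B beats C 6-3, C beats A 6-3.
margins = {"A": {"B": 5, "C": 3}, "B": {"C": 6, "A": 4}, "C": {"A": 6, "B": 3}}
S = schulze_strengths(margins, alts)
print([x for x in alts if all(S[x][y] >= S[y][x] for y in alts if y != x)])
# ['B']: B's weakest beatpaths are stronger than those leading back against it
```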
8. Conclusion
The observations made in the preceding sections are summarized in Table 1. Several entries in the table exhibit goal state-metric combinations that do not correspond to those associated with existing methods. For example, nearly all systems aiming at a Condorcet ordering are purely theoretical, the only exception being Slater's rule. Yet, for a person who finds the Condorcet winning criterion appealing, the Condorcet ordering would probably seem quite a plausible goal state. Depending then on how this person would like to measure the difference between the observed profile, outranking matrix or tournament matrix, on the one hand, and the profile or matrix representing the Condorcet ordering, on the other, he/she might end up with different methods.
As a general conclusion we notice that disagreement may take on several intuitively plausible and yet incompatible meanings. Moreover, most methods currently in use can be viewed as devices for locating the closest consensus, given the expressed opinions of the electorate. Why we have so many different methods turns out, then, to be a natural consequence of our having somewhat different views of what constitutes a consensus and how deviations from it should be measured.
Table 1. Goal states, metrics and voting systems

Goal state                        Unanimous                 Condorcet                               Beatpath
Metric                            winner W(x)  order U(R)   winner C(x)               order Co(R)   winner/order
Inversion $d_K$                   Borda        Kemeny       Dodgson                   U2            Schulze
Manhattan $d_L$                   Borda        Litvak       Dodgson                   U3            Schulze
Inversion for V matrices, $d_V$   Borda        Kemeny       Condorcet least-reversal  U1            Schulze
Inversion for T matrices, $d_T$   –a           –a           Copeland                  Slater        Schulze
Discrete, $d_d$                   Plurality    Plurality    Young                     U4            Schulze

a The goal states and the metric are incompatible or their meaning unclear.
Acknowledgements
The authors are indebted to Marlies Ahlert, Matthew Braham, Steven J. Brams, Bernard Grofman, Manfred J. Holler, D. Marc Kilgour, Moshé Machover and an anonymous referee for numerous constructive suggestions.
References
Aleskerov, F. (1999) Arrovian Aggregation Models, Kluwer.
Baigent, N. (1987a) Preference Proximity and Anonymous Social Choice, The Quarterly Journal of Economics 102: 161–169.
Baigent, N. (1987b) Metric Rationalization of Social Choice Functions According to Principles of Social Choice, Mathematical Social Sciences 13: 59–65.
Barthélemy, J. P. and Monjardet, B. (1981) The Median Procedure in Cluster Analysis and Social Choice Theory, Mathematical Social Sciences 1: 235–267.
Black, D. (1958) Theory of Committees and Elections, Cambridge University Press.
Brams, S. (1976) One Man, n Votes, Module in Applied Mathematics, Mathematical Association of America, Cornell University.
Brams, S. and Fishburn, P. (1983) Approval Voting, Birkhäuser.
Doron, G. (1979) The Hare System is Inconsistent, Political Studies 27: 283–286.
Farkas, D. and Nitzan, S. (1979) The Borda Rule and Pareto Stability: A Comment, Econometrica 47: 1305–1306.
Farquharson, R. (1969) Theory of Voting, Yale University Press.
Hare, T. (1865) Election of Representatives, Longman.
Hill, I. D. (1988) Some Aspects of Elections – To Fill One Seat or Many, Journal of the Royal Statistical Society, Series A 151: 243–275.
Kemeny, J. (1959) Mathematics without Numbers, Daedalus 88: 571–591.
Lakeman, E. and Lambert, J. (1955) Voting in Democracies, Faber.
McLean, I. and Urken, A. (eds) (1995) Classics of Social Choice, University of Michigan Press.
Meskanen, T. and Nurmi, H. (2006) Distance from Consensus: A Theme and Variations, in B. Simeone and F. Pukelsheim (eds) Mathematics and Democracy: Recent Advances in Voting Systems and Collective Choice, Springer.
Nanson, E. J. (1882) Methods of Election, Transactions and Proceedings of the Royal Society of Victoria XIX: 197–240. Reprinted in I. McLean and A. Urken (1995).
Nitzan, S. (1981) Some Measures of Closeness to Unanimity and Their Implications, Theory and Decision 13: 129–138.
Nurmi, H. (2002) Measuring Disagreement in Group Choice Settings, in M. J. Holler et al. (eds) Power and Fairness, Jahrbuch für Neue Politische Ökonomie 20, Mohr Siebeck, 313–331.
Nurmi, H. (2004) A Comparison of Some Distance-Based Choice Rules in Ranking Environments, Theory and Decision 57: 5–24.
Richelson, J. (1975) A Comparative Analysis of Social Choice Functions, Behavioral Science 20: 331–337.
Richelson, J. (1978a) A Comparative Analysis of Social Choice Functions II, Behavioral Science 23: 38–44.
Richelson, J. (1978b) A Comparative Analysis of Social Choice Functions III, Behavioral Science 23: 169–178.
Richelson, J. (1979) A Comparative Analysis of Social Choice Functions I, II, III: A Summary, Behavioral Science 24: 355.
Richelson, J. (1981) A Comparative Analysis of Social Choice Functions IV, Behavioral Science 26: 346–353.
Riker, W. H. (1982) Liberalism against Populism, W. H. Freeman.
Schulze, M. (2003) A New Monotonic and Clone-Independent Single-Winner Election Method, Voting Matters 17: 9–19.
Slater, P. (1961) Inconsistencies in a Schedule of Paired Comparisons, Biometrika 48: 303–312.
Smith, W. D. (2005) Descriptions of Voting Systems, mimeo.
Tideman, N. (1987) Independence of Clones as a Criterion for Voting Rules, Social Choice and Welfare 4: 185–206.
Zavist, B. T. and Tideman, N. (1989) Complete Independence of Clones in the Ranked Pairs Rule, Social Choice and Welfare 6: 167–173.
16. Freedom, Coercion, and Ability Keith Dowding Research School of Social Sciences, Australian National University, Canberra, Australia
Martin van Hees Faculty of Philosophy, University of Groningen, The Netherlands
1. Introduction
In his methodological comments on the study of ethics and politics, Aristotle famously remarked that one should not demand more precision from the study of a subject than that subject allows. 1 He has sometimes been interpreted as suggesting that analytical rigour is not required here. Indeed, it may well be true that at the end of analytical scrutiny, central topics in moral and political philosophy, such as freedom or power, will still leave room for interpretation, because the superiority of one analysis over another may well be embedded in our intuitions. This fact, if it proves to be so, should not dissuade us from analytical rigour. In the last few decades philosophers, economists and mathematicians have fruitfully applied mathematical and formal analysis to the concepts of 'power' and 'freedom', and from that analysis we have learned much.
The concept of power was the first to which formal theorists turned their attention. This was quite natural, given the obvious bargaining ramifications of game theory – although it is cooperative rather than non-cooperative game theory that has undergirded most of the analysis. A natural interpretation of the Shapley value (Shapley 1953) – specifically in voting applications – is as a measure of an agent's power. In voting contexts, the Shapley value assigns to each player the probability of being pivotal when all possible orderings of the players are sampled with equal probability; for simple games it is now known as the Shapley-Shubik index (Shapley and Shubik 1954). Since then a large literature has developed discussing: whether this or rival indices are preferable for measuring voting power (e.g. Felsenthal and Machover 1998); the precise interpretation of power such indices contain, and whether we need to incorporate actors' preferences into the measurement of power (e.g. Napel and Widgren 2004; Braham and Holler 2005); and the relationship between power thus understood and concepts such as freedom and ability (Braham 2004; Holler 2007; Morriss 2002).
1 Ethica Nicomachea, Book I, 1094a. Unfortunately Aristotle did not specify how much precision is appropriate when studying ethics and politics.
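To make the pivot-counting definition concrete, here is a minimal sketch in Python (our illustration, not part of the original text); the three-voter weighted majority game, with weights 4, 2, 1 and quota 5, is hypothetical.

```python
from itertools import permutations
from math import factorial

def shapley_shubik(weights, quota):
    """For each voter, count the orderings of all voters in which that
    voter is pivotal, i.e. in which adding his weight pushes the running
    total of his predecessors past the quota; normalize by the n! orderings."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for voter in order:
            total += weights[voter]
            if total >= quota:  # first voter to make the coalition winning
                pivots[voter] += 1
                break
    return [p / factorial(n) for p in pivots]

# Hypothetical weighted voting body: weights 4, 2, 1, quota 5.
print(shapley_shubik([4, 2, 1], 5))  # [0.666..., 0.166..., 0.166...]
```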
Such are the obvious and close links between the concepts of freedom and power that analytically distinguishing them might prove problematic. Somewhat surprisingly, therefore, a growing literature on the formal analysis of freedom has emerged largely independently of the literature on power. Two central issues in this literature are the analysis of the extent of a person's freedom of choice and the value of having such freedom. The situations to be compared are described by opportunity sets, the elements of which can be seen as the things an individual is free to choose. The alternatives could, for instance, be interpreted as commodity bundles, social states or vectors of functionings. The set of all possible opportunity sets is then taken to be the set of all non-empty sets of feasible states of affairs. The freedoms an individual possesses in some situation are thus described by some opportunity set, and it is assumed that the individual selects exactly one element from the set. The question is how to compare these different opportunity sets in terms of the extent (or degree) of freedom they offer an individual, or in terms of the value of being able to choose from them. Like much of the power literature, the approach is axiomatic. Given some axioms about how to judge the extent or value of a person's freedom of choice, theorists have studied which ways – if any – of comparing the extent or value of the freedom offered by different opportunity sets are compatible with those axioms (see, among many others, Arrow (1995), Klemisch-Ahlert (1993), Nehring and Puppe (1999), Pattanaik and Xu (1990, 1998), Puppe (1996, 1998), Sen (1990, 1991, 1993), Sugden (1998), van Hees (1998)). A minimal sketch of two such ranking rules follows.
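The sketch below is ours, not the authors'; the opportunity sets and utility values are hypothetical. It contrasts a cardinality ordering in the spirit of Pattanaik and Xu (1990) with an indirect-utility rule that evaluates a set solely by its best element.

```python
def at_least_as_free_cardinality(A, B):
    """Simple cardinality rule: A offers at least as much freedom of
    choice as B iff A contains at least as many options."""
    return len(A) >= len(B)

def at_least_as_good_indirect_utility(A, B, u):
    """Indirect-utility rule: a set is evaluated by its best element, so
    choice carries no value beyond the utility of the chosen option."""
    return max(u[x] for x in A) >= max(u[x] for x in B)

# Hypothetical opportunity sets of social states and a utility function.
A, B = {"work", "travel", "study"}, {"work", "travel"}
u = {"work": 2, "travel": 3, "study": 1}
print(at_least_as_free_cardinality(A, B))          # True: A has more options
print(at_least_as_free_cardinality(B, A))          # False
print(at_least_as_good_indirect_utility(B, A, u))  # True: best elements tie
```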
Obviously, this simple framework has important limitations. For instance, it is assumed that there is a direct relation between performing an action (choosing an element from an opportunity set) and the consequences of that action (the state of affairs described by the element in question): the possible actions of others are ignored. Making a distinction between the things an individual does and the states of affairs that might be realized thus becomes difficult. Stated differently, in most of this literature the concept of freedom is analyzed within a parametric rather than a strategic setting.
The separate development of the formal literatures on power and on freedom seems due, at least to some extent, to the fact that both approaches focus on rather specific issues: much of the power literature concerns voting power, and much of the freedom literature concerns freedom of choice within a parametric context (exceptions include van Hees (2000) and Ahlert (2008)). Clearly, an extremely challenging area of research is the formulation of a general formal framework that enables the integration of the analyses of power and freedom. An important preliminary step towards the construction of such a framework is offered by Braham (2006). 2
How can the concept of freedom be analyzed within a game-theoretic framework that focuses on power? To answer that question, we need some informal understanding of the concepts of freedom and power that we would like to see represented within the framework. That is, before setting up a formal analysis of power, freedom and their mutual relations, we have to be clear about the nature of the power and freedom concepts upon which we will be focusing. For some concepts of freedom – say, freedom of will – the relation with (a particular concept of) power may be quite different from that with other concepts of freedom – say, legal freedom. Similarly, different concepts of power will be differently related to a particular concept of freedom.
The object of this paper is to show that an integrated analysis of freedom and power may be quite useful even when the concepts of freedom and power are very similar to each other. We present a non-formal (or perhaps quasi-formal) discussion of a concept of freedom that is, at least intuitively, very close to that of power, since it is defined within a strategic rather than a parametric setting, and since it identifies the freedoms of an individual with the things a person can do. We shall argue that even for such a concept of freedom, which is very close to the notion of ability and thus to concepts of power defined in terms of ability, there are situations in which the abilities of a person do not coincide with her freedom. 3 Insofar as the power of an individual is determined by her abilities, the analysis thus points to some important differences between freedom and power.
Our argument is based on a critical discussion of an inference put forward by Cohen (1979). According to this inference, a person is free to do the things that she is forced to do. On the basis of an extensive discussion of different notions of ability, we argue that this claim cannot be sustained: there are situations in which a person is forced to do something, and thus is actually doing it, and yet is not free to do it.
2 Important earlier contributions to such a general game-theoretic analysis of power include Harsanyi (1962a, 1962b) and Miller (1982).
3 More precisely, freedom and power are defined in terms of 'ableness', that is, the ability plus the opportunity to perform some act (see Section 3 below). The most rigorous account of freedom-as-ableness is offered by Kramer (2003). In Kramer's view, ableness is a sufficient and necessary condition for freedom. The weaker claim that ableness is a sufficient but not a necessary condition is held by others: for instance, by negative freedom theorists like Carter (1999) and Steiner (1994), but also by theorists such as Sen (1985) and Nussbaum (2000) who embrace a positive conception of freedom. Morriss (2002) presents the most elaborate argument in favour of conceiving power as ableness.
2. 'To be Forced is to Be Free'
Our examination of the claim that a person's freedom can be identified with her ability starts off with a criticism of Cohen's (1979) argument for the view that one can be forced to do something and yet be free to do it. Cohen claims that from the assumptions (a) one cannot do what one is not free to do, and (b) one cannot be forced to do what one cannot do, it follows that (c) one is free to do what one is forced to do. It has been claimed that the inference is invalid as stated, but there can be no objection if it is reformulated more simply, without negations: 4
P1. If one is forced to do x, then one can do x.
P2. If one can do x, then one is free to do x.
Hence,
C1. If one is forced to do x, then one is free to do x.
Cohen (1979: 163) calls the conclusion 'odd sounding but demonstrable', and goes on to explain why the conclusion – that one is free to do something when one is forced to do it – is of moral and political interest.
Before we assess the truth of the premises (and of the conclusion), we point out that the reasoning is based on a particular conception of freedom that does not command universal assent. Consider, for instance, Hayek's (1960) conception of freedom as the absence of coercion. Hayek believed one is free to do anything that one is not coerced into doing, even if doing that thing is impossible. Clearly, if one assumes that there are some things which one is not free to do, then such an account of freedom is not compatible with at least one of Cohen's premises. Another concept of freedom which is at odds with Cohen's inference comes from republican accounts (Skinner 1998; Pettit 1997). These define freedom as the absence of domination, and domination in turn is defined in terms of possible arbitrary interference by others. The slave who is in the service of a benevolent master is unfree even when the master does not interfere in the activities of the slave; the mere fact that the master could interfere means the slave is not free. Hence, P2 need not be true. C1 may also fail to hold – the slave who is forced by his master is not free. The republican concept of freedom has itself been criticized, notably for conflating the notion of being free to do some x with being free overall. 5 However, if one were to adopt such a conception of freedom, Cohen's inference would not hold. Yet another account of freedom for which Cohen's inference will not hold is the view that freedom is to be defined in terms of the possibility of doing what one wants: clearly, if one is forced to do something that one does not want to do, then one is not free.
4 We now see nothing wrong with the inference, though Dowding (1990) objected to Cohen's original formulation and Boudewijn de Bruin also objected to it in conversation.
5 Kramer (2003). In particular, Kramer shows masterfully that it conflates the impossibility of non-interference with the unlikelihood of such interference.
Rather than adopting these or other conceptions of freedom that either directly or indirectly lead to the rejection of P2, we shall assume that to be free to do x means having the ability and the opportunity to do x. However, we shall show that even with freedom thus defined the inference does not go through; there is an equivocation over the 'can' of ability.
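Since the reformulated inference is a hypothetical syllogism, its formal validity can be checked mechanically. The sketch below (our illustration) confirms that the form of P1, P2, therefore C1 is valid for every truth-value assignment, so the dispute must concern the premises – which is exactly where the equivocation over 'can' will be located.

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# P1: forced -> can;  P2: can -> free;  C1: forced -> free.
valid = all(
    implies(implies(forced, can) and implies(can, free),
            implies(forced, free))
    for forced, can, free in product([False, True], repeat=3)
)
print(valid)  # True: the reformulated inference is formally valid
```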
3. Opportunity, Ability and Counterfactuals
Cohen's first premise seems to be the conjunction of two separate premises. The first is that one can only be forced to do x if one is doing x. Though this premise can be contested too – to wit, for conflating the 'doing' of an action with the 'doing' of an undergoing – we shall focus only on the second underlying premise:
P0. If one does x, then one can do x.
In some sense such a premise seems unimpeachable. 6 However, what matters is the role or meaning of the 'can' in this statement. To assess the truth of P0, and of P1, we need to know what it means to say that a person can or cannot do something. Or, more generally, to understand a concept of freedom-as-ability, we need to understand what it means to say that a person is able to do something.
6 The view that one is always free to do x when one is doing x can also be encountered in Carter (1992), Steiner (1994: 8), and Kramer (2003: 245).
Morriss (2002: 81) makes a useful distinction between the 'can' of ability and the 'can' of ableness. The 'can' of ability does not necessarily involve the presence of an opportunity, whereas with the 'can' of ableness such an opportunity is always presupposed. The statement 'I can play the piano' is about ability, whereas the statement 'I cannot play the piano (now) because there is no piano available to me' refers to ableness. In the first sense of 'can' (that of ability) a person can do something (play the piano) even though there may be no opportunity to do so, whereas in the second sense (that of ableness) the absence of the opportunity (the piano) implies that the person cannot perform the action. On this account, ableness is logically stronger than ability: 'ableness' is 'ability plus opportunity'.
The freedom views that we are examining – that is, the views of Cohen and Kramer – define freedom in terms of ableness rather than ability. On those freedom-as-ableness views, having a freedom consists of having the opportunity as well as the ability. A person has the opportunity to perform some action if the means needed to perform the action are available to her. Consider the opportunity to fly to England. If you have a ticket (or the money to buy a ticket), a valid passport, transportation to the airport, and so on, then you do have the opportunity. To establish whether someone has such an opportunity we have
to make use of counterfactuals. If you are not inclined to go abroad, and so have no reason to go to the airport, we can still test the opportunity by considering whether you would get there if you did wish to do so. Or, to give a different example, to see whether you have an opportunity to play the piano, we not only have to know whether there is a piano available to you, but also what would happen if you were to try to play it. That is, would anyone try to prevent you from touching the keys; is the piano fully functional; and so on? If no such external blocking would occur, then we can say that you have the opportunity to play the piano.
What does it mean to say that someone has the ability to do some x? 7 To answer this question, we again have recourse to counterfactual analysis. Here we examine whether the person would successfully do x if he had the opportunity and were to make a serious attempt to do x. 8 You are able to fly to England if you end up flying to England in those circumstances. Similarly, you have the ability to play the piano if you would produce some recognizably musical tune if a piano were available to you and you were to attempt to play it.
7 For a different and more detailed counterfactual account of ability and freedom than the one presented here, see Lehrer (1990).
8 The stipulation that the attempt is serious means that one does not make mistakes that one knows (or could have known) how to avoid (Kramer 2003: 267).
It might be argued that, thus stated, the distinction between opportunity and ability is somewhat blurred. That is, if for some doings counterfactual success is needed to determine both whether a person has the opportunity and whether the person has the ability to perform the action, then it is not clear what the distinction amounts to. Yet there are important differences between the two. In particular, it is the nature of the counterfactual success that differs between the two notions. This is most easily seen in the piano example. To have the opportunity to play the piano means that you are hitting the keys in the relevant counterfactual worlds, but the sounds that you produce need not be what we understand by music. The counterfactual success needed for saying that you have the ability is more demanding – you should not only be hitting the keys, but you should be doing so in a way which we can describe as 'playing the piano'. Although less obvious, such a difference can also be discerned in the flying example. Suppose that you have all the means needed to fly to England (passport, ticket, and so on), but that you have an extreme fear of flying: any serious attempt to board a plane leads to hyperventilation and a fainting fit. Here we would say that you have the opportunity to fly but lack the ability. Stated differently, the absence of an extreme fear of flying is not something we count among the means needed to fly to England, any more than we count the presence of musical skill as a necessary condition for having an opportunity to play the piano. Rather, these count as part of one's abilities.
To recapitulate, we say that a person who is not currently doing x has the ability to do x if his attempt to do x would indeed lead to his doing x if he
were to have the opportunity to do so in the relevant counterfactual worlds. The analysis of ableness in those situations is similar to that of ability, except that the opportunity actually exists, not merely counterfactually. To determine whether I have the ableness to play the piano, we thus examine whether there actually is a piano available to me and whether I would indeed be playing it if I were to try to do so. Thus we are drawn to the conclusion that, for all those situations in which x is not currently being performed, an ability to do x means 'if one were to have the opportunity to do x, then attempting to do x entails doing x'; and ableness to do x means 'one has the opportunity to do x, and attempting to do x entails doing x'. As we shall see, however, this account of the distinction between a person's ability and ableness raises a problem, and it is this problem that is crucial for the question whether one can be free to do something when forced to do so.
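The two counterfactual definitions just stated can be encoded schematically. The sketch below is our own simplification: a world is reduced to two booleans, and the encoding of the fear-of-flying example (every serious boarding attempt fails) is a hypothetical stipulation.

```python
from dataclasses import dataclass

@dataclass
class World:
    has_opportunity: bool    # the means needed for doing x are available
    attempt_succeeds: bool   # a serious attempt at x results in doing x

def ability(relevant_worlds):
    """The 'can' of ability: in every relevant counterfactual world in
    which the opportunity exists, attempting x entails doing x."""
    return all(w.attempt_succeeds
               for w in relevant_worlds if w.has_opportunity)

def ableness(actual_world, relevant_worlds):
    """The 'can' of ableness: ability plus opportunity in the actual world."""
    return actual_world.has_opportunity and ability(relevant_worlds)

# Fear-of-flying example: ticket and passport in hand (opportunity), but
# every serious attempt to board ends in a fainting fit (no ability).
actual = World(has_opportunity=True, attempt_succeeds=False)
print(ability([actual]))           # False
print(ableness(actual, [actual]))  # False
```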
4. Two Kinds of Abilities
In the preceding analysis of the notions of ability and ableness, we began by saying that ableness is logically stronger than ability: 'ableness' is 'ability plus opportunity', and any person who has the ableness to do some x has a fortiori the ability to do x. Yet our definitions do not guarantee that this assumption is satisfied, because in establishing a person's ability to do some x we implicitly focus on the 'relevant' counterfactual worlds in which he has that opportunity, whereas no such criterion of relevance is invoked when assessing a person's ableness. To illustrate, suppose we are examining a famous pianist's ableness to play the piano. He is sitting at a fully functioning piano suspended 1,000 feet up in the air, and he suffers from vertigo. He is unable to play any decent tune: in this world any attempt of his to play the piano is unsuccessful. Ordinarily, phobic pianists are not thought to lack the ability to play the piano because of their inability when so suspended: the situation does not belong to the set of relevant counterfactual worlds for assessing piano-playing ability. Being a virtuoso does not guarantee that one is always successful in playing the piano. But this implies that, were he actually hanging in mid-air, he would have the ability to play the piano and the opportunity, but not the ableness: ability does not lead to ableness in that situation. 9
9 Another tack would be to deny that sitting at a piano suspended high in the air is an opportunity to play the piano for those who suffer from vertigo. However, defining opportunities as existing only when one can successfully accomplish the task brings its own problems.
The problem is avoided if we assume that the set of relevant counterfactual worlds used for determining the existence of an ability always contains the actual world. That is, we drop the claim that the piano player has the ability to play the piano when hanging in mid-air. Indeed, on the usual
counterfactual approach to ascertaining the truth of conditionals, we would say that the relevant counterfactual worlds are those that are as close as possible to the actual world – they differ with respect to the presence of an opportunity only if no such opportunity actually exists. Clearly, ableness then always implies ability: if the opportunity exists in the actual world, then the closest possible world to the actual world is the actual world itself. On such an account, the piano player lacks the ableness as well as the ability to play the piano at great heights, but possesses the ability otherwise.
However, such an approach yields new problems. Let us modify the example of the suspended piano player. Assume again that a person with vertigo is hanging in mid-air next to a piano. Suppose that under normal circumstances he would not know how to play the piano; however, his extreme fear somehow elicits strange neuro-physical operations as a result of which he manages to produce music from the piano. Given the definition of ableness in terms of success in the closest possible worlds, we conclude that he has the ableness to play the piano: he has the opportunity, the closest possible world is the actual world, and in that world his attempt is successful. Furthermore, he has the ability when suspended at great heights (but not in any other situation). Now suppose the actual world is the one in which he is sitting at home. By assumption, in all of the closest possible worlds in which he has the opportunity to play the piano, his attempts are unsuccessful. Hence he is now said not to have the ability to play the instrument. 10
10 We do not require fantastic examples to illustrate this point. An unfit person might be able to run 100 metres in under nine seconds if the following wind were strong enough. But we would not claim that they had the ability to run 100 metres in under nine seconds.
The examples show that we are dealing with two kinds of abilities: one is generic and the other is time-specific (Morriss 2002: 49). The famous pianist has the generic ability to play the piano but not the time-specific ability to do so when hanging in mid-air. Conversely, the unmusical person has, in that situation, the time-specific but not the generic ability. To keep the desired logical relation between ability and ableness, we have two options. First, when we talk about ability we can mean time-specific ability, and when we talk about ableness, time-specific ability plus opportunity. The second possibility is to define ability generically, and ableness as the combination of generic ability plus opportunity. But then we also have two possible views of freedom-as-ableness: freedom-as-specific-ableness and freedom-as-generic-ableness.

5. Freedom-As-Generic-Ableness
To establish whether a person has the generic ability to do some x, we assume that we know in which situations his successful performance of the act can be considered a necessary and sufficient condition for attributing to
him such an ability. We call these situations the set of relevant possible worlds and denote them by W(x). 11 Clearly, the set will depend on the x we are focusing upon – the set of worlds we use to judge whether a person can play the piano will not be the same as the set of worlds needed to ascertain whether he can ride a bike. We assume that in each world w ∈ W(x), individual i has the opportunity to do x and attempts (directly or indirectly) to do x.
11 See Morriss (2002: 83–85) for a discussion of some of the issues involved in establishing the nature of W(x).
Definition 1 We say that i has the generic ableness to do x if, and only if, i is doing x in all of the worlds belonging to W(x) (generic ability) and he has the opportunity to do x in the actual world (opportunity).
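Definition 1 can be rendered as a pair of boolean predicates. The sketch below is our own; the list records, world by world, whether i is doing x in W(x), and the suspended-virtuoso reading in the comments is our hypothetical encoding of the earlier example.

```python
def generic_ability(doing_x_in_W):
    """Generic ability: i is doing x in every world of the relevant set
    W(x), in each of which i has the opportunity and attempts to do x."""
    return all(doing_x_in_W)

def generic_ableness(doing_x_in_W, opportunity_in_actual_world):
    """Definition 1: generic ability plus opportunity in the actual world."""
    return generic_ability(doing_x_in_W) and opportunity_in_actual_world

# Suspended virtuoso: he plays in every world of W(x) -- a set that does
# not contain the mid-air scenario -- and a piano is in front of him, so
# Definition 1 grants him generic ableness although his actual attempt fails.
print(generic_ableness([True, True, True], opportunity_in_actual_world=True))
```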
Before proceeding, let us note that the stipulation that i 'directly or indirectly' attempts to do x in all worlds belonging to W(x) is important. After all, there may be clear instances in which a person is able to do something, yet his doing so is not the direct result of an intention or an attempt. In fact, any such intention or attempt may be self-defeating. A familiar example is the insomniac firmly resolving to fall asleep, thereby failing to do so. The trick is to make an 'indirect attempt' to fall asleep: trying to think of things other than one's need for sleep. But if one knows that not intending to fall asleep is a necessary condition for falling asleep, and if one therefore attempts not to have that intention, then one is still attempting to fall asleep – an indirect attempt is also an attempt. And, indeed, the person who has trained himself to fall asleep in that way can be said to have the generic ability to fall asleep.
It is quickly seen that doing x is not sufficient for inferring the ability to do x – and hence not for the ableness to do x – if ability is defined generically: the situation in which a person is doing x need not belong to the relevant worlds for ascertaining the ability to do x. Thus P0 need not be true, and it immediately follows that P1 need not be true either: that is, a person may be forced to do something and yet be unable to do that very same thing. However, the logical possibility of doings-while-forced-without-ability would be uninteresting if the class of such inabilities were (almost) empty or contained only inabilities devoid of interest for the analysis of freedom. We now show that this class is not empty and contains examples of the utmost importance for the analysis of freedom.
To do so, we first have to say a little more about being coerced or being forced. These concepts are of course notoriously complex, and we shall not try to give a fully fledged account of them. For our purposes it suffices to assume that there is a class of situations in which one person, say i, is being forced to do some x and in which the coercion consists of – but need not be defined by – (a) interference by some other person(s), which is (b) aimed at i doing x, where (c) i does not want to do x, and (d) i would not be doing x
were it not for the interference. The restriction to interference by others implies that we can avoid the questions whether a person can coerce (or force) himself to do something and whether non-humanly caused events can do so. Secondly, note that we assume these others act intentionally – that is, they want the person to do x and their interference is aimed at his doing so. Thirdly, we take it that the coerced person does not want or intend to do x; he is doing it only because he is being interfered with. Thus we also avoid the difficult question whether a person can be coerced (or forced) into doing something which he wants to do anyway.
Now suppose a person only successfully performs some action if he is forced to do so; without such coercion the person would not be able to accomplish the task. There are many examples – often very dramatic ones – of such cases. Think of the POWs worked to death building the Thai-Burma Railway. We would not usually say that a person has the generic ability to perform a task if the only situations in which he successfully performs it are those in which he is forced to do so – the relevant counterfactuals for establishing generic ability will, for almost all abilities, be ones in which no coercion is present. Exceptions will be abilities that themselves refer to coercion or force: say, a person's (generic) ability not to break down when being interrogated in a forceful way; the (generic) ability to resist an oppressor; and so on.
Not all examples of actions that are only possible when forced are as dramatic as that of the POWs. An athlete's generic ability to train hard might, for instance, require the presence of a hard taskmaster who pushes the athlete that extra mile. However, the ability to train hard belongs to the athlete who chooses to work with the coach. So, apart from abilities that themselves refer to coercion, a person lacks a generic ability when the only worlds in which he performs the task in question are those in which he is being coerced. But then P1 – if one is forced to do x then one can do x – is not true in general: in many worlds in which he is actually being coerced into doing x, he is doing x but lacks the generic ability, and hence the generic ableness, to do so.
A quick and dirty argument against the Cohen inference is that being forced to do x does not constitute 'the opportunity' to do x. On this view, opportunities are things that are available and that one can choose – not things which are forced upon one. Such an argument would allow us to distinguish the case of the POWs forced to work on the Thai-Burma railway from that of the athlete forced to train hard by her coach. The first was not an 'opportunity' for the POWs, but the second is an opportunity for the athlete: she can always choose not to work with the coach. (And if it were argued that some athletes did not have that choice, and hence that opportunity, then they were not free when being trained so hard.) However, the quick and dirty argument merely shifts the burden of proving that choice is somehow involved
in freedom onto the concept of opportunity, something that some writers might resist. So, despite the quick and dirty argument's appeal, we leave it to one side. 12
We can question not only P0 and P1, but also premise P2: having the generic ableness to do x often does not entail that one is free to do x. Many cases of what we call unfreedom consist in the absence of a specific ability rather than a generic one. Consider the case of someone threatened with dire consequences if they do x. Rather than give in to the threat, they attempt x; however, fear of failure makes doing x psychologically impossible. Surely we would think the person has been made unfree to do x. Yet, assuming coercion does not belong to the counterfactuals used for establishing the existence of a generic ability to do x, we have to conclude that the person has the generic ability to do x. Since, in the example, he still has the opportunity to do x, freedom defined in terms of generic ableness means he has retained his freedom to do x. 13
It might be argued that these examples may well show that P1 and P2 are not true in general, but that the kinds of situations in which they fail to hold (that is, ones in which one does something only because one is coerced, or in which coercion makes some actions psychologically impossible) are so extreme that we can be sanguine about their consequences for more general discussions of freedom. However, the fact that the examples are extreme does not mean that similar counterexamples could not be formulated for less extreme cases. Their extremity merely allowed us to avoid having to discuss the nature of the criterion for establishing the relevant counterfactual worlds; that is, the possibility of doings-without-ableness is brought out clearly precisely because the examples are so extreme. To give examples of violations of P1 or P2 in less extreme situations, we would need more information about the criterion of counterfactual relevance. Does the person who accidentally hits upon a correct password have the generic ability to crack the code? Has an author suffering from writer's block lost the generic ability to write novels? Can the member of a minority group still be said to be free to protest against a majority decision if he stays at home out of fear of the contempt of his fellow citizens? To answer these questions, we would need a fully developed account of the relevant counterfactuals, and that is not our objective here. Moreover, though the situations in our examples are extreme, it can hardly be claimed that they are not cases of importance for a theory of freedom. Indeed, if one's theory of freedom fails to give a convincing account of the relation between ability and freedom in cases of extreme oppression or coercion, how can we be sure that it will do a better job for 'ordinary' ones?
12 Its appeal has been made clear to us in many conversations.
13 Note that we do not claim that all threats reduce a person's freedom. Whether that claim is true or not has been the topic of much discussion: see Day (1987), Benn and Weinstein (1971), Carter (1999: 224–232), and van Hees (2000).
6. Freedom-As-Specific-Ableness
Do the premises of Cohen's inference stand if freedom is defined as specific ableness? We shall first argue that on at least one account of specific ableness it is possible for a person to do something that he is not able to do; hence P0 and P1 will again fail to hold generally.
Definition 2 Given a situation A, let B be the closest possible world to A in which individual i attempts (directly or indirectly) to do x. 14 Individual i has the specific ableness to do x in A if, and only if, i is doing x in B.
14 To simplify the analysis we assume here – as well as for the weaker definition given below – that B is the only such closest world.
Let x be an action that an individual cannot successfully perform when he (directly or indirectly) tries to do so. On this definition of specific ableness the person is not able to do x in A – if he were to try to do x he would, by assumption, fail. Note, however, that the impossibility of successfully trying to do x does not preclude the person from actually doing x in A. After all, his doing x need not be the result of a (direct or indirect) intention to do so, nor need it be the realization of an attempt to do so. Indeed, by construction, the person can only be doing x if he is not trying to do x. But then it directly follows that on this account of ableness it is possible for a person to be doing something that he is not able to do.
To illustrate, consider the following example of Hillel Steiner's, which purports to show that a person who is doing something is thereby free to do that action. The example concerns a bouncer who pushes a person, let us call him George, out of a nightclub. According to Steiner, we might rightly claim that the bouncer has made George unfree to abstain from leaving the club, but not that he has made him unfree to leave it:
What has the bouncer prevented? Has he prevented the behavioural event of [George] leaving the club from occurring? Evidently not. Has he prevented [George] from forming the intention to bring about that event? Again, no. Can we say, then, that what he's prevented is that intention's being the cause of the event? Once again, no, because nothing in this story rules out the possibility of that intention including his pushing [George] as the means of bringing that event about (Steiner 2001: 63).
Let us rephrase the argument in terms of 'ableness' rather than 'freedom'. Since George might want to be carried out of a nightclub by a bouncer, being carried out makes George able to leave (by being carried out). However, there is an equivocation here. Holding a lottery ticket means that George might be a lottery winner; but the fact that George might be a lottery winner does not make it the case that he is one. Similarly, the fact that George might want the bouncer to carry him out of the club does not make it the case that, when the bouncer carries George out, George wants him to do so. It does not, since the bouncer may only take people out
of the club when he knows that they do not want him to. Indeed, suppose that is the case. Say we have a situation in which a person has the intention of not doing x but is forced to do x precisely because he does not want to do x. That is, the coercer's primary desire is to frustrate the coercee's intentions: if the coercee had tried to do x, he would have been forced into not doing x. Moreover, assume that in all relevant counterfactual worlds in which he tries to do x, his attempts are frustrated. In such a situation, it follows from the account of specific ableness that a person is not able to do x even though he may well be doing x. In situation A, George does not attempt to leave the nightclub but is being ejected by the bouncer, and hence is in fact 'doing' the leaving. 15 By assumption, the closest possible world B in which he attempts to leave the nightclub is one in which his attempt is unsuccessful. Hence the bouncer is forcing George to leave the nightclub, though George is not able to leave the nightclub. After all, if the bouncer were aware that George had behaved badly in order to be forcibly ejected, he – by assumption – would not eject him. (He might take George to a back room and thump him instead.) Being forcibly ejected does not even entail that one can ensure being forcibly ejected by forming the intention to be so ejected; other conditions are necessary. In other words, whilst being forcefully ejected from a nightclub shows that a person might be able to choose to have himself forcefully ejected, it does not demonstrate that he can be ejected when(ever) he has this intention: even under the 'can' of specific ableness, a person may be doing something he is not able to do.
15 Note that we assume, with Steiner, that a person who is forcibly (that is, against his will) ejected from a nightclub can be said to be 'doing' the leaving even though he is (unsuccessfully) resisting the bouncer in every possible way. A 'doing' is not the same as an action; that is, a person may be knowingly doing something while intending and trying not to do it. Thus a person without singing skill may, for example, be said to be singing out of tune even though she tries to sing in tune. Her (rather desperate) attempts to sing in tune do not mean that she is not 'doing' the out-of-tune singing.
One might object that our rejection of P1 is based on an incorrect rendition of the notion of specific ableness. If we held that a person's specific ableness always includes the things she is currently doing, then P1 would hold. That is, we would then use the following weaker definition:
Definition 3 Let A be a possible world and let B be the closest possible world to A in which i attempts (directly or indirectly) to do x. Individual i has the weak specific ableness to do x in A if, and only if, i is doing x in A or in B.
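Definitions 2 and 3 differ in a single disjunct, and the George example turns on exactly that difference. The sketch below is our own encoding of the scenario; the boolean stipulations match the story as told above.

```python
def specific_ableness(doing_x_in_B):
    """Definition 2 (strong): i has the specific ableness to do x in A
    iff i is doing x in B, the closest world to A in which i (directly
    or indirectly) attempts to do x."""
    return doing_x_in_B

def weak_specific_ableness(doing_x_in_A, doing_x_in_B):
    """Definition 3 (weak): what i is actually doing in A also counts."""
    return doing_x_in_A or doing_x_in_B

# George and the bouncer: in A George is being ejected without trying to
# leave (he is 'doing' the leaving); in B, the closest world in which he
# tries to leave, the bouncer frustrates the attempt.
doing_in_A, doing_in_B = True, False
print(specific_ableness(doing_in_B))                   # False: doing without ableness
print(weak_specific_ableness(doing_in_A, doing_in_B))  # True: P1 holds by definition
```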
If we use this weaker definition of specific ableness, George is (specifically) able to leave the nightclub. Indeed, P0 then holds by definition. We do not see any conclusive argument in favour of either the strong or the weak definition of specific ableness – both seem to be compatible with at least some of our
intuitions. However, there are reasons to reject the weak definition as forming a convincing basis for an account of freedom-as-specific-ableness: whereas the weaker definition salvages the truth of P1, it is not compatible with P2.
To see why, consider unfortunate George in the nightclub again. The bouncer is still nasty: if George wants to stay, he kicks him out, and if George wants to leave, he forces him to stay. Since George wants to stay, he is being thrown out, and hence, by the weak definition, he is free to leave: in terms of freedom-as-specific-ableness, George has the freedom to leave the nightclub but lacks the freedom to stay. Now consider the following variation: the bouncer does not want George to be in the nightclub, and will thus eject George if George does not leave of his own accord. The difference from the previous situation is that the bouncer will now frustrate only George's attempt to stay at the nightclub – he will not force George to stay if George wants to leave. In this situation, if George wants to leave he is able to do so, but he still lacks the ableness to stay. By the weak definition of specific ableness, George's ableness is thus the same in both situations: he has the ableness to leave the nightclub, but not the ableness to stay. Hence, if we equate freedom with specific ableness, the bouncer's change of policy has not affected George's freedom – his overall degree of freedom has remained the same, even though the bouncer has lifted some constraints. Clearly, this runs counter to our intuitions. It seems clear that George's overall freedom has increased because of the lifting of some constraints. But such a difference in overall freedom between the two situations can only result from the fact that George acquires a freedom he did not possess in the first situation: the freedom to leave. And that can only be true if P2 is violated, that is, if he did not have the freedom to leave in the first situation.
Another possible way of explaining a difference in overall freedom without abandoning the weak definition is to say that though George already has the freedom to leave the nightclub, he does acquire a new freedom, to wit the 'freedom to do what he wants to do', a freedom which he did not possess in the first situation. A difficulty with such an argument, however, is that it multiplies the set of potential freedoms enormously: the 'freedom to do x' would constitute a freedom different from 'the freedom to do what one wants to do' even when x is in fact the thing that the person wants to do. Moreover, even if one were to allow such an expansion of the realm of possible freedoms, it would have to be explained why the acquisition of the freedom to do what one wants yields more overall freedom when one has also lost a corresponding preference-dependent freedom, namely the freedom to do what one does not want to do.
In this section we have discussed two arguments for the view that a person may be doing some x which he is not free to do, where freedom is defined in terms of specific ableness. If it is possible that a person is doing some x that he is not (specifically) able to do, then P1 will not be true in
general, and C1 may thus also fail to hold. If, on the other hand, actually doing x always implies that one is able to do x, then one can no longer define freedom in terms of specific ableness: P2 may not be true.
Note that much of the argument of this and the previous section has considered situations in which the performance of some x is only possible because the person is forced to do x. Freedom as generic ableness is lacking because the person fails to perform the task in all relevant counterfactual worlds – worlds in which the coercion is absent. Freedom as specific ableness is lacking because the coercion is such that in the relevant counterfactual world – now defined as the world that is as close as possible to the actual world and in which the person tries to perform x – she is coerced into doing the opposite of x. We saw that the first, generic, kind of inability corresponds to a paradigm kind of unfreedom: the unfreedom of the slave or prisoner who has to perform gruesome tasks and is only successful in doing so because of the coercion. Clearly, examples of the second kind also have paradigm importance: they concern those cases in which one person sets out to frustrate the wants and desires of another person.
7. Conclusion
We have discussed different ways of defining a person's ableness to do something. In our discussion of generic ableness we argued, first, that a person may do something under force that he is not generically able to do and, secondly, that a person's generic ableness need not coincide with that person's freedom. If we were to define power as generic ableness, it would follow that a person may be doing something that he has no power to do, and furthermore, that in the cases discussed a person's freedom does not coincide with a person's power.
However, one might argue that power should be defined not as generic but as specific ableness. We distinguished two possible renditions of the notion of specific ableness. On the strong interpretation, a person can again be said to be doing something under force that he is not able to do. If one were to define a person's power and freedom in the same way – that is, as the things a person is specifically able to do in the strong sense – one could thus conclude that a person may be doing something that he is neither free to do nor has the power to do. The weaker definition of specific ableness precludes the possibility that a person may be doing something that he is not able to do. However, we argued that under such a conception of ableness it is counterintuitive to equate freedom with ableness. Hence, if one were to define power as weak specific ableness, the concepts of freedom and power would again diverge.
It was not the objective of this paper to arrive at a particular definition of freedom-as-ableness, nor at a particular definition of power-as-ableness. We have pointed out various possibilities for doing so, and have
discussed some of the difficulties involved. In order to analyze the notions of both freedom and power within a general formal framework, a choice among the various possibilities must be made. We merely hope that the analysis of this paper is of some help in arriving at such a choice.
Acknowledgements We would like to thank Peter Morriss for his written comments, and the participants at the conference 'Power: Conceptual, Formal, and Applied Dimensions', Hamburg, Germany, 17–20 August 2006.
References
Ahlert, M. (2008) Guarantees in Game Forms, in M. Braham and F. Steffen (eds) Power, Freedom, and Voting, Springer, 325–341.
Aristotle (1984) Nicomachean Ethics, in J. Barnes (ed.) The Complete Works of Aristotle: The Revised Oxford Translation, vol. 2, Princeton University Press, 1729–1867.
Arrow, K. J. (1995) A Note on Freedom and Flexibility, in K. Basu, P. Pattanaik and K. Suzumura (eds) Choice, Welfare, and Development: A Festschrift in Honour of Amartya K. Sen, Oxford University Press, 7–16.
Benn, S. I. and Weinstein, W. L. (1971) Being Free to Act, and Being a Free Man, Mind 80: 194–211.
Braham, M. (2006) Measuring Specific Freedom, Economics and Philosophy 22: 317–333.
Braham, M. and Holler, M. J. (2005) The Impossibility of a Preference Based Power Index, Journal of Theoretical Politics 17: 137–157.
Carter, I. (1992) Being Free When Being Forced to Choose, Politics 12: 38–39.
Carter, I. (1999) A Measure of Freedom, Oxford University Press.
Cohen, G. A. (1979) Capitalism, Freedom and the Proletariat, in A. Ryan (ed.) The Idea of Freedom: Essays in Honour of Isaiah Berlin, Oxford University Press, 9–25.
Cohen, G. A. (2001) Addenda to 'Freedom and Money', mimeo.
Day, J. P. (1987) Threats, Offers, Law, Opinion and Liberty, in J. P. Day (ed.) Liberty and Justice, Croom Helm.
Dowding, K. (1990) Being Free When Being Forced, Politics 10: 3–8.
Felsenthal, D. S. and Machover, M. (1998) The Measurement of Voting Power, Edward Elgar.
Harsanyi, J. C. (1962a) Measurement of Social Power, Opportunity Costs, and the Theory of Two-Person Bargaining Games, Behavioral Science 7: 67–80.
Harsanyi, J. C. (1962b) Measurement of Social Power in n-Person Reciprocal Power Situations, Behavioral Science 7: 81–91.
Hayek, F. von (1960) The Constitution of Liberty, Routledge and Kegan Paul.
Hees, M. van (1998) On the Analysis of Negative Freedom, Theory and Decision 45: 175–197.
Hees, M. van (2000) Legal Reductionism and Freedom, Kluwer.
Holler, M. J. (2007) Freedom of Choice, Power, and the Responsibility of Decision Makers, in A. Marciano and J.-M. Josselin (eds) Democracy, Freedom and Coercion: A Law and Economics Approach, Edward Elgar, 22–42.
Klemisch-Ahlert, M. (1993) Freedom of Choice: A Comparison of Different Rankings of Opportunity Sets, Social Choice and Welfare 10: 189–207.
Kramer, M. H. (2003) The Quality of Freedom, Oxford University Press.
Lehrer, K. (1990) A Possible Worlds Analysis of Freedom, in K. Lehrer, Metamind, Oxford University Press.
Miller, N. (1982) Power in Game Forms, in M. J. Holler (ed.) Power, Voting and Voting Power, Physica Verlag, 33–51.
Morriss, P. (2002) Power: A Philosophical Analysis (2nd ed.), Manchester University Press.
Napel, S. and Widgren, M. (2004) Power Measurement as Sensitivity Analysis – A Unified Approach, Journal of Theoretical Politics 16: 517–538.
Nehring, K. and Puppe, C. (1999) On the Multi-preference Approach to Evaluating Opportunities, Social Choice and Welfare 16: 41–63.
Nussbaum, M. C. (2000) Women and Human Development: The Capabilities Approach, Cambridge University Press.
Pattanaik, P. K. and Xu, Y. (1990) On Ranking Opportunity Sets in Terms of Freedom of Choice, Recherches Economiques de Louvain 56: 383–390.
Pattanaik, P. K. and Xu, Y. (1998) On Preference and Freedom, Theory and Decision 44: 173–198.
Pettit, P. (1997) Republicanism: A Theory of Freedom and Government, Oxford University Press.
Puppe, C. (1996) An Axiomatic Approach to 'Preference for Freedom of Choice', Journal of Economic Theory 68: 174–199.
Puppe, C. (1998) Individual Freedom and Social Choice, in J.-F. Laslier et al. (eds) Freedom in Economics: New Perspectives in Normative Analysis, Routledge, 49–68.
Sen, A. K. (1985/1999) Commodities and Capabilities, Oxford University Press.
Sen, A. K. (1988) Freedom of Choice: Concept and Content, European Economic Review 32: 269–294.
Sen, A. K. (1990) Welfare, Freedom and Social Choice: A Reply, Recherches Economiques de Louvain 56: 451–485.
Sen, A. K. (1991) Welfare, Preference and Freedom, Journal of Econometrics 50: 15–29.
Sen, A. K. (1993) Markets and Freedoms, Oxford Economic Papers 45: 519–541.
Shapley, L. S. (1953) A Value for n-Person Games, in H. W. Kuhn and A. W. Tucker (eds) Contributions to the Theory of Games II, Princeton University Press, 307–317.
Shapley, L. S. and Shubik, M. (1954) A Method for Evaluating the Distribution of Power in a Committee System, American Political Science Review 48: 787–792.
Skinner, Q. (1998) Liberty Before Liberalism, Cambridge University Press.
Steiner, H. (1994) An Essay on Rights, Blackwell.
Steiner, H. (2001) Freedom and Bivalence, in I. Carter and M. Ricciardi (eds) Freedom, Power and Political Morality: Essays for Felix Oppenheim, Palgrave, 57–68.
Sugden, R. (1998) The Metric of Opportunity, Economics and Philosophy 14: 307–337.
17. Guarantees in Game Forms Marlies Ahlert Department of Economics, Martin-Luther-University Halle-Wittenberg, Germany
1. Introduction
Discussions of freedom, liberty, rights, and power are often extremely confusing, and in fact confused, because they take place in a conceptual minefield. This paper does not claim to solve any of the deeper riddles of such discussions. Instead, it tries to sidestep some of the most problematic and difficult issues by confining itself to publicly or politically provided 'guarantees'. This seems reasonable, since disputes about many legal and social policy issues related to freedom, liberty, rights, and power centre on such provisions.
The intuition that individuals should have some control over their lives, such that they can secure to themselves certain levels of well-being if they should choose to, is as appealing as it is widespread. It is, however, not easy to tell when individuals have that control to a lesser or greater degree. The conceptual task is to provide some measure of the control individuals have over their lives and of the value of that control. Which measure is adequate depends on the point of view, purpose, and interests of those who use the measure. I suggest adopting the point of view of a fictitious policymaker who wants to evaluate the degree of control that alternative schemes of rules afford to individuals playing out their games of life. The policymaker intends to order such games according to how valuable the outcomes are whose emergence individual participants can at least guarantee to themselves. He compares between individuals and between game forms. This outside observer rank-orders all the games according to his own perception of the value of the outcomes for the participants. This is his personal value judgment – a personal welfare function, if you will – for the individuals' well-being under certain rules of interaction. Since every individual in a society may put himself or herself into the shoes of a fictitious policymaker, there may be as many personal rankings of the positions of individuals in society as there are individuals. Whether these judgments will in the end converge or not is, however, an issue of secondary importance. As individuals and
moral subjects we do evaluate societies in terms of the control over outcomes that they grant to individuals. As far as that is the case, and as far as we want to do it in a rational way, it must be shown how a rank ordering of the value of some aspects of control can conceivably be construed. I readily concede that somebody may not intend to evaluate society in the terms discussed here. But I submit that a measure like the one proposed below may be highly relevant for anybody who is interested in a rational discussion of the concepts of freedom, liberty, rights, and power.
Section 2 is the main section of the paper. Here the model of the interaction is presented, and the opportunity sets of the individuals and guarantees are defined. We introduce extended orderings to rank the situations of individuals in different outcomes of interactions. These are used first to define a ranking in terms of the guarantees that can be given to each individual in different game forms. This ranking is axiomatically derived by the indirect-utility approach. Afterwards we compare the situations of any two individuals in any two game forms. The characterization of the general ordering uses an additional property of Extended Dominance. In Section 3 the concept is applied to simple game forms like the dictator game and the ultimatum bargaining game, and the effect of different extended orderings on the evaluation of feasible guarantees is analyzed. Section 4 deals with the change in guarantee levels that can be achieved when a game develops along a path; properties of the ordering of guarantee levels in game forms and subgame forms for an individual are proven. Section 5 summarizes the approach and the results.
2. The Conceptual Framework
2.1 The Structure of Interactions
We consider strategic interactions of a finite number n of individuals (players). The set of individuals is called N = {1, ..., n} and is fixed. Each strategic interaction is represented by an extensive form game Γ with n players. We will consider different game forms Γ or Γ′ which model different interactions of the same set of players. The strategy set of individual i in a game form Γ is called Sᵢ(Γ) and is assumed to be finite. A strategy of individual i in Γ is denoted by sᵢ ∈ Sᵢ(Γ). X(s₁, ..., sₙ) denotes the outcome that is generated by a strategy combination (s₁, ..., sₙ) of all individuals in game form Γ. We allow strategy sets to be empty (among the n players there may be some that do not act in the game), but we will comment on this later. In contrast to the general notation, outcomes of the game form Γ are not given by payoff vectors but by alternatives that are interpreted as social states or social alternatives. The description of a social alternative generated in Γ can include features of Γ or information on the strategies the individu-
als have used. The set of outcomes of Γ is assumed to be finite and is denoted by Ω(Γ). We assume that there is a given general set X of feasible social states, so that Ω(Γ) ⊆ X for all Γ. (We do not assume that the set X has any special structure; in particular, it is not necessary to represent X as an m-dimensional space. That assumption would form a special case of our model.) We define a procedure to be an extensive form game Γ whose outcomes are social states. G is the set of all procedures with n players.
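A minimal sketch may help fix the notation (our illustration; the two-player game form and the state labels are hypothetical). It deliberately suppresses the extensive structure of Γ and keeps only the strategy sets Sᵢ(Γ), the outcome function X, and the outcome set Ω(Γ).

```python
from itertools import product

class GameForm:
    """A finite game form: per-player strategy sets and an outcome map
    from strategy combinations to social states (not payoff vectors)."""
    def __init__(self, strategy_sets, outcome):
        self.strategy_sets = strategy_sets  # S_i(Gamma) for each player i
        self.outcome = outcome              # X(s_1, ..., s_n) -> social state

    def outcomes(self):
        """Omega(Gamma): the set of reachable social states."""
        return {self.outcome(profile)
                for profile in product(*self.strategy_sets)}

# Hypothetical two-player game form over social states "p", "q", "r".
table = {("a1", "b1"): "p", ("a1", "b2"): "q",
         ("a2", "b1"): "q", ("a2", "b2"): "r"}
gf = GameForm([["a1", "a2"], ["b1", "b2"]], table.__getitem__)
print(gf.outcomes())  # {'p', 'q', 'r'}
```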
2.2 Extended Orderings
We model comparisons of the well-being of individuals in the following way: in case individual i lives in the social state x ∈ X which is the outcome of some strategic interaction in Γ, and in case individual j would face a social alternative y ∈ X as a possible outcome of a game form Γ′, we assume that the individuals' well-being in these states can be ranked. We assume that this ranking is construed by some outside observer, a policymaker who evaluates different procedures in society. Different observers may come up with different rankings; convergence to the same ranking would be a special case. It is important for the practicability of an application of the model, and it weakens the assumption of completeness, that it will not be necessary to construct the complete extended ranking for all possible comparisons: as we will show later, only a few interpersonal comparisons have to be made explicitly. Since in principle all comparisons must be feasible, however, the formal instrument we apply here is an extended ordering defined on X × N. It is not necessary to define an extended ordering dependent on Γ, since the states in X that are outcomes of Γ may include this type of information.
Definition (Extended Ordering) An extended ordering of the individuals in N being in some social state in X is an ordering R on X × N. R has the following properties:
Completeness: any two pairs (x, i) and (y, j) such that x, y ∈ X and i, j ∈ N are ranked by R, i.e. (x, i)R(y, j) holds or (y, j)R(x, i) holds.
Transitivity: for any pairs (x, i), (y, j) and (w, k) ∈ X × N: if (x, i)R(y, j) and (y, j)R(w, k), then (x, i)R(w, k).
We represent R by an ordinal utility function u on X × N.
Definition (Projection of an Extended Ordering) The projection of an extended ordering on a certain individual i is defined by the subranking of R on the pairs (x, i) for all x ∈ X. This subranking defines an ordering Rᵢ of social states for individual i: for all x, y ∈ X, x Rᵢ y ⇔ (x, i)R(y, i).
There are two possibilities to relate these rankings R_i to the problem under evaluation. First, we can assume that the outside observer is informed about the personal preferences of each individual i and takes the individual's own preference ordering as the basis for the extended ordering. In this case the projection ranking R_i coincides with the preference ordering of individual i on X. However, if the observer is not completely informed or, for paternalistic or other reasons, does not take the individual's preferences (fully) into account, the projection of the extended ordering on i, i.e. the ordering R_i, may differ from individual i's preferences. This also implies that we model the individuals' control only for one fixed extended preference, and not for some larger variety of preferences. If a fixed representation of R by an ordinal utility function u on X × N is given, it induces for each i an ordinal utility function u_i on X: u_i(x) = u(x, i) for all x ∈ X. Note that, because of the interpersonal comparability of the utilities of all individuals, each single representation u_i cannot be transformed monotonically in an arbitrary way; however, a uniform strictly monotonic transformation applied to all u_i, i ∈ N, leaves the analysis unchanged.
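To make this invariance concrete, here is a toy sketch in Python (not from the chapter; the states, individuals and utility numbers are invented for illustration): an extended ordering represented as a utility table on X × N, and a check that one common strictly increasing transformation preserves every comparison, intrapersonal or interpersonal.

```python
import math

# Hypothetical extended ordering R on X x N, represented by a utility table u.
u = {("x", 1): 3.0, ("y", 1): 1.0,   # projection u_1 on X
     ("x", 2): 2.0, ("y", 2): 4.0}   # projection u_2 on X

def ranking(table):
    """All pairs (state, individual), ordered from worst to best well-being."""
    return sorted(table, key=table.get)

# A uniform strictly monotonic transformation applied to *all* entries ...
v = {pair: math.exp(val) for pair, val in u.items()}

# ... changes no comparison at all.
assert ranking(u) == ranking(v)
```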
2.3 Ranking Sets of Individual Opportunities

Strategies are interpreted as instruments of an individual to achieve a certain outcome in a given game form. However, instruments are means to some ends. Therefore, we do not focus on the set of possible choices of strategies in order to model the control an individual has in a procedure. Instead, we concentrate on states the individual can determine. One could think of modelling an opportunity by claiming that exactly a specific outcome is reached by some strategy of the individual, independently of the strategic choices of all other individuals. As we have seen in examples (Ahlert 2006), this would be a very strong requirement which in many situations implies empty opportunity sets. Instead, we assume that an opportunity for individual i is present if she can guarantee a state x or a state that is preferred by R_i to x. By choosing a certain strategy the individual can make sure that she will end up in a state at least as good for her as x (i.e. evaluated by the ordering R_i).

Notation
E_i(Γ) = {x ∈ Ω(Γ) | ∃ s_i ∈ S_i(Γ) ∀ s_j, j ≠ i: X(s_1, …, s_n) R_i x},

where E_i(Γ) is the set of outcomes x such that individual i can secure herself at least the utility level of x. We interpret E_i(Γ) as the opportunity set of individual i in Γ.
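For a finite game form given in strategic form, E_i(Γ) can be computed by brute force. The following sketch is illustrative only (the function and variable names are mine, not the chapter's notation): each strategy of i is scored by the worst outcome it can lead to under u_i, and E_i collects every state whose utility level is covered by the best such guarantee.

```python
from itertools import product

def opportunity_set(i, strategy_sets, outcome, u_i):
    """E_i: states x such that some s_i guarantees an outcome at least
    as good as x under u_i, whatever the other players do."""
    others = [S for j, S in enumerate(strategy_sets) if j != i]
    def worst(s_i):  # the guarantee level of strategy s_i
        return min(u_i[outcome(p[:i] + (s_i,) + p[i:])]
                   for p in product(*others))
    best_guarantee = max(worst(s_i) for s_i in strategy_sets[i])
    omega = {outcome(p) for p in product(*strategy_sets)}
    return {x for x in omega if u_i[x] <= best_guarantee}

# Toy usage: two players, four social states "a".."d".
S = [("T", "B"), ("L", "R")]
out = {("T", "L"): "a", ("T", "R"): "b", ("B", "L"): "c", ("B", "R"): "d"}
u_1 = {"a": 4, "b": 1, "c": 3, "d": 2}
print(opportunity_set(0, S, lambda p: out[p], u_1))  # {'b', 'd'}: guarantee level 2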
There is the following relation between our notation and the notion of effectivity in game forms. Player i, as a one-person coalition playing against the coalition N \ {i}, can bring about outcomes that are at least as good as x, for all x in E_i(Γ). This means that player i is effective for outcomes x in E_i(Γ) under the additional assumption that outcomes which are preferred to x are interpreted as also fulfilling the requirement of securing x, which is different from the standard definition of effectivity (cf. e.g. Moulin 1983). An individual that does not act in a game form and therefore has an empty strategy set is in a position similar to an individual that has only one strategy. Neither type of individual has any influence on the outcome of the game. Thus individuals with no action in the game form are artificially interpreted as individuals that act before the game starts but have only one strategy each. Individuals i with only one strategy can only secure their worst outcomes in Ω(Γ) with respect to R_i.

Remark 1
If x ∈ E_i(Γ), then for all y ∈ Ω(Γ) such that x R_i y it holds that y ∈ E_i(Γ), too.
If individual i can secure x, then individual i can also secure any outcome y of Γ that is worse than x or indifferent to x under R_i. This means that by securing u_i(x), all levels α ≤ u_i(x) are guaranteed, too.

Question 1
Assume two different game forms Γ and Γ′ and a fixed individual i are given. How can we compare the opportunity sets E_i(Γ) and E_i(Γ′)?
Any ranking of opportunity sets discussed in the literature on freedom of choice could be applied. Which properties of a ranking of opportunity sets are desirable in our context? If an individual has an opportunity set of states A, and in this set welfare levels are secured that are at least as good as those in another set B of states, what should the relation between A and A ∪ B be? Having better guarantees in A than in B means that by securing all levels in A she has already guaranteed the levels in B. Thus the new options in A ∪ B do not increase the welfare levels she can secure for herself, i.e. A and A ∪ B are indifferent in terms of the levels of utility that can be secured. This is the property of Extension Robustness of a ranking of opportunity sets. Therefore we want the ranking of opportunity sets to fulfil the following property:

Property Extension Robustness
Let ≽ denote a ranking of opportunity sets of person i. Then for all finite nonempty opportunity sets A, B ⊆ X: A ≽ B ⟺ A ∼ A ∪ B.
Lemma 1
Let ≽ denote a ranking of opportunity sets of person i that fulfils Extension Robustness. Then there exists an ordering R* on X such that ≽ is the indirect-utility ranking, on the set of nonempty and finite subsets of X, for the ordering R*.
Proof
The lemma follows from Kreps (1979), who has shown that an ordering ≽ fulfils Extension Robustness if and only if there exists an ordering R* on X such that ≽ is the indirect-utility ranking for the ordering R*. □

In fact, we use the ordering R_i to apply the indirect-utility approach and to define the ranking ≽_{ii} of opportunity sets of person i such that ≽_{ii} fulfils Extension Robustness. For this we need the notion of maximal elements.

Definition Maximal Elements
Let A ⊆ X be a finite nonempty opportunity set, let an individual i be given, and let R be an extended ordering on X × N. Then we define

max_i(A) = {x ∈ A | (x, i) R (y, i) for all y ∈ A}.

Since A is assumed to be finite and nonempty, the set of maximal elements is nonempty. It may contain more than one element; in this case we choose one of them to represent the set and name it x_i(A).

Definition Ranking ≽_{ii}
Let A, B ⊆ X be two finite nonempty opportunity sets of individual i. Then the following holds:

A ≽_{ii} B ⟺ (x_i(A), i) R (x_i(B), i).

Remark 2
(a) The definition of ≽_{ii} is independent of the choice of the element representing max_i(A), since for two different maximal elements x_i(A) and y_i(A) it holds that (x_i(A), i) I (y_i(A), i), where I denotes indifference with respect to R.
(b) Obviously, ≽_{ii} is an ordering on the nonempty finite subsets of X.
(c) ≽_{ii} fulfils Extension Robustness (Kreps 1979).
The (representing) maximal elements individual i has in two opportunity sets A and B determine the ranking of these opportunity sets in terms of freedom of choice for i. Bossert, Pattanaik and Xu (BPX 1994) have proposed and characterized rankings of opportunity sets where the ranking of the maximal elements of opportunity sets with respect to some given linear preference ordering and, in addition, the cardinalities of the opportunity sets are combined as criteria in three different ways: there are the two possible lexicographic fashions (maximum first and then cardinality, and vice versa), and there is a third requirement combining both criteria by 'and'. One could ask why the cardinality criterion does not apply in our model. Intuitively, having identical maximal utility levels in two opportunity sets means that each utility level smaller than or equal to the maximal utility can be guaranteed. If we concentrate on guarantee levels, it does not matter whether there is a strategy that leads exactly to a certain lower level. Therefore the numbers of utility levels that can be reached exactly do not play a role in our context. In the case of equal maximal utilities and different cardinalities of two sets, each of the three rankings analysed in BPX (1994) would define a strict rank ordering, whereas in the interpretation of our model indifference has to hold. I concede that the variety of ways to guarantee a certain level of welfare, and some other aspects too, may play a role from the internal point of view of the individual. However, this perspective will be analysed in a different paper. Here the focus is on guarantees that can be realized.

Answer to Question 1
For any given game forms Γ and Γ′ and any individual i, we rank order E_i(Γ) and E_i(Γ′) in terms of the welfare levels individual i can secure for herself by comparing i's situation in the maximal elements under R_i:
E_i(Γ) ≽_{ii} E_i(Γ′) ⟺ (x_i(E_i(Γ)), i) R (x_i(E_i(Γ′)), i).

The interpretation of the ranking of game forms then is: the higher the maximal utility level individual i can secure for herself in a game form, the higher the rank of the game form for her. We generalize the definition of a prudent strategy given by Moulin (1986) for the case of zero-sum games to our situation.

Definition α-Prudent Strategy
Any strategy player i can choose in a game form Γ such that she reaches at least the utility level α, independently of the strategies of the other players, is called an α-prudent strategy of i.

An α-prudent strategy of i for α = u_i(x_i(E_i(Γ))) is called a maximin-strategy of i.

Remark 3
For all α ≤ u_i(x_i(E_i(Γ))) there exists an α-prudent strategy of player i in game form Γ.

If the policymaker intends to implement rules of interaction such that he can guarantee individual i a level of well-being of at least α, he has to choose rules that lead to a game form Γ such that α ≤ u_i(x_i(E_i(Γ))). Individual i can make use of this guarantee by playing an α-prudent strategy. However, if player i acts differently, the guarantee level might not be met. For this the player is responsible, not the policymaker. From the perspective of the outside observer who is interested in the guarantees that can be given to individual i in different game forms, the observer prefers game forms where the maximal guarantees are higher.
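Continuing the brute-force sketch above (all names hypothetical, not the chapter's), the α-prudent strategies of a player in a finite strategic-form game form can be enumerated directly; Remark 3 corresponds to the fact that the list below is nonempty exactly for α up to the best guarantee.

```python
from itertools import product

def alpha_prudent_strategies(i, strategy_sets, outcome, u_i, alpha):
    """Strategies of player i whose worst possible outcome under u_i still
    reaches level alpha; for alpha equal to the best guarantee these are
    exactly the maximin strategies."""
    others = [S for j, S in enumerate(strategy_sets) if j != i]
    return [s_i for s_i in strategy_sets[i]
            if min(u_i[outcome(p[:i] + (s_i,) + p[i:])]
                   for p in product(*others)) >= alpha]
```

A policymaker who wants to guarantee player i the level α in Γ can check that this list is nonempty; which α-prudent strategy the player then actually uses is up to her.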
Definition Ranking ≽_G for Player i
We define a ranking in terms of guarantees on pairs in {i} × G for a given extended ordering represented by R: for all i ∈ N and all Γ, Γ′ ∈ G,

(i, Γ) ≽_G (i, Γ′) ⟺ E_i(Γ) ≽_{ii} E_i(Γ′).

Given the extended evaluation R, an individual i has a maximal degree of guaranteed welfare in Γ that is greater than or equal to the maximal degree individual i has in Γ′ iff (x_i(E_i(Γ)), i) R (x_i(E_i(Γ′)), i) holds. In a given game form Γ, individual i can secure any utility level α ≤ u_i(x_i(E_i(Γ))). Having these options does not automatically imply that individual i plays one of her α-prudent strategies. There may be other criteria influencing the strategic choice, e.g. equilibrium considerations with respect to individual i's and the other individuals' personal preferences. For instance, playing a strategy that might lead to a Nash equilibrium would presuppose some maximizing rationality from the internal perspective of the individuals. How do we interpret a certain degree of guarantee? One could think of an external observer who assigns certain appropriate utility levels α_i to each individual i (socially desirable minimal standards of living etc.) and looks for prudent strategies that would guarantee these levels. If the procedure Γ is such that α_i ≤ u_i(x_i(E_i(Γ))), an α_i-prudent strategy exists. It is then up to the individual whether she chooses a strategy that secures that level or decides differently. The outside observer, however, has ensured that the individual can realize a certain guaranteed level of well-being. This guarantee is independent of the actions the individual will take. From the perspective of each individual, if the ranking R_i displays her personal preferences, such a level α_i ≤ u_i(x_i(E_i(Γ))) could model a case of satisficing behavior (Simon 1985) in a framework of bounded rationality. An individual may want to make sure that she does not end up with a utility level lower than some standard α_i. Within this restriction, however, she can choose any α_i-prudent strategy with respect to some arbitrary criterion. Of course, the higher u_i(x_i(E_i(Γ))) in a certain game form Γ, the bigger the set of feasible levels α_i ≤ u_i(x_i(E_i(Γ))). This consideration means that the proposed model of guarantees is tied neither to specific rationality assumptions on the strategic behavior of players nor to the realized choice of a strategy or the realized choice from the set of feasible security levels by the individuals. The presented model describes, from the external point of view, how much room a strategic interaction Γ offers to an individual to secure herself a certain welfare level, and represents this kind of freedom by the maximal level the individual is able to secure.
2.4 Comparing Guarantees Between Individuals

The interpersonal comparison of the well-being that can be guaranteed to two individuals i and j who act in two game forms Γ and Γ′ has in the end to be modeled by a ranking defined on pairs in N × G. Condensing the information of the game form and the preferences on outcomes, we assume that the ranking will only depend on the opportunity sets E_i(Γ) and E_j(Γ′) the individuals face in the respective game forms Γ and Γ′.

Question 2
Assume any two game forms Γ and Γ′ and any two individuals i and j are given. How do we compare E_i(Γ) and E_j(Γ′)?

First we consider the simple case where the opportunity sets of individuals i and j are singletons. This means that the only state individual i can secure in a game form Γ may be state x, and y may be the only state individual j can secure in Γ′. In this case the rank order of the feasible guarantee levels in the opportunity sets is defined by the extended ordering R of the situations (x, i) and (y, j). The ranking ≽_{ij} should therefore have the following property for singletons {x} and {y}:

Property Extended Dominance
For all x, y ∈ X and i, j ∈ N:

(x, i) R (y, j) ⟺ {x} ≽_{ij} {y}.

This property is derived from dominance axioms in the literature on ranking opportunity sets and in the freedom of choice literature (see e.g. Barberà, Bossert and Pattanaik 2003). Here it is applied to the extended ordering. In situations where each individual can guarantee exactly one social state, the guarantee levels for the individuals are rank ordered with respect to the ranking of their respective situations in these states as defined by R. If the guarantees in a game form Γ are to be assessed for some player i, we condense all information into the element (x_i(E_i(Γ)), i) and its position in the ranking R. If we want to compare the guarantees that can be given to individual i in Γ with the guarantees for individual j in Γ′, we have to compare the pairs (x_i(E_i(Γ)), i) and (x_j(E_j(Γ′)), j). The canonical generalization of the indirect-utility approach is to define the following ranking ≽_{ij} for opportunity sets of i and j.

Definition Ranking ≽_{ij}
Let A, B ⊆ X be two finite nonempty opportunity sets and let individuals i and j be given. Then the following definition holds:

A ≽_{ij} B ⟺ (x_i(A), i) R (x_j(B), j).

We compare the welfare guarantees for individuals i and j in two finite opportunity sets by applying R to their situations under the respective representing maximal elements. The special case for i = j is the indirect-utility ranking defined above.
Result 1
The ranking ≽_{ij} is the only ordering such that: (a) for i = j it is identical to the respective indirect-utility ranking; (b) it fulfils Extended Dominance.

Proof
≽_{ij} clearly is an ordering and has properties (a) and (b). Let i and j be given and let ≽′ be an ordering that fulfils (a) and (b). If A and B are both singletons, i.e. A = {x} and B = {y}, the ranking is uniquely defined by Extended Dominance, i.e. {x} ≽′ {y} ⟺ (x_i(A), i) R (x_j(B), j). Thus in these cases ≽′ is identical to ≽_{ij}. If A or B is not a singleton, then in the individual indirect-utility ranking A is indifferent to a singleton {x_i(A)} containing one of the maximal elements of A under R_i. Analogously, B is indifferent in the indirect-utility ranking to a singleton {x_j(B)} containing one of the maximal elements of B under R_j. Because of (a), this means A ∼′ {x_i(A)} and {x_j(B)} ∼′ B. Property (b) settles the ranking between {x_i(A)} and {x_j(B)}. Transitivity of ≽′ then implies A ≽′ B ⟺ (x_i(A), i) R (x_j(B), j), which is equivalent to the definition of ≽_{ij}. □

Definition Ranking ≽_G
We define a general ranking in terms of guarantees on pairs in N × G for a given extended ordering represented by R: for all i, j ∈ N and all Γ, Γ′ ∈ G,

(i, Γ) ≽_G (j, Γ′) ⟺ E_i(Γ) ≽_{ij} E_j(Γ′).

Given the extended evaluation R, an individual i has guarantees in Γ that are at least as good as the guarantees individual j has in Γ′ iff

(x_i(E_i(Γ)), i) R (x_j(E_j(Γ′)), j)

holds, which is equivalent to u_i(x_i(E_i(Γ))) ≥ u_j(x_j(E_j(Γ′))).
Note that the definition of the interpersonal ranking ≽_G depends on the preference orderings assigned to the two individuals involved and on only one interpersonal utility comparison, namely that of the maximal elements of the sets E_i(Γ) and E_j(Γ′). Only a small part of the information contained in the extended ordering is needed. For applications this means that the outside observer does not have to construct the whole extended ranking; she must be able to find maximal elements for each individual in their opportunity sets, but only a few interpersonal comparisons have to be evaluated explicitly.

Result 2
≽_G is an ordering on N × G.

Proof
Completeness and transitivity follow from the fact that u_i(x_i(E_i(Γ))) and u_j(x_j(E_j(Γ′))) are uniquely defined, from the equivalence (i, Γ) ≽_G (j, Γ′) ⟺ u_i(x_i(E_i(Γ))) ≥ u_j(x_j(E_j(Γ′))), and from the application of the respective properties of the ≥ ordering of the real numbers. □

In the interpersonal comparison of guarantees, the minimal utility level generated by some maximin-strategy is decisive for the ranking: the higher this level, the bigger the set of feasible guarantees.
3. Applications

In this section we analyze some simple game forms. We assume certain types of extended preference orderings, determine the opportunity sets of all players in identical and also in different games, and compare these by the ranking in terms of guarantees.
3.1 Two-Person Dictator Game Γ_Dic

A proposer (player 1) can propose an allocation A (action A) of a given amount of money or an allocation B (action B). The only action the responder (player 2) can choose is to accept (AC) any proposal. The social state that is the outcome in case of proposal A is denoted by A, and by B in case of the other proposal (see Fig. 1). For the representation of the extended ranking R by a utility function u in this example, let us assume the following strict relations for each player: u(A, 1) > u(B, 1), i.e. player 1 is better off if she proposes A than in state B, and u(B, 2) > u(A, 2), i.e. player 2 is better off if proposal B is realized than in state A. Player 1 can determine A and B. The maximal utility level player 1 can secure herself is u(A, 1). Player 2 can determine nothing. The maximal utility level player 2 can secure herself is that of her worst outcome, u(A, 2). Let us assume that allocation A leads to a utility of player 1 higher than the utility of player 2. We model this by the relation u(A, 1) > u(A, 2). No other interpersonal comparison is necessary to compare the feasible guarantees. Under these assumptions it follows that, from the point of view of what guarantees can be given to the players in Γ_Dic, (1, Γ_Dic) ≻_G (2, Γ_Dic), which coincides with our intuition about the traditional version of dictator games. The game form Γ_Dic is not alone responsible for this result. Let us assume different extended preferences: suppose the situation of the dictator (player 1) is evaluated, in both states A and B, to be worse than the worst situation of player 2. Then whatever she proposes, the dictator can only secure a lower utility than player 2. With extended preferences like that, (2, Γ_Dic) ≻_G (1, Γ_Dic) holds.
Fig. 1. Dictator Game. (Tree: the proposer, player 1, chooses A or B; the responder, player 2, can only accept (AC); the resulting social state is A or B.)
Let us now permute the roles of players 1 and 2 in the first variant of the dictator game, which leads to the game form Γ_Dic^inv. The best the new dictator (the former player 2) can secure is u(B, 2). The best the former player 1 can secure is u(B, 1). It follows that (1, Γ_Dic) ≻_G (1, Γ_Dic^inv) and (2, Γ_Dic^inv) ≻_G (2, Γ_Dic), i.e. every individual has better guarantees if she has the role of the dictator. The interpersonal comparison of (1, Γ_Dic^inv) and (2, Γ_Dic) involves the ranking of (B, 1) and (A, 2); the ranking of (1, Γ_Dic^inv) and (2, Γ_Dic^inv) involves the ordering of (B, 1) and (B, 2); and the ranking of (1, Γ_Dic) and (2, Γ_Dic^inv) involves the comparison of (A, 1) and (B, 2). Many cases are possible.
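As a quick numerical check, here are illustrative utilities of my own, chosen to satisfy the assumptions u(A,1) > u(B,1), u(B,2) > u(A,2) and u(A,1) > u(A,2); the guarantee comparisons in Γ_Dic and Γ_Dic^inv then follow mechanically:

```python
# Hypothetical extended utilities consistent with the assumptions of Sec. 3.1.
u = {("A", 1): 3, ("B", 1): 1, ("A", 2): 2, ("B", 2): 4}

# Gamma_Dic: player 1 picks the state, player 2 can only accept.
g1 = max(u[("A", 1)], u[("B", 1)])   # the dictator secures her best state
g2 = min(u[("A", 2)], u[("B", 2)])   # the responder secures only her worst
assert g1 > g2                        # (1, G_Dic) >_G (2, G_Dic)

# Gamma_Dic_inv: the roles are permuted.
g1_inv = min(u[("A", 1)], u[("B", 1)])
g2_inv = max(u[("A", 2)], u[("B", 2)])
assert g1 > g1_inv and g2_inv > g2    # each player fares better as dictator
```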
3.2 Ultimatum Bargaining Game Γ_Ult

A proposer (player 1) can make a fair proposal (strategy FAIR) or an unfair proposal (strategy UNFAIR). The responder knows the proposal (she has two information sets, I and II) and can in each set accept it (action AC) or reject it (action RE). If she accepts the proposal, the social state is called F in case of the fair proposal and U in case of the unfair proposal. If the responder rejects proposal FAIR, the resulting state is called B (break-off), and if she rejects proposal UNFAIR, the state is called P (punishment). It is clear that nobody can determine a particular state; however, dependent on their respective preference relations, the players can secure certain utility levels. Let us assume that the ranking of the states from the perspective of player 1 is the following, analogous to most descriptions of ultimatum bargaining games: u(U, 1) > u(F, 1) > u(B, 1) = u(P, 1). Then player 1 can secure the level u(P, 1) = u(B, 1).

Case 1
We assume a ranking for player 2 where being treated unfairly is better for her than breaking off or punishing: u(F, 2) > u(U, 2) > u(B, 2) = u(P, 2). In this case player 2 can guarantee herself the level u(U, 2).
Fig. 2. Ultimatum Bargaining Game. (Tree: the proposer, player 1, chooses FAIR or UNFAIR; the responder, player 2, at information sets I and II, accepts (AC) or rejects (RE); the resulting social states are F, B, U and P.)
If we assume that the utility of player 2 in state U is greater than that of player 1 in state B or P, then the degree of freedom of player 2 is larger than that of player 1, i.e. (2, Γ_Ult) ≻_G (1, Γ_Ult). If we assume that breaking off or punishing does not have a relatively strong impact on the situation of player 1, such that in both states she is still better off than player 2 in state U, then the freedom of player 1 is greater than that of player 2, i.e. (1, Γ_Ult) ≻_G (2, Γ_Ult).

Case 2
We assume a ranking for player 2 where breaking off or punishing is better for her than being treated unfairly: u(F, 2) > u(B, 2) = u(P, 2) > u(U, 2). In this case player 2 can guarantee herself u(B, 2) = u(P, 2). Comparing the states B respectively P for both individuals leads to the comparison of the feasible guarantees. The person whose situation is better than that of the other if the bargaining fails can obtain and realize higher guarantees (in terms of security levels or aspirations). Both cases demonstrate that in ultimatum bargaining it is not uniquely clear who might be in the better situation in terms of guarantees. Again, this coincides with our intuition that the comparison between both players of what they can secure in this game form depends on the impact the failure of the bargaining (states B or P) has on the situations of the individuals.
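The guarantee levels just derived can again be checked mechanically. The sketch below (my own illustrative numbers for Case 1, not the chapter's) enumerates the responder's four strategies, one action per information set, and computes both players' best guarantees:

```python
from itertools import product

# Hypothetical Case 1 utilities: u(U,1) > u(F,1) > u(B,1) = u(P,1) and
# u(F,2) > u(U,2) > u(B,2) = u(P,2).
u1 = {"F": 2, "U": 3, "B": 0, "P": 0}
u2 = {"F": 3, "U": 2, "B": 1, "P": 1}

S1 = ["FAIR", "UNFAIR"]
S2 = list(product(["AC", "RE"], repeat=2))   # (reaction at I, reaction at II)

def state(s1, s2):
    react = s2[0] if s1 == "FAIR" else s2[1]
    return {"FAIR": {"AC": "F", "RE": "B"},
            "UNFAIR": {"AC": "U", "RE": "P"}}[s1][react]

g1 = max(min(u1[state(s1, s2)] for s2 in S2) for s1 in S1)
g2 = max(min(u2[state(s1, s2)] for s1 in S1) for s2 in S2)
assert g1 == u1["B"] and g2 == u2["U"]   # exactly the levels derived above
```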
3.3 Comparing Dictator and Ultimatum Bargaining Games

We assume that state A in the dictator game is evaluated identically to state F in the ultimatum game, and state B identically to state U. Player 1 can guarantee u(U, 1) in the dictator game and u(P, 1) = u(B, 1) in the ultimatum game. Since u(U, 1) > u(P, 1) = u(B, 1), player 1 is better off in terms of the guarantee ranking in the dictator game than in the ultimatum game, i.e. (1, Γ_Dic) ≻_G (1, Γ_Ult). Player 2 can guarantee u(U, 2) in the dictator game. In Case 1 of the ultimatum game above, where breaking off or punishing is worse for her than accepting an unfair proposal, this is the same level; her degree of freedom is then identical, i.e. (2, Γ_Dic) ∼_G (2, Γ_Ult). In Case 2 she can guarantee u(B, 2) = u(P, 2) in the ultimatum game, which is in that case larger than u(U, 2), i.e. (2, Γ_Ult) ≻_G (2, Γ_Dic). Combining the results leads to (1, Γ_Dic) ≻_G (2, Γ_Ult) ≽_G (2, Γ_Dic). However, the position of (1, Γ_Ult) in this ordering depends on the extended ordering we assume; it could be at any place consistent with (1, Γ_Dic) ≻_G (1, Γ_Ult). This results from the fact that rank ordering (1, Γ_Ult) presupposes judgments on the impact a break-off or a punishment might have on the situation of player 1. The less strong the impact of a break-off on her welfare, the better off she will be in the guarantee ranking.
4. Guarantees in Subgames

In this section we apply the proposed instrument to compare guarantees in game forms with guarantees in subgame forms. For simplicity, we will only consider cases of perfect information. A subgame form of an extensive form can be generated from a game form Γ by choosing an information set of some player i, which under perfect information is a set with one decision node, i.e. a singleton. Such a decision node a_0 is the starting point of the extensive representation of a subgame form Γ_0, such that the nodes and edges following a_0 define Γ_0. Each strategy s_i of player i in the original game form Γ contains an action chosen at a_0 and at all of her information sets that follow. We call the resulting strategy for i in Γ_0 an induced strategy. For all other players, too, strategies in Γ induce strategies in Γ_0. (If a player does not act in Γ_0, we use the artificial construction mentioned above and assign to her an additional information set with only one action.) First, we assume that the subgame form starts at some decision node a_0 on one of the paths of an α-prudent strategy of player i in Γ.

Definition α-Prudent Path
A node a_0 lies on an α-prudent path of player i in a game form Γ if a_0 lies on a path that is generated by a vector of strategies in which player i plays one of her α-prudent strategies, combined with any choice of strategies by the other players. The special case for maximal α is a maximin-path for a maximin-strategy.

Result 3
Consider the case where a_0 is a decision node of i on an α-prudent path of some α-prudent strategy s_i of i. Then s_i induces an α-prudent strategy for player i in Γ_0.
Proof
By assumption there exists a strategy s_i in S_i(Γ) that is α-prudent and leads to a possible path through a_0. In this case, anything that may happen as a final state in Γ_0 if i plays s_i at all of her decision nodes in Γ_0 could also happen in Γ under strategy s_i. Since α is a utility level that player i secures in Γ by playing s_i, i secures level α in Γ_0, too, by playing the actions of s_i at the subset of decision nodes in Γ_0. □

Corollary 1
Consider the case where a_0 is a decision node of i on one of the possible paths of some maximin-strategy of i. Then (i, Γ_0) ≽_G (i, Γ) holds.

Proof
Apply the above result to the case of a maximal α. The result implies that the maximal security level for i in Γ_0 is at least as high as in Γ, i.e. (i, Γ_0) ≽_G (i, Γ). □

Some other players may have acted before i has to make a decision at a first decision node a_0.

Definition First Decision Node
Consider a case where the path from the starting node of Γ to some decision node a_0 of player i does not pass through any earlier decision node of player i. We call such a node a_0 a first decision node of player i.

Of course there will typically be several first decision nodes of player i.

Result 4
Let Γ be a game form and Γ_0 a subgame form starting at a first decision node a_0 of player i. Then any α-prudent strategy of player i in Γ induces an α-prudent strategy of player i in Γ_0.

Proof
Any α-prudent strategy of player i in Γ can lead to a path through a_0. The result is implied by Result 3 above. □

Result 5
Let Γ be a game form and Γ_0 a subgame form starting at a first decision node a_0 of player i. Then (i, Γ_0) ≽_G (i, Γ) holds.

Proof
Any maximin-strategy of i can lead to a path that goes through a_0, dependent on the actions taken by the other players. Corollary 1 implies (i, Γ_0) ≽_G (i, Γ). □

This result means that actions of other players that have already been taken before player i acts for the first time do not decrease the guarantee level of i. If one defined opportunities in terms of the strategies that can be chosen, the opportunity set of a player would decrease once actions of others had eliminated some of her decision nodes. In our framework, a guarantee α that is given to player i remains feasible during the whole development of an interaction as long as the individual behaves prudently, in the sense that she chooses actions that are induced by some α-prudent strategy. This choice is in general not unique, so that the player
can take advantage of the development of the game. Sometimes she may even be able to improve the welfare level she can secure, since some bad outcomes of the original game form may be excluded from the subgame form.
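Result 5 can be illustrated on the ultimatum game form of Fig. 2. In the sketch below (Case 1 utilities from the earlier snippet; all numbers hypothetical), the responder's guarantee in the subgame form starting at her first decision node, reached after proposal FAIR, is computed and compared with her guarantee in the whole game form:

```python
from itertools import product

# Case 1 utilities for player 2 (illustrative).
u2 = {"F": 3, "U": 2, "B": 1, "P": 1}

def state(s1, s2):
    react = s2[0] if s1 == "FAIR" else s2[1]
    return {"FAIR": {"AC": "F", "RE": "B"},
            "UNFAIR": {"AC": "U", "RE": "P"}}[s1][react]

S2 = list(product(["AC", "RE"], repeat=2))
g_full = max(min(u2[state(s1, s2)] for s1 in ("FAIR", "UNFAIR")) for s2 in S2)

# Subgame form after FAIR: player 2 alone chooses AC (-> F) or RE (-> B).
g_sub = max(u2["F"], u2["B"])
assert g_sub >= g_full   # (2, Gamma_0) >=_G (2, Gamma), as Result 5 states
```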
5. Summary

The aim of this paper is to propose a method to compare the range of control individuals have in different game forms over the level of welfare they can secure for themselves. The comparison is generated from the perspective of an external observer or policymaker on the basis of an extended ordering that compares the circumstances of life between any two individuals in any two social states that are the outcomes of some social interaction of all individuals. Opportunity sets for each individual in any game form are defined as the set of states such that the individual can secure herself at least the welfare of that state, independently of the strategic choices of all other individuals. The indirect-utility approach is applied to rank these opportunity sets. Thus the maximal utility level an individual can secure herself in an opportunity set by playing a prudent strategy in a given game form defines the maximal guarantee the policymaker can give to that individual in the game form under consideration. The concept is applied to the dictator game and the ultimatum bargaining game. The guarantees that can be given to the two players are compared under different assumptions on the ordering of their situations in the outcomes of the games. The concept mirrors, for example, the intuitive evaluation of the effect of being proposer or responder on the ordering of the guarantees. It also captures the intuition about the influence of the utilities of a failure of the ultimatum bargaining on the ordering of guarantees. Properties of the ranking in terms of guarantees are derived for cases where certain types of subgame forms of a given game form are considered. Here, for a given individual, the following intuition is met, for example: the more decisions in a game have already been made by the other players, the higher (at least weakly) is the welfare level that can be guaranteed to this individual. This paper does not consider the evaluation of game forms from the perspective of each player. Such an approach would need assumptions on the perceptions of outcomes by each individual (Ahlert 2006). The aspect of welfare levels an individual can secure herself by playing an α-prudent strategy, however, could also be applied to the concept of perceived outcomes and would lead to a criterion of freedom of choice in game forms.
Acknowledgements
I would like to thank Matthew Braham, Manfred Holler, Hartmut Kliemt, Clemens Puppe, the participants of a Workshop on Freedom at the University of Hamburg, January 2006, and the participants of the conference on 'Power: Conceptual, Formal and Applied Dimensions', 17–20 August 2006, Hamburg, for their comments and discussion. This paper was written during my stay at the Institute of SocioEconomics, University of Hamburg. The hospitality of the institute is gratefully acknowledged. The comments of an anonymous referee were very helpful in sharpening the focus of the paper.
References

Ahlert, M. (2006) A New Approach to Procedural Freedom in Game Forms, Volkswirtschaftliche Diskussionsbeiträge, Martin-Luther-Universität Halle-Wittenberg 48, February 2006.
Barberà, S., Bossert, W. and Pattanaik, P. (2003) Ranking Sets of Objects, in S. Barberà, P. Hammond and C. Seidl (eds) Handbook of Utility Theory, vol. 2, Kluwer, 893–977.
Bossert, W., Pattanaik, P. and Xu, Y. (1994) Ranking Opportunity Sets: An Axiomatic Approach, Journal of Economic Theory 63: 326–345.
Klemisch-Ahlert, M. (1993) A Comparison of Different Rankings of Opportunity Sets, Social Choice and Welfare 10: 189–207.
Kreps, D. (1979) A Representation Theorem for 'Preference for Flexibility', Econometrica 47: 565–577.
Moulin, H. (1983) The Strategy of Social Choice, North-Holland.
Moulin, H. (1986) Game Theory for the Social Sciences (2nd ed.), New York University Press.
Pattanaik, P. and Xu, Y. (1990) On Ranking Opportunity Sets in Terms of Freedom of Choice, Recherches Economiques de Louvain 56: 383–390.
Sen, A. (1991) Welfare, Preference and Freedom, Journal of Econometrics 50: 15–29.
Simon, H.A. (1985) Models of Bounded Rationality, vols 1 and 2, MIT Press.
Sugden, R. (2003) Opportunity as a Space for Individuality: Its Value and the Impossibility of Measuring It, Ethics 113: 783–809.
18. Individual Control in Decision-Making and Attitudes Towards Inequality: The Case of Italy

Sebastiano Bavetta
Department of Economics, University of Palermo, Italy
Antonio Cognata
Department of Economics, University of Palermo, Italy

Dario Maimone Ansaldo Patti
Department of Economics, University of Essex, UK

Pietro Navarra
Department of Economics, University of Messina, Italy
1. Introduction

Power is commonly defined as the control exercised by one or more persons over the choices, behaviours and attitudes of another or others. In this paper we focus on a different form of control, i.e., the control that a person exercises over her own choices, behaviours and attitudes. We conceptualize this different form of control by using the Millian idea of autonomy freedom. We argue that the power required for an individual to be in control of her own actions is exercised through her level of autonomy freedom. Autonomy freedom is, therefore, instrumental for an individual to have self-control over her own life. We claim that the extent of autonomy freedom significantly affects an individual's attitudes toward income inequality. More specifically, we point out, and empirically demonstrate, that individuals who enjoy high levels of autonomy freedom value income differences more than those whose degree of autonomy freedom is low. Let us introduce the theme of our study by considering an example. Consider two societies, A and B, both sharing the same income distribution. However, in society A there is a widespread belief that economic success is highly dependent on effort. In this society, therefore, those born in families at the bottom of the income distribution believe that they are as likely to
end up at the bottom or at the top as those born to rich parents; and so do children born to well-off families. In contrast, in society B people believe that effort does not pay, since an individual's economic success is largely determined by luck and privilege. Those born in poor families believe that they have little chance of improving their future economic conditions. It is easy to see that these two societies, although equally unequal in terms of income distribution, differ greatly in their perceptions of the nature and causes of their inequality. Unlike people in society B, those who live in society A consider income dynamics fair, since effort and skills are justly rewarded. Individuals in society A are, therefore, likely to be more tolerant of existing inequalities in the distribution of income than those living in society B. The fact that these two societies are polar cases facilitates our understanding of the importance that people's attitudes toward inequality have for their preferences for redistribution. In society A, the widespread belief of living in a just world in which the process of social mobility is driven by effort would lead to a demand for low levels of income redistribution. On the contrary, in society B the view that income dynamics are unjust because they are based on luck and privilege leads individuals to demand large redistributive schemes. Summarizing the main message of the example described above, we can say that individuals consider income inequality fair if the pre-tax distribution of income is perceived to be caused by factors under their volitional control, i.e. effort, and they consider the pre-tax distribution of income unfair if it is perceived as caused by circumstances beyond individual control, i.e. luck or privilege. The individual's control over the determinants of income distribution, whether through the working of a meritocratic society or through the functioning of an extensive welfare state, seems, therefore, to inspire fairness considerations about inequality. In this context, however, one important question still remains unanswered: when are individuals in a position voluntarily to affect and, therefore, control the pre-tax distribution of income? Answering this question would shed some more light on the relationship between the above concept of fairness and individuals' preferences for redistribution. This is the main task of this paper. We argue that the development of a person's autonomy is closely connected with her ability to make choices that express volitional control over the way her life turns out. We argue that the fuller the exercise of a person's autonomous behaviour, the more the individual is in a position to affect voluntarily the pre-tax distribution of income, and the weaker her support for redistribution. To provide theoretical foundations for our claim, in line with J. S. Mill's notion of individuality and its operationalization in social choice theory (Sugden 1998; Bavetta and Guala 2003; Bavetta and Guala 2005; Bavetta and Navarra 2005), we develop a concept of autonomy freedom to explain people's attitudes toward income redistribution. We use
Italy as a case study and collect individual-level data from the World Values Survey project to empirically assess the validity of our hypothesis. We show that the higher the extent of autonomy freedom that an individual perceives, and thus the larger her control over her choices and actions, the greater the probability of supporting the view that larger income differences are needed as incentives for individual effort. Conversely, the lower the extent of autonomy freedom perceived by an individual, and thus the smaller her degree of control over her choices and actions, the higher the probability of supporting the view that incomes should be made more equal. These results appear to be robust with respect to different specifications and estimation techniques, even after controlling for a large set of socio-economic variables and individual characteristics. The paper is structured as follows. In Section 2 we discuss our theoretical hypothesis. In Section 3 we describe the data used in the empirical analysis. In Section 4 we present the econometric results and comment on our findings. In Section 5 we draw some concluding remarks.
2. The Theoretical Hypothesis

Borrowing from research in psychology, sociology and political science, economists have recently started to evaluate, both theoretically and empirically, the impact of fairness considerations on economic outcomes. One of the most relevant areas of investigation focuses on the relationship between fairness in social competition and income inequality. But how is it that some people consider some sources of inequality justifiable and others unfair? One way of answering this question appeals to autonomy freedom in decision-making. An autonomous person retains control over herself and therefore perceives achievements as fair, since she considers what is achieved as depending to a large extent on her own effort. Autonomy freedom is, therefore, instrumental for the individual's control over her own actions. The role that autonomy freedom and self-control in decision-making play can be understood by considering their specific interpretation and some observed empirical regularities. Consider the interpretation first. How can the process of developing and affirming one's own autonomy be captured? The answer is: by gathering information about the menu of available alternatives and about the set of potential preference relations that a decision maker confronts. Potential preferences are defined as preference relations that belong to individuals who share an arbitrarily large set of characteristics with the decision maker, e.g. siblings (Sugden 1998). Suppose P1 and P2 are two potential preference relations over the alternatives X and Y. If X is preferred to Y under P1 and Y is preferred to X under P2, then the choice between X and Y is a relevant one (Bavetta and Peragine 2005).¹

¹ In contrast, a choice between two pairs of shoes when one is smaller than the decision maker's size cannot constitute a relevant choice, since it is hard to think of a good reason for wearing shoes that are too small. A choice between two pairs of shoes, both of which fit the decision maker, constitutes a relevant choice. In other words, the set of relevant choices is that sub-set of the overall opportunity set that could have been chosen under the decision-maker's set of potential preference relations.

According to Mill (1859), relevant choices engage the decision maker in a deliberative process that requires reliance on her ability to discern, to judge, to stick to a given choice, and so on. So, having relevant choices develops and exercises a person's autonomy, since it forces her to rely upon the personal and moral qualities which are constitutive of her own individuality.²

² Mill means by the term 'individuality' what we mean by the word 'autonomy' (Gray 1996). We use the two terms interchangeably here.

Simply said, individuals are autonomous if they provide 'good reasons' for their choices. How likely is it that they also retain control over their achievements? Our evidence suggests that it is likely. If we accept the Millian view that people who can make sense of their choices in life are autonomous, we need to establish a connection between 'providing good reasons' and control over the outcomes. This is made possible by empirical regularities. A number of theories in social psychology, human resource management, and economics suggest that autonomy is directly connected with intrinsic motivation and performance (Deci 1980; Frey 1997; Sansone and Harackiewicz 2000; Benabou and Tirole 2003). In these studies autonomy is never equated with independence or individualism, but rather with a sense of internal self-control that shapes the development of a person's individuality. Research analyzing the impact of freedom and autonomy on an individual's and/or team's productivity has shown that personal development, self-determination and responsibility are crucial factors in improving work effort and performance (Eisenberger et al. 1999; Langfred 2005; Mudambi, Mudambi and Navarra 2007; Navarra 2005). Given, therefore, the positive associations between Millian autonomy and performance, and between performance and achievements, we can reasonably hypothesize that an individual's autonomy is also linked with the control she exerts over her achievements. The link between autonomy, self-control in decision-making, and achievement is then established. What has been argued so far leads us to hypothesize that the individual's control over her achievements can be viewed as the result of a deliberative process in which autonomous choices take place. In this framework, the individual's role in determining the way her life turns out is more effective. In the context of individual preferences for redistribution, this concept implies that the higher the extent of an individual's autonomy and control over the choices regarding her economic conditions, the stronger her belief that the pre-fiscal distribution of income is determined by factors under her volitional control, such as effort, and the more she opposes income redistribution. Conversely, the lower the extent of an individual's autonomy and control over the choices regarding her economic conditions, the stronger her belief that the pre-fiscal distribution of income is determined by factors beyond her volitional control, such as privilege and luck, and the more she supports income redistribution.

Further, autonomy freedom and self-control in decision-making cast light upon the concept of fairness individuals rely upon in shaping their preferences for redistribution. In accord with Frey, Benz and Stutzer (2004), we argue that social mobility may be interpreted in procedural terms: if people believe that society offers equal opportunities of actual income mobility, they may be less concerned with inequality because they see social processes as fair. In this perspective, it is the non-instrumental pleasures and displeasures of the process that are valued by individuals, rather than the actual outcomes that they achieve. In this view, we claim that if individuals perceive themselves as autonomously determining and, therefore, controlling their income dynamics, they might feel that the mobility process is fair because equal opportunities really exist. In contrast, those who perceive that their income dynamics are not autonomously determined might see social mobility as a biased process in which mobility opportunities escape their volitional control and, as such, are exploited only by some and not by all.

Besides providing a theoretical foundation for preferences for redistribution, autonomy freedom in decision-making sheds further light upon the economic analysis of redistributive policies. It has been suggested in the literature that preferences for redistribution are affected by the racial composition of a society (Alesina and La Ferrara 2000; Alesina, Glaeser and Sacerdote 2001). But autonomy freedom in decision-making allows for a more fine-grained analysis: rather than looking at race or ethnic characteristics, it enables us to focus on whether people retain control of their lives, independently of any other distinctive feature. Preferences for redistributive policies would therefore depend more clearly on individual rather than group characteristics. Recognition of such intra-group preference heterogeneity is likely to be the basis for more efficient policy design.
3. Data Description

We use data from the World Values Survey series 1999–2001 compiled by the Institute for Social Research of the University of Michigan. These series provide micro-data obtained from face-to-face interviews carried out with representative samples of the population in more than 60 independent countries around the world. This empirical source is designed to enable a cross-national comparison of values and norms and to monitor changes in individual attitudes across the globe. We restrict our analysis to Italy, where a total of 2000 individuals were interviewed.
Table 1. Variables, Definitions and Sources

Dependent Variable

Preferences for redistribution: How would you place your views on this scale? 1 means that you agree completely with the statement 'incomes should be made more equal'; 10 means that you agree completely with the statement 'we need large income differences as incentives for individual effort'; if your views fall somewhere in between, you can choose any number in between.

Demographic Variables

Sex: Dummy variable taking the value of 1 if the respondent is male and the value of 2 if the respondent is female.

Age: Age of the respondent.

Marital status: Are you currently: 1 married; 2 single.

Income: Variable measuring the level of income and taking values that range from 1 (lowest level of income) to 10 (highest level of income).

Main Independent Variable

Autonomy freedom: Some people feel they have completely free choice and control over their lives, while other people feel that what they do has no real effect on what happens to them. Please use this ten-point scale, in which 1 means 'none at all' and 10 means 'a great deal', to indicate how much freedom of choice and control you feel you have over the way your life turns out.

Political and Socio-Economic Variables

Political orientation: In political matters people talk of 'the left' and 'the right'. How would you place your views on this scale, generally speaking? Please use this ten-point scale, in which 1 means left and 10 means right.

Competition: How would you place your views on this scale? 1 means that you completely agree with the statement 'competition is good since it stimulates people to work hard and develop new ideas'; 10 means that you completely agree with the statement 'competition is harmful since it brings out the worst in people'; if your views fall somewhere in between, you can choose any number in between.

Freedom vs. equality: Which of these two statements comes closest to your own opinion? 1 means: I find that both freedom and equality are important, but if I were to choose one or the other, I would consider personal freedom more important, that is, everyone can live in freedom and develop without hindrance. 2 means: certainly both freedom and equality are important, but if I were to choose one or the other, I would consider equality more important, that is, that nobody is underprivileged and that social class differences are not so strong.

Private ownership: How would you place your views on this scale? 1 means that you completely agree with the statement 'private ownership of business and industry should be increased'; 10 means that you completely agree with the statement 'government ownership of business and industry should be increased'; if your views fall somewhere in between, you can choose any number in between.

Merit: Please tell me whether it is important to recognize people on their merits: scale from 1 (important) to 4 (not at all important).

Trust: Generally speaking, would you say that most people can be trusted or that you need to be very careful in dealing with people? 1 means most people can be trusted; 2 means need to be very careful.

Job good pay: Here are some aspects of a job that people say are important. Please tell me whether good pay is important in a job: 1 means important, 2 means not important.

Job opportunity: Here are some aspects of a job that people say are important. Please tell me whether a job that gives the opportunity to use initiative is important: 1 means important, 2 means not important.

Job interesting: Here are some aspects of a job that people say are important. Please tell me whether a job that is interesting is important: 1 means important, 2 means not important.

Note: All the variables used in the empirical investigation and listed in Table 1 are drawn from the 1999–2002 World Values Survey Questionnaire compiled by the International Network of Social Scientists.
Table 2. Summary Statistics and Correlation Matrix

Variables                   Mean     SD       1       2       3       4       5       6       7       8
 1. Inequality             6.037   2.733   1.000
 2. Sex                    1.519   0.500  –0.063   1.000
 3. Age                   45.151   1.691  –0.085   0.004   1.000
 4. Marital Status         2.933   2.335   0.012  –0.007  –0.413   1.000
 5. Income                 5.069   2.945   0.145  –0.098  –0.171   0.001   1.000
 6. Political Orientation  5.366   2.203   0.192  –0.048   0.016  –0.027   0.020   1.000
 7. Autonomy Freedom       6.292   2.305   0.169  –0.114  –0.208   0.099   0.126   0.059   1.000
 8. Competition            4.167   2.499  –0.182   0.165  –0.045   0.037  –0.085  –0.188  –0.058   1.000
 9. Freedom vs Equality    1.720   0.652  –0.121   0.071  –0.065   0.014   0.036  –0.089   0.004   0.068
10. Private Ownership      4.102   2.211  –0.244   0.148   0.015   0.025  –0.178  –0.242  –0.098   0.322
11. Merits                 1.981   1.065  –0.184   0.071  –0.093   0.091  –0.018  –0.141  –0.025   0.244
12. Trust                  1.668   0.471  –0.075  –0.014   0.025  –0.029  –0.144   0.077  –0.144  –0.046
13. Job Good Pay           0.843   0.364  –0.044   0.005  –0.097   0.048   0.049   0.025   0.023  –0.049
14. Job Opportunity        0.640   0.480   0.076  –0.050  –0.051   0.022   0.088   0.019   0.142  –0.167
15. Job Interesting        0.749   0.434  –0.033   0.030  –0.026   0.053   0.020   0.027   0.099  –0.103

Table 2 (cont'd)                                9      10      11      12      13      14      15
 9. Freedom vs Equality                      1.000
10. Private Ownership                        0.114   1.000
11. Merits                                   0.070   0.157   1.000
12. Trust                                   –0.084  –0.005  –0.061   1.000
13. Job Good Pay                            –0.050   0.004  –0.032   0.078   1.000
14. Job Opportunity                         –0.030  –0.019  –0.088  –0.044   0.168   1.000
15. Job Interesting                          0.043   0.027  –0.070   0.003   0.235   0.411   1.000
Our dependent variable measures people's attitudes toward inequality by collecting the answers of the individuals interviewed to the following question: How would you place your views on this scale? 1 means that you agree completely with the statement incomes should be made more equal; 10 means that you completely agree with the statement we need large income differences as incentives for individual effort; if your views fall somewhere in between, you can choose any number in between.
Respondents faced a ten-point scale in which the two extremes, 1 and 10, are those defined in the question above. By construction of the question, each individual's taste for inequality is ordered in a descending fashion: low values indicate high preferences for equality in the income distribution and vice versa. Several studies examining the determinants of individuals' attitudes toward inequality, either in single countries or in cross-sections of countries, have used similar survey measures for assessing individuals' tastes for income distribution (see for example Ravallion and Lokshin 2000; Suhrcke 2001; Fong 2001; Corneo and Grüner 2002; Ohtake and Tomioka 2004). As the main objective of our empirical analysis is to assess the impact of autonomy freedom and self-control in decision-making on people's attitudes toward inequality, we need a measure of the degree of autonomy enjoyed by individuals. We construct this measure by considering the respondents' answers to the following question: Some people feel they have completely free choice and control over their lives, while other people feel that what they do has no real effect on what happens to them. Please use this ten-point scale in which 1 means none at all, and 10 means a great deal to indicate how much freedom of choice and control you feel you have over the way your life turns out.
Again, respondents faced a ten-point scale in which the two extremes, 1 and 10, are those defined in the question above. The variable is coded in ascending order: high values indicate a high degree of autonomy freedom and self-control in decision-making and vice versa. It is important to point out that our indicator of autonomy freedom and self-control in decision-making is consistent with an axiomatic measure of autonomy developed recently within the freedom of choice literature (Pattanaik and Xu 1990; Sen 1988, 1993; Bavetta and Guala 2003, 2005; Bavetta and Peragine 2005). According to this literature, the measure of autonomy is made up of an objective and a cognitive component. The former assesses the extent of the options that are available for choice. The latter quantifies, by means of information about the decision maker's potential preferences, the extent to which her choices are relevant. If so, she is autonomous in the sense that she may provide good reasons for choosing an option or a course of action over another. The empirical regularities that we have illustrated in the previous section allow us to connect in a meaningful way autonomy to control over achievement or performance and, ultimately, over outcomes in one's own life. The question posed by the World Values Survey refers explicitly to the objective component of our notion of autonomy, as it mentions complete 'free choice'. Choice is complete if it encompasses availability of opportunities. It does not, though, contain a direct reference to the cognitive component, as it moves swiftly to control over life, i.e. outcomes. The empirical regularities described above allow us to consider the information it delivers as an approximation to our notion of autonomy. To capture the effect of income on individuals' preferences for income distribution we use an individual-level variable similar to Ravallion and Lokshin (2000) and Corneo and Grüner (2002). To construct this variable, respondents were asked to express the level of their income on a ten-point scale, with low values indicating low levels of income and high values high levels of income. This variable allows us to test the hypothesis that pecuniary self-interest drives the individual's preferences for redistribution – the so-called homo oeconomicus effect (Corneo and Grüner 2002). A set of control variables has also been used. Some refer to demographic characteristics of the respondents such as sex, age and marital status. Others capture political and socio-economic preferences of the respondents. All the variables and their respective definitions are listed in Table 1. Summary statistics and the correlation matrix of the variables used in the empirical investigation are displayed in Table 2.
4. Methodology and Results

We relate individual attitudes toward inequality across a sample of 2000 Italians to the level of autonomy freedom and self-control in decision-making, as well as to demographic characteristics and political and socio-economic preferences. It is important to highlight from the outset that the use of subjective perceptions elicited through survey responses requires the standard identifying assumption for the analysis of attitudinal questions: interpersonal comparability of the interpretation given to the survey question. We adopt ordered logistic estimates to assess the validity of the hypothesis described in Section 2 as follows:

R*_i = X_i C + F_i                                   (1)

where R*_i is a latent variable, R_i is the observed variable, X_i is the vector of regressors, which includes the explanatory variables as well as demographic, political, and socio-economic controls, and F_i is an error term. R_i is given by the
surveyed individual tastes for inequality, coded on a ten-point scale in descending order: low values indicate strong preferences for equality in the distribution of income and high values strong preferences for income differences. The results of four different ordered logit specifications are reported in Table 3. Independent variables include a constant (unreported), twenty region-specific dummy variables to allow for regional heterogeneity (unreported), and standard control variables. In model (a) we consider only the effect of the demographic characteristics of the respondents together with their political orientation. In model (b) we add the effect of autonomy freedom and self-control in decision-making. In model (c) we include the socio-economic beliefs of the respondents regarding the working of a competitive society. Finally, in model (d) we complete our specifications with the individuals' opinions about the intrinsic characteristics that a job should have. The main result of our econometric exercise is the following: in all specifications in which we include our measure of autonomy freedom and self-control in decision-making amongst the regressors (models (b), (c), and (d)), it affects the variation in individuals' attitudes toward inequality. The estimated coefficients are positive and strongly statistically significant. Respondents asserting high control over the choices regarding the way their lives turn out oppose equality in the distribution of income and consider income differences as incentives for individual effort. This result supports our hypothesis and suggests that individuals consider income inequality fair if the pre-tax distribution of income is perceived as dependent upon factors under volitional control, such as effort and merit. Conversely, they consider the pre-tax distribution of income unfair if it is viewed as determined by circumstances beyond individual control, such as luck or privilege. As far as the effect of income on individual attitudes toward inequality is concerned, we note that higher positions on the income scale reported by the respondents are associated with greater support for income differences. This result is in line with the hypothesis that pecuniary self-interest affects individuals' tastes for inequality; the relationship is statistically significant and consistent across the four model specifications displayed in the table. Interestingly, singles appear more attracted than couples to greater equality in the distribution of income, and the young prefer income differences more than the elderly do. The political orientation of the respondents significantly affects individual attitudes toward income inequality: right-leaning individuals dislike equality, whereas left-leaning individuals display stronger preferences for redistribution. People who exhibit favourable attitudes toward a competitive society, supporting competition, freedom, private ownership of firms, and merit, prefer income differences. Those who believe that most people can be trusted are more likely to support equality in the distribution of income.
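To make the specification in equation (1) concrete, the following is a minimal sketch in Python of an ordered logit on synthetic data, assuming statsmodels' OrderedModel (statsmodels 0.13 or later); the variable names and coefficient values are illustrative assumptions, not the authors' data or estimates.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "sex": rng.integers(0, 2, n),
    "age": rng.integers(18, 80, n),
    "income": rng.integers(1, 11, n),
    "political_orientation": rng.integers(1, 11, n),
    "autonomy_freedom": rng.integers(1, 11, n),
})
# Latent variable R*_i = X_i C + F_i with a logistic error term F_i;
# the observed R_i is the 1-10 inequality scale obtained via cut-points.
latent = (0.07 * df["income"] + 0.12 * df["political_orientation"]
          + 0.11 * df["autonomy_freedom"] + rng.logistic(size=n))
df["inequality"] = pd.cut(latent, bins=10, labels=False) + 1

exog = df[["sex", "age", "income", "political_orientation", "autonomy_freedom"]]
model = OrderedModel(df["inequality"], exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())  # slope coefficients plus estimated cut-points

In OrderedModel the constant is absorbed into the estimated cut-points, which is consistent with the unreported constant in Table 3; region dummies would enter the exog matrix as additional columns.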
Table 3. Ordered Logit Estimation: Income inequality as a function of autonomy freedom

Variable                    (a)                  (b)                  (c)                  (d)
Sex                         –0.206** (–1.98)     –0.156 (–1.46)       0.004 (0.04)         0.027 (0.25)
Age                         –0.012*** (–3.34)    –0.008** (–2.25)     –0.011*** (–2.89)    –0.011*** (–2.90)
Marital status              –0.045* (–1.79)      –0.044* (–1.77)      –0.021 (–0.80)       –0.016 (–0.62)
Income                      0.066*** (3.42)      0.065*** (3.33)      0.034* (1.63)        0.037* (1.75)
Political orientation       0.159*** (5.66)      0.152*** (5.35)      0.118*** (3.96)      0.121*** (4.03)
Autonomy freedom                                 0.111*** (3.91)      0.103*** (3.40)      0.102*** (3.30)
Competition                                                           –0.078** (–2.33)     –0.081** (–2.39)
Freedom vs Equality                                                   –0.301*** (–3.46)    –0.300*** (–3.47)
Private ownership                                                     –0.170*** (–4.18)    –0.167*** (–4.13)
Merits                                                                –0.236*** (–3.74)    –0.236*** (–3.78)
Trust                                                                 –0.286** (–2.49)     –0.254** (–2.19)
Job good pay                                                                               –0.285* (–1.73)
Job opportunity                                                                            0.271** (2.11)
Job interested                                                                             –0.393*** (–2.66)
Regions dummies             46.75***             41.14***             39.01**              28.98***
Pseudo R2                   0.0244               0.0266               0.0513               0.054
Log-likelihood              –2516.0537           –2461.3596           –2201.2667           –2193.1039
Wald test                   113.11***            121.04***            213.30***            220.18***
No. observations            1151                 1130                 1037                 1036

***, **, *: significance at 1%, 5%, and 10% respectively. z-statistics (based on robust standard errors) in parentheses.
[Fig. 1. Average attitude toward income inequality (vertical axis) plotted against average autonomy freedom (horizontal axis) for the five Italian macro-areas: North-West, North-East, Center, South, and Islands.]
The result on trust can be understood in the light of the fact that the more a person trusts other persons, the more she believes that income transfers benefit not cheaters, but people really in need. Finally, those who believe that good pay and interest are important attributes of a job support greater equality in the distribution of income; in contrast, those who consider it important that a job represents an opportunity for the worker are more likely to accept an unequal distribution of income. The next step of our empirical analysis is to plot the position of the five Italian geographical macro-areas on a four-quadrant figure constructed on the basis of the relationship between the extent of autonomy freedom and self-control in decision-making perceived by individuals and the preferences for redistribution of their residents. The five macro-areas are homogeneous groups of regions created on the basis of similar socio-economic characteristics. A visual inspection of Fig. 1 lends support to the direct relationship between the extent of autonomy freedom enjoyed by individuals and their preferences for inequality in the distribution of income. We note that the macro-areas are positioned in such a way that, on average, low (high) autonomy areas are combined with strong (weak) preferences for equality in the distribution of income. Further, richer (poorer) areas, namely the north-eastern and north-western regions (the islands and the southern and central regions), are also those where greater (lower) autonomy is matched by preferences for income differences (equality).
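As a sketch of how the macro-area comparison behind Fig. 1 can be computed, the snippet below (continuing the synthetic example above) averages the two scores by macro-area and plots the five points; the macro_area assignment is a hypothetical placeholder, not the authors' grouping of regions.

import matplotlib.pyplot as plt

areas = ["North-West", "North-East", "Center", "South", "Islands"]
df["macro_area"] = rng.choice(areas, size=len(df))  # placeholder assignment

# Average the individual-level scores within each macro-area
means = df.groupby("macro_area")[["autonomy_freedom", "inequality"]].mean()

fig, ax = plt.subplots()
ax.scatter(means["autonomy_freedom"], means["inequality"])
for area, row in means.iterrows():
    ax.annotate(area, (row["autonomy_freedom"], row["inequality"]))
ax.set_xlabel("Autonomy Freedom")
ax.set_ylabel("Inequality")
plt.show()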
[Fig. 2. Average attitude toward income inequality (vertical axis) plotted against average autonomy freedom (horizontal axis) for the Italian regions (ABR, CAL, CAM, EMR, FVG, LAZ, LIG, LOM, MAR, PIE, PUG, SAR, SIC, TAA, TOS, VEN).]
In Fig. 2 the combination of the individuals' attitudes toward income inequality and their perception of autonomy freedom and self-control in decision-making is averaged at the regional level. Again, a visual inspection of the figure provides support for our theoretical hypothesis. The distribution of Italian regions largely lies along a south-west north-east diagonal, indicating that individuals with a high degree of autonomy freedom and self-control in decision-making on average live in regions whose residents display stronger preferences for inequality in the distribution of income. The only exceptions are Lazio (located in the south-east quadrant) and Marche and Liguria (located in the north-west quadrant). In Figs. 1 and 2 the positions in the four quadrants are those of the average individual in a given macro-area or region. These positions are defined by the combination of the individual's preferences for redistribution and the perception of her degree of autonomy freedom. One interesting issue to be examined at this stage of our empirical investigation is each individual's position in the four-quadrant figure conditional on demographic, political, and socio-economic variables. To implement this analysis, let us assemble the 1885 individuals considered in our study into four group categories, according to the position they hold in the four quadrants of Fig. 3. Groups 1 and 2, positioned respectively in the north-east and south-west quadrants, gather together individuals whose opinions about the pre-fiscal distribution of income are driven by fairness considerations.
[Fig. 3. Four-quadrant classification of individuals. The vertical axis runs from low autonomy (bottom) to high autonomy (top) and the horizontal axis from income equality (left) to income inequality (right): Group 1 occupies the north-east quadrant, Group 2 the south-west, Group 3 the south-east, and Group 4 the north-west.]
More specifically, individuals in group 1 believe that their own income dynamics are under their volitional control and as such determined by effort and merit. Let us call these individuals fairness-driven and merit-believer individuals. By contrast, individuals in group 2 think that their own income dynamics are beyond their volitional control and as such determined by luck and privilege. Let us call these individuals fairness-driven and privilege-believer individuals. People positioned in these two quadrants, therefore, are those who display preferences consistent with our empirical findings shown in Table 3. In groups 3 and 4, positioned respectively in the south-east and north-west quadrants of Fig. 3, we assemble individuals whose opinions about the pre-fiscal distribution of income are not driven by fairness concerns. More specifically, individuals in group 3 behave selfishly by opposing redistribution, although they believe that people's economic conditions are affected by factors beyond their volitional control, such as privilege and luck. Let us call these individuals non-fairness-driven and selfish individuals. On the other hand, individuals in group 4 display collective altruistic attitudes by supporting redistribution, although they believe that people's economic conditions are affected by factors under their volitional control, such as effort and merit. Let us call these individuals non-fairness-driven and collective altruistic individuals.3
3 Collective altruism is here intended as in Becker (1974): positive attitudes toward income transfers are driven by people's taste for giving. People support redistribution because they receive status or acclaim, or simply experience a 'warm glow' from having 'done their bit'.
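A minimal sketch of the four-group classification of Fig. 3, again on the synthetic data from the sketches above: individuals are split at the sample means of the two scores (the authors' exact cut-off is not stated here, so the mean split is an assumption).

auto_hi = df["autonomy_freedom"] > df["autonomy_freedom"].mean()
ineq_hi = df["inequality"] > df["inequality"].mean()

df["group"] = 3                          # south-east: non-fairness-driven, selfish
df.loc[auto_hi & ineq_hi, "group"] = 1   # north-east: fairness-driven, merit-believer
df.loc[~auto_hi & ~ineq_hi, "group"] = 2 # south-west: fairness-driven, privilege-believer
df.loc[auto_hi & ~ineq_hi, "group"] = 4  # north-west: non-fairness-driven, collective altruistic
print(df["group"].value_counts().sort_index())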
Table 4. Multinomial Logit Estimation: Position

[Table 4 reports, for each explanatory variable (Sex, Age, Marital status, Income, Political orientation, Competition, Freedom vs Equality, Private ownership, Merits, Trust, Job good paid, Job interesting), a 4 x 4 block of estimated elasticities for every pairwise comparison of the four group positions of Fig. 3, with t-values in brackets; ***, **, * denote significance at 1%, 5%, and 10% respectively.]
In order to calculate the probability of belonging to one of the four group categories, and to assess the determinants of the position of a given individual in one of the four quadrants of Fig. 3, we implement a multinomial logit (MNL) model. This choice is justified by the fact that the position of each individual in a given group (our dependent variable) is not ordered. Let p_j be the probability of belonging to group j. This probability is derived as follows:

p_j = exp(C_j x) / D,   j = 1, 2, 3                  (2)

and

p_4 = 1 / D                                          (3)

where

D = 1 + Σ_{j=1}^{3} exp(C_j x)                       (4)
where J 1,2,3 and 4 indicate the different positions of each of the 1885 individuals taken into consideration in our analysis; p j indicates the probability of belonging to group j; x is a vector of explanatory variables; and C j is the vector of coefficients associated to group j. The MNL results are reported in Table 4. The model is fully estimated using all combinations of outcome categories. In the table we present the estimated elasticities of the explanatory variables with respect to the probability shares of each of the four group categories together with their respective t -values. We note that overall some of the regressors used in the estimated equation do not appear to affect the position of individuals in each group category. More specifically, the sex of respondents, their marital status, their political orientation, their attitudes toward trusting others and their preferences toward merit and competition do not determine the position of each individual in a given group. In contrast, another set of regressors plays a statistically significant role on the probability that an individual moves from a given group category to another. More specifically, such regressors are Freedom vs Equality, Income, Private Ownership, Job Interesting and Job Good Paid. Let us consider the effect of each of them on the position of individuals in each of the four group categories. We start by commenting on the determinants of an individual’s move from group 2 in Fig. 3 to any of the other three remaining group categories. Fairness-driven and privilege-believer individuals are more likely to become fairness-driven and merit-believer individuals when their incomes rise, their support to freedom as opposed to equality increases and if they change their mind by considering important that a job is well paid and interesting. In terms of Fig. 3, there is therefore a statistically significant probability that individuals belonging to group 2 move to group 1 when their incomes
rise, their support for freedom as opposed to equality increases, and they come to consider it important that a job is well paid and interesting. The same independent variables, with the same causal relationship, affect the probability that individuals displaying fairness-driven and privilege-believer preferences change their attitudes and become non-fairness-driven and collective altruistic. Again, in terms of Fig. 3, there is a statistically significant probability that individuals belonging to group 2 move to group 4 when their incomes rise, their support for freedom as opposed to equality goes up, and when they modify their opinions about the characteristics they believe a job should have. However, differently from the previous case, a move from group 2 to group 4 is also driven by greater support for public ownership of business and industry. Finally, fairness-driven and privilege-believer individuals are more likely to become non-fairness-driven and selfish individuals if their preferences for freedom as opposed to equality rise and if they consider it important that a job is well paid and interesting. Fairness-driven and merit-believer individuals are more likely to become non-fairness-driven and collective altruistic individuals when both their income and their support for private ownership of business and industry rise. Further, a decrease in the level of income changes the individual type from fairness-driven and merit-believer to non-fairness-driven and selfish. Finally, non-fairness-driven and collective altruistic individuals become non-fairness-driven and selfish when the level of income declines, when support for freedom (as opposed to equality) increases, and when their support for public ownership of business and industry rises.
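The probabilities in equations (2)-(4) can be computed directly; the sketch below implements them in numpy with group 4 as the base category, using illustrative coefficient vectors C_j rather than the estimates behind Table 4.

import numpy as np

def mnl_probs(x, C):
    """x: vector of explanatory variables; C: 3 x k array of coefficient
    vectors C_j for groups 1-3 (group 4 is normalized to zero)."""
    scores = np.exp(C @ x)                 # exp(C_j x) for j = 1, 2, 3
    D = 1.0 + scores.sum()                 # equation (4)
    return np.append(scores / D, 1.0 / D)  # equations (2) and (3)

x = np.array([1.0, 0.4, -0.2])             # hypothetical regressors
C = np.array([[0.5, 0.1, 0.0],
              [0.2, -0.3, 0.4],
              [-0.1, 0.2, 0.1]])           # hypothetical coefficients
p = mnl_probs(x, C)
print(p, p.sum())                          # probabilities of groups 1-4; sums to 1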
5. Concluding Remarks

In the economics literature, the relationship between individual beliefs about social mobility and income redistribution has been studied along two different paths (Fong 2006). The prospective mobility view assumes that expectations of upward mobility decrease the demand for redistribution on purely selfish grounds. The fairness view holds that people who believe that there are few constraints to upward mobility also believe that the economy is a meritocracy and, therefore, that the pre-fiscal distribution of income is fair. In this paper we argue that fairness concerns may drive people's attitudes toward inequality. In contrast to the recent literature (Benabou and Tirole 2003; Alesina and Glaeser 2004; Alesina and Angeletos 2005; Corneo and Grüner 2002; Fong 2001), we relate the individual's concept of fairness to the Millian notion of autonomy freedom. We claim that individuals' control over the determinants of the income distribution, whether through the working of a meritocratic society or through the functioning of an extensive welfare state, inspires fairness considerations about inequality. We point out that such control is voluntarily exercised by an individual when she makes
autonomous choices over her life. The greater the exercise of a person's autonomous behaviour, the more she is in a position voluntarily to affect the level of her income and the weaker her support for redistribution. We therefore hypothesize that, on the one hand, the greater the extent of an individual's autonomy freedom in decision-making, the higher her perception that income dynamics are driven by merit and the lower her support for redistribution and income equality; on the other hand, the lower the extent of an individual's autonomy freedom in decision-making, the higher her perception that income differences are due to luck and privilege and the greater her support for equality in the distribution of income. The implications of our study are important with regard to the classical problem of the trade-off between freedom and income inequality in liberal democracies. The political debate over income inequality has traditionally been characterized by two opposing views. Liberals consider economic inequality unjust and socially destructive. Conservatives generally feel that riches are the best way to reward those who contribute most to prosperity and/or that a generous welfare state encourages idleness and folly amongst the poor. These two apparently divergent views may be reconciled in the light of the results obtained in this study. Income inequality can be considered fair according to the extent of autonomy freedom people enjoy. In societies where individuals are not autonomous and do not voluntarily determine the source of their income, state intervention via redistribution schemes is instrumental in guaranteeing social justice. Yet, in societies of autonomous individuals, income distribution is fair and social justice is guaranteed by the volitional control people exercise over their lives. This view can be better understood if social mobility is interpreted in procedural terms: it is not the degree of inequality that matters, but the process that brought it about. Two different societies may therefore display the same income inequality and yet differ in fairness according to the extent of autonomy freedom people enjoy in each of them.
References

Alesina A. and Angeletos G.M. (2005) Fairness and Redistribution, American Economic Review 95: 960–80.
Alesina A. and Glaeser E.L. (2004) Fighting Poverty in the US and Europe: A World of Difference, Oxford University Press.
Alesina A., Glaeser E.L. and Sacerdote B. (2001) Why Doesn't the US Have a European-Style Welfare State?, Brookings Papers on Economic Activity, Fall: 187–278.
Bavetta S. and Guala F. (2003) Autonomy Freedom and Deliberation, Journal of Theoretical Politics 15: 423–43.
Bavetta S. and Guala F. (2005) Opportunities and Individuality: Their Value and the Possibility of Measuring It, mimeo.
Bavetta S. and Navarra P. (2004) Theoretical Foundations of an Empirical Measure of Freedom: A Research Challenge to Liberal Economists, Economic Affairs 24: 44–6.
Bavetta S. and Peragine V. (2006) Measuring Autonomy Freedom, Social Choice and Welfare 26: 31–45.
Becker G.S. (1974) A Theory of Social Interactions, Journal of Political Economy 82: 1063–93.
Benabou R.J.M. and Tirole J. (2003) Intrinsic and Extrinsic Motivation, Review of Economic Studies 70: 489–520.
Benabou R.J.M. and Tirole J. (2006) Belief in a Just World and Redistributive Politics, Quarterly Journal of Economics 121: 463–92.
Corneo G. and Grüner H.P. (2002) Individual Preferences for Political Redistribution, Journal of Public Economics 83: 83–107.
Deci E. (1980) The Psychology of Self-Determination, Lexington Books.
Eisenberger R., Rhoades L. and Cameron J. (1999) Does Pay for Performance Increase or Decrease Perceived Self-Determination and Intrinsic Motivation?, Journal of Personality and Social Psychology 77: 1026–40.
Fong C. (2001) Social Preferences, Self-Interest, and the Demand for Redistribution, Journal of Public Economics 82: 225–46.
Fong C. (2006) Prospective Mobility, Fairness and the Demand for Redistribution, Behavioral Decision Research Working Paper Series, SDS Carnegie Mellon University.
Frey B.S. (1997) Not Just for the Money: An Economic Theory of Personal Motivation, Edward Elgar.
Frey B.S., Benz M. and Stutzer A. (2004) Introducing Procedural Utility: Not Only What, But Also How Matters, Journal of Institutional and Theoretical Economics 160: 377–401.
Langfred C.W. (2005) Autonomy and Performance in Teams: The Multilevel Moderating Effect of Task Interdependence, Journal of Management 31: 513–29.
Mill J.S. (1991) On Liberty [1859], Oxford University Press.
Mudambi R., Mudambi S. and Navarra P. (2007) Global Innovation in MNCs: The Effects of Subsidiary Self-Determination and Teamwork, Journal of Product Innovation Management 24: 442–55.
Ohtake F. and Tomioka J. (2004) Who Supports Redistribution?, Japanese Economic Review 55: 333–54.
Pattanaik P.K. and Xu Y. (1990) On Ranking Opportunity Sets in Terms of Freedom of Choice, Recherches Economiques de Louvain 56: 383–90.
Ravallion M. and Lokshin M. (2000) Who Wants to Redistribute? The Tunnel Effect in 1990s Russia, Journal of Public Economics 76: 87–104.
Sansone C. and Harackiewicz J. (2000) Intrinsic and Extrinsic Motivation: The Search for Optimal Motivation and Performance, Academic Press.
Sen A.K. (1988) Freedom of Choice: Concept and Content, European Economic Review 32: 269–94.
Sen A.K. (1993) Markets and Freedoms: Achievements and Limitations of the Market Mechanism in Promoting Individual Freedoms, Oxford Economic Papers 45: 519–41.
Sugden R. (1998) The Metric of Opportunity, Economics and Philosophy 14: 307–37.
Suhrcke M. (2001) Preferences for Inequality: East vs. West, Innocenti Working Paper No. 01/16, UNICEF Innocenti Research Centre.
19. The Principle of Fairness: A Game Theoretic Model

Luciano Andreozzi
Department of Economics, University of Trento, Italy
1. Introduction

The history of international agreements aimed at environmental protection offers a mix of successes and failures, with the latter being far more frequent than the former. The best known example is the Kyoto protocol, a treaty signed by over 160 countries to reduce emissions of the gases held responsible for the greenhouse effect. This agreement is usually considered half a failure, mostly because of the decision of the United States not to ratify it (Pizer 2006: 26). Other international treaties have faced similar problems. For example, the Helsinki treaty for the reduction of the emissions responsible for acid rain, signed in 1985 by a group of European and American countries, failed to be ratified by the United States and the United Kingdom (Barrett 2005). These failures are not a coincidence: because these types of agreement are usually concerned with the production of public goods, they suffer from two main sources of instability. First, signatory countries are tempted not to respect the terms of the agreement they have ratified, which can only be avoided if appropriate incentives are provided. For example, economic sanctions could be imposed on countries that fail to keep their emissions within the limits imposed by the treaty. However, these incentives do not apply to those countries that decided not to sign the treaty in the first place, even if they stand to gain from it. This generates a second source of instability: each country is tempted to stay out of the agreement, hoping that others will join it nonetheless. Game theoretical models of international cooperation show that this second form of instability is pervasive and far more difficult to solve than the first. Using a large variety of analytical and conceptual tools (cooperative and non-cooperative games, Nash equilibria and their refinements, various definitions of the core), these models invariably show that the situation in which an agreement is signed by all countries that benefit from it is rarely
an equilibrium (Carraro 2003). Stable cooperative agreements are usually characterized by a large amount of free-riding and substantial inefficiencies. There is a natural remedy to this source of instability: signatory countries might force would-be free-riders to join the treaty against their will. Universal endorsement of the agreement might thus be reached under the threat of economic sanctions against non-signatories. Of course, not all threats are credible. Signatory countries will have an incentive not to punish non-signatory ones whenever carrying out the threat is costly for them (Barrett 1997). However, even when threats are costless – and therefore credible – their use raises some interesting philosophical problems concerning their moral admissibility that are seldom addressed in this literature. On what grounds can signatory countries harm non-signatory ones in order to induce them to join the treaty? Are there limits to the admissible punishment? Would it be morally acceptable to bomb the capital of a country that refuses to ratify a treaty for the reduction of seal fishing? Maybe other, less dramatic forms of punishment would be acceptable, but where should the limit be drawn? In a paper published in 1955, H.L.A. Hart presented what came to be known as the 'principle of fairness', which provides a qualified, positive answer to these questions (Hart 1955). This principle states that when an individual receives a benefit from a collective action initiated by others, he is under an obligation to do his part and hence can be legitimately coerced into doing it. Hart's principle is extremely powerful because it implies the existence of obligations that are independent of individual consent. Its application to the problems of international cooperation would imply, for example, that countries that have ratified an international agreement for pollution control have a right to ask that all the countries that gain from it ratify the treaty as well. In turn, this implies that some harm could rightfully be imposed on countries that refuse to do so. Not surprisingly, this principle has come in for close scrutiny by philosophers who are sensitive to individual rights and has sometimes been claimed to be indefensible (Nozick 1974; Simmons 1979). The present paper defends Hart's principle of fairness from these criticisms against the background of the above examples of international cooperation. The defence is articulated as follows. In Section 2, I provide a brief outline of the principle of fairness, presenting its various formulations due to Hart himself and to John Rawls. Section 3 follows a standard procedure in this literature and proposes some amendments which make the principle easier to discuss and to defend. In particular, I present an analysis of the meaning of coercion in this context, which is original with respect to the current literature and crucial for my argument to work. Sections 4 and 5 present an intuitive argument in favour of the principle of fairness. I follow Arneson (1982) in stressing the close analogy with Robert Nozick's defence of property rights over natural resources, the so-called Lockean proviso. Finally,
in Section 6 I present a simple game theoretical model, originally due to Dixit and Olson (2000), which provides a more rigorous definition of the idea of a freely accepted agreement and shows why the production of public goods involving more than two individuals cannot be resolved through agreements of this kind. This is the main reason I offer in favour of the idea that some obligations might be independent of individual consent. In Section 7, I discuss some objections that are likely to be raised against the approach presented here. I conclude in Section 8.
2. The 'Principle of Fairness'

The content of the 'principle of fairness' is best presented by quoting its first and most important proponent, H.L.A. Hart. It states that

When a number of persons conduct any joint enterprise according to rules and thus restrict their liberty, those who have submitted to these restrictions when required have a right to a similar submission from those who have benefited by their submission (Hart 1955: 185).
This principle was christened the 'principle of fairness' by John Rawls, who formulates it as follows:

when a number of persons engage in a mutually advantageous cooperative venture according to rules, and thus restrict their liberty in ways necessary to yield advantages for all, those who have submitted to these restrictions have a right to a similar acquiescence on the part of those who have benefited from their submission. We are not to gain from the cooperative labors of others without doing our fair share (Rawls 1971: 112).
The principle of fairness can be rephrased as follows (this more cumbersome presentation will facilitate our forthcoming discussion). Consider a set N = {1, 2, ..., n} of individuals, with n > 2 (the reason for restricting our attention to situations with strictly more than two individuals will become clear later). Let S be a proper subset of N, and S^c the complement of S in N. Let X be a joint action taken by the members of S which negatively affects one or more members of S^c. In normal conditions (to be spelled out shortly) the members of S do not have a right to X; that is, the members of S^c have a right to ask the members of S to refrain from X. However, if the members of S have performed a second action Y that benefits those who are negatively affected by X, then the members of S do come to have a right to X. That is, X, a generally impermissible action, becomes rightful in virtue of the benefits the members of S have bestowed on the members of S^c. All this is very abstract, so an example might help. Suppose N is the set of states that have a coast on the Mediterranean Sea. Let S be equal to N save
for one state, for example Greece. Let X be a set of economic sanctions imposed by all members of S against Greece. For instance, X might be: 'Greek ships are banned from all non-Greek ports in the Mediterranean Sea.' In 'normal conditions' X is not rightful; that is, the states with a coast on the Mediterranean Sea do not have a right to ban Greek ships from their ports. 1 Suppose, however, that all states in S have signed an agreement to reduce pollution in the Mediterranean Sea which yields benefits to all states, Greece included. This is the joint action Y mentioned above. The principle of fairness states that action X (banning all Greek ships from Mediterranean ports) might become rightful in the presence of Y. It is the benefit the other states bestow on Greece that authorizes them to take an action which harms Greece, an action that would have been morally wrong absent such benefits. The principle of fairness has an intuitive appeal to it, and it is thus surprising that many authors have found it unsound. In a much-quoted passage of Anarchy, State, and Utopia, Robert Nozick writes:

Suppose some of the people in your neighborhood (there are 364 other adults) have found a public address system and decide to institute a system of public entertainment. They post a list of names, one for each day, yours among them. On his assigned day (one can easily switch days) a person is to run the public address system, play records over it, give news bulletins, tell amusing stories he has heard, and so on. After 138 days on which each person has done his part, your day arrives. Are you obligated to take your turn? You have benefited from it, occasionally opening your window to listen, enjoying some music or chuckling at someone's funny story. The other people have put themselves out. But must you answer the call when it is your turn to do so? As it stands, surely not (Nozick 1974: 93).
Notice that this is a crucial passage in Nozick's defence of an ultraminimal state, because it is the main reason he offers for denying that the state has a right to coerce individuals to produce public goods. In turn, this is the main point of departure of Nozick's libertarianism from more standard versions of radical liberalism such as those of Milton Friedman and F.A. von Hayek. Notice also that Nozick is not providing reasons here. He presents his moral intuitions, which run against the principle of fairness, and hopes that the reader shares them. Some commentators (Simmons 1979, 1987) agree with his view, others do not (Klosko 1987a, 1987b, 1990). 2

1
I am not defending this point, that is, I am not claiming that banning all Greek ships from Mediterranean ports violates (in normal conditions) Greece’s rights. Readers who are uncomfortable with this example can replace it with any joint action taken by all Mediterranean states that hurts Greece and that they believe would constitute a violation of Greece’s rights in ‘normal’ or ‘ordinary’ conditions. 2 Other authors share Nozick’s emphasis on personal intuitions in rejecting the principle of fairness. After discussing an example close to Nozick’s public address system, Simmons arrives at the conclusion that ‘People cannot simply force institutions on me, no matter how just, and force on me a moral bond to do my part’ (Simmons 1979: 148).
I will not question Nozick's moral intuitions. As I briefly mention below, I believe that most of his arguments are based on rather blunt examples, so that by slightly changing some of them we would reach very different conclusions. In this paper, however, I follow a different line of attack, pioneered by Arneson (1982). This line of argument defends the principle of fairness in a way that parallels Nozick's defence of the private appropriation of natural resources. It consists in showing that (a limited amount of) punishment against free-riders is morally justified by the same argument that justifies individuals in appropriating natural resources even when they are in scarce supply. Before we do that, however, we need a sharper definition of the conditions under which the principle of fairness applies.
3. Tightening Up (but not too much) the Principle of Fairness

Many commentators have felt that the original presentation of the principle of fairness by Hart and Rawls is somewhat unsatisfactory, and it has attracted a long list of amendments (Cullity 1995). Here I propose my own list (which partly overlaps with others). The first amendment that one definitely needs is that individuals who are asked to contribute must obtain a benefit from the public good at least as large as the cost they are required to pay. This clause was explicitly introduced by Nozick (1974: 94), who believed that it was not sufficient to make the principle viable, and it can be considerably strengthened without harm. For instance, in his presentation of the principle of fairness in A Theory of Justice, Rawls states that 'a person is required to do his part as defined by the rules of an institution' only 'when (…) the institution is just (or fair)' (Rawls 1971: 112). While the nature of fairness might be difficult to spell out (in this passage Rawls appeals directly to his own theory of justice, although other definitions are not ruled out), it is easy to agree that individuals cannot rightly be coerced into playing their part in a cooperative scheme that is blatantly unfair. Your neighbours have no right to force you to contribute to the production of a public good which benefits all in the same manner if they have allocated a much larger share of the cost to you. This remains true even if they have so arranged things that the (unequal) share of the cost they ask you to pay is still smaller than the benefit you receive. In his presentation of the principle of fairness, Rawls introduces a second requirement which is much less easy to defend and which, not surprisingly, has attracted a fair amount of criticism. He requires that the principle applies only when an individual 'has voluntarily accepted the benefits of the arrangement or taken advantage of the opportunities it offers.' Arneson (1982) noticed that this clause is too strong if the principle is to 'fulfill the philosophical ambition assigned to it by Hart' (Arneson 1982: 619). Other commentators have claimed that this proviso collapses the principle of fairness into an elaborate form of consent argument. That is, taken literally,
Rawls' formulation of the principle of fairness would imply that individuals can be legitimately coerced into doing their part in a cooperative venture only when they have given their consent to it. 3 This might be a good point, but I think there is a much stronger reason to exclude Rawls' further clause from the final formulation of the principle of fairness. The reason is that we want to apply this principle to the free-riding problems that are created by all public goods. Many public goods, besides being non-rival and non-excludable, are also non-optional, which means that individuals who benefit from their production cannot avoid enjoying them. To take our previous example, if all Mediterranean countries agree to a joint effort to reduce pollution, there is nothing Greece can do to avoid the benefits generated by this effort. In this and all similar cases, there is no clear sense in which individuals have 'done something' to receive the benefits produced by others through their collective action. 4 Finally, there is a third clause which is extremely important and (to my knowledge) has never been presented in this form in the literature. In Hart's original formulation, the principle of fairness reads: 'When a number of persons conduct any joint enterprise according to rules…' The way I read this passage, which leads to my third amendment, is that there are more than two people involved: at least two individuals who start a joint venture and a third individual who gets a benefit from it. This might seem a technicality, but it is not. The literature contains long discussions of examples in which individual A gives individual B a benefit and then asks for a payment. For example, Cullity (1995) invites us to consider the following situation: 'On the first day in my newly carpeted house, I leave my shoes outside. In the morning I am delighted to find they have been extraordinarily well repaired. I am less delighted when I receive the bill' (Cullity 1995: 10). Examples like these were originally introduced by Nozick in his attack on the principle of fairness. He noticed that:

you may not decide to give me something (…) and then grab money from me to pay for it, even if I have nothing better to spend the money on. One cannot, whatever one's purposes, just act so as to give people benefits and then demand (or seize) payment. Nor can a group of persons do this (Nozick 1974: 95).
Some of the amendments of the principle of fairness discussed in the literature (e.g. Arneson 1982) contain clauses that eliminate these kinds of

3 See Simmons (1979: 124) for this line of argument and for some ingenious examples that show why Rawls' clause does not collapse the fairness principle into a kind of consent argument.
4 One should contrast this situation with those involving public goods that are non-excludable and non-rival, but optional. For example, if the government of my country runs a public radio system (which is also the only radio station in my area), I can avoid enjoying the benefits it provides by simply refraining from buying a radio set. Rawls' statement of the principle of fairness implies that in the act of buying a radio (i.e. accepting a benefit) I am obliged to contribute to the public radio system.
situations by stipulating that the principle of fairness applies only when 'large groups' of individuals are involved. I find this restriction too vague (how large must a group be to count as 'large'?) and unnecessarily strong. In Section 6 I defend the view that the principle of fairness can rightly be applied when at least three individuals are involved. Putting together the elements presented above, the principle of fairness can be restated as follows:

Principle of Fairness. When a group of (two or more) individuals joins in a collective action, they have a right to ask all those who have benefited from their decision to participate as well, provided that the costs and benefits of the joint enterprise are shared fairly. This requires, at the very least, that nobody is asked to pay a cost larger than the benefits he gets from the collective action. It might include some further requirements, such as that costs are shared proportionally to benefits.

In turn, the idea that those who have joined the collective action have a 'right to ask' all those who have benefited from it to do their share can be put more formally as follows:

The Right to Harm. There is at least one conceivable action X that can be taken by those who have participated in the collective action with the following characteristics: (a) X hurts those who did not participate; (b) in the absence of the collective action (and the benefits it produces), the individuals who are hurt by X would have a right to ask the others to refrain from X; however, (c) because of the benefits the collective action has bestowed on the non-participants, those who have participated in it do have a right to take X.

Notice that I have claimed that there is a 'conceivable' action X that can be used as a punishment against free-riders. This means that the benefits cooperators have given to free-riders do not authorize just any form of punishment. Mediterranean states are not entitled to bomb Athens if Greece fails to ratify their treaty. However, to say that they have a right to force Greece to enter the agreement requires only that there is at least one conceivable action X that they could take to this effect. In the remainder of this paper I will show that a defence of the principle of fairness crucially hinges on the way we specify X.
4. The Principle of Fairness and the 'Lockean Proviso'

Consider the following stylized public good problem, which I will use throughout my defence of the principle of fairness. There are three individuals, John, Paul and Matthew, who can engage in a common enterprise which produces a public good each of them values at v. The public good costs
c, with c/2 < v < c. This implies that a single individual will not find it worthwhile to produce the public good (because c > v) and, at the same time, that two individuals are sufficient for a successful collective action (because v > c/2). This hypothesis excludes the case in which c/3 < v < c/2, i.e. the case in which the public good will be produced only when its cost can be divided among all the individuals involved. It thus captures Rawls' intuition that the principle of fairness applies when the 'benefits produced by cooperation are, up to a certain point, free' (Rawls 1964), in the sense that not all individuals are indispensable for the collective action to be successful. Suppose now that John and Paul decide to produce the public good and to divide its cost equally among all who benefit from it, asking Matthew to contribute c/3. Matthew refuses. Notice that dividing the cost between John and Paul alone still gives each of them a benefit v that is larger than the cost c/2. However, Matthew is now free-riding on their efforts. The question is: if John and Paul decide to carry on with their joint project, do they have a right to harm Matthew? The answer depends on the nature and the extent of the harm they inflict on him. To see this, suppose that the public good is the repair of a private road that serves the houses of John, Paul and Matthew, which produces a value of fifty euros for each house at a total cost of sixty euros. Suppose further that the only way John and Paul can punish Matthew for not paying his fair share of twenty euros is to kill him. We would certainly agree that punishing Matthew in this way for not contributing to the public good would be morally unacceptable. We would also agree that John and Paul cannot even threaten Matthew with such a punishment. This is true even in those cases in which it is common knowledge among the three players that the threat is credible, so that Matthew will pay his share and will not be killed. If killing Matthew for not paying twenty euros is morally unacceptable, it is also morally unacceptable to extort a certain behavior from him under such a threat. In a more realistic setting, however, John and Paul will have at their disposal plenty of ways to hurt Matthew that fall short of killing him. Suppose for example that John and Paul are internet hackers who can access Matthew's online bank account and destroy an amount of p euros, without hurting Matthew in any other way. In normal conditions we would all agree that Matthew has a right not to have a single euro destroyed by John and Paul. However, because of their collective action, John and Paul are now bestowing on Matthew a benefit of fifty euros. If they destroy any amount smaller than fifty euros, Matthew still receives a positive net benefit from them. For example, if they delete forty euros from his bank account, Matthew is still better off by ten euros with respect to the situation in which John and Paul had decided not to repair the road. 5

5 Notice that I am assuming here that John and Paul destroy more than Matthew's 'fair' share of the cost (which is twenty). The point is not only that John and Paul have a right to
Notice that the prospect of having a sum p stolen from his bank account would not induce Matthew to join John and Paul in their collective action if p is smaller than twenty euros. However, any sum between one-hundred euros and twenty euros will have the following characteristics. First, it will induce Matthew to join the cooperative action, because the threatened punishment is worse than the cost he is asked to pay. Second, even if Matthew decides not to join John and Paul in the production of the public road and hence he is punished, he still receives from John and Paul a positive net benefit. Arneson (1982) was the first to spot a clear analogy between the argument sketched so far and the so called ‘Lockean proviso’, which, according to Nozick, justifies appropriation of previously un-owned objects in the State of Nature. In the Second Treatise on Government, Locke stipulated that individuals have a right to appropriate natural, un-produced goods as long as there are ‘enough and as good left in common for others’ (quoted by Nozick 1974: 175). One can use this kind of right to justify, for example, an individual appropriating a glass of water from a large river. This is a very stringent requirement, though, which would never allow appropriation of un-produced goods when these goods are in short supply. Taken literally, Locke’s condition will forbid individuals to appropriate land whenever it is not as abundant (with respect to the population) as it was in pre-Colombian America. In view of these difficulties, Nozick amends the Lockean proviso by stipulating that individuals can appropriate natural resources as long as (in consequence of this appropriation) all those who could not appropriate are made no worse off. Nozick’s argument can be summarized as follows. When previously unoccupied land can be appropriated, individuals will have a much stronger incentive to cultivate and improve it. Such incentives enormously increase the availability of food and other agricultural products. Everybody stands to benefit from such abundance, including those who could no appropriate any part of the available land. Interestingly enough, in Nozick’s (1974: 177) theory the private property of natural resources comes to be justified by the ‘various familiar considerations favoring private property: it increases the social product (…) experimentation is encouraged (…), (it) enables people to decide on the pattern and types of risk they wish to bear …’ etc. Nozick (p. 182)concludes that the benefits brought about by the private property of natural resources are so large that the possibility that somebody would be better off in the bleak state in which appropriation is forbidden would be no more interesting for political theory than ‘any other logical possibility’. The analogy with the principle of fairness is clear enough but it is worth discussing it in some detail. Consider the following situation. John, Paul and ‘grab’ a payment from Matthew (i.e. to appropriate the fair share of the cost, without Matthew’s consent); they also have a right to ‘punish’ Matthew for being a free-rider.
Matthew live on an otherwise deserted island on which there is a single field. Absent any form of property right, the field remains unused and its value is therefore approximately zero for each of them. Suppose John and Paul know how to put the field to productive use, which will generate a value v for them every year. Also, because of the greater availability of food on the island, Matthew will be better off with respect to the situation in which the field was un-owned (Matthew can now trade fish for John and Paul's corn). Nozick's way of formulating the Lockean proviso gives John and Paul a right to appropriate the land, even in the absence of Matthew's consent. Notice that there is a clear analogy with the situation in which John and Paul propose to repair the road and threaten to destroy fifty euros in Matthew's bank account if he does not contribute. The same logic that guarantees them a right to appropriate an un-owned field (i.e. that in so doing they indirectly benefit Matthew, even if he is no longer free to use that piece of land) gives them a right to threaten to destroy some of Matthew's money if he fails to contribute to the repair of the road.
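A small numerical sketch of the payoff logic just described, using the chapter's numbers (v = 50, c = 60, so c/2 < v < c); the punishment values p are illustrative.

v, c = 50, 60  # value of the public good to each player; total cost

# One producer alone pays c > v, so production is not worthwhile:
print("alone:", v - c)                     # -10

# John and Paul split the cost; Matthew free-rides:
print("cooperators:", v - c / 2)           # 20.0 each
print("free-rider:", v)                    # 50

# A threatened punishment p induces Matthew to join if it is worse than
# his fair share c/3 = 20, yet leaves him a net beneficiary if p < v = 50.
for p in (10, 30, 40):
    joins = (v - c / 3) > (v - p)          # joining beats being punished
    print(f"p = {p}: joins = {joins}, net benefit if punished = {v - p}")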
5. Why Not the Consent Argument?

Both the Lockean proviso and the principle of fairness are based on the idea that an action X (e.g. excluding some people from the enjoyment of a commonly used field) which prima facie violates some individuals' rights is justifiable in view of the benefits that the universal admissibility of this action produces for all the individuals involved. This makes both principles look very reasonable, but on further reflection one might wonder whether they are superfluous. Why does one need ad hoc principles to justify acts that constitute Pareto improvements over the status quo? After all, if action X brings benefits to all the people involved, is it not sufficient to assume that individuals will spontaneously contract to bring about such a desirable outcome? Consider again the situation in which John and Paul are granted a right to appropriate a piece of land without obtaining Matthew's consent. It was part of the example that such appropriation benefits all parties involved, Matthew included. Why do John and Paul not simply ask Matthew to accept their appropriation of the field, giving him part of the value v as compensation? In other words, John and Paul could simply buy the right to the exclusive use of the field from Matthew. The transition to a regime of private property rights is then achieved through the much simpler and uncontroversial principle that individuals can freely give up their rights: Matthew sells to John and Paul his right to use the previously commonly owned field. The idea that all obligations are grounded in an individual's free decision to give up some of his rights is very old in political philosophy and it
is surely far less controversial than the principle of fairness. In some of its variants, this idea can be found in all the classical versions of contractarianism (Simmons 1979). When it comes to the justification of political authority, this position is usually called the 'consent argument', whose best known presentation is in Locke's Second Treatise (§ 95):

MEN being (…) by Nature, all free, equal and independent, no one can be put out of this Estate, and subjected to the Political Power of another, without his own Consent. The only way whereby any one devests himself of his Natural Liberty, and puts on the bonds of Civil Society is by agreeing with other Men to joyn and unite into a Community, for their comfortable, safe, and peaceable living one amongst another.
The consent argument is usually dismissed by saying that it requires us to take seriously the metaphor of the 'state of nature'. The right the Italian state has to force me to pay my taxes cannot be grounded in a hypothetical consent I would have given in a fictitious state of nature that existed before the Italian Republic was founded. As Ronald Dworkin (1977: 151) famously wrote, 'a hypothetical contract is not simply a pale form of an actual contract; it is no contract at all.' This form of criticism of the consent argument is weaker than it looks. In dealing with international cooperation aimed at pollution control, for instance, we are not justifying an existing coercive institution, such as the Italian Republic. Rather, we are justifying the eventual use of coercion against independent countries. In these cases, one cannot dismiss the consent argument by simply saying that nobody has ever been asked to give his consent to a certain cooperative venture, because this is exactly what happens in these settings. Those who propose the existence of obligations that are independent of an individual's decision to accept them must provide (at least in these cases) a sound argument showing that what is achieved through the proposed principle (be it the principle of fairness or the Lockean proviso) could not have been reached via unanimous consent. The next section investigates this point.
6. A Game Theoretical Approach This section presents a simplified version of a model Dixit and Olson (2000) used to refute the claim, frequently attributed to Coase (1960), that in the absence of transaction costs coercion is not necessary to achieve Pareto optimality. The intuition behind the ‘Coase theorem’ is straightforward: every resource allocation that represents a Pareto improvement over the status quo can be reached by a (possibly very large) web of voluntary agreements. The Coase theorem is thus the economic counterpart of the consent argument in political philosophy. Proponents of the Coase theorem in economics proclaim the futility of coercion in the achievement of economic efficiency
(at least when transaction costs are negligible), just as proponents of the consent argument deny that individuals can be assigned obligations without their (tacit or explicit) consent. To defend the principle of fairness one has to show that there are situations in which efficient resource allocations that can be reached when this principle is operating could not be reached otherwise; in particular, that they could not be reached through a series of voluntary agreements. Consider again the public good dilemma faced by John, Paul and Matthew. Dixit and Olson invite us to consider the following form of interaction. The game proceeds in two stages. In the first, non-cooperative stage, the three individuals decide independently whether to join a meeting in which it will be decided whether to produce the public good. Then a second, cooperative stage follows, in which those who have joined the meeting decide whether to produce the public good and how to divide the costs. To assume that the second stage is a cooperative game amounts to assuming that contracting is costless and that any agreement reached can also be enforced at no cost. Of course, this hypothesis is unrealistic, and it is introduced here only because we want to give the consent argument the best chance of working. The reason why this model is relevant to our discussion is similar to the reason Dixit and Olson provide to show that it is relevant in the discussion of the Coase theorem: it helps to illustrate the nature of a voluntary agreement. Individuals should have the right to decide freely whether to participate (to a cooperative venture). Once participants have emerged, and have struck a deal, it will be enforced by the prevailing transaction technology. But in the strict logic of the argument, there is no such thing as society until individuals come together to form it, and statements such as 'the society will devise a conditional contract to ensure efficiency' are empty until one specifies the decision process of individuals that leads to the formation of this society (Dixit and Olson 2000: 313).
The present model helps our discussion of the principle of fairness because it provides a clear framework in which to answer the following question: if those who have joined the meeting decide to provide the public good, do they have a right to coerce those who did not participate into doing their part in its production? To answer this question we have to examine what would happen if we first say no, and then if we say yes. Suppose that individuals know that any agreement reached in the second stage will not be forced on those who decided not to participate in the meeting. As in any finite game, the solution is obtained by backwards induction. We first have to determine what would happen in the second stage of the game, depending on the number of participants in the meeting. Given that we have assumed the simplest public good problem (the same value for all participants), this part of the model has a trivial solution: the good will be provided if two or three players decide to participate in the collective action, and the cost will be shared equally among them. If only one individual is present at the meeting, the good will not be produced. All this is a simple consequence of the assumption that $c > v > c/2$.
                          John & Paul
              0 IN, 2 OUT   1 IN, 1 OUT   2 IN, 0 OUT
 Matthew IN        0          v − c/2       v − c/3
         OUT       0             0              v

Matrix 1
Given these expectations about the outcome of the second stage of the game, we have to examine how players will choose in the first stage, when they decide simultaneously and independently whether or not to attend the meeting. Each player can play IN or OUT. Matrix 1 represents the payoff each player (e.g. Matthew) expects on the basis of the decisions of the other two. If both John and Paul decide to stay out, the public good is not provided and hence Matthew gets zero regardless of his strategy choice. (We are assuming that there are no costs of attending the initial meeting; otherwise IN would yield a smaller payoff than OUT.) If Matthew and either John or Paul choose IN, the public good is produced and the cost is shared between Matthew and the other player who attended the meeting. (The player who stayed OUT gets v in this case, because the good is produced and he pays no cost.) Matthew's payoff in this case is $v - c/2$, as he is paying half of the cost of the public good. If Matthew decides to stay OUT while only one other player chooses IN, the good is not produced and his payoff is zero. Notice that when only one other player chooses IN, Matthew is pivotal: the public good is produced if and only if he plays IN. Finally, when both John and Paul play IN, if Matthew plays IN as well the good is produced and the cost is shared equally among the three, so that Matthew's payoff is $v - c/3$. If Matthew stays OUT, the other two players pay for the public good and share the cost between them, so that Matthew's payoff is v. This is the case in which Matthew free-rides on John and Paul's efforts. This game has several Nash equilibria. For example, there is a (Pareto efficient) Nash equilibrium in which John and Paul choose IN and Matthew stays OUT. In another Nash equilibrium, Paul stays OUT while John and Matthew play IN. However, all these pure-strategy Nash equilibria are unreasonable in that they require a certain degree of coordination among the choices of the three players. How should the three players decide who stays OUT and enjoys the benefits of free-riding? As Michael Taylor (1987) has shown in his pioneering study of the repeated Prisoners' Dilemma, if pre-play
communication is possible, each player would try to convince the others that he has pre-committed himself to playing OUT. The outcome of such communication would clearly be unpredictable. A natural choice is to follow Dixit and Olson in restricting attention to symmetric equilibria (i.e. equilibria in which all players use the same strategy). This is a sensible enough move in a game like the one we are discussing, in which players are symmetrically placed with respect to one another and have the same payoffs for each possible outcome. There are only two symmetric Nash equilibria, both of which are weak and inefficient. In the first, all players stay OUT in the first stage. This is a (weak) Nash equilibrium because, given the choices of the others, each player is in fact indifferent between IN and OUT. The second symmetric Nash equilibrium requires players to use a mixed strategy: each player plays IN with probability β and OUT with probability (1 − β). To find such an equilibrium we have to find the value of β for which IN and OUT yield the same payoff. Let Q(IN, β) and Q(OUT, β) denote the payoffs yielded by IN and OUT, respectively, when both of the other players play IN with probability β. We have:

$$Q(\mathrm{IN}, \beta) = 0 \cdot (1-\beta)^2 + (v - c/2) \cdot 2\beta(1-\beta) + (v - c/3)\,\beta^2$$
$$Q(\mathrm{OUT}, \beta) = 0 \cdot (1-\beta)^2 + 0 \cdot 2\beta(1-\beta) + v\,\beta^2.$$

We look for the value of β which equalizes these two magnitudes, which is

$$\beta^* = \frac{v - c/2}{v - c/3}.$$

Hence, in equilibrium each player will attend the kick-off meeting with a probability that is strictly smaller than one, implying that the equilibrium itself is inefficient. To see this, consider that the equilibrium payoff is equal to Q(IN, β*) (which in turn is equal to Q(OUT, β*)), which is a convex combination of 0, $v - c/2$ and $v - c/3$. Since β* < 1, this payoff is strictly smaller than $v - c/3$, the payoff the three players would obtain by attending the meeting with probability 1. However, such a state is not an equilibrium, because if each player expects the others to participate in the kick-off meeting with a probability larger than β*, he will prefer to stay OUT. In Dixit and Olson's intention this model shows that, even in the absence of transaction costs, there would be inefficiencies in the production of public goods. They suggest that the only escape route from these inefficiencies is to abandon the Coasian assumption that cooperation can be based only on voluntary agreements. Their main conclusion is not that public goods will not be provided in reality, but that 'successful provision of public goods, or internalization of externalities in large groups, usually requires some form of coercion' (Dixit and Olson 2000: 316). A similar conclusion can be derived for our principle of fairness. Suppose there is a way in which those who have played IN can punish those who stayed OUT. As with our previous example, suppose that those who have played IN and have produced the public good have a way to destroy, at zero cost, some of the money in the bank account of those who played OUT.⁶
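The symmetric mixed equilibrium derived above is easy to check numerically. The sketch below is ours, not Andreozzi's; the values v = 1 and c = 1.5 are hypothetical, chosen only so that the assumption $c > v > c/2$ holds.

```python
# Numerical check of the symmetric mixed-strategy equilibrium, using the
# reconstructed payoffs of Matrix 1. The values v = 1.0 and c = 1.5 are
# hypothetical; any pair with c > v > c/2 would do.
v, c = 1.0, 1.5

beta_star = (v - c / 2) / (v - c / 3)   # equilibrium participation probability

def Q(strategy, beta):
    """Expected payoff when each of the two other players plays IN with prob. beta."""
    p1 = 2 * beta * (1 - beta)   # exactly one of the others attends the meeting
    p2 = beta ** 2               # both of the others attend
    if strategy == "IN":
        return p1 * (v - c / 2) + p2 * (v - c / 3)
    return p2 * v                # OUT pays only when the other two produce the good

print(beta_star)                                # 0.5: strictly smaller than one
print(Q("IN", beta_star), Q("OUT", beta_star))  # 0.25, 0.25: indifference holds
print(v - c / 3)                                # 0.5: the full-participation payoff
```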
                          John & Paul
              0 IN, 2 OUT   1 IN, 1 OUT   2 IN, 0 OUT
 Matthew IN        0          v − c/2       v − c/3
         OUT       0             0            v − p

Matrix 2
Matrix 2 represents the game for the case in which the punishment technology is feasible and its existence and eventual use are common knowledge among the three players involved. The only difference from Matrix 1 is the payoff Matthew gets when the two other players play IN and he stays OUT. In this case his payoff is $v - p$, where p is the punishment he receives from the other two players. The fact that punishment is costless shows up in the fact that the payoff for Matthew and one other player who plays IN is unchanged at $v - c/2$: in this case Matthew and the other cooperating player would be punishing the free-rider, and if punishing were costly the payoff would be lower than $v - c/2$. The structure of the game is unchanged as long as the cost inflicted on the free-rider is low enough, i.e. if $p < c/3$. In such a case, even in the presence of punishment, the game has a mixed strategy Nash equilibrium in which the three players participate in the initial meeting with a probability β < 1. If $p > c/3$ the game's structure changes radically. Besides the weak symmetric Nash equilibrium in which the three players stay OUT, it has a single strict Nash equilibrium in pure strategies in which all players choose IN and receive a payoff equal to $v - c/3$. Furthermore, this equilibrium is the only one which survives the elimination of weakly dominated strategies, because OUT is now weakly dominated by IN. This is a clear improvement over the situation in which the public good was not produced at all (in the first equilibrium, in which everybody played OUT) or was produced with probability strictly smaller than one (in the second equilibrium, in which all individuals played IN with probability β*). There is a clear reason why any pair of players who have played IN are entitled (after producing the public good) to punish the one who stayed OUT, which we have already presented in our discussion of the intuitive reasons in favour of the principle of fairness and briefly reiterate here. The condition for any punishment strategy to induce all individuals to participate in the production of the public good is $p > c/3$.

6 The assumption that the punishment technology is costless is innocent and greatly simplifies matters. It makes the model simpler because the threat to punish free-riders by destroying some of their money is always credible if there are no costs involved in the punishing itself. It is an innocent assumption because we are interested in the normative issue raised by the rightfulness of such a punishment, rather than the positive issue concerning its feasibility. More on this in Section 7.
                        Paul
                  IN             OUT
 John IN   v − c/2, v − c/2     0, 0
      OUT        0, 0           0, 0

Matrix 3
Consider now any $p > c/3$ such that $v - p > Q(\mathrm{IN}, \beta^*) = Q(\mathrm{OUT}, \beta^*)$. With this level of punishment, each player, including a potential free-rider, is made better off with respect to the situation in which the principle of fairness was not operating and punishing free-riders was therefore impermissible. The analogy with the Lockean proviso should be evident here. I sketched a model in which the equilibrium in the absence of the principle of fairness is inefficient. I have thus shown that the introduction of the principle of fairness brings about benefits for all the parties involved. I have further shown, and this is the crucial point, that such benefits could not have been produced otherwise. In particular, I have shown that consent alone could not induce individuals to cooperate in the production of public goods. A natural conclusion would be that the same logic that justifies people in appropriating parts of the natural resources also justifies them in forcing free-riders to do their share in the production of public goods. To appreciate the power and the limits of the argument presented so far, it is interesting to see why the principle of fairness does not imply that individuals always have a right to impose benefits on others and require a payment. Consider the following variant of the above model, in which only John and Paul are involved. We assume, as always, that $c > v > c/2$, so that the production of the public good is not profitable for a single individual, but also that two individuals are willing to provide it if they can share the cost. Matrix 3 represents the first stage of the game, on the assumption that the good will only be provided if both players participate in the meeting (this is a consequence of assuming that $c > v$). Therefore, if either of the two stays out, the payoff is zero for both players. If both players show up at the meeting, the public good is produced and the cost is shared evenly between them; the resulting payoff is $v - c/2$ for each. This game has a unique Nash equilibrium in weakly dominant strategies: playing IN yields a payoff which is never smaller than the payoff yielded by OUT. As a consequence, giving the individuals the right to punish those who have not attended the meeting would be redundant, because the equilibrium produced by the consent argument alone is already Pareto efficient. Giving Paul the right to force John to pay for the public good in this setting would not have positive benefits, and therefore cannot be justified by the kind of arguments Nozick uses in his defence of the Lockean proviso.
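The cutoff $p = c/3$ can be verified in the same spirit, again with hypothetical numbers (ours, not from the original paper): below the cutoff the free-riding payoff $v - p$ still beats $v - c/3$, while above it IN weakly dominates OUT in Matrix 2.

```python
# Checks, for the reconstructed Matrix 2, that IN weakly dominates OUT exactly
# when the punishment p exceeds c/3. Hypothetical values with c > v > c/2.
v, c = 1.0, 1.5

def payoff(own, others_in, p):
    """Matthew's payoff given his own move and how many others play IN."""
    if own == "IN":
        return (0.0, v - c / 2, v - c / 3)[others_in]
    return (0.0, 0.0, v - p)[others_in]

for p in (0.0, 0.4, 0.6):   # c/3 = 0.5 here
    dominates = all(payoff("IN", k, p) >= payoff("OUT", k, p) for k in range(3))
    print(f"p = {p:.1f}: IN weakly dominates OUT: {dominates}")
# p = 0.0: False, p = 0.4: False, p = 0.6: True
```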
7. Refinements and Directions for Further Research There are a number of hidden assumptions in the Dixit and Olson model that are worth discussing in some detail before putting the principle of fairness to rest. First, all the results are based on the implicit assumption that punishment is costless for the punishers. This would be a fatal flaw if the model were meant to offer a working solution to real-life situations involving the production of public goods (as in the example discussed in the Introduction). However, one has to keep in mind that the main target of this paper is normative, not positive. The question we are trying to answer is: 'Do people have a right to punish free-riders?' The simplest way to answer this question is to imagine a world in which punishment is costless. Indeed, it would still be meaningful to ask whether John and Paul have a right to delete money from Matthew's bank account even if this action were completely costless for them and Matthew had no way to retaliate. Introducing a cost of punishing would unnecessarily complicate the matter. Second, most of the conclusions reached above may seem to be grounded on the author's intuitions alone.⁷ For example, why should a shoe repairer be denied the right to work on my shoes without my consent and then demand a payment, while my neighbors are granted a right to clean the streets and then force me to pay my fair share? My definition of the principle of fairness discriminates between these two (and many other similar) cases, although it might be difficult to dispel the impression that it does so only through a sophisticated appeal to (shared) moral intuitions. This impression is in part misleading. The aim of the game theoretical model is to shift the onus of proof from the principle of fairness per se to a principle which is far less controversial, that of Pareto. I demonstrated that granting cooperators the right to punish free-riders improves the conditions of all individuals involved, including, paradoxically enough, those who get punished because of it. The appeal to moral intuition is thus moved to a ground on which it is easier to find agreement. One might object that the Pareto principle is sometimes rejected, at least by authors with a strong liberal bent like Robert Nozick himself. Consider, however, that my discussion of the Lockean proviso (Section 4) is meant to show that even Nozick's defence of private property in natural resources is based on the Pareto principle. In fact, the point here is to indicate that his argument can be translated into a defence of the principle of fairness by changing just a few words. On the other hand, a fully-fledged defence of the Pareto principle is clearly beyond the scope of this paper. Finally, the game theoretical model presented here is based on the questionable assumption that costs and benefits are common knowledge among the players. This assumption is crucial because the fair punishment that can
7 I thank an anonymous referee for forcing me to discuss this matter explicitly.
be imposed on individuals depends upon the benefits they receive from other people's collective action. To see the importance of this assumption, consider the following. John's neighbours decide to hire a street cleaner for their neighbourhood and ask John to pay fifty euros. The service gives John a benefit of sixty euros, but he feels that the costs have been shared unfairly and refuses to pay what he is asked. The day after, John finds that his car window has been smashed in. Under the windscreen there is a small notice in which his neighbours let him know that this is a punishment for his unfair refusal to pay for the street cleaning. They also explain that they believe that the benefit John received is well above the cost of repairing the window, which they estimate at eighty euros. Hence, they conclude, even with this small punishment John has still received a net benefit from them. The problem here is that John's neighbours have overestimated the benefit the street cleaning gives him, wrongly believing that it was larger than eighty euros. This example shows that the principle of fairness must be handled with care. Granting groups of people the right to punish those who have failed to join their collective action might induce a large amount of unfair punishment. To cope with these issues, the present model needs to be modified to allow for the possibility that costs and benefits are private information. This is a promising line of research, which would probably lead to a more sophisticated (and restricted) version of the principle of fairness.
8. Conclusions The principle of fairness entails that individuals might have obligations that do not depend on their consent. Not surprisingly, it has been looked at with suspicion by philosophers of a libertarian persuasion, such as Robert Nozick. My defence of (an amended version of) the principle of fairness starts with a reformulation of Hart's and Rawls' idea that those who have participated in a collective action 'have a right to a similar acquiescence on the part of those who have benefited' from it. According to my definition, this means that there exists at least one conceivable action X which harms free-riders and is morally acceptable only in the light of the benefits brought about by the collective action itself. The rest of the paper demonstrates how to construct such an action X. The first step involves the construction of a model of voluntary contribution to a collective action. This model shows that when cooperators cannot punish free-riders, the resulting non-cooperative equilibrium is Pareto inefficient. Allowing cooperators to punish free-riders – this is the second result of the model – produces a Pareto improvement which is particularly strong: in the new equilibrium all individuals are better off than in the equilibrium in which the principle of fairness was not operating. It can also be shown that when punishment is sufficiently mild, individuals are also better off in the (counterfactual) case in which they decide to free-ride and
are therefore punished. The principle of fairness is thus demonstrably a logical consequence of the much less controversial Pareto principle.
Acknowledgments I would like to thank the participants in the conference 'Power: Conceptual, Formal, and Applied Dimensions', 17–20 August 2006, Hamburg, for their comments. The final version of this paper owes much to Manfred Holler's detailed comments on an earlier version, as well as to numerous suggestions made by two anonymous referees. The usual disclaimers apply.
References
Arneson, R.J. (1982) The Principle of Fairness and Free-Rider Problems, Ethics 92: 616–633.
Barrett, S. (1997) Towards a Theory of International Environmental Cooperation, in C. Carraro and D. Siniscalco (eds) New Directions in the Economic Theory of the Environment, Cambridge University Press, 294–335.
Barrett, S. (2005) Environment and Statecraft: The Strategy of Environmental Treaty-Making, Oxford University Press.
Carraro, C. (2003) Introduction: Global Governance and Global Public Goods, in C. Carraro (ed.) Governing the Global Environment, Edward Elgar.
Coase, R.H. (1960) The Problem of Social Cost, Journal of Law and Economics 3: 1–44.
Cullity, G. (1995) Moral Free Riding, Philosophy and Public Affairs 24: 3–34.
Dixit, A. and Olson, M. (2000) Does Voluntary Participation Undermine the Coase Theorem?, Journal of Public Economics 79: 309–335.
Dworkin, R. (1977) Taking Rights Seriously, Harvard University Press.
Dworkin, R. (1986) Law's Empire, Harvard University Press.
Hart, H.L.A. (1955) Are There Any Natural Rights?, Philosophical Review 64: 175–191.
Klosko, G. (1987a) The Principle of Fairness and Political Obligation, Ethics 97: 353–362.
Klosko, G. (1987b) Presumptive Benefits, Fairness, and Political Obligation, Philosophy and Public Affairs 16: 241–259.
Klosko, G. (1990) The Obligation to Contribute to Discretionary Public Goods, Political Studies 37: 196–214.
Locke, J. (1680–1690/1988) Two Treatises of Government, Peter Laslett (ed.), Cambridge University Press.
Nozick, R. (1974) Anarchy, State and Utopia, Basic Books.
Pizer, W.A. (2006) The Evolution of a Global Climate Change Agreement, American Economic Review (Papers and Proceedings) 96: 26–30.
Rawls, J. (1964) Legal Obligation and the Duty of Fair Play, in S. Hook (ed.) Law and Philosophy, New York University Press, 3–18.
Rawls, J. (1971) A Theory of Justice, Harvard University Press.
Simmons, A.J. (1979) Moral Principles and Political Obligations, Princeton University Press.
Simmons, A.J. (1987) The Anarchist Position: A Reply to Klosko and Senor, Philosophy and Public Affairs 16: 269–279.
Taylor, M. (1987) The Possibility of Cooperation, Cambridge University Press.
20. Power, Productivity, and Profits Frederick Guy School of Management and Organizational Psychology, Birkbeck College, University of London, UK
Peter Skott Department of Economics, University of Massachusetts, Amherst, USA
1. Introduction
A change in workplace technologies may affect the relative earnings of workers in at least two distinct ways. One is through the market for skill, the other through workers' power in relation to their employers. Increases in earnings inequality since the late 1970s in many industrial economies – and in particular, in liberal market economies like the US and UK – have been explained by many economists as a consequence of skill-biased technological change (SBTC). However, the evidence cited for SBTC can be read instead as evidence that new technologies affect the distribution of earnings not through supply and demand, but through changes in the relative power of different groups of employees. The reasons for these changes are detailed in Guy (2003), and the implications are analyzed more formally by Guy and Skott (2005) and Skott and Guy (2007). This paper explores the implications of power-biased technical change (PBTC) for the functional distribution of income.
Empirically, it is not just earnings inequality among workers that has increased. There have also been significant changes in the functional distribution of income. In the US real wages have stagnated or fallen for most workers, and the share of wages and salaries in GDP has fallen dramatically in both the US and many European countries (De Long 2005; Blanchard and Wolfers 2000). Over the same period the remuneration of top managers has skyrocketed, and the decline in wages has been associated with relatively weak productivity growth and an intensification of the work process (Piketty and Saez 2003; Green 2004).
Our use of the term 'power' differs slightly from that of Bowles and Gintis (1990), although our analytic framework is similar to theirs. In their usage, efficiency wage models like the one used here show the employer exercising power over the employee through the payment of an
employment rent, combined with the threat of dismissal (which is a threat to withdraw the rent). However – in their model, as in ours – this rent depends on the employee's ability to affect profitability by varying effort. We understand that ability of the employee as representing power as well. Thus, an employee has power in relation to the employer because of the employee's ability to affect outcomes that matter for the employer. All jobs entail some power, according to this use of the term: an investment banker makes investments which may make or lose millions for the bank, and a burger flipper can burn a few batches of burgers; the difference in degree is important, but in both cases there is an agency problem with which the employer must reckon. Among the factors which determine the employee's power are the extent of the assets or operations concerning which the employee makes decisions; the quality and timeliness of the employer's information about the employee's actions; and the quality and timeliness of the employer's information about the situation in which the employee acts (the 'state of nature').
Employers also have power over their employees. With incomplete contracting for employee actions, employers will typically want to pay a wage in excess of what their workers can expect if fired. Thus, the employer's ability to fire a worker (thereby reducing the worker's utility) is a source of power for the employer.
This paper considers technological change that affects the balance of power between workers and employers. A large Marx-inspired literature has analyzed how firms' choice of technique can be influenced by considerations of power. Important contributions include, among others, Marglin (1974) and Braverman (1974). Our paper is closely related, in particular, to Bowles (1989) and Green (1988). Quoting Marx's (1967: 436) statement that 'it would be possible to write quite a history of the inventions made since 1830, for the sole purpose of supplying capital with weapons against the revolts of the working class', Bowles goes on to describe how the pursuit of profit may lead capitalist firms to choose 'capitalist technologies' that are technically inefficient but enable firms to reduce wages and/or enforce an increase in the intensity of work. A similar argument is presented by Green (1988), and both papers contain some formal modeling to back up the argument. The modeling, however, is partial and is not carried very far. Thus, the main contribution of this paper is to reconsider and refine ideas that have been around in the Marxian literature for a long time and to relate these ideas to recent changes in information and communication technology (ICT).
The paper is structured in five sections. In Section 2 we describe and discuss some of the ways in which employee power – and thus the willingness of firms to pay – will be affected by changes in ICT. In Section 3 we set up a formal model and derive some comparative static results. In Section 4 we consider the adoption of new techniques and the transition between equilibria. We summarize the main conclusions in Section 5.
2. ICT and Monitoring
New ICT should not be seen as something that is simply plugged into organizations, with the organizations otherwise unchanged. The use of new ICT is often tied up with choices about larger changes in the organization of work. This paper is related to a large empirical literature on the relationships between ICT, the organization of work, and power (Drago 1996; Guy 2003; Hunter and Lafkas 2003; Ramirez et al. 2007; Sewell 1998). New technologies, and in particular ICTs, allow organizations to become flexible, flat, decentralized, and customer-oriented, and as a consequence to give employees increased discretion. But not every employee who uses new ICTs has been given a charter for increased decision making. ICTs facilitate increased flexibility in the coordination of activity by making it cheaper to gather information about, among other things, what employees do (monitoring), and to fine-tune the instructions given to employees. The industrial sociology and human resource management literatures abound with examples of large classes of employees, often in expanding parts of the economy such as wholesale and retail trade, financial services, hotels and restaurants, whose use of up-to-date ICTs is associated with more detailed instruction sets and closer monitoring. The process is complex, and it can be difficult, even ex post, to sort out what is the net change in discretion for any particular employee.
The use of the new technologies often entails, or is associated with, significant changes in the way organizations are managed and individual jobs are structured. These changes are not easy to characterize, because they take a number of different forms, and also because the rhetoric of organizational transformation is not always a good guide to reality.
As an extreme case, assume for the moment that although ICTs improve, the task the employee is asked to complete does not change. Improved monitoring may narrow the scope of action open to a worker in two ways. One is that the manager has a better idea of what the worker actually does. The other is that the manager has better information about the environment in which the worker works, the options she faces and the effect that different actions the worker might take would have on completion of the task. In other words, the manager has improved knowledge of both the worker's actions and the state of nature in which those actions take place. For instance, prior to the 1980s a truck driver's employer usually had only a vague idea of where the driver and the truck were. Now the location of the truck, and even the behavior of its engine, are often tracked by satellite. The driver's task may have changed little, but his scope for taking advantage of possible slack in his schedule is diminished, and the employer has new information with which to remove slack from the schedule over time.
Contrary to the assumption made above, tasks typically do change as part of the organizational transformations that go together with the introduction
of new ICTs. In many workplaces, for instance, workers who once had narrowly defined individual jobs now do all or part of their work in teams; a worker may be expected to do a number of different jobs within the team, and some teams are assigned problem-solving or decision-making responsibilities which were not previously within the remit of employees at their level. Such teamwork may enhance the scope of action open to a worker, both because of the broadening of tasks (e.g., 'problem solving') and because of what may be the greater difficulty of assigning individual accountability when actions are taken by teams.
Changes also occur in managerial work. The de-layering of organizations, and the competitive need for organizations to be flexible, give the remaining managers a greater range of decisions to make. On the other hand, managers get monitored, too. It is tempting, especially for those of us trained to recognize the beauty of markets as examples of spontaneous, un-regimented order, to see delegation, de-layering and decentralization as marketization, the sunset of central control. But within organizations, decentralization is typically facilitated by improved controls; the invention of the multi-divisional corporation in the 1920s, for instance, was made possible by improved cost accounting and 'management by numbers' (Chandler 1962).
Our formal model below disregards these complications. It assumes symmetry across workers and considers technical change that enhances the ability of managers to monitor the actions of the firm's workers. Implicitly, the categories of (top) manager and capitalist are merged in the model. Managers may get increased discretion, but it is assumed that they want to maximize profits and that there is no agency problem in the relation between capitalists and top managers.
3. The Model
We use a standard efficiency wage framework to analyze the effects of PBTC. To keep the analysis simple, we assume price-taking behavior in the product market and constant returns to scale. Labour is homogeneous and the production function is CES:

$$Y = A\big(\alpha K^{-\beta} + (1-\alpha)(eN)^{-\beta}\big)^{-1/\beta} \qquad (1)$$
where e and N denote effort and employment, and changes in the multiplicative constant A represent Hicks-neutral technical change. Leaving aside all issues of capital accumulation, we shall take K to be constant throughout this paper.
Workers' choice of effort is determined by the cost of job loss and by the sensitivity of the risk of job loss to variations in effort.¹ As a formal
specification, we assume that if a firm pays the wage w, the effort of its workers may be determined by the maximization of the objective function V,

$$V = p(e)\big[w - v(e) - h(\bar{w}, b, u)\big] \qquad (2)$$
where $\bar{w}$, u and b denote the average wage, the unemployment rate and the rate of unemployment benefits. Arguably, the choice of effort should be determined by an optimization problem that is explicitly intertemporal. As shown in Appendix A, however, a simple intertemporal optimization model reduces to a special case of problem (2). The function v(e) describes the disutility associated with effort, and we assume a log-linear functional form,

$$v(e) = e^{\eta}, \qquad \eta > 1. \qquad (3)$$
This specification is quite standard, the parameter restriction η > 1 implying that, given the chosen scale of effort, the disutility of effort is strictly convex.² The convexity assumption ensures that the firm's unit cost does not decrease monotonically as wages increase and that, therefore, an equilibrium solution for w exists. The function p(e) captures the effect of effort on the expected remaining duration of the job; since high effort reduces the risk of being fired, we have p′(e) > 0. The effect of technical change on firms' ability to monitor effort may be represented by a shift in the p-function. The key property of this shift is that it affects the sensitivity of the firing rate to variations in effort. An improvement in firms' ability to monitor the efforts of individual workers makes the expected job duration of any individual worker more sensitive to changes in the worker's own effort. This property of the p-function can be captured by assuming that the elasticity of p can be written:

$$\frac{e\,p'(e)}{p(e)} = \mu(e, \nu) \qquad (4)$$
where the parameter ν describes monitoring ability and $\mu_{\nu} > 0$. It should be noted that equation (4) says nothing about the average firing rate and, as explained in Appendix A, the average firing rate may be unaffected by a change in ν. Analytically, it is convenient to assume that the elasticity μ is independent of e,

$$\frac{e\,p'(e)}{p(e)} = \mu(e, \nu) = \nu. \qquad (5)$$
1 Most expositions of efficiency wage models emphasize the former effect, with the risk of job loss and its dependence on effort taken as exogenous; exceptions include Bowles (1985, 1988), Gintis and Ishikawa (1987), and several subsequent joint papers by Bowles and Gintis.
2 Effort is ordinal and the convexity assumption is conditional on the chosen scale. This scale is determined implicitly by the specification of the production function (Katzner and Skott 2004).
This specification of the elasticity can be seen as a log-linear approximation of the p-function around the equilibrium solution for e.³ Using (2)–(3) and (5), the first-order condition for the worker's maximization problem can be written:

$$e = \left[\frac{\nu}{\nu + \eta}(w - h)\right]^{1/\eta} \qquad (6)$$
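A quick numerical sanity check on (6) – ours, with hypothetical parameter values – confirms that this expression maximizes the worker's objective when p(e) has the log-linear form $Be^{\nu}$ of footnote 3 (the constant B does not affect the maximizer):

```python
# Verifies that e = [nu/(nu + eta) * (w - h)]**(1/eta) maximizes
# V = e**nu * (w - e**eta - h), the objective implied by the log-linear
# p-function of footnote 3. The parameter values are hypothetical.
import numpy as np

nu, eta, w, h = 0.1, 5.0, 4.91, 3.93
e_foc = (nu * (w - h) / (nu + eta)) ** (1 / eta)   # equation (6)

grid = np.linspace(0.01, 1.0, 100_000)
V = grid**nu * (w - grid**eta - h)
print(e_foc, grid[np.argmax(V)])   # both approximately 0.45
```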
The wage is set by the firm. The standard first-order conditions imply that:

$$\frac{\partial e}{\partial w}\,\frac{w}{e} = 1$$

or, using (6),

$$\frac{1}{\eta}\,\frac{w}{w - h} = 1.$$

The solution for the wage can now be written:

$$w = \frac{\eta}{\eta - 1}\,h. \qquad (7)$$
The function $h(\bar{w}, b, u)$, finally, represents the fallback position, that is, the expected utility in case of job loss; the partial derivatives satisfy $h_{\bar{w}} > 0$, $h_b > 0$ and $h_u < 0$ under all standard assumptions. We use the specific functional form obtained from the optimization model in Appendix A:

$$h = \frac{\delta(1-u)}{ru + \delta}\big(\bar{w} - v(\bar{e})\big) + \frac{(r+\delta)u}{ru + \delta}\,b \qquad (8)$$
where $\bar{e}$ is determined by setting $e = \bar{e}$ and $w = \bar{w}$ in equation (6); r and δ are the discount rate and the rate of job separations, respectively. Intuitively, the fallback position is a weighted average of the utility when unemployed (b) and in an alternative job ($\bar{w} - v(\bar{e})$). The weights depend on u since (in a steady state) the unemployment rate is equal to the proportion of time one can expect to be unemployed; if there is no discounting (r = 0) the weights are simply u and 1 − u, but when r > 0, unemployment (the initial state in case of job loss) is weighted more heavily. Turning to the demand for labor, the first-order condition for profit maximization implies that the wage satisfies the equation:

3 Integration of (5) implies that $p(e) = Be^{\nu}$ where B is an arbitrary constant. The intertemporal interpretation in Appendix A of workers' maximisation problem implies that p(e) is bounded, unlike the above expression. Thus, the approximation will be bad for 'large' values of e. It may be good, however, for effort levels in the relevant range, and all our simulations below yield modest variations in effort.
$$w = (1-\alpha)A\big(\alpha K^{-\beta} + (1-\alpha)(eN)^{-\beta}\big)^{-(1+\beta)/\beta}\, e^{-\beta} N^{-(1+\beta)} = (1-\alpha)Ae\left[(1-\alpha) + \alpha\left(\frac{K}{eN}\right)^{-\beta}\right]^{-(1+\beta)/\beta} \qquad (9)$$
In equilibrium, finally, $w = \bar{w}$ and $e = \bar{e}$, and using the definitional relation between unemployment u and employment N, equations (6)–(9) yield equilibrium solutions for the endogenous variables (w, e, N, h).⁴
We now introduce a decline in the power of workers (a rise in ν). Intuitively, this rise puts upward pressure on e (equation (6)) and thus, for a given value of N, on the effective labor input eN. For a given input ratio eN/K, a rise in e will increase the wage w (equation (9)), but w is affected negatively if the upward pressure on eN generates a rise in the input ratio eN/K (equation (9)). This negative effect is stronger the larger is β, that is, the lower the elasticity of substitution. Strong complementarity between the inputs also implies that any rise in e tends to affect N negatively. Thus, the elasticity of substitution plays a critical role for the effects of a change in relative power.
An example is given in Table 1.⁵ The rise in ν leads to an improvement in both wages and employment if A is unchanged. Given the assumptions in Appendix A, the welfare of unemployed and employed workers can be measured by h ($= rU$) and $j = \frac{r}{r+\delta}(w - e^{\eta}) + \frac{\delta}{r+\delta}h$ ($= rV$), respectively, and welfare also improves.⁶ This result may seem counter-intuitive at first sight, but the explanation is straightforward. Agency problems lead to outcomes that are Pareto suboptimal, and the increased ability of firms to monitor effort reduces the agency problem. Taking into account the derived effects on employment and wages, workers may therefore in some cases benefit from a decline in their workplace power.
The interesting aspect of Table 1, however, is that when the rise in ν from 0.1 to 0.5 is combined with a very substantial loss of technical efficiency (a 15 percent fall in A from 10 to 8.5), profits π still increase significantly while workers suffer a large reduction in wages and welfare. The negative effect on profits of lower technical efficiency is more than compensated for by the decrease in workers' power and the associated changes in effort and wages.
Table 1 assumes that the elasticity of substitution in the production function is 0.5. Complementarity may be the relevant case from an empirical perspective, but it should be noted that complementarity is critical for the conclusion. Assuming profit maximization and perfect competition in the product market, for instance, a Cobb-Douglas specification implies that
4 With inelastic labor supplies (normalized at unity), we have u = 1 − N.
5 The table uses β = 1. The other parameters are α = 0.5, η = 5, r = 0.05, δ = 0.2, K = 1, and b = 1.
6 The values of h are proportional to $\bar{w}$ (cf. equation (7)). A separate h column is included to facilitate a comparison between the welfare measures for employed and unemployed workers.
Table 1. Equilibrium effects of changes in monitoring

  A      ν     e      w      u      h      j      π
  10    0.1   0.45   4.91   0.21   3.93   4.12   1.40
  10    0.5   0.63   5.53   0.19   4.42   4.62   2.29
  8.5   0.5   0.61   4.68   0.20   3.75   3.91   1.83
profits are a constant share of output, and it follows that if technological change (a shift in A and/or ν) generates an increase in profits, then aggregate wages must also go up.
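The equilibria in Table 1 can be reproduced by solving the fixed point of equations (6)–(9) numerically. The following sketch is ours, not the authors': it uses the reconstructed notation, the parameter values of footnote 5, and SciPy's generic root finder.

```python
# Minimal sketch of the equilibrium computation behind Table 1, under the
# reconstructed notation (alpha, beta, eta, nu, delta as in the text above).
# Not the authors' code; any errors in translation are ours.
from scipy.optimize import fsolve

alpha, beta, eta, r, delta, K, b = 0.5, 1.0, 5.0, 0.05, 0.2, 1.0, 1.0

def equilibrium(A, nu):
    def system(x):
        h, N = x
        u = 1.0 - N                                     # fn 4: u = 1 - N
        w = eta / (eta - 1.0) * h                       # wage setting, eq. (7)
        e = (nu * (w - h) / (nu + eta)) ** (1.0 / eta)  # effort, eq. (6)
        w_dem = (1 - alpha) * A * e * ((1 - alpha)
                 + alpha * (K / (e * N)) ** (-beta)) ** (-(1 + beta) / beta)
        h_new = ((r + delta) * u * b
                 + delta * (1 - u) * (w - e ** eta)) / (r * u + delta)
        return [w - w_dem, h - h_new]                   # eqs (9) and (8)

    h, N = fsolve(system, [3.0, 0.8])
    u, w = 1.0 - N, eta / (eta - 1.0) * h
    e = (nu * (w - h) / (nu + eta)) ** (1.0 / eta)
    Y = A * (alpha * K ** (-beta) + (1 - alpha) * (e * N) ** (-beta)) ** (-1.0 / beta)
    j = (r * (w - e ** eta) + delta * h) / (r + delta)  # employed welfare, rV
    return e, w, u, h, j, Y - w * N                     # last entry: profits

for A, nu in [(10.0, 0.1), (10.0, 0.5), (8.5, 0.5)]:
    print(tuple(round(val, 2) for val in equilibrium(A, nu)))
# Each line matches the corresponding row of Table 1 up to rounding; the first
# gives approximately (0.45, 4.91, 0.21, 3.93, 4.12, 1.40).
```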
4. Transition
The previous section looked at the comparative statics of a change in technique. The comparison of different equilibrium positions can be misleading, however. The configurations underlying Table 1 imply not only that equilibrium profits increase following the change in technique but also that the individual firm has an incentive to adopt the new technique (see Table 3a below). It is easy, however, to find examples of techniques that would yield an increase in profits if all firms were to adopt them, even though no single firm has an incentive to do so. Conversely, individual firms may have an incentive to introduce a new technique even if the equilibrium profits when all firms introduce it are lower than they would have been had all firms kept the old technique.
Consider the decision problem of a single firm. The firm's profits depend on the technical parameters A and ν as well as on its choice of wage and employment,

$$\pi = A\big(\alpha K^{-\beta} + (1-\alpha)(eN)^{-\beta}\big)^{-1/\beta} - wN = Y(A, K, e, N) - wN \equiv \pi(A, K, e, w, N) \qquad (10)$$
where

$$e = \left[\frac{\nu}{\nu + \eta}(w - h)\right]^{1/\eta} \equiv e(w, h, \nu). \qquad (11)$$
For any given technique and capital stock (that is, A, ν, K) the firm chooses w and N so that the first-order conditions are satisfied:

$$\frac{\partial \pi}{\partial w} = \pi_e e_w + \pi_w = 0 \qquad (12)$$
$$\frac{\partial \pi}{\partial N} = 0. \qquad (13)$$
4.1 Marginal Changes
Now assume that a new technique offers a (marginal) change in both A and ν. Workers' fallback position is independent of the firm's own choices, so h is constant, and the effect on the firm's optimal profits is given by

$$\begin{aligned} d\pi^{partial} &= \pi_A\, dA + \pi_e (e_w\, dw + e_\nu\, d\nu) + \pi_w\, dw + \pi_N\, dN \\ &= \pi_A\, dA + \pi_e e_\nu\, d\nu = Y_A\, dA + Y_e e_\nu\, d\nu \\ &= Y\left[\frac{Y_A A}{Y}\, d\log A + \frac{Y_e e}{Y}\,\frac{e_\nu \nu}{e}\, d\log \nu\right] \\ &= Y\left[d\log A + \frac{\phi_L}{\nu + \eta}\, d\log \nu\right] \end{aligned} \qquad (14)$$
where the second equality comes from using the first-order conditions (12)–(13) (or, equivalently, from the envelope theorem); the third equality comes from the definition of profits in (10); $\phi_L$ is the share of wages in income, and $\phi_L = Y_N N / Y = Y_e e / Y$ follows from profit maximization. Equation (14) implies that the firm has an incentive to introduce the new technique if

$$d\log \nu \geq -\frac{\nu + \eta}{\phi_L}\, d\log A. \qquad (15)$$
The incentive depends on the wage share, and the wage share, in turn, depends on the fallback position h. A rise in h implies an increase in w/e and a decline in eN/K (equations (6)–(7)), and if β > 0 the decline in eN/K generates an increase in the wage share; if β < 0, the wage share falls. Thus, although the single firm treats h as a constant, the incentives that it faces depend on h. As long as the changes in technique are purely marginal, this dependence of the incentive condition on the level of h does not matter: if the change of technique is marginal, the associated change in h will also be marginal, and the level of $\phi_L$ can be taken as given. Thus, it follows from (15) that either all firms will adopt the new technique or no firm will do so. When we consider non-marginal changes in technique in the next subsection, however, the incentives will depend on whether other firms have chosen to introduce the new technique. Even with marginal changes in technique, the induced marginal effects on h need to be taken into account in order to calculate the equilibrium effect of changes in technique on profits,
$$\begin{aligned} d\pi^{eq} &= \pi_A\, dA + \pi_e (e_w\, dw + e_\nu\, d\nu) + \pi_w\, dw + \pi_N\, dN + \pi_e e_h\, dh \\ &= d\pi^{partial} + \pi_e e_h\, dh \\ &= Y\left[d\log A + \frac{\phi_L}{\nu + \eta}\, d\log \nu\right] + \pi_e e_h\, dh. \end{aligned} \qquad (16)$$
Since $\pi_e > 0$ and $e_h < 0$ (cf. equation (11)), the sign of the last term is the opposite of that of the change in the fallback position, dh. If dh is positive, then the single firm's change of technique produces a negative externality on the profits of all other firms; if dh is negative, the externality is positive. The existence of this externality lies behind the possibility that firms' individually rational decisions may lead them to adopt a technique that reduces the equilibrium level of profits. We illustrate this possibility in the next subsection, which also generalizes the setting by allowing for non-marginal changes in technique.
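As a rough illustration (ours, not the authors'), condition (15) can be evaluated for the non-marginal change of technique used in Tables 1 and 3a, treating the baseline wage share implied by Table 1 (about 0.74) as given; since (15) is derived for marginal changes, this is only a back-of-the-envelope check.

```python
# Rough check of incentive condition (15) for the change of technique in
# Tables 1 and 3a: d log(nu) >= -((nu + eta)/phi_L) * d log(A). The wage
# share phi_L ~ 0.74 is taken from the baseline equilibrium of Table 1.
import math

nu, eta, phi_L = 0.1, 5.0, 0.74
dlogA = math.log(8.5 / 10.0)    # 15 percent fall in the productivity parameter
dlognu = math.log(0.5 / 0.1)    # rise in the monitoring parameter

threshold = -(nu + eta) / phi_L * dlogA
print(dlognu, threshold, dlognu >= threshold)
# 1.61 > 1.12: the individual firm gains from adopting the new technique,
# consistent with Table 3a.
```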
4.2 Non-Marginal Changes
With non-marginal technical changes, the fulfillment of the incentive condition (15) for a single firm may depend on the proportion of firms that have introduced the new technique. Assume that workers cannot move directly from one job to another and that equilibrium firing rates are the same in all jobs (this is consistent with differences in monitoring, cf. the argument in Appendix A), and let x denote the proportion of employed workers in firms that use the new technique. With these assumptions, the fallback position h is given by (see Appendix B):

$$h(x) = \frac{\delta(1-u)}{ru+\delta}\Big[x\big(w^n(x) - v(e^n(x))\big) + (1-x)\big(w^o(x) - v(e^o(x))\big)\Big] + \frac{(r+\delta)u}{ru+\delta}\,b. \qquad (17)$$
Equations (6), (7) and (9) still hold; the only difference is that for each of them there will now be two separate equations, one for the old and one for the new technique:

$$e^o(x) = \left[\frac{\nu^o}{\nu^o + \eta}\big(w^o(x) - h(x)\big)\right]^{1/\eta} \qquad (18)$$

$$e^n(x) = \left[\frac{\nu^n}{\nu^n + \eta}\big(w^n(x) - h(x)\big)\right]^{1/\eta} \qquad (19)$$

$$w^o(x) = \frac{\eta}{\eta - 1}\, h(x) \qquad (20)$$

$$w^n(x) = \frac{\eta}{\eta - 1}\, h(x) \qquad (21)$$

$$w^o(x) = 0.5^{-1/\beta} A^o e^o(x)\left[1 + \left(\frac{(1-z)K}{e^o(x)\, N^o}\right)^{-\beta}\right]^{-(1+\beta)/\beta} \qquad (22)$$

$$w^n(x) = 0.5^{-1/\beta} A^n e^n(x)\left[1 + \left(\frac{zK}{e^n(x)\, N^n}\right)^{-\beta}\right]^{-(1+\beta)/\beta} \qquad (23)$$

where z is the proportion of firms that have adopted the new technique; the proportion of firms (z) and the proportion of employment (x) will differ if, as will generally be the case, the new technique leads to a change in the capital–labour ratio. By definition, finally, we have:

$$N^o = (1-x)(1-u) \qquad (24)$$

$$N^n = x(1-u). \qquad (25)$$
For any given value of x, equations (17)–(25) can be solved for the nine variables $w^o, w^n, e^o, e^n, N^o, N^n, h, u, z$. We get four qualitatively different possible outcomes for the equilibrium value x*: (i) x* = 0, (ii) x* = 1, (iii) 0 < x* < 1, and (iv) multiple solutions with x* = 0 or x* = 1. These possibilities, as well as the externalities described above in the case of marginal changes, can be illustrated by the examples in Tables 2–5. All tables assume that a new technique has become available and that this technique offers better monitoring (Δν > 0) but a lower productivity parameter (ΔA < 0). The tables differ with respect to the precise values of the changes in ν and A as well as the values of other parameters.⁷
Tables 2a–2b show examples in which no firm has an incentive to introduce the new technique with improved monitoring, that is, x* = 0. In 2a the equilibrium profits would decline with the introduction of the new technique, while in 2b a positive externality implies that even though there is no individual incentive to introduce the technique, the new technique would in fact have generated an increase in aggregate profits. The key difference between the two scenarios is the elasticity of substitution in production. Increased monitoring increases effort at any given wage rate and thus tends to raise the ratio of effective labor (eN) to capital and/or reduce the employment–capital ratio (N/K). The effects of these changes on wages and employment depend on the elasticity of substitution. When there is complementarity (β > 0), an increase in the effective labor–capital ratio reduces the wage share (and thus also wages and/or employment), with a detrimental
7 All tables use α = 0.5, A_old = 10, ν_old = 0.1, r = 0.05, δ = 0.2, K = 1, b = 1.
Table 2a. Neither micro incentive nor equilibrium increase in profits (but workers would have benefited)
β = −0.5, η = 5; A_old = 10, ν_old = 0.1; A_new = 8.5, ν_new = 0.5

             π_old   π_new    u      w      h     j_old   j_new
 x = 0.001   3.88    3.60    0.26   2.87   2.30   2.41    2.40
 x = 0.999   3.80    3.51    0.24   3.01   2.41   2.524   2.516
Table 2b. No micro incentive but equilibrium increase in profits (but workers lose utility)
β = 1, η = 5; A_old = 10, ν_old = 0.1; A_new = 7.5, ν_new = 0.5

             π_old   π_new     u       w      h     j_old   j_new
 x = 0.001   1.40    1.10    0.208   4.91   3.92   4.12    4.11
 x = 0.999   1.97    1.54    0.207   4.12   2.30   2.41    2.40
Table 3a. Both micro incentive and equilibrium increase in profits (but workers lose utility)
β = 1, η = 5; A_old = 10, ν_old = 0.1; A_new = 8.5, ν_new = 0.5

             π_old   π_new    u      w      h     j_old   j_new
 x = 0.001   1.40    1.69    0.21   4.91   3.93   4.12    4.11
 x = 0.999   1.55    1.83    0.20   4.68   3.74   3.93    3.92
Table 3b. Micro incentive but equilibrium profits decrease (but workers gain utility)
β = −0.5, η = 5; A_old = 10, ν_old = 0.1; A_new = 8.5, ν_new = 0.5

             π_old   π_new    u      w      h     j_old   j_new
 x = 0.001   3.88    3.98    0.26   2.87   2.30   2.41    2.40
 x = 0.999   3.71    3.74    0.23   3.20   2.56   2.69    2.68
effect on workers' fallback position; when the inputs are substitutes (β < 0), the rise in the effective labor–capital ratio raises the share of wages, which tends to raise the fallback position. A deterioration in workers' fallback position in turn represents a positive profit externality, while an improvement in the fallback position represents a negative profit externality.
In Tables 3a–3b firms introduce the new technique; in 3a equilibrium profits go up, while in 3b the negative externality leads to a fall in profits. The scenario in Table 3b illustrates a falling rate of profit. This violation of the Okishio theorem (Okishio 1961) is possible because – unlike the Okishio theorem – our analysis includes endogenous changes in the unit wage cost via the efficiency wage mechanism (other ways of
Table 4. Interior Solution
β = −0.5, η = 5; A_old = 10, ν_old = 0.1; A_new = 8.5, ν_new = 0.5

             π_old   π_new    u      w      h     j_old   j_new
 x = 0.001   3.88    3.90    0.26   2.87   2.30   2.41    2.40
 x = 0.410   3.81    3.81    0.25   2.98   2.39   2.50    2.49
 x = 0.999   3.72    3.69    0.23   3.16   2.53   2.65    2.64
Table 5. Multiple Solutions
β = −0.1, η = 1.5; A_old = 10, ν_old = 0.1; A_new = 4.3, ν_new = 20

           π_old   π_new    u      w      h     j_old   j_new
 x = 0.1   1.88    1.87    0.74   5.81   1.93   2.66    1.99
 x = 0.9   2.07    2.11    0.49   4.04   1.35   1.85    1.39
introducing wage changes into the analysis of the Marxian law of the falling rate of profit have been analyzed by, inter alia, Foley (1986) and Skott (1991)).
Tables 4 and 5, finally, show the possibility of an interior solution and of multiple equilibria, respectively. The interior solution in Table 4 is based on an assumption of good substitutability (β < 0), but interior solutions can be obtained both in the case of complementarity (β > 0) and in the case of substitutability (β < 0). The reason for this is simple. When there is complementarity, an increase in the proportion of workers and firms using the new technique will tend to reduce workers' fallback position, and when β > 0 this reduction in h generates a decline in workers' share of income; that is, the incentive condition becomes more restrictive as x increases. When β < 0 an increase in x may (but need not) increase h, but with β < 0 the wage share is negatively related to h, and a decline in the wage share is obtained in this case too.
Multiple solutions are harder to get. In order to get multiple solutions it is assumed in Table 5 that the new technique provides a very dramatic increase in monitoring (ν_new = 20) and that the elasticity of worker disutility with respect to effort is low (η = 1.5). As a result, the value of a job using the old technique is much higher than that associated with a job using the new technique ($j^{old} \gg j^{new}$). Even though an increase in the proportion of new jobs generates an increase in employment, the employment effect is kept relatively small by using a negative but numerically small value of β. The welfare effects of the changing composition of jobs therefore dominate, and workers suffer a net welfare loss as the proportion of jobs using the new technique goes up. Since good substitutability is assumed (β < 0), the decline in h generates an increase in the wage share, and the incentive condition (15) is relaxed as x increases.
5. Conclusions
ICT has a monitoring function, but the adoption of new ICT has often facilitated organizational changes which go far beyond changes in monitoring. The telegraph, the telephone, and a host of other technologies made it feasible to coordinate elaborate planned divisions of labor involving hundreds of thousands of employees in big corporations, and tens of millions in the planned economy of the Soviet Union. The rigid bureaucratic structures for which the mid-twentieth century was known were a reflection of the ICTs of the day. Microprocessors and other, more recent, developments in ICT have made possible more flexible systems. Whether with regard to sweeping organizational changes such as these, or narrower changes in particular jobs or functions, the adoption of new ICT brings changes in the organization of work as well.
In this paper we have considered the implications of changes that give (top) managers increased discretion and allow them to monitor the actions of employees more closely. Changes of this kind may increase the discretion of managers while constraining the ability of (many or most) employees to make consequential choices for the organization. Discretion and its inverse, constraint, develop hand in hand. Our analysis shows that if the same information technologies that allow managers to prepare more accurate plans and to correct errors more quickly also enable them to monitor workers more closely, then we may see a polarization of incomes, with workers losing out and increases in profits and managerial incomes.
One limitation of the model is the aggregation of profits and the income of top managers. With the explosion in managerial remuneration, the split between these two categories has changed significantly, and by aggregating the two we leave out agency problems in the relation between managers and owners. It is unclear how this may bias our analysis of the choice of technique, but since the choice of technique is made by managers, the alternative assumption that only narrow profits (which exclude managerial remuneration) influence the choice of technique would probably represent a more serious distortion than our hybrid assumption.
The assumption of complete symmetry across all (non-managerial) workers represents another gross simplification. In fact, power-biased technical change may have been a significant factor not just behind the increase in the share of broad profits but also behind increased inequality among workers (Guy and Skott 2005; Skott and Guy 2006).
Other obvious limitations of the model concern (i) the absence of any consideration of capital accumulation when, in fact, it is hard to conceive of technical change without investment; (ii) the focus on steady states and the assumption that workers and firms have full knowledge of the various parameters underlying workers' choice of effort; and (iii) the assumption that the choice takes place
over some exogenously given techniques rather than allowing for decisions over how to allocate R&D resources and where to search for new techniques. These simplifications clearly make the analysis much more tractable, and relaxing the assumptions would not, we believe, invalidate the fundamental mechanism that is the focus of this paper: the pursuit of profits by private enterprise affects the choice of technique,8 and technically inefficient production methods may be chosen if they enable firms to squeeze workers. But the analysis also shows that micro incentives and class interests do not always go together: the profit incentive will not invariably lead firms to choose the technique that gives the highest equilibrium level of profits. Technical change typically produces losers as well as winners, even in a Walrasian world without agency problems. In the absence of agency problems, however, there is a presumption that, in principle, the winners could compensate the losers, leaving a net gain. There is no basis for the presumption of welfare improvements in the case of power-biased technical change. A new technique can be profitable and may be adopted even if it is less efficient than existing techniques. One final comment may be called for. We have analyzed the PBTC hypothesis using a traditional efficiency wage model as it applies to individual wage determination. This model, arguably, provides a good approximation of wage setting in the US, UK, and other liberal market economies (using this term in the sense employed by Hall and Soskice (2001)). The model may be less appropriate, however, for countries in which wage bargains are more likely to be collective. Unions, moreover, influence working conditions as well as wages. Thus, there is evidence that the presence of strong unions reduces the impact of the cost of job loss on effort (Green and McIntosh 1998), and among European countries there is a correlation between loss of union power and the rate of work intensification (Green and McIntosh 2001). In the – admittedly extreme – case in which effort levels are set and controlled by unions there are no agency problems between firms and workers: from a single firm's perspective, effort is exogenously given. Having effort levels exogenously determined, however, does not block technological progress; it merely weeds out those changes of technique that are profitable only because of work intensification. But if technical progress has been mainly of a power-biased and technically inefficient kind since the late 1970s, then one would expect differences between liberal market economies and economies in which collective bargaining over wages plays an important role: over this period the liberal market economies would be experiencing faster measured productivity growth but also stronger
tendencies toward work intensification and income inequality. This prediction appears to be consistent with the evidence from the last 30 years.
8 Indeed, this conclusion would be strengthened, we believe, by including the endogenous choice of the direction of technical change.
Appendices
Appendix A: Intertemporal Optimization
Consider an infinitely lived agent with instantaneous utility function: u(c, e) = c - v(e).
Assume that the interest rate r is equal to the discount rate. The time profile of consumption is then a matter of indifference to the agent, and we may assume that consumption matches current income. If U denotes the value function of an unemployed worker, a worker who is currently employed at a wage w faces an optimization problem that can be written:

\max \; E\left[\int_0^T (w - v(e))\, e^{-rt}\, dt + e^{-rT} U\right]
where the stochastic variable T denotes the time that the worker loses the job. Assuming a constant hazard rate, T is exponentially distributed. In a steady state the objective function can be rewritten:

E\left[\int_0^T (w - v(e))\, e^{-rt}\, dt + e^{-rT} U\right]
= E\left[\int_0^T (w - v(e) - h)\, e^{-rt}\, dt\right] + U
= E\left[\frac{w - v(e) - h}{r}\,\big(1 - e^{-rT}\big)\right] + U
= (w - v(e) - h)\, p + U
where h = rU and

p \equiv E\left[\frac{1 - e^{-rT}}{r}\right] = \frac{1}{r}\left(1 - \frac{E}{r + E}\right) = \frac{1}{r + E}

is a decreasing function of the rate of separations E. Effort affects the firing probability and thus the rate of separations, so the worker's first order condition can be written:

-v'(e)\, p(e) + (w - v(e) - h)\, p'(e) = 0.

The value function for an unemployed worker will depend on the average level of wages, the rate of unemployment benefits and the hiring rate. With a constant rate of unemployment, the hiring rate q is proportional to the average rate of separations
q = E\,\frac{L}{N - L} = E\,\frac{1 - u}{u}
where u is the unemployment rate and E is the average rate of separations. The risk of job loss gives an incentive for workers to provide effort. But an increased average firing rate does not help the firm unless it raises effort (on the contrary, high labor turnover is usually costly). Since effort is determined by the semi-elasticity p'/p (see the first order condition) it follows that the average firing rate in the economy need not be related to the average level of effort and, secondly, that an improved ability to detect individual effort – a rise in p'/p – may change the average (standard) effort but need not be associated with any changes in the firing rate for workers that meet this changed standard. Thus, it is reasonable to assume that E is constant. But since average effort is itself determined by \bar{w}, b and u, whether or not E depends on e, we have h = h(\bar{w}, b, u). In equilibrium, w = \bar{w} and in order to find the value of h we note that:

V - U = (\bar{w} - h - v(\bar{e}))\, p \qquad (A1)

U - V = (b - rV)\, s = \Big\{ b - r\Big[(\bar{w} - h - v(\bar{e}))\, p + \frac{h}{r}\Big] \Big\}\, s \qquad (A2)
where s \equiv E\left[\frac{1 - e^{-rT_u}}{r}\right] and the stochastic variable T_u denotes the remaining length of the spell of unemployment of a currently unemployed worker. With a constant rate of separations, random hiring and constant unemployment, the stochastic variable T_u follows an exponential distribution with expected value E[T_u] = \frac{u}{1-u}\, E[T], where E[T] = 1/E is the average expected remaining duration of employment for an employed worker. Using (A1)–(A2) and the expressions for p and s (p = 1/(r + E), s = 1/(r + E(1-u)/u)):

h = (\bar{w} - v(\bar{e}))\,\frac{p - rps}{s + p - rps} + b\,\frac{s}{s + p - rps}
  = (\bar{w} - v(\bar{e}))\,\frac{E(1-u)}{ru + E} + b\,\frac{(r + E)u}{ru + E}.
Thus, the fallback position is a weighted average of the utility flows while employed and unemployed with the weights depending on the rate of unemployment.
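A minimal numerical sketch of this appendix may help. All parameter values and the functional forms for v(e) and the individual firing rate below are assumptions made for illustration only – the chapter does not specify them here. The sketch computes p and s, evaluates the closed-form fallback position, and iterates the worker's first order condition until effort and h are mutually consistent.

import numpy as np
from scipy.optimize import brentq

# Illustrative parameters -- assumptions, not values taken from the chapter.
r, E, u, b, wbar = 0.05, 0.10, 0.08, 0.40, 1.00   # E: rate of separations

# Present-value factors derived above: p = 1/(r+E), s = 1/(r + E(1-u)/u).
p = 1.0 / (r + E)
s = 1.0 / (r + E * (1.0 - u) / u)

# Assumed functional forms (hypothetical): v(e) = e^H, and an individual
# firing rate E(e) that falls with effort, so p(e) = 1/(r + E(e)) rises in e.
H = 1.5
v  = lambda e: e ** H
vp = lambda e: H * e ** (H - 1.0)                    # v'(e)
E0, m = 0.20, 1.0
E_of  = lambda e: E0 * np.exp(-m * e)
p_of  = lambda e: 1.0 / (r + E_of(e))
pp_of = lambda e: m * E_of(e) / (r + E_of(e)) ** 2   # p'(e) > 0

def fallback(e_bar):
    # Closed-form h from above; the two weights sum to one.
    w1 = E * (1.0 - u) / (r * u + E)     # weight on the employment flow
    w2 = (r + E) * u / (r * u + E)       # weight on the benefit flow
    return (wbar - v(e_bar)) * w1 + b * w2

# First order condition: v'(e) p(e) = (w - v(e) - h) p'(e).
def foc(e, w, h):
    return (w - v(e) - h) * pp_of(e) - vp(e) * p_of(e)

# Iterate effort and the fallback position to mutual consistency.
e = 0.5
for _ in range(50):
    h = fallback(e)
    e = brentq(foc, 1e-9, 5.0, args=(wbar, h))
print(f"effort = {e:.4f}, fallback position h = {h:.4f}")

The fixed-point loop converges because a higher fallback position lowers the employed worker's surplus and hence the effort that solves the first order condition, while lower effort in turn raises h only through the (bounded) utility flow while employed.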
Appendix B
Let x be the proportion of employed workers in firms that use the new technique, and assume (i) that workers cannot move directly from one job to another and that (ii) equilibrium firing rates are the same in all jobs (this is consistent with differences in monitoring, cf. the argument in Appendix
A). Proceeding along the same lines as in Appendix A, the value functions for workers in firms using the old and the new technique are then given, respectively, by:

V^o(x) = \big(w^o(x) - h(x) - v(e^o(x))\big)\, p + U(x)
V^n(x) = \big(w^n(x) - h(x) - v(e^n(x))\big)\, p + U(x)

where p = \frac{1}{r + E}.
In a steady state the value function for an unemployed worker is:

U(x) = E\left[\int_0^{T_u} b\, e^{-rt}\, dt + e^{-rT_u}\, \bar{V}(x)\right]
     = E\left[\frac{b - r\bar{V}(x)}{r}\,\big(1 - e^{-rT_u}\big)\right] + \bar{V}(x)
     = \big(b - r\bar{V}(x)\big)\, s + \bar{V}(x)

where

\bar{V}(x) = x V^n(x) + (1 - x) V^o(x) = \big[x\big(w^n(x) - v(e^n(x))\big) + (1 - x)\big(w^o(x) - v(e^o(x))\big)\big]\, p - h(x)\, p + U(x)

and

s \equiv E\left[\frac{1 - e^{-rT_u}}{r}\right] = \frac{1}{r + E(1-u)/u}.
Hence,

h(x) = \big[x\big(w^n(x) - v(e^n(x))\big) + (1 - x)\big(w^o(x) - v(e^o(x))\big)\big]\,\frac{E(1-u)}{ru + E} + b\,\frac{(r + E)u}{ru + E}.
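As a quick illustration of this last formula, the fragment below evaluates h(x) on a grid, holding the net utility flows w^i - v(e^i) of the two job types constant in x (in the model they vary with x through wages and effort); the numbers are placeholders, not the values behind Tables 4 and 5.

import numpy as np

# Invented illustrative inputs (not the chapter's values).
r, E, u, b = 0.05, 0.10, 0.08, 0.40
flow_old = 0.95   # w^o - v(e^o): net flow in a job using the old technique
flow_new = 0.80   # w^n - v(e^n): lower net flow with the new technique

w1 = E * (1.0 - u) / (r * u + E)   # weight on the employment flow
w2 = (r + E) * u / (r * u + E)     # weight on the benefit flow b

for x in np.linspace(0.0, 1.0, 5):
    avg_flow = x * flow_new + (1.0 - x) * flow_old
    print(f"x = {x:.2f}  ->  h(x) = {avg_flow * w1 + b * w2:.4f}")

When the new technique carries the lower net flow, h(x) declines as x rises – the channel through which the changing job composition feeds back into the incentive condition discussed in the text.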
References
Blanchard, O. and Wolfers, J. (2000) The Role of Shocks and Institutions in the Rise of European Unemployment: The Aggregate Evidence, Economic Journal 110: C1–33.
Bowles, S. (1985) The Production Process in a Competitive Economy: Walrasian, Neo-Hobbesian, and Marxian Models, American Economic Review 75: 16–36.
Bowles, S. (1989) Social Institutions and Technical Choice, in M. DeMatteo, A. Vercelli, and R. Goodwin (eds) Technological and Social Factors in Long Term Economic Fluctuations, Springer, 67–68.
Bowles, S. and Gintis, H. (1990) Contested Exchange: New Microfoundations for the Political Economy of Capitalism, Politics and Society 18: 165–222.
Braverman, H. (1974) Labor and Monopoly Capital, Monthly Review Press.
Chandler, A.D. Jr. (1962) Strategy and Structure: Chapters in the History of the American Industrial Enterprise, MIT Press.
De Long, B. (2005) Wages and Salaries as a Share of Net Domestic Product. http://delong.typepad.com/sdj/2005/07/wages_and_salar.html
Drago, R. (1996) Workplace Transformation and the Disposable Workplace: Employee Involvement in Australia, Industrial Relations 35: 526–543.
Foley, D.K. (1986) Understanding Capital: Marx's Economic Theory, Harvard University Press.
Gintis, H. and Ishikawa, T. (1987) Wages, Work Intensity, and Unemployment, Journal of the Japanese and International Economies 1: 195–228.
Green, F. (1988) Technical Efficiency and Production Relations: An Expository Note, mimeo.
Green, F. (2004) Why Has Work Effort Become More Intense? Industrial Relations 43: 709–741.
Green, F. and McIntosh, S. (1998) Union Power, Cost of Job Loss and Workers' Effort, Industrial and Labor Relations Review 51: 363–383.
Green, F. and McIntosh, S. (2001) The Intensification of Work in Europe, Labour Economics 8: 291–308.
Guy, F. (2003) High-Involvement Work Practices and Employee Bargaining Power, Employee Relations 24: 453–469.
Guy, F. and Skott, P. (2005) Power-Biased Technological Change and the Rise in Earnings Inequality, Working Paper 2005-17, Department of Economics, University of Massachusetts at Amherst.
Hall, P.A. and Soskice, D. (eds) (2001) Varieties of Capitalism: The Institutional Foundations of Comparative Advantage, Oxford University Press.
Hunter, L.W. and Lafkas, J.J. (2003) Opening the Box: Information Technology, Work Practices, and Wages, Industrial and Labor Relations Review 56: 224–243.
Katzner, D.W. and Skott, P. (2004) Economic Explanation, Ordinality and the Adequacy of Analytic Specification, Journal of Economic Methodology 11: 437–453.
Marglin, S.A. (1974) What Do Bosses Do? Review of Radical Political Economics 6: 60–112.
Marx, K. (1967) Capital, vol. I, International Publishers.
Okishio, N. (1961) Technical Change and the Rate of Profit, Kobe University Economic Review 7: 85–99.
Piketty, T. and Saez, E. (2003) Income Inequality in the United States, 1913–1998, Quarterly Journal of Economics 118: 1–39.
Ramirez, M., Guy, F., and Beale, D. (2007) Contested Resources: Unions, Employers, and the Adoption of New Work Practices in US and UK Telecommunications, British Journal of Industrial Relations 45: 495–517.
Sewell, G. (1998) The Discipline of Teams: The Control of Team-Based Industrial Work Through Electronic and Peer Surveillance, Administrative Science Quarterly 43: 397–428.
Skott, P. (1991) Imperfect Competition and the Theory of the Falling Rate of Profit, Review of Radical Political Economics 24: 101–113.
Skott, P. and Guy, F. (2007) A Model of Power-Biased Technological Change, Economics Letters 95: 124–131.
21. Trust, Responsibility, Power, and Social Capital Timo Airaksinen Department of Philosophy, University of Helsinki, Finland
1. Definitions of Social Capital
This paper discusses trust as a form of social capital, that is, as a social resource which works as a facilitating condition of successful action coordination and social cooperation. It also discusses responsibility as a special source of trust. Coleman (1988) defines social capital in terms of its function; roughly, the following features are required: it is a property of a social structure which helps its individual or collective members' successful action, that is, the function which is called social capital makes it easier for those actors to reach their goals (Putnam 1995). Fukuyama (1999) says that social capital is 'an instantiated informal norm that promotes cooperation'. He does not mention social structure as Coleman does, and Coleman does not mention cooperation as Fukuyama does. These two definitions may aim at a common idea, but their details disagree radically. Some social capital theorists talk about trust as a factor which exemplifies social capital, but if we believe in Coleman's structural definition, it may be difficult to see how the most demanding forms of trust fit in. Fukuyama explicitly allows for trust. This short paper is a reflection on the various intuitive ways in which trust and responsibility may be defined and understood, but without grappling with the core philosophical difficulty of what constitutes a structuralist or functionalist explanation. This is not a paper on the philosophy of knowledge but an intuitive analysis of how we might talk about trust and responsibility. Let us start with a philosophical analysis of the concept of trust.
2. Trust
Trust is of two different types, both more or less normative (Baier 1986; Gambetta 1988; Hardin 1991, 1996; Holton 1994; Misztal 1996). The following distinction is well-known and its main features relatively uncontroversial:
reliance or weak trust and full trust, which correspond to reliability and trustworthiness, respectively. This distinction is the methodological cornerstone of my argument in this chapter. Many tricky-looking problems dissolve if one focuses on it. I trust my car in the sense that I rely on it. It is reliable. This implies confidence. If we understand such a case in terms of reliability, it means that the car works in a predictable manner under normal conditions. We do not mean mere predictability though. If we did, I could say that I am confident that my old junk car will break down, and this means that I trust my car. I may confidently expect it, but I don't trust that my car will break down. 'Trust' in this case is used ironically. Thus, trust entails the satisfaction of my desire in the sense that the car takes me where I want to go. Ascriptions of reliance are conditional on desires. A dependable and, in this sense, trusted car does, as a car, what I want and not what I do not want. Another condition is needed as well: the car does what it is supposed to do as a car. When I say my old computer can be relied on, I do not mean that it is a reliable boat anchor. Reliability is to be understood in relation to its normal function. That is where it is trusted. In this same sense, I may also trust (some) social systems and institutions. A slave owner trusts his slaves in the same way. In some cases an employer may trust her employees as functionally as this. They are dependable because they exhibit reliability of service according to certain desire-based, normal expectations. This is social dependability-based trust as a normal and desirable reliability. Let us call it the weak, minimal, or nominal notion of trust. It, of course, allows for degrees. Notice that, analytically speaking, to rely on x I need to have certain beliefs about it. These beliefs may not be rationally evidence-based; they can be derived from any source and by any means. A boss may rely on his workers because he had a dream last night or because he needs their services anyway. Rational reliance is the normatively ideal case. One should have good evidence. In most cases, people do not have it or they do not use it. This is understandable if 'a basic function of social trust is the reduction of cognitive complexity' (Earle and Cvetkovich 1999: 9). Too much epistemic labor is counterproductive. All you need is trust. But then trust may be dangerous. Foolish trust is not recommended, although it may be a common attitude in social life. I trust my family members. What is the new point here? A major difference must exist between the case in which I trust a slave, a machine, or a large-scale social system, and the case in which I trust another full person and social agent. When I trust a person, I do not necessarily focus on his or her reliability in satisfying my personal desires. A person is much more than an object of normal expectations which his or her reliable functioning satisfies. A person has no normal function. The reason is that to assign such a normal function to her would be ethically degrading or denying her personhood, and this is a contradiction. I would assign an attribute to a person as a person such that it
denied her personhood. The point is that a person's actions show much variation and creativity as they express her own beliefs, desires, intentions, plans, and goals. She has free will and her own plans. This entails unpredictability and, therefore, also fundamental unreliability. A person is never at my service in the same way a slave, my car, or even the social welfare system might be. A person is, and has a right to be, unreliable. This is the reason why a power wielder may want to deny the subordinate agent's personhood. I suggest the following: Trust is based on mutual benevolence, shared salient values, assured virtues, and largely uniform and socially compatible goals, and it leads to cooperation (in a normative sense of the term) which is egalitarian, free of coercion, and mutually satisfying over a wide range of situations. Therefore, 'Social trust is based on value similarity, with the value basis varying across people, contexts, and time' (Earle and Cvetkovich 1999: 9). Trust allows the parties to expect cooperation in the long term. Such is the foundation of the trustee's trustworthiness from the trustor's point of view. This is what I call full trust. The object of trust, in this sense, is a person. I trust another person, but in the case of weak trust or reliance, I trust her acting in a certain desirable normal manner. Full trust is a virtue notion because, clearly, the trustor's ability to trust a trustee is a desirable character trait between its two vicious extremes, namely, gullibility (too much trust) and cynicism or paranoia (no trust). We welcome trusting people to our communities, and we may find it easy to trust them too. We admire them. Their presence in the community makes life easier, more relaxed, and fulfilling. Also, trustworthiness is a virtue term in the Aristotelian sense, because people who are characterized so must display some visible and lasting features of a good character, otherwise we would not trust them. It seems that we tend to pay more attention to trustworthiness than to the ability to trust (to be trusting), but this need not be so. One of the reasons to trust another person is that she trusts you. Thus, full trust is, at least in some typical cases, a symmetrical relation between a trustor and a trustee. Reliance is not symmetrical in the same way. The main point is, very briefly, that if the other person does not trust me, I need not trust him, and vice versa. For example, I trust that you pay back your debt, although I know that you do not trust me. Why would you pay it back then? Because you do not trust me, you cannot expect anything good from me in the future, even if you pay me back. The present loan is a mere exception. This provides you with a solid motive for not paying it back. Because you have such a known motive, I should not have lent you money in the first place. A trustworthy person is a virtuous trustor as well. My trust in a person often means mutual trust also; therefore, we are able to coordinate our actions on that basis. This also means that reciprocity is, at least, a necessary condition of trust, in the strongest case of full trust. The
following can be suggested: If you trust me fully, or find me a trustworthy person, I can rationally trust you in a similar way. The reason for this is that you believe that we share some relevant goals, values, norms, and desires – or certain virtues – and therefore you trust me as a person. We are able to cooperate. And because these are your own values and virtues, you are that kind of a person and will act and live accordingly, and these facts entail trustworthiness and my well-founded full trust in you. Trust has, accordingly, its community-forming function. People do not necessarily trust each other because they are members of a community; they are members of a community because they trust each other. Mutual full trust entails covenanting, promise keeping, truth telling, faithfulness, and benevolence. Finally, the social surroundings and institutions of a trusting relationship are important. An example is a family or friendship. If these conditions are missing or if they are unfavorable, full trust becomes a problematic possibility. Some circumstances are either so chaotic or competitive that no sensible person should trust anyone. But there are also social situations where it is easy and natural to trust. The point is that we share our values in a social context and display our virtues depending on some special social conditions. However virtuous the agent is, trust is always contextual. Anyway, I cannot offer an analysis of these contexts here. They are, obviously, relevant to social capital. Now the following counterexample can be suggested. A nasty child knows that his father will bail him out of trouble even though he knows that his father does not trust him to avoid trouble in the future. The father does not trust his son, yet the son can trust his father. This looks like a powerful counterexample and a refutation of my suggested position concerning full trust above – but this is not so. I have already drawn a distinction between mere reliance and full trust. It is easy to see that the present counterexample confuses the two in a philosophically unacceptable manner. The son treats his unconditionally loving father as a reliable source of help, which he can make serve his own desires according to the normal functioning of fatherhood (when 'father' is a social role). The son can use his father. Thus, the son 'trusts' the father in the weak sense of mere reliability and reliance. This is to say that one-sided trust tends to collapse into mere reliance. I trust my trusting wife (my virtuous type of attitude toward her person), but I know she might steal from my purse (her particular unjust action). One word of warning is needed though. I admit that such cases may exist in which full trust is not mutual. I only say that they are exceptions, and in this paper I have no intention of going into such anomalous details. Notice that a distinction must be drawn between cooperation and coordination of actions. An important difference between the two concepts, as they are used in this paper, is that the former is an egalitarian notion in the sense that those who cooperate are of equal value and social status (at least within the limited cooperative context itself), which need not be true of the
latter case. This idea can be extended to full and weak trust. The former does not allow for a higher/lower power distinction, although the latter does: the trustee's actions are supposed to serve the trustor's desires in a normal and reliable manner but not necessarily vice versa. For example, if I order you to act and I trust (rely on) that you will, the situation is not symmetric, as you cannot be said to trust that I will punish you if you fail to deliver. Reliance entails something desirable. Therefore, you only believe that I will punish you. The reason is that weak trust entails a relevant desire, and you cannot desire punishment. But perhaps I will promise you a cool million if you deliver. In this case you can be said to trust me, that is, you expect to get your reward if you act as I desire. In such an exchange, our relative power and status differences are camouflaged, yet they exist, at least in the sense that I initiate and define the transaction. I use you. You cannot afford to decline the offer. This is my game of reliance, not yours. The exchange of trust still depends on my desires, which is the reason why it is weak trust. I do not treat you as a full person. You have no choice. If I offered you candy, your situation would be different. A cool million is something else. Now you are predictable. Such examples of action coordination show how mere reliance and full trust get easily confused. Next, let us pay attention to the difference between cooperation between persons and coordination of their actions and strategies. Cooperation may require full trust between persons. You do not cooperate with your reliable old car, nor do you cooperate with (formal) institutions. Institutions do not trust you. Coordination, however, can be achieved on the basis of mere reliance, as it seems. In a traditional conflict-theoretical model, agents' coordinated deals are possible because of a coercive agent or authority and his effective or lawful sanctions. The subordinate person can be sure that the sanctions follow if he or she does not act as required. This is coordination which does not entail trust in the full sense. We can now test our theory against the following very different view: First, trust implies a difference in power and control. The one who trusts assumes a position of subordination and relinquishes decision and behavior control to the one who is trusted. … Second, … trust also involves risk. … one can never be entirely certain that one's trust will not be violated. … Third, trust is an expectation about a relationship. … Fourth, … individuals have a choice about when to trust and who to trust (Cvetkovich and Löfstedt 1999: 5). Clearly, here the authors mean reliance and not full trust, except when they make their third point. When I rely on technology, I am at risk as its slave. That much is true. But when I trust my brother there is no risk involved because if there were, I would not trust him. Remember that full trust entails mutual trust. If my trust in him involves a risk, he is facing a similar risk, too. My risk corresponds to a risk to him. We do not want to lose each other's trust, and as virtuous persons in our normal social situations we act so as to avoid it. Of course anything is possible, and this fact entails a risk, but we are not interested in fiction now. About the fourth point, it seems to me that we do not freely choose our trustees. Whom we trust depends on background knowledge, social context and its historical characteristics. I trust my family over the broadest range of issues because it is my family. I cannot choose to trust my favorite used car dealer even if I deal with him. It would, indeed, be strange if I chose (freely) to distrust my family and, instead, trusted the used car dealer. Yet in certain borderline cases trust is not fixed, and I must choose whom to trust. The following statement is a good summary of what was said above about full trust: Things can go better with trust. People can enjoy their interpersonal relations more if there is no shadow of distrust hanging over them. They have a better chance of consummating mutually beneficent agreements if they don't have to decode proposals and positions and are not always guessing at 'what do they mean by that?' With trust, people can create the social capital that is only possible when people work together (Fischhoff 1999: viii). Full trust is a kind of social capital. It does not merely contribute to it. Reliance is different in this respect.
3. Responsibility
In many cases trust must be created somehow, because it is not automatically a feature of human interactions. Our normative intuitions tend to be vague and inconsistent. It is difficult to keep track of the differences between reliance and full trust. We certainly have many reasons not to trust each other. In the modern liberal reality, full trust may look more like an exception than a regular feature of human interactions and social relations. Yet in normal social life trust may also look like a default attitude. When I ask for directions in a strange city, I may decide to rely on an unknown person; why? I have no reasons not to take her advice, and she has no obvious reasons to mislead me. In many other cases she has such reasons, so then I should not rely on her. However, we are all social creatures with a background of sociability. This means that we are prone to trust other people, and we even do that with ease. We even assume that the person's intentions are good and beneficial. She seems willing to share my goals and values. Yet we should relax only in special circumstances. But because we need to be aware all the time, we can ask: on what special grounds can we feel confident in our social situations, relations, and interactions? A partial answer can be given in terms of responsibility. The suggested epistemic rule is: it is reasonable to trust responsible people. This idea can also be used to explain our reliance on social institutions and systems which are in a key role when we discuss the 'embodied' forms of social capital. Four different types of responsibility can be distinguished: (1) accountability and liability, based on (negative) sanctions; (2) competence-based responsibility; (3) social responsibility based on (informal and personal) authority; and (4) moral responsibility, based on either universalizable norms or moral virtues. Responsibility is not often discussed in connection with trust and social capital, perhaps, because of an emphasis on liability (1) as the sole option. The point of responsibility has been understood in terms of the causal consequences of one's actions, the ensuing fault, harm, and appropriate negative sanctions (Fischer 1999: 93–9). If this is so, responsibility is not relevant to full trust. If I perform according to others' desire-based expectations only or mainly because of a perceived threat, I deserve to be trusted only in the shallow, reified, and ironic sense of reliance discussed above. I am supposed to perform in a reliable manner according to others' desires or according to their rules because of the threat. Counterfactually: if the threat didn't exist, I could not be expected to act as desired. Here the successful coordination of actions requires sanctions which actually entail the lack of trust. In one sense, I am indeed trusted (relied on), while in another sense, I am not (fully trusted). I am under an efficient threat and its sanctions, because I could not be relied on without them. I can be made reliable, but then I can only be trusted in the minimal sense. Responsibility as liability and the corresponding case of personal reliance do not seem to be relevant to social capital. For instance, the law coordinates individual actions, and if you follow the law because of your fear of sanctions in the case of your liability, your contribution to social capital is minimal. Nevertheless, in such a legally effective context you can still be relied on. Competence is a strong source of minimal trust or reliance. In this case an agent freely accepts the responsibility which is offered to her by an authority or an authorizing agency (Airaksinen 1988). This is unlike the case of liability which imposes responsibility on an agent normally and prima facie against his or her own will. (Of course one may regret one's actions which allow one to accept one's own liability. But this is an exception.) The acceptance of responsibility based on competence should be free, and it should be offered to an agent on good grounds by a relevant authority. All competence-based responsibility is authorized and therefore cannot be spontaneous or self-imposed. A bus driver is competent, he works freely, he wants the job, and the job is offered to him. His passengers can trust his actions in the minimal sense. However, this trust is stronger than that based on liability even if both are of the minimal type. Two reasons for this exist: (i) In the case of liability, trust ends when the threat disappears. Competence considerations do not include any such trigger factor, so that the trusting agents need not worry about checks and precautions against inefficient or disappearing sanctions. A competent agent works more predictably. (ii) Genuine cooperation and full trust can (almost) be achieved on the
basis of competence responsibility. In many cases, the nature of competence is such that it requires cooperation along with coordination, although there are cases where this fact is not obvious. Medical doctors are expected to cooperate with their patients and in ideal circumstances they do on the basis of their perceived benevolence, shared values, and respective virtues. A patient should be afraid of doctors whom she only relies on – they are competent – but whose personal (full) trustworthiness cannot be verified or known. The reason is that the patient and her doctor should share some genuinely cooperative values and norms; if not, the doctor is like a body mechanic. It seems that competence-based trust is open to full trust and virtuous cooperation. The critical questions one should ask are these. First, is the relevant value basis of full trust part of one's competence? It should be. When we speak of competence, we presuppose a service relation between the competent agent and her client. And here, shared values are crucial. Second, can the client be said to be competent too? If not, how can competence be a source of full, cooperative trust? At least a doctor's patients should show some competence as patients. Perhaps air passengers should do the same. They need the right attitudes and some special skills. Third, can we respond to the latter question by saying that competence creates, first, weak trust which then leads toward full trust, because trust creates trust when the relevant agents share certain values and virtues? This would explain the creation and growth of some forms of social capital in those cases where agents are competent and responsible. Two different cases of competence responsibility exist: (1) personal (informal) and (2) institutional (formal). In the first case, the audience, as an informal authority, is convinced of the competence of an agent on the basis of shared experience and thus offers responsibility to her. Leaders are often authorized in this way. In the second case, responsibility is offered by a formal social institution which itself is seen as both a responsible and reliable authority. The first case is that of minimal authority and the second case is that of full authority. Legal authority belongs to the second case. The first case is problematic, because we can ask whether it is authority at all. Such an informal authority can be called 'proto-authority' or even 'pseudo-authority', but we may call it authority anyway. It assumes the power of deciding and is fully committed to its own decisions, and moreover, it has the crucial right to expect that the chosen agent is committed as well. An audience, as authority, has an active role in evaluating and judging the candidate, and after making a decision it is committed to its authorizing act, however informal. In some cases, the validity of the professed authority is a problem, however, and even when a formal agent authorizes an actor, the audience may well be suspicious and not trust the agent unconditionally. But in the first case, the audience itself assumed an authoritative role and whomever it authorizes to act is necessarily trusted. If he is not, the authority does not trust itself, and this leads to some strange and quite intractable questions. Notice that this idea of competence-based responsibility is peculiarly value-free. If you need to hire a killer, it is a good idea to get a competent one from your local Mafia. Their recommendation and recognition are enough, as you need not cooperate with the Mafia in any other way. You merely coordinate your own plans with their recommendations. You rely on the hired killer as a competent agent even if you do not trust the Mafia in the full sense. This type of case is not conducive to the growth of social capital in any way, unlike the case of a civil servant who is working for a trusted state institution. In conclusion: Competence responsibility is a good basis of trust, perhaps even full trust and cooperation, but it need not contribute to social capital. The values which run certain common coordination games can be antisocial, partisan, and in general, dysfunctional. In the case of minimal trust the agent's desires, which the trustee serves in a reliable manner, may be harmful. Social responsibility (or public responsibility) looks like an obvious candidate for a contribution to trust and social capital. Notice that this type of responsibility is again independent of liability. An example of felt social responsibility is a person who collects a piece of trash from a public place and puts it into a garbage bin – suppose she has not dropped it herself. The person accepts her social responsibility which is never offered or given to her (unlike in the case of competence). Social responsibility belongs to a person as a member of society. This formulation does not explain much, of course. It seems that we can explain social responsibility in terms of personal authority if we understand the term correctly. I suggest that we say that when a person assumes his social responsibility he adopts a position in which he himself is an authority in his socially shared world. He authorizes himself. His is a freely chosen act which results in an exemplary position which entails something like 'when it comes to the common good, you can trust me (in the full sense)', and 'do likewise and we can cooperate further in the service of the public good'. This is authority in a special, informal, hidden, and personal form. A good example is care. When I care about the environment, other people, or institutions, I assume a position of informal and personal authority in relation to my object of care and a relevant audience. I do what I see I should do, and other people should act accordingly and either receive care, facilitate it, or provide it. Therefore, socially responsible action is model behavior which has its own normative point. This is the key message, and it is exemplary. Thus, it is a quiet, informal, and hidden form of authority, but it is a position of authority anyway. Of course, a socially responsible person is not in a position to authorize anyone to act, but he is in an authoritative position. His actions exemplify his prevalent benevolent attitude called social responsibility.
Notice that this is in-authority, not authority-to. One is not authorized to perform any special tasks or given any typical privileges like the permitted forms of violence and information gathering of the police. Yet the person is trustworthy because she is in an authoritative position in which she has assumed certain social responsibilities. This is an authoritative position which anyone is invited to share with her. It is a shared good and, as such, part of social capital. Social responsibility is different from social duty in the sense that many social responsibilities are supererogatory, either in a strong or a weak sense (Urmson 1958). To pick up some trash one has not dropped oneself is not one's duty. That much is clear. One may do it if one wants to – and a socially responsible person does. It is a good act which carries its negative cost factor, however small, but it is not a duty. Therefore, in this sense, it is a supererogatory action. Yet it is such only in a weak sense. The reason is that the act is not demanding, risky, or very costly. It is neither heroic nor saintly. Some other examples of social responsibility may well be supererogatory in the strong sense, namely, that they are heroic or saintly. The main point is, however, that the acts of social responsibility are not duties, and so they are of a variable nature, subjective by motivation and restricted in scope. Informal rules exist such that they give direction and meaning to acts of social responsibility, but these are culture-relative and in many ways indefinable. The relevant acts often need a separate explanation or a background story which allows their audience to make sense of them. Old people feed city pigeons – a practice which requires a long narrative account before we may understand their motivation correctly. The following counter-argument can be suggested: Social responsibility is not always supererogatory. Am I not socially responsible if I do not drop litter? Someone may be going beyond duty if they pick up others' litter, but such behavior is not necessarily a requirement of social responsibility. My response is as follows. It is, indeed, my duty not to drop litter, and thus, such behavior is not supererogatory. To drop litter is a harmful act. But to refuse to do so is not socially responsible behavior in the sense discussed above. To refuse to drop litter is doing one's personal duty, but to pick up other people's litter is a good act but not a duty. My conclusion is that social responsibility is supererogatory in the weak sense. It is never a duty although it is always good. Of course, I do not deny that socially beneficial duties exist, but they are not social responsibilities. As is easy to see, social responsibility is a strong contribution to social capital. As a socially responsible person, I can be trusted and cooperated with in all matters under my personal authority, as long as I and my audience share some basic background assumptions about social values and virtues. I am willing to do more than my duty, but only in some special circumstances which may be difficult to predict or control. But cooperation also means discussion, negotiation, exploration, and compromise. During these
varied and indefinable processes, the scope and limits and strengths of the participants' social responsibility are revealed. Anyway, I can cooperate with people to whom the common good and the value of social life matter, if they matter to me. This is full trust, which concerns persons and not only their specific actions or plans. Notice also how the requirement of mutual trust is satisfied here: my socially responsible attitudes and actions are authoritative in the required sense. They suggest that others do the same, which satisfies the requirement of mutuality in cooperative contexts. Moral responsibility can be seen either as an extension of social responsibility or not. In the first case, we apply the idea of universalizability to the norms (u-norms, for short) and values which are relevant to social responsibility (Hare 1963). In the second case, we need to discuss practical virtues and the corresponding duties of virtue. From the point of view of u-norms, it is relevant to notice that this may be the only way to clear the artificial borders of social capital as drawn by the other grounds of trust. This is the key trouble: social capital is limited to certain areas and groups of people so that their coordinated actions are turned into proper cooperation which is further facilitated, but at the same time, a potentially dangerous and dysfunctional border is created between 'us' and 'them'. We trust our own people, and although we may rely on others if the conditions are right, it is not at all clear that they can rely on us. Reliance is a functional social relationship and its distribution is regulated on utilitarian grounds. Moreover, it is clear that they should not trust us, as we do not trust them; full trust is so demanding that it cannot extend far across social dividers. Neither trust nor cooperation can extend across the divide between us and them. On the contrary, the other side is that of strangers who can be ignored and enemies who can be exploited or must be feared. Competence and social responsibility may not directly contribute to the creation of this divide, but they allow it. Competence requires its own audience and clients, and social responsibility is meaningful only if the agent's values are shared by other people. Those who remain outside the sphere of the 'social' cannot benefit from anyone's socially responsible actions. The outsiders do not care, and they are not cared for. But there are worse cases as well. As Fukuyama writes: … social capital is a private good that is nonetheless pervaded by externalities, both positive and negative. An example of a positive externality is Puritanism's injunction, described by Max Weber, to treat all people morally, and not just members of the sib or family. The potential for cooperation thus spreads beyond the immediate group of people sharing Puritan norms. Negative externalities abound, as well. Many groups achieve internal cohesion at the expense of outsiders, who can be treated with suspicion, hostility, or outright hatred. Both the Ku Klux Klan and the Mafia achieve cooperative ends on the basis of shared norms, and therefore have social capital, but they also produce abundant negative externalities for the larger society in which they are embedded (Fukuyama 2005).
It is possible that all the forms of responsibility-based trust are open to such an accusation except moral responsibility in its u-form. But this is a cheap victory if u-morality is a high ideal and a noble goal whose degree of social realism is as low as its semantic clarity. If u-morality is not applied in society and if it is just an ideological mantra, it does not increase the overall social capital. But this is difficult to study empirically. In the general sense, we can say that moral people can be weakly trusted even by strangers and fully trusted by moral people. In Fukuyama's terms, you need not be a Puritan to cooperate with Puritans, because they treat you fairly anyway. Yet they will apply their own notions of fairness and justice, not yours. And they cannot be relied on to help you advance your immoral plans. (For instance, a moral agent does not keep his promise to a criminal who wants to profit from it immorally. He cannot be relied on in such a case.) Morality does not entail rigorism or fanaticism. Moral people are reliable, but strict limits exist. As long as we talk about u-morality, we need not mention social capital. The reason is that u-morality is a better known, more inclusive and fully applicable, and in general, a stronger concept to which the notion of social capital does not add much or anything at all. U-morality is also less metaphorical. The point is, if all people are moral it follows analytically that social cooperation is as easy as it can be. And if this is true, we need not mention social capital. It has become redundant. But because all people are not moral or they are moral in different ways, we need the notion of social capital. The key problem here is that full trust and social cooperation (not mere coordination) based on u-norms and high values still require reciprocity: it is suicidal to be moral in the midst of enemies or other savages and fiends. Universalizability is not enough; moral knowledge and sentiments must also be general or widespread. U-norms bind us all, but they must be recognized first. This is the idea of moral generality which is different from universalizability. In short, both sides, us and them, should recognize u-morality. One-sided u-morality does not add to social capital more than social responsibility does. Only if both sides are u-moral can we trust each other across borders. Yet it is true that the ultimate and most desirable form of social capital is u-morality. We should aim for it. But as long as we do not have it, social capital is too easily good for 'us' and outright dangerous to 'them'. It may look like a tribal good. Virtues (in the classical sense) and their duties form a special case here. For example, Aristotle in his Politics allows a social system of slavery and a depreciation of women in social life. These groups are said to be incapable of full virtue. Therefore, practical virtues, even when they are called moral virtues, not only allow for but actually create a divide between 'us' and 'them', supporting altruism between us, facilitating our coordination, planning, and cooperation and in general making our lives glorious in many ways – at the same time leaving major segments of the same society in rejected or subordinated positions. We need generalized social capital. We do not need social capital which is restricted to us and harmful to them. Even the strongest forms of social capital allow dividers which can be dismantled only on a fully general u-moral basis, as it seems. An example is the classical virtue of fairness whose maxim is: 'Hate your enemy, love your friend'. It is, of course, important that we do not mix the subjects of such attitudes, but the gap between 'us' and 'them' is clear anyway. Within the world according to u-morality, it is much more problematic to hate your enemy, although even Weber's Protestants have had their share of difficulties with such noble values. Yet we can trust u-moral people, that much is true. But as I said above, this may be an empty statement from the point of view of social capital theory. Us/them thinking is a problem for social capital. If we do not accept this, we need to congratulate class-based thinkers, elitists, fascists, racists, slave owners, misogynists, Mafiosos and many other similar social agents for their successes in developing and exploiting their respective social capital, which entails a temptation to imitate and follow them. An example is the recent neo-Aristotelian and communitarian social theory and its acolytes. They all celebrate their particular form of social capital in a restricted communitarian context (Mulhall and Swift 1992; Sandel 1982; Walzer 1981, 1987).
4. Conclusion
Social capital can be understood in two different ways: as a ground for action coordination between rationally prudential agents (individual and institutional) or as a characterization of a social life of cooperation according to trust-related virtue. I already said above that social capital tends to draw a distinction between 'us' and 'them', a divide which cannot be crossed merely by means of trust. 'They' cannot be trusted except, perhaps, in the weakest sense of the term. This is why they are 'they'. Social capital must be much more limited in its area of application than it may seem. We say that we fully trust agents and institutions and cooperate on that basis. But because we are always willing to double-check, litigate, seek compensation, and cry for liability, we trust only in the weak sense of reliance or in a sense which ultimately presupposes a coercive authority and the Hobbesian sword of law. We rely on people because we know that no violation of trust will go unpunished. This is weak trust, 'trust', or trust in an ironic sense. The good part of it is that trust, coordination, and social capital can be extended as far as our (coercive) power base reaches, and that is very far under the wings of the modern post-colonial and neo-imperialist states. Notice that the talk about coercive power does not make social capital void because such power can be consensually and freely accepted. Non-coerced acceptance of coercion is not only possible, but it is normal. This is to say that it grounds social capital too. Full trust is demanding and as such restrictive. Its scope is necessarily narrow. We must be able to act outside the scope of coercive power on our own authority among our equals. This context does not extend far, as it is often too risky to try. Full trust is to be reserved for those virtuous few with whom we feel close; in the case of the rest of the world, we pretend to trust. If our powers are too weak to do even that, we ignore those who are on the other side or attack them. Both are popular strategies. Then there are the u-norms of Kantian ethics. Their problem is that – even if they should be practical, widespread, and applicable – they are, too often, theoretical idealizations which do not work well in real life. Yet u-norms offer full social capital to those who are in a position to try them, or they replace social capital. My conclusion is that social capital is a Janus-faced social phenomenon. It pretends to be where it is not, and then it is influential and successful (Mafia). It exists where it is so restricted that its resources are redundant (friends and family). Its best examples are taken from the ethics of duty. The first version creates limits which can be crossed by means of u-norms; the second case, that of virtues, is more difficult. Full trust is its own worst enemy.
Acknowledgements
I am especially indebted to Pekka Mäkelä for comments on an earlier version of this paper as well as to two anonymous referees for their critical comments. The research for this paper was supported by the Academy of Finland and Tekes.
References
Airaksinen, T. (1988) Ethics of Coercion and Authority, Pittsburgh University Press.
Baier, A. (1986) Trust and Antitrust, Ethics 96: 231–260.
Coleman, J.S. (1988) Social Capital in the Creation of Human Capital, American Journal of Sociology 94: 95–120.
Cvetkovich, G. and Löfstedt, R.E. (1999) Introduction, in G. Cvetkovich et al. (eds) Social Trust and the Management of Risk, Earthscan.
Earle, T.C. and Cvetkovich, G. (1999) Social Trust and Culture in Risk Management, in G. Cvetkovich et al. (eds) Social Trust and the Management of Risk, Earthscan.
Fischer, J.M. (1999) Recent Work on Moral Responsibility, Ethics 110: 93–139.
Fischhoff, B. (1999) If Trust is So Good, Why Isn't There More of It?, in G. Cvetkovich et al. (eds) Social Trust and the Management of Risk, Earthscan.
Fukuyama, F. (1999) Social Capital and Civil Society, International Monetary Fund, IMF Conference on Second Generation Reforms.
Gambetta, D. (ed.) (1988) Trust: Making and Breaking Cooperative Relations, Blackwell.
Hardin, R. (1991) Trusting Persons, Trusting Institutions, in R.J. Zeckhauser (ed.) Strategy and Choice, MIT Press, 185–209.
Hardin, R. (1996) Trustworthiness, Ethics 107: 26–42.
Hare, R.M. (1963) Freedom and Reason, Oxford University Press.
Holton, R. (1994) Deciding to Trust, Coming to Believe, Australasian Journal of Philosophy 72: 63–76.
Misztal, B.A. (1996) Trust in Modern Societies, Polity Press.
Mulhall, S. and Swift, A. (1992) Liberals and Communitarians, Blackwell.
Putnam, R.D. (1995) Bowling Alone: America's Declining Social Capital, Journal of Democracy 6: 65–78.
Sandel, M. (1982) Liberalism and the Limits of Justice, Cambridge University Press.
Urmson, J.O. (1958) Saints and Heroes, in A.I. Melden (ed.) Essays in Moral Philosophy, University of Washington Press, 198–216.
Walzer, M. (1981) Philosophy and Democracy, Political Theory 9: 379–399.
Walzer, M. (1987) Interpretation and Social Criticism, Harvard University Press.
22. Exploiting The Prince Manfred J. Holler Institute of SocioEconomics, University of Hamburg, Germany
But if you do not have the right base, the right foundations, it is impossible for you to do anything good [or] even if you had the right practice and possess the greatest skill in the world. — Albrecht Dürer (1471–1528)
1. Who Exploits Whom?
Did Pope Alexander VI exploit his son Cesare Borgia to extend the papal state and finally convert it into a Borgia state? Cesare Borgia became Machiavelli's model of the political character labelled 'The Prince', and he is the hero of his booklet Il Principe (The Prince), which contains Machiavelli's vision of how to unite his beloved Italy and to bring peace and the rule of law to it. Cesare's papal father, however, was given a rather negative evaluation by Machiavelli, who accused him of using 'old religious customs' to do 'nothing else but deceive men … no man was ever more able to give assurance, or affirmed things with strong oaths, and no man observed them less; however, he always succeeded in his deceptions, as he well knew this aspect of things' (Prince, 93 m). Of course, Cesare Borgia also exploited his position as a papal son in order to create support for his military expeditions and political feuds. This became obvious with the early death of his papal father and the reign of Julius II.1 In the end, Cesare was arrested and brought to Spain, the native country of his family, where he died in an ambush at the age of thirty-two fighting for his brother-in-law Juan de Albret, King of Navarra.
1 Alexander VI was succeeded by Pius III of the Piccolomini family. However, he died within a year. Machiavelli tends to ignore this pope – so we ignore him, too.
The author of this paper will also do some exploiting. He will exploit The Prince by recklessly distilling some building blocks for his interpretation of Machiavelli's writings. And he will exploit the editors of a volume that is edited in his honour with the expectation that this essay will not be sent to reviewers and be rejected. The idea of this paper is to show (a) how
Machiavelli developed a political theory that heavily leans on the concept of The Prince, and (b) how important Machiavelli is as a point of reference for modern political thinking and theory formation. Too much of Machiavelli’s thought has been neglected or left to outrageous misinterpretation. By revisiting Machiavelli we will be able to clarify some of the problems that have plagued political theory over the centuries: the aggregation of preferences, the origin of the state and the law, the status of power and morality in politics, 2 and the dynamics and efficiency of political systems. In particular, it will be shown how Machiavelli circumvented the aggregation problem by postulating that only the undivided will of the prince can guarantee the consistency that is necessary to organize a state and to bring about good laws when society suffers from chaos, interests are conflicting, and preferences are diverse. It seems straightforward to relate this thesis to Arrow’s impossibility theorem and its (non-)dictatorship property. This reference, however, shows that Machiavelli was aware not only of the problems relating to the aggregation of preferences but also of the issue of power required to implement the outcome of preference aggregation.
2 Holler (2007a,b) deal with the power problem and the issue of ‘dirty hands in politics’ in Machiavelli’s writings, respectively.
To appreciate Machiavelli’s reasoning we have to see that the prince has a constructive role when there seems to be no other way available to achieve peace and order, and to assure freedom. Chaos and the prince are constituent elements of Machiavelli’s theory of political history. It seems that Machiavelli exploits the ideal of the prince in order to complete his political worldview. In principle, chaos cannot be avoided, but its reign can be postponed and its governance kept to a minimum if people read Il Principe and act in accordance with the recipes given in this book.
In order to understand this message, the next section will summarize Machiavelli’s theory of political history and discuss the role of the constructive and the destructive prince in this scheme. Having accomplished this task, Section 3 will be devoted to discussing the examples of princes that Machiavelli gave us and to determining what is specific about them. This is what will be called ‘The Machiavelli Programme’. He focuses on Romulus and Cesare Borgia, but also tries to convince contemporary members of the Medici family to accept such a role in history. In Section 4 I will relate Arrow’s social choice framework to Machiavelli’s view of the world, and in Section 5 I will discuss the relationship between consistency and efficiency. Once the will of the prince and the laws he granted have defeated chaos, the political participation of the people can increase efficiency and thus contribute to the fame and glory of the dictatorial lawmaker. Machiavelli gives a series of arguments why he thinks that ‘the people are wiser and more constant than princes’ (Discourses, 214) if their behaviour is regulated by law. If his arguments hold, then a state that allows for the participation of the people is preferable to principalities that are dominated by a single despot, a king by divine right, or a small clique of nobles. It should, therefore, be in the interest of the prince that his government flows into a republican system – in order to match consistency with efficiency and to give stability to what he created.
The paper ends with some conjectures about politics and politicians that can be derived from the Machiavellian framework presented here. For instance, Machiavelli’s notion of the state seems to be closer to the German concept, which refers to ‘the collective whole, the collective entity that transcends the particulars’, than to the Anglo-Saxon tradition, which tends to identify the state with government (Michaels and Jansen 2006: 853).
Before the discussion begins, it should be noted that I am not able to read Il Principe in the original, so I have had to make use of various translations and the interpretations that the translators put into their work. Throughout this text The Prince is quoted from Detmold’s translation of 1882 as well as from the Mentor edition of 1952. Choices are made after comparing the alternative translations. In some cases, I also refer to the German translations.
2. Machiavelli’s Theory of Political History
Machiavelli holds a cyclical view of history: history repeats itself, and that is why we can learn from studying it. There is growth and prosperity followed by destruction, chaos, and possible reconstruction. Princely government is followed by tyranny, revolution, oligarchy, again revolution, a popular state, and finally the republic, which in the end collapses into anarchy, waiting for the prince or tyrant to reinstall order (Discourses, 101). In Machiavelli’s History of Florence we can read:
The general course of changes that occur in states is from a condition of order to one of disorder, and from the latter they pass again to one of order. For as it is not the fate of mundane affairs to remain stationary, so when they have attained their highest state of perfection, beyond which they cannot go, they of necessity decline. And thus again, when they have descended to the lowest, and by their disorders have reached the very depth of debasement, they must of necessity rise again, inasmuch as they cannot go lower (History, 218).
Machiavelli (Discourses, 101f) concludes:
Such is the circle which all republics 3 are destined to run through. Seldom, however, do they come back to the original form of government, which results from the fact that their duration is not sufficiently long to be able to undergo these repeated changes and preserve their existence. But it may well happen that a republic lacking strength and good counsel in its difficulties becomes subject after a while to some neighbouring state that is better organized than itself; and if such is not the case, then they will be apt to revolve indefinitely in the circle of revolutions.
3 The German translation is ‘die Regierungen aller Staaten’ (Machiavelli 1977: 15), i.e. ‘the governments of all states’, which is perhaps more adequate than addressing the republic only.
This quote indicates that the ‘circle’ is no ‘law of nature’, although the image is borrowed from nature. 4 There are substantial variations in the development of the governmental system, and there are no guarantees that the circle will close again. Obviously, there is room for political action and constitutional design that has a substantial impact on the course of political affairs. For instance, Machiavelli concludes: ‘… if Rome had not prolonged the magistracies and the military commands, she might not so soon have attained the zenith of her power; but if she had been slower in her conquests, she would have also preserved her liberties the longer’ (Discourses, 388).
4 Kersting (2006: 61ff) contains arguments that suggest Machiavelli had a much stronger belief in the cyclical principle than is proposed here. Human nature does not change. It wavers between selfish greed and ruthless ambition, on the one hand, and the potential to strive for the common good, on the other hand. Depending on the state of the world, we find that the one or the other inclination dominates in frequency and success. There is also the possibility of the uomo virtuoso who, supported by fortuna, will lead his people out of the lowlands of anarchy and chaos. The result of this potential and the alternative inclinations is a cyclical up-and-down which sees tyranny and the free state as turning points but still contains enough leeway for the formative power of virtù and fortuna.
The prolongation of military command was a cost-efficient response to the larger distances that came with the extension of the Roman empire. Although this solution reduced transaction costs in the short run, it did not take into account long-run political costs. In the sequel, commanders formed a more intimate relationship with their troops than in the time when the dominion of Rome was smaller and commanders were exchanged frequently after much shorter periods of service. The result of this more intimate relationship was that the troops had closer ties to their commander than to the ‘political heart’ of the republic. It was this that made it possible for Julius Caesar to cross the Rubicon and march his troops to Rome.
This argument shows that there is an inner logic in the growth of the Roman republic which leads to its decline. It was the consequence of the republic’s necessity to keep its military active and to expand its territory in order to secure freedom against outside rivals. The political cycle has its causal interpretation for this turning point, and we can find similar interpretations in Machiavelli’s writings for other turning points. However, it seems more difficult to find a political constellation which leads out of decline and chaos into the haven of a well-ordered republic; here a prince is more difficult to find. In any case, it is an inner logic which drives the ups and downs of the political system, and not an exogenous divine authority or a divine law of nature.
Despite the cyclical principle, Machiavelli believed political action and constitutional design to be imperative elements of history. What the principle does is to provide the opportunity to learn from history and control the future. He repeatedly suggests that his contemporaries should study and learn from the Romans. In fact, it can be said that he wrote the Discourses to serve this purpose. Similarly, The Prince is addressed to Lorenzo the Magnificent, 5 to whom he says that it will not be ‘very difficult’ to gain power in Italy and to redeem the country from the barbarous cruelty and insolence of the foreigners if he calls ‘to mind the actions and lives of the men’ that he gave him as examples: Moses, Cyrus, and Theseus.
… as to exercise for the mind, the prince ought to read history and study the actions of eminent men, see how they acted in warfare, examine the causes of their victories and defeats in order to imitate the former and avoid the latter, and above all, do as some men have done in the past, who have imitated some one, who has been much praised and glorified, and have always kept his deeds and actions before them, as they say Alexander the Great imitated Achilles, Cesar Alexander, and Scipio Cyrus (Prince, 83 d,m).
5 He was the grandson of the Lorenzo di Medici who entered the history books as ‘The Magnificent’ and died in 1492. The younger Lorenzo died in 1519, too early to fulfil what Machiavelli hoped for. However, it is not evident that the ‘new’ Lorenzo ever had a chance to look at Machiavelli’s text. See Gauss (1952: 11).
Machiavelli emphasizes that man has a free will. ‘God will not do everything, in order not to deprive us of free will and the portion of the glory that falls to our lot’ (Prince, 125 m). ‘It is not unknown to me’, he writes,
… that many have been and are of the opinion that worldly events are so governed by fortune and by God, that men cannot by their prudence change them, and that on the contrary there is no remedy whatever, and for this they may judge it to be useless to toil much about them, but let things be ruled by chance. When I think about them, at times I am partly inclined to share this opinion. Nevertheless, that our free will may not be altogether extinguished, I think it may be true that fortune is the ruler of half of our actions, but that she allows the other half or thereabouts to be governed by us (Prince, 121 m).
The quote by Albrecht Dürer that opens this paper 6 illustrates that there was a major discussion during Machiavelli’s era as to whether the capacity of a human being to do something substantial was predetermined or whether it was subject to his free will and acquired skills. 7
6 Published at the exhibition ‘Officina Dürer’ at the Museo Diocesano in Venice, April 2007.
7 Dürer, whose dates of birth and death almost coincide with Machiavelli’s, worked for many years in Venice. He was not only influenced by Italian art and philosophy but also influenced it with his etchings and woodcuts, which were distributed in large numbers.
3. The ‘Machiavelli Programme’: Princely Heroes and Lawmakers
A prince can be destructive. He can exploit the problems of a declining republic to his own advantage. Obviously, Julius Caesar is Machiavelli’s first candidate for an illustration. A prince can also be constructive. He can implement law in a society haunted by chaos and anomie. Despite his bloodstained path to the throne, Machiavelli’s favourite example is Romulus, the mythical founder of Rome. It is well known, and Machiavelli reports it as well, that Romulus ‘killed his brother’ and then ‘consented to the death of Titus Tatius, who had been elected to share the royal authority with him’ (Discourses, 120). Machiavelli admits that from this example ‘it might be concluded that the citizens, according to the example of their prince, might, from ambition and the desire to rule, destroy those who attempt to oppose their authority’ (Discourses, 120). However,
… this opinion would be correct, if we do not take into consideration the object which Romulus had in view in committing that homicide. But we must assume, as a general rule, that it never or rarely happens that a republic or monarchy is well constituted, or its old institutions entirely reformed, unless it is done by only one individual; it is even necessary that he whose mind has conceived such a constitution should be alone in carrying it into effect. A sagacious legislator of a republic, therefore, whose object is to promote the public good, and not his private interests, and who prefers his country to his own successors, should concentrate all authority in himself; and a wise mind will never censure any one for having employed any extraordinary means for the purpose of establishing a kingdom or constituting a republic.
Here the notorious dictum applies that the ‘end justifies the means’. 8 Machiavelli (Discourses, 120f) concludes:
It is well that, when the act accuses him, the result should excuse him; and when the result is good, as in the case of Romulus, it will always absolve him from blame. For he is to be reprehended who commits violence for the purpose of destroying, and not he who employs it for beneficent purposes.
8 This is the famous translation in the Mentor edition of The Prince (94). The corresponding lines in Detmold’s translation of 1882 are ‘… for the actions of all men, especially those of princes, are judged by the result where there is no other judge’ (49). The latter translation is perhaps less impressive. However, it does clarify that Machiavelli refers to an empirical observation and not to a normative statement.
In the case of Romulus and Rome, history unfolded and the Roman republic evolved. Machiavelli gave what can best be described as an efficiency argument as to why, in the end, the princely government is expected to transform into a republican system if the governmental regime is to be stable: ‘… although one man alone should organize a government, yet it will not endure long if the administration of it remains on the shoulders of a single individual; it is well, then, to confide this to the charge of many, for thus it will be sustained by the many’ (Discourses, 121). Essential for endurance, however, is the glory of a prince who is a nation builder and a lawgiver; and future glory is the carrot that motivates the prince.
As we know from history, and as stated in the Discourses, the transformation of Rome into a republic was not a peaceful event. It took several generations of Etruscan kings before Lucius Junius Brutus ousted King Lucius Tarquinius Superbus from Rome and founded the republic. History says that he was elected to the first consulship, but also that he condemned his own sons to death when they conspired to bring back the Tarquins. However, Machiavelli (Discourses, 121) concludes that Romulus
deserves to be excused for the death of his brother and that of his associate, and that what he had done was for the general good, and not for the gratification of his own ambition, is proved by the fact that he immediately instituted a Senate with which to consult, and according to the opinions of which he might form his resolutions. And on carefully considering the authority which Romulus reserved for himself, we see that all he kept was the command of the army in case of war, and the power of convoking the Senate.
In fact, Machiavelli argues that Romulus had already created the institution which assured the successful installation of the republic (Discourses, 121):
This was seen when Rome became free, after the expulsion of the Tarquins, when there was no other innovation made upon the existing order of things than the substitution of two Consuls, appointed annually, in place of an hereditary king; which proves clearly that all the original institutions of that city were more in conformity with the requirements of a free and civil society than with an absolute and tyrannical government.
It can be conjectured that Machiavelli hoped that a Borgia Italy would finally transform into a republic, had it become reality and matured like Rome. Indeed, there is some evidence that Cesare Borgia was interested in establishing peace and order in the territory under his control:
When he [Cesare Borgia] took the Romagna, it had previously been governed by weak rulers, who had rather despoiled their subjects than governed them, and given them more cause for disunion than for union, so that the province was a prey to robbery, assaults, and every kind of disorder. He, therefore, judged it necessary to give them a good government in order to make them peaceful and obedient to his rule. For this purpose he appointed Messer Remirro de Orco, a cruel and able man, to whom he gave the fullest authority. This man, in a short time, was highly successful, whereupon the duke, not deeming such excessive authority expedient, lest it should become hateful, appointed a civil court of justice in the centre of the province under an excellent president, to which each city appointed its own advocate. And as he knew that the hardness of the past had engendered some amount of hatred, in order to purge the minds of the people and to win them over completely, he resolved to show that if any cruelty had taken place it was not by his orders, but through the harsh disposition of his minister. And having found the opportunity he had him cut in half and placed one morning in the public square at Cesena with a piece of wood and a blood-stained knife by his side. The ferocity of this spectacle caused the people both satisfaction and amazement (Prince, 55 m).
Note that Cesare Borgia used the law and the camouflage of legal procedure to sacrifice his loyal minister in order to please the people and to create a stable social environment. The quote nicely illustrates Machiavelli’s statement that ‘a prince should seem to be merciful, faithful, humane, religious, and upright, and should even be so in reality; but he should have his mind so trained that, when occasion requires it, he may know how to change to the opposite’ (Prince, 59 d). However, Machiavelli concludes: ‘It is not necessary … for a prince to possess all the above-mentioned qualities; but it is essential that he should at least seem to have them. I will even venture to say, that to have and to practise them constantly is pernicious, but to seem to have them is useful’ (Prince, 58f d). With an eye on Machiavelli and on Jean-Paul Sartre’s play ‘Les mains sales’ (‘Dirty Hands’), Michael Walzer wrote: ‘No one succeeds in politics without getting his hands dirty’ (Walzer 1973: 164). 9
9 See Holler (2007b) for an extensive discussion of Machiavelli and the dirty hands principle.
Cesare Borgia, also called ‘the Duke’, was not tempted to be constantly merciful, faithful, humane, religious, and upright. On the contrary. In fear that a successor to his papal father might seek to take away from him what he had gained under his father’s pontificate, he destroyed ‘all who were of blood of those ruling families which he had despoiled, in order to deprive the pope of any opportunity’ (Prince, 56 m). However, it seems that either the Duke misjudged Julius II or, in the end, he could not prevent the Rovere from gaining the papal crown, as he himself was ‘of very weak health’ when his father died. Machiavelli conjectured that in good health Cesare Borgia could have blocked the Rovere from the Pontificate and assured his own power and freedom (Prince, 27 d). But fortuna was not with him: sick as he was, he was imprisoned and brought to Spain.
It has been argued that Machiavelli’s choice of Cesare Borgia as the hero of The Prince was a grave error from the standpoint of his later reputation, as ‘Cesare had committed crimes on his way to power, and it might be added that he had committed other crimes too’ (Gauss 1952: 12f). It seems that Machiavelli had foreseen such a critique; he writes in The Prince (57 m): ‘Reviewing thus all the actions of the Duke, I find nothing to blame, on the contrary I feel bound, as I have done, to hold him up as an example to be imitated by all who by fortune and with the arms of others have risen to power.’
A central hypothesis is that the target of Machiavelli’s political writings was the revival of the Roman republic in sixteenth-century Italy in the form of a united national state that could resist the claims and the power of the vassals and followers of the French and Spanish Crowns and of the German Emperor, who divided Italy like fallen prey. There are straightforward indicators of this agenda in The Prince. At the end of Chapter 26, Machiavelli directly addresses the governing Medici to whom he dedicated his text: ‘It is no marvel that none of the before-mentioned Italians have done that which it is hoped your illustrious house may do’ (Prince, 125 m); and ‘May your illustrious house therefore assume this task with that courage and those hopes which are inspired by a just cause, so that under its banner our fatherland may be raised up …’ (Prince, 107 m).
However, a unification of Italy under the umbrella of a ‘princely’ family is just the first step in the Machiavelli programme. It was meant to be the first stage in an evolutionary process which, in the end, would lead to a more or less stable republican system. The dedication of The Prince to Lorenzo has been interpreted as Machiavelli’s attempt to gain the favour of a powerful Medici ‘in the hope that they might invite him back to public service’ (Gauss 1952: 11). This interpretation seems to be widely accepted and probably has some truth. In the context of his programme, however, the dedication can also be interpreted as an attempt to initiate a second go at creating a united Italy under the rule of the Medici to guarantee peace and order. In a letter to his friend Francesco Guicciardini, Machiavelli suggested the Condottiere Giovanni de’Medici as liberator of Italy. 10
10 Francesco Guicciardini later became the highest official at the papal court and even first commander of the army of the Pope. He remained a friend of Machiavelli’s until the latter died, but often did not support his plans and ideas. See Zorn (1977: xxxviif, lix).
The dynamics of Machiavelli’s programme become evident when we compare the history of Rome as interpreted in the Discourses with the facts about Cesare Borgia as written down in The Prince. In both cases we have an extremely cruel beginning in which the corresponding ‘heroes’ violate widely held norms of humanity. However, whoever has the power should follow the path outlined by Cesare Borgia – and by Romulus. In Machiavelli’s interpretation these murders guaranteed that one (and only one) person would define the common good. It was the will of the prince. If his choices were consistent, then the social choices were consistent as well. The prince might even consider the wellbeing of the people – in order to gain their love and support and assure eternal glory. In the next section we will discuss the idea that Machiavelli’s prince can be identified with an Arrovian dictator. However, it is important to note that for Machiavelli, Cesare Borgia’s cruelties and Romulus’s fratricide were violations of moral norms. He even argues that the violation of moral norms can be a handicap to fame and glory: ‘It cannot be called virtue to kill one’s fellow-citizens, betray one’s friends, be without faith, without pity, and without religion; by these methods one may indeed gain power, but not glory’ (Prince, 60 m). Cesare was well-advised to let Messer Remirro commit the cruelties. However, Romulus was also excused for killing his brother. The period of cruelties and ‘destructive purification’ was meant to be followed, in the case of both Rome and the unified Italy, by peace and order. In the case of Rome, the establishment of law by the prince was a major component of the establishment and success of the republic.
4. Preference Aggregation and Dictatorship
Machiavelli was a republican. However, he was quite aware that the aggregation of individual preferences to form a social preference order does not work if preferences are ‘unconstrained’ and could, for instance, violate Duncan Black’s single-peakedness or a weaker condition. More than 400 years later, Kenneth Arrow (1963) showed that a social preference order which satisfies a few intuitively appealing conditions exists only if it is dictatorial, i.e., only if it is identical with the preferences of an individual i, irrespective of what the preferences of the other members of the society are and how they change. A precondition, of course, is that the preferences of the dictator form a (transitive, reflexive, and complete) ordering. This solution naturally necessitates a strong will and the power of implementation. As we can read in textbooks, Arrow was not concerned with power and will. The implication of his analysis was that it is impossible to guarantee an ideal democratic aggregation of preferences and that in real life we have to look for ways out of this bind.
Obviously, we have to note that no one will go to the barricades if the independence of irrelevant alternatives, one of Arrow’s appealing axioms, is violated and the social ranking of the social states x and y depends on whether there is an alternative social state z on the agenda. Such a problem prevails if society relies on simple majority voting and faces a Condorcet Paradox (i.e. the majority cycle). Starting from this observation, the conclusion is either to drop this independence axiom (or one of the other axioms) or to take a more radical path and reject the ordinal utility project which is at the heart of Arrow’s theory. With cardinal utilities and adequate rules for the interpersonal comparison of utilities we can overcome the problems of preference aggregation as generalized in Arrow’s theorem. For example, in a recent paper, Hillinger (2005: 295) suggests ‘utilitarian voting’, which allows a voter to ‘score each alternative with one of the scores permitted by a given voting scale’. He finds that in ‘ordinal voting’ scores are unjustifiably restricted. This is of course an interesting aspect because one of the axioms of Arrow’s theorem is unrestricted domain. Hillinger’s analysis suggests that there is a (too) strong restriction implicit in the choice of ordinal scales. Why not allow a domain which is cardinal?
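To make the majority cycle, and the cardinal escape from it, concrete, here is a minimal sketch in Python. It is my own illustration, not an example from Hillinger’s paper; the three-voter profile and the score values are assumptions chosen for the demonstration.

```python
# Three voters with the classic Condorcet profile: pairwise majority
# voting cycles, while summed cardinal scores rank the alternatives.
from itertools import combinations

# Ordinal ballots: each tuple ranks the alternatives from best to worst.
ballots = [
    ("x", "y", "z"),  # voter 1: x > y > z
    ("y", "z", "x"),  # voter 2: y > z > x
    ("z", "x", "y"),  # voter 3: z > x > y
]

def majority_prefers(a, b):
    """True if a strict majority of ballots ranks a above b."""
    return sum(r.index(a) < r.index(b) for r in ballots) > len(ballots) / 2

for a, b in combinations("xyz", 2):
    winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
    print(f"{winner} beats {loser}")
# Prints: x beats y, z beats x, y beats z -- a cycle, so simple majority
# voting yields no transitive social ordering for this profile.

# Cardinal scores on a fixed 0-10 scale, consistent with the same
# rankings (hypothetical intensities for voters 1-3):
scores = {"x": [10, 0, 7], "y": [6, 10, 0], "z": [0, 5, 10]}
totals = {alt: sum(vals) for alt, vals in scores.items()}
print(max(totals, key=totals.get), totals)  # x {'x': 17, 'y': 16, 'z': 15}
```

Summed scores are transitive by construction, whereas pairwise majorities need not be; this is the trade-off behind Hillinger’s complaint that ordinal voting discards information.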
In general, however, societies are more explicit in violating the unrestricted domain assumption. Minors, for instance, are not allowed to vote, and in the State of Iowa felon disenfranchisement laws bar almost 35 percent of its African-American population from voting (DeParle 2007: 35). In a recent study, Manza and Uggen (2006: 248–253) observed that by election day in 2004, the number of disenfranchised felons had grown to 5.3 million, with another 600,000 effectively stripped of the vote because they were jailed awaiting trial. Nationally, these individuals made up less than three percent of the voting-age population, but nine percent in Florida, eight percent in Delaware, and seven percent in Alabama, Mississippi, and Virginia. 11
11 ‘Disenfranchised felon is a term that encompasses three groups. Some 27 percent are still behind bars. Others, 34 percent, are on probation or parole. And the larger share, 39 percent, are “ex-felons” whose sentences have been served’ (DeParle 2007: 35).
Strictly speaking, the disenfranchisement does not restrict the domain, but the set of voters. However, one idea of jailing is that people revise their preferences so that they finally become good citizens, i.e. adopt preferences which are consistent with the rules of the society and the preferences of the people who define these rules in accordance with their own preferences. This reasoning seems even more straightforward with respect to minors: forming consistent preferences is at the heart of schooling. Is this the meaning of education and the reason why the public takes an interest in it? Only when minors become grown-ups and, in the USA, can prove that they have had some education can they register as voters. Whether the disenfranchisement policy stabilizes American democracy is questionable, at least in the long run. But, to some extent, it bridges the gorge formalized in Arrow’s theory because it homogenizes the preference profile of the society when it comes to voting.
‘Aristotle must be turning in his grave. The theory of democracy can never be the same (actually, it never was!) since Arrow’, is how Paul Samuelson (1972) commented on Arrow’s work when Arrow was awarded the Nobel Prize in economics. Perhaps one should emphasize the ‘actually, it never was!’, because, as William Riker (1982) argued, democracy is a set of rules which allows the substitution of one governing elite by another – by means of majority voting. In part this is an adequate description of the governmental system of ‘his America’; and in part it is what he suggests as a way around the aggregation problems that hide in what he calls the ‘populist’ conception of democracy.
In the taking-turns model of U.S. democracy and the compromise model of the Roman republic, which was governed by the aristocrats and plebeians, representation is limited and the set of relevant preferences appears to be very constrained. But both systems are quite efficient if we take the international success of the regime as a measuring rod. Considering world power, the two regimes have often been compared to each other. However, both regimes were installed in a pre-existing order: the one was colonial, while the other traces back to the will of a principe nuovo (i.e., a tyrant hero). Therefore, neither the U.S. nor the Roman republic model can tell us how order emerges from a world of chaos.
In two volumes, Binmore (1994, 1998) analyzes the conditions for the evolution of social norms, more specifically, of justice. It has to be said that for Binmore justice and moral behaviour are only a means of co-ordination: ‘Just as it is actually within our power to move a bishop like a knight when playing Chess, so we can steal, defraud, break promises, tell lies, jump lines, talk too much, or eat peas with our knives when playing a morality game. But rational folk choose not to cheat for much the same reason that they obey traffic signals’ (Binmore 1998: 6). Binmore’s game-theoretic analysis makes clear that a homogeneous society is just an evolutionary possibility and not a necessary consequence. There is hope, but most likely more than one set of norms will develop, and conflict seems unavoidable – even if it materializes only as coordination failure (see the sketch below).
Machiavelli proposes the principe nuovo, who combines power and will to solve the coordination problem in accordance with his preferences. To some extent, these preferences can be summarized as capturing power and defending it against competitors. However, there is also the expected glory of the founder of a state, which seems to add to the power motivation. To qualify for this glory the principe nuovo has to stabilize what he has created. This is where social efficiency and the republic enter the scene. In the short run, the prince can gain the love of the people by taking their preferences into account. Here the urge for efficiency could match with princely consistency. However, the prince might be forced to ignore some preferences and thus restrict the domain to solve the aggregation problem to his satisfaction – or to assure generally shared standards, e.g. of stability, and leave the pursuit of happiness to the individual citizens. However, in Chapter 21 of The Prince, Machiavelli suggests a more active part for the prince. He should
… encourage his citizens quietly to pursue their vocations, whether of commerce, agriculture, or any other human industry; so that the one may not abstain from embellishing his possessions for fear of their being taken from him, nor the other from opening new sources of commerce for fear of taxes. But the prince should provide rewards for those who are willing to do these things, and for all who strive to enlarge his city or state (Prince, 76 d).
Again, efficiency could be a by-product of consistency and a policy that aims for stability.
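Binmore’s coordination reading of norms, referred to above, can be made concrete with a toy game. The following sketch is my own construction under simple assumptions (two candidate norms, symmetric players, mis-coordination pays nothing), not a model taken from his books: it shows that two incompatible norms can both be self-enforcing, so which one a society settles on is an evolutionary accident, and a population split between them faces exactly the coordination failure described above.

```python
# A 2x2 coordination game: two "norms", A and B. payoffs[(row, col)]
# gives (row player's payoff, column player's payoff).
payoffs = {
    ("A", "A"): (2, 2),  # both follow norm A: efficient coordination
    ("B", "B"): (1, 1),  # both follow norm B: coordination, less efficient
    ("A", "B"): (0, 0),  # mixed norms: coordination failure
    ("B", "A"): (0, 0),
}
strategies = ("A", "B")

def is_nash(row, col):
    """A profile is a Nash equilibrium if both choices are best responses."""
    r, c = payoffs[(row, col)]
    row_best = all(payoffs[(alt, col)][0] <= r for alt in strategies)
    col_best = all(payoffs[(row, alt)][1] <= c for alt in strategies)
    return row_best and col_best

print([p for p in payoffs if is_nash(*p)])
# [('A', 'A'), ('B', 'B')]: both norms are stable once established, even
# though one is socially inferior -- homogeneity is possible, not
# guaranteed, and mixed populations mis-coordinate.
```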
Machiavelli repeatedly warns of excessive taxes, especially if high taxes are the consequence of excessive spending in order to appear ‘liberal’. If the prince desires ‘the reputation of being liberal’, he must not stop at any degree of sumptuousness; so that a prince will in this way generally consume his entire substance, and may in the end, if he wishes to keep up his reputation for liberality, be obliged to subject his people to extraordinary burdens, and resort to taxation, and employ all sorts of measures that will enable him to procure money. This will soon make him odious with his people; and when he becomes poor, he will be contemned by everybody; so that having by his prodigality injured many and benefited few, he will be the first to suffer every inconvenience, and be exposed to every danger (Prince, 52 d).
The love of the people is a valuable protection against internal and external enemies. In the long run, the prince may even give power to the people and introduce a republican order of the kind which Machiavelli finds in the history of Romulus. The motivation is the glory which lies in the continuation of a system which the prince created. The problem with the princely preferences is that they are strongly focused on power and glory; this is very different from deriving them from a social preference ordering. Consistency is always in danger if the decision maker is only weakly motivated to order alternatives he is not very much interested in. An additional problem is that for the prince there will always be a trade-off between power and glory, and there will be different answers to this trade-off depending on his experiences and expectations. Obviously, the set of social states which is relevant for the prince is different from the set of social states which is relevant to the people. The preferences of the citizens are constraints on princely action; they become relevant for him if he wants to gain their support or has to be afraid of their resistance. The way Cesare Borgia treated the people of the Romagna is illustrative. Thus we can conclude that the problem of the prince is not about aggregating preferences to solve a social choice problem, but about how to make use of the people to assure power and gain glory. Inconsistent preferences, cyclical majorities, etc. are often favourable preconditions for exploiting the potential of the people. How the prince can make use of them is a public choice problem; the sketch below illustrates one channel.
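One such channel can be sketched with the cyclic profile from the earlier example. The following illustration is my own, not the chapter’s: a prince who controls the agenda of sequential pairwise votes can steer the ‘social choice’ to whichever alternative he prefers.

```python
# Agenda control under a majority cycle: the order of pairwise votes
# determines the final winner.
ballots = [("x", "y", "z"), ("y", "z", "x"), ("z", "x", "y")]

def majority_winner(a, b):
    """Winner of a pairwise majority vote between a and b."""
    a_votes = sum(r.index(a) < r.index(b) for r in ballots)
    return a if a_votes > len(ballots) / 2 else b

def run_agenda(agenda):
    """The first two alternatives are voted on; the winner meets the next."""
    winner = agenda[0]
    for challenger in agenda[1:]:
        winner = majority_winner(winner, challenger)
    return winner

for agenda in [("x", "y", "z"), ("y", "z", "x"), ("z", "x", "y")]:
    print(agenda, "->", run_agenda(agenda))
# ('x','y','z') -> z, ('y','z','x') -> x, ('z','x','y') -> y: by choosing
# the agenda, the 'prince' can obtain any alternative as the outcome.
```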
5. Consistency versus Efficiency
Machiavelli (Prince, 121 d) proposes that the ‘lawgiver should … be sufficiently wise and virtuous not to leave this authority which he has assumed either to his heirs or to any one else; for mankind, being more prone to evil than to good, his successor might employ for evil purposes the power which he had used only for good ends.’ But there is no guarantee that the founding hero is wise and strong enough to follow this advice.
On the other hand, power mutates into glory if the consistency and stability of the princely era is complemented by the efficiency of the republic in a later period. Machiavelli gives a series of arguments why ‘the people are wiser and more constant than princes’ (Discourses, 149) if controlled by law:
For a prince who knows no other control but his own will is like a madman, and a people that can do as it pleases will hardly be wise. If now we compare a prince who is controlled by laws, and a people that is untrammelled by them, we shall find more virtue in the people than in the prince; and if we compare them when both are freed from such control, we shall see that the people are guilty of fewer excesses than the prince, and that the errors of the people are of less importance, and therefore more easily remedied. For a licentious and mutinous people may easily be brought back to good conduct by the influence and persuasion of a good man, but an evil-minded prince is not amenable to such influences, and therefore there is no other remedy against him but cold steel. We may judge then from this of the relative defects of the one and the other; if words suffice to correct those of the people, whilst those of the prince can only be remedied by violence, no one can fail to see that where the greater remedy is required, there also the defects must be greater. The follies which a people commits at the moment of its greatest license are not what is most to be feared; it is not the immediate evil that may result from them that inspires apprehension, but the fact that such general confusion might afford the opportunity for a tyrant to seize the government. But with evil-disposed princes the contrary is the case; it is the immediate present that causes fear, and there is hope only in the future; for men will persuade themselves that the termination of his wicked life may give them a chance of liberty. Thus we see the difference between the one and the other to be, that the one touches the present and the other the future. The excesses of the people are directed against those whom they suspect of interfering with the public good; whilst those of princes are against apprehended interference with their individual interests.
As we have argued above, the individual interests of a prince, the striving for power and glory, can coincide with the public good. Yet, in the hands of the prince the public good is an instrument and not an aim. This is not the case in a republic, if people participate in public decision-making. Once again, Machiavelli (Discourses, 149) turns to Rome to demonstrate the good sense of the people:
… so long as that republic remained uncorrupted, [the people] neither obeyed basely nor ruled insolently, but rather held its rank honorably, supporting the laws and their magistrates. And when the unrighteous ambition of some noble made it necessary for them to rise up in self-defence, they did so, as in the case of Manlius, the Decemvirs, and others who attempted to oppress them; and so when the public good required them to obey the Dictators and Consuls, they promptly yielded obedience.
The participation of the common people in the governance of the Roman republic was necessary to find compromises which could serve as a well-balanced proxy of the common good, and to implement it. Machiavelli writes that, ‘… under their republican constitution,’ the Romans
had one assembly controlled by the nobility, another by the common people, with the consent of each being required for any proposal to become law. Each group admittedly tended to produce proposals designed merely to further its own interests. But each was prevented by the other from imposing its own interests as laws. The result was that only such proposals as favoured no faction could ever hope to succeed. The laws relating to the constitution thus served to ensure that the common good was promoted at all times (Skinner 1984: 246).
It is obvious from Machiavelli’s political writings that he believed republics to be the most stable of political institutions. The costs of taking a republic by force and establishing a princely power are likely to be prohibitive compared to the capture of power in a principality: ‘… in republics there is greater life, greater hatred, and more desire for vengeance; they do not and cannot cast aside the memory of their ancient liberty, so that the surest way is either to lay them waste or reside in them’ (Prince, 47 m). This goes along with the theoretical observation that democracies have a larger capacity for making binding commitments. Huck and Konrad (2005: 577), for instance, demonstrated that if the democratic decision body becomes sufficiently large, and voters cannot coordinate, democracies are able to ‘commit on ex post irrational outcomes’ – which, however, are efficient from a social point of view. The larger the number of decision makers, the stronger the inclination to let others commit the ‘crimes’. The consequence is that there will be no majority to implement an individually rational result which would violate social welfare. The social good is the result of a coordination failure (a back-of-the-envelope illustration follows below).
Moreover, efficiency results from the fact that in a successful republic citizens are ranked in accordance with their potential to contribute to the common welfare, and duties and positions are allocated correspondingly. Machiavelli (Discourses, 221) reports that
… the Roman republic, after the plebeians became entitled to the consulate, admitted all its citizens to this dignity without distinction of age or birth. In truth, age never formed a necessary qualification for public office; merit was the only consideration, whether found in young or old men. … As regards birth, that point was conceded from necessity, and the same necessity that existed in Rome will be felt in every republic that aims to achieve the same success as Rome; for men cannot be made to bear labor and privations without the inducement of a corresponding reward, nor can they be deprived of such hope of reward without danger.
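The commitment logic attributed above to Huck and Konrad (2005) can be illustrated with a back-of-the-envelope calculation. This is my own textbook-style sketch (independent voters, each voting for the ‘individually rational’ option with probability one half), not their model: it only shows how fast the probability of casting the decisive vote fades as the decision body grows, which is what blunts each member’s ex post incentive to deviate.

```python
# Probability that one vote is decisive, i.e. that the other voters
# split exactly evenly, under independent fifty-fifty voting.
from math import comb

def prob_pivotal(n_others, p=0.5):
    """Probability of an exact tie among n_others other voters."""
    if n_others % 2:  # an odd number of others cannot tie
        return 0.0
    k = n_others // 2
    return comb(n_others, k) * p**k * (1 - p)**(n_others - k)

for n in (2, 10, 100, 1000):
    print(n, round(prob_pivotal(n), 4))
# 2 0.5 | 10 0.2461 | 100 0.0796 | 1000 0.0252: in a large body almost
# no one expects to be the member who actually tips the decision.
```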
Having read Machiavelli’s praise of the people, one might wonder why both the republic and the people so often suffer from such a bad reputation. If we follow Machiavelli, it is because they seem to be unable, in a situation of disorder, to create good laws and to implement them. But even under the control of the law, the reputation of the republic and the people remains highly challenged, despite the above arguments in their favour when compared to monarchy or dictatorship. Machiavelli (Discourses, 149) gives a very thoughtful explanation for this ‘paradox’, which seems even more valid today than ever:
The general prejudice against the people results from the fact that everybody can freely and fearlessly speak ill of them in mass, even whilst they are at the height of their power; but a prince can only be spoken of with the greatest circumspection and apprehension.
6. Politics and Politicians
Machiavelli’s core advice to politicians can be summarized by the following well-known recipe: ‘Cruelties should be committed all at once, as in that way each separate one is less felt, and gives less offence; benefits, on the other hand, should be conferred one at a time, for in that way they will be more appreciated’ (Prince, 32 d). More specifically, he (Prince, 31f d) explains that
… we may call cruelty well applied (if indeed we may call that well which in itself is evil) when it is committed once from necessity for self-protection, and afterwards not persisted in, but converted as far as possible to the public good. Ill-applied cruelties are those which, though at first but few, yet increase with time rather than cease altogether. … Whence it is to be noted that in taking possession of a state the conqueror should well reflect as to the harsh measures that may be necessary, and then execute them at a single blow, so as not to be obliged to renew them every day; and by thus not repeating them, to assure himself of the support of the inhabitants, and win them over to himself by benefits bestowed. And he who acts otherwise, either from timidity or from being badly advised, will be obliged ever to be sword in hand, and will never be able to rely upon his subjects, who in turn will not be able to rely upon him, because of the constant fresh wrongs committed by him.
In a recent article based on an interview, Václav Havel, former president of Czechoslovakia (1990–1992) and then of the Czech Republic (1993–2003), gives the following account of a meeting with Václav Klaus, his successor. After the June 1990 election, the Civic Forum asked its president, Václav Havel, to tell Václav Klaus that ‘he wasn’t going to be minister of finance, but rather chairman of the Czechoslovak State Bank’. Havel (2007: 19) reports that he ‘failed shamefully’:
When I informed Klaus of this, he shot back that it was out of the question, that the entire world knew him as the Czechoslovak minister of finance, that he could hold no other position, and that his departure from the government would be catastrophic. And rather than telling him that that was the decision of the winning party, and if he didn’t want to head the state bank, then he could do whatever he pleased, I politely backed down and said something like ‘All right, then.’ The Civic Forum was very upset with me for not doing the job, and Klaus’s antipathy toward me grew into hatred. I had behaved like a typical bad politician: I hadn’t done what I’d promised to do and in the process managed to make everyone mad at me.
It should be added that, in spite of breaking substantial Machiavellian rules of successful political conduct, Václav Havel was elected twice to serve as Czech president. When the interviewer asked Havel whether he thinks ‘that if Václav Klaus had not become the first minister of finance right after the revolution, and if he had not continued in that post after the first elections, he would really have stayed out of politics’, Havel (2007: 19) responded that he does not think so. ‘On the contrary’, he has ‘the feeling that sooner or later, regardless of circumstances,’ Klaus ‘would have risen to the top, and perhaps even become the head of the state’.
As managing editor of the European Journal of Political Economy I was responsible for two book reviews co-authored by Václav Klaus. 12 Contrary to Havel’s experience, my experience (then and now) with Klaus was more pleasant. Perhaps this was due to the fact that I was not, and never will be, in a position of political power. This closes my exploitation of The Prince.
12 See volumes 5 (1989) and 6 (1990).
References
Arrow, K.J. (1963) Social Choice and Individual Values (2nd edn), John Wiley.
Binmore, K. (1994) Playing Fair, Game Theory and the Social Contract, vol. I, MIT Press.
Binmore, K. (1998) Just Playing, Game Theory and the Social Contract, vol. II, MIT Press.
DeParle, J. (2007) The American Prison Nightmare, New York Review of Books 54 (April 12): 33–36.
Gauss, C. (1952) Introduction to the Mentor Edition of Niccolò Machiavelli, The Prince, Mentor Books.
Havel, V. (2007) Václav vs. Václav, New York Review of Books 54 (May 10): 18–21.
Hillinger, C. (2005) The Case for Utilitarian Voting, Homo Oeconomicus 22: 295–321.
Holler, M.J. (2007a) Niccolò Machiavelli on Power, in L. Donskis (ed.) Niccolò Machiavelli: History, Power, and Virtue, Versus Aureus.
Holler, M.J. (2007b) The Machiavelli Program and the Dirty Hands Problem, in P. Baake and R. Borck (eds) Contributions in Honor of Charles Blankart, Springer, 39–62.
Huck, S. and Konrad, K.A. (2005) Moral Cost, Commitment, and Committee Size, Journal of Institutional and Theoretical Economics 161: 575–588.
Kersting, W. (2006) Niccolò Machiavelli (3rd edn), C.H. Beck.
Machiavelli, N. (1882) The Prince, in The Historical, Political, and Diplomatic Writings of Niccolò Machiavelli, transl. by Christian E. Detmold, vols i–iv, Osgood and Co. (quoted as Prince d).
Machiavelli, N. (1882) Discourses on the First Ten Books of Titus Livius, in The Historical, Political, and Diplomatic Writings of Niccolò Machiavelli, transl. by Christian E. Detmold, vols i–iv, Osgood and Co. (quoted as Discourses).
Machiavelli, N. (1882) History of Florence, in The Historical, Political, and Diplomatic Writings of Niccolò Machiavelli, transl. by Christian E. Detmold, vols i–iv, Osgood and Co. (quoted as History).
Machiavelli, N. (1952) The Prince, Mentor Books (quoted as Prince m).
Machiavelli, N. (1977/1531) Discorsi. Gedanken über Politik und Staatsauffassung, transl. by Rudolf Zorn (2nd edn), Alfred Kroener Verlag.
Manza, J. and Uggen, C. (2006) Locked Out: Felon Disenfranchisement and American Democracy, Oxford University Press.
Michaels, R. and Jansen, N. (2006) Private Law Beyond the State? Europeanization, Globalization, Privatization, American Journal of Comparative Law 54: 843–890.
Riker, W. (1982) Liberalism Against Populism, Freeman.
Samuelson, P. (1972) The 1972 Nobel Prize for Economic Science, Science 178: 487–489.
Skinner, Q. (1984) The Paradoxes of Political Liberty, The Tanner Lectures on Human Values, Harvard University.
Walzer, M. (1973) Political Action: The Problem of Dirty Hands, Philosophy and Public Affairs 2: 160–180.