Environmental Economics, Experimental Methods
The 1970s and 1980s saw environmental economists increasingly turn to experimental methods in an attempt to discover new ways of protecting people and nature without wasting scarce resources in the process. Today the experimental method is commonly applied to environmental economic questions; this book brings together 63 leading researchers in the area and their latest work exploring the behavioral underpinnings of experimental environmental economics. The chapters in this volume will be illuminating for both researchers and practitioners, specifically in relation to questions of environmental policy and how a proposed change in incentives or benefits might affect behavior and, consequently, the likely success of a policy. This book argues that experimental evidence complements theoretical insights, field data and simulation models to improve our understanding of the underlying assumptions and incentives that drive behavioral responses to policy. This volume covers topical areas of interest such as tradable permit markets, common property and public goods, regulation and compliance, and valuation and preferences. Its critical advantage is that each part concludes with discussion points written by environmental economists who do not use experimental methods. This book will interest students and researchers engaged with environmental economics, both experimental and non-experimental, and offers a unique in-road into this field of study. Environmental policy makers will also gain insight into behavior and decision making under alternative institutional and policy designs.

Todd L. Cherry is the Harlan E. Boyles Professor in the Department of Economics at Appalachian State University, where he is also a research fellow at the Appalachian Energy Center. Stephan Kroll is Professor at the Department of Economics at California State University in Sacramento. Jason F. Shogren is the Stroock Distinguished Professor of Natural Resource Conservation and Management, and Professor of Economics at the University of Wyoming.
Routledge explorations in environmental economics
Edited by Nick Hanley, University of Stirling, UK

1 Greenhouse Economics: Value and ethics
Clive L. Spash

2 Oil Wealth and the Fate of Tropical Rainforests
Sven Wunder

3 The Economics of Climate Change
Edited by Anthony D. Owen and Nick Hanley

4 Alternatives for Environmental Valuation
Edited by Michael Getzner, Clive Spash and Sigrid Stagl

5 Environmental Sustainability: A consumption approach
Raghbendra Jha and K.V. Bhanu Murthy

6 Cost-Effective Control of Urban Smog: The significance of the Chicago cap-and-trade approach
Richard F. Kosobud, Houston H. Stokes, Carol D. Tallarico and Brian L. Scott

7 Ecological Economics and Industrial Ecology
Jakub Kronenberg

8 Environmental Economics, Experimental Methods
Edited by Todd L. Cherry, Stephan Kroll and Jason F. Shogren
Environmental Economics, Experimental Methods
Edited by Todd L. Cherry, Stephan Kroll, and Jason F. Shogren
First published 2008 by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
Simultaneously published in the USA and Canada by Routledge, 270 Madison Ave, New York, NY 10016
This edition published in the Taylor & Francis e-Library, 2007.
"To purchase your own copy of this or any of Taylor & Francis or Routledge's collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk."
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2008 Selection and editorial matter, Todd L. Cherry, Stephan Kroll and Jason F. Shogren; individual chapters, the contributors
All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.
British Library Cataloguing in Publication Data: a catalogue record for this book is available from the British Library.
Library of Congress Cataloguing in Publication Data: a catalog record for this book has been requested.
ISBN 0-203-93536-5 Master e-book ISBN
ISBN10: 0-415-77072-6 (hbk)
ISBN10: 0-203-93536-5 (ebk)
ISBN13: 978-0-415-77072-9 (hbk)
ISBN13: 978-0-203-93536-1 (ebk)
Contents

List of figures
List of tables
List of contributors

Foreword
VERNON L. SMITH

Introduction
TODD L. CHERRY, STEPHAN KROLL, AND JASON F. SHOGREN

PART I
Tradable permit markets

1 Baseline-and-credit emission permit trading: experimental evidence under variable output capacity
NEIL J. BUCKLEY, STUART MESTELMAN, AND R. ANDREW MULLER

2 A laboratory analysis of industry consolidation and diffusion under tradable fishing allowance management
CHRISTOPHER M. ANDERSON, MATTHEW A. FREEMAN, AND JON G. SUTINEN

3 Caveat emptor Kyoto: comparing buyer and seller liability in carbon emission trading
ROBERT GODBY AND JASON F. SHOGREN

4 A test bed experiment for water and salinity rights trading in irrigation regions of the Murray Darling Basin, Australia
CHARLOTTE DUKE, LATA GANGADHARAN, AND TIMOTHY N. CASON

5 Aligning policy and real world settings: an experimental economics approach to designing and testing a cap-and-trade salinity credit policy
JOHN R. WARD, JEFFERY CONNOR, AND JOHN TISDELL

6 Discussion: tradable permit markets
DALLAS BURTRAW AND DAN SHAWHAN

PART II
Common property and public goods

7 Communication and the extraction of natural renewable resources with threshold externalities
C. MÓNICA CAPRA AND TOMOMI TANAKA

8 Unilateral emissions abatement: an experiment
BODO STURM AND JOACHIM WEIMANN

9 Voluntary contributions with multiple public goods
TODD L. CHERRY AND DAVID L. DICKINSON

10 Can public goods experiments inform policy? Interpreting results in the presence of confused subjects
STEPHEN J. COTTEN, PAUL J. FERRARO, AND CHRISTIAN A. VOSSLER

11 Spies and swords: behavior in environments with costly monitoring and sanctioning
ROB MOIR

12 Discussion: common property and public goods
CATHERINE L. KLING

PART III
Regulation and compliance

13 Managerial incentives for compliance with environmental information disclosure programs
MARY F. EVANS, SCOTT M. GILPATRIC, MICHAEL MCKEE, AND CHRISTIAN A. VOSSLER

14 An investigation of voluntary discovery and disclosure of environmental violations using laboratory experiments
JAMES J. MURPHY AND JOHN K. STRANLUND

15 Congestion pricing and welfare: an entry experiment
LISA R. ANDERSON, CHARLES A. HOLT, AND DAVID REILEY

16 Social preferences in the face of regulatory change
J. GREGORY GEORGE, LAURIE T. JOHNSON, AND E. ELISABET RUTSTRÖM

17 The effects of recommended play on compliance with ambient pollution instruments
ROBERT J. OXOBY AND JOHN SPRAGGON

18 Discussion: regulation and compliance
KATHLEEN SEGERSON

PART IV
Valuation and preferences

19 Preference reversal asymmetries in a static choice setting
TIMOTHY HAAB AND BRIAN ROE

20 Measuring preferences for genetically modified food products
CHARLES NOUSSAIR, STEPHANE ROBIN, AND BERNARD RUFFIEUX

21 An experimental investigation of choice under "hard" uncertainty
CALVIN BLACKWELL, THERESE GRIJALVA, AND ROBERT P. BERRENS

22 Rationality spillovers in Yellowstone
CHAD SETTLE, TODD L. CHERRY, AND JASON F. SHOGREN

23 Wind hazard risk perception: an experimental test
BRADLEY T. EWING, JAMIE B. KRUSE, AND MARK A. THOMPSON

24 Consequentiality and demand revelation in double referenda
KATHERINE S. CARSON, SUSAN M. CHILTON, AND W. GEORGE HUTCHINSON

25 Investigating the characteristics of stated preferences for reducing the impacts of air pollution: a contingent valuation experiment
IAN J. BATEMAN, MICHAEL P. CAMERON, AND ANTREAS TSOUMAS

26 Forecasting hypothetical bias: a tale of two calibrations
F. BAILEY NORWOOD, JAYSON L. LUSK, AND TRACY BOYER

27 Discussion: valuation and preferences
JOHN C. WHITEHEAD

Index
Figures

1.1 Firm cost curves
1.2 Sequence of events in a typical period
1.3 Cap-and-trade equilibrium
1.4 Baseline-and-credit equilibrium
1.5 Capacity
1.6 Output volume
1.7 Aggregate emissions
1.8 Efficiency
1.9 Permit trading prices
1.10 Aggregate permit inventory
2.1 Profit functions for operators
2.2 Diffusion treatment prices with an initial lease period
2.3 Consolidation treatment prices with an initial lease period
2.4 Average efficiency in the diffusion treatment
2.5 Average efficiency in the consolidation treatment
2.6 Percentage of market shares held by large and medium–large operators
3.1 Session procedure
3.2 Efficiency and emission outcomes by treatment
3.3 Mean aggregate buyer production by treatment
3.4 Mean aggregate seller production by treatment
3.5 Mean permit prices by treatment
3.6 Mean trades by treatment
3.7 Efficiency and emission outcomes by treatment
4.1 The Sunraysia irrigation districts and salinity impact zones
4.2 Market demand and supply for water
4.3 Transaction price in the water market, treatment 1
4.4 Transaction price in the water market, treatment 2
4.5 Transaction price in the water market, treatment 3
4.6 Transaction quantity in the water market, treatment 1
4.7 Transaction quantity in the water market, treatment 2
4.8 Transaction quantity in the water market, treatment 3
4.9 Transaction price in the salt market, treatment 2
4.10 Transaction price in the salt market, treatment 3
4.11 Transaction quantity in the salt market, treatment 2
4.12 Transaction quantity in the salt market, treatment 3
5.1 Schematic of the hydrogeology of irrigation water quality affected by variable upper catchment salt loads
5.2 Observed aggregate recharge in the discriminant and uniform price tender treatments
5.3 Observed and predicted bids for the uniform tender treatment
5.4 Observed and predicted bids for the discriminant tender treatment
5.5 Aggregate farm income, including gains from trade, observed in the open and closed call market treatments
5.6 Recharge units traded in the 70 percent reduction, uniform price, social payment and communication treatments
7.1 Production functions
7.2 Average resource amount
7.3 Resource stock and number of subjects with relevant messages in each period and horizon
7.4 Resource stock and number of subjects with relevant messages in each period and horizon
7.5 Resource stock and number of subjects with relevant messages in each period and horizon
8.1 Abatement per period
8.2 Scatterplots for seq-treatments
8.3 Abatement over periods
8.4 Alpha
8.5 Individual behavior of country 1 and j
8.6 Profit per period
8.7 Classification of 36 groups in the sequential treatments
9.1 MPCR by group account for multiple heterogeneous treatment
9.2 Total contributions to group accounts by treatment
9.3 Group contributions across homogeneous competing group accounts
9.4 Group contribution across heterogeneous competing group accounts
10.1 GHL application, comparison of all-human and virtual-player contributions
10.2 Ferraro and Vossler (2005) experiment, mean contributions
11.1 Aggregate CPR appropriations by treatment
11.2 Gross efficiency gain over Nash by treatment
11.3 Net efficiency gain over Nash by treatment
11.4 Group monitoring levels by treatment
15.1 Benefits and costs of the risky route
15.2 An entry game session with 60 rounds (me070804)
15.3 Predicted and observed distributions of entry outcomes for a session with 60 rounds (me070804)
15.4 Entry game with $2 entry fee (me062904)
15.5 Entry game with information about other entrants (me071504)
15.6 Entry game with voting on entry fee (me063004)
16.1 Distributions of bids for the alternative solutions
17.1 Mean group totals by treatment and period, tax/subsidy instrument
17.2 Mean group totals by treatment and period, tax instrument
17.3 Mean efficiency by treatment and period, tax/subsidy instrument
17.4 Mean efficiency by treatment and period, tax instrument
17.5 Mean decision by subject type and period, tax/subsidy instrument
17.6 Distributions of individual decisions, by treatment, tax/subsidy
17.7 Distributions of individual decisions, by treatment, tax
19.1 Between-subject comparison: experiment 1 post-treatment preferences for tasks (by treatment)
20.1 Average bids for the four biscuits in each period of GMO phase, experiment 1
20.2 Average bids for the two identical chocolate bars in periods 1–3, experiment 2
21.1 Basic decision tree
21.2 Histogram of participant criteria selection for scenarios 1–3
22.1 Preference reversal rates
23.1 Subject predictions on failure by shingle loss and building breach (modular and manufactured test specimens)
23.2 Certainty equivalents for incorrect and correct answers
24.1 Schematic of the design of the double referendum experiments
Tables

1.1 Cost parameters
1.2 Variable capacity predictions
1.3 Equilibrium surplus efficiency
1.4 Mean values over periods 6 to 9 by treatment
1.5 Mean efficiency over periods 6 to 9
2.1 Market share holdings for operators
2.2 First-order autoregressive model of average prices in asset rounds 5 and 6
2.3 Two-way random effects regression of market share consolidation
3.1 Experimental design
3.2 Experiment conditions and producer costs
3.3 Experiment results by treatment
3.4 Buyer mean production choices (periods 6–8)
3.5 Seller mean production choices (periods 6–8)
3.6 Session mean price results (periods 6–8)
3.7 Estimated regression results (p-values in parentheses)
3.8 Experiment results by treatment: additional sessions
3.9 Overproduction-to-total sales ratio
4.1 Parameter ranges from Sunraysia
4.2 Private redemption and cost values
4.3 Experimental design
4.4 Model equilibrium predictions
4.5 Random effects estimation models for average transaction price in the water market
5.1 Decision set, farm income, crop mix and recharge of two of 12 experimental farms
5.2 ANOVA of discriminant and uniform price tender auction treatments
5.3 ANOVA of closed and open call market treatments
5.4 ANOVA of 70 percent reduction, uniform price, social payment and communication tender treatments
7.1 Parameters of the model
7.2 Number of periods per horizon
7.3 Random effects GLS estimation of the effects of communication on the levels of resource stock
8.1 Cost and benefit functions
8.2 Experimental treatments
8.3 Summary of parameters, abatements, profits, and payments
8.4 Regression analysis
8.5 Efficiency index
9.1 Experimental design
9.2 Individual contributions: two-way fixed effect estimates
10.1 GHL application, all-human treatment results
10.2 GHL application, virtual-player treatment results
10.3 GHL application, estimated logit equilibrium models
10.4 Ferraro et al. (2003) VCM experiment, mean contributions
11.1 Group contribution predictions
11.2 Summary data
11.3 Within treatment comparisons
11.4 Across treatment comparisons
13.1 Potential cheating and non-compliance cases
13.2 Design parameters by treatment
13.3 Observed cheating in experiments
13.4 Probit model results
14.1 Experimental design
14.2 Sequence of treatments using a Latin Square
14.3 Mean violation probabilities, expected numbers of violations, and expected numbers of enforcement actions
15.1 Payoff for the risky route
15.2 Variances of entry rates in the first ten rounds, by view treatment
15.a.1 Sessions, treatments, and data averages
16.1 Per period activity table for stage one
16.2a Typical distribution options in stage two
16.2b Selected distributions for one-stage experiment
16.3a Characteristics of subject pool from each experiment: treatment characterization (N = 168), stage 1 and stage 2 results
16.3b Characteristics of subject pool from each experiment: demographic questionnaire responses
16.4 Regression results
17.1 Mean aggregate decision numbers by treatment
17.2 Mean aggregate decision numbers, under the tax/subsidy by treatment
17.3 Mean aggregate decision numbers, under the tax by treatment
19.1 Anchor treatments for experiment 1
19.2 Post-treatment preferences by treatment
19.3 Probit results on within-subject preference reversals, experiment 1
19.4 Probit results on within-subject preference reversals, experiment 2
20.1 Sequence of events in GMO phase of an experimental session, experiment 1
20.2 Deviations of bids from valuation, induced value phase of both experiments
21.1 Regret analysis
21.2 Outcomes for treatment 1
21.3 Outcomes for treatment 2
21.4 Participants' selections for scenarios 1–3
21.5 Variable definitions and descriptive statistics
21.6 Multinomial logit model 1
21.7 Multinomial logit model 2
22.1 Familiarity of participants to the lake trout introduction
22.2 Participants' perceptions of the seriousness of the lake trout introduction
22.3 Participants' preference for fish
22.4 Percentage of participants affected by seeing attractions of the park
22.5 Logit regression results for preference reversals
23.1 Descriptive statistics certainty equivalents and accuracy
24.1 Predicted and observed vote distributions – inconsequential double referendum
24.2 Demand revealing predictions and observed vote distributions – consequential double referendum treatments
24.3 Chi-squared p-values for differences in vote distributions in inconsequential double referendum and consequential double referendum treatments, by subject type
24.4 Levels and rates of non-demand revealing voting
24.5 Chi-squared p-values for tests of differences between rates of non-demand revelation in the inconsequential double referendum and consequential double referendum treatments
24.6 Chi-squared p-values for tests of differences between vote distributions, conditioned on first vote outcome
25.1 Experimental design and subsample structure
25.2 Socio-economic and demographic profile of subsamples
25.3 Descriptive WTP statistics by subsample and scheme
25.4 Significance of differences in WTP values for schemes
25.5 Mean and median WTP (£) for three air pollution impact reduction schemes, by five treatments
25.6 Treatment effects
25.7 Comparing stated WTP for Scheme A with values calculated from stepwise first responses for Scheme H and Scheme P
26.1 Descriptive statistics of experiment
26.2 Relationship between certainty question, hypothetical bid residuals, and hypothetical bias
26.3 Stochastic frontier estimation
26.a.1 Written answers to question
Contributors
Christopher M. Anderson, University of Rhode Island, Rhode Island, USA.
Lisa R. Anderson, College of William and Mary, Virginia, USA.
Ian J. Bateman, University of East Anglia, Norwich, UK.
Robert P. Berrens, University of New Mexico, New Mexico, USA.
Calvin Blackwell, College of Charleston, South Carolina, USA.
Tracy Boyer, Oklahoma State University, Oklahoma, USA.
Neil J. Buckley, York University, Toronto, Canada.
Dallas Burtraw, Resources for the Future, Washington, DC, USA.
Michael P. Cameron, University of Waikato, Hamilton, New Zealand.
C. Mónica Capra, Emory University, Georgia, USA.
Katherine S. Carson, United States Air Force Academy, Colorado, USA.
Timothy N. Cason, Purdue University, Indiana, USA.
Todd L. Cherry, Appalachian State University, North Carolina, USA.
Susan M. Chilton, University of Newcastle upon Tyne, Newcastle upon Tyne, UK.
Jeffery Connor, CSIRO, Policy and Economic Research Unit, Australia.
Stephen J. Cotten, University of Tennessee, Tennessee, USA.
David L. Dickinson, Appalachian State University, North Carolina, USA.
Charlotte Duke, University of Melbourne, Victoria, Australia.
Mary F. Evans, University of Tennessee, Tennessee, USA.
Bradley T. Ewing, Texas Tech University, Texas, USA.
Paul J. Ferraro, Georgia State University, Georgia, USA.
Matthew A. Freeman, University of Rhode Island, Rhode Island, USA.
Lata Gangadharan, University of Melbourne, Victoria, Australia.
J. Gregory George, Macon State College, Georgia, USA.
Scott M. Gilpatric, University of Tennessee, Tennessee, USA.
Robert Godby, University of Wyoming, Wyoming, USA.
Therese Grijalva, Weber State University, Utah, USA.
Timothy Haab, Ohio State University, Ohio, USA.
Charles A. Holt, University of Virginia, Virginia, USA.
W. George Hutchinson, Queen's University, Belfast, UK.
Laurie T. Johnson, University of Denver, Colorado, USA.
Catherine L. Kling, Iowa State University, Iowa, USA.
Jamie B. Kruse, East Carolina University, North Carolina, USA.
Stephan Kroll, California State University, California, USA.
Jayson L. Lusk, Oklahoma State University, Oklahoma, USA.
Michael McKee, Appalachian State University, North Carolina, USA.
Stuart Mestelman, McMaster University, Hamilton, Ontario, Canada.
Rob Moir, University of New Brunswick, New Brunswick, Canada.
R. Andrew Muller, McMaster University, Hamilton, Ontario, Canada.
James J. Murphy, University of Alaska, Anchorage, USA.
F. Bailey Norwood, Oklahoma State University, Oklahoma, USA.
Charles Noussair, Tilburg University, Tilburg, the Netherlands.
Robert J. Oxoby, University of Calgary, Alberta, Canada.
Stephane Robin, Université Louis-Pasteur, Strasbourg, France.
David Reiley, University of Arizona, Arizona, USA.
Brian Roe, Ohio State University, Ohio, USA.
Bernard Ruffieux, Institut National de la Recherche Agronomique, Grenoble, France.
E. Elisabet Rutström, University of Central Florida, Florida, USA.
Kathleen Segerson, University of Connecticut, Connecticut, USA.
Chad Settle, University of Tulsa, Oklahoma, USA.
Dan Shawhan, Cornell University, New York, USA.
Jason F. Shogren, University of Wyoming, Wyoming, USA.
John Spraggon, University of Massachusetts, Massachusetts, USA.
John K. Stranlund, University of Massachusetts, Massachusetts, USA.
Bodo Sturm, Centre for European Economic Research (ZEW), Mannheim, Germany.
Jon G. Sutinen, University of Rhode Island, Kingston, Rhode Island, USA.
Tomomi Tanaka, Arizona State University, Arizona, USA.
Mark A. Thompson, University of Arkansas, Little Rock, Arkansas, USA.
John Tisdell, Griffith University, Queensland, Australia.
Antreas Tsoumas, University of the Aegean, Mytilene, Greece.
Christian A. Vossler, University of Tennessee, Tennessee, USA.
John R. Ward, CSIRO, Policy and Economic Research Unit, Australia.
Joachim Weimann, Otto von Guericke University Magdeburg, Magdeburg, Germany.
John C. Whitehead, Appalachian State University, North Carolina, USA.
Foreword
Public goods theory and its allied "tragedy of the commons" began on a deeply pessimistic note 40–50 years ago with the contributions of Samuelson (1954) and Hardin (1968). The prevailing wisdom emphasized the impossibility of the private provision of goods whose outcomes were common across all users. Pre-dating Hardin was the less flashy and not well-known but classic paper by Gordon (1954) on fisheries, which emphasized that where there was a resource management failure it was useful to think of the problem as one of property rights failure. Indeed, that was the conceptual key to finding solutions; almost all users of such resources were, or potentially were, excludable if you could just find the way. This insight was reinforced by Acheson's (1975) study of the Maine lobstermen, who created home-grown, privately enforced property rights in the open lobster sea beds off the coast of Maine.

By the 1970s, however, mainstream economics thought and taught that public goods could not be produced efficiently by private means. Samuelson and Hardin had swept the field. But a key contribution to inducing an about-face in the thinking of economists came from Coase (1976) on lighthouses. The canonical example of a pure public good, and of the impossibility "theorem" in the private provision of public goods, was the lighthouse, emitting signals that all ships could observe at zero marginal cost. This dramatized the concept of market failure. But Coase in effect asked, "I wonder how the early lighthouses came about and who financed them?" As it turned out, lighthouses were privately financed before economics had become a well-defined profession, let alone developed its tools for a theory of public goods.

The problem of supplying incentives for private investments and aborting free riders was solved very practically by lighthouse owners who contracted with port authorities to charge docking ships for lighthouse services. These "incentive" contracts allowed the capital cost of lighthouses to be prorated among ship dockings. All ships have to dock somewhere and use lighthouses on the way, and these dockings provided an effective and practical measure of lighthouse service utilization and value in consumption. The theoretical argument that, for "efficiency," the so-called "fixed" cost once incurred should not affect the price of lighthouse services was a fallacious non-starter, because it omits the inefficiency that results if the lighthouse is not built! Docks, lighthouses and ships all value lighthouse services, and the contracts uncovered by Coase had focused their mutual interest on a solution to this public goods problem.
And the famous "tragedy of the commons" in grazing cattle was not necessarily a tragedy, at least since AD 1224 for the high Alpine Swiss cheese makers who each summer pastured their cows on the commons: entry to summer pastures was controlled by a property right rule that "no citizen could send more cows to the alp than he could feed during the winter" (Netting 1976, p. 139). These economic design problems were solved by people completely unschooled in free rider theory, but experienced enough in their behavioral coordination problem to seek solutions that might work. Somehow they perfected them by trial and error "natural experiments" over time.

The reader will have noticed by now that 1976 was a good year. Moreover, the solution that Coase found people had used to build lighthouses was actually based on the same principle used in the Swiss Alps, namely, and I will make my point by paraphrasing Netting, "no shipping company could pass more of its ships past the lighthouse than it paid for as part of ship docking charges." The rights to a common were tied to a corollary privatized right.

Theory had enabled us to see that these were examples of excludable public goods, and in all such cases the question is whether there are feasible ways of limiting use to avoid or internalize external costs, or of assuring payments that cover investment cost. The solutions, as in the examples by Netting and Coase, are often ingenious beyond the imagination of the first pencil-and-paper theories, whose primary value was in enabling us to see why there are and were problems that needed solutions, but which alone could not facilitate a solution. The imagination had to range beyond the mathematics of incentive failure. Equally important, one badly needed an empirical testing ground for exposing new models and ways of thinking to tests of effectiveness.

Experimental economics responded positively and effectively to this challenge beginning in the 1970s and 1980s, when the Samuelson problem of public goods was addressed, along with the Gordon–Hardin problem of managing common property resources. I won't belabor that story or its literature. You will find it in this book and my Papers in Experimental Economics (1991). That early fledgling literature has grown into an imposing contribution, collected in this book, on how to model, test bed, perfect and apply human ingenuity to the creation of incentives that make the solutions to these problems possible.

Public goods theory will never be the same. We now think differently, more openly and positively, on these issues. This exciting collection of research papers provides a compendium of practical, earthy, rule-governed examples – fisheries management, emission abatement, water markets, salinity control, congestion control and related measurement and monitoring issues – all in the ancient problem-solving spirit of the Alpine Swiss cheese makers and the entrepreneurs who believed lighthouses could be provided privately and did it. As Hayek puts it, "Rules alone can unite and extend order." Enjoy!

Vernon L. Smith
Arlington, VA
References

Acheson, J.M. 1975. "The lobster fiefs: economic and ecological effects of territoriality in the Maine lobster industry." Human Ecology 3 (3), 183–207.
Coase, R.H. 1976. Adam Smith's View of Man, Selected Papers No. 50, Graduate School of Business, University of Chicago (Mont Pelerin Society, St. Andrews, 1–33).
Gordon, H. Scott. 1954. "The economic theory of a common-property resource: the fishery." Journal of Political Economy 62 (2), 124–42.
Hardin, Garrett. 1968. "The tragedy of the commons." Science 162, 1243–8.
Netting, Robert. 1976. "What Alpine peasants have in common: observations on communal tenure in a Swiss village." Human Ecology 4 (2), 135–46.
Samuelson, P.A. 1954. "The pure theory of public expenditure." Review of Economics and Statistics 36 (4) (November), 387–9.
Smith, Vernon L. 1991. Papers in Experimental Economics, Cambridge: Cambridge University Press.
Introduction

Todd L. Cherry, Stephan Kroll and Jason F. Shogren
Greater environmental protection at fewer costs – few people would disagree with the idea that environmental policy should be trying to achieve this wide-ranging goal. How to identify and implement the strategies that can move people toward this goal falls within the purview of environmental economics. Over the last five decades, researchers in environmental economics have discovered and created new methods and tools aimed at protecting people and nature without wasting scarce resources in the process. These economists and policy makers, with the input of researchers in other disciplines like biology, forestry and hydrology, have taken a pragmatic approach to their task, pursuing what works rather than preconceived mindsets.

Such pragmatism led to a natural progression for researchers in environmental economics – they quickly adopted the methods of the newly emerging area of experimental economics in the 1970s and 1980s. As Cathy Kling points out in Chapter 12 of this volume, "some of the earliest work in experimental economics was done by environmental economists." Economists like Peter Bohm, Jack Knetsch, Ron Cummings, Charles Plott, Vernon Smith, William Schulze, Don Coursey, Elizabeth Hoffman, Jeff Bennett, and Michael McKee turned to laboratory experiments to test the efficacy of alternative mechanisms to provide public goods efficiently and voluntarily, to understand how people value gains and losses of a good or service, and to explore how well Pigovian taxes work relative to Coasean bargaining solutions to resolve externality problems. Today, we have come full circle, and the experimental method is commonly applied to environmental economic questions, as evidenced by the research in this book and in the general economics literature.

A reader might be asking him- or herself whether such small-scale experiments are the appropriate tool to test large-scale environmental policy. We all know environmental protection is more complex than any laboratory or field study. Lay people and policy makers must make decisions within a mix of biotic and abiotic phenomena combined with social institutions like markets and nonmarket allocation systems. Do the attempts to use the experimental method to understand better the micromotives that underpin the theory of environmental economics have anything to say about the efficiency and fairness of global environmental policy?
Yes, they do, would be our answer. The lessons learned by experimental economists can help guide environmental policy by providing insights into how a proposed change in incentives or benefits might affect behavior and, consequently, the likely success of a policy. By supplying information on the behavioral link between incentives, values, and choice, experiments might affect how policy is formed and evaluated. Since the laboratory environment differs from the wilds by necessity, experimental data used to back stated positions of policy should be viewed as support for or against a specific case of a more general phenomenon or theory. Experimental evidence complements theoretical insight, field data, and simulation models to improve our understanding of the underlying assumptions and incentives that drive behavioral responses to policy.1

Experiments have proven to be a useful tool to stress-test theory, look for empirical patterns of behavior, and test bed new institutions designed to protect nature. First, researchers use experiments to test the predictive power of a theory, to test the robustness of the axioms underlying the theory, to test the specific boundaries of a general theory, and to measure the gradient of behavioral change. See, for example, the huge body of experimental research on contributions to public goods. Second, economists use the lab to look for patterns of behavior, to identify and measure breakdowns from rationality, to examine how contextual frames affect behavior, to determine reactions to new information, and to consider how people coordinate actions voluntarily and under duress. For example, the long debate on the divergence between willingness to pay and willingness to accept has been rejuvenated by experimental research (e.g. see Shogren 2006). Third, laboratory experiments are used as a test bed for institutional design – the construction of new institutions, markets, and mechanisms designed to improve resource allocation. For example, Cason and Plott (1996) examined in a laboratory environment the incentives for sellers in new emission trading mechanisms proposed by the US Environmental Protection Agency (see the overview on incentive design in Bohm 2003).

This volume brings together 63 leading researchers and their latest work exploring the behavioral underpinnings of environmental economics using experimental economic methods. Some of these researchers are environmental economists who occasionally employ experimental methods; others are experimental economists who use their research method in different subfields of economics, one of which happens to be environmental economics. We divide the 24 chapters into four topical parts that cover the range of ongoing research today – tradable permit markets, common property and public goods, regulation and compliance, and valuation and preferences. Each part concludes with a discussion chapter.

The audience for this volume is as diverse as the authors are – we hope both experimental economists who want to conduct policy-relevant work, and environmental economists unfamiliar with experimental methods, will find useful information and ideas for future research (experimental and non-experimental) here. To bridge the gap between experimental methods and non-experimental
research into environmental economics and policy, we asked four renowned environmental economists to write the discussion chapters and to give the reader outside views on how relevant the chapters in this book, and experimental economics in general, are for practical work on real-world environmental problems. We purposefully chose researchers who are not experimental economists per se. The instructions we gave the discussants were broad: read a set of chapters, react to what you found useful for the general topic, describe what you think was missing and still needed in particular or in general, and give us and the readers your view toward how helpful experimental methods are or could be as a tool to help inform your research. The authors of the discussion chapters did a marvelous job in providing critical summaries of the respective papers in each section, and experimental economists who try to reach an audience broader than just the circle of other experimental economists should read these chapters and the advice therein carefully. What follows is a brief summary of the insights found in the discussion chapters.

The one common thread through all four discussion chapters is the call for more context. Experimental economists traditionally use "context-free" settings and instructions in their experiments to make the experiment as general and applicable as possible, and "it is an accepted practice in economics experiments to strip away a lot of social context that is not an essential part of the economic theories being tested" (Holt 2006, p. 13). For example, most public good experiments employ language such as "contributions to a public account" or "investments." Given their environmental policy backgrounds, the discussants wonder if this neutral-language doctrine can sometimes be relaxed to move, as Cathy Kling writes, "along the continuum towards the more immediate policy relevance side by adding more relevant context." All discussants, of course, recognize the fundamental tradeoff. John Whitehead points out that on the other end of the continuum "contingent valuation economists go overboard on supplying contextual information," which makes it difficult to extrapolate beyond the current situation. It is still useful for an experimental economist to be reminded that the use of context-free language is not the only, and not always the preferred, option. This is particularly true in research areas like valuation and bargaining, in which context matters a lot (e.g. Cummings and Taylor 1999; Cherry 2001), or (Pigovian) taxation, in which just the use of the word "tax" can have a significant impact on behavior (e.g. Eckel et al. 2005) and political acceptance (e.g. Kallbekken and Kroll 2007).

What other themes do the non-experimentalists address? Dallas Burtraw, from Resources for the Future, and Dan Shawhan, at Cornell University, studied the five chapters in Part I on "Tradable Permit Markets" and suggest useful research extensions for each one of them. In addition, they present some more general topics for experimental research on permit markets: a laboratory comparison of different auction designs under different (and partly competing) government goals; a test of the sensitivity of different permit market designs to market power, bubbles, and extreme prices; and an inquiry into market designs for unconventional tradable permits.
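Since the linear public goods (voluntary contribution) game recurs throughout Part II and in the discussion of context above, a stylized sketch of its payoff structure may help readers new to the design. The endowment, MPCR and group size below are illustrative values of our own choosing, not parameters taken from any chapter.

```python
# Linear public goods (voluntary contribution) game: each subject splits an
# endowment between a private account and a group account; the group account
# is scaled by the marginal per-capita return (MPCR) and shared equally.
# Endowment and MPCR here are illustrative assumptions, not chapter parameters.

def vcm_payoffs(contributions, endowment=20.0, mpcr=0.5):
    """Return each subject's payoff: what they kept plus their share of the
    scaled group account."""
    group_return = mpcr * sum(contributions)
    return [endowment - c + group_return for c in contributions]

# With MPCR < 1 < group size * MPCR, full contribution maximizes group earnings
# while zero contribution is each individual's dominant strategy:
print(vcm_payoffs([0, 0, 0, 0]))      # universal free riding: 20.0 each
print(vcm_payoffs([20, 20, 20, 20]))  # full cooperation: 40.0 each
print(vcm_payoffs([0, 20, 20, 20]))   # the lone free rider earns 50.0 vs 30.0
```

The neutral wording debated by the discussants concerns exactly how such a game is described to subjects: "group account" versus, say, "contribution to watershed clean-up."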
Cathy Kling, from Iowa State University, discusses the chapters in Part II on "Common Property and Public Goods." She focuses on and emphasizes the question of context, and makes a case for adding relevancy by doing three things that most experimental economists do not do: describing the good in question in non-generic terms, increasing the payoffs for subjects, and using subjects from a directly relevant subject pool. While the latter two suggestions might sometimes be at odds with each other (using subjects from directly relevant pools usually means subjects with higher opportunity costs of time, which makes sufficient incentive structures expensive to implement), experimental economists are well advised to ask, each time they plan an experiment, whether the benefits of heeding one or more of her suggestions are large enough to cover the costs of the added inconvenience. This is the classic question of saliency raised decades ago by Vernon Smith.

Kathleen Segerson, from the University of Connecticut, points out in her discussion of Part III, "Regulation and Compliance," that laboratory experiments "provide a very useful middle ground [between economic theory and empirical analysis of actual real world policies] for investigating the likely impacts of proposed policies at lower cost." For her too, however, it seems "imperative" that certain experiments are conducted in a field environment. She suggests three "inter-related issues that arise in assessing the usefulness of laboratory experiments in understanding environmental policies": in addition to context, the other two are the actual purpose of the experiment and the evaluation of alternative policy approaches. Interestingly, she points out that she is more interested in knowing that a proposed policy mechanism is shown to be effective under laboratory conditions than that it is efficient, since she believes that an effective outcome is more likely to be replicated in the real world than an efficient outcome.

In his witty essay on the contributions in Part IV, "Valuation and Preferences," John C. Whitehead, from Appalachian State University, states that despite their flaws due to the lack of context, economic experiments have done a reasonable job in getting contingent valuation economists "out of their orbit around a far off hypothetical planet." He sees laboratory experiments and stated preference surveys as complementary approaches, where one's strength can help to cover the other one's weaknesses. In particular, he calls for combining forces – research studies that include the laboratory and surveys simultaneously. Research into less-than-rational behavior of consumers and voters he calls interesting and productive "but not referenda on neoclassical theory."

Our four discussants have provided the ideal foil against which the contributors and editors can rethink how we use and sell the experimental method, because in the end all experiments, including those in environmental economics, reveal the perpetual scientific tension between control and context. At the core, the experimental method is about control. One controls the experimental circumstances to avoid confounding, i.e. two or more elements changing at once, which confounds our understanding of cause and effect. Without control, it is unclear whether unpredicted behavior is due to a poor theory or experimental design, or
both. In contrast, others argue context is desirable to avoid a setting that is too sterile and too removed from reality for something so real as environmental policy. Context affects participants' motivation. All experiments face this challenge. Therein lies the beauty of the experimental method as applied to human beings rather than terrestrial plants or subatomic particles – one can use one's imagination to experiment with alternative degrees of control versus context.

As experimental evidence continues to accumulate, a clearer and more definitive picture will emerge about how our institutional choices affect the efficiency of environmental policy and how people value these policies. The future of experimental work will be to help design institutions that address the combination of market failure and behavioral anomalies; otherwise we could find environmental economics falling into a new second-best problem: correcting market failure without addressing behavioral biases could reduce overall welfare. Experimental methods will likely provide one of the most useful tools researchers can use to understand the behavioral underpinnings of environmental policy.

We would like to thank the authors and discussants, and the anonymous referees, for their hard work and contributions to this volume. We thank Rob Langham and Routledge for their support and patience. We wish to extend a special thanks to Appalachian State University. In 2005, the Department of Economics at Appalachian State University hosted an Experimental Economics and Public Policy Workshop, from which this project originated. Lastly, we wish to thank our friends and families for their support.
Note

1 For exhaustive surveys of the use of experimental methods in environmental economics see, for example, Shogren and Hurley (1999) and Sturm and Weimann (2006).
References

Bohm, P. (2003), "Experimental Evaluations of Policy Instruments," in K.G. Mäler and J. Vincent, eds., Handbook of Environmental Economics, volume 1, Amsterdam: North Holland, pp. 437–460.
Cason, Timothy N. and Charles R. Plott (1996), "EPA's new emission trading mechanism: a laboratory approach," Journal of Environmental Economics and Management 30, 133–160.
Cherry, Todd L. (2001), "Mental accounting and other-regarding behavior: evidence from the lab," Journal of Economic Psychology 22, 605–615.
Cummings, Ronald G. and Laura O. Taylor (1999), "Unbiased value estimates for environmental goods: a cheap talk design for the contingent valuation method," American Economic Review 89, 649–665.
Eckel, Catherine, Philip J. Grossman, and R.M. Johnston (2005), "An experimental test of the crowding-out hypothesis," Journal of Public Economics 89, 1543–1560.
Holt, Charles A. (2006), Markets, Games and Strategic Behavior, Boston: Pearson Education.
Kallbekken, Steffen and Stephan Kroll (2007), "Do you not like Pigou, or do you not understand him? Tax aversion in the lab," working paper, Center for International Climate and Environmental Research, Oslo, Norway.
Shogren, J. (2006), "Experimental Methods and Valuation," in K.G. Mäler and J. Vincent, eds., Handbook of Environmental Economics, volume 2, Amsterdam: North Holland, pp. 969–1027.
Shogren, Jason F. and Terry Hurley (1999), "Experiments in Environmental Economics," in J.C. van den Bergh, ed., Handbook of Environmental and Resource Economics, Cheltenham: Elgar.
Sturm, Bodo and Joachim Weimann (2006), "Experiments in Environmental Economics and Some Close Relatives," Journal of Economic Surveys 20, 419–439.
Part I
Tradable permit markets
1 Baseline-and-credit emission permit trading: experimental evidence under variable output capacity

Neil J. Buckley, Stuart Mestelman, and R. Andrew Muller
Introduction Emission trading is now well established as a method for regulating emissions of uniformly mixed pollutants. The classic analysis assumes that the regulatory authority sets an aggregate cap on emissions from a set of sources and then divides the cap into a number of tradable permits (frequently called allowances), each of which authorizes the discharge of a unit quantity of emissions. Although the allowances could be sold at auction to raise revenue, the most frequently discussed plans assume that the permits will be distributed to the regulated firms on some ad hoc basis. Firms then trade the allowances, establishing a market price. In equilibrium, individual firms choose emissions such that the marginal cost of abating pollution equals the allowance price, thereby minimizing the cost of maintaining the mandated level of emissions. They redeem allowances equal to the emissions discharged, selling or banking the remainder. If emissions exceed the initial distribution of allowances the firm must purchase allowances to cover the excess. Such plans are generally known as cap-and-trade plans. A good example is the US EPA’s sulfur dioxide auction. Many field implementations of emissions trading take a different approach. An example is the clean development mechanism proposed under the Kyoto Protocol. In these baseline-and-credit plans there are no explicit caps on aggregate emissions. Instead, each firm has the right to emit a certain baseline level of emissions. This baseline may be derived from historical emissions or from a performance standard that specifies the permitted ratio of emissions to output. Firms create emission reduction credits by emitting fewer than their baseline emissions. These credits may be banked or sold to firms who exceed their baselines. The effect is to limit aggregate emissions to an implicit cap equal to the sum of the individual baselines. Typical baseline-and-credit plans also differ from classic cap-and-trade in a number of institutional details. For example, credits are often computed on a project-by-project basis rather than on the basis of enterprise-wide emissions. They must be certified and registered before they
10
N.J. Buckley et al.
can be traded and there are generally restrictions that credits cannot be registered until the emission reductions have actually occurred. Baseline-and-credit plans are theoretically equivalent to a cap-and-trade plan if the cap implicit in the baseline-and-credit plan is fixed and numerically equal to the fixed cap in a cap-and-trade plan. In many cases, however, the baseline is computed by multiplying a measure of firm scale (energy input or product output) by a performance standard specifying a required ratio of emissions to input or output.1 In this case, the implicit cap on aggregate emissions varies with the level of aggregate output. Fischer (2001, 2003) refers to such plans as tradable performance standards. The variable baseline in a baseline-and-credit plan introduces a critical difference in long-run performance compared to cap-and-trade with the same implied performance standard.2 Specifically, the variable baseline acts as a subsidy on output. Firms receiving this subsidy will tend to expand their capacity to produce output. This introduces two potential inefficiencies. First, if the performance standard remains the same in both plans, the baseline-and-credit plan will exhibit inefficiently high output, emissions and external costs. Second, if the performance standard under baseline-and-credit is tightened so as to meet the aggregate emissions specified under cap-and-trade, then industry costs will increase due to unnecessarily tight restrictions on emitting firms (Muller 1999; Dewees 2001; Fischer 2001, 2003). It should be noted that this reasoning presumes that firms are adjusting to pollution regulation on two margins: the emission intensity of output and the level of output itself. Moreover the reasoning is essentially long run in that output is changed by firms investing or divesting themselves of productive capacity and equilibrium is computed by imposing a zero-profit restriction on firms in the market. Currently, at the international level there are more active baseline-and-credit greenhouse gas-trading plans than cap-and-trade greenhouse gas-trading plans (Hasselknippe 2003). However, the predictions on the relative performance of baseline-and-credit versus cap-and-trade have not been tested in the laboratory. Thus far, experiments have been fruitful in shaping cap-and-trade public policy (Cason 1995; Cason and Plott 1996), but as of yet no baseline-and-credit laboratory studies have been published. Laboratory implementation of baseline-andcredit trading would serve several goals: it would verify that market processes are sufficient to drive agents to competitive equilibrium, demonstrate the contrast between baseline-and-credit and cap-and-trade to policy makers, and possibly create a vehicle for training policy makers and practitioners in the nature of alternative emission trading plans. We have undertaken a long-term research project to compare the properties of baseline-and-credit and cap-and-trade plans in the laboratory. In previous work (Buckley et al. 2006) we have developed a tractable model with constant returns to scale in production and multiple firm types. We have implemented a computerized laboratory environment with explicit capacity and emission intensity decision, fully specified markets for emission rights and output, and a complete accounting framework. We have demonstrated that predicted results
Baseline-and-credit emission trading 11 hold in simulated markets with robot traders adjusting on both the output and emissions intensity margins. However, market instability occurs when capacity is freely adjustable, so we have implemented work with human subjects slowly, examining the emissions intensity margin and the output market margin one at a time. Previous experiments involving human subjects have focused on the intensity decision. Buckley (2005) and Buckley et al. (2006) report on six sessions comparing baseline-and-credit with cap-and-trade when firm capacities are fixed and firm adjustment is limited to emission intensity. They sought to evaluate the prediction that the outcome of the two approaches would be the same when the output subsidy inherent in the baseline-and-credit plan could not possibly lead to productive expansion. Any deviation from parallel results would be then laid to the institutional differences between the two plans rather than the implied subsidy on output and emissions. Those studies confirm that the overall predictions on emissions hold. Efficiency in the market was improved, although only about one-half the available gains from trade were realized. However there were some deviations from the benchmark values computed under the assumption of perfectly competitive equilibrium. Emission permit prices were higher under baseline-and-credit trading and inventories of permits were unpredictably high in both treatments. In the present chapter we investigate the complementary problem of adjustment on the capacity margin. That is, we hold emission intensity constant at the optimal level for each type of firm and allow firms to increase or reduce their productive capacity each decision period. We have three objectives. First we wish to see whether market equilibria emerge in these markets. Second, are there treatment effects that differentiate the two trading policies? Finally, do the theoretical competitive equilibria characterize the behavioral outcomes: does the baseline-and-credit policy lead to higher emissions and output than occur under cap-and-trade?
Methods We ran six laboratory sessions (three cap-and-trade and three baseline-andcredit), each involving eight subjects, in September and October of 2004. All subjects were undergraduate students at McMaster University who had completed at least an introductory course in economics. Sessions lasted approximately 3 hours. For the first hour and a half, students received instruction and participated in four training periods using an alternate set of parameters.3 These training periods were rewarded by a flat fee of $10. Subjects then took a short break and returned to participate in ten paid rounds using the parameters reported here. After ten rounds they were informed of their results and paid privately in cash. Subjects earned between $18.75 and $53.25 with a mean of $38.91, including the training fee. The software implementation of the environment detailed below was programmed at McMaster University using Borland’s Delphi programming environment and the MySQL open source database.
12 N.J. Buckley et al.
Dollars per unit output
Subjects were told that they represented firms that create emissions while producing output and selling it on a simulated market. We chose not to present the experiment in neutral terms, because we believed that the explicit emissions trading environment would help subjects understand the nature of the decisions they were making. There were four types of firms distinguished by emission intensity: two, four, six and eight emission units per unit of output for firm types A, B, C and D respectively. There were two subjects of each type. Each firm was initially given four units of productive capacity, k. Output could be produced at zero marginal cost up to the fixed capacity. The unit cost of capacity varied from $32 per unit for the dirtiest firms (type D) to $128 per unit for the cleanest firms (type A). Each firm created external costs proportional to its emissions, although the instructions did not explicitly inform subjects of this. The marginal damage of emissions (not provided to the subjects) was assumed constant at $16 per unit of emissions. These parameters were chosen to equate the marginal social cost (MSC) of each firm so that all could be present in final equilibrium.4 Figure 1.1 illustrates the short- and long-run cost curves for a typical firm. There were two treatments: cap-and-trade and baseline-and-credit. In both treatments subjects were started off at the cap-and-trade equilibrium, which was chosen to coincide with the social optimum. In the cap-and-trade treatment 160 permits were distributed each period and aggregate production capacity began at 32 units of output. This implies an average emission intensity of five at the social optimum. We expect the system to remain stable at the equilibrium point with firms trading 32 permits every period. In the baseline-and-credit treatment we imposed a tradable performance standard of five, equivalent to the average emission intensity in the cap-and-trade treatment. In this treatment we expect the output and emissions to increase due to the inherent subsidy to output.
Figure 1.1 Firm cost curves (marginal cost MC_i, average cost AC_i and long-run average cost LAC_i, in dollars per unit of output, against output q_i; capacity k_i).
Figure 1.2 Sequence of events in a typical period. Start of period: subjects begin with last period's capacity; under cap-and-trade, subjects are endowed with allowances. Permit market: call auction trading allowances or credits. Output market: call auction with simulated buyers. Under baseline-and-credit, credits are created if the emission rate was below the performance standard. Allowances and credits are redeemed. The subject then chooses to raise or lower capacity by one unit or leave it unchanged. End of period.
The sequence of decisions differed slightly between the two treatments; a flowchart is provided as Figure 1.2. In the cap-and-trade treatment subjects began with capacity and allowance holdings determined in the previous period. They received an endowment of allowances. Their first action was to trade allowances in a multiple-unit uniform-price sealed bid-ask auction (call market).5 Subjects were permitted to enter up to three different price bids to purchase additional allowances. The first bid was the highest price the individual would pay for an allowance, and the individual could bid to purchase as many allowances at this price as he wished. The second price bid (if any) had to be lower than the first; the individual could bid to purchase as many additional allowances at this lower price as he wished. A third, still lower, bid could be entered for allowances beyond those bid for at the first two prices. Subjects were likewise permitted to place up to three different price asks to sell allowances: the first ask was at the lowest price, and each subsequent ask, covering additional allowances, had to be at a higher price. The number of allowances for sale was restricted only by the individual's inventory of allowances. This action required subjects to estimate the price they were willing to pay for additional permits and the price at which they were willing to sell their permits. They were provided with extensive on-screen help to aid them in this decision.6 Once all bids and asks were submitted, the allowance market cleared, determining a price of permits and a quantity bought or sold for each subject.
Output decisions were made automatically by the computer program. Each subject was required to produce and offer for sale as much output as he could, given his capacity and permit holdings.7 Demand for output was represented by an exogenous demand function with known intercept and slope. The output market then cleared, determining a common output price and, for each subject, an individual quantity sold and revenue earned. Permits not required for redemption were banked for use or sale in future periods. After reviewing their financial report for the period, subjects decided whether to increase or decrease their output capacity by one unit, or to keep capacity unchanged.
The baseline-and-credit sequence was identical to cap-and-trade except that subjects did not receive any emission permits before the credit market opened. Consequently, they could only trade credits that were produced in previous periods. The quantity of credits created in the current period was determined by the firm's emission intensity and quantity of output sold, so credits could only be issued after output for the current period was determined.
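The original software was written in Delphi and is not reproduced here, but the clearing logic of a multiple-unit uniform-price call market can be sketched as follows. The midpoint pricing rule and the tie-breaking are assumptions for illustration, not a description of the authors' implementation:

```python
def clear_call_market(bids, asks):
    """Clear a uniform-price sealed bid-ask call market.

    bids and asks are lists of (price, quantity) tuples. Returns
    (clearing_price, volume); the midpoint pricing convention is an
    assumption for illustration."""
    bids = sorted(bids, key=lambda b: -b[0])  # best (highest) bids first
    asks = sorted(asks, key=lambda a: a[0])   # best (lowest) asks first
    volume = 0
    last_bid = last_ask = None
    bi = ai = 0
    bid_left = ask_left = 0
    while bi < len(bids) and ai < len(asks):
        if bid_left == 0:
            bid_left = bids[bi][1]
        if ask_left == 0:
            ask_left = asks[ai][1]
        if bids[bi][0] < asks[ai][0]:
            break  # remaining bids lie below remaining asks: no more trades
        traded = min(bid_left, ask_left)
        volume += traded
        last_bid, last_ask = bids[bi][0], asks[ai][0]
        bid_left -= traded
        ask_left -= traded
        if bid_left == 0:
            bi += 1
        if ask_left == 0:
            ai += 1
    if volume == 0:
        return None, 0
    return (last_bid + last_ask) / 2, volume  # all trades at one price

# Example: the market clears 10 units at a single price between 15 and 16.
print(clear_call_market([(20, 5), (16, 5), (12, 5)], [(10, 6), (15, 6)]))
```

Because every executed trade settles at the single clearing price, traders of inframarginal units have little to gain from shading bids or asks, which is the efficiency property noted in note 5.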
Parameterization and benchmarks

In this section we derive benchmark equilibria for the two treatments under the assumption of perfectly competitive, price-taking firms. We first introduce some notation and describe the general model that allows adjustment on both the emission intensity and output margins. Second, we report the parameterization of the model for this experiment, and finally we present the benchmarks.
Theory

Consider an industry with N firms. Each firm i ∈ {1, ..., N} produces q_i units of output at an emission rate r_i = e_i/q_i, where e_i is the quantity of emissions. Industry output is

Q = \sum_{i=1}^{N} q_i

and aggregate emissions are

E = \sum_{i=1}^{N} e_i = \sum_{i=1}^{N} r_i q_i.
Environmental damages are assumed to be a positive and weakly convex function of total emissions: D = D(E), with D'(E) > 0 and D''(E) ≥ 0. Willingness to pay for the output is a weakly concave function of aggregate output,

WTP = \int_0^Q P(z) \, dz,

where P = P(Q) is an inverse demand curve with positive ordinate (P(0) > 0) and negative slope (P'(Q) < 0). The private cost of production is a linear homogeneous function of output and emissions:

C_i = C_i(q_i, e_i) = q_i C_i(1, r_i).

Unit cost C_i(1, r_i) can be separated into unit capacity cost c_i(r_i), which is a positive and declining function of the emission rate with c_i(r_i) > 0 and c_i'(r_i) ≤ 0, and unit variable cost w_i, which is constant in output. Consequently, total cost is C_i = c_i(r_i) q_i + w_i q_i. Note that the marginal cost of output is c_i(r_i) + w_i and the marginal cost of abating pollution is

MAC = -\partial C_i / \partial e_i = -c_i'(r_i).

An omnipotent social planner would choose an output and emission rate for each firm to maximize total surplus, S. The social planner's welfare maximization problem is

\max_{\{r_i, q_i\}} S = \int_0^Q P(z)\,dz - \sum_{i=1}^{N} c_i(r_i) q_i - \sum_{i=1}^{N} w_i q_i - D\left(\sum_{i=1}^{N} r_i q_i\right).
There are two first-order conditions, one for each margin of adjustment:

-c_i' = D'\left(\sum_{i=1}^{N} r_i^* q_i^*\right) \quad \forall i \in N,   (1.1)

and

P(Q^*) = c_i(r_i^*) + w_i + r_i^* D'\left(\sum_{i=1}^{N} r_i^* q_i^*\right) \quad \forall i \in N,   (1.2)
where an asterisk denotes optimal values and it is assumed that q_i^* > 0 for all i. The efficient abatement condition (equation 1.1) requires that firms choose emission intensities such that the marginal abatement cost -c_i' equals the marginal damage caused by emissions. The efficient output condition (equation 1.2) ensures that output is surplus maximizing by requiring that each firm's marginal social cost (the right-hand side of equation 1.2) equal the marginal willingness to pay for output (the left-hand side of equation 1.2). Note that condition 1.2 determines only the aggregate level of output: any combination of q_i^* such that the q_i^* sum to Q^* and the r_i^* q_i^* sum to E^* satisfies the efficient output condition.8
In the present experiment we suppress adjustment on the emission intensity margin by setting each firm's emission intensity to its optimal value r_i^*. Condition 1.1 vanishes and we are left with condition 1.2. Because the emission intensity for each type of firm, r_i, is fixed, firms cannot independently adjust the marginal social cost of their output. For any given marginal damage, D'(\sum_{i=1}^{N} r_i^* q_i^*), there will be a set of firm types with least marginal social cost. This set may contain more than one firm type, because two firm types can have identical marginal social cost if the reduced social damage generated by the cleaner type is exactly offset by an increase in private cost.
The social optimum can be supported as a competitive equilibrium under cap-and-trade regulation. The regulator distributes allowances A_i to each firm so that the sum of allowances granted equals the optimal level of emissions, that is,

\sum_{i=1}^{N} A_i = E^*.
If P_c denotes the price of permits under cap-and-trade, firm i's profit maximization problem is

\max_{\{q_i\}} \pi_i^c = P(Q) q_i - c_i(r_i^*) q_i - w_i q_i - P_c (r_i^* q_i - A_i).

The first-order condition for an interior maximum is

P(Q^c) = c_i(r_i^*) + w_i + r_i^* P_c.   (1.3)
Equation 1.3 requires that each firm earn zero marginal profit, and it identifies Q^c. Because equation 1.3 can be obtained from equation 1.2 by replacing D'(\sum_{i=1}^{N} r_i^* q_i^*) by P_c and Q^* by Q^c, a solution to the surplus maximization problem is a competitive equilibrium and vice versa.
Under a baseline-and-credit plan, the regulator sets an industry-wide performance standard, r^s. Firm i's demand for credits is (r_i^* - r^s) q_i; negative values denote a supply of credits. If the price of credits is P_b, then firm i's profit maximization problem is

\max_{\{q_i\}} \pi_i^b = P(Q) q_i - c_i(r_i^*) q_i - w_i q_i - P_b q_i (r_i^* - r^s).

The first-order condition for an interior maximum is

P(Q^b) = c_i(r_i^*) + w_i + r_i^* P_b - r^s P_b.   (1.4)

Equation 1.4 is the zero-marginal-profit condition which determines Q^b. Let us assume that the regulator sets the emission rate standard equal to the average emission rate under the social planner scenario,9

r^s = \sum_{i=1}^{N} r_i^* q_i^* / Q^*.
Comparing the baseline-and-credit condition (equation 1.4) to the cap-and-trade condition (equation 1.3), we immediately see that the former differs from the latter only in the last term, -r^s P_b, which acts as an output subsidy to the firm. Consequently, marginal private cost to the firm is less than marginal social cost, and the corresponding output, Q^b, will be higher than under cap-and-trade. Since the equilibria of both trading plans involve the same average emission rate, this higher level of output necessarily implies that aggregate emissions will be higher than optimal under baseline-and-credit regulation.

Parameterization

Table 1.1 presents the firm-specific parameters used in the sessions reported in this chapter. Table 1.2 summarizes the associated equilibrium predictions under the alternative emission trading mechanisms.

Table 1.1 Cost parameters

Firm type | Unit fixed cost, c_i(r_i^*) | Fixed emission rate | Endowment | Performance standard | B&C initial credits
A | 128 | 2 | 20 | 5 | 12
B | 96 | 4 | 20 | 5 | 4
C | 64 | 6 | 20 | 5 | 0
D | 32 | 8 | 20 | 5 | 0

Table 1.2 Variable capacity predictions

Trading institution | Price of allowances or credits (permits) | Output price | Aggregate output | Aggregate emissions | Active firm types
Baseline-and-credit | 16 | 80 | 48 | 240 | A, B, C, D
Cap-and-trade | 16 | 160 | 32 | 160 | A, B, C, D

It is useful to illustrate the equilibria diagrammatically. Figure 1.3 illustrates the cap-and-trade equilibrium when only type A and D firms are in the market. The dirty firms have long-run average costs (LAC) of 32 and create damages of r_D × MD = 8(16) = 128 per unit of output; marginal social cost is 160. Firm type A has a higher unit capacity cost at $128 but lower damages of r_A × MD = 2(16) = 32 per unit of output, yielding the same marginal social cost. Optimal output Q_c^* = 32 is determined by the intersection of the demand curve and marginal social cost. At the optimal output, type A firms earn 160 - 128 = 32 in rent per unit of output, or 32/2 = 16 per unit of emissions. Type D firms earn 160 - 32 = 128 in rent per unit of output, or 128/8 = 16 per unit of emissions. Both types of firms are thus willing to pay $16 per permit. Under cap-and-trade, the regulatory authority allocates 160 allowances and the allowance market clears at $16 per permit. Long-run average cost is now $160 for each firm type. Equilibrium at a price of $160 implies output of 32 units and an average emission intensity of five.
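The benchmark quantities in Tables 1.1 and 1.2 can be reproduced with a few lines of arithmetic. The inverse demand curve is not stated explicitly here, but P(Q) = 320 - 5Q is implied by the reported equilibrium points (price 160 at 32 units and 80 at 48 units, with intercept 320 in the figures); the sketch below treats that slope as an assumption:

```python
# Benchmark equilibria, assuming the inverse demand P(Q) = 320 - 5Q
# implied by the reported equilibrium points.
def price(Q):
    return 320 - 5 * Q

MSC = 160                    # marginal social cost, equal across firm types

# Cap-and-trade: output where willingness to pay equals marginal social cost.
Q_c = (320 - MSC) / 5        # 32 units
E_c = 5 * Q_c                # average emission intensity of 5 -> 160 emissions

# Baseline-and-credit: the standard r_s = 5 rebates permit costs, so firms
# face only the average unit capacity cost of type A and D firms.
LAC_avg = (128 + 32) / 2     # 80
Q_b = (320 - LAC_avg) / 5    # 48 units
E_b = 5 * Q_b                # 240 emissions

print(Q_c, E_c, price(Q_c))  # 32.0 160.0 160.0
print(Q_b, E_b, price(Q_b))  # 48.0 240.0 80.0
```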
Figure 1.3 Cap-and-trade equilibrium (dollars per unit output against output Q: demand curve from intercept 320; MSC = LAC + MD = 160; LAC_A = 128 with r_A = 2 and LAC_D = 32 with r_D = 8, topped up by permit costs r_A P_C and r_D P_C; Q_C = 32).

Figure 1.4 Baseline-and-credit equilibrium (dollars per unit output against output Q: demand curve from intercept 320; MSC = LAC + MD = 160; LAC_{A&D} at r = r^s = 5 equals 80 after credit trades (r_A - r^s)P_B and (r_D - r^s)P_B; Q_C = 32, Q_B = 48; the area by which damages exceed the increase in surplus is marked).
The only way to achieve an average emission intensity of five with type A (r_A = 2) and type D (r_D = 8) firms is to have equal output capacity of each firm type. This equilibrium implies the presence of 16 units of capacity from type A and 16 units of capacity from type D in the market.
Figure 1.4 shows the equivalent baseline-and-credit equilibrium. The performance standard is r^s = 5 units of emissions per unit of output. Restricting attention to type A and D firms, we see this again implies that there must be equal capacity of each firm type. The effect of baseline-and-credit trading is to equate the LAC of both firm types. Given equal capacity shares, average LAC = (128 + 32)/2 = 80. This determines the inefficient equilibrium output of 48 units, 24 from each firm type. At this point, type D firms must buy r_D - r^s = 3 credits per unit of output, and they are willing to pay (80 - 32)/3 = 16 per credit. Type A firms create r^s - r_A = 3 credits per unit of output. They must receive at least (128 - 80)/3 = 16 per credit to earn non-negative profits under baseline-and-credit. Since there is equal capacity of type A and D firms (24 units for each type), the supply of credits equals the demand for credits at a price of $16.

Efficiency

We compute the efficiency of the baseline-and-credit and cap-and-trade equilibria relative to the maximum surplus available. The social surplus is equal to the sum of consumers' surplus and producers' surplus less any environmental damage. In computing the environmental damages we assume constant marginal damages of $16 per unit of emissions. From Figure 1.3 it is clear that under cap-and-trade consumers' surplus in equilibrium is 0.5(320 - 160)(32) = $2560. Producers' surplus is (160 - 80)(32) = $2560, the same amount. External damages are equal to total emissions multiplied by the marginal damage, 160(16) = $2560. Note that this exactly offsets the producers' surplus, so that total social surplus is equal to the consumers' surplus of $2560. Because the emissions cap was set to the socially optimal level of 160 units of emissions, the cap-and-trade surplus values are optimal. Using Figure 1.4, the corresponding consumers' surplus, producers' surplus, external damages and total social surplus under baseline-and-credit are $5760, $0, $3840 and $1920, respectively.
Given these definitions we can compute an efficiency index

Efficiency = Actual Total Surplus / Optimal Total Surplus.

It is convenient to decompose efficiency into components associated with consumers' surplus, producers' surplus and external costs. Thus the consumers' surplus component of the efficiency index is

Consumer Surplus Component = Actual Consumer Surplus / Optimal Total Surplus.

Table 1.3 reports the equilibrium values for total surplus and its components under the two treatments.

Table 1.3 Equilibrium surplus efficiency

 | Efficiency | Consumers' surplus (+) | Producers' surplus (+) | Environmental damages (-)
Cap-and-trade equilibrium: Surplus | $2560 | $2560 | $2560 | $2560
Cap-and-trade equilibrium: Efficiency index | 100% | 100% | 100% | 100%
Baseline-and-credit equilibrium: Surplus | $1920 | $5760 | $0 | $3840
Baseline-and-credit equilibrium: Efficiency index | 75% | 225% | 0% | 150%
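The entries in Table 1.3 follow from the same assumed linear demand; a short sketch of the surplus accounting (the demand intercept of 320 appears in the consumers' surplus calculation above, while the slope of 5 is the inference noted earlier):

```python
def surplus_components(Q, output_price, avg_unit_cost, MD=16, intercept=320):
    """Consumers' surplus, producers' surplus, external damages and total
    surplus for an equilibrium at output Q, with average emission intensity 5."""
    cs = 0.5 * (intercept - output_price) * Q
    ps = (output_price - avg_unit_cost) * Q
    damages = MD * 5 * Q
    return cs, ps, damages, cs + ps - damages

print(surplus_components(32, 160, 80))  # (2560.0, 2560, 2560, 2560.0)  cap-and-trade
print(surplus_components(48, 80, 80))   # (5760.0, 0, 3840, 1920.0)  baseline-and-credit
# Efficiency index: 2560/2560 = 100 percent; 1920/2560 = 75 percent.
```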
Results

Figures 1.5 to 1.10 provide an overview of the data. We have three independent series in each treatment; the figures show the range and mean of observations for each period. Many series show a distinct time trend. Moreover, the fact that subjects' inventories of permits held at the end of the session had no payoff value may have induced an end-game effect in period 10. Accordingly, we drop periods 1 through 5 and period 10 in summarizing the results numerically, and we report mean values for periods 6 through 9 in Table 1.4.
Table 1.4 Mean values over periods 6 to 9 by treatment

 | Capacity^a | Output volume^a | Aggregate emissions^a | Permit market price | Permit market volume | Permit inventories^a
Cap-and-trade
Session 1 | 36.50 | 33.50 | 159.50 | 14.75 | 30.75 | 17.50
Session 2 | 34.50 | 34.25 | 157.50 | 7.00 | 31.25 | 69.50
Session 3 | 38.75 | 30.00 | 163.00 | 23.25 | 46.75 | 10.50
Treatment mean | 36.58 | 32.58 | 160.00 | 15.00 | 36.25 | 32.50
Prediction | 32.00 | 32.00^b | 160.00^b | 16.00 | 32.00^b | 0.00^b
Baseline-and-credit
Session 4 | 48.25 | 47.00 | 212.00 | 11.50 | 46.00 | 106.25
Session 5 | 51.00 | 49.75 | 218.00 | 10.75 | 45.50 | 142.50
Session 6 | 45.25 | 45.25 | 217.50 | 6.50 | 50.50 | 119.00
Treatment mean | 48.17 | 47.33 | 215.83 | 9.58 | 47.33 | 122.58
Prediction | 48.00 | 48.00^c | 240.00^c,d | 16.00 | 48.00^c | 0.00^d

Notes
a Treatment effect is significant using a t-test and a Mann-Whitney U-test at a 5 percent critical level.
b The baseline-and-credit treatment mean is significantly different from the cap-and-trade prediction using a t-test at the 5 percent level.
c The cap-and-trade treatment mean is significantly different from the baseline-and-credit prediction using a t-test at the 5 percent level.
d The baseline-and-credit treatment mean is significantly different from the baseline-and-credit prediction using a t-test at the 5 percent level.
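The treatment-effect comparisons flagged in the table notes rest on three independent session means per treatment; a sketch of how one such comparison (the capacity column) might be run with standard tools:

```python
from scipy import stats

# Session means of capacity over periods 6-9, from Table 1.4.
cap_and_trade = [36.50, 34.50, 38.75]
baseline_and_credit = [48.25, 51.00, 45.25]

t_stat, t_p = stats.ttest_ind(cap_and_trade, baseline_and_credit)
u_stat, u_p = stats.mannwhitneyu(cap_and_trade, baseline_and_credit,
                                 alternative="two-sided")
print(f"t-test: t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.4f}")
```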
We test for treatment effects using parametric (F-test) and non-parametric methods. However, these tests have extremely limited power, even adopting a critical level of 10 percent, so caution should be applied when interpreting the statistics.10

Capacity, output, emissions and efficiency

Consider first the key predictions on capacity, output and emissions. Figure 1.5 shows the evolution of capacity. Under baseline-and-credit trading, capacity rises steadily to reach the predicted level of 48 by period 7. Under cap-and-trade, capacity stabilizes quickly between 36 and 40, significantly above the benchmark of 32. The treatment effect is strongly significant. Figure 1.6 shows a similar pattern for output, except that under cap-and-trade output exceeds the benchmark level of 32 by only a small amount. This suggests pervasive underutilization of capacity due to an inability to acquire permits. Emissions (Figure 1.7) follow the same pattern as output. Despite the general resemblance to the benchmarks shown in Figures 1.5 to 1.7, both treatments display substantial deviations from their benchmarks. Nevertheless, the statistical tests reported in Table 1.4 provide evidence of a treatment effect (cap-and-trade versus baseline-and-credit trading) that is strongly significant. Overall, these observations conform well to the underlying theory.
Figure 1.5 Capacity (min/max and mean by period; cap-and-trade versus baseline-and-credit; predictions 32 and 48).

Figure 1.6 Output volume (min/max and mean by period; cap-and-trade versus baseline-and-credit).

Figure 1.7 Aggregate emissions (min/max and mean by period; cap-and-trade versus baseline-and-credit).

Figure 1.8 Efficiency (min/max and mean by period; cap-and-trade versus baseline-and-credit; 1 = 100%).
Table 1.5 Mean efficiency over periods 6 to 9 (%)

 | Efficiency | Consumers' surplus^a (+) | Producers' surplus^a (+) | Environmental damages^a (-)
Cap-and-trade
Session 1 | 95 | 109 | 85 | 99
Session 2 | 99 | 115 | 82 | 98
Session 3 | 70 | 88 | 83 | 101
Treatment mean | 88 | 104 | 83 | 100
Prediction | 100^b | 100^b | 100^b,e | 100^b
Baseline-and-credit
Session 4 | 75 | 215 | -8 | 132
Session 5 | 67 | 243 | -39 | 136
Session 6 | 83 | 199 | 19 | 135
Treatment mean | 75 | 219 | -9 | 134
Prediction | 75 | 225^c | 0^c | 150^c,d

Notes
a Treatment effect is significant using a t-test and a Mann-Whitney U-test at a 5 percent critical level.
b The baseline-and-credit treatment mean is significantly different from the cap-and-trade prediction using a t-test at the 5 percent level.
c The cap-and-trade treatment mean is significantly different from the baseline-and-credit prediction using a t-test at the 5 percent level.
d The baseline-and-credit treatment mean is significantly different from the baseline-and-credit prediction using a t-test at the 5 percent level.
e The cap-and-trade treatment mean is significantly different from the cap-and-trade prediction using a t-test at the 5 percent level.
These results imply that the efficiency losses from baseline-and-credit trading will be similar to those predicted by theory.
Figure 1.8 reports the evolution of efficiency over the ten periods, and Table 1.5 reports the numerical results. Efficiency was highly variable across sessions. Two of the cap-and-trade sessions attained close to 100 percent efficiency, while the third achieved only 70 percent. Mean efficiency in the three baseline-and-credit sessions was almost exactly the predicted level of 75 percent. Due to the wide variation, the difference in means was not significant. Treatment effects were significant for each of the three components of surplus, however. Under cap-and-trade, consumers' surplus and damages were close to their benchmark values, while producers' surplus was significantly lower, suggesting that costs were not being minimized. Under baseline-and-credit trading, consumers' and producers' surpluses were close to the benchmarks while emission damage was less than expected, although still higher than in the cap-and-trade treatment.

Credit and allowance markets

The relatively promising results discussed above were obtained despite some rather strange behavior in the markets for credits and allowances.
Figure 1.9 Permit trading prices (min/max and mean by period; cap-and-trade versus baseline-and-credit; equilibrium = $16).
Figure 1.9 shows dramatic differences in permit prices across treatments. With cap-and-trade, permit prices are consistently very close to the benchmark and not significantly different from it. With baseline-and-credit, permit prices start very high, then fall rapidly to below equilibrium levels. However, the two series are not significantly different across periods 6 through 9. The high early prices and rapid decline of permit prices under baseline-and-credit are probably due to bidding errors in the early periods of the session. A similar pattern was observed in Buckley (2005), leading to the supposition that an institutional factor associated with baseline-and-credit plans is driving the differences. One such factor is the requirement that credits be generated before being offered for sale.
The convergence of baseline-and-credit permit prices toward zero in periods 9 and 10 was likely caused by an increased supply of permits due to increased permit inventory carryover in the credit treatment. To assess this we must also examine aggregate levels of permit inventories by treatment. In general, there is no clear reason for subjects to hold permit inventories. Risk-neutral subjects should hold no permits in either treatment. Risk-averse subjects may hold permit inventories between periods, but have no incentive to carry inventories past period 10 because permits have no redemption value at the end of the session. Permit banking might be intentional (for speculation) or inadvertent (due to errors in permit trading and capacity choice); the data do not allow us to differentiate among these explanations. In fact, inventories accumulate in both treatments, as illustrated in Figure 1.10.
Figure 1.10 Aggregate permit inventory (min/max and mean by period; cap-and-trade versus baseline-and-credit; equilibrium = 0 permits).
Both treatments exhibit significant inventory build-ups, but while the cap-and-trade inventories stabilize below 50 units, baseline-and-credit inventories climb steadily. The treatment differences are likely driven by the increased supply of credits generated by expanded output under the fixed performance standard of baseline-and-credit regulation; the supply of permits is fixed under cap-and-trade regulation. Our conjecture that increased permit supplies caused lower prices under baseline-and-credit trading appears to be supported by the inventory data.
Discussion and conclusions

Theory predicts higher aggregate output and emissions under baseline-and-credit than under cap-and-trade when the former imposes a performance standard consistent with the cap under the latter plan. This is because a performance standard acts as a subsidy on output. The question remained, however, whether the theoretical predictions regarding the two mechanisms would hold in real markets. This chapter reports results from real markets in controlled laboratory sessions in an environment involving fixed emission technologies and variable output capacities.
Results from the laboratory sessions reported here support the theory. Using graphical and tabular data, we have confirmed that price incentives conveyed through markets perform as theory predicts. Cap-and-trade emission and output levels stay close to their predicted equilibrium values, while emissions and output rise and converge to their predicted higher levels under baseline-and-credit. Despite differences in early permit trading prices under the two plans, the results strongly support the theoretical predictions.
One caveat, however, is that baseline-and-credit regulation appears susceptible to higher levels of permit inventories than cap-and-trade. Even though permit inventories are predicted to be zero in the baseline-and-credit equilibrium, the evidence shows that permits are accumulated in inventory over the entire experiment. This behavior might be caused by the relatively more complex framing of the baseline-and-credit institution, in addition to the variable permit supply inherent in baseline-and-credit regulation.
An experimental environment has now been designed and tested. This chapter reports experimental sessions involving fixed emission rates and variable capacity, while Buckley (2005) provides results from sessions assuming fixed output capacity and variable emission rates. With the theoretical framework and corresponding experimental environment in place, future work can assess the long-run theoretical prediction of higher output and emissions under baseline-and-credit trading in a full model in which firms choose both emission rates and output capacities.
Acknowledgments

We gratefully acknowledge the support of the Social Sciences and Humanities Research Council of Canada, Grant No. 410-00-1314. This work was presented at the 2004 Canadian Experimental and Behavioural Economics Workshop, the 2004 Meetings of the Southern Economics Association, the 2005 Southern Ontario Resource and Environmental Economics Workshop and the 2005 Experimental Economics and Public Policy Workshop at Appalachian State University. We thank Daniel Rondeau, Asha Sadanand and Bart Wilson for helpful comments.
Notes
1 This ratio is generally called the emission intensity.
2 A cap-and-trade plan with an aggregate cap on emissions may be said to imply a performance standard of r^s = E/Q, where E and Q are respectively aggregate emissions and output in long-run equilibrium.
3 See Appendix A, online, available at: socserv.mcmaster.ca/econ/mceel/papers/varcapercappa.pdf for the laboratory instructions.
4 Marginal social cost equals unit capacity cost plus the external costs created by each unit of output. For our parameters MSC equals 160 for all four firm types.
5 A multi-unit uniform price sealed bid-ask auction was chosen because of the relatively quick trading time and high efficiency associated with it. As discussed by Smith et al. (1982), while traders have incentives to bid below values and ask above costs, traders of infra-marginal units near the margin that determines price should fully reveal costs and values to avoid being excluded from the market by extra-marginal units. See Cason and Plott (1996) for a comparison of the uniform price auction with a discriminatory price auction in the context of emission permit trading.
6 To aid them in their bid and ask decisions, subjects were provided with a planner window in the computer software that displayed the latest output and permit market prices and volumes. In addition, the planner provided a cost calculator that detailed firm costs under various hypothetical capacity and permit value conditions. For screenshots of the computerized environment, see Appendix B, online, available at: socserv.mcmaster.ca/econ/mceel/papers/varcapercappb.pdf.
7 At the pilot stage output decisions were explicitly made by the subjects. We determined that this excessively complicated the environment. In the present environment output is implicitly determined by the subjects' capacity choices and the outcome of the permit market. This environment assures full compliance with emissions standards and determines the number of permits that will be banked each period.
8 This feature of the model is a direct result of the constant marginal cost of output assumption. Unit cost, c_i(r_i^*), is a function of the emission rate but not of output. If this assumption were relaxed, condition 1.2 would imply a firm-specific output level but would result in a more complicated laboratory environment.
9 As mentioned in the Introduction, we will find that setting the performance standard equal to the optimal average emission rate results in quantities of emissions and output that are inefficiently high. We could set a stricter standard so that quantities of output and emissions are optimal, but then firm costs would be inefficiently high. Since both methods yield inefficiency, we focus on comparing cap-and-trade with a baseline-and-credit system whose performance standard equals the average emission rate from the optimal scenario.
10 With a critical level of 10 percent there is about a 45 percent chance of detecting a true difference in means of 1.5 standard deviations. We would need a critical level of 25 percent to get a 70 percent chance of detecting this large a difference in means (two-tailed tests, common variance).
References

Buckley, N.J., 2005. Implications of Alternative Emission Trading Plans. PhD Thesis. Hamilton, Canada: McMaster University.
Buckley, N.J., Mestelman, S. and Muller, R.A., 2006. Implications of alternative emission trading plans: experimental evidence. Pacific Economic Review, 11 (2), 149–166.
Cason, T.N., 1995. An experimental investigation of the seller incentives in the EPA's emission trading auction. American Economic Review, 85 (4), 905–922.
Cason, T.N. and Plott, C.R., 1996. EPA's new emission trading mechanism: a laboratory evaluation. Journal of Environmental Economics and Management, 30 (2), 133–160.
Dewees, D., 2001. Emissions trading: ERCs or allowances? Land Economics, 77 (4), 513–526.
Fischer, C., 2001. Rebating environmental policy revenues: output-based allocation and tradable performance standards. Discussion Paper 01–22, Resources for the Future, Washington, DC.
Fischer, C., 2003. Combining rate-based and cap-and-trade emission policies. Climate Policy, 3 (Supplement 2), S89–S109.
Hasselknippe, H., 2003. Systems for carbon trading: an overview. Climate Policy, 3 (Supplement 2), S42–S57.
Muller, R.A., 1999. Emissions trading without a quantity constraint. Department of Economics Working Paper 99–13, McMaster University, Hamilton, Ontario.
Smith, V.L., Williams, A.W., Bratton, W.K. and Vannoni, M.G., 1982. Competitive market institutions: double auction versus sealed bid-offer auctions. American Economic Review, 72 (1), 58–77.
2 A laboratory analysis of industry consolidation and diffusion under tradable fishing allowance management

Christopher M. Anderson, Matthew A. Freeman, and Jon G. Sutinen
Introduction

Tradable allowance systems are increasingly being applied to address numerous environmental and natural resource management problems, including water use, pollution and overfishing (Tietenberg, 2002). In a typical application of tradable allowances, a management authority sets an allowable level of activity, allocates allowances to that level among users and permits users to trade their allocations. In theory, the ensuing market allows inefficient resource users to transfer their use or pollution rights to efficient users who can earn more profit from them, in exchange for a payment that exceeds the profit the sellers would make from using the rights themselves. While economically defensible, the resulting industry transformation can threaten local economies and traditional ways of life, especially in applications such as irrigation rights and fishery management, where the affected parties are usually small and family businesses. The fear of consolidation can often lead stakeholders to oppose implementation of market-based management, even in cases where market fundamentals suggest consolidation is unlikely.
The effect of fears about consolidation is particularly acute in fisheries, where stakeholders are traditionally so heavily involved in management that major changes depend on stakeholder support. Without addressing consolidation concerns, it may be difficult to garner the benefits associated with tradable allowance management. In experience across seven major fishing nations, totaling 10 percent of ocean harvests worldwide, individual transferable quota (ITQ) management has proven effective at constraining exploitation within set limits, mitigating the race to fish, and reducing overcapacity and gear conflicts while improving product quality and availability (Arnason, 2002; OECD, 1997; Squires et al., 1995; Sutinen and Soboil, 2003). These experiences have led three independent national panels in the United States to endorse increased use of tradable allowance management for federally regulated fisheries upon expiration of a six-year federal moratorium on new programs (NRC, 1999; Pew Oceans Commission, 2003; US Commission on Oceans Policy, 2004). In fisheries, such management depends on both government and stakeholder support to be
acceptable and effective; but until stakeholders, who have an active role in management, have a clear understanding of how tradable allowances will affect their way of life, they will be unable to make informed decisions concerning their stance on tradable allowance implementation.
The sort of consolidation feared by fishermen is illustrated by the Dutch demersal North Sea fishery, where the number of ITQ holders decreased about 29 percent from 1988 to 1997, and the number of vessels employed by the industry decreased about 33 percent from 1987 to 1998 (Davidse, 2001). In a study that followed fishermen after they sold out, Ford (2001) found that many of the owners of the 17 percent of vessels that exited the Tasmanian rock lobster industry expected to move into other fisheries. However, rationalization in those fisheries made switching too costly, and they were unable to sell their vessels because of a general contraction in all fisheries in the region caused by rationalization.
In this chapter, we consider whether consolidation is a general property of market-based management, or whether instead some applications will experience consolidation and others diffusion, based on observable market fundamentals. Economic theory suggests that the equilibrium outcomes, including the market shares they imply, will emerge through trading. However, market volatility inherent in the equilibration process plays a significant role in the transition, and in the winners and losers it determines (Matulich et al., 1996).
The economic analysis of consolidation requires distinguishing between two phenomena. First, under tradable allowances, efficient scaling of the market will occur. This will likely result in a smaller number of operators, as inefficient operators sell out at the equilibrium price. Second, consolidation may take place in excess of that predicted by equilibrium. This could result from market volatility, or from market power exercised by large or well-capitalized allowance owners. Both economists and fishers wish to avoid excess consolidation, so an understanding of whether and how it occurs can indicate when preventive measures are required. In the absence of excess consolidation, economists can use equilibrium models to help fishers understand the likely extent of consolidation (or diffusion!) in their fishery, helping them identify the management regime best suited to their goals.
We assess the extent to which market forces and the equilibration process may lead to consolidation using a controlled economic experiment in which human subjects play the role of fishermen managed under a tradable allowance system. We examine market outcomes in two experimental treatments that reflect a fishery with the same initial distribution of fishing effort and the same allocation of allowances under the new tradable allowance system. However, different cost structures among the large and small operators lead to different theoretical predictions about market share under tradable allowances: in one, market fundamentals predict small operators will sell out to larger operators and, in the other, small operators will acquire marginal units from the larger operators, leading to a less concentrated industry than before the implementation of tradable allowances. Our results show that concentration is not a necessary consequence of tradable allowance management. Rather, they affirm the theoretical
predictions that the level of consolidation observed is explained by underlying market fundamentals. This implies that the extent of consolidation is predictable based on the economic characteristics of the small and large operators within the fishery. With appropriate economic analysis, stakeholders can consider market-based management with a clear idea of the likely structure of their industry following implementation.
Treatments and hypotheses

In 2004, the Atlantic States Marine Fisheries Commission (ASMFC), which is responsible for managing American lobster in the northeast, approved a management plan that includes a provision for transferability of lobster trap allowances. However, concerns about initial allocations and about latent effort from boats licensed to fish for lobster but recently inactive have delayed implementation past the 2005 season. The Lobster Conservation Management Team (LCMT), which advises the ASMFC, drafted the plan. In this plan, a cap on the total number of traps has been determined from stock assessment data and the need to rebuild the lobster stock. Allowance, in the form of trap tags that allow a fisherman to put one trap in the water for a season, will be allocated based on the highest annual value of fish caught during the base period, so that fishermen who diversified into other fisheries or who took a year off would not be penalized. No limitations are to be set on allowance transfer except that transfers must involve a minimum number of trap tags; smaller buyers may buy smaller blocks of traps than larger fishers. Currently, the LCMT's draft plan is being reviewed by managers.
We evaluate the role of tradable allowance management in consolidation using this plan, as it would be applied to the Rhode Island lobster fishery, as a motivating case. To relate the experimental environment to the lobster fishery, we constructed profit functions for heterogeneous experimental subjects based on 2001 logbook data, provided by the Rhode Island Department of Environmental Management, which were used to estimate the production functions of Rhode Island lobster operations. Profit functions were estimated in three stages. First, we conducted a cluster analysis on inputs and landings to decompose the fishery into five "types" of operations: large (18 licenses landing an average of 27,104 pounds per year); medium–large (33 licenses averaging 14,950 pounds); medium (18 licenses averaging 8,169 pounds); medium–small (44 licenses averaging 3,600 pounds); and small (174 licenses averaging 464 pounds). Second, we estimated a production function of the form

Landings = Days Fished × Constant × (Traps Fished)^{b_j},

where b_j is the return to average traps used by a fisher in category j. To calculate revenue, we multiplied this by the average annual ex vessel price ($4.15). In the two experimental treatments, revenue for subjects is based on average days fished for each type, and subjects could vary the number of traps fished by buying and selling trap allowances in the market.
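Because Landings/Days Fished = Constant × (Traps Fished)^{b_j} is log-linear, b_j can be estimated by ordinary least squares on the logged data. A minimal sketch, assuming arrays of per-license landings, days fished and traps for cluster j (the function and variable names are illustrative, not those of the original analysis):

```python
import numpy as np
import statsmodels.api as sm

def estimate_trap_return(landings, days_fished, traps):
    """OLS fit of log(landings per day fished) = log(constant) + b_j * log(traps)."""
    y = np.log(np.asarray(landings) / np.asarray(days_fished))
    X = sm.add_constant(np.log(np.asarray(traps)))
    fit = sm.OLS(y, X).fit()
    log_constant, b_j = fit.params
    return np.exp(log_constant), b_j

def revenue(days_fished, constant, b_j, traps, ex_vessel_price=4.15):
    """Annual revenue at the mean ex vessel price of $4.15 per pound."""
    return ex_vessel_price * days_fished * constant * traps ** b_j
```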
Finally, we determined costs for each type of operation. Unfortunately, reliable cost survey data are not available for this fishery; even if they were, the direct cost data typically collected do not reflect opportunity costs, which must be the largest component of a cost function that rationalizes production level choices. Rather than mislead fishers about the likely industry concentrations resulting from a transition to tradable trap certificates, we elect to address the concern that market-based management inevitably leads to consolidation with experiments that demonstrate the range of outcomes for different plausible cost functions. Therefore, we designed an experiment with two treatments. In one treatment, costs are chosen so that the competitive general equilibrium model (see Anderson and Sutinen, 2005) predicts that market share will move toward large operators (consolidation); in the second treatment, the model predicts that market share will shift toward smaller operators (diffusion).
While this design cannot predict the level of consolidation that would be observed if trading were implemented in the lobster fishery, the experiments can contribute to policy analysis in two ways. First, if equilibrium predictions are upheld, they will address the popular perception that consolidation is always a consequence of tradable rights management. This may address stakeholders' concerns about market-based management, and encourage them to consider instead whether the production elasticities of their fishery are likely to lead to undesirable levels of consolidation. Second, the experiments provide a key test of the competitive model's ability to predict the consolidation process in an equilibrating market based on market fundamentals. Thus, if accurate cost data were to be collected in the future, the experimental test would indicate whether the model's predictions based on the actual data are meaningful, or whether the price discovery process facilitates consolidation to a greater degree than fundamentals suggest. If the model's predictions are meaningful, stakeholders could be provided with foreknowledge of whether tradable allowances would result in excess consolidation or efficient scaling in their particular fishery.
The profit functions in the two treatments are designed to have ex ante identical levels of fishing effort, prior to the reduction in traps accompanying the introduction of tradable allowances. Under current regulation, any lobster license holder may fish up to 800 traps at a time. This restriction is binding on only the largest operators; most operators maximize their profit at trap levels below this limit. Thus, the profit functions are chosen so that the optimal number of traps for each type of operation is the same across treatments. Because the trap certificates are most likely to be allocated proportional to historical landings, the initial endowments of allowance are also identical within each type of operator between the two treatments. What differ between treatments are the elasticities of the derived demand curves of each type of operator. Thus, although the allowance endowments and pre-cap profit-maximizing trap levels are equivalent, the two markets will yield different prices and different allocations of allowance among small and large operators. In the consolidation treatment, the large operators have a more inelastic profit function than the small operators. When the
total number of traps is cut back below historical levels, additional traps are worth more to larger operators than to smaller operators. In the diffusion treatment, small operators have a more inelastic profit function, and so they buy from the large operators, who have lower marginal values.
The cost functions used in the two treatments are primarily demonstrative. Since consolidation has been observed in some fisheries implementing tradable allowance systems, it is reasonable to assume that cost functions similar to those of the consolidation treatment occur in the field. However, cost functions consistent with the diffusion treatment may also occur in the field, especially where small operators particularly value fishing. Insufficient data exist to determine where the Rhode Island lobster fishery falls within that range.
In this environment, the hypothesis to be tested is that the allocations of allowance predicted by a general equilibrium model obtain. If so, these two treatments will demonstrate that market fundamentals dictate the level of consolidation that can be expected following implementation of market-based management. This is tested against the hypothesis that equilibrium allocations do not obtain. This could be caused by price volatility (Newell et al., 2005; Larkin and Milon, 2000; Anderson and Sutinen, 2005), speculation in the asset market (e.g., Smith et al., 1988), or simply a failure of the market to identify a price within the time it is observed.
Experimental procedure

To test whether the general equilibrium predictions emerge under each set of trading rules, we use a controlled laboratory experiment in which subjects play the role of fishers in a tradable allowance market. Subjects trade in a market for allowance, and individual profit from fishing is determined by the quantity of allowance a subject holds. At any available market price for allowance, each subject needs to decide what action earns her the most profit: buying some allowance and fishing more, selling some allowance and fishing less, or fishing her current holdings of allowance. As in a naturally occurring fishery with an allowance market, subjects who better balance fishing and trading allowance to maximize their total profit from both activities earn more profit in the experiment and are paid more money for participating.
Each experimental session consists of six rounds. At the beginning of each round, subjects are given an initial allocation of allowance and cash. Each round is then divided into periods, or fishing years. Each period is divided into two parts: a trading phase and a fishing phase. During the trading phase, the market opens and subjects are able to trade allowance with each other. Once the trading market closes, the fishing phase commences, and subjects earn profit from fishing their allowances. Fishing profits are determined from a table based on the amount of allowance the subject holds after trading.
The first four rounds are each one period. In these rounds, subjects may exchange allowance for the right to earn profit from fishing in a single
year, corresponding to an initial lease period.1 Rounds five and six are each four periods. In these rounds, trades of allowance carry over from one period to the next – an asset structure – and intertemporal speculation is possible. The first trading period of each round is five minutes long and, when applicable, the second period is four minutes long and the third and fourth are three minutes long. The allowance itself is structured as an asset that provides the opportunity to earn profit in each period until the end of the round. As such, allowance purchased in the first of four periods provides profit in each of the four periods, while allowance purchased in the last period provides profit only in the final period. Therefore, the predicted equilibrium price of allowance decreases from period to period by the amount of profit the inframarginal demander earns from holding the allowance for one period.
In this experiment, subjects may make and respond to both buy and sell offers in a double auction market. A centralized price board displays current buy and sell offers, and subjects may buy (sell) an allowance unit by accepting the lowest sell (highest buy) price advertised. Should current prices not appeal to a subject, she can advertise her own price at which to buy or sell and the quantity of allowance at that price. Upon acceptance of an offered price, trade takes place immediately, which allows trades to occur at different prices throughout a period. This trading structure was chosen because it resembles field allowance markets, in which trading is continuous and asynchronous and can occur at different prices.
Subjects' profit functions were derived from logbook data as described in the previous section. Each experimental session was designed for up to 14 participants: two playing with the profit functions of large operators, two with the profit functions of medium–large operators, four with those of medium–small operators and six with those of small operators. Table 2.1 displays the market share holdings, on which profit is based, for each of the four operator types. The endowment range for the six small operators reflects the wide heterogeneity of fishing histories within that category. The equilibria for market share holdings differ between the two treatments because the profit functions differ, as can be seen in Figure 2.1.

Table 2.1 Market share holdings for operators

Operator | No. of subjects | Profit function optimum* | Initial allocation (endowment)* | Diffusion equilibrium* | Consolidation equilibrium*
Large | 2 | 90 (30.8) | 77 (32.4) | 68 (28.6) | 85 (35.7)
Medium–large | 2 | 85 (29.1) | 74 (31.1) | 70 (29.4) | 79 (33.2)
Medium–small | 4 | 45 (30.8) | 34 (28.6) | 41 (34.5) | 31 (26.1)
Small | 6 | 9 (9.2) | 2–9 (8.0) | 6 (7.6) | 4 (5.0)

Note
* Total market share (percentage) of subjects is in parentheses.
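The asset pricing logic described above implies a declining equilibrium price path within each four-period round: holding a unit for one period is worth the single-period equilibrium price, so with k periods remaining the unit should trade at k times that value. A small illustration, using the two treatments' single-period equilibrium upper bounds of 20 and 17.8 reported below:

```python
def predicted_price_path(single_period_price, n_periods=4):
    """Equilibrium allowance price at the start of each period of a round:
    the asset pays its single-period value once per remaining period."""
    return [single_period_price * k for k in range(n_periods, 0, -1)]

print(predicted_price_path(20.0))   # [80.0, 60.0, 40.0, 20.0]  consolidation
print(predicted_price_path(17.8))   # approx. [71.2, 53.4, 35.6, 17.8]  diffusion
```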
Figure 2.1 Profit functions for operators (profit in dollars against permits, 0–120, for large, medium–large, medium–small and small operators).
Profit, as a function of the number of experimental "permits" (each corresponding to roughly eight trap tags), is noted with a dashed line for the diffusion treatment and a solid line for the consolidation treatment. Each operator type's profit function peaks at the same number of permits, although the peaks were scaled differently in the two treatments. The upper bound of the equilibrium price for which trades would occur in the consolidation treatment is 20; the upper bound of the equilibrium price in the diffusion treatment is 17.8.2 Subjects' profit functions do not change during the experiment and, because the functions do not change between rounds, we are able to monitor the effects of experience and learning. Unchanging profit functions also simplify the subjects' task, and reflect the assumption that the fishery is in a steady state in which the total allowance allocation is such that recruitment exactly offsets harvest and mortality. Although few managed fisheries exist in a steady state, if the tradable allowance model's equilibrium for market shares fails to obtain in this simple environment, it is unlikely that equilibrium would obtain in a fishery with a dynamic or stochastic stock.
For an experimental session, subjects were recruited – by email and through in-class sign-ups – to appear at an appointed time at the Policy Simulation Laboratory, located at the University of Rhode Island. They were told they would receive a $5 participation fee and would have the "opportunity to earn considerably more" during the experiment. If there were extra subjects (we designed the diffusion experiment to accommodate 13 or 14 subjects and the consolidation experiment to accommodate 12 or 14 subjects), then randomly selected subjects were paid their participation fee and dismissed.3 After reading consent forms, subjects were shown into the laboratory and seated at individual computer terminals, with barriers to discourage talking and impair visibility of others' terminals. The experimenter then read aloud the instructions (available from the authors) as subjects followed along on their computer screens, explained how to
use the experimental software and led subjects through a practice round. In the practice round, subjects were provided an allowance and profit table with which to trade with the computer. After any questions were answered, the first four rounds of the experiment were run. Next, the experimenter read instructions explaining the asset structure for the four-period rounds as subjects followed along on their computer screens, and subjects participated in another practice round before beginning rounds five and six of the experiment. The two practice rounds were identical to the subsequent rounds, but without financial implications. Following the experiment, subjects' earnings were converted to US dollars, and they were paid privately as they left the laboratory. For the diffusion treatment, earnings averaged $23.38 with a standard deviation of $3.57 (range $12.75 to $31.75). For the consolidation treatment, earnings averaged $24.75 with a standard deviation of $3.61 (range $8.75 to $29.75). Sessions lasted approximately one hour and 45 minutes each. The two treatments each included four sessions, for a total of 108 participants.
Results

The findings of this experiment are separated into three sections. The first and second sections examine, respectively, the prices and the resulting efficiency observed under the two market environments, and show that equilibrium behavior obtains on average. The data indicate that the market performed well, attaining equilibrium prices and highly efficient allocations. In the third section, the observed market shares are examined, and the convergence towards, or away from, consolidation among large operators is discussed for the consolidation and diffusion treatments, respectively. The market outcomes broadly support the levels of consolidation predicted by the general equilibrium model in both treatments.

Trade prices

Figures 2.2 and 2.3 show the average prices in each period of the diffusion and consolidation treatments, respectively. Average prices are broadly consistent with equilibrium predictions. In Figure 2.2, prices in rounds one through four are slightly higher than equilibrium in some sessions, but do not display volatility. Once the asset market is introduced, there is some dispersion around the equilibrium price. However, in all rounds there is movement toward the equilibrium predictions. By periods three and four of each round, average prices are very close to equilibrium, indicating that market fundamentals are driving trades. In Figure 2.3, there is little such variance, as all four sessions have observed prices very close to equilibrium in all periods. The difference in price variation between treatments may be a result of the difference in elasticities of the derived demand curves of each type of operator, as the higher elasticity in the consolidation treatment made deviations more costly for buyers.
Figure 2.2 Diffusion treatment prices with an initial lease period (sessions A–D and equilibrium; price in experimental dollars by period, lease rounds 1–4 and asset rounds 5 and 6).

Figure 2.3 Consolidation treatment prices with an initial lease period (sessions E–H and equilibrium; price in experimental dollars by period, lease rounds 1–4 and asset rounds 5 and 6).
Table 2.2 First-order autoregressive model of average prices in asset rounds 5 and 6 (N = 32)

 | Diffusion* | Consolidation*
Constant | 1.818 (5.308) | 0.596 (2.337)
Periods left in round | 19.918 (1.600) | 20.571 (0.692)
Last period in round | 2.076 (4.330) | -0.133 (1.990)
Total R-square | 0.806 | 0.809

Note
* Standard errors are in parentheses.
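The estimation reported in Table 2.2 can be approximated with an AR(1) regression on a session's average prices; a simplified sketch (the chapter's panel estimation also allows session-specific heteroskedasticity, which this omits):

```python
import numpy as np
import statsmodels.api as sm

def fit_price_model(avg_prices, periods_left, last_period_dummy):
    """AR(1) regression of average period price on periods left in the round
    and a last-period indicator (simplified version of Table 2.2)."""
    X = sm.add_constant(np.column_stack([periods_left, last_period_dummy]))
    model = sm.GLSAR(np.asarray(avg_prices), X, rho=1)  # AR(1) error process
    return model.iterative_fit(maxiter=10)

# Usage: results = fit_price_model(prices, periods_left, last_dummy)
# results.params -> constant, periods-left and last-period coefficients.
```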
To confirm statistically the general impression from the graphs that market prices are consistent with equilibrium, we model observed prices as a function of the number of periods remaining in the round, with an indicator variable for the last period in the round to test for any bubble-crash behavior (Anderson, 2004). Table 2.2 shows the results of the autoregressive heteroskedastic panel regression of prices in rounds five and six of the diffusion and consolidation treatments, using the average period price from each session as the dependent variable. We obtain results using session-specific heteroskedastic and autoregressive processes with a common autoregression parameter. Consistent with equilibrium predictions, the coefficient on periods left in the round is the only significant determinant of price in both treatments. The coefficients represent the amount the price changes between periods, and thus should equal the predicted single-period equilibrium price. For the diffusion treatment, we cannot reject the hypothesis that the estimated value of 19.92 equals the upper bound of the predicted price interval, 17.80 (p = 0.185). For the consolidation treatment, we likewise cannot reject the hypothesis that the estimated value of 20.57 equals the predicted price of 20 (p = 0.410), the upper bound of the price interval. Hence, we conclude that prices are consistent with equilibrium predictions.

Efficiency

The allocative efficiencies realized in the diffusion and consolidation treatments are displayed, respectively, in Figures 2.4 and 2.5. The efficiency of the identical endowments, shown with a dashed line, is 76 percent of the potential maximum profit for the diffusion treatment and 80 percent for the consolidation treatment. Efficiency levels in both treatments tend to increase into the 90 to 100 percent range. In the diffusion treatment, efficiency improves with repetition in both the initial lease periods and the asset rounds; even the minimum efficiency of 85 percent steadily improves. In contrast, efficiency in the consolidation treatment shows more variation. In several cases, efficiency rises as high as 100 percent. The one case in which efficiency drops to 57 percent is a result of the outlier trading of two subjects. These same two subjects were earning negative profits in period four of round five and in periods three and four of round six, where efficiency dropped below starting efficiency.
Industry consolidation and diffusion 39 Efficiency of diffusion treatment market Lease Rounds 1–4 Round 5 Round 6
Profit from fishing/maximum profit from fishing
100 95 90 85 80 75 70 65 60 55 50
A B C D Endowment
Period
Figure 2.4 Average efficiency in the diffusion treatment.
six, where efficiency dropped below starting efficiency. These losses arose because one subject sold her allowances for less than they would have earned her in fishing profits, and a second subject also sold her allowances for a low price and then bought allowances for more than they were worth to her in fishing profits. The behavior of these two subjects balanced out with regard to average prices, which is why no abnormalities appear in Figure 2.3. Despite these two outliers, the remaining subjects in that session, and subjects in the other sessions, behaved as predicted in terms of efficiency.

In the consolidation treatment, the elasticity of the derived demand curve has the same effect on efficiency as it had on average price relative to the diffusion treatment: efficiency jumps immediately in the consolidation treatment, whereas a gradual upward trend is seen in the diffusion treatment. Given the high efficiencies occurring in both treatments, we can expect the market allocations to be similar to the equilibrium predictions made about consolidation for both treatments.

Figure 2.5 Average efficiency in the consolidation treatment (profit from fishing as a percentage of maximum profit from fishing, by period, for sessions E–H against the endowment benchmark).

Market shares

Even with transactions occurring efficiently and at equilibrium prices, we expect the resulting distribution of market shares to differ between the two treatments; the experimental parameters support one equilibrium allocation for the diffusion
treatment and a different equilibrium allocation for the consolidation treatment. The market share distribution among operators is displayed in Figure 2.6.

Figure 2.6 Percentage of market shares held by large and medium–large operators (large operators' percentage of market shares by period, for sessions A–H, with the diffusion equilibrium, the consolidation equilibrium and the initial endowment marked).

From the initial endowment, the market shares of large and medium–large operators should converge towards the predicted equilibrium for the respective treatment. For this experiment, the predicted distribution of market shares is 0.689 for the consolidation treatment and 0.579 for the diffusion treatment. However, as seen in Figure 2.6, the change in market share holdings does not occur immediately; a gradual convergence towards the predicted equilibrium occurs in the diffusion treatment, whereas convergence occurs more rapidly in the consolidation treatment. The gradual convergence in the diffusion treatment is due to a price discovery process in the market, in which subjects respond to prices that others submit by trying to overbid buy offers and underbid sell offers. Since the subjects lack information about each other's profit tables, they require some time to observe price submissions before they can determine what prices constitute a good deal, so subjects trade less than the optimal number of allowances until they attain enough price information.4 Still, the market converges in the diffusion treatment to its predicted state of low concentration, while in the consolidation treatment it converges to its predicted state of greater concentration. Of interest is the fact that in the consolidation treatment the consolidation of market shares sometimes overshot the predicted equilibria, resulting
in an even greater concentration of market shares amongst a few firms, whereas in the diffusion treatment market share holdings rarely diffused more than theory predicted. Systematic excess consolidation does not occur in the consolidation treatment, however: large and medium–large operators own, on average, only 1.09 more permits than at the predicted equilibrium. Thus, given the different profit and cost structures among large and small operators in a fishery, the likely structure of the industry following implementation of a tradable allowance system can be predicted under the proposed cutback level.

From Figure 2.6, the results point toward the consolidation of market shares in one treatment and the diffusion of market shares in the other. To confirm this impression statistically, we model the data from both treatments to test that market shares are converging to their predicted shares, and that these shares are statistically distinct from one another. We use a two-way random effects model to control for serial correlation as well as for correlation among errors within sessions. To address the effect that time has on convergence, we adapt Noussair et al.'s (1995) panel model of price convergence:
$$S_{jt} = \sum_j \beta_{\mathrm{Start}_j} D_j \left(\frac{1}{t}\right) + \beta_{\mathrm{Asymptote}} \left(\frac{t-1}{t}\right) + \mu_{jt}$$
where $\mu_{jt} = \upsilon_j + e_t + \varepsilon_{jt}$. The subscripts j and t correspond to round and period, respectively. $S_{jt}$ is the market share observed in period t of round j, and $D_j$ is an indicator variable that takes the value one in round j and zero otherwise. This model estimates a starting point of the convergence process for each round and a common asymptote to which all rounds are converging. As the period number gets larger, predictive weight shifts from the 1/t on the round-specific starting point towards the (t−1)/t on the common asymptote. The value of the asymptote for each treatment is what matters, for it indicates the equilibrated behavior towards which market shares in that treatment are converging. For purposes of this estimation, a period index of one corresponds to the initial endowment, and all other period indices are incremented by one, so that five periods exist.

Table 2.3 shows the results of the convergence regression model of the market shares. The estimated limits (the Consolidation and Diffusion asymptotes) are reported in the first section of the table, and the starting points for each asset round of the consolidation and diffusion treatments are listed in the lower two sections.

Table 2.3 Two-way random effects regression of market share consolidation

Variable                   Parameter estimate*   p value (2-tail): asymptote = predicted
Consolidation asymptote    0.707                 0.025
Diffusion asymptote        0.575                 0.568

Variable                   Parameter estimate*   p value (1-tail): Consol start ≥ Consol asymptote
Consolidation start 1      0.642                 < 10^–5
Consolidation start 2      0.642                 < 10^–5
Consolidation start 3      0.634                 < 10^–6
Consolidation start 4      0.640                 < 10^–5
Consolidation start 5      0.654                 < 10^–4
Consolidation start 6      0.652                 < 10^–4
Consolidation start 7      0.627                 < 10^–7
Consolidation start 8      0.623                 < 10^–7

Variable                   Parameter estimate*   p value (1-tail): Diff start ≤ Diff asymptote
Diffusion start 1          0.641                 < 10^–5
Diffusion start 2          0.643                 < 10^–5
Diffusion start 3          0.645                 < 10^–6
Diffusion start 4          0.638                 < 10^–5
Diffusion start 5          0.628                 < 10^–4
Diffusion start 6          0.633                 < 10^–4
Diffusion start 7          0.636                 < 10^–5
Diffusion start 8          0.640                 < 10^–5

Note
* All Consolidation start and Diffusion start variables have a 0.0126 standard error; both asymptote variables have a 0.00758 standard error.

The asymptotes are statistically close to their respective predicted equilibria, and convergence towards the asymptotes occurs in both treatments. The asymptote for the consolidation treatment, 0.707, is only borderline statistically distinct from the predicted value of 0.689 (p = 0.025). Of 476 total permits in the treatment market, the asymptote is only 8.568 permits above the predicted value, or 1.8 percent greater. The asymptote for the diffusion treatment is not statistically different from the predicted value of 0.579 (p = 0.568). These asymptotes are statistically distinct from one another (p < 10^–16), confirming that the market shares are converging towards points that are distinct from one another. The p-values in the right-hand column of the two lower sections of the table test the hypotheses that the starting points are, respectively, not less than and not greater than the asymptotes. All are rejected, indicating that market shares are moving during trade, and that they are moving toward distinct asymptotes as predicted by the general equilibrium model.5
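As a rough illustration of this convergence specification (not the authors' estimation code), the regressors can be built directly from the round and period indices; the sketch below uses hypothetical market shares and plain OLS, ignoring the two-way error components υ_j and e_t for simplicity:

```python
import numpy as np
import statsmodels.api as sm

# Two rounds of five periods each; shares drift toward a common asymptote.
rounds = np.repeat([1, 2], 5)
t = np.tile(np.arange(1, 6), 2).astype(float)
share = np.array([0.64, 0.66, 0.68, 0.69, 0.70,
                  0.64, 0.67, 0.69, 0.70, 0.71])

# Round-specific starting points carry weight 1/t; the common asymptote
# carries weight (t-1)/t, so no separate constant is needed.
start_1 = (rounds == 1) * (1.0 / t)
start_2 = (rounds == 2) * (1.0 / t)
asymptote = (t - 1.0) / t

X = np.column_stack([start_1, start_2, asymptote])
fit = sm.OLS(share, X).fit()
print(fit.params)  # [start, round 1; start, round 2; common asymptote]
```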
Discussion

Based on the record of tradable allowance systems in effectively reducing harvest below established limits and improving stock health and profitability in other fisheries, government and non-governmental panels have recommended that US policymakers give greater consideration to using such systems in fishery management. The use of tradable allowance systems has been a topic of discussion and debate in many other countries for a number of years. With the recent expiration of a 6-year federal moratorium on new programs, this system may now be implemented in US fishery management, and a number of fisheries are considering this newly available option. However, an obstacle to implementation that remains in many fisheries is stakeholder concern that market-based management will lead to industry consolidation. To address this fear, we designed an experiment based on market fundamentals from an actual fishery. One treatment, the consolidation treatment, demonstrated the feared case, where larger operators
buy the market share of smaller operators. While large operators' share increased, excess consolidation – beyond the level predicted by equilibrium – was not characteristic of the treatment on average. A second treatment demonstrated the opposite: smaller operators purchased some of the market share of larger operators after allowance trading was permitted. These results have two significant implications. First, market-based management can lead to market diffusion. Concentration is not, as many people fear, an automatic result of market-based management. As such, governments may not need to cap the market shares individual owners may hold, particularly out of apprehension about excess consolidation. Second, because observed market shares corresponded to equilibrium predictions in both treatments, the general equilibrium model is useful in identifying when concentration and diffusion are likely to occur, based on the profit functions actually present in the fishery. There are a couple of important limitations of this experimental design that
should be recognized in applying these results. First, neither of the two treatments allows for a firm's shutdown; only marginal scale changes are permitted. In a naturally occurring industry, shutdown is very likely for some firms. Second, both treatments include initial lease periods in their design. Field and laboratory analysis suggests that new asset markets that do not include initial lease periods, such as those designed for market-based management, can be highly volatile, producing widely varying outcomes. Newell et al. (2005) identify price dispersion, as much as 30 percent of mean price, as a prominent feature in the first 4 years of prices in a 30-species study of the New Zealand quota system; long-run dispersion is closer to 10 percent. Larkin and Milon (2000) find price ranges spanning from one to four times the mean price in the first 5 years of the Florida spiny lobster trap certificate program. In a laboratory analysis of tradable fishing allowances, Anderson and Sutinen (2005) argue that high levels of volatility support speculation in new markets. This volatility may lead to disequilibrium market shares, which could advantage larger firms that can afford to invest more speculatively in permit purchases, facilitating consolidation, or that can better diversify the market risk associated with holding a fluctuating asset. While these limitations need to be considered and may be addressed in future experiments, they do not alter our conclusion that both consolidation and diffusion may result from tradable allowance management. While this result does not allow us to predict whether there will be concentration in the Rhode Island lobster fishery if and when tradable allowances are implemented, it does suggest industry characteristics to consider. If larger operators experience higher marginal profits, evaluated at the level of trap certificates reflecting the initial cutbacks, then some shift in market share toward large operators is likely. For instance, overcapitalized operators could make use of more traps and would purchase permits from operators who have a lower marginal profit. On the other hand, if smaller operators obtain higher marginal profits, the shift may well go the other way. This might happen if, as people familiar with family-scale commercial fishing often claim, smaller family operations place a lot of value on the fishing lifestyle.
Acknowledgments

We are grateful to Meifeng Luo for engineering the experiment software, to Rhode Island Sea Grant (NOAA Award NA16RG1057) for financial support of the experiments and to the Rhode Island Agricultural Experiment Station for support of the Policy Simulation Laboratory.
Notes

1 An initial lease period is part of the design because Anderson and Sutinen (2006) demonstrate that it dramatically reduces price volatility and improves market performance.
2 The upper bound is used in the analysis since prices are empirically closest to the upper bound. This empirical regularity has not been systematically studied and may possibly
be explained by an endowment effect, where operators with large endowments withhold supply and drive prices to the upper end of the range.
3 The treatment endowments are designed such that a certain number of subjects acting as small operators could be removed without affecting the predicted equilibrium.
4 The length of trading periods was based on pilot experiments, such that all trading that would occur did occur.
5 Analysis of the Herfindahl–Hirschman Index measure of market concentration shows that, while the changes taking place are not enough to alter the market's classification under the Index, we do observe a strong bifurcation from the initial market concentration toward the predicted equilibrium points. Thus, using a standard measure of market share, we arrive at the same conclusion as looking directly at allocations.
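For reference, the Herfindahl–Hirschman Index mentioned in note 5 is simply the sum of squared percentage market shares; a minimal sketch with illustrative shares (not the experimental allocations):

```python
# HHI on shares expressed in percent; the index ranges from near 0 up to 10,000.
def hhi(shares):
    return sum(s ** 2 for s in shares)

# Eight operators with shares summing to 100 percent.
shares = [25, 25, 12.5, 12.5, 6.25, 6.25, 6.25, 6.25]
print(hhi(shares))  # 1718.75, a moderately concentrated market
```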
References

Anderson, C., 2004. How Institutions Affect Outcomes in Laboratory Tradable Fishing Allowance Systems. Agricultural and Resource Economics Review, 33 (2), 193–208.
Anderson, C. and J. Sutinen, 2005. A Laboratory Assessment of Tradable Fishing Allowances. Marine Resource Economics, 20 (1), 1–23.
Anderson, C. and J. Sutinen, 2006. The Effect of Initial Lease Periods on Price Discovery in Laboratory Tradable Fishing Allowance Markets. Journal of Economic Behavior and Organization, 61 (2), 164–180.
Arnason, R., 2002. ITQs in Practice: An International Review. In A. Hatcher, S. Pascoe, R. Banks and R. Arnason, eds. Future Options for Fish Quota Management. A report to the Department for the Environment, Food and Rural Affairs, Centre for the Economics and Management of Aquatic Resources, University of Portsmouth, UK, Report No. 58, pp. 54–70.
Davidse, W.P., 2001. The Effects of Transferable Property Rights on the Fleet Capacity and Ownership of Harvesting Rights in the Dutch Demersal North Sea Fisheries. FAO Fisheries Technical Paper No. 412. FAO, Rome.
Ford, W., 2001. The Effects of the Introduction of Individual Transferable Quotas in the Tasmanian Rock Lobster Fishery. FAO Fisheries Technical Paper No. 412. FAO, Rome.
Larkin, S. and J.W. Milon, 2000. Tradable Effort Permits: A Case Study of the Florida Spiny Lobster Trap Certificate Program. University of Florida Working Paper.
Matulich, S., R. Mittelhammer and C. Reberte, 1996. Toward a More Complete Model of Individual Transferable Fishing Quotas: Implications of Incorporating the Processing Sector. Journal of Environmental Economics and Management, 31 (1), 112–128.
Newell, R., J.N. Sanchirico and S. Kerr, 2005. Fishing Quota Markets. Journal of Environmental Economics and Management, 49 (1), 437–462.
Noussair, C., C. Plott and R. Riezman, 1995. An Experimental Investigation of the Patterns of International Trade. American Economic Review, 85 (3), 462–491.
NRC, 1999. Sharing the Fish: Toward a National Policy on Individual Fishing Quotas. National Research Council, National Academy Press, Washington, DC, pp. 139–191.
OECD, 1997. Towards Sustainable Fisheries: Economic Aspects of the Management of Living Marine Resources. Organization for Economic Cooperation and Development, Paris, 268 pp.
Pew Oceans Commission, 2003. America's Living Oceans: Charting a Course for Sea Change. Arlington, VA: Pew Oceans Commission.
Smith, V., G. Suchanek and A. Williams, 1988. Bubbles, Crashes and Endogenous Expectations in Experimental Asset Markets. Econometrica, 56 (5), 1119–1152.
Squires, D., J. Kirkley and C.A. Tisdell, 1995. Individual Transferable Quota as a Fisheries Management Tool. Reviews in Fisheries Science, 2 (2), 141–169.
Sutinen, J.G. and M. Soboil, 2003. The Performance of Fisheries Management Systems and the Ecosystem Challenge. A paper presented to the Reykjavik Conference on Responsible Fisheries in the Marine Ecosystem, Reykjavik, Iceland, October 2001.
Tietenberg, T., 2002. The Tradable Permit Approach to Protecting the Commons: What Have We Learned? In E. Ostrom, ed. The Drama of the Commons. Washington, DC: National Academy Press, pp. 197–232.
United States Commission on Ocean Policy, 2004. An Ocean Blueprint for the 21st Century. Final Report. Washington, DC: US Commission on Ocean Policy.
3 Caveat emptor Kyoto
Comparing buyer and seller liability in carbon emission trading

Robert Godby and Jason F. Shogren
Introduction

Tradable emission markets continue to be promoted as a cost-effective tool to manage global public goods such as climate protection. A prominent example of such a global emission trading system is in the 1997 Kyoto Protocol (UNFCCC 1999). Recall that the Kyoto Protocol required the leading industrialized countries to reduce their greenhouse gas (GHG) emissions by an average of 5 percent below 1990 levels by 2008–12.1 These reductions are severe relative to current GHG emission rates and will likely be costly to achieve (e.g. see Nordhaus and Boyer, 2000; Shogren, 2004). To minimize control costs, Kyoto allows for some flexibility in GHG mitigation, including the international trading of GHG emission quotas (Article 17). Emission trading allows regulated emitters to buy emission reduction efforts from other emitters – in effect, contracting emitters whose abatement costs are less than their own to make reductions for them (see Crocker, 1966; Dales, 1968). Emission markets have several appealing properties over traditional "command and control" regulation; chief amongst them is that market outcomes can, in theory, achieve emission reductions at least cost to society. Domestic trading programs in the United States indicate that realized cost savings can be substantial. Emission trade has also occurred at the international level, and was used successfully, though in a limited way, to administer reductions in the use of ozone-depleting substances required by the Montreal Protocol (see Solomon, 1999; Schmalensee et al. 1998; Stavins, 1998). Global emission trading is central to cost-effective climate policy because by some estimates it could halve compliance costs (e.g. CEA, 1998). The effectiveness of such global trading depends on the rules of enforcement and sanctions for nations that shirk on their emission commitments (e.g. Article 18 in Kyoto). Domestic trading programs in the United States and elsewhere have relied on strong enforcement and sanctioning frameworks to ensure market compliance. Demands posed by national sovereignty concerns, however, make the usual organization of emission permit markets problematic in an international setting, particularly with respect to Article 18 (see Rabkin, 1999; Barrett, 2003). In international treaties, it has been noted that "sanctioning authority is rarely
granted by treaty, rarely used when granted, and likely to be ineffective when used" (Chayes and Chayes, 1998, p. 32). As noted by Barrett (2003, p. 323), the Kyoto treaty "ignored enforcement. This was something that the negotiators thought they could add later. But this problem should be viewed from the other direction. . . . Enforcement is the main challenge, and it needs to be addressed directly". The critical issue is who should be held liable for overselling permits beyond quotas – the seller or the buyer country? Relatively weak under-compliance penalties and ineffective monitoring methods create the incentive for selling nations to oversell permits and shirk on their emission responsibilities, given the magnitude of the potential permit revenues. Proceeds from international GHG trading could be significant – projected emission trading revenues under Kyoto for the countries of the former Soviet Union may be as high as $17 billion (Edmonds et al., 1999). Cason (2003) notes that this value, measured in 1992 dollars, is equivalent to three-quarters of Russia's trade surplus in 1997, or equal to the value of all lending between the US and Russia between 1990 and 1996. Overselling eliminates the benefits of emission trading in reducing GHG emissions. Kyoto and climate change provide our motivating example, but the problem of relatively weak enforcement holds for any proposed emission trading system that crosses the borders of sovereign nations. International enforcement of emission reductions inside a sovereign nation is unlikely to be authoritative if based strictly on moral suasion or threats of potential trade sanctions (see Rabkin, 1999). To most economists, seller liability is the obvious choice if the goal is to minimize transaction costs, because all permits then have identical value to buyers and only one market is needed to transact all permits. In domestic non-climate emission trading programs, sellers are held liable for any permit shortfalls. To others, however, buyer liability is the choice given their objective – to make sure that the relatively rich buyer nations (e.g. US, Japan) are liable for any shortfalls in emission reductions made by the relatively poorer sellers (e.g. Russia, Costa Rica); see for instance Baron's (1999) description of liability rules that could be used to support international GHG trade. Here caveat emptor rules. Permits sold by non-compliant nations would be returned to the sellers, and the buyers would face the consequences of any resulting permit shortfalls. Conveniently, buyer liability avoids the difficult issues arising from the negotiation of tough enforcement measures while creating incentives for efficient market outcomes (e.g. see the Third Assessment Report of the IPCC, 2001). The argument for buyer liability arises from a belief that buying implies a commitment by the purchaser to ensure emission compliance, and this commitment creates an incentive for buyers to seek out sellers likely to comply with their emissions obligations, thus increasing overall market compliance (see Victor, 2001). While opponents contend that buyer liability will undermine efficient emission trading by depressing market prices and trade volume due to its increased risk, proponents counter that fears of thin markets due to high transaction costs are exaggerated.
Buyer liability rules increase the complexity of global emission markets. Differences in the perceived likelihood of sellers being in compliance would result in buyers having different valuations for permits based on their origin, and in the need for separate markets for permits offered by each seller. Differences in permit prices across sellers and incomplete information regarding the likelihood of seller compliance could both decrease the economic efficiency of using emission markets to achieve emission goals and increase the economic and environmental risks faced by buyers. Such concerns have been downplayed by some groups lobbying for particular types of liability rules in global emission trade negotiations. For example, the Environmental Defense Fund (1998) said:

in view of the limited set of enforcement tools available to an international regime, nations should adopt a limited but effective form of "buyer liability" to create incentives in favor of compliance and to ensure that the environment is made whole.

The idea is that buyer liability results in buyers purchasing from those sellers they believe will honor commitments, an outcome that helps the climate. Other observers, while less optimistic about the cost savings, note "buyer liability increases the environmental performance of the system by reducing the probability of invalid permits circulating" (Kopp and Toman, 1998). The working thesis is that buyer liability leads to greater climate protection, as markets form to capture the gains from trade and reputations work to police market behavior (e.g. European Business Council for a Sustainable Energy Future, 2000). Also see Woerdman (2001), who argues:

from the perspective of environmental effectiveness, buyer liability is preferable to seller liability, since it would strengthen compliance incentives by discouraging buyers to purchase emission reductions from (entities in) countries which appear to be heading towards non-compliance. Both from an economic and environmental perspective, buyer liability is still preferable to a quantitative restriction on trade, because such a restriction crudely reduces the overall cost saving potential, whereas buyer liability increases transaction costs but leaves the efficiency untouched.

This chapter explores the relative efficiency of a caveat emptor carbon trading program. Using a stylized Kyoto emission market double auction experiment in which liability rules are the treatment, we show that buyer liability under relatively weak international enforcement leads to the worst possible outcome – less climate protection at greater cost. To our knowledge, no empirical studies have attempted to compare buyer and seller liability outcomes in markets. While not directly comparing seller liability and buyer liability, Cason (2003) finds that in experimental buyer liability markets where sellers can make a commitment to honor their obligations, market performance is enhanced relative to when such commitments cannot be made or
are unobservable. Haites and Missfeldt (2001), using a simulation study, find that the introduction of supplementary rules, such as reserve accounts and minimum eligibility requirements for market participation, can be sufficient to induce market formation and increase market efficiency in international emission markets. Such rules may not, however, eliminate all the losses they impose on the market. We consider relatively weak enforcement levels to reflect the view held by many observers that potential sanctions on sovereign nations over international emissions trading are likely to be meager (see for example Cooper, 1998; Schelling, 1997). Our findings indicate that buyer liability lowers economic efficiency and distorts permit prices and market production patterns. We find such rules also significantly worsen environmental performance through greater noncompliance. Adoption of buyer liability with relatively weak enforcement renders emission trading inadequate by subverting the goals of Kyoto – creating less environmental protection at greater economic cost. We test the robustness of this result by creating a setting more likely to produce spontaneous "good seller" reputations in the permit market, such that buyer liability would be more likely to succeed: we increased the monitoring probability and introduced a random ending point in the markets. Overall, our main finding was relatively robust. Higher enforcement reduces output and increases emission compliance under seller liability, as predicted. But contrary to prediction, permit trade occurred under buyer liability as some sellers reduced production to support trade. A random ending point led to outcomes similar to those observed with a fixed end point. Under buyer liability, although the results were not inconsistent with a weak reputation effect, we still observed more permit trading and production than expected. Our laboratory design is a test-of-concept experiment (see e.g. Plott, 1983). We do not calibrate the experimental parameters to the exact features one might expect in a climate treaty; rather, we test the concept of buyer liability within a global emission market. We work with climate, but several environmental policy initiatives have considered buyer liability rules. Examples include the EPA Open Market Rule (since withdrawn), several proposed effluent markets in the US, the PERT emission trading market in Canada and a proposed emission market for SO2/NOx in southern Ontario (Ontario Government Ministry of the Environment, 2001; PERT, 2000). To date, however, no studies exist to determine how the choice of liability rules affects market outcomes. Herein the liability rule is the treatment variable, and the general implications apply to a wide variety of settings.
Experimental environment, design and hypotheses

The laboratory is an effective means to test economic institutions, and has been used extensively in the development of emission trading programs. See for instance Muller and Mestelman (1998) for an overview of emission permit
experiments conducted to evaluate American and Canadian program proposals. Bohm and Carlen (1999) and Carlen (1999) report experiments testing institutional issues other than liability in international GHG trade (also see Soberg, 2000). Cason (2003), for example, considers how buyer liability rules affect market outcomes when firms commit to technologies that ensure emission compliance to reduce uncertainty about their ability to sell permits.2 Our design extends this work by comparing how buyer and seller liability affect market outcomes. We compare the performance of three liability rules: (i) seller liability – seller non-performance inflicts sanctions on the seller only; (ii) buyer liability – seller non-performance inflicts sanctions on buyers only, causing trades to be canceled and forfeiture of the buyer's permit costs; and (iii) buyer liability with refund – seller non-performance inflicts sanctions on buyers only, while sellers forfeit any permit revenues. For buyer liability, it has been suggested that the incentive for sellers to oversell permits could be eliminated with the creation of "escrow accounts" (see Baron, 1999). Expenditures for permit purchases are deposited in these accounts and not released to sellers until it has been verified that they have not overproduced. We include a refund treatment to test this hypothesis explicitly. Since an international trading framework remains unspecified, our laboratory environment purposefully focuses on the essential features of liability choice in a market under relatively weak international enforcement of emission shirking by sovereign nations.

Experimental environment, treatments and procedures

The 12 experimental sessions reported here were each conducted with a different set of student subject volunteers. To investigate the effect of alternative liability rules on market outcomes, one seller and two buyer liability treatments were considered using an ABA crossover design. Each session conducted three treatment rounds, as described in Table 3.1.

Table 3.1 Experimental design

Session  Symbol     Round 0   Round 1                            Round 2                            Round 3
011003   SBS-1      Practice  Seller liability                   Buyer liability                    Seller liability
011004   SBS-2      Practice  Seller liability                   Buyer liability                    Seller liability
011017   BSB-1      Practice  Buyer liability                    Seller liability                   Buyer liability
011018   BSB-2      Practice  Buyer liability                    Seller liability                   Buyer liability
011010   SRS-1      Practice  Seller liability                   Buyer liability w/refund           Seller liability
011011   SRS-2      Practice  Seller liability                   Buyer liability w/refund           Seller liability
011012   RSR-1      Practice  Buyer liability w/refund           Seller liability                   Buyer liability w/refund
011015   RSR-2      Practice  Buyer liability w/refund           Seller liability                   Buyer liability w/refund
011022   BRB-1      Practice  Buyer liability                    Buyer liability w/refund           Buyer liability
011023   BRB-2      Practice  Buyer liability                    Buyer liability w/refund           Buyer liability
011025   RBR-1      Practice  Buyer liability w/refund           Buyer liability                    Buyer liability w/refund
011026   RBR-2      Practice  Buyer liability w/refund           Buyer liability                    Buyer liability w/refund
020307   SBS-HM-1   Practice  Seller liability High enforcement  Buyer liability High enforcement   Seller liability High enforcement
020315   SBS-HM-2   Practice  Seller liability High enforcement  Buyer liability High enforcement   Seller liability High enforcement
020415   BSB-HM-1   Practice  Buyer liability High enforcement   Seller liability High enforcement  Buyer liability High enforcement
020418   BSB-HM-2   Practice  Buyer liability High enforcement   Seller liability High enforcement  Buyer liability High enforcement
020604   SB-RE-1    Practice  Seller liability                   Buyer liability, random end point  N/A
020607   SB-RE-2    Practice  Seller liability                   Buyer liability, random end point  N/A
020615   SB-RE-3    Practice  Seller liability                   Buyer liability, random end point  N/A

Each treatment "round" consisted of eight market periods, so 24 market periods were conducted in each experimental session. The laboratory environment in each session was designed to replicate the market incentives that would be present in possible emission trading markets, without exactly replicating the decisions required in any particular market setting. Each session involved eight experimental subjects who were informed that they represented a firm producing and selling a product at an announced market price. Production costs varied across subjects, though production capacity did not. All costs were subjects' private information, and production profits earned "lab-dollars". In each period, each subject chose a level of production; however, each was aware that their actual production level could be one more, one less or the exact level they chose, with equal probability. Subjects recorded production choices on their own computer terminal. Subjects were further informed that while production of each unit of output was expected to occur only if they held a permit to do so, it was possible to produce without a permit – but each faced a fixed and known probability of
Symbol
SBS-1 SBS-2 BSB-1 BSB-2 SRS-1 SRS-2 RSR-1 RSR-2 BRB-1 BRB-2 RBR-1 RBR-2 SBS-HM-1 SBS-HM-2 BSB-HM-1 BSB-HM-2 SB-RE-1 SB-RE-2 SB-RE-3
Session
011003 011004 011017 011018 011010 011011 011012 011015 011022 011023 011025 011026 020307 020315 020415 020418 020604 020607 020615
Table 3.1 Experimental design
Buyer liability Seller liability Buyer liability w/refund Buyer liability Buyer liability w/refund Seller liability High enforcement Buyer liability High enforcement Seller Liability
Practice
Practice
Practice
Practice
Practice
Practice
Practice
Practice
Seller liability
Treatment
Treatment
Practice
1
0
Round
Buyer liability High enforcement Seller liability High enforcement Buyer liability-Random end point
Buyer liability
Buyer liability w/refund
Seller liability
Buyer liability w/refund
Seller liability
Buyer liability
Treatment
2
Seller liability High enforcement Buyer liability High enforcement N/A
Buyer liability w/refund
Buyer liability
Buyer liability w/refund
Seller liability
Buyer liability
Seller liability
Treatment
3
being monitored individually to determine whether they held enough permits to cover their production. If found to have produced without enough permits to cover production, individual subjects faced a fixed fine for every unit produced in excess of their permit inventory. Each subject was endowed with an initial lab-dollar balance at the beginning of the experimental session. At the start of each period, some subjects were also endowed with permits while others were not. After production decisions were made, subjects with permits (referred to as sellers) had the opportunity to sell permits to those endowed with none (buyers), using a computer-mediated double auction that opened for a fixed period of time. These markets were open for 150 seconds in rounds 1 and 2, and 120 seconds in round 3 of each session. Subjects' permit endowments and auction roles remained constant across periods throughout the experiment. Once purchased, permits could not be resold to other subjects and could not be carried over to future periods. After this market closed, subjects were informed of their actual production levels, production profits earned, their permit revenues or expenditures, whether they were monitored, any monitoring fines incurred and their earnings in the period, and then a new period began. At the end of the experiment, subjects were paid privately in cash an amount based on their accumulated lab-dollar earnings, converted to US dollars at a fixed and announced exchange rate. Individual subject earnings ranged from $27 to $73, with a mean payment of $45.70 and a standard deviation of $7.02.

We imposed three treatments on the market, two in each experimental session (see Table 3.1). In seller liability (SL) treatments, any subject found to have produced without enough permits was fined for each unit produced without a permit. Buyer liability (BL) treatments imposed per-unit fines on any buyer found to have produced without permits, while sellers faced no such fine. Instead, if sellers were monitored and found to have overproduced, any permit sales made in the period were canceled in reverse order until they either had enough permits to cover their production or all sales had been canceled. After permit sales were canceled, sellers were not fined even if their production exceeded their permits. Buyers who had to return permits were then monitored and fined for each unit produced without a permit. Sellers also kept any permit sales revenues, even if permits were returned. Under buyer liability with refund (BLR) treatments, the same rules applied, with the exception that permit expenditures for canceled trades were refunded to buyers. Under both buyer liability treatments, any buyer forced to return permits to a seller was informed of this outcome on their computer. Using a whiteboard at the front of the laboratory, any seller who had sales canceled was identified by their ID number, the period in which the cancellation occurred and the number of sales canceled.

A total of 96 subjects were recruited to participate in one of the 12 experimental sessions, each lasting 2.5–3 hours. Of these subjects, 87 were undergraduates, primarily from business disciplines, while nine were first- or second-year graduate students in Finance or Economics. One session included three graduate
students, while six other sessions included one graduate student. Since none had previous experience with the market environment or the decisions they faced, experimental sessions began with each subject being given a folder and randomly assigned to an experimental role and ID number. Following standard experimental protocol, each folder contained worksheets and experimental instructions particular to that session, describing how to use the computer software developed for the experiment and subjects' experimental tasks. Subjects were asked to follow along as the instructions were read aloud. All instructions are available by request. An instruction phase followed that lasted approximately 45 minutes and included two unpaid practice periods, one using the training parameters and one unpaid and unreported practice session using the actual experiment parameters. Seller liability was used in practice sessions, since it is a familiar setting for subjects. To aid subjects in their production choices, a profit calculator was provided on their computer to explore their potential profits for various production and permit holding levels if they were or were not monitored.

Table 3.2 reports the actual subject production costs, initial permit endowments, redemption values (production price) of output, the probability of being monitored and the fine per unit of excess production used in the experiment. For purposes of analysis, we assume that one unit of emissions was created for each unit of output produced. We never mentioned pollution or production emissions during the experiment, and we used context-neutral terms in the instructions to avoid potential framing effects. Figure 3.1 shows the general experimental procedure in each session. Experiment instructions included details regarding the number of periods in each round and the number of rounds in the session. Under seller liability conditions, buyers should be indifferent to the origin of any permits purchased, so only one auction market opened. In buyer liability treatments, we expected buyers to be concerned with the source of a permit and with how many permits any seller had sold, since buyers might expect different sellers to have different probabilities of overproducing. Buyers' valuations for permits could thus differ across sellers; we accommodated these concerns by simultaneously opening four permit auctions, in which each seller could sell permits only in the market corresponding to their own ID number.

All sessions used the same set of random monitoring and stochastic production outcomes in each period to ensure session outcomes were comparable. A monitoring probability of one in six was maintained for each subject in every period. Prior to the experiment, random production outcomes were generated using a random number generator drawing over the unit interval. To determine which subjects were monitored in any period, a dice roll was used, with the number rolled determining the subject monitored in the period. Since there were eight subjects in each period, two different subjects were excluded from each period's dice roll; thus, every three periods, another dice roll was necessary to determine which of the six excluded subjects would also be monitored, to maintain the stated monitoring probability.
Table 3.2 Experiment conditions and producer costs

                      Seller1  Seller2  Seller3  Seller4  Buyer5  Buyer6  Buyer7  Buyer8

Marginal costs of production, SL treatments
Production unit 1     128      125      60       60       8       5       2       2
Production unit 2     133      136      139      142      133     136     139     142
Production unit 3     154      151      148      145      154     151     148     145
Production unit 4     157      161      164      167      157     161     164     167
Production unit 5     175      185      195      205      205     225     235     245
Production unit 6     325      315      305      295      285     275     265     255

Marginal costs of production, BL and BLR treatments
Production unit 1     115      115      105      105      25      25      25      25
Production unit 2     133      136      139      142      133     136     139     142
Production unit 3     154      151      148      145      154     151     148     145
Production unit 4     157      161      164      167      157     161     164     167
Production unit 5     175      185      195      205      215     225     235     245
Production unit 6     325      315      305      295      285     275     265     255

Initial permits       3        3        3        3        0       0       0       0
Initial balance       500      500      500      500      3000    3000    3000    3000
Exchange rate         0.70     0.70     0.70     0.70     1.00    1.00    1.00    1.00

Production price: 170
Monitoring fine/unit: 240
Monitoring probability: 1/6 (0.167)
The monitoring probability of one in six was used intentionally, under the presumption that subjects would have experience with rolling dice. The analogy to dice rolls could then be used to convey the random nature of monitoring, and the concept of statistical independence was more likely to be understood by all subjects. Subjects were also told the approximate odds of being monitored in every period of a round and of not being monitored in any period of a round.
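A minimal sketch of how such a monitoring schedule could be generated (our reconstruction under the stated rules, not the authors' procedure): each period a die roll selects one of six eligible subjects, and every third period a catch-up roll selects one of the six subjects excluded over those periods:

```python
import random

def monitoring_schedule(n_periods=8, n_subjects=8, seed=7):
    """Return a list of (period, monitored subject) pairs.

    Each period, two subjects are left off the six-sided die roll; every third
    period an extra roll draws from the six excluded slots, so each subject
    faces roughly a one-in-six monitoring chance per period."""
    rng = random.Random(seed)
    subjects = list(range(1, n_subjects + 1))
    schedule, excluded_pool = [], []
    for period in range(1, n_periods + 1):
        excluded = rng.sample(subjects, 2)
        eligible = [s for s in subjects if s not in excluded]
        schedule.append((period, rng.choice(eligible)))     # the period's die roll
        excluded_pool.extend(excluded)
        if period % 3 == 0:                                 # catch-up roll
            schedule.append((period, rng.choice(excluded_pool)))
            excluded_pool = []
    return schedule

print(monitoring_schedule())
```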
Figure 3.1 Session procedure (sign-in; instructions, risk-aversion test and double-auction practice; assignment of cash balances, production costs and auction roles; then, each period: permit inventory assignment, production decision, permit market open and close, stochastic production outcome, monitoring with a 1/6 probability and settlement under the relevant liability rule, and computation of period profits; after period 8 a new round begins, and the session ends after round 3).
After the eight periods of each treatment round were completed, subjects opened a new set of instructions that described the next liability treatment. Earnings made in the previous round carried forward; accumulated earnings could affect risk-taking behavior if people have decreasing or increasing absolute risk aversion over wealth. To ensure that subjects did not focus on previous market outcomes, production costs were changed in each round without affecting aggregate production cost conditions. As shown in Table 3.2, cost conditions were similar across treatments, with only the first-unit costs being changed to equalize predicted earnings differences across treatments. In the third round of each session, which reintroduced the liability and cost conditions used in the first round, the costs subjects 3 and 4 faced in the first round were switched with those of subjects 1 and 2, while the costs of subjects 5 and 6 were switched with those of subjects 7 and 8. Subjects were unaware that the overall market cost structure was the same as that used in the first round, and were only told their costs had changed. Induced permit demand and supply curves were also shifted by paying different fixed "permit subsidies" in each round. Subsidies were paid to all subjects holding permits at the end of a period, regardless of whether their permits covered production.

Experimental predictions

In our laboratory environment, predictions of market outcomes under the alternative liability rules can be identified using expected profit maximization under the assumption of risk neutrality. Table 3.3 summarizes our predicted market outcomes:3

Prediction 1: Permit prices will be lower under buyer liability than under seller liability. Prices under buyer liability will be higher if refunds are possible than when they are not, but less than those under seller liability; thus the following permit price predictions arise: P_SL ≥ P_BLR ≥ P_BL.

Firms that produce without a permit face a potential fine. Under seller liability, all agents value permits for this reason. Their valuation of each permit depends on the expected market fine if they are non-compliant, or on the marginal profit generated by the unit of output the permit is to be applied to, whichever is less. In contrast, buyer liability removes the need for sellers to value permits, which reduces total permit market demand. Buyers face higher risk and lower expected profits under these rules, since their probability of being monitored increases when they buy permits from a seller that overproduces. A buyer's valuation now depends on the risk of overproduction they perceive for each seller, which may depend on their priors and on the number of permits these sellers have already sold. Refunds raise buyers' expected profits, but do not remove the discount buyer liability creates. For these reasons, the permit price predictions under seller liability (between 33 and 40) exceed those under the buyer liability with refund (between zero and three) and buyer liability (between zero and two) treatments in Table 3.3.
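A back-of-envelope check of the seller liability band, using the Table 3.2 parameters (an illustration, not a full equilibrium computation): with a 240 fine and a one-in-six monitoring probability, the expected fine is 40, so a risk-neutral agent values a permit at the lesser of 40 and the marginal profit of the unit it covers. The marginal_profit argument below is illustrative:

```python
FINE, P_MONITOR = 240, 1.0 / 6
EXPECTED_FINE = P_MONITOR * FINE  # = 40, the ceiling on any permit's value

def permit_value(marginal_profit):
    # worth the smaller of the expected fine avoided and the unit's profit
    return min(EXPECTED_FINE, marginal_profit)

print(EXPECTED_FINE)       # 40.0 -> the upper end of the 33-40 prediction
print(permit_value(33.0))  # 33.0 for a unit whose marginal profit is 33
```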
Prediction 2: The quantity of permits traded will be greater under buyer liability than under seller liability. Quantity traded will be unaffected by the presence of permit refunds; thus the following permit trade volume predictions arise: Q_BL = Q_BLR > Q_SL.

In our environment, even though under buyer liability permit demand functions rotate downward due to increased risk (the horizontal intercept remains fixed), predicted trading volume increases under the assumption of risk neutrality. Sellers no longer value permits, creating a vertical supply curve at the total permit endowment in the market. Given our assumption of risk neutrality, and using the permit and cost parameters of this experiment, the quantity traded equals the number of permits endowed to sellers whenever buyer demand equals or exceeds supply. Given that the length of the round is finite and known, one might expect a "lemons-market" outcome here – that trade in permits would disappear, since all buyers should expect all sellers to overproduce. In this experiment, however, given the probabilistic monitoring, demand for permits can still exist even if all buyers expect all sellers to shirk. Although the increased risk of default could decrease buyer demand to the point where expected trading volume is less than that under seller liability, the market conditions created here result in predicted volumes of six under seller liability and 12 under both buyer liability treatments when risk neutrality is assumed. Risk aversion among subjects would lower the predicted trading volume, although exact predictions depend on subjects' degrees of risk aversion.

Prediction 3: Buyer liability in permit markets distorts production outcomes, increasing production and emissions on the seller side of the market without necessarily reducing buyers' output and emissions; thus the following predictions regarding emissions arise: E_BL = E_BLR > E_SL.

Buyer liability effectively relaxes the emission constraint faced by permit sellers and transfers it to buyers. Probabilistic monitoring and higher predicted permit inventories, however, increase buyers' predicted production levels relative to those under seller liability by one unit in total. Without an effective emission constraint under the liability rules imposed here, seller subjects increase production to their unconstrained profit-maximizing levels, increasing their total production by seven units. Predicted aggregate market output increases by eight units under both buyer liability conditions. In general, whether the imposition of buyer liability results in buyers' production increasing or decreasing depends on the probability of monitoring. Seller output, however, always increases to unrestricted levels. Given the assumption of emissions proportionate to output, the resulting flow of total emissions increases and the stock of GHGs accumulates faster when buyer liability is used, which reduces the effectiveness of emission trading in achieving cost-effective climate protection.
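To illustrate the unconstrained seller choice behind Prediction 3, the sketch below maximizes expected profit over the stochastic production outcome (chosen output realized as one less, equal or one more with equal probability), using the Seller1 cost column from the BL panel of Table 3.2. Applied to all four seller columns, it yields choices of 4, 4, 4 and 3, matching the aggregate seller production of 15 in Table 3.3:

```python
PRICE = 170
EXPECTED_FINE = 240 / 6  # expected per-unit fine when liability binds

def expected_profit(q, costs, permits=0, liable=True):
    """Expected profit of choosing q units when actual output is q-1, q or q+1."""
    total = 0.0
    for actual in (q - 1, q, q + 1):
        actual = max(0, min(actual, len(costs)))
        profit = float(sum(PRICE - c for c in costs[:actual]))
        if liable:
            profit -= EXPECTED_FINE * max(0, actual - permits)
        total += profit / 3.0
    return total

seller1_costs = [115, 133, 154, 157, 175, 325]   # BL panel, Table 3.2
best = max(range(len(seller1_costs) + 1),
           key=lambda q: expected_profit(q, seller1_costs, liable=False))
print(best)  # 4: an effectively unconstrained seller expands to four units
```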
Results

We first present the key result, and then discuss the specific market factors that created this outcome. Since liability rules in climate negotiations are debated on two main criteria – cost effectiveness (or market efficiency) and carbon emission outcomes – we summarize our key finding over these criteria.

Result 1: Instituting buyer liability in the laboratory markets decreased market efficiency and increased carbon emissions relative to levels observed under seller liability. As subjects gained experience, the inefficiency and emission levels worsened under buyer liability, while in seller liability treatments they improved.

Buyer liability leads to less economic efficiency and lower environmental protection. Figure 3.2 captures this finding by describing the average efficiency and emissions-via-production by treatment and round. Better outcomes – higher efficiency and lower emissions – occur in the lower left corner; worse outcomes – higher emissions and lower efficiency – are in the upper right.

Figure 3.2 Efficiency and emission outcomes by treatment (over-emission percentage plotted against unrealized efficiency percentage for each treatment and round: SL-1 to SL-3, BL-1 to BL-3 and BLR-1 to BLR-3).

Efficiency here is the ex ante expected profit a subject would realize, given their period production and permit holding choices, relative to the expected profit possible had they made the predicted decisions. The measure indicates the potential efficiency given subject choices (not realized outcomes), production and monitoring outcomes, and the monitoring and production choices of others, over the last three periods of each round by treatment. Realized ex ante efficiency is computed as 1 − ((predicted Eπ − actual Eπ)/predicted Eπ), given subjects' decisions recorded in
the last three periods of each round. Emission levels are the percentage deviation from the emission quota defined by the total number of permits available. Figure 3.2 shows that the highest efficiency outcomes were always observed under seller liability: average efficiency in the last three periods of all seller liability treatments was 82.1 percent. Efficiency under buyer liability was significantly lower – 71.6 percent and 73.9 percent across all buyer liability treatments without and with refund, respectively. Efficiency increased with experience in the seller liability treatments, but worsened under buyer liability. The highest average efficiency and low emissions occurred under seller liability in round three (SL-3) of the experimental sessions, while the lowest average efficiency and high emissions were observed in third-round observations of the buyer liability treatments (BLR-3 and BL-3). These results strongly support Prediction 3: in Figure 3.2 there is little difference in average emission levels across the buyer liability treatments, while average seller liability emission levels were always lower than those under either buyer liability treatment, regardless of experience. Emission levels directly reflect production levels in our environment. Table 3.3 reports that differences in mean production outcomes by round are stable over time, and reflect the qualitative differences predicted between treatments. Using ANOVA analysis for descriptive purposes, mean production choices by sellers and buyers appear to be influenced by treatment (p = 0.00 and p = 0.03), experience (p = 0.00 in both cases) and their interaction (p = 0.00 in both cases).4
Figure 3.3 Mean aggregate buyer production by treatment (buyer production choices by period for the seller liability, buyer liability and buyer liability with refund treatments, rounds 1–3).

Figure 3.4 Mean aggregate seller production by treatment (seller production choices by period for the seller liability, buyer liability and buyer liability with refund treatments, rounds 1–3).

Figure 3.5 Mean permit prices by treatment (normalized prices by period for the seller liability, buyer liability and buyer liability with refund treatments, rounds 1–3).
Figure 3.3 presents a time series of mean buyer production choices by treatment and round, and indicates that treatment effects occurred early in rounds. In rounds in which subjects were less experienced, buyers often overproduced. But as they gained experience, both within and across rounds, average production (emission) levels chosen across all buyers converged toward the predicted levels (see Table 3.4). Figure 3.4, which presents a time series of mean seller production choices by treatment and round, shows a contrasting outcome. Considering sellers' production (emission) choices (see Tables 3.3 and 3.5, and Figure 3.4), the effect of treatment seems apparent. On average, mean production (emission) choices by sellers increased by approximately four units under buyer liability relative to choices under seller liability conditions.

Table 3.3 summarizes the results on permit prices, transaction volumes and production observed by round and treatment. Overall, the results support our benchmark predictions regarding permit prices and volume, which we summarize below.

Result 2: Average permit prices deviate by treatment as predicted – prices were greater under seller liability than under buyer liability, and buyer liability prices were only slightly lower than those observed under buyer liability with refund.

The results in Figure 3.5 and Tables 3.3 and 3.6 support Prediction 1 – we see lower permit prices under buyer liability, especially after subjects gain experience with the markets.5 Figure 3.5 reports average permit prices by treatment over time. Direct comparison of seller and buyer liability (with and without refund) price outcomes indicates that mean prices are always higher under seller liability conditions in later periods, while comparison of the two buyer liability treatments, both within and across sessions, indicates that although variation in observed prices occurs, pricing behavior appears essentially similar. A Kruskal-Wallis test of mean prices by treatment indicates a significant effect due to treatment (p = 0.00). Prices under the buyer liability treatments are also shown to differ significantly using a Mann-Whitney U-test, whether across all three rounds, the last two or the third round only (p = 0.00). We confirm these unconditioned tests with a random effects regression:

$$P_{it} = \alpha + \beta' x_{it} + u_i + e_{it}$$

where $P_{it}$ is the price observation at time t of session i in the experiment; $\alpha$ is an intercept; and $\beta$ is a vector of coefficients on the liability rule and order effects as defined in the matrix $x_{it}$.6 We assume the random disturbance $e_{it}$ exhibits first-order autocorrelation with previous disturbances, such that $e_{it} = \rho_i e_{it-1} + \varepsilon_{it}$, where $\rho_i$ is the autocorrelation coefficient in session i and $\varepsilon_{it}$ is a classical disturbance term. We estimate:

$$P_{it} = a + b_1 BL + b_2 BLR + b_3 Rnd2 + b_4 Rnd3 + b_5 BL{\times}Rnd2 + b_6 BL{\times}Rnd3 + b_7 BLR{\times}Rnd2 + b_8 BLR{\times}Rnd3.$$
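A minimal sketch of this estimation (hypothetical data; for brevity only the round-2 order dummy and its interactions are included, and the session-specific AR(1) disturbances the chapter models are replaced by a simple session random effect):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "session": np.repeat(np.arange(12), 6),
    "BL":      np.tile([0, 1, 0, 1, 0, 1], 12),
    "BLR":     np.tile([0, 0, 1, 0, 1, 0], 12),
    "Rnd2":    np.tile([0, 0, 0, 1, 1, 1], 12),
})
# Fabricated prices: high under SL, low under BL/BLR, plus noise.
df["price"] = 35 - 25 * df["BL"] - 22 * df["BLR"] + rng.normal(0, 3, len(df))

model = smf.mixedlm("price ~ BL + BLR + Rnd2 + BL:Rnd2 + BLR:Rnd2",
                    df, groups=df["session"])
print(model.fit().params)
```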
Table 3.3 Experiment results by treatment

                  Mean permit     Aggregate permit holdings     Aggregate production choice    Total
                  price           Buyers         Sellers        Sellers         Buyers

Treatment: SL
Prediction        33–40           6              6              8               8              16
Round 1           42.39 (15.34)   5.59 (2.33)    6.41 (2.33)    9.84 (1.92)     11.25 (1.93)   21.09 (2.88)
Round 2           24.84 (9.35)    5.84 (1.39)    6.16 (1.39)    10.69 (1.89)    9.19 (1.42)    19.88 (1.98)
Round 3           40.82 (18.05)   5.28 (1.71)    6.72 (1.71)    9.47 (1.48)     9.88 (1.36)    19.34 (2.12)

Treatment: BL
Prediction        0–3             12             0              15              9              24
Round 1           5.63 (4.10)     6.84 (2.57)    5.16 (2.57)    13.91 (1.33)    9.44 (1.61)    23.34 (1.65)
Round 2           8.72 (8.58)     7.03 (1.96)    4.97 (1.96)    14.38 (2.32)    9.97 (1.75)    24.34 (2.47)
Round 3           4.26 (4.42)     7.25 (3.20)    4.75 (3.20)    15.50 (0.88)    9.00 (1.95)    24.50 (2.33)

Treatment: BLR
Prediction        0–2             12             0              15              9              24
Round 1           21.07 (33.64)   6.69 (2.07)    5.31 (2.07)    13.38 (1.60)    11.03 (1.69)   24.41 (2.69)
Round 2           7.76 (7.95)     5.03 (1.60)    6.97 (1.60)    13.84 (2.22)    8.75 (1.50)    22.59 (2.53)
Round 3           9.98 (8.95)     7.94 (1.93)    4.06 (1.93)    14.69 (1.26)    9.38 (2.17)    24.06 (2.97)

Observed cells report means with standard deviations in parentheses.
Note: Predictions assume risk-neutral traders and, in the case of both buyer liability treatments, that buyers assume a probability of p = 1 that sellers will overproduce.
Table 3.4 Buyer mean production choices (periods 6–8)

Session              Round 1          Round 2          Round 3
                     Mean (sd) n      Mean (sd) n      Mean (sd) n
11003 SL-BL-SL       3.08 (0.14) 3    2.58 (0.14) 3    2.75 (0.25) 3
11004 SL-BL-SL       2.08 (0.52) 3    2.42 (0.14) 3    2.08 (0.38) 3
11010 SL-BLR-SL      2.67 (0.29) 3    2.50 (0.25) 3    2.33 (0.14) 3
11011 SL-BLR-SL      2.58 (0.14) 3    2.00 (0.00) 3    2.33 (0.38) 3
11012 BLR-SL-BLR     3.00 (0.00) 3    2.42 (0.14) 3    1.67 (0.14) 3
11015 BLR-SL-BLR     2.25 (0.50) 3    1.83 (0.14) 3    1.83 (0.38) 3
11017 BL-SL-BL       2.75 (0.25) 3    2.67 (0.14) 3    2.92 (0.14) 3
11018 BL-SL-BL       2.33 (0.14) 3    2.00 (0.00) 3    2.25 (0.00) 3
11022 BLR-BL-BLR     2.67 (0.14) 3    3.08 (0.14) 3    2.92 (0.29) 3
11023 BLR-BL-BLR     2.75 (0.25) 3    2.50 (0.25) 3    2.75 (0.00) 3
11025 BL-BLR-BL      2.08 (0.14) 3    2.33 (0.14) 3    2.33 (0.29) 3
11026 BL-BLR-BL      2.25 (0.00) 3    1.67 (0.29) 3    1.92 (0.14) 3
Total                2.54 (0.39) 36   2.33 (0.41) 36   2.34 (0.46) 36
Here a is the estimated average price in markets using seller liability rules in round 1; the coefficients b1 and b2 are the estimated average price deviations from those observed under seller liability in round 1 induced when buyer liability and buyer liability with refund rules are used; b3 and b4 are estimated order effects and indicate the average deviation in prices observed in rounds 2 or 3;
Table 3.5 Seller mean production choices (periods 6–8)

Session              Round 1          Round 2          Round 3
                     Mean (sd) n      Mean (sd) n      Mean (sd) n
11003 SL-BL-SL       2.75 (0.25) 3    3.08 (0.38) 3    2.33 (0.14) 3
11004 SL-BL-SL       2.42 (0.29) 3    4.00 (0.00) 3    2.08 (0.29) 3
11010 SL-BLR-SL      2.92 (0.80) 3    3.75 (0.00) 3    2.92 (0.29) 3
11011 SL-BLR-SL      1.83 (0.29) 3    2.92 (0.14) 3    2.58 (0.14) 3
11012 BLR-SL-BLR     4.00 (0.00) 3    2.92 (0.38) 3    3.50 (0.25) 3
11015 BLR-SL-BLR     3.17 (0.29) 3    3.00 (0.25) 3    3.25 (0.00) 3
11017 BL-SL-BL       3.17 (0.29) 3    2.00 (0.43) 3    4.00 (0.00) 3
11018 BL-SL-BL       3.83 (0.14) 3    2.33 (0.38) 3    3.75 (0.00) 3
11022 BLR-BL-BLR     3.42 (0.14) 3    3.83 (0.14) 3    3.83 (0.14) 3
11023 BLR-BL-BLR     3.25 (0.00) 3    4.00 (0.00) 3    4.00 (0.00) 3
11025 BL-BLR-BL      3.58 (0.14) 3    4.00 (0.00) 3    4.00 (0.00) 3
11026 BL-BLR-BL      3.25 (0.00) 3    3.50 (0.00) 3    3.75 (0.00) 3
and the estimated coefficients b5–b8 indicate the average interaction effects of buyer liability and market order. Table 3.7 reports the regression results; it indicates that the liability treatment has a significant impact on closing prices, and that these differences do not require previous experience in markets to be realized. On average, prices under buyer liability fall by about 30 lab-dollars relative to prices observed under seller liability, while prices under buyer liability with refund fall by between 20 and 24 lab-dollars. Further, no significant difference emerges in how experience
Table 3.6 Session mean price results (periods 6–8)

Session              Round 1          Round 2          Round 3
                     Mean (sd) n      Mean (sd) n      Mean (sd) n
11003 SL-BL-SL       45.36 (2.33) 3   18.07 (2.91) 3   57.33 (3.30) 3
11004 SL-BL-SL       43.07 (0.92) 3   2.89 (0.91) 3    34.87 (2.44) 3
11010 SL-BLR-SL      35.53 (2.58) 3   15.43 (1.13) 3   37.33 (1.61) 3
11011 SL-BLR-SL      51.63 (3.59) 3   8.65 (5.40) 3    15.42 (0.81) 3
11012 BLR-SL-BLR     7.45 (1.95) 3    29.46 (2.30) 3   11.26 (2.52) 3
11015 BLR-SL-BLR     6.17 (0.73) 3    36.06 (0.82) 3   18.14 (7.91) 3
11017 BL-SL-BL       7.54 (0.62) 3    32.87 (1.05) 3   2.09 (0.19) 3
11018 BL-SL-BL       2.94 (0.13) 3    21.58 (1.38) 3   6.33 (0.19) 3
11022 BLR-BL-BLR     12.94 (3.54) 3   1.73 (0.91) 3    2.36 (0.57) 3
11023 BLR-BL-BLR     42.30 (3.38) 3   1.09 (0.44) 3    2.92 (0.29) 3
11025 BL-BLR-BL      5.41 (2.57) 3    3.35 (0.79) 3    2.69 (0.73) 3
11026 BL-BLR-BL      3.30 (1.70) 3    0.63 (0.11) 3    –2.13 (0.75) 2
influences outcomes under alternative liability treatments.7 Our results do suggest, however, that early negative price and risk experiences under buyer liability initially depress closing prices under seller liability, as supported by the time paths of prices in periods 9–16 (see Figure 3.5).

Result 3: Permit trading volume increases under buyer liability. Predicted differences in volume between liability treatments, however, are not observed in all sessions.
Table 3.7 Estimated regression results (p-values in parentheses)

Estimated coefficient      Dependent variable (Pit)
                           Closing prices,       Closing prices,
                           all periods(1)        periods 4–8(2)
Constant (a)               38.609 (0.000)        39.681 (0.000)
BL (b1)                    –30.815 (0.000)       –29.640 (0.000)
BLR (b2)                   –19.887 (0.000)       –24.049 (0.000)
Rnd2 (b3)                  –11.918 (0.000)       –9.678 (0.001)
Rnd3 (b4)                  –1.168 (0.639)        –4.515 (0.221)
BL-Rnd2 (b5)               5.195 (0.124)         1.629 (0.695)
BL-Rnd3 (b6)               –0.814 (0.794)        1.047 (0.799)
BLR-Rnd2 (b7)              1.025 (0.813)         6.912 (0.187)
BLR-Rnd3 (b8)              –7.825 (0.088)        –1.780 (0.739)
Number of observations     287                   179

Notes
Bolded values indicate significance at the 5 percent level.
1 Estimates for all periods are the GLS coefficients, allowing for heteroskedasticity between sessions (cross sections) and assuming an AR1 process present in the data.
2 Estimates for periods 4–8 are the OLS coefficients, allowing for heteroskedasticity between sessions (cross sections) and assuming an AR1 process present in the data. Using these coefficients is recommended when the number of time periods is small relative to the number of panels (see Beck and Katz, 1995).
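To read the table, the implied mean closing price for any treatment-round cell is the sum of the constant and the relevant dummy coefficients; a small sketch using the all-periods estimates above:

```python
# Implied mean closing prices from the all-periods GLS estimates in Table 3.7:
# price(treatment, round) = a + b_treatment + b_round + b_interaction.
coef = {"a": 38.609, "BL": -30.815, "BLR": -19.887,
        "Rnd2": -11.918, "Rnd3": -1.168,
        "BL-Rnd2": 5.195, "BL-Rnd3": -0.814,
        "BLR-Rnd2": 1.025, "BLR-Rnd3": -7.825}

def implied_price(treatment: str, rnd: int) -> float:
    p = coef["a"]
    if treatment in ("BL", "BLR"):
        p += coef[treatment]
    if rnd in (2, 3):
        p += coef[f"Rnd{rnd}"]
        if treatment in ("BL", "BLR"):
            p += coef[f"{treatment}-Rnd{rnd}"]
    return p

print(implied_price("SL", 1))  # 38.609
print(implied_price("BL", 1))  # 7.794: roughly 31 lab-dollars lower than SL
```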
Table 3.3 shows differences in permit trading occurred across treatments, and volume differences under buyer liability increased with experience. By the third round, the average buyer holds two more permits relative to holdings under seller liability. A Kruskal-Wallis test confirms a significant treatment difference between average buyers' permit holdings over the last three periods of each round (p = 0.00). A Mann-Whitney U-test also verifies that no significant difference exists in buyers' permit inventories after trade between the two buyer liability treatments (p = 0.47), supporting our benchmark prediction. While these results indicate trading volume (and therefore buyer inventories) increases under buyer liability, the increase is less than expected. Figure 3.6 shows the average permit trading volumes by round and treatment. By periods 4–8, buyer liability outcomes diverge toward a volume appreciably greater than under seller liability, and this divergence increases with experience. What the figure does not show is that trading volume under buyer liability varies significantly across sessions. In about half the sessions, volume converges toward predicted levels, whereas in the other half, trading is thin. A zero-trading outcome would be consistent with a "lemons market": buyer liability creates an incentive for sellers to cheat, and if sellers are monitored, buyers' permit purchases could be lost. With such risk, trade could collapse to zero. But since the risk of monitoring is low, this is not a predicted outcome. Although markets were thin in the early periods of a round, trade rarely failed to occur. In later rounds, volume increased but remained below predicted volumes.8 Overall, the causes of the inferior emission and efficiency outcomes under buyer liability seem apparent – they arise due to the predicted distortions in observed permit
Figure 3.6 Mean trades by treatment.
Notes
ph11–ph13 = permit holdings, seller liability (SL), rounds 1–3.
ph21–ph23 = permit holdings, buyer liability (BL), rounds 1–3.
ph31–ph33 = permit holdings, buyer liability with refund (BLR), rounds 1–3.
prices and trading volumes. These distortions generate allocative inefficiencies and production patterns that reduce the cost effectiveness of the permit market and increase emissions.
Robustness tests – high enforcement and random endings

Less climate protection at greater cost under buyer liability is a strong result. Consider the robustness of this outcome given two changes in the permit market: an increase in the monitoring probability to two-thirds (0.667) from one-sixth (0.167), and removal of the fixed-ending-point effects caused by using a known number of periods within each round by introducing a random ending point (i.e. an infinite horizon) after the eighth period. These conditions create an environment more likely to produce "good seller" reputations in the permit market, so buyer liability stands a better chance of success.

Increased monitoring probability

Table 3.8 summarizes the revised predictions and results in the high monitoring environment. Under seller liability with higher enforcement, eight permit trades are predicted, with permit market prices predicted to lie within an interval of 27–71. This wide interval occurs due to the wide discrepancy between buyer and seller valuations at this quantity, a consequence of the characteristics of expected profit maximization.
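The direction of these revised predictions can be seen with a stylized expected-liability calculation. This is a sketch of the logic only, not the experiment's parameterization: the fine F below is a placeholder, and the validity mechanism is simplified.

```python
# Stylized permit values under the two liability rules (placeholder fine F).
def permit_value_seller_liability(p_monitor: float, fine: float) -> float:
    # Under seller liability a permit always shields its holder, so its
    # value is the expected fine it avoids: rising monitoring raises prices.
    return p_monitor * fine

def permit_value_buyer_liability(p_monitor: float, fine: float,
                                 p_overproduce: float) -> float:
    # Under buyer liability the permit shields the buyer only if the
    # seller has not overproduced.
    return p_monitor * fine * (1.0 - p_overproduce)

F = 100.0  # placeholder fine, not the experiment's actual parameter
print(permit_value_seller_liability(2/3, F))      # higher than at p = 1/6
print(permit_value_buyer_liability(2/3, F, 1.0))  # 0.0: predicted collapse
# If buyers instead believe some sellers are "good" (p_overproduce < 1),
# permits retain positive expected value, consistent with the trade that
# was actually observed.
```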
Table 3.8 Experiment results by treatment: additional sessions

                  Mean permit     Aggregate permit holdings     Aggregate production choice    Total
                  price           Buyers         Sellers        Sellers         Buyers

High enforcement, fixed length sessions
Treatment: SL
Prediction        27–71           8              4              4               8              12
Round 1           81.57 (16.81)   5.50 (1.32)    6.50 (1.32)    6.00 (2.16)     7.06 (1.24)    13.06 (2.98)
Round 2           61.59 (8.64)    3.75 (1.65)    8.25 (1.65)    7.19 (2.46)     3.88 (1.54)    11.06 (2.88)
Round 3           69.22 (7.85)    6.19 (1.11)    5.81 (1.11)    5.94 (1.06)     5.94 (0.93)    11.875 (1.31)

Treatment: BL
Prediction        0               0              12             15              0              15
Round 1           50.51 (17.68)   3.88 (2.06)    8.13 (2.06)    9.25 (3.09)     3.94 (1.44)    13.19 (3.31)
Round 2           38.94 (16.57)   7.25 (1.12)    4.75 (1.12)    8.44 (2.16)     6.31 (1.54)    14.75 (2.38)
Round 3           38.43 (8.96)    4.50 (1.15)    7.50 (1.15)    10.63 (3.01)    2.81 (1.05)    13.44 (3.03)

Low enforcement, random length sessions
Treatment: SL
Prediction*       33–40           6              6              8               8              16
Round 1           28.48 (8.61)    4.58 (2.86)    7.42 (2.86)    12.25 (1.83)    9.21 (1.22)    21.46 (2.19)

Treatment: BL
Prediction*       0–3             12             0              15              9              24
Round 2           9.73 (6.20)     5.20 (2.14)    6.80 (2.14)    12.92 (1.35)    8.16 (1.93)    21.08 (2.71)

Observed cells report means with standard deviations in parentheses.
Note * Predictions in random length sessions are those for fixed length sessions, and not necessarily the only possible prediction.
Although this interval allows for prices that are lower than those predicted with lower monitoring odds, one could expect that, since the probability of being monitored and caught producing without a permit has increased, permits would be more valuable and sell at a higher price than observed when monitoring occurred only one-sixth of the time. The center of this interval occurs at 49, approximately 10 lab-dollars higher than expected previously. Relative to the lower enforcement setting, increasing the monitoring probability increases permit prices and permit trades, while reducing market output. In seller liability sessions, no subjects are predicted to produce without a permit; sellers are predicted to reduce output to a level commensurate with their predicted permit holdings. Under buyer liability, the increased monitoring probability should significantly affect predicted market outcomes. Buyers should presume that sellers will always overproduce, given that sellers face no sanctions. Buyers should therefore not value permits, and one should observe a complete market failure – valueless permits should not be exchanged. Since sellers face no risk of a fine, we predict they will produce at their unconstrained profit-maximizing levels, increasing total market output 25 percent beyond the imposed emission cap. Buyers are predicted to produce nothing.

Result 4: Higher enforcement reduces output and increases emission compliance under seller liability, as predicted. Contrary to prediction, however, permit trade occurred under buyer liability as sellers reduced their production to support trade. Buyer liability with high enforcement made the problems observed in low enforcement sessions more costly due to naive and strategic behavior – "good sellers" exploit their reputations, while still promoting overemissions, albeit at reduced levels.

Tables 3.8 and 3.9 present the summary statistics on how high enforcement altered market performance as measured by permit price, permit holdings, production, and the overproduction-to-total sales ratio.9 As predicted, seller liability resulted in almost perfect compliance with the emission target. For seller liability, Table 3.8 shows that trading led to higher mean permit prices relative to the low enforcement sessions, as predicted. The price increase is more extreme than expected – mean price (81.57) is 10 lab-dollars greater than predicted (although in the later rounds, mean prices fall to within the predicted range). We observe little change in permit trade volume relative to the low-enforcement sessions. Aggregate output decisions are not substantially different from aggregate permit holdings, supporting the prediction that higher enforcement reduces output and increases emission compliance. High monitoring reduced mean production outcomes by about eight units (nearly 40 percent) relative to low monitoring. As predicted, aggregate output choices are, by the second round, compliant with the emission target in the market. Contrary to predictions, however, the permit market did not completely collapse under buyer liability. Permit trades still occur at prices similar to the low
Table 3.9 Overproduction-to-total sales ratio

Treatment                                        Round 1        Round 2        Round 3
Low monitoring probability, fixed length sessions
  Seller liability                               0.62 (0.53)    0.76 (0.38)    0.51 (0.26)
  Buyer liability                                1.27 (0.21)    1.38 (0.39)    1.64 (0.54)
  Buyer liability w/refund                       1.21 (0.23)    1.29 (0.65)    1.34 (0.16)
High monitoring probability sessions
  Seller liability                               –1.12 (0.40)   –0.31 (0.54)   –0.003 (0.22)
  Buyer liability                                0.33 (0.83)    0.49 (0.35)    0.76 (0.64)
Random end-point sessions
  Seller liability                               1.04 (0.53)    N/A            N/A
  Buyer liability                                N/A            1.13 (0.27)    N/A
enforcement case. The extent of market failure is shown in Figure 3.7, which redraws Figure 3.2 adding the additional sessions. Trades occurred because sellers reduced their output to cover permit sales and because not all permits sold were "bad" (i.e. buyers were not fined) – seller output averaged 40 percent less than predicted. Market efficiency nevertheless decreased because production was lower and some permit trades went "bad", with buyers fined. Market inefficiency resulted from myopic or naive market decisions, especially by buyers, who bore greater costs through realized fines and lower earnings. Increased enforcement made the market more unforgiving – subjects endured high penalties for poor production decisions and sudden strategic switches in seller behavior. This strategic effect arose when "otherwise-good" sellers turned on their buyers and began overproducing in the later periods of the rounds. Since buyers did not refrain from trading under these circumstances, efficiency losses arose from the penalties they incurred. In buyer liability sessions under high enforcement, seller overproduction occurred in all rounds and increased over time (see Table 3.9). In the first round, these values indicate that, on average, two-thirds of permit sales did not risk being returned if a seller was monitored. Increasing enforcement resulted in overproduction ratios 75 percent lower than those in the low enforcement sessions. In the last round, although the overproduction-to-total sales values and the risk of returning permits rose over time, on average almost 25 percent of the permits sold were backed by output reductions. Compared to the low monitoring sessions, this overproduction ratio was less than 50 percent of the value observed under buyer liability (0.76 versus 1.64), and 44 percent lower than observed under buyer liability with refund (0.76 versus 1.34). Buyer liability in a market with high enforcement made the problems observed in low enforcement sessions more costly, while still promoting over-emissions, albeit at reduced levels.
Figure 3.7 Efficiency and emission outcomes by treatment.
Note
The figure plots emissions (as a percentage of the cap) against the unrealized efficiency percentage for each treatment and round: seller liability (SL), buyer liability (BL), buyer liability with refund (BLR), high monitoring (HM) sessions and random ending (RE) sessions.
Random ending points

Our markets were hostile to reputation effects since our rounds were short and of known length. This design reflects current climate policy – the original Kyoto timetable of 2008–2012 is fixed and short. Compliance periods beyond 2008–2012 have yet to be negotiated. This contrasts with other emission markets in which buyer liability has been promoted that do not end at a fixed date. It is assumed participants in these markets would assume market conditions are permanent, or at least that the end of the market is unpredictable. To capture this belief within our experiment, and to avoid the end-period strategic effects that arose in the high enforcement sessions, we replaced the fixed and known end point to each round used previously with a random end point. This allowed us to explore whether "good seller" reputations might emerge in the laboratory, and whether such outcomes could alter our key results. We focus on the low monitoring probability treatments. All conditions of the original sessions were maintained with two exceptions: (1) in buyer liability, we used a random ending point; in seller liability, we did not tell subjects when the period would end; and (2) we used an AB experiment design. Seller liability always occurred for the first eight periods (A), with buyer liability rules for the rest of the session (B) (see Table 3.1 for sessions 020604–020615). The AB design ensured sessions would not be too long, since the random ending point could cause the buyer liability rounds to go on for some time. Although using an AB design theoretically confounds the effects of experience and treatment in the interpretation of observed results, we expected the effects of time and treatment would be easily identified by comparison to previous ABA sessions. If they
were not, we expected that changing from a fixed to a random ending point had significantly altered market outcomes. After the eighth period in buyer liability markets, we used a random number generator to draw a number between one and 100 at the end of each period.10 Subjects knew the period ended if the number drawn was 20 or less – a one-fifth chance – otherwise another period would take place. We use the same predictions as in the low enforcement probability sessions under seller liability as a benchmark. Under buyer liability, our predictions are those computed for the fixed length rounds. If a reputation effect emerged, market outcomes will differ from these predictions, suggesting the random end point changed behavior and market outcomes.

Result 5: A random ending point had little effect in seller liability sessions – outcomes were similar to those observed with a fixed end point. Under buyer liability, the results with a random ending point are not inconsistent with a weak reputation effect – we observed fewer permit trades and lower production levels.

Overall, the random ending point sessions were broadly consistent with the levels of over-emission and efficiency loss by treatment observed earlier. Again, Tables 3.8 and 3.9 summarize the results. Although we see minor differences across the random and fixed ending treatments, our basic results still hold. Under seller liability, an uncertain ending point had an almost identical outcome to that observed when end points were known. Under buyer liability, emissions increased and market efficiency fell, as reported in Result 1. Buyer liability also lowers permit prices relative to seller liability, supporting Result 2. Output patterns reflect earlier behavior – sellers produced the highest proportion of total output. The total output level observed, however, is approximately three units less than in previous sessions, and total seller output levels are as much as two units less on average. The patterns of permit trade and the output decreases observed are broadly consistent with the hypothesis that the inclusion of a random ending point in buyer liability rounds reduced the incidence of "cheating", at least in later periods of a round, lowering production levels in the market. Table 3.9 shows the overproduction-to-total sales ratios observed over all periods reinforce this conclusion, as they are 20 percent lower than those observed in the second round of previous buyer liability sessions.
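The stopping rule makes the number of periods beyond the eighth geometric, implying an expected round length of about 12 periods; a quick simulation sketch of the rule as described in note 10:

```python
# Simulate the random ending rule: every round runs at least eight periods;
# at the end of each period from the eighth onward, a draw of 20 or less
# from 1-100 (a one-fifth chance) ends the round.
import random

def simulated_round_length(rng: random.Random) -> int:
    period = 8
    while rng.randint(1, 100) > 20:  # round continues on draws above 20
        period += 1
    return period

rng = random.Random(42)
lengths = [simulated_round_length(rng) for _ in range(100_000)]
print(sum(lengths) / len(lengths))   # about 12 periods on average
```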
Conclusions

Successful global climate protection policy depends in part on the cost effectiveness of greenhouse gas emission trading. We present evidence that emission trading under a buyer liability rule could lead to less climate protection at greater cost given relatively weak international enforcement for environmental shirking. Promoting a caveat emptor liability rule backfired in our experiment on both economic and environmental criteria. Holding the subjects who represented high-emission buyer nations responsible for climate shirking, rather than holding the
relatively poorer low-emission seller nation subjects responsible, resulted in average emission levels exceeding those observed under seller liability by nearly 34 to 40 percent. The imposition of an escrow-like refund system did not alter this result, and neither did the introduction of tighter enforcement or conditions that could create stronger seller reputations. Our findings support the notion that buyer liability in global emission trading might lead to less climate protection at greater cost.
Acknowledgments We thank the College of Business and the Stroock Professorship for financial support. Thanks to Tim Cason, Kate Carson, Chuck Mason and participants at University of Victoria, the Economic Science Association Meeting in Tucson, AZ, and American Economic Association Meetings in Atlanta, GA, for helpful comments. This chapter is dedicated to Peter Bohm.
Notes
1 Specific emission targets for developing countries were not set in the protocol.
2 Also see Cason's (1995) test bed experiments on the efficiency of the acid deposition auctions run by the US Environmental Protection Agency.
3 In policy documents, proponents of buyer liability often suggest reputation effects will result in market outcomes equivalent to or approaching those expected under seller liability. In general, however, multiple equilibria exist in games with reputation, and which of these are reasonable outcomes to expect will depend critically on the characteristics of the market setting and the prior distribution of player types assumed. Results of reputation games in a number of settings are reviewed in Fudenberg and Tirole (1991). In the predictions presented here, since each round consisted of eight periods, backward induction suggests that in buyer liability rounds buyers should expect sellers to overproduce, since permits have no other use value to sellers. Using this assumption and the conditions described in Table 3.2, we find the values reported in Table 3.3. We do not attempt to develop alternative reputation-effect predictions that might be possible in this environment. Instead, we suggest that outcomes varying from the risk-neutral predictions presented could indicate shortcomings in the method of computing predictions we use.
4 This test is performed on mean outcomes by round, but is inappropriate if one expects that the required independence assumption is violated across rounds.
5 Experimental predictions are computed assuming equilibrium market outcomes, but early periods appear to have convergence processes at work; as a result, mean values across periods vary in the predicted direction by treatment, if not by the predicted magnitude. This occurs in the summary data (see Table 3.3), and is present to a lesser degree when permit price outcomes are averaged over the last three periods of each round to allow for convergence (see Table 3.6). Treatment outcomes across rounds differ in the directions predicted, and Table 3.6 shows mean session outcomes are closest to the values predicted.
6 Since experimental sessions used separate subject groups, a random-effects model is employed in which the fixed and constant subject or session effect on price observations is described by ui, while eit is a random disturbance term.
7 There are no significant interaction effects of order of presentation and liability treatment as estimated by coefficients b5–b8. The decreased significance of b8 over the last five periods, relative to all periods (where it is significant at the 10 percent level), may suggest that refunds initially cause subjects to increase their permit valuations in the first round; experience, however, appears to produce closing price outcomes that are not significantly different across rounds in this treatment.
8 Volume differences are consistent with subject-pool differences in risk aversion, which could explain the lower than predicted average trading volumes observed under buyer liability treatments in some sessions. This could suggest that the assumption of risk neutrality used to generate the experimental predictions is an excessive abstraction in this environment.
9 Overproduction is the difference between a firm's output and its permits. For the overproduction-to-total sales ratios, a positive value means sellers overproduced relative to predictions; a negative value means they underproduced; a value over unity says sellers did not back permits sold with an output reduction.
10 We used an Excel spreadsheet to generate random numbers between one and 100.
References
Baron, R., 1999. An Assessment of Liability Rules for International GHG Trading. International Energy Agency Information Paper, October.
Barrett, S., 2003. Creating Incentives for Cooperation: Strategic Choices. In Providing Global Public Goods, I. Kaul, P. Conceição, K. Le Goulven and R. Mendoza, eds. New York: Oxford University Press, 308–328.
Beck, N. and Katz, J., 1995. What To Do (and Not To Do) with Time-Series Cross-Section Data, American Political Science Review 89: 634–647.
Bohm, P. and Carlen, B., 1999. Emission Quota Trade Among the Few: Laboratory Evidence of Joint Implementation Among Committed Countries, Resource and Energy Economics 21: 43–66.
Carlen, B., 1999. Large-Country Effects in International Emissions Trading, Department of Economics, Stockholm University, mimeo.
Cason, T., 1995. An Experimental Investigation of the Seller Incentives in EPA's Emission Trading Auction, American Economic Review 85: 905–922.
Cason, T., 2003. Buyer Liability and Voluntary Inspections in International Greenhouse Gas Emissions Trading: A Laboratory Study, Environmental and Resource Economics 25: 101–127.
Chayes, A. and Chayes, A., 1998. The New Sovereignty: Compliance with International Regulatory Agreements. Cambridge, MA: Harvard University Press.
Cooper, R., 1998. Toward a Real Global Warming Treaty, Foreign Affairs, March/April: 66–79.
Council of Economic Advisers (CEA), 1998. The Economics of the Kyoto Protocol, J. Yellen's Statement before the Committee on Agriculture, Nutrition, and Forestry, US Senate, 5 March.
Crocker, T., 1966. The Structuring of Atmospheric Pollution Control Systems. In The Economics of Air Pollution, H. Wolozin, ed. New York: Norton, 61–86.
Dales, J.H., 1968. Pollution, Property and Price. Toronto: Toronto University Press.
Edmonds, J., Scott, M., Roop, J. and MacCracken, C., 1999. International Emissions Trading and Global Climate Change: Impacts on the Costs of Greenhouse Gas Mitigation. Report prepared for the Pew Center on Global Climate Change, Washington, DC.
Environmental Defense Fund, 1998. Cooperative Mechanisms Under the Kyoto Protocol: The Path Forward. Online, available at: environmentaldefense.org/documents/747_PathForward.pdf (accessed September 2007).
European Business Council for a Sustainable Energy Future, 2000. Position for the COP 6, November 11. Online, available at: e5.org/pages/st-08e.htm (accessed March 2001).
Fudenberg, D. and Tirole, J., 1991. Game Theory. Cambridge, MA: MIT Press.
Haites, E. and Missfeldt, F., 2001. Liability Rules for International Trading of Greenhouse Gas Emissions Quotas, Climate Policy 1: 85–108.
Intergovernmental Panel on Climate Change (IPCC), 2001. Climate Change 2001: Synthesis Report. A Contribution of Working Groups I, II, and III to the Third Assessment of the Intergovernmental Panel on Climate Change, R. Watson, ed. Cambridge and New York: Cambridge University Press.
Jacoby, H., Prinn, R. and Schmalensee, R., 1998. Kyoto's Unfinished Business, Foreign Affairs July/August: 54–66.
Kopp, R. and Toman, M., 1998. International Emissions Trading: A Primer. Resources for the Future. Online, available at: weathervane.rff.org/features/feature049.html.
Muller, R.A. and Mestelman, S., 1998. What Have We Learned from Emissions Trading Experiments? Managerial and Decision Economics 19: 225–238.
Nordhaus, W. and Boyer, J., 2000. Warming the World: Economic Models of Global Warming. Cambridge, MA: MIT Press.
Ontario Government Ministry of the Environment, 2001. Emissions Reduction Trading System for Ontario: A Discussion Paper. Toronto, March.
PERT, 2000. Clean Air Mechanisms and the PERT Project: A Five Year Report. Pilot Emissions Reduction Trading, Toronto, June.
Plott, C., 1983. Externalities and Corrective Policies in Experimental Markets, Economic Journal 93: 106–127.
Rabkin, J., 1999. Why Sovereignty Matters. Washington, DC: AEI Press.
Schelling, T., 1997. The Costs of Combating Global Warming, Foreign Affairs November/December: 8–14.
Schmalensee, R., Joskow, P., Ellerman, D., Montero, J. and Bailey, E., 1998. An Interim Evaluation of Sulfur Dioxide Emissions Trading, Journal of Economic Perspectives 12: 53–68.
Shogren, J., 2004. Kyoto Protocol: Past, Present, and Future, GeoHorizons 88: 1221–1226.
Soberg, M., 2000. Price Expectations and International Quota Trading: An Experimental Evaluation, Environmental and Resource Economics 17: 259–277.
Solomon, B., 1999. New Directions in Emissions Trading: The Potential Contribution of New Institutional Economics, Ecological Economics 30: 371–387.
Stavins, R., 1998. What Can We Learn From the Grand Policy Experiment? Lessons from SO2 Allowance Trading, Journal of Economic Perspectives 12: 69–88.
UNFCCC (United Nations Framework Convention on Climate Change), 1999. The Kyoto Protocol to the Convention on Climate Change. UNEP/IUC/99/10. France: published by the Climate Change Secretariat with the support of UNEP's Information Unit for Conventions (IUC).
Victor, D., 2001. The Collapse of the Kyoto Protocol and the Struggle to Slow Global Warming. Princeton: Princeton University Press.
Woerdman, E., 2001. Limiting Emissions Trading: The EU Proposal on Supplementarity. Department of Economics and Public Finance ECOF, Faculty of Law, University of Groningen, Groningen (the Netherlands), ECOF Research Memorandum No. 29, p. 63.
4
A test bed experiment for water and salinity rights trading in irrigation regions of the Murray Darling Basin, Australia Charlotte Duke, Lata Gangadharan, and Timothy N. Cason
Introduction

The Murray Darling Basin covers more than 1,000,000 square kilometers, or one-seventh of the Australian continent, and is the catchment for the Murray and Darling rivers. The river system extends from Queensland, includes three-quarters of New South Wales and half of Victoria, passes through South Australia and drains to the sea through a single exit at Lake Alexandrina. The Basin supports almost three-quarters of Australia's irrigated agricultural production, an important export for Australia. While the Basin is naturally saline, human activity has significantly changed its natural water balance systems. The removal of (deep-rooted) native vegetation and its replacement with (shallow-rooted) European-style annual crops and pastures has resulted in excess rain and irrigation water entering the groundwater systems. As groundwaters rise, the gradient between these saline groundwater mounds and the rivers decreases. In some regions groundwater salinity can reach 50,000 EC (electroconductivity, a measure of salt concentration; 600 EC is thought to be suitable for potable water).1 Saline water contained in the soils then percolates into the rivers, increasing water salinity. While estimates vary, figures reported by the Murray Darling Basin Commission place annual human-induced salinity costs across the Basin at $130 million for agriculture, $100 million for infrastructure and $40 million for the environment.2 In response to increasing salinity costs, and informed by improved scientific information on salinity impacts, in November 2000 the Council of Australian Governments agreed to the National Action Plan for Salinity and Water Quality (the NAP).3 This plan is the Australian government's vehicle for delivering on-ground change across this complex Basin, including the coordination of institutions and resources across four states and one territory. The government agreements supporting the NAP specifically include the facilitation of new market-based incentives, including trading mechanisms
(Commonwealth Government of Australia, 2001a and 2001b). In April 2003, the Market Based Instruments (MBI) Pilots Program was announced as a process, funded under the NAP, for building Australia's capacity to use MBIs for natural resource management.4 This program allocated $5 million across 11 pilots to test MBIs. This chapter reports part of one of these pilots.5

The objective of this chapter is to report preliminary economic experiments that pilot test bed a tradable permit market for irrigation-induced salinity in the Murray. We focus on the design of experimental simultaneous double auction markets for water and salt, and in one treatment we allow subjects to make a technology choice that changes parameters in both markets. We use the Sunraysia region in northern Victoria as the pilot region. We selected this region for a number of important reasons. First, there exists mature scientific knowledge about the non-standard cause and effect between irrigation practices and salinity concentrations in the river for this region.6 Second, the regional natural resource management plan had zoned the irrigation regions into salinity impact zones.7 These zones allow regulators to attribute salt concentrations to individual irrigators based on their zone and monitored water use.8 This means a diffuse source of pollution can be managed as a point source.9 Third, and perhaps most importantly when attempting to influence policy direction, regional government officers, farm extension staff and farmers in this region are innovative, and they recognize the need to carry some of the costs of salinity externalities in private production costs. The two questions raised most frequently in our (two) stakeholder workshops held in Mildura (the largest regional centre in the Sunraysia) during 2003 and 2004 were: how can salinity be cost-effectively managed, and how can private salinity management efforts be recognized in policy?

Tradable emission permits use market incentives to allocate pollution control responsibility at least cost between regulated firms. Tradable permits differ from the more traditional command and control mechanisms because they equalize marginal control costs between sources. Compared to price-based MBIs such as taxes, tradable permits do not require the regulator to estimate the demand and supply response of firms to different prices for pollution. Knowing the position and the elasticity of aggregate marginal control costs is important when there is some safe minimum standard above which pollution cannot rise. In the Murray, there is a Salinity Target at the town of Morgan in South Australia. Legislation requires the salinity concentration level at Morgan to remain under the Target of 800 EC for 95 percent of the time (MDBMC, 2001). The salinity impact zoning in Sunraysia allows regulators to attribute irrigation impacts on EC at the Target to irrigators, based on their zone and water use. Figure 4.1 shows the region and salinity impact zones.

Experimental economics is increasingly being used to provide solutions to complex real world allocation problems. Within natural resource management, experiments have been used in the design of the Victorian Bush Tender Program, a discriminative sealed bid offer auction to secure land management change on private land for environmental outcomes (Cason et al., 2003).10
Figure 4.1 The Sunraysia irrigation districts and salinity impact zones.
Experiments were also used to evaluate the United States EPA allocation auction for the Sulfur Dioxide Trading Program (Cason and Plott, 1996), and alternative proposals for tradable permit markets for air pollution control (see Muller and Mestelman, 1998, and Bohm, 2003, for a survey of the experimental literature on tradable emission permits).11 Experiments use special cases to inform more complex field situations (Plott, 1994), and are particularly useful in the area of environmental economics for communicating how complex incentive systems operate to multidisciplinary groups (Roth, 2002; Cummings et al., 2001). We chose to use experiments to test bed a tradable permit market for salinity in the Murray for two reasons. First, legislative change would be needed to pilot test the mechanism in the field, and this can be a time-consuming, resource-intensive process with uncertain outcomes. Second, we are applying a point-source policy to a non-point-source problem: we are relying on the accuracy of the hydrological models used to design the salinity impact zones to allocate emission rights. Creating legal property rights in a complex field environment before we understand how the markets work in a simple controlled environment is potentially very costly. Further, we currently have incomplete knowledge about private salt abatement options and their effectiveness over long time horizons. The experiments allow us to study how the two markets transfer this information, and the resulting change in salt and water allocations, without having to wait for more complete information to become available. Experimental economics allows us to test the main design features of a tradable permit market for salinity in the Murray at low financial and political cost, and without changing legislation. The methodology is also proving
very useful for communication with policy makers unfamiliar with economic decision making. The methodology allows us to ask the following questions, as posed by Plott (1994). First, is the system doing what it is designed to do? That is, are resources being allocated to the least-opportunity-cost players, and are prices, quantities and efficiencies moving towards the predicted theoretical equilibrium given the environment? Second, is the system working for understandable reasons? In other words, the policy should generate the observed outcomes (allocations) for reasons we understand, and these reasons should be supported by economic theory.

We pilot test simultaneous double auctions for water and salinity rights in the Murray River. We use a double auction institution as it most closely approximates the emerging water trading markets in the Basin.12 We want to observe whether the institution can efficiently transfer private information between subjects as market complexity is increased across treatments. The pilot experiments are implemented as oral auctions. The time taken to run oral auctions limits the number of periods that can be implemented in a session. In an oral auction, the experimenter is (more of) a participant in the experiment who controls the trading floor; in computerized experiments the experimenter is an observer. In an oral auction the experimenter can therefore learn more about how the markets transfer private information, and how transaction prices are formed, than in computerized experiments. When the experimental design is unclear, as was the case in this pilot test bed, an oral auction allows mistakes or misunderstandings to be identified, recorded and (where possible) even corrected within the session.

We chose to implement the markets simultaneously as water and salt are complements in production and therefore must be held as a bundle if the farmer wants to irrigate. Failure to account for interdependencies between goods can lead to costly policy design mistakes.13 This is because if bidders must make bids for related goods in independent markets, then they do not have all the information they need to make a profitable bid on the bundle: the value of each good to them depends on the final bundle of goods they are successful in winning (Milgrom, 2004). In our pilot, the value of irrigation water to a farmer will also depend on the cost of purchasing salt licenses in the salt market. For this reason, we have implemented the markets simultaneously in this experiment. Buckley et al. (2004) use experiments to investigate the operation of sequential markets in which subjects first trade in a sealed bid uniform price multiple unit market for emissions rights, which act as inputs in the production process, and then sell output in the output market. Goodfellow and Plott (1990) use experiments to investigate simultaneous input and output markets organized as double auctions; and Plott (1983) uses sequential double auction markets to investigate the operation of a tradable permit market and a primary output market.

The parameters used in the experiment approximate the production conditions and environment characteristics of Sunraysia in northern Victoria.14 We chose to calibrate the experiments to the field for several reasons. The first reason is communication; it is easier to explain the institution to stakeholders if it is designed using parameters with which they are familiar.15 Second, but
related, a lack of parallelism with the real world is sometimes a criticism of experiments. By using parameters calibrated to the field, it is possible to improve parallelism without introducing all the complexity of the field.16 Third, the institutional design can depend upon field-specific characteristics. In these pilot experiments, the joint relationship between salt and water was important in our choice of simultaneous markets as opposed to a sequential market design. We introduce endogenous technology change in one treatment to represent private abatement choice. Subjects in our experiment can choose to change production technology, or to remain with their starting technology, at a pre-announced point in a session. Buckley et al. (2004) introduce technology choice in their experiments, where subjects must choose emissions rates for their output between the closing and opening of their sequential markets. Gangadharan et al. (2005) allow subjects to choose to invest in abatement technology at a cost, such that in equilibrium only a subset of firms would choose to invest. They find that subjects often invest more than is optimal, leading to cleaner production but reduced efficiency. Ben-David et al. (1999) investigate investment decisions in experiments designed to test features of the tradable emissions permit program under the US Clean Air Act. Our pilot test bed experiment reported in this chapter is, we think, one of the first to test simultaneous double auctions with endogenous technology choice using variables that approximate field conditions.

The rest of this chapter is organized as follows. The first part of the chapter describes the experimental design. The pilot experimental results are then presented, followed by a conclusion with a discussion and some design lessons for future treatments.
Experimental design

Environment and procedures

The aim of this research is to investigate whether the two markets, organized as double auctions and implemented simultaneously, can efficiently allocate water and salt units between subjects in our three treatments. As noted above, the experiment employs parameters that correspond to field conditions in the Sunraysia region in northern Victoria. Table 4.1 shows the main regional industries, their production technologies and value ranges for water. Table 4.2 shows subjects' private redemption and cost values used in the experiment. The private values and costs were drawn randomly from uniform distributions with the ranges shown in Table 4.1. The slope of these curves was discussed with regional government officers and crop specialists to ensure the parameters approximate field conditions. Cost and value ranges remained private information throughout all sessions.

Subjects in the experiment were allocated to one of two salinity impact zones, a high impact zone (HIZ) or a low impact zone (LIZ).17 For each unit of water (megaliter) used in the HIZ the external cost is assumed to be $80 in this experiment. For each unit of water used in the LIZ the external cost is $40. These external costs represent the cost to government of intercepting the salt
Table 4.1 Parameter ranges from Sunraysia

Industry                     Irrigation technology     Value range for water
                                                       (per megaliter = 1 million liters)*
Wine grapes                  Drip irrigation           1100–1500
                             Overhead sprinkler        900–1300
                             Pressurized mix           700–1100
                             Flood (2 subjects)        400–800
Orchard and citrus fruit     Overhead sprinkler        1000–1400
                             Flood                     600–1000
Dried grapes                 Overhead sprinkler        900–1300

Note
* Value ranges were provided by the NSW and Victorian Department of Primary Industries. Production technology for the industries was provided by SunRise 21, 2004, and discussed with regional government officers and field extension staff.
Table 4.2 Private redemption and cost values Unit 1
Unit 2
Unit 3
Unit 4
Redemption value of units (buyers) Buyer 1 1167 Buyer 2 1131 Buyer 3 1142 Buyer 4 1109
1077 1031 1062 1049
987 931 982 989
897 831 902 929
Cost of units (sellers) Seller 21 Seller 22 Seller 23 Seller 24
968 948 1000 940
1038 1038 1100 1020
1088 1128 1200 1100
908 858 900 860
using engineering solutions, and approximate the field parameters for two salinity impact zones in the region (MDBC, 2003). The transfer coefficient for salt concentration impact at the Target is therefore two between irrigators located in the HIZ and irrigators located in the LIZ: irrigators in the HIZ must hold twice the number of salt licenses for each unit of water as compared to irrigators in the LIZ. We implement this in the laboratory by requiring subjects in the HIZ to hold four salt licenses for each unit of water, while subjects in the LIZ are required to hold two licenses for each unit of water. This implies that the price of a permit in the salt market should be 20 experimental dollars. The salinity permit internalizes the salt externality. Buyers' private values for water will decrease by 80 or 40 experimental dollars, depending on their impact zone, when they must pay for salt permits. For example, Buyer 1's private value for his first unit of water is 1167 and the value for his second unit is 1077 (see Table 4.2). Buyer 1 is located in the HIZ, and therefore Buyer 1 must purchase four salt permits for each unit of water he buys in the water market. If the price
for a salt permit is $20, then Buyer 1's demand curve for water will shift down by $80 for each unit of water (1167 − (20)*4 = 1087 and 1077 − (20)*4 = 997). Sellers' opportunity costs for water also decrease by $40 or $80, as sellers can sell salt permits if they sell some water rights. For example, Seller 23's private cost for her first unit of water is 900 and the cost for her second unit is 1000 (see Table 4.2). Seller 23 is located in the LIZ, and therefore holds two salt permits for each unit of water.18 If Seller 23 sells her first unit of water then she can also sell two units of salt. If the price of salt permits is $20, then the opportunity cost for her first unit of water is 860, i.e. 900 − (20)*2, and the opportunity cost for her second unit is 960, i.e. 1000 − (20)*2. The introduction of the salt market therefore shifts the water market demand and supply curves down, and the equilibrium price and quantity for water will decrease, as shown in Figure 4.2.
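A small sketch of this license adjustment, reproducing the Buyer 1 and Seller 23 arithmetic above (the zone labels and function names are ours, introduced for illustration):

```python
# License-adjusted water valuations: a unit of water requires 4 licenses in
# the HIZ and 2 in the LIZ, at a salt license price of 20 experimental dollars.
LICENSES_PER_UNIT = {"HIZ": 4, "LIZ": 2}
LICENSE_PRICE = 20

def adjusted_buyer_value(value: int, zone: str) -> int:
    # Buyers must also buy the required licenses for every unit of water.
    return value - LICENSES_PER_UNIT[zone] * LICENSE_PRICE

def adjusted_seller_cost(cost: int, zone: str) -> int:
    # Selling a unit of water frees licenses for sale, lowering the
    # opportunity cost of the water by their market value.
    return cost - LICENSES_PER_UNIT[zone] * LICENSE_PRICE

print(adjusted_buyer_value(1167, "HIZ"))  # 1087, as computed in the text
print(adjusted_seller_cost(900, "LIZ"))   # 860, as computed in the text
```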
Figure 4.2 Market demand and supply for water.
Note
The figure plots the market demand and supply curves for water with the predicted equilibria: treatment 1, price 1000–1020 at 8–9 units; treatments 2 and 3 (technology 1), price 900–997 at 7 units; treatment 3 (technology 2), price 963–1032 at 11–12 units.

In one of our treatments (Treatment 3) subjects have the choice to change their production technology. The technology change is assumed to reduce the salt impact of each unit of water used in production.19 If subjects choose to change technology then they receive a (private) benefit and their values (costs) for water units increase (decrease). In this pilot experiment this technology investment changes all values and costs by $100 for each unit of water. Subjects also incur a cost for this technology change, equal to $15 per unit of water. If buyers choose to change technology then their license requirements are halved for each unit of water purchased. For example, if Buyer 1 chooses to change technology then his private values for water will increase by $100. Buyer 1 will also incur a technology cost equal to $15, and this will decrease his marginal values by 15, i.e.
1167 + 100 − 15 = 1252. The technology change is salt saving, and therefore Buyer 1 must now hold only half the number of salt permits per unit of water as compared to his starting technology. This means Buyer 1 must now purchase two rather than four salt permits for each unit of water, i.e. 1167 + 100 − 15 − (20)*2 = 1212. If sellers choose to change technology then the number of licenses available for them to sell in the license market is also halved. For example, Seller 23's private cost for her first unit of water is 900. If Seller 23 chooses to change technology then she receives a $100 benefit and incurs a $15 cost, i.e. 900 − 100 + 15 = 815. Seller 23 also has fewer salt licenses to sell. With her starting technology, Seller 23 had two licenses that could be sold in the salt market for each unit of water she sold in the water market. With her new technology Seller 23 has only one license to sell for each unit of water; this means she loses $20 in potential license sales on each unit of water. Sellers are better off with the new technology because the net change in their (private) marginal costs for water (equal to 85 experimental dollars) is greater than the reduction in the number of licenses they can sell in the salt market multiplied by the price at which they can sell licenses (equal to $20 to $40 on each unit of water). Buyers are better off with the new technology because their (private) marginal values increase and the number of permits they require for each unit of water purchased is reduced. So all traders should adopt the new technology given the parameters we have implemented.

The water market operates as a standard multiple unit double auction. At any time while the trading period is open, buyers can either make a price and quantity buy offer or accept a seller's sell offer, and sellers can either make a price and quantity sell offer or accept a buyer's buy offer. Only offers that improve the proposed terms of trade (i.e. higher buy offers or lower sell offers) are recognized by the market. Buyers' earnings on a unit of water are equal to the redemption value for water minus the price paid for water, and sellers' earnings on a unit are equal to the price at which they sell water minus the cost of water. Each buyer and seller has up to four units of water they can buy or sell in a trading period.

The design of the salt license market is still in development. In these pilot experiments we want to focus on learning how to implement the simultaneous oral markets and to test whether they work as theory predicts. We have chosen not to use standard seller costs and buyer values for salt licenses. We instead introduce a uniform cost that buyers must pay for each required license they fail to buy. We call this cost a fine. The fine is equivalent to the buyer buying a salt license from the regulator as described above, and is equal to $20 in this pilot test bed. Buyers' opportunity cost for each license is therefore $20, and this is equal to the external cost of another salt concentration unit at the Target. Buyers should be willing to buy a license from the salt market if the license price is less than $20. The fine is public information, and therefore sellers should learn that buyers' maximum willingness to pay for each license is $20.
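The technology-choice arithmetic above can be collected in one place; a sketch using our own function names (the 795 figure for Seller 23's license-adjusted opportunity cost under technology 2 follows from the text but is not stated there):

```python
# Technology choice arithmetic: a $100 benefit, a $15 per-unit cost, and
# halved license requirements (HIZ: 4 -> 2; LIZ: 2 -> 1) under technology 2.
TECH_BENEFIT, TECH_COST, LICENSE_PRICE = 100, 15, 20
LICENSES = {("HIZ", 1): 4, ("HIZ", 2): 2, ("LIZ", 1): 2, ("LIZ", 2): 1}

def buyer_net_value(value: int, zone: str, tech: int) -> int:
    v = value if tech == 1 else value + TECH_BENEFIT - TECH_COST
    return v - LICENSES[(zone, tech)] * LICENSE_PRICE

def seller_net_cost(cost: int, zone: str, tech: int) -> int:
    c = cost if tech == 1 else cost - TECH_BENEFIT + TECH_COST
    return c - LICENSES[(zone, tech)] * LICENSE_PRICE

print(buyer_net_value(1167, "HIZ", 2))  # 1212 = 1167 + 100 - 15 - 2*20
print(seller_net_cost(900, "LIZ", 1))   # 860, the technology 1 benchmark
print(seller_net_cost(900, "LIZ", 2))   # 795 = 900 - 100 + 15 - 1*20
```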
The effect of using this fine to define the opportunity cost of a salt permit, as opposed to standard demand and supply curves for salt usage, is that the efficient market price of salt is not determined by private marginal salt abatement costs but
is instead determined exogenously and is set equal to the government salt interception cost. The salt market only allocates the salt rights, and should determine the efficient quantity for each farmer given this common salt interception cost and private values and costs for water. In future experiments we plan to include individual demand and supply curves for salt based on private abatement. Future experiments can also employ private values for water that change by non-uniform amounts due to adoption of the technology, which is more realistic for the field.

Experimental subjects were undergraduate students from the University of Melbourne. Eight subjects participated in each session, randomly assigned as four buyers and four sellers.20 Subjects participated in either one or two (simultaneous) oral double auction(s), depending on the treatment. Subjects were read aloud the instructions reproduced in the appendix. All offers to buy and sell and all transaction prices were recorded on a whiteboard at the front of the room. Subjects' (private) values and costs for water, and their license requirements per unit of water, were displayed on Excel sheets for each subject. Subjects recorded their transaction prices on these Excel sheets, and the computer automatically calculated profits and license sales and purchases for the subject (example Excel sheets are reproduced in the appendix). Buyers and sellers traded units and licenses; we did not use the words water and salt in the experiment, in order to avoid uncontrollable bias due to private opinions about salinity and water, which are high profile environmental issues in Australia. Subjects received their earnings in cash at the end of the session, and earnings ranged between A$20 and A$30. Sessions lasted between 1.5 and 2 hours.

Treatments

In this chapter we report six pilot experimental sessions. We report two treatment 1 sessions in which buyers and sellers trade units of water in a multiple unit oral double auction. We report two sessions for treatment 2, in which buyers and sellers also traded licenses for salt in a multiple unit double auction implemented simultaneously with the water market. In treatment 3 we report two sessions in which buyers and sellers can choose to change production technology, which influences parameters in both markets, at a pre-announced point in the experiment. All treatment 1 sessions included ten trading periods of 5 minutes each.21 In treatments 2 and 3, sessions included eight trading periods of 8 minutes each, except for treatment 2, session 2, in which six periods were run. Table 4.3 summarizes the experimental design. Table 4.4 presents the predicted equilibrium outcomes for the three treatments, and Figure 4.2 presents the market demand and supply for water in each treatment. In treatment 2 we introduce the salt license market, and this increases the cost of water to buyers and decreases market demand as compared to treatment 1, as explained above. The equilibrium quantity of water traded should decrease in treatment 2 by one to two units and salt impact should decrease by four to six licenses (or EC units) as compared to treatment 1. In treatment 3 we allow subjects to choose whether they want to change production technology at the end
Table 4.3 Experimental design

Treatment 1 – Water market (called the primary market in the experiment instructions)
  Features: multiple unit oral double auction
  Sessions: 2 inexperienced*; 10 trading periods in each session

Treatment 2 – Water market plus salt market (called the license market in the experiment instructions)
  Features: simultaneous multiple unit double auctions
  Sessions: 2 inexperienced; 6 trading periods in session 1, and 8 trading periods in session 2

Treatment 3 – Private salinity abatement
  Features: simultaneous multiple unit double auctions with endogenous technology change between trading periods 4 and 5
  Sessions: 2 inexperienced; 8 trading periods in each session

Note
* Subjects may have participated in previous experiments, but they had not participated in an experiment with this environment and institution.
Table 4.4 Model equilibrium predictions

                                      Treatment 1      Treatment 2    Treatment 3**
Transaction price for water           1000–1020        900–997        900–997 / 963–1032
Quantity of water traded              8–9 units        7 units        7 units / 11–12 units
Transaction price for salt licenses   –                20             20
Quantity of salt licenses traded      24–26 EC units*  20 licenses    20 licenses / 16 licenses

Notes
* In treatment 1 there is no salt market. The expected EC units reported in this table for treatment 1 are the magnitude of the salt externality (equivalent to the number of licenses traded in treatments 2 and 3) when no policy to manage salt is operating.
** In treatment 3, the first values presented in this table refer to the expected equilibrium outcome with technology 1; the second values refer to technology 2.
of trading period 4 and before trading period 5 begins. Given the costs and values used in these pilot sessions, all subjects should choose to change.22 In treatment 3, periods 1 to 4, the expected market outcomes are the same as those for treatment 2. In periods 5 to 8, the net change in subjects' marginal values and costs is 85 experimental dollars, plus the avoided license cost for buyers and the lost license revenue for sellers in the salt license market. Technology 2 reduces the salt impact of each unit of water used in production, so that the quantity of water that can be traded efficiently rises from seven units to 11–12 units. The new expected equilibrium price range for water is 963–1032, and salt is reduced by four licenses (EC units) as compared to technology 1.
Results

Figures 4.3, 4.4 and 4.5 present the average transaction price per period in the water market for the three treatments. In treatment 1 (Figure 4.3), both sessions exhibit convergence towards the lower end point of the predicted equilibrium price range, and the variance in prices (not shown) appears to decline as periods increase. In session 2 transaction price adjusts from below, and in period 10 of this session the average transaction price is 992. In treatment 2 (Figure 4.4), when tradable salt licenses are introduced, we expect the price of water to be lower, since water buyers must now pay for salt licenses. In session 1, average transaction price is lower than in treatment 1, and is within the treatment 2 predicted range in all periods. In session 2, transaction price for this treatment is more erratic in the early periods but then stabilizes near the midpoint of the predicted range between periods 5 and 8. Both sessions appear to be converging, as the variance in price between periods decreases and price moves closer to the midpoint of the equilibrium price interval. In treatment 3 (Figure 4.5), subjects can choose to change production technology before period 5. This technology change is salt saving and therefore reduces the cost of salt to the irrigator. We expect the price of water to increase when the new technology is adopted, because irrigators receive a productivity improvement and the cost of salt licenses decreases. In session 1 of treatment 3, average transaction price is within the predicted equilibrium range in all periods. When technology change occurs, average transaction price falls by 6 experimental dollars. Average transaction price continues to fall in period 6 before beginning to adjust upward in periods 7 and 8. In session 2, transaction price for water adjusts from above to within the treatment prediction for periods 1 to 4.
Figure 4.3 Transaction price in the water market, treatment 1 (price per megaliter, by trading period; sessions 1 and 2 with predicted equilibrium price bounds).
Figure 4.4 Transaction price in the water market, treatment 2 (price per megaliter, by trading period; sessions 1 and 2 with predicted equilibrium price bounds).
Figure 4.5 Transaction price in the water market, treatment 3 (price per megaliter, by trading period; sessions 1 and 2 with predicted equilibrium price bounds).
After the technology change, transaction price continues to fall before adjusting towards the lower end point of the new prediction in periods 6 through 8. Figures 4.6, 4.7 and 4.8 present average transaction quantity per period in the water market for the three treatments. In treatment 1, average transaction quantity adjusts to the treatment prediction from below, although it remains too low in session 2. This is consistent with the observed transaction price for this session in Figure 4.3.
Figure 4.6 Transaction quantity in the water market, treatment 1 (megaliters traded, by trading period).
Figure 4.7 Transaction quantity in the water market, treatment 2 (megaliters traded, by trading period).
Both price and quantity are below the equilibrium prediction; in this session subjects are failing to realize all gains from trade in the water market. In treatment 2, when salt licenses are introduced, the quantity of water traded should fall. In both sessions, shown in Figure 4.7, transaction quantity is lower than in treatment 1, as predicted. In treatment 3, transaction quantity in session 1 is equal to the treatment prediction in periods 3 to 8, and quickly adjusts when the technology change occurs between periods 4 and 5.
Figure 4.8 Transaction quantity in the water market, treatment 3 (megaliters traded, by trading period).
Transaction quantity in session 2 takes longer to respond to the new technology. Figures 4.3 to 4.8 indicate that in most of our sessions, prices and quantities in the water market do adjust towards the predictions for the treatments. In treatment 1, shown in Figures 4.3 and 4.6, price and quantity in session 1 stabilize at their predicted equilibrium early in the session, although in session 2 transaction price and quantity are too low but exhibit an upward trend towards their predicted equilibrium. In treatment 2, shown in Figures 4.4 and 4.7, average transaction price and transaction quantity are initially low, but both sessions show signs of convergence to the treatment predictions. The poorer performance of treatment 2 is probably influenced by some initial confusion and learning by the subjects. As we discuss later when we explore prices in the salt market, some of our subjects in treatment 2 apparently did not understand how the salt market operated. Treatment 3, shown in Figures 4.5 and 4.8, is particularly interesting, as the market is clearly responding to the technology change. Although average prices in both sessions fall in period 5, perhaps because subjects are unsure about the impact of the change on market equilibrium, information about the new (private) costs and values is quickly reflected in the market. Subjects first realize they can profitably trade more units of water, and transaction quantity increases in period 5. Then in periods 6 to 8 prices begin to rise towards the new equilibrium. Figures 4.9 to 4.12 present market outcomes for the salt permit market. Figure 4.9 shows that the average period transaction price in the salt market for treatment 2 differs substantially from the equilibrium price of 20 experimental dollars.
Figure 4.9 Transaction price in the salt market, treatment 2 (price per license, 1 license = 1/1000 EC unit, by trading period).
Figure 4.10 Transaction price in the salt market, treatment 3 (price per license, 1 license = 1/1000 EC unit, by trading period).
In session 1, permit prices are very low, much lower than the prediction. In session 2, salt prices begin very high (equal to 204 experimental dollars) and fall rapidly across periods, reaching $39 in period 8. In treatment 3 (Figure 4.10), transaction price for salt adjusts from below to within a few experimental dollars of the treatment prediction.
Figure 4.11 Transaction quantity in the salt market, treatment 2 (licenses traded, 1 license = 1/1000 EC unit, by trading period).
Figure 4.12 Transaction quantity in the salt market, treatment 3 (licenses traded, 1 license = 1/1000 EC unit, by trading period).
In these pilot experiments the uniform equilibrium price for salt permits is determined by the $20 fine that buyers incur for each salt permit they fail to buy in a period. In treatment 2 we believe some subjects did not understand that the opportunity cost of salt permits was $20, and this is why transaction price for salt permits is either too low (session 1) or too high (session 2).
We used random transaction prices for salt in the instruction examples: in the instructions for treatment 2, session 1, we used an example for the salt market with transaction prices between $1 and $4, and in session 2 we used an example with transaction prices between $60 and $80. The observed prices suggest that some subjects did not understand the relationship between the salt permits and the enforcement fine, and instead used the unintended suggestions given by the experimenter in the instructions to form beliefs about a reasonable price for salt permits. In treatment 3, we adjusted the instructions to illustrate more clearly that 20 is the opportunity cost to buyers. We also checked buyers' record sheets at the end of the first few trading periods in treatment 3 to determine whether they had bought enough permits. If they had not, we clearly told them they must pay a cost of $20 for each license they required but did not buy in the market. We think subjects' understanding of how the permit market operates was significantly better in treatment 3 than in treatment 2. Figures 4.11 and 4.12 present transaction quantity in the salt market for treatments 2 and 3. In treatment 2 (Figure 4.11), transaction quantity for salt in both sessions adjusts from below towards the treatment prediction of 20 permits. In treatment 3, transaction quantity for salt increases to the technology 1 prediction of 20 permits. Transaction quantity then shifts down in periods 5 and 6, as predicted, but it appears to shift down too much. We conjecture that when permit prices are within 1 to 2 experimental dollars of 20, some buyers choose to pay the fine and avoid the subjective transaction costs of trading in the salt market. (These transaction costs could include the mental effort of trading in, and focusing on, two markets.) In treatment 3, in particular, some subjects chose to incur the $20 fine for salt permits and to trade only in the water market in later periods when permit prices are near 20. While these graphical impressions provide useful informal information about the operation of the markets, we also present some simple regressions to explore the statistical significance of treatment differences in these preliminary sessions. The observations used in these regressions are transaction prices averaged by period across sessions. Table 4.5 presents random effects regressions for average price in the water market. In the random effects models, the session represents the random effect, to account for the correlation of market outcomes within a session. The equation we estimate is the following: Pit = constant + β1(1/t) + β2D2 + β3D3Tech1 + β4D3Tech2 + eit, where P indicates the average transaction price, i denotes the session and t represents time as measured by the number of market periods in the treatment. The coefficient β1 captures the impact of time on average prices. The inverse specification 1/t allows the impact of time on prices to be non-linear, with larger price changes in early periods. D2, D3Tech1 and D3Tech2 are dummy variables for treatments 2 and 3 respectively. (D2 takes the value of 1 when the treatment
being considered is treatment 2 and takes the value of 0 otherwise; D3Tech1 takes the value of 1 for data in periods 1–4 of treatment 3, and D3Tech2 takes the value of 1 for data in periods 5–8 of treatment 3.) The coefficients on the dummy variables D2 and D3 capture the impact of treatment 2 and the impact of the change in production technology in treatment 3, respectively, relative to the baseline treatment 1. The results indicate that transaction prices are significantly lower in treatment 2 than in treatment 1. This is consistent with the equilibrium prediction: in treatment 1 there is no salinity policy, and therefore private values and costs, and the resulting market equilibrium, do not include the external cost of salt. Transaction prices in treatment 3 are not significantly different from treatment 1 for either production technology. The β1 coefficient on 1/period is not significantly different from zero, indicating that no systematic trend in transaction prices exists across periods. We do not present regression results for average price in the salt market for treatments 2 and 3, as there is substantial variation between sessions in treatment 2 (see Figure 4.9); the coefficient for the treatment dummy could be capturing noise in the data rather than providing important information. The results presented in Table 4.5 assume that the price of salt licenses does not affect water prices and vice versa.

Table 4.5 Random effects estimation models for average transaction price in the water market

                                     Random effects model
Constant                             987.61* (12.81) (77.08)
1/period (t)                         –15.45 (13.25) (–1.17)
Treatment 2                          –46.55* (17.66) (–2.64)
Treatment 3, periods 1–4 (tech 1)    9.24 (18.68) (0.49)
Treatment 3, periods 5–8 (tech 2)    –28.57 (18.52) (–1.54)
No. of observations                  50
Wald Chi-squared                     16.85
Significance level                   0.002
R-squared (overall)                  0.45

Notes
The first number in brackets is the standard error; the second number in brackets is the z-statistic.
* significant at 1 percent.
Models that account for the endogenous relationship between water and salt prices in these simultaneous markets would be useful in understanding the interconnectedness of these two markets. Given our limited data, however, it is difficult to obtain identifying variables to estimate this endogenous relationship. We hope to extend the analysis in this direction in future work based on additional experimental sessions.
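For readers who wish to reproduce this kind of specification, the sketch below shows one way such a random effects (random intercept) model could be estimated with standard tools. Everything here is an assumption for illustration: the column names, the synthetic data generator and its coefficients (which loosely echo the signs in Table 4.5) are ours, not the authors' materials.

```python
# A minimal sketch of the random effects specification behind Table 4.5,
# fitted on synthetic session-period data in long format.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
# Six hypothetical sessions: two per treatment, mirroring the pilot design.
for session, treatment in enumerate([1, 1, 2, 2, 3, 3], start=1):
    n_periods = 10 if treatment == 1 else 8
    for t in range(1, n_periods + 1):
        d2 = 1 if treatment == 2 else 0
        d3t1 = 1 if treatment == 3 and t <= 4 else 0
        d3t2 = 1 if treatment == 3 and t > 4 else 0
        # Illustrative data-generating process; coefficients are invented.
        price = 990 - 15 / t - 45 * d2 + 10 * d3t1 - 25 * d3t2 + rng.normal(0, 8)
        rows.append((price, t, session, d2, d3t1, d3t2))

df = pd.DataFrame(rows, columns=["price", "period", "session", "D2", "D3tech1", "D3tech2"])
df["inv_period"] = 1.0 / df["period"]  # the 1/t term allows larger early-period adjustment

# The session is the random effect, capturing correlation of outcomes within a session.
model = smf.mixedlm("price ~ inv_period + D2 + D3tech1 + D3tech2",
                    data=df, groups=df["session"])
print(model.fit().summary())
```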
Discussion and additional design lessons

In this chapter we report preliminary results from experiments that test the implementation of simultaneous markets for water and salinity rights in the Murray River in Australia. These pilot experiments form part of the Commonwealth Government of Australia's efforts to facilitate market-based incentives for the environment across the Murray Darling Basin. As these pilots are designed ultimately to inform government policy, we use parameters specific to the field. Our pilot region is the Sunraysia, located in Northern Victoria, and we held a number of workshops in the region to accurately determine costs and values for water and salt. These preliminary experiments illustrate how the introduction of a salt license market, operating simultaneously with a water permit market, can reduce water price and quantity traded in the direction of the social optimum. In one treatment we introduce endogenous technology change to represent private salt abatement by irrigators. We find that subjects in the experiment respond to the technology change, and prices and quantities of water and salt begin to move toward their new equilibrium predictions. This pilot experiment identifies some useful design lessons for additional sessions. Subjects take some time to learn in these markets, and it would be useful to design sessions with more periods to enable us to examine their behavior over time. Given the difficulty of conducting more than eight periods in a multimarket oral double auction lasting no more than 2 hours in total, in future experiments we plan to computerize the trading institution to allow us to conduct more trading periods within a session. Additional period observations could also help in separating out the impacts of learning and technology adoption. Subjects in treatment 3 could be performing better in the last few periods because of learning effects or due to the impact of adopting new technology. To examine this issue we could analyze the later periods' data from treatments 2 and 3 to isolate the effect of technology adoption. In treatment 3 all subjects had an incentive to adopt the new technology. In future experimental treatments we plan to explore how many subjects adopt new technology when the incentives to adopt are asymmetric and different across participants, so that in equilibrium not all subjects should choose to change. In addition, the kind of technology that is useful to manage the salinity problem can differ across treatments to reflect the different technologies available in the field. Changing irrigation technology from, for example, flood to drip would release both salt and water permits for sale. Alternative technologies, such as replanting native vegetation, groundwater pumping and improved drainage,
would release for sale only salt licenses and not water permits. In the pilot experiments presented in this chapter we assume that the technology improvements only release salt licenses; however, in future experiments we would examine the impact of different technologies on market and environmental performance. The price for salt permits in our experiment is determined by the government's cost to build engineering works to reduce salt, which corresponds to the fine level for a failure to purchase a salt permit. In more realistic settings the price for salt permits would also depend on irrigators' heterogeneous marginal costs for salt reduction (abatement) technology. Government could be a market participant, willing to supply salt permits at prices equal to its marginal control costs ($20 in the experiment). Irrigators would compete with government and would be willing to supply additional salt permits if the market price is greater than their marginal abatement cost. The market price for a salt permit would be $20 if government is the lowest cost abater, and would be less than $20 if private salt abatement is less expensive. We have not included this endogeneity in our pilot experiments. Representatives at our workshops were cautious about publicly revealing the currently available (rough) estimates of benefits for private abatement. The complexity of introducing heterogeneous technology choice in these oral pilots was significant, and we wanted to learn how to run the markets before introducing additional complexity. Subjects in this experiment represent irrigators along the Murray River, but other agents also impact upon salinity. Private salt harvesters, such as Pyramid Salt, reduce river salt concentrations and sell the harvested salt as a final product.23 Government also invests to reduce salinity; engineering solutions such as pumping, and the purchase of water permits from the water market for environmental flows, can also reduce salt concentrations. We plan to investigate increasing the number and types of participants in these markets to include these agents. This is particularly important for illustrating to government how environmental impacts from multiple sources can be managed by markets.
Notes
1 Online, available at: mdbc.gov.au/education/encyclopedia/naturalresources/env_issues/water_and_land_salinity.htm (accessed 30 August 2005). See surface and groundwater salinity.
2 Costs to agriculture include lost production due to saline affected soil and the application of salty water to crops, reduction in land values, amenity values and the costs of remedial actions to repair irrigation and drainage infrastructure and farm buildings. Most of the infrastructure costs fall on local government and residents in regional urban areas, and include damage to roads and buildings, corrosion of gas, water and sewer pipes, and killing of trees, grass and other vegetation. Environmental costs include impacts on aquatic and terrestrial biodiversity, destruction and damage to wetlands and the dependent breeding patterns of migratory birds. Online, available at: mdbc.gov.au/education/encyclopedia/naturalresources/env_issues/water_and_land_salinity.htm (accessed 30 August 2005). See the costs of land and water salinity.
3 In October 1999, the Commonwealth Government of Australia released an audit of salinity across the Basin (MDBC, 1999). See also online, available at: napswq.gov.au (accessed 30 August 2005).
4 Online, available at: napswq.gov.au/mbi/index.html (accessed 30 August 2005).
5 Online, available at: napswq.gov.au/mbi/projects.html (accessed 30 August 2005). See Cap and Trade for Salinity: Property Rights and Private Abatement Activities, a Laboratory Experiment Market.
6 See Sinclair Knight Merz (2001) for a technical discussion of the salinity impact estimates.
7 See SRWA (2002) for information provided to irrigators in the region about the salinity impact zones.
8 Each irrigator is located within one zone only for the purpose of policy implementation (SRWA, 2002).
9 Without this information the regulator could not define property rights for salt. Tradable permits allow firms to trade allocations of pollution (property rights). It would be too costly or infeasible to create property rights for sources that cannot be measured.
10 Also see Stoneham et al. (2003). For general public information on the Bushtender program, see online, available at: dpi.vic.gov.au/dse/nrence.nsf/FID/-15F9D8C40FE51BE64A256A72007E12DC?OpenDocument, and for information on the new Ecotender program, an extension of the Bushtender, see online, available at: dpi.vic.gov.au/DSE/nrence.nsf/LinkView/F18669E8E2A4C02FCA256FDB00031592E2A52365F438A1EFCA256FFF00236D8E (both accessed 30 August 2005).
11 Outside the environmental domain, experimental economics has been successfully used in the design of the Federal Communication Commission's Spectrum Auction (Milgrom, 2004), the British Telecommunications Spectrum Auction (Binmore and Klemperer, 2002) and the allocation of American medical doctors to their first jobs (Roth, 2002), to name but a few applications.
12 Online, available at: watermove.com.au (accessed 30 August 2005).
13 See, for example, the New Zealand Spectrum Auction conducted in 1990 (Milgrom, 2004 and McMillan, 1994).
14 Land use and production technology distributions across the Sunraysia were provided by SunRise 21, Mildura, Victoria.
15 Our stakeholders include irrigators and rural water authorities from the region. These groups are familiar with local production costs and values. It is very important to them that any policy experiments, which may in the future influence production in the region, accurately reflect the field conditions.
16 Introducing too much complexity from the field can reduce control over incentives in the experiment. This can arise for a number of reasons. Subjects may find the experiment too difficult to understand; this can be mitigated through training periods. Behavior may also be induced by incentives not controlled for in the laboratory. For example, having subjects trade units labelled water and salt, as compared to units 1, 2 and 3, could induce private preferences that cannot be controlled. Or, in more complicated experiments, subjects may choose a trading strategy that minimizes effort and difficulty, giving a "safe" average payoff, as opposed to trading to maximize earnings.
17 In Sunraysia there are five salinity impact zones (see Figure 4.1). In this pilot test bed experiment we have chosen to simplify the environment. Our objective is to understand how the simultaneous multiple unit double auctions operate. Using two impact zones instead of five provides some understanding without introducing additional complexity. In future experiments, we plan to increase the number of subjects and impact zones to model the field more closely.
18 Sellers are allocated their equilibrium quantities of licenses free of charge at the beginning of each period.
19 Many technologies will also save water, and therefore if a subject adopts improved technology then they will have additional water to sell that is no longer required in production. Alternatively, subjects could expand production for a given level of water input. We assume in these pilot experiments that technology only reduces salt impact. Examples of technologies that reduce salt are groundwater pumps, drainage schemes that capture excess water and drain it away from rivers, and revegetation with native crops, or native trees and shrubs.
20 It is possible to design experiments in which subjects change from buyer to seller within a period depending on market price. This introduces additional complexity, particularly in oral auctions. As our focus is to study how these markets operate, it was not necessary to include this additional feature at this stage.
21 An announcement was made in all treatments informing subjects there was 1 minute until market close.
22 In session 2 of treatment 3, only Buyer 2 did not choose to change, even though it was profitable for her to do so.
23 Online, available at: pyramidsalt.com.au/ (accessed 30 August 2005).
References

Ben-David, S., Brookshire, D., Burness, S., McKee, M. and Schmidt, C., 1999. Heterogeneity, irreversible production choices, and efficiency in emission permit markets, Journal of Environmental Economics and Management, 38(2), 176–194.
Binmore, K. and Klemperer, P., 2002. The biggest auction ever: the sale of the British 3G telecom licences, Economic Journal, 112(478), 74–96.
Bohm, P., 2003. Experimental evaluations of policy instruments. In Handbook of Environmental Economics, Vol. 1 of Handbooks in Economics, Mäler, K.G. and Vincent, J., eds., Amsterdam: Elsevier.
Buckley, N., Mestelman, S. and Muller, R.A., 2004. Implications of alternative emission trading plans: experimental evidence, McMaster University, Department of Economics Working Paper Series.
Cason, T.N. and Plott, C.R., 1996. EPA's new emissions trading mechanism: a laboratory evaluation, Journal of Environmental Economics and Management, 30(2), 133–160.
Cason, T.N., Gangadharan, L. and Duke, C., 2003. A laboratory study of auctions for reducing non-point source pollution, Journal of Environmental Economics and Management, 46(3), 446–471.
Commonwealth Government of Australia, 2001a. Intergovernment Agreement on a National Action Plan for Salinity and Water Quality, an agreement between the Commonwealth of Australia, New South Wales, Victoria, Queensland, Western Australia, South Australia, Tasmania, the Northern Territory and the Australian Capital Territory, June, Canberra, ACT.
Commonwealth Government of Australia, 2001b. An Agreement Between the Commonwealth of Australia and the State of Victoria for the Implementation of the Intergovernment Agreement on a National Action Plan for Salinity and Water Quality, October, Canberra, ACT.
Cummings, R., McKee, M. and Taylor, L., 2001. To whisper in the ears of princes: laboratory economic experiments and environmental policy. In Frontiers of Environmental Economics, Folmer, H., Gabel, L., Gerking, S. and Rose, A., eds., London: Elgar Publishers.
Department of Natural Resources and Environment, 2001. The Value of Water – A Guide to Water Trading in Victoria, Catchment and Water Resources, Melbourne, Vic.
Duke, C. and Gangadharan, L., 2004. Salinity in water markets, an experimental investigation of the Sunraysia Salinity Levy, Victoria. Mimeo, University of Melbourne.
Gangadharan, L., Farrell, A. and Croson, R., 2005. Investment decisions and emissions reductions: results from experiments in emissions trading, University of Melbourne Working Paper #942.
Goodfellow, J. and Plott, C.R., 1990. An experimental examination of the simultaneous determination of input and output prices, Southern Economic Journal, 56(4), 969–983.
McMillan, J., 1994. Selling spectrum rights, Journal of Economic Perspectives, 8(3), 145–162.
Milgrom, P., 2004. Putting Auction Theory to Work, Cambridge: Cambridge University Press.
Muller, R.A. and Mestelman, S., 1998. What have we learned from emissions trading experiments? Managerial and Decision Economics, 19(4–5), 225–238.
Murray Darling Basin Commission, 1999. The Salinity Audit of the Murray Darling Basin, A 100 Year Perspective, October, Canberra, ACT.
Murray Darling Basin Commission, 2003. Stakeholder workshop, Mildura, June 2002.
Murray Darling Basin Ministerial Council, 2001. Basin Salinity Management Strategy 2001–2015, August, Canberra, ACT.
Plott, C.R., 1983. Externalities and corrective policies in experimental markets, Economic Journal, 93(369), 106–127.
Plott, C.R., 1994. Market architectures, institutional landscapes and testbed experiments, Economic Theory, 4(1), 3–10.
Roth, A.E., 2002. The economist as engineer: game theory, experimental economics and computation as tools of design economics, Econometrica, 70(4), 1341–1378.
Sinclair Knight Merz, 2001. Mallee Region Salt Impacts of Water Trade: A Proposed Method for Accounting for the EC Impacts of Water Trade in the Victorian Mallee, a report prepared for the Department of Natural Resources and Environment, Melbourne, Victoria.
Stoneham, G., Chaudhri, V., Ha, A. and Strappazzon, L., 2003. Auctions for conservation contracts: an empirical examination of Victoria's Bushtender Trial, Australian Journal of Agricultural and Resource Economics, 47(4), 477–500.
Sunraysia Rural Water Authority, 2002. Refinement of the river salinity impact zoning system, Irrigator, 1(16), 6–17.
SunRise21, 2004. Mapping and Information Services, May 2004.
5
Aligning policy and real world settings
An experimental economics approach to designing and testing a cap-and-trade salinity credit policy
John R. Ward, Jeffery Connor, and John Tisdell
Introduction

This chapter describes context specific lessons about the application of experimental methodologies to aid in the design and implementation of cost effective water quality and land management policy. Alternative institutional structures, including an array of potential market settings, were tested in an experimental economics laboratory to inform an on-ground trial of a recharge cap-and-trade scheme. The case study focus is the Bet Bet dryland farming community in a catchment located in north central Victoria, Australia. The Bet Bet catchment contributes the majority of the more than 40,000 tonnes of salt annually entering the Boort irrigation area from the Loddon dryland catchment areas. Levels of river salinity (measured as units of electrical conductivity or EC) are partially a consequence of the volume of rainwater recharging highly saline, connected groundwater aquifers. Recharge rates and associated rates of salt mobilization in the areas depend on geomorphology, with localized fractured rock conducive to high groundwater recharge rates. In addition, the rate of recharge, and thus external salinity impacts, depend on the type of vegetative ground cover and farm specific cropping, grazing and management decisions. Extensive replacement of deep rooted woody perennials and perennial pasture with shallow rooted annual pastures has been identified as a key factor in increased rainwater soil percolation and subsequent groundwater recharge. The increased recharge leads to episodes of increased mobilized salt loads. The additional salt is exported into connected river systems where it jeopardizes the long-term viability of downstream irrigated horticultural and agricultural crops, and threatens the functional organization of downstream riparian ecosystems. Government sponsored efforts to motivate changes in management regimes on privately tenured land have relied on traditional farm extension processes, legal and statutory remedies and a scheme providing scheduled payments for the re-establishment and management of deep rooted perennials. Despite regional
promotion, the level of established revegetation, and the consequent groundwater recharge and river salinity, have not been at a scale sufficient to comply with prescribed salinity targets (Connor et al. 2004). The goal of the experimental economics reported on here was to test the capacity of alternative policy designs to:
•	motivate persistent land use change appropriate to the specified groundwater condition;
•	encourage change at scales that contribute substantially to salinity targets;
•	mobilize high levels of participation in strategically beneficial localities; and
•	achieve targets at the lowest cost to society.
The local agencies responsible for the management of groundwater recharge and salinity have endorsed market based instruments as an alternative policy approach. Market based instruments (MBI) rely on regulation or laws that encourage behavioral change through the price signals of markets, as opposed to the explicit directives for environmental management associated with regulatory and centralized planning measures (Stavins 2003). The primary motivation of market based instrument approaches is that if environmentally appropriate behavior can be made more rewarding to land managers, then private choice will better correspond to the best social and environmental choice. There are several market based approaches that are widely applied, including environmental charges and auctions. The project that this research focused on was in response to a specific call for proposals to test a cap-and-trade approach to motivate revegetation efforts. The objective was to develop and test the capacity of a recharge credit scheme to provide flexible incentives sufficient to motivate extensive revegetation efforts and, as a consequence, achieve high volume reductions in groundwater recharge, mobilized salt loads and eventual levels of river salinity. The advocacy of MBI as policy tools for managing both diffuse and point source environmental problems has a recent heritage, limiting opportunities for policy makers to gain experience and expertise in their design, testing and implementation (see, for example, Shortle and Horan, 2001). Appraisals of their relative importance in Australian policy portfolios have also been informal, limited and ad hoc. Despite improvements in the analysis of market based instrument performance, simple rules and evaluation protocols to identify a priori the relative advantages over other instruments in resolving specific environmental problems have not yet emerged. This chapter describes the design and the results of a series of economic experiments to test the reliability and performance of potential policy instruments and to provide robust ex ante guidance on design features for a trial of the market exchange of recharge credits to reduce river salinity. The primary objective of the experiments was to empirically inform the contractual design of an on-ground trial of a recharge cap-and-trade scheme in the Bet Bet dryland farming community. In addition, a discussion is provided of how economic experiments informed actual groundwater recharge policy design.
The chapter is set out as follows. The first section of the chapter describes the hydrological process of saline groundwater recharge that characterizes many Australian aquifers, including the Bet Bet catchment. The development of a proxy monitoring and auditing model is also described, which quantifies variation in groundwater recharge rates at the farm scale as a function of land management action and landscape position. Such a model is a necessary antecedent to the market exchange of recharge units. An evaluation of potential market impediments, and the experimental rationale and research hypotheses to test solutions, are described in the next part of the chapter, which also outlines an argument for expanding the potential function of experimental economics from primarily theory testing to wider use in the institutional design process. A description of a context rich simulation setting is the focus of the third part of the chapter, followed by details of the experimental setting, controls, player payments and protocols. The results are then set out and are discussed further in the penultimate part of the chapter. Conclusions drawn from the experiments in developing implemented policy for the market exchange of recharge credits in the Bet Bet catchment complete the chapter.
Modeling interdependent land management and hydrological processes

The effects of recharge are different in the Bet Bet catchment than in fresh groundwater aquifers in many other parts of the world (e.g. most of North America), where typically increased recharge ameliorates the social allocation dilemma facing abstractors. In typical settings, declining groundwater levels constitute a common pool externality, arising as a result of the costly exclusion of beneficiaries and rival resource utilization (Ostrom 1998). When joint outcomes depend on multiple actors contributing inputs or actions that are costly and difficult to quantify, and there is a lack of institutional protocols to restrict usage, incentives exist for individuals to act opportunistically. Without coordinating institutions, individual appropriations occur to a level where resource overuse arises and groundwater levels drop, imposing group shared costs, such as higher pumping costs, on the common pool community. Costly exclusion implies it is not possible to compel those who enjoy the additional benefit of appropriations to compensate those who incur costs. Groundwater in the Bet Bet (and many other Australian settings) is also characterized as a common pool resource; however, rising rather than falling groundwater levels constitute the common pool externality. Figure 5.1 is a schematic representation of the land management effects of rising groundwater recharge levels. The figure shows that groundwater recharge increases as an inverse function of the level of deep rooted perennial vegetation (illustrated in panel B). The increased hydraulic pressure in the mound above the saline aquifer causes a subsequent rise in both the water table and the level of salt intrusion in the river system. In the Bet Bet the majority of salinity impacts are exported to downstream river districts, where the costs of salinization are incurred primarily by downstream irrigators.
Figure 5.1 Schematic of the hydrogeology of irrigation water quality affected by variable upper catchment salt loads.
The increased salt load imposes increased costs on irrigators through increased soil salinity and correlated reductions in crop yields, accelerated infrastructure degradation and reduced ecosystem function. In addition, rising water tables can create external costs for dryland farmers in the form of land with shallow rooting depth to saline water, leading to crop yield losses or reduced rotational options. To participate effectively in the market based instrument scheme, land managers need to be able to evaluate the implications of their management decisions prior to implementation and to monitor progress against their targets or commitments. Similarly, administrators of the scheme must also have the capacity to monitor and audit the outcomes of changes in land use or management practice, and to attribute change in recharge to either landholder action or climate. Since groundwater recharge and salinity are not readily measured directly, a prerequisite to implementing a cap-and-trade is the development of a reliable and transparent
surrogate metric to assist all participants in evaluating recharge and salinity impacts of land management actions. Thus a first step in this project was the development of robust and community validated biophysical and hydrological modeling to provide information about groundwater recharge rates as a function of variable vegetation cover and land management at the farm scale, differentiated according to landscape position (Clifton 2004; Connor et al. 2004). In the model, Ri represents the recharge rate for farm i, managing crop j, where:

Ri = α(Cij, Aij, RAi, Gi, Lk)

and Cij is crop type and management, Aij is area of crop type, RAi is annual rainfall, Gi is soil type and geomorphology, and Lk is landscape position, k = 1, 2 or 3, where 1 represents lower slope, 2 represents break of slope and 3 represents ridge and upper slope. j = 1–5, where 1 represents annual grazing, 2 represents perennial pasture (phalaris set grazing), 3 represents perennial pasture (phalaris rotational grazing), 4 represents native tree vegetation and 5 represents farm forestry (less than 10 years old). Cij and Aij represent endogenous variables in a farm decision set, and RAi, Gi and Lk represent exogenous variables in a farm decision set. The model developed accounts for three key biophysical determinants of recharge differences across locations and actions shown in Figure 5.1 (a schematic code sketch follows the list):
1 Ceteris paribus, for crop j, recharge from the lower slope (L1) is less than recharge from the break of slope (L2), which is less than recharge from the upper slopes (L3): namely, RL1 < RL2 < RL3.
2 Increased deep rooted perennial vegetation reduces groundwater recharge: namely, for landscape position Lk, subject to land management regime MP (Panel A) or MG (Panel B), recharge Ri is such that Ri(Lk, MP) < Ri(Lk, MG).
3 The estimated costs of groundwater recharge for land management activity at farm i, at landscape position Lk, are such that: RMG > RMP – recharge from annual grazing is greater than recharge from perennial grazing or forestry; WTMG > WTMP – the water table level is higher for annual grazing than perennials; SLMG > SLMP – salt load is greater for annual grazing than perennials; CMG > CMP – external costs of recharge imposed on downstream irrigation are greater for upstream annual grazing compared to perennials.
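As flagged above, the following sketch illustrates how the recharge accounting relationship Ri = α(Cij, Aij, RAi, Gi, Lk) might be implemented as a lookup of per-hectare rates by crop type and landscape position. The coefficient values are invented placeholders that merely respect the orderings in points 1 and 2; they are not the calibrated field values from Clifton (2004).

```python
# Illustrative recharge accounting in the spirit of Ri = α(Cij, Aij, RAi, Gi, Lk).
# The per-hectare rates below are hypothetical placeholders, NOT calibrated values;
# they only respect the orderings above (upper slope > break of slope > lower slope;
# annual grazing > perennial pasture > trees). Soil/geomorphology (Gi) is folded
# into the rates for brevity.

# Hypothetical recharge rate per hectare, per unit of rainfall, by (crop, landscape).
# Crops: 1 annual grazing, 2/3 perennial pasture, 4 native trees, 5 farm forestry.
# Landscape: 1 lower slope, 2 break of slope, 3 ridge and upper slope.
RATE = {
    (1, 1): 0.12, (1, 2): 0.18, (1, 3): 0.25,   # annual grazing: highest recharge
    (2, 1): 0.06, (2, 2): 0.09, (2, 3): 0.13,
    (3, 1): 0.05, (3, 2): 0.08, (3, 3): 0.11,
    (4, 1): 0.02, (4, 2): 0.03, (4, 3): 0.04,   # native trees: lowest recharge
    (5, 1): 0.03, (5, 2): 0.04, (5, 3): 0.06,
}

def farm_recharge(parcels, annual_rainfall):
    """Total farm recharge: sum over parcels of area * rate * rainfall.

    parcels: list of (crop_type, area_ha, landscape_position) tuples.
    """
    return sum(area * RATE[(crop, position)] * annual_rainfall
               for crop, area, position in parcels)

# Example: converting 40 ha of upper-slope annual grazing to native trees.
before = farm_recharge([(1, 40, 3), (2, 60, 1)], annual_rainfall=450)
after = farm_recharge([(4, 40, 3), (2, 60, 1)], annual_rainfall=450)
print(before, after, before - after)  # the difference is the creditable reduction
```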
Shortle and Horan (2001) and Schary (2003) argue that developing policy capable of realizing savings by focusing on performance, coupled with compliance flexibility, is a substantial challenge to effective cap-and-trade schemes when monitoring actual outcomes is technically infeasible or very costly. The recharge accounting model described above was designed to act as a reliable means of auditing and monitoring land management actions to facilitate performance based payments.
Selecting experiments to address market impediments

Whilst the introduction of maximum recharge quanta imposes costs on individuals, in a well functioning market the opportunity to trade has the potential to compensate that loss or reduce the cost burden. Some individuals will choose to use more than their quantum (and incur a debit), and others will choose to use less (being rewarded with credits). A challenge for policy is to create a "frictionless" market setting that expedites rapid learning and understanding of the advantages of trade, with low learning and exchange costs relative to trade benefits. In a frictionless market, savings to landholders through market exchange between individuals with surplus credits and those in deficit may be considerable. In addition, information expressed as coherent price signals from market exchange would reveal any differences in returns to management options that reduce environmental consequences, and these would be immediately discovered and exploited. Environmental policy analysts dating back to Dales (1968) and Montgomery (1972) have advocated tradable credit approaches based on the argument that such approaches can increase economic efficiency relative to approaches that allow less flexibility, such as uniform standards. Baumol and Oates (1988, p. 177) articulate the economic case for, and potential advantage of, tradable credit systems applied to environmental policy, advocating that environmental goals can be achieved at lower cost to society than under alternative policy approaches. Since the mid 1990s (Tietenberg and Johnstone 2004), MBI have been increasingly endorsed across an array of agency jurisdictions as cost effective policy instruments to address environmental targets and protection. Subject to controversy and debate ten years ago (Keohane et al. 1998), MBI have evolved to the point of becoming received wisdom in many environmental policy circles (Stavins 2003). However, Tietenberg (1998) notes that global experience indicates that many past attempts to implement well intentioned tradable permit schemes for both diffuse (e.g. groundwater recharge) and point source emissions have failed because of inadequate attention to the design and timing of implementation of the market architecture deployed. Several commentators note that successful MBI schemes have not necessarily substituted for regulatory approaches but are more generally complementary (Stavins 2003; Tietenberg and Johnstone 2004; Tisdell et al. 2004; Young et al. 1996). In many cases MBI may be advantageous, but in others the relative advantages over other instruments may be limited, poorly defined, state contingent and subject to change through time. For example, a market based instrument may be cost effective, but may not perform well in the dimensions of adoption
rates, administrative and transaction costs, concentration of environmental consequences and political feasibility. When these are articulated policy objectives, the single model terrain of economic efficiency or cost effectiveness may not be sufficient to reliably inform policy makers of aggregate instrument performance. To improve ex ante predictions of policy outcomes, experimental economics was employed to provide empirically based analysis of observed behavioral responses to the implementation of potential economic and community governance instruments to reduce recharge. In a controlled decision environment, calibrated to represent the salient economic and biophysical features of the actual catchment (Clifton 2004), formal experiments were applied to pretest feasible on-ground market impediment solutions and longer term policy options that may require changes to current institutional structures. The motivation for the experimental work was to help understand and compare key factors that could impede cost effective, environmentally reliable and politically feasible trial implementation, and to empirically guide policy design features to mitigate identified impediments. Experiments were part of a wider process of policy design involving formal and informal community and expert consultation, analysis of a sociological survey in the catchment and farmers participating in field demonstrations of an experimental land management and recharge trading simulation. Ward and Connor (2004) and Connor et al. (2004) identified market impediments that were qualitatively evaluated and ranked according to the likely significance of the impediment on the catchment, administrative alignment, political feasibility and the likely efficiency gains of proposed impediment solutions. A lack of enforceable property right obligations to manage recharge, the high transaction costs of complex information processing, the likelihood of thin markets and the effect of non-market motivations were identified as likely impediments to an effective recharge market. The rationale for the three sets of experiments that were chosen through this process is summarized below.
Experiment 1: testing assignment of property right obligations for recharge management with discriminant and uniform price auctions

There are currently no obligations for farmers to manage recharge to prescribed limits. Thus a first step in trialing the market exchange of transferable recharge credits is to establish limits on recharge. Given that a mandatory approach is not institutionally possible in the short run, a feasible surrogate property right is to pay landholders to enter into a temporary contract prescribing recharge obligations. The payment scheme also provides a straightforward trial entry and, by introducing an incentive to maximize participation rates, addresses the likelihood of thin markets.1 Previous Australian experience (Stoneham et al. 2003; Bryan et al. 2004) and theoretical insights (inter alia: Holt 1980; Kagel 1995; Milgrom 2004; Latacz-Lohmann and Van der Hamsvoort 1997) suggest that a competitive
tender should reveal the true costs of recharge abatement and act as a cost effective way to assign recharge obligations. Current Australian applications of tenders for conservation provision have relied on a sealed bid2 discriminant price tender format (Stoneham et al. 2003; Bryan et al. 2004), both to reveal individual opportunity costs in a competitive environment and to minimize public expenditure by capturing available producer surplus. Drawing on experimental analysis (Cason et al. 2003; Cason and Gangadharan 2004), agencies have relied on maintaining the extant information asymmetry3 to mitigate incentive incompatibilities and maximize cost effectiveness for the agency, namely, to induce sellers to minimize the level of trade surplus included in their tender bid. Milgrom (2004) notes that a uniform price, where an equal payment level is established exogenously from successful bidders, is more incentive compatible with the revelation of true costs. Cason and Gangadharan (2005) note that violations of the assumptions of the revenue equivalence theorem (Vickrey 1961; Myerson 1981), common to many environmental applications, preclude coherent theoretical analysis of the tender formats. Milgrom (2004) summarizes these violations as budget constraints, variable risk aversion, multiple or endogenous quantities, autocorrelation or interdependence of decision sets and the existence of public goods. Thus the first experimental analysis conducted is to provide a formal comparison of the bidding behavior and the magnitude and reliability of recharge reduction under discriminant versus uniform tender formats.
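The payment difference between the two tender formats can be sketched as follows. The bid values are hypothetical, and the clearing rule (accept the lowest asks until a recharge reduction target is met) is a deliberate simplification of a real conservation tender.

```python
# A simplified comparison of discriminant- vs uniform-price tender payments.
# Bids are (bidder, asking price per unit of recharge reduction, units offered);
# all values are invented for illustration.

bids = [("A", 8, 10), ("B", 10, 15), ("C", 12, 10), ("D", 15, 20)]
target = 30  # units of recharge reduction the agency seeks

def clear_tender(bids, target, uniform=False):
    accepted, bought = [], 0
    for bidder, ask, units in sorted(bids, key=lambda b: b[1]):  # lowest asks first
        if bought >= target:
            break
        take = min(units, target - bought)
        accepted.append((bidder, ask, take))
        bought += take
    if uniform:
        # Uniform price: every accepted unit is paid the highest accepted ask.
        cutoff = max(ask for _, ask, _ in accepted)
        return sum(cutoff * take for _, _, take in accepted)
    # Discriminant price: each successful bidder is paid their own ask.
    return sum(ask * take for _, ask, take in accepted)

print(clear_tender(bids, target))                # discriminant: 8*10 + 10*15 + 12*5 = 290
print(clear_tender(bids, target, uniform=True))  # uniform: 12 * 30 = 360
```

Under truthful bidding the discriminant format is cheaper for the agency, which is precisely why bidders have an incentive to shade their asks upward; the uniform format pays more per accepted unit but weakens that incentive, motivating the formal comparison in Experiment 1.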
Experiment 2: assessing impacts of information processing cost

Social survey work from this research (Thomson 2004), as well as experience with market based policy (Hahn 1989; Randall 2003), suggests that the issue of low participation rates and unreliable recharge reduction could be particularly acute with novel market designs that require more complex information processing. Thus a set of experiments was designed to test how the level of complexity of information processing required of program participants in alternative policy designs influences tradable credit market outcomes. One of the premises of theoretical analysis of perfectly competitive markets is that agents are fully and costlessly informed of the consequences of all potential market actions they can take. In implementing novel natural resource policies involving transitions to new institutional arrangements such as recharge credit trade, agencies may need to account for behavioral and cognitive limitations of informational processing to understand how to adapt successfully and introduce market based approaches (Simon 1972; Sterman 1987, p. 190; Smith 2002). A premise of the iterative process of rational expression of preferences, called the "discovered preference hypothesis" by Plott (1996) and referred to by Binmore (1999) and Braga and Starmer (2005), is that "rational economic behavior emerges at different stages in the development of an agent's decision skills" (see Guala 1999, p. 569). According to Braga and Starmer (2005, p. 62), "Plott characterizes this as a three stage process with a particular trajectory; decisions
progressively exhibit less randomness and greater rationality." That is, agents progress from an initial state of untutored and limited cognitive capacity, expressed as seemingly random impulsive responses, through a more systematic selection of choices that reflect the decision environment, and finally to a stage where expressed choices reflect the choices of others. According to Plott (1996) and Binmore (1999), given the opportunity for repeated market participation and with feedback signals of sufficient clarity and strength, the expression of increasingly rational behavior can be expected to evolve. There is currently a paucity of empirical evidence or theoretical insights to help anticipate the time steps and explanatory variables of emergent learning in specific market contexts (Braga and Starmer 2005). Additionally, Smith (1982, 1991, 2002) and Vatn and Bromley (1995) argue that the complexity of informational processing necessary to enable coherent choices (extended here to environmental goods, which have remained functionally invisible) may impede convergence with a theoretical market equilibrium. The recharge trial is time constrained to a single iteration of the clearance market (namely, 12 months), precluding repeated farmer participation. Therefore experiments were designed to provide experimental evidence relating levels of information complexity to recharge market outcomes. This is especially important for an evaluation of the initial recharge clearance market implemented in the field.
Experiment 3: testing policies to overcome low participation rates and non-market motivation

The final issue investigated is policy designed to manage common pool resources, given the behavior of individuals with non-market and social motivations. Past research suggests that behavior may diverge substantially from expected utility theoretical predictions of the outcomes of trade, extending to trade in recharge units (Ostrom 1998; Ostrom et al. 1992; Gintis 2000; Tisdell et al. 2004; Poe et al. 2004). According to Vatn and Bromley (1995), Loewenstein (1999) and Loomes (1999), the expression of individual preferences in market settings is a complex psychological process in which personal welfare may not be sufficient to explain the nature of choice, especially in the domain of public goods and environmental quality. Additionally, it may not be just individual choice but a variable sequence of choices (Kahneman and Tversky 1979), potentially filtered through the focal length of collective choice (Ostrom 1998). Prestige, public recognition, group belonging, avoidance of group sanction and the desire to contribute to the public good can all represent powerful motivators in some contexts. Failure to account for such motivations in policy design can result in reductions in the willingness to supply environmental quality compared to policy designed to harness such motivations. Survey results from this research (Thomson 2004) indicate high levels of social cohesion, reciprocal behavior and concerns about community reputation. In another Australian catchment, Marshall (2004) also found that the level of preparedness to cooperate and adhere to group compacts for salinity control was more sensitive to perceptions of
Consultation with local landholders and agency personnel raised important concerns that a voluntary tendering process may result in insufficient participation levels to achieve the desired level of land management change. In particular, there was concern that once the few enthusiastic participants had initially been engaged, primarily in pasture improvement actions, it would be difficult to motivate further land management change to reduce recharge, especially tree planting at the locations where this would contribute most to recharge reduction. These combined findings suggested that a policy approach to reward scheme participation with some form of collective award for reaching an aggregate recharge reduction level may increase the motivation for trial participation.
Synopsis of rationale for chosen experimental treatments

The overall conclusion of the review of potential impediments to recharge credit trade policy was that the introduction of novel market institutions, given the poorly defined state of information discovery, the violation of theoretical assumptions of auction theory and the public nature of groundwater recharge, may mean that theoretical prediction is not sufficient as an analytical tool or a reliable precursor for policy decision making. To gain an empirical basis for a priori evaluation of recharge policy, experiments were chosen to test three research hypotheses.

Research hypothesis 1: The incentive to reveal the true cost of individual recharge reduction is less for a discriminant price tender than for a uniform price tender, and a discriminant price tender will result in more variable recharge reduction and more volatile market outcomes.

Research hypothesis 2: Compared to a closed call market, the additional complexity of available information in an open call market will lead to more volatile market outcomes and impede convergence to the predicted equilibrium.

Research hypothesis 3: Compared to payments for individual recharge remediation, the introduction of a social payment, equally distributed to all players for collective recharge reduction in excess of target levels, will result in increased voluntary participation and hence recharge reduction. The introduction of communication as a coordination mechanism will reinforce voluntary social contracts and result in increased recharge reduction.
Context rich economic experiments

The experiments were designed with field calibrated hydrological, biophysical and economic data, modeling recharge and opportunity cost to actual farm activity choices in the trial area. Experimental farms are thus heterogeneous and represent the main relationships between landscape positions, farm management regimes, farm income and groundwater recharge specific to the Bet Bet catchment.
Additional context is introduced into the experimental domain by informing participants that they are hypothetically farmers located in a closed catchment and produce recharge that in turn potentially affects other farmers. Using the experimental setting, we sought to elicit and measure behavioral responses to hypothetical decisions simulating policies and diverse institutional structures across an array of market conditions in an analogue specific to the Bet Bet catchment. Contextualization is not usual experimental protocol; however, this research follows the lead of Cardenas (2000), Krause et al. (2003), Poe et al. (2004) and Tisdell et al. (2004). Drawing on an important insight from cognitive psychology, Lowenstein (1999) and Loomes (1999) argue that decision making is highly context dependent. This has led some experimentalists to conclude that to inform policy meaningfully, experiments may need to be designed to include salient features of the policy setting of interest (Lowenstein 1999; Loomes 1999; Krause et al. 2003; Tisdell et al. 2004). Adherents conclude that while experiments designed to eliminate any confounding effects are useful for isolating the influence of single treatment factors, they may not tell us much about how people are likely to react in real world contexts where confounding factors exist. There is now a growing body of experiments conducted in context rich environments. Results demonstrate that differences in context lead to differences in bargaining behavior, risk taking and sharing (inter alia Camerer and Lowenstein 2004; Gintis 2000; Krause et al. 2003). While experiments from context rich settings may allow only limited inference about behavior in other contexts, they are employed in this work because they represent, in the view of some, the most appropriate way to draw inferences about behavior that are valid for the specific contexts where policy design is being investigated (Lowenstein 1999; Plott and Porter 1996).
Experimental settings and experimental controls common to all experiments

All experimental sessions were held at the Griffith University experimental economics laboratory in November 2004, using the MWATER experimental software platform developed and administered by Dr. John Tisdell of Griffith University and the Cooperative Research Centre for Catchment Hydrology. Participants were selected from an existing pool of approximately 200 undergraduate students, familiar with experimental protocols and procedures. The students represent a number of faculties and the majority do not have an economics background. All experiments involve the same simulated experimental catchment comprising a total of 12 heterogeneous farms located in three landscape positions, with four farms located in each landscape position. The farms represent a synthesis of existing farm management styles, characterized by different levels of farm income and recharge rates, calibrated to simulate the main economic and biophysical decision environment facing farmers in the Bet Bet catchment (Ward and Connor 2004).1
The decision set provided to participants comprised five discrete farm management options, each option associated with a unique level of farm income, recharge and a marginal value of a recharge unit based on the recharge modeling summarized earlier in the chapter. As shown for two farms in Table 5.1, allocations were distributed according to farm characteristics and landscape position; for example, Farm 1 in Table 5.1 was allocated 45 recharge units and Farm 2 was allocated 20 units when the recharge quantum was set at 50 percent of maximum recharge. Throughout the experimental sessions, each participant was randomly assigned to a single farm, and could select one of the five possible farm management options. Options characterized by higher income levels were associated with higher recharge levels for all farms. After entering the chosen management option, participant screens were updated with the option specific income, the marginal value of recharge units and the required recharge balance for the selected option. For all treatments, experimental subjects participated in sessions that involved ten independent periods. Two sessions were held for each experimental treatment. To create realistic incentives, subjects were paid according to the combined simulated farm and trading income they achieved in the experiments. The duration of experimental sessions was approximately two hours. Player payments and non-compliance penalties in the open and closed call clearance market adhere to induced value theory and were standardized across all treatments.
Table 5.1 Decision set, farm income, crop mix and recharge of two of 12 experimental farms

Farm    Landscape position            Decision  Crop type (%)a  Farm income (A$)  Recharge (Mls)
Farm 1  Ridge and upper slope         1         100 An          3,868             91
                                      2         100 Ps          3,280             73
                                      3         100 Pr          2,948             62
                                      4         100 NV          2,210             22
                                      5         100 FF          1,399              0
Farm 2  Mid slope and break of slope  1         50 An + 50 Ps   7,911             39
                                      2         50 An + 50 Pr   7,484             33
                                      3         50 An + 50 NV   7,183             31
                                      4         50 An + 50 FF   5,280             26
                                      5         50 Pr + 50 FF   3,923              7

Notes
a % refers to the proportion of 65 ha under the specified crop management. An = Annual pasture, Ps = Phalaris set stocking, Pr = Phalaris rotational grazing, NV = Native vegetation, FF = Farm forestry, Lu = Lucerne.
Player payments were calculated as a function of aggregate farm management and trading outcomes, reflecting player capacity to comply with recharge targets, adherence to farm management agreements and trading skill. In addition to a $10 attendance payment, specific farm (player) payments were calibrated by regressing (OLS) achieved farm income against a payment schedule of $1.50 per period for achieving the derived optimum farm income and $0.40 for static decisions (namely, a player selects the 50 percent recharge reduction income choice). Optimal incomes were calculated assuming players act as profit maximizers, responding optimally to available information. To ensure salience of player behavior and response to income variance in the simulated catchment, the player payments were therefore a scaled representation of the income decisions confronting farmers in the Bet Bet catchment.

The penalty for non-compliance in the clearance market treatments reduces farm income to a level where the recharge allocation can be satisfied and deducts the equivalent market value of the shortfall of recharge units necessary to satisfy the recharge target associated with the income level selected. The non-compliance penalty was explained to experimental participants thus:

To receive the income for the farm management option you have chosen you must reach the recharge target. The recharge target is the total recharge produced by the farm option minus your allocation. If you do not reach the target, your excess farm recharge will affect other farmers. Your income will be reduced to the management option where the recharge target can be met. You will not be able to trade or be compensated for any surplus units. For example: Say you have an allocation of 2 units, where option 3 produces a total of 10 recharge units and an income of $2 (with a target of 10 − 2 = 8) and option 2 has a total of 5 units, an income of $1 (and a target of 5 − 2 = 3). You choose option 3. You purchase 7 units in the market. You cannot reach the option 3 target, but can meet the option 2 target. You receive the option 2 income of $1 and have 4 surplus units. The cost you paid for purchasing the surplus units will be deducted from your farm income.
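The penalty rule is mechanical enough to state in code. The following is a minimal Python sketch of the rule as described in the instructions above; the function and option data are our own illustrative constructions, not the MWATER platform code.

```python
# Minimal sketch of the non-compliance penalty rule (illustrative only).

def settle_income(options, chosen, allocation, units_bought, unit_cost):
    """options: list of (total_recharge, income) pairs for one farm.

    A player who cannot cover the recharge target of the chosen option is
    reduced to the best option whose target is covered; the cost paid for
    the resulting surplus units is deducted from farm income.
    """
    total, income = options[chosen]
    target = total - allocation
    if units_bought >= target:
        return income  # target met: full income for the chosen option
    feasible = [(t - allocation, inc) for t, inc in options
                if units_bought >= t - allocation]
    fallback_target, fallback_income = max(feasible, key=lambda f: f[1])
    surplus = units_bought - fallback_target
    return fallback_income - surplus * unit_cost

# Worked example from the text: allocation of 2; option 3 = (10 units, $2),
# option 2 = (5 units, $1). Buying 7 units misses the option 3 target of 8,
# meets the option 2 target of 3, and leaves 4 uncompensated surplus units.
opts = [(2, 0.5), (5, 1.0), (10, 2.0)]  # the first entry here is a filler
print(settle_income(opts, chosen=2, allocation=2, units_bought=7,
                    unit_cost=0.10))    # -> 0.6 ($1 minus 4 x $0.10)
```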
Carefully designed instruction sets, salient for each treatment, were provided via individual internet access to a PowerPoint display. The instructions explain the farm characteristics, decision sets, rules, protocols, payments and non-compliance penalties specific to each experimental treatment. Supervising staff did not verbally present the instructions, to avoid personality or behavioral biases and to control for possible delivery nuances. Talking, unless formalized in the treatment, was forbidden except to clarify questions from individuals regarding the experimental setting or instructions. To control for variable learning and to ensure consistent understanding, participants were required to answer accurately a quiz comprising 10–12 questions specific to the experimental treatment. Successful completion of the quiz was a necessary prerequisite for participation in the experiments. The instruction sets are available from the authors upon request.

Using individual computer terminals, 12 experimental participants are provided with details of their farm's initial recharge allocation (Ra), farm income, the value of a recharge unit and the recharge levels for each decision variable or option. Participants only have access to their own farm information. In the cap-and-trade treatment, individual farms are differentially allocated a recharge entitlement according to farm characteristics, nominally set at 50 percent of the maximum farm recharge rate. The recharge balance (Rb) is calculated as the initial recharge allocation less the amount needed for the farm management option the participant selects. Rb can represent a surplus or deficit of recharge units depending on the farm allocation and management option selected. For example, from Table 5.1: Farm 1 has Ra = 45; for option 1, recharge = 91, so Rb = −46; for option 5, recharge = 0, so Rb = 45. Option 1 has a 46 recharge unit shortfall, requiring purchase in a recharge market; option 5 has a surplus of 45 units, facilitating a selling option in the market. The marginal value (MV) for each option is calculated as the marginal cost of recharge abatement for the option compared to the recharge allocation.

Subjects had the choice of acting in one of three ways in each decision making period. They could choose a farm management option that exactly satisfied their recharge quota, choose an option with recharge in excess of their quota and make up the deficit by buying credits, or choose an option that required fewer recharge credits than their quota and offer the excess credits for sale. Players voluntarily enter the market to meet recharge shortfalls and sell surplus recharge units. Participants could either buy or sell (subject to having surplus recharge units), but may only enter a single bid in each period. Market entrants enter bid quantities (based on Rb) and their price (based on the marginal value of recharge). Bidding information is restricted to individual bids only in the closed call, with public declaration of all bids in the open call. Players could access bidding history at any stage during the experiment. Players were prevented from offering surplus recharge units for sale in excess of their calculated Rb. Bids were accepted over a 90 second period, and the market clearing price calculated. The market price was announced (in the event that there were no matching buy and sell offers, it was announced that no trades occurred). In the closed call, successful bids were only revealed to the successful individuals. Participants' screens were then updated to reveal the outcome of their bids, the market price and their total income from the period.

Players were restricted to selling surplus units in the tender treatments. Market entrants entered a sealed bid quantity based on Rb and their market exchange price (based on the marginal value of recharge). Players were prevented from offering surplus recharge units for sale in excess of their calculated Rb. Anonymity of bidders was maintained throughout the session and players could access their individual bidding history at any stage of the experiment.
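The recharge balance bookkeeping described above reduces to a single subtraction; a short sketch (ours, with hypothetical names) reproduces the Farm 1 example:

```python
# Recharge balance: Rb = Ra (initial allocation) - recharge of chosen option.
def recharge_balance(Ra, option_recharge):
    return Ra - option_recharge

# Farm 1 from Table 5.1, with Ra = 45:
assert recharge_balance(45, 91) == -46  # option 1: 46-unit shortfall to buy
assert recharge_balance(45, 0) == 45    # option 5: 45 surplus units to sell
```

A negative Rb puts the player on the buying side of the market; a positive Rb lets the player offer units for sale, never more than Rb.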
For all experiments, the control treatment was the behavioral response that would be predicted if players acted as profit maximizers, responding optimally to the available information provided by the given market structure treatment instructions. In all experiments, behavioral responses to the treatments were measured as the aggregate recharge volume, the recharge volumes traded, the market price and aggregate farm income.
Design of uniform and discriminant price tender experiment to test initial recharge credit allocation options

In the uniform price tender experiments, players were asked to submit bids to the authority to undertake land use change that reduces recharge. The bids were ranked according to price per unit of recharge reduction and accepted until a targeted level of reduction was achieved (nominally assigned at 367 units, or 50 percent of maximum recharge) or bid offers were exhausted. If participants are assumed to maximize profit, understand their farm characteristics perfectly and respond optimally to available price information, the predicted reduction of 367 units will be realized at a tender price of $56 per unit. In a uniform price auction the purchaser offers a single uniform purchase price that is paid to all successful sellers, regardless of their initial bid. That is, all sellers receive a market clearing price that both exceeds their original offer and is determined exogenously of individual bids. Individual payments to sellers are thus inclusive of a trading surplus. As a consequence, sellers have a greater incentive to reveal their true costs of recharge reduction, as increasing their bid reduces the probability of bid rejection without a commensurate raising of the market price. The purchase price can be determined as either equal to the lowest rejected bid (second price) or the highest accepted bid (first price). Alternatively, in a discriminant price auction (first price) the purchaser pays a range of prices that match the bids offered by successful sellers. Sellers therefore face uncertain bid ranking and acceptance, although price is assured. Trading surplus is not determined exogenously and must be included in the bid offer. The incentive for sellers is to increase the bid value to include a component of variable trading surplus. Dependent on the marginal cost of reduction, a seller faces a tension and costly learning effort in determining the balance between a higher probability of bid acceptance versus a higher trading surplus. Traders strategically seeking an optimal and maximum price may tend to explore the price opportunities in the market for a longer period compared with strategies in a uniform price auction. As a corollary, the recharge and price values may be more volatile in the discriminant price auction, potentially resulting in a less reliable and more costly recharge outcome. When payment is determined by a seller's own bid, the discriminant tender format is not compatible with true cost revelation (Milgrom 2004) but may be more cost effective for the purchasing agency (Cason and Gangadharan 2005). We experimentally compared a first price discriminant tender with a first price uniform tender. Experiments designed to compare the results of tendering with context specific upper Bet Bet supply, demand and market conditions provided the opportunity to test for differences between participation rates, levels of recharge reduction and purchase costs for the uniform and discriminant price tender systems.
Our motivation for deploying the tender process in the recharge trading trial departs from the objectives of previous experimental analyses of tender schemes for conservation provision. We sought to establish the tender mechanism associated with a recharge reduction outcome that best approximates a predetermined level of recharge reduction (namely, simulating a hypothetical cap of 367 ML). In all the experimental tender treatments the bidding strategy of the purchasing agency is quantity constrained, in contrast to a budget constrained strategy. As participants eventually have the opportunity to exchange tradable recharge units, implicitly they are fully informed of the recharge outcomes associated with their individual land management actions. We therefore did not apply a strategy of asymmetric information to mitigate incentive incompatibilities. Landscape context in the Bet Bet catchment matters in addressing dryland salinity. We therefore sought the approach that best maximizes participation rates, particularly among those farms with low marginal costs of recharge reduction.
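The two tender formats differ only in their payment rules, which the following hedged sketch makes explicit. It assumes single sealed offers of (price, units) and treats both formats as first price variants, as tested here; it illustrates the mechanism and is not the experimental software.

```python
# Sketch of quantity-constrained tender clearing under the two payment rules.

def clear_tender(offers, target_units, rule="uniform"):
    """Accept offers in ascending price order until target_units of recharge
    reduction are secured (or offers are exhausted); return the accepted
    offers and the agency's total payment."""
    accepted, secured = [], 0
    for price, units in sorted(offers):
        if secured >= target_units:
            break
        accepted.append((price, units))
        secured += units
    if rule == "uniform":   # every seller paid the highest accepted bid
        payment = max(p for p, _ in accepted) * secured
    else:                   # "discriminant": each seller paid own bid
        payment = sum(p * u for p, u in accepted)
    return accepted, payment

offers = [(40, 100), (56, 150), (70, 120), (90, 80)]   # ($/unit, units)
print(clear_tender(offers, 367, "uniform"))       # all paid $70 per unit
print(clear_tender(offers, 367, "discriminant"))  # each paid own bid
```

Under the uniform rule, raising an offer cannot raise a seller's own payment, only the chance of rejection; under the discriminant rule the bid itself sets the payment, which is the source of the incentive incompatibility discussed above.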
Design of open and closed call market exchange credit trade treatments to test impacts of information processing requirements

The recharge credit trade market can be administered as either a closed call or an open call exchange. In a closed call market potential buyers submit sealed bids to buy and potential sellers submit sealed offers to sell. The market is called and trades are executed by a clearing house, in this case the experimental recharge management authority. The authority computes a single equilibrium price, at which all trade takes place, based on the aggregate supply of and demand for credits. When the price has been computed, the authority announces the market price and notifies successful traders of their individual volumes traded only. An important characteristic of the closed call auction is the limited disclosure of bidding information. The only information disclosed is the market price and volumes traded; no information on specific transactions is reported. There is no public disclosure of individual bidding information or the individual volumes traded. In contrast, an open call market publicly declares all individual bidding and volume offers prior to the market clearing. The purpose of the experiment was to test prior hypotheses about the initial cost of complex information processing required of untutored program participants in closed and open call market designs and the effect on the observed gains from trade. The results of the experimental treatments were compared with the theoretical predictions of a no-trade regulatory control.
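A closed call clearing house of the kind described can also be sketched compactly. The price rule below (the midpoint of the marginal matched bid and offer) is one simple choice among several; the code is illustrative, not the MWATER algorithm.

```python
# Illustrative sealed-bid call market clearing. buy_bids and sell_offers
# are lists of (price, quantity); returns one clearing price and volume.

def call_market(buy_bids, sell_offers):
    buys = sorted(buy_bids, key=lambda b: -b[0])     # demand, high to low
    sells = sorted(sell_offers, key=lambda s: s[0])  # supply, low to high
    volume, price = 0, None
    while buys and sells and buys[0][0] >= sells[0][0]:
        (bp, bq), (sp, sq) = buys[0], sells[0]
        q = min(bq, sq)
        volume += q
        price = (bp + sp) / 2.0  # one simple uniform price rule
        buys[0], sells[0] = (bp, bq - q), (sp, sq - q)
        if buys[0][1] == 0:
            buys.pop(0)
        if sells[0][1] == 0:
            sells.pop(0)
    return price, volume  # (None, 0) if no bids cross

print(call_market([(70, 20), (60, 30)], [(50, 25), (65, 40)]))  # (55.0, 25)
```

In the closed call only this price and a trader's own outcome would be disclosed; an open call would publish the full bid lists before clearing.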
Design of tendering experiments with social payments

The social payment experiments were variants of the uniform price auction. Payment rates per unit of recharge were a function of aggregate recharge reduction.
The standard uniform auction price was paid to successful bidders if total recharge reduction was less than 50 percent of maximum recharge. If aggregate recharge reduction exceeded 50 percent of maximum recharge, players received a two part payment: in addition to the standard uniform price payment, a social or group payment was equally distributed amongst all participants, regardless of recharge reduction efforts. The experiment involved two treatments of the social payment mechanism. One excluded communication among subjects. The "communication" treatment consisted of the same social payment mechanism plus a formal forum for group discussion, designed to test the effect of reinforcing group crafted social coalitions on improving recharge reduction, increasing market participation and improving individual contract adherence. The treatment allowed for an initial face to face, non-anonymous communication and discussion forum of 10 minutes whereby players could craft a social compact for recharge management. Subsequent face to face discussion periods were of 3 minutes' duration (a similar process to that reported by Tisdell et al. 2004). Individual adherence to the group crafted agreement was voluntary. The intended aggregate recharge reduction was disclosed at the beginning of the period; actual aggregate reduction was disclosed at the end of the period. In this treatment players were informed of the level of achieved aggregate reduction at the end of each period for all treatments where the reduction target is 70 percent of maximum recharge. In addition, players were publicly informed of the agreed and intended recharge reduction and the magnitude of the social payment at the end of the discussion forum. The level of recharge reduction achieved and the level of the actual social payment were provided at the end of each period. Players cannot reveal their farm characteristics, intended or historical market strategies or the value of their recharge units. Consensus is achieved by majority vote. Communication between players was not permitted between the discussion forums. Supervising staff were able to facilitate the discussion forum by answering technical questions and calculating aggregate recharge reduction and social payment estimates only. They did not engage in any strategic discourse with the players. Participation in the treatment was voluntary and there is no penalty for non-compliance or non-adherence to the group consensus. In this treatment, in addition to payment for individual performance, a social or group payment component was paid equally to all players. The group payment for aggregate reduction exceeding the 50 percent reduction target of 367 recharge units is calculated as:

(Ro − 367) / (547 − 367) × $2.30

where Ro equals the aggregate recharge reduction by players 1–12. The 70 percent reduction level corresponds to 547 units. Based on the marginal cost of recharge reduction of the 12 experimental farms, the $2.30 group payment is equivalent to the ratio of the additional surplus of a 70 percent reduction compared to a 50 percent recharge reduction payment of $1.50, minus a transaction cost of approximately $0.70.
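The group payment formula transcribes directly into code; the sketch below is our own rendering and assumes, as the text implies, that the payment is zero at or below the 50 percent target.

```python
# Per-player group payment (A$) from the formula above: zero up to the 50%
# target of 367 units, scaling linearly to $2.30 at the 70% level (547).
# The chapter does not say whether the payment is capped above 547 units.

def group_payment(aggregate_reduction):
    if aggregate_reduction <= 367:
        return 0.0
    return (aggregate_reduction - 367) / (547 - 367) * 2.30

assert group_payment(367) == 0.0
assert abs(group_payment(547) - 2.30) < 1e-9
```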
Experimental results

Uniform and discriminant price tender treatments

Results for aggregate recharge observed in the uniform and discriminant price auction treatments are compared in Figure 5.2. A summary of the ANOVA and Student's t-test statistical analysis is presented in Table 5.2. For both ANOVA (F) and t-tests, a * entry in the table indicates significantly different values at α = 0.05. Homogeneity of variance was tested by Levene's statistic and found to be significantly different (p < 0.05) for all treatments. Therefore Dunnett's post hoc test was used for pair-wise comparison, described by subscript letters across rows. Differences in subscripted letters across rows indicate that values across treatments are significantly different. ANOVA assumes independence of responses between periods.
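For readers who want to reproduce this style of analysis, the sketch below runs the basic tests with scipy on synthetic stand-in data; it is not the authors' analysis script, and the draws merely mimic the reported treatment means and standard deviations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-period aggregate recharge (MLs): two sessions
# of ten periods per treatment. Real session data would replace these draws.
discriminant = rng.normal(441, 83, 20)
uniform = rng.normal(411, 61, 20)

print(stats.levene(discriminant, uniform))     # homogeneity of variance
print(stats.f_oneway(discriminant, uniform))   # treatment effect (ANOVA)
print(stats.ttest_1samp(discriminant, 367.0))  # against the 367 ML control
```

The Dunnett post hoc comparisons and the AR-1 heterogeneous mixed model reported here go beyond this sketch; a full replication would require something like statsmodels' mixed linear models.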
Figure 5.2 Observed aggregate recharge in the discriminant and uniform price tender treatments.
Table 5.2 ANOVA of discriminant and uniform price tender auction treatments

Periods 1–10              Statistics1         Control2   Discriminant2    Uniform2
Aggregate recharge (MLs)  F(2,47) = 4.402*    367        441 (82.89)3     411 (60.54)
Qty traded (MLs)          F(2,47) = 7.573*    367        245 (116.20)     294 (52.83)
Player income ($)         F(2,47) = 19.897*   18.00      20.49 (3.26)     15.40 (2.36)
Trade income ($)
  Discriminant            t(19) = 3.992*      13,144     21,210 (9,036)
  Uniform                 t(19) = −7.756*     20,552                      15,398 (2,971)

Periods 1–5
Aggregate recharge (MLs)  F(2,22) = 6.729*    367        493 (74.61)      410 (76.77)
Qty traded (MLs)          F(2,22) = 7.954*    367        203 (103.22)     280 (60.05)
Player income ($)         F(2,22) = 7.188*    18.00      18.63 (2.42)     14.80 (2.78)
Trade income ($)
  Discriminant            t(9) = 1.327        13,144     16,084 (7,007)
  Uniform                 t(9) = −5.636*      20,552                      14,845 (3,202)

Notes
1 Statistical tests: * indicates significant difference across treatments at α = 0.05.
2 Treatment means (across rows) with the same letter were not statistically different at α = 0.05.
3 Mean (standard deviation).
To test for the degree of inter-period independence we ran a linear mixed model imputing periods (1–10) as a covariate to estimate random effects (AR-1 heterogeneous) using a restricted maximum likelihood estimator. The primary experimental metric is the level of aggregate recharge (or recharge reduced) observed for each treatment. The estimated covariance parameter for periods 1–10 is 23.25 (s.e. = 46.82) and the Wald Z coefficient is 0.497, p = 0.620, indicating that there are no significant unobserved random effects between periods. The control values for all treatments represent those predicted according to expected utility theory: the expected outcome where participants are assumed to behave as profit maximizers who completely understand and act upon optimal trade and management strategies and payoffs instantaneously. A total of 367 units at a value of $56 (in the uniform price) is predicted to be realized in the control treatment. Table 5.2 summarizes the statistical analysis of the bidding responses to the uniform and discriminant price treatments for periods 1–5. The observed bidding behavior of the uniform and discriminant price treatments is illustrated in Figures 5.3 and 5.4 respectively. Results indicate that:
1 The number of aggregate recharge units observed in the discriminant price tender is significantly more than the number predicted for the control treatment for periods 1–5. In contrast, the aggregate recharge resulting from the behavior of players trading in the uniform price tender is not significantly different from the control prediction (a proxy for a regulatory cap on recharge).
Figure 5.3 Observed and predicted bids for the uniform tender treatment.

Figure 5.4 Observed and predicted bids for the discriminant tender treatment.
2 Although not significantly different, the mean value of recharge units traded in the uniform price tender is greater than the observed mean value traded in the discriminant price tender treatment. The variance of the discriminant price tender (s.d. = 116.203) indicates the tender format is characterized by substantial bidding volatility for the ten period experimental sessions (illustrated in Figure 5.4).

3 The mean aggregate player income for the ten period session is significantly greater in the discriminant price tender than the value predicted for the control and that observed in the uniform price tender treatment. Player income increased from a mean value of $18.62 in periods 1–5 to $22.45 for periods 6–10, despite fewer recharge units being traded in the discriminant price tender than in the uniform price tender treatment. The experimental design deploys a simulated purchasing strategy of the management agency that prioritizes achieving recharge reduction targets as opposed to being budget constrained. Higher than predicted bids for recharge reduction actions were therefore purchased if the 50 percent recharge reduction stipulated in the target had not been fulfilled by purchasing sufficient units offered by lower priced bidders. Successful bids in the uniform price tender ranged from $23–56/unit for periods 1–10 (see Figure 5.3) at a mean cost of $52.37. Successful bids in the discriminant tender ranged from $23–120/unit (see Figure 5.4), increasing from a mean cost of $79.23 in periods 1–5 to $86.57 for all ten periods. The experimental purchasing strategy is sensitive to the suboptimal decisions of low marginal cost players, providing the opportunity for strategic, successful bidding behavior by high marginal cost players.
The results are in accord with the experimental results reported by Cason and Gangadharan (2005), who compared uniform and discriminant price auctions for land management. Empirical evidence provided by these authors found that offers in the uniform price auction experiment they conducted were within 2 percent of individual abatement costs, while offers in the discriminative price auction were at least 8 percent greater than cost. In contrast to the present experimental results, Cason and Gangadharan (2005) found the discriminant tender more cost effective than a second price uniform tender. The analysis of the experimental results indicates that research hypothesis 1 should be accepted: that is, there is a significant difference, measured as the amount of aggregate recharge reduction, between the discriminant and uniform tender auctions in the context of these experimental treatments. The behavior of players in the uniform tender results in significantly increased recharge reduction and less volatile market bidding, and recharge is reduced at a lower unit cost. The uniform price is therefore selected as the appropriate tender mechanism to establish payments for farmers to enter into recharge management obligations.
Market exchange in open and closed call treatments

The results for the gains from trade observed in the open and closed call cap-and-trade experiments are presented in Figure 5.5. A summary of the ANOVA statistical analysis is presented in Table 5.3. Significant differences (p < 0.05) are estimated using Dunnett's T3 post hoc comparison (Levene's; p < 0.05) and described by different superscript letters across treatment rows. The independence of periods was tested at two factor levels (dependent variable: net farm income). The first factor used the ten periods as levels: periods did not significantly affect observed net income (F(9,49) = 2.080, p = 0.055). In a general linear model, the second factor tested the random effect of two period levels: level 1 = periods 1–5, level 2 = periods 6–10. The two level factor did not significantly affect observed net income (F(1,49) = 5.111, p = 0.143; interaction effect of treatment × period: F(2,44) = 1.759, p = 0.184). There was no correction of observed treatment means for the random effect of period. The control values represent those predicted when participants are assumed to behave as profit maximizers, who completely understand and act upon optimal trade and management strategies and payoffs instantaneously. The control treatment predicts a market price of $61 and 144 recharge units traded.
Figure 5.5 Aggregate farm income, including gains from trade, observed in the open and closed call market treatments.
Results indicate that there is no significant difference in the observed mean market price between treatments. The mean numbers of recharge units traded observed in both the open and closed call treatments are significantly fewer (p < 0.05) than the control, but are not significantly different from each other. There is no significant difference between treatments in the value of aggregate recharge in the cap-and-trade treatments.

Table 5.3 ANOVA of closed and open call market treatments

Periods 1–10              Statistics1         Control2   Open2            Closed2
Aggregate recharge (MLs)  F(2,47) = 1.257     367        385 (65.40)3     355 (68.88)
Qty traded (MLs)          F(2,47) = 27.054*   144        83 (20.31)       93 (28.18)
Market price ($)          F(2,47) = 0.284     61         60 (42.83)       67 (21.60)
Gains from trade ($)      F(2,47) = 6.729*    56,978     56,150 (4,681)   61,762 (3,950)

Periods 1–5
Aggregate recharge (MLs)  F(2,22) = 0.175     367        353 (83.51)      371 (71.05)
Qty traded (MLs)          F(2,22) = 27.668*   144        79 (19.54)       77 (19.53)
Market price ($)          F(2,22) = 0.142     61         61 (38.182)      68 (26.167)
Gains from trade ($)      F(2,22) = 6.4458*   56,978     53,825 (4,505)   59,584 (3,338)

Notes
1 ANOVA tests: * indicates significant difference across treatments at α = 0.05.
2 Treatment means (across rows) with the same letter were not statistically different at α = 0.05.
3 Mean (standard deviation).
Results indicate that:

1 In contrast to the open call, the net income (including gains from trade) observed in the closed call market is significantly greater (p < 0.05) than the control value of the regulated, no trade institution. The market prices observed in the closed call (s.d. = 21.60) were less variable than in the open call market.

2 There is substantial variance in the market price (s.d. = 42.825) observed in the open call market trading for periods 1–10, increasing in periods 6–10 (s.d. = 49.90) compared to periods 1–5 (s.d. = 38.182). The result indicates that the volumes traded and the market prices in an open call market are more volatile and unpredictable. The continuing variability of observed volumes traded suggests that participants were continuing to explore for an optimal strategy up to the final periods of the experiments. Despite increased experience and practice in making farm and trading decisions, the volume traded and market price did not converge to the predicted equilibrium values. In contrast, the variance observed in the closed call decreased from periods 1–5 (s.d. = 26.175) to periods 6–10 (s.d. = 17.165).
The analysis of the experimental results indicates that research hypothesis 2 should be accepted: that is, there is a significant difference in the gains from trade of the closed call market treatment compared to the open call treatment. The closed call market is therefore selected as the appropriate market protocol, based on the observed significant gains from trade, stable market prices and increased convergence to the predicted equilibrium.
Social payment and communication treatment

This experiment involved three treatments: a uniform price tender, a uniform price tender with social payment and a uniform price tender with social payment and communication. The uniform price treatment involves player payments made according to individual land management and trading performance only; there is no social payment related to group aggregate reduction. The social payment treatment involves payments based on individual performance plus an additional social payment (equally distributed to all players regardless of contribution) based on the aggregate recharge reduction achieved by the group. The communication treatment involves individual and social payments; in addition, it includes a discussion forum to craft a voluntary social compact for all players, prior to the selection of a farm management option. Results for aggregate recharge observed in the uniform price, social payment and communication treatments are compared in Figure 5.6. A summary of the ANOVA statistical analysis is presented in Table 5.4. Significant differences (p < 0.05) are estimated using Dunnett's T3 post hoc comparison (Levene's; p < 0.05) and described by different superscript letters across treatment rows.
Figure 5.6 Recharge units traded in the 70 percent reduction, uniform price, social payment and communication treatments.
In a general linear model, the unobserved random effects of periods were tested at two factor levels (dependent variable: aggregate recharge (MLs)). The first factor used the ten periods as levels: periods did significantly affect observed total recharge (F(9,60) = 2.794, p = 0.017). The second factor tested the random effect of two period levels: level 1 = periods 1–5, level 2 = periods 6–10.

Table 5.4 ANOVA of 70 percent reduction, uniform price, social payment and communication tender treatments

Periods 1–10              Statistics1        Control2  Uniform2      Uniform + social2  Uniform + communication2
Aggregate recharge (MLs)  F(2,47) = 4.912*   206       294 (82.49)3  277 (52.61)        268 (54.36)
Qty traded (MLs)          F(2,47) = 4.231*   528       437 (80.99)   453 (74.46)        440 (73.66)
Player income ($)         F(2,47) = 1.629    35        28.66 (2.66)  30.96 (10.23)      32.67 (10.62)

Notes
1 ANOVA tests: * indicates significant difference across treatments at α = 0.05.
2 Treatment means (across rows) with the same letter were not statistically different at α = 0.05.
3 Mean (standard deviation).
The two level factor did not significantly affect observed aggregate recharge (F(1,69) = 6.543, p = 0.075; interaction effect of treatment × period: F(7,62) = 1.292, p = 0.285). Due to random effects across the ten periods, the standard error of the treatment means was corrected to 10.268 and that of the control to 14.521. The control represents a 70 percent reduction target of 528 units at a recharge value of $109. Results indicate that:
1 All the treatments are described by significantly fewer (p < 0.05) quantities of recharge units exchanged when compared to the 528 units predicted in the control. There is no significant difference between the three experimental treatments.

2 There is no significant difference in the mean aggregate recharge revealed in trading in the three treatments, although all are significantly greater than the 206 MLs predicted in the control.

3 Mean player payments observed in the uniform price treatment ($28.66) were significantly less than the control prediction of $35. Player payments in the social payment and communication treatments were not significantly different from the control or the uniform treatment, but were characterized by high levels of variance (s.d. = 10.23 and 10.62 respectively).
The analysis of the experimental results indicates that research hypothesis 3 should be rejected: that is, in contrast to game theoretic predictions, the provision of a form of social payment, based on cooperative action to achieve aggregate recharge reduction, does not reduce farm income nor impede the reliability of recharge outcomes compared to individual payments only. In the n player, finite period, public good game, game theory predicts little or no public good contribution, significant free riding and rapid and substantial decay in observed public good contributions (Ledyard 1995). The experimental results are consistent with Ostrom et al. (1992), Poe et al. (2004) and Tisdell et al. (2004). Incorporating an additional social payment based on collective action into the on-ground trial implementation design does not significantly reduce recharge reduction and may encourage increased trial participation, as it is aligned with the high levels of social cohesion observed in the catchment.
Summary and conclusions

This chapter reports on the use of experimental economics to guide and inform the design and pre-implementation testing of an on-ground trial of a recharge cap-and-trade scheme implemented in 2005. There are currently no obligations for farmers to manage recharge below a maximum threshold. Thus a first step in trialing the trade in exchangeable recharge credits is to establish limits or a cap on recharge. A mandatory approach is not institutionally possible, at least in the short run. A feasible approach to developing effective obligations for the trial, in addition to accurately revealing individual costs of abatement, is to invite and remunerate landholders to enter into a contracted obligation through a competitive tender process.
Experimental economics was used to compare the performance of alternative tender processes: a uniform and a discriminant price tender. In both treatments the purchasing strategy of the recharge authority prioritizes achieving recharge targets rather than being constrained by budget or price limits. Experimental results indicate a significantly (p < 0.05) greater quantity of aggregate recharge for the discriminant price tender compared to the uniform price tender and control treatments. The analysis supports the research hypothesis that the increased incentive incompatibility of the discriminant tender results in lower levels of recharge amelioration, increased rent seeking and impeded convergence to a predicted equilibrium (expressed as more volatile bidding behavior over the experimental periods). An implication of these results for the Bet Bet catchment recharge credit policy design is that in a context where a new tendering process is being introduced, an incentive compatible uniform price tender may result in higher participation and increased cost effectiveness. We recognize that the results may well be relevant only to the specific experimental context.

Sociological survey work conducted for this project indicates levels of social cohesion within the community are very high, with over 80 percent of the survey respondents indicating involvement in the local Landcare group (Thomson 2004). Previous research (e.g. Ostrom 1998; Gintis 2000) reports the potential for significant divergence from individualistic profit maximizing behavior in small, cohesive communities. These combined findings suggested that a policy approach to reward participation with some form of collective award for reaching an aggregate recharge reduction level might increase trial participation. To test the potential for social payments to enhance willingness to enter into contracts to reduce recharge, variants of the uniform price auction treatment were run inclusive of a social payment component. The experiments involved two social payment treatments: one with communication and one without. The control treatment was a uniform auction with the same total player payoff but no social payment. For both the control and social payment treatments, rates of convergence toward theoretical predictions were high. Generally, outcomes were not statistically different across treatments, with high levels of voluntary social contract adherence and low levels of observed free riding. Whilst not predicted by game theory (Ledyard 1995), which anticipates high levels of free riding and decay of initial public contribution, the results are in accord with the empirical work summarized by Ostrom (1998) and Gintis (2000) and the level of public good contribution reported in, inter alia, Smith (1991). In contrast to Poe et al. (2004) and Tisdell et al. (2004), there was no significant effect on the level of recharge reduction observed for the communication treatment. We suspect that these particular results were likely, at least to some extent, an artifact of the particular experimental context. Because participants were able to extract nearly all possible gains from trade even without social payment and communication, there was relatively little opportunity to obtain additional gains when the social payment and communication treatments were introduced.
Possibly, if experiments had been constructed such that the individual gains realized with the baseline treatment were smaller, increased gains from social payment and communication may have been observed. To understand the implications of credit trade market structures, behavioral responses to an experimental open call, a closed call and a theoretical prediction (regulation in the absence of trade) were compared. Conceptually, an open call market should lead to more rapid convergence to a predicted equilibrium, as participants receive additional market information (the bidding information is publicly disclosed for all players, whereas players in a closed call are only informed about the success of their own bids). In contrast, Binmore (1999) and Plott (1996) predict delays in convergence to a theoretical equilibrium when the decision environment requires the processing of novel, context rich and complex information. The results indicate that the gains from trade observed for the closed call treatment are significantly greater than for the control and open call treatments. Additionally, although not significant, the volatility of observed market prices and quantities traded (expressed as variance) is substantially higher in the open call market. Ultimately, the objective of the experimental component of the MBI trial was to provide an empirical basis for selecting appropriate instruments to establish reliable recharge reduction obligations and cost effective recharge trading opportunities in the on-ground trial implementation phase. The experimental economics reported on here informed recommendations for contractual obligations in three key areas:
1 Multiple year agreements with landholders that involve providing payment in exchange for land management obligations to provide recharge reduction. Based on experimental results, a competitive uniform price tender was selected to elicit and rank the marginal abatement costs of individual participants and establish initial land management obligations.

2 A group performance component of payment that involves withholding available funds unless landholders cumulatively achieve a designated recharge reduction level (or threshold). This feature addresses thin market challenges by harnessing the power of community cohesion to bring in reluctant or undecided participants with high recharge reduction potential properties.

3 The intent of the trial is to allow credit/debit positions to be cleared through trade. Market exchange is facilitated using the closed call structure. Based on the contextualized experimental findings of significantly increased gains from trade, increased market predictability and reduced market volatility, a closed call auction format has been selected in preference to an open call market structure.
Acknowledgments

This research was funded by the Commonwealth National Action Plan for Salinity and Water Quality Market-based Instruments Pilot Program. The National Action Plan for Salinity and Water Quality is a joint initiative between the State and Commonwealth governments.
The Victorian DPI has provided information input into this report. Two anonymous referees provided insightful and helpful comments. All errors remain the responsibility of the authors.
Notes

1 Thin markets and the anticipated low market participation rates in the geographically constrained catchment may be associated with price volatility and distortion (Stavins 1995, 2000; Kampas and White 2003), spatial concentration (Tietenberg 1998), a reduced probability of satisfying market needs associated with increased transaction costs (Stavins 1995) and, as a corollary, unreliable recharge outcomes (Dinar and Howitt 1997).

2 A sealed bid in this case attempts to reduce the possibility of collusion to seek excessive profits when a tendering agency is confronted with a small number of land managers in a spatially constrained catchment.

3 When the quality of groundwater is not readily observable, farmers possess information about their opportunity costs of recharge reduction actions, but not the accruing benefits (as these primarily occur off farm), which if withheld can adversely affect the value of the transaction with the agency. The agency possesses information (biophysical and hydrological) about the marginal benefits of specific land management actions, in this case at the farm scale, but not the actual costs of individual reduction actions. In the tender schemes deployed for conservation provision, the discriminant price tender reveals farmer costs of reduction plus a variable level of trade surplus. To minimize the level of farmers' trade surplus, the agency has tactically withheld information about the benefits of those actions (Stoneham et al. 2003).

4 More detail on the experimental demand and supply schedules is available on request from the authors.
References

Baumol, W.J. and Oates, W.E. (1988) The Theory of Environmental Policy. 2nd edn. New Jersey: Prentice Hall.
Binmore, K. (1999) Why experiment in economics. Economic Journal: 109, pp. F16–F24.
Braga, J. and Starmer, C. (2005) Preference anomalies, preference elicitation and the discovered preference hypothesis. Environmental and Resource Economics: 32(1), pp. 55–89.
Bryan, B., Connor, J., Gatti, S., Garrod, M. and Scriver, L. (2004) Catchment care – developing a tendering process for biodiversity and water quality gains. Milestone 13 Report for the Onkaparinga Catchment Water Management Board. CSIRO Land and Water (Policy and Economic Research Unit), Adelaide.
Camerer, C. and Lowenstein, G. (2004) Behavioral economics: past, present and future. In (eds.) Camerer, C., Lowenstein, G. and Rabin, M. Advances in Behavioral Economics. New Jersey: Russell Sage Foundation and Princeton University Press, pp. 3–51.
Cardenas, J.C. (2000) How do groups solve local commons dilemmas? Lessons from experimental economics in the field. Environment, Development and Sustainability: 2(3–4), pp. 305–322.
Cason, T. and Gangadharan, L. (2004) Auction design for voluntary conservation programs. American Journal of Agricultural Economics: 86(5), pp. 12–17.
Cason, T. and Gangadharan, L. (2005) A laboratory comparison of uniform and discriminant price auctions for reducing non-point source pollution. Land Economics: 81(1), pp. 55–70.
Cason, T., Gangadharan, L. and Duke, C. (2003) A laboratory study of auctions for reducing non-point source pollution. Journal of Economic and Environmental Management: 46(2), pp. 446–471.
Clifton, C. (2004) Water Salt Balance Model for the Bet Bet Catchment. Bendigo, Victoria: SKM.
Common, M. (1995) Sustainability and Policy: Limits to Economics. Cambridge: Cambridge University Press.
Connor, J., Ward, J., Thomson, D. and Clifton, C. (2004) Design and implementation of a land holder agreement for recharge credit trade in the Upper Bet Bet Creek Catchment, Victoria. CSIRO Land and Water Report, Folio no. S/04/1845.
Dales, J.H. (1968) Pollution, Property and Prices. Toronto, Canada: University of Toronto Press.
Dinar, A. and Howitt, R.E. (1997) Mechanisms for allocation of environmental control cost: empirical tests of acceptability and stability. Journal of Environmental Management: 49(2), pp. 183–203.
Dwyer, J., Eaton, R., Farmer, A., Baldock, D., Withers, P. and Silcock, P. (2002) Policy mechanisms for the control of diffuse agricultural pollution with particular reference to grant aid. English Nature Research Reports, number 455.
Fang, F. and Easter, K.W. (2003) Pollution trading to offset new pollutant loadings – a case study in the Minnesota River basin. American Agricultural Economics Association Annual Meeting, Montreal, Canada, 27–30 July 2003.
Gintis, H. (2000) Beyond Homo Economicus: evidence from experimental economics. Ecological Economics: 35(2), pp. 311–322.
Guala, F. (1999) The problem of external validity (or "parallelism") in experimental economics. Social Science Information: 38(4), pp. 555–573.
Hahn, R. (1989) Economic prescriptions for environmental problems: how the patient followed the doctor's orders. Journal of Economic Perspectives: 3(3), pp. 95–114.
Holt, C. (1980) Competitive bidding for contracts under alternative auction procedures. Journal of Political Economy: 88(3), pp. 433–455.
Kagel, J. (1995) Auctions: a survey of experimental research. In (eds.) Kagel, J. and Roth, A.E. The Handbook of Experimental Economics. New Jersey: Princeton University Press, pp. 510–585.
Kagel, J. and Roth, A.E. (1995) The Handbook of Experimental Economics. New Jersey: Princeton University Press, pp. 3–109.
Kahneman, D. and Tversky, A. (1979) Prospect theory: an analysis of decision under risk. Econometrica: 47(2), pp. 263–291.
Kampas, A. and White, B. (2003) Selecting permit allocation rules for agricultural pollution control: a bargaining solution. Ecological Economics: 47(2–3), pp. 135–147.
Keohane, N., Revesz, R. and Stavins, R. (1998) The choice of regulatory instruments in environmental policy. Harvard Environmental Law Review: 22(2), pp. 313–367.
Krause, K., Chermak, J.M. and Brookshire, D.S. (2003) The demand for water: consumer response to scarcity. Journal of Regulatory Economics: 23(2), pp. 167–191.
Latacz-Lohman, U. and Van der Hamsvoort, C. (1997) Auctioning conservation contracts: a theoretical analysis and application. American Journal of Agricultural Economics: 79(2), pp. 407–418.
Ledyard, J.O. (1995) Public goods: a survey of experimental research. In (eds.) Kagel, J. and Roth, A.E. The Handbook of Experimental Economics. New Jersey: Princeton University Press, pp. 111–194.
Loomes, G. (1999) Some lessons from past experiments and some challenges for the future. Economic Journal: 109(February), pp. 35–45.
Lowenstein, G. (1999) Experimental economics from the vantage point of behavioral economics. Economic Journal: 109(February), pp. 25–34.
Marshall, G.R. (2004) Farmers cooperating in the commons? A study of collective action in salinity management. Ecological Economics: 51(3–4), pp. 271–286.
MBI Working Group on Market-based Instruments (2002) Investigating new approaches: a review of natural resource management pilots and programs in Australia that use market-based instruments. National Action Plan for Salinity and Water Quality, Canberra, Australia.
Mesiti, L. and Vanclay, F. (1996) Farming styles amongst grape growers in the Sunraysia district. In (eds.) Lawrence, G., Lyons, K. and Montaz, S. Social Change in Rural Australia. Rockhampton: Central Queensland University.
Milgrom, P. (2004) Putting Auction Theory to Work. Cambridge: Cambridge University Press, pp. 64–97.
Montgomery, W.E. (1972) Markets in licenses and efficient pollution control programs. Journal of Economic Theory: 5(3), pp. 395–418.
Myerson, R. (1981) Optimal auction design. Mathematics of Operations Research: 6(1), pp. 58–73.
Myerson, R.B. (1991) Game Theory: Analysis of Conflict. Cambridge, MA: Harvard University Press, p. 275.
Ostrom, E. (1998) A behavioral approach to the rational choice theory of collective action. American Political Science Review: 92(1), pp. 1–22.
Ostrom, E., Walker, J. and Gardner, R. (1992) Covenants with and without a sword: self governance is possible. American Political Science Review: 86(2), pp. 404–417.
Plott, C.R. (1996) Rational individual behavior in markets and social choice processes: the discovered preference hypothesis. In (eds.) Arrow, K.J., Colombatto, E., Perlman, M. and Schmidt, C. The Rational Foundations of Economic Behavior. Cambridge: Cambridge University Press, pp. 225–250.
Plott, C.R. and Porter, D.P. (1996) Market architectures and institutional test-bedding: an experiment with space station pricing policies. Journal of Economic Behavior and Organization: 31(2), pp. 237–272.
Poe, G., Schulze, W., Segerson, K., Suter, J. and Vossler, C. (2004) Exploring the performance of ambient based policy instruments when non-point source polluters can cooperate. American Journal of Agricultural Economics: 86(5), pp. 1203–1210.
Randall, A. (2003) Market based instruments – international patterns of adoption, remaining challenges and emerging approaches. Market-Based Instrument Workshop, Canberra, October.
Schary, C. (2003) Can the acid rain program's cap and trade approach be applied to water quality trading? EPA Conference, Washington, DC, 1–2 May 2003.
Shortle, J.S. and Horan, R.D. (2001) The economics of non-point pollution control. Journal of Economic Surveys: 15(3), pp. 255–289.
Simon, H. (1972) Theories of bounded rationality. In (eds.) Radner, C.B. and Radner, R. Decision and Organization. Amsterdam: North Holland.
Smith, V.L. (1982) Markets as economizers of information: experimental examination of the Hayek Hypothesis. Economic Inquiry: 20(2), pp. 165–179.
Smith, V.L. (1991) Papers in Experimental Economics (collected works). New York: Cambridge University Press.
Smith, V.L. (2000) Bargaining and Market Behavior: Essays in Experimental Economics (collected works). New York: Cambridge University Press.
Smith, V.L. (2002) Method in experiment: rhetoric and reality. Experimental Economics: 5(2), pp. 91–110.
Stavins, R. (1995) Transaction costs and tradable permits. Journal of Environmental Economics and Management: 29(2), pp. 133–148.
Stavins, R. (2000) Experience with Market-Based Environmental Policy Instruments. Resources for the Future Discussion Paper 00–09, January 2000. Resources for the Future, Washington, DC.
Stavins, R. (2003) Market based instrument policies: what can we learn from US experience (and related research)? Presented at Twenty Years of Market Based Instruments for Environmental Protection: Has the Promise Been Realised? Donald Bren School of Environmental Science and Management, University of California, Santa Barbara, 23–24 August 2003.
Sterman, J.D. (1987) Systems simulation: expectation formation in behavioral simulation models. Behavioural Science: 32(3), pp. 190–211.
Stoneham, G., Chaudri, V., Ha, A. and Strappazzon, L. (2003) Auctions for conservation contracts: an empirical examination of Victoria's BushTender trial. Australian Journal of Agricultural and Resource Economics: 47(4), pp. 477–500.
Thomson, D. (2004) Understanding the social drivers and impediments to participation in the upper Bet Bet MBI pilot. Chapter 3 in Connor, J. and Ward, J. Design and implementation of a landholder agreement for recharge credit trade in the upper Bet Bet Creek catchment, Victoria. CSIRO Client Report, December 2004.
Tietenberg, T. (1998) Ethical influences on the evolution of the US tradable permit approach to air pollution control. Ecological Economics: 24(2–3), pp. 241–257.
Tietenberg, T. and Johnstone, N. (2004) Ex post evaluation of tradable permits: methodological issues and literature review. In (ed.) Tietenberg, T. Tradable Permits: Policy Evaluation, Design and Reform. Paris: OECD.
Tisdell, J. and Ward, J. (2003) Attitudes towards water markets: an Australian case study. Society and Natural Resources: 16(1), pp. 61–75.
Tisdell, J., Ward, J. and Capon, T. (2004) Impact of communication and information on a heterogeneous closed water catchment environment. Water Resources Research: 40(9), p. W09S03.
Vatn, A. and Bromley, D. (1995) Choices without prices without apologies. In (ed.) Bromley, D. The Handbook of Environmental Economics. Cambridge, MA: Blackwell.
Vickrey, W. (1961) Counterspeculation, auctions and competitive sealed tenders. Journal of Finance: 16(1), pp. 8–37.
Ward, J. and Connor, J. (2004) Evaluation of options for recharge credit trade in the Bet Bet Creek catchment, Victoria. CSIRO Land and Water Report, Folio no. S/04/903.
Young, M., Cunningham, N., Elix, J., Lambert, J., Howard, B., Grabosky, P. and McCrone, E. (1996) Reimbursing the future: an evaluation of motivational, voluntary, price based, property right and regulatory incentives for the conservation of biodiversity. Department of the Environment, Sport and Territories Biodiversity Unit, Biodiversity Series Paper 9. Canberra, Australia.
6
Discussion
Tradable permit markets
Dallas Burtraw and Dan Shawhan
Five teams of authors have contributed chapters that illustrate the relevance of experimental methods to the analysis of quantity-based tradable permits for environmental protection. Together, these chapters illustrate the range of purposes to which experimental methods can be applied: sharpening the design of institutions, identifying potential multiple equilibria and the guiding influence of psychological or social factors, generating new hypotheses, and demonstrating for policy makers how markets for environmental goods can function.
Buckley, Mestelman, and Muller

Buckley, Mestelman, and Muller seek to test experimentally the theoretical predictions about the performance of two types of tradable emission permit markets when each producer can incrementally vary its output capacity but cannot change its emissions per unit of output. The two types of market-based instruments are cap-and-trade programs, which place a cap on aggregate emissions, and baseline-and-credit plans, which do not necessarily have such a cap. In Buckley et al.'s version of the latter, each firm earns an amount of tradable emission permits for each unit of product output. This creates an incentive to expand production, because the permit allocations grow with output and function like a subsidy to output. The assumptions behind the theoretical predictions include perfect competition and risk neutrality. In both the predictions and the experimental results, product output and emissions are higher under the baseline-and-credit plan because of the implicit subsidy to product output. The experiments verify the prediction of theory and show that the baseline-and-credit plan leads to a lower level of economic efficiency than cap-and-trade.
In their theoretical discussion, the authors assert, "Baseline-and-credit plans are theoretically equivalent to a cap-and-trade plan if the cap implicit in the baseline-and-credit plan is fixed and numerically equal to the fixed cap in a cap-and-trade plan." A baseline-and-credit plan can accommodate a fixed cap by adjusting the permitted emission rate as aggregate product output varies such that aggregate emissions are held constant. However, as long as a producer
can earn more credits by increasing product output, this does not lead to a result theoretically equivalent to cap-and-trade, as suggested, because the incentive to increase output still exists. Theory and previous simulation models show that the equilibrium will lead to increased output, lower product prices, and lower welfare than under cap-and-trade (Burtraw et al. 2001; Fischer 2001).1 Hence, the authors might find an even more general result were they to repeat their experiments with the aggregate emission cap fixed under their baseline-and-credit plan, adjusting the permitted emission rate in order to maintain aggregate emissions.
This chapter is a good illustration of the potential role of experimental methods in demonstrating for policy makers a theoretical result in a practical context. The role of the output subsidy in tradable performance standards has been characterized as a theoretical oddity by some analysts, but this experiment shows it has practical implications.
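The implicit subsidy can be seen in a stylized single-firm comparison (our notation and simplifications, not the chapter's). Let $P$ be the product price, $p$ the permit price, $r$ the firm's fixed emission rate, $\bar{r}$ the baseline rate at which credits are earned, and $c(q)$ the cost of producing output $q$; assume permits are purchased under cap-and-trade (free allocation would change profits but not the margin):

$$\pi_{CT} = Pq - p\,r\,q - c(q) \;\Rightarrow\; P - pr = c'(q), \qquad \pi_{BC} = Pq + p(\bar{r} - r)q - c(q) \;\Rightarrow\; P - pr + p\bar{r} = c'(q).$$

Relative to cap-and-trade, the baseline-and-credit firm faces an extra marginal benefit of $p\bar{r}$ per unit of output: the implicit output subsidy that drives the higher output and emissions observed in the experiment.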
Anderson, Freeman, and Sutinen

One of the lingering concerns about introducing the new institution of market-based policies into resource markets is the effect on incumbents and market structure, and especially the likelihood that it will lead to market consolidation. In an experiment modeled after a local fishing industry, Anderson, Freeman, and Sutinen demonstrate that a cap-and-trade program for fishing rights can either increase or decrease market concentration, depending in part on the profit function elasticities of small firms relative to those of large firms.
The authors report that the experimental results approximately match the predictions of a general equilibrium model described by two of them in a separate paper. Profit function elasticities that decrease market concentration in the equilibrium model do so in the experiment as well. Using those elasticities, the authors' chosen measure of market concentration in the experiment is statistically indistinguishable from the equilibrium model's prediction. Similarly, profit function elasticities that increase market concentration in the equilibrium model also do so in the experiment. However, the authors' measure of market concentration is slightly larger in the experiment than in the equilibrium model. The authors conjecture that concentration in excess of the predicted amount may occur as a result of price volatility, speculation, market power, or "a failure of the market to identify a price within the time it is observed."
The experiment supports the approximate accuracy of the predictions of the general equilibrium model. By demonstrating that a cap-and-trade program for simulated fishing rights may decrease rather than increase market concentration, the chapter may reduce the reported resistance of small fishers to a cap-and-trade system, especially if further research indicates that the profit function elasticities in the industry are conducive to decreased industry concentration under cap-and-trade.
Sometimes laboratory experiments provide evidence suggesting that the design of a new institution such as tradable fishing rights is likely to have one particular
impact rather than another. Such evidence can provide some confidence about likely outcomes, but cannot prove that other outcomes will not happen. In contrast, the experiment in the chapter by Anderson, Freeman, and Sutinen usefully varies the fundamental parameters of the problem and establishes the positive hypothesis that a particular treatment can have either of two qualitatively different impacts. The actual market outcome will depend on the underlying fundamental parameters. As a result, this finding illustrates the complementarity of experimental methods and other research methods.
Godby and Shogren

Godby and Shogren report on an experiment designed to compare the performance of three different enforcement schemes for a multiparty cap-and-trade program in which the only feasible enforcement options may be weak ones. The authors model the program in their chapter after the Kyoto Protocol for reduction of greenhouse gas emissions through emission trading. A key question in the implementation of the Protocol is how to assign liability for the possibility that emission allowances are not backed by valid emission reductions. For example, a country might initiate a project for biomass sequestration and sell allowances based on the expected capture of carbon, but over time the country might fail to maintain the project adequately, so the anticipated sequestration goal may not be realized. The question of liability is important not only for ensuring the integrity of environmental agreements, but also because some approaches might do a better job of inducing participation of developing countries in the international regime.
In the experiment, each subject represents a country. "Sellers" are endowed with tradable production permits (analogous to emission permits) at the beginning of each period, while the other participants, "buyers," can acquire permits only by purchase. The authors suggest an interpretation that identifies sellers as developing nations and buyers as industrialized nations. A concern under the Kyoto Protocol is that nations might sell permits in excess of actual emission reductions (henceforth, "invalid permits"), perhaps due to a failure to operate investments that lead to intended emission reductions. In the experiment, a stochastic element leads actual production to vary slightly from anticipated production, a feature that resembles technological uncertainty. In addition, players can intentionally produce in excess of permitted levels, running the risk of detection and a fine if production exceeds permitted levels. The question addressed is how the performance of the trading system varies with the assignment of liability for non-compliance.
In all three experimental treatments, some or all of the players have a one-in-six chance of being "monitored" and fined a set amount for each unit produced without a corresponding valid permit. In one treatment, sellers are liable if their permits are found invalid. In another treatment labeled the "buyer liability" scheme, sellers cannot be fined, but buyers can be fined if they buy permits that turn out to be invalid. A third treatment labeled the "buyer liability with refund"
scheme is the same as the “buyer liability” scheme except that if a permit is canceled, the buyer receives back from the seller the money she paid for that permit. In each treatment, reputation plays a role, as the identity of a seller who markets invalid permits is revealed. In the authors’ theoretical predictions and in the experiment results, emissions are higher, and economic efficiency lower, in each buyer liability scheme than in the seller liability scheme. Increasing the probability of being monitored did not change these relationships. The chapter inspires possible extensions. An alternative to seller liability and buyer liability is joint liability, in which both sellers and buyers are liable for sellers’ excess emissions. This option could be considered theoretically and experimentally. In addition, there is another version of buyer liability that could be considered. In Godby and Shogren’s buyer liability treatment, neither buyers nor sellers are liable for a seller’s production in excess of its initial permit endowment. As the authors explain, “After [invalid] permit sales were cancelled, sellers were not fined even if their production exceeded their permits.” An alternative buyer liability treatment could eliminate this exemption. It could do so by fining a seller, if monitored, for any excess production that remained after cancellation of any invalid permit sales it had made. Godby and Shogren point out that some previous authors claim that buyer liability could encourage the participation of more countries as venues for greenhouse gas emission reductions, through the Clean Development Mechanism established by the Kyoto Protocol. Godby and Shogren’s experiment results support their theoretical prediction that emissions will be higher, and economic efficiency lower, under buyer liability than under seller liability. The authors state this finding strongly: “Buyer liability with relatively weak enforcement renders emission trading inadequate by subverting the goals of Kyoto – creating less environmental protection at greater economic cost.” While this chapter provides a striking example of how experimental methods can be used to verify and illustrate theoretical predictions, it also provides an example of the need to interpret results with caution. Just as with the underlying theoretical results, the experimental results depend on the limitations imposed for the analysis. A central assertion of advocates of buyer liability is that it will influence participation decisions of developing countries in a positive way, but this assertion is not examined. Therefore, the authors’ strongly stated conclusion should be weighed with caution because seller liability may lead to less overall participation by developing countries and, conceivably, less environmental protection overall – a possibility that is precluded in the experiment.
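A stylized reading of the enforcement margin (ours, not the authors') makes the weakness of these schemes concrete. With a one-in-six monitoring probability and a per-unit fine $f$, a risk-neutral player's expected penalty per invalid unit is

$$E[\text{penalty per unit}] = \tfrac{1}{6}\,f,$$

so selling invalid permits or over-producing pays whenever the permit price $p$ exceeds $f/6$. Weak enforcement in this sense is precisely what allows the assignment of liability to matter so much for emissions and efficiency in the experiment.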
Duke, Gangadharan, and Cason

Two chapters address the question of salinity and water use in Australia. Unlike the previous chapter, which treated costs in a heterogeneous way, both of these chapters treat costs as homogeneous. Duke, Gangadharan, and Cason investigate linked markets for water and salinity and the prospects for trading of salinity
rights in the Murray Darling Basin. The chapter reports on six pilot experimental sessions that attempted to characterize the relevant markets and institutions. The pilots used an oral auction, wherein mistakes and misunderstandings were identified and even corrected within the session. Nonetheless, the instructions given to participants were complicated. Some subjects did not understand the instructions and took unintended suggestions in the instructions as data in forming beliefs about the reasonable price of salt permits.
This is one of the only chapters to acknowledge explicitly the justification for allowances (permits) as a policy instrument, and in so doing to place the experimental literature within a broader context in environmental economics. The authors state that the justification for a quantity-based approach to the salinity problem is the presumption of "some safe minimum standard" below which salinity does not impose significant harm, and above which it is very harmful. Were the environmental costs more continuous, one might be compelled to address a larger set of issues about instrument choice. Even in the context of a quantity instrument, the problem structure allows for a kind of safety valve, which they label a monetary fine. This fine represents the engineering cost of an outside option that the government could institute – an abatement technology described as the government's "salt interception cost."
One of the widely recognized values of experimental methods is the generation of new hypotheses, and this chapter does so. The authors find that the safety valve affects the behavior of participants in the qualitative direction that was anticipated, but it appears to affect it "too much." This leads the authors to "conjecture that when permit prices are within one to two experimental dollars of 20, some buyers choose to pay the fine and to avoid the subjective transaction costs of trading in the salt market." Alternatively, such behavior could be the result of the introduction of an asymmetric payoff function when payoffs in one direction are censored by the safety valve mechanism.
An important part of the work summarized in Duke et al. is the building of a link between natural science and social science. The authors begin their study with a series of workshops in the region to gather information about the scientific and economic issues that face the agricultural community. However, although the experimental parameters approximate this technical information, including the economic factors that affect the industry, the authors choose to frame their pilot experiments in an entirely abstract manner, without a specific context for the problem that is modeled in the experiments. This approach follows the main thread in the academic literature. The pilot studies used students and did not reveal the commodity or other aspects of the environmental problem in the simulations. The merits of this "context free" setting, even though mainstream, perhaps should be re-examined in every study. With too much detail and complexity, one can question what, if anything, is being learned in an experiment. However, one might lose the defining features of a problem if one strips away all detail entirely. To know whether this applies in a specific context, one might vary the treatment to explore how contextual detail affects the behavior of participants.
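The authors' conjecture about the safety valve can be written down directly (our notation, a stylized reading rather than the chapter's model). With the fine $f = 20$ acting as an outside option and a subjective transaction cost $c$ of trading in the salt market, a buyer prefers to pay the fine whenever

$$p + c > f \quad\Longleftrightarrow\quad p > f - c,$$

so opting out at permit prices within one to two experimental dollars of 20 is consistent with a subjective transaction cost of roughly $c \approx 1$ to $2$ experimental dollars.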
Ward, Connor, and Tisdell

The second chapter on salinity credit trading addresses the Bet Bet dryland farming community in north central Victoria, Australia. In contrast to Duke et al., this chapter embeds the technical information that characterizes the salinity problem in a context-rich setting for its experiments. The research is motivated by the desire to identify the performance of actual market-based instruments in an applied context to guide the design of institutions, rather than by the desire to test theory per se. The authors do so by comparing the behavior of participants in the experiments with a regulatory control, in which farm income can be calculated directly. A challenge in this project is to balance the amount of information needed in the experiment to develop the resemblance of an identifiable institution against the potential confusion that can emerge from introducing detail that may be irrelevant to a specific research hypothesis. The authors appear to have traversed this dangerous slope successfully.
Three hypotheses are tested directly. The first dealt with discriminating versus uniform price auctions. The results conformed to expectations based on the previous literature: a uniform price auction led to a significantly different outcome, with increased environmental performance, a less volatile market, and a more cost-effective outcome.
The second hypothesis compared a closed market design with an open design. In a closed market, bids to buy and sell are sealed, a market clearing price is calculated, and successful traders are informed, but no further information on the bid structure or outcome is revealed. In an open market, information is provided about the bid structure. One might conjecture that an open market design would allow participants to learn faster and a more efficient equilibrium to be achieved sooner. However, other authors have suggested that the additional information can introduce complexity about issues beyond the payoff facing an individual, and so can slow learning about the best performance for that individual in the market. The experiments supported the latter conjecture and found in favor of a closed market design.
The third hypothesis addressed the role of a supplementary social payment to the group for aggregate environmental performance above a threshold level. One variation allowed for communication among participants. The availability of a supplementary social payment could increase voluntary participation in practice if there are important social norms that support group activity, or it could lead to a reduction in participation if individual incentives are scaled down and there is important free riding. The results find that environmental performance and mean payments to participants are unaffected, compared to the absence of the social payment, although the variability of payments increased. Concern about free riding is rejected, and the authors endorse the strategy of using social payments. This finding is somewhat unsubstantiated: it rests on the absence of evidence that the social payment causes a problem, and it hinges on the continued conjecture that the social payment may encourage increased participation, which is not demonstrated. Moreover, the reader has to work hard to dissect the finding because the
statement of the hypothesis ex ante and the statement of the hypothesis in the results appear to contradict each other.
In fact, the Ward et al. chapter may be one of the gems in this volume – it is strongly motivated and incorporates helpful literature throughout to visit some of the most important conceptual issues in experimental methods. However, the reader has to work harder than one would like to grasp the issues that are presented. Moreover, although the issues addressed are rich, the possibility of subject confusion may cast some doubt on the authors' interpretation of the results. The instructions given to participants seem unclear. There were no verbal instructions other than answers to specific questions. There was a quiz that participants had to pass before participating, but we do not know what was in that quiz or what portion of answers had to be correct. The authors mention suboptimal bidding by low-marginal-cost players, by which they might mean that participants with low marginal abatement costs submitted very highly priced offers, or something else similarly irrational. This would be consistent with confusion. Confusion could have a large effect on the results. For example, it could reduce subject profits, reduce bid and offer volume, and increase recharge compared with what would have happened if the subjects had understood. The reported results are consistent with these possibilities, since profits were in fact low and recharge was in fact high compared with the theoretical predictions. The chapter's conclusions hinge on such things, so the conclusions could conceivably be artifacts of confusion or lack of understanding, rather than effects of tightly constructed treatments. Perhaps a lesson from this reading is that if a complex or non-intuitive setting is chosen, even greater attention should be directed at evaluating the understanding of participants, perhaps through follow-up interviews and debriefing.
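Returning to the first hypothesis above, the difference between the two auction formats reduces to a difference in payment rules, which a short sketch can make concrete. The code below is our own illustration in a simplified one-sided procurement framing, not the chapter's software; the experiment's exact clearing and tie-breaking rules may differ.

```python
# Procurement (reverse) auction: the regulator buys `units_needed` units of
# abatement from the lowest offers. Under a discriminating rule each winner
# is paid her own offer; under a uniform rule every winner is paid the
# price of the marginal (highest accepted) offer.

def run_auction(offers, units_needed, uniform=True):
    """offers: list of (seller_id, ask) for one unit each; assumes
    len(offers) >= units_needed."""
    ranked = sorted(offers, key=lambda o: o[1])   # cheapest offers first
    winners = ranked[:units_needed]
    cutoff = winners[-1][1]                       # marginal accepted ask
    return {sid: (cutoff if uniform else ask) for sid, ask in winners}

offers = [("a", 4), ("b", 6), ("c", 7), ("d", 9), ("e", 12)]
print(run_auction(offers, 3, uniform=True))   # {'a': 7, 'b': 7, 'c': 7}
print(run_auction(offers, 3, uniform=False))  # {'a': 4, 'b': 6, 'c': 7}
```

Under the uniform rule the payment is set by the marginal offer, so offering near true cost is roughly optimal; under the discriminating rule sellers have an incentive to shade offers above cost, which tends to produce the more volatile, less cost-effective outcomes reported above.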
An implication of factors favoring optimal decision making

One way in which an experiment can differ from a comparable "real world" situation is the relative extent to which each favors optimal decision making; judging this is a matter of opinion. If one believes that the experiment favors optimal decision making more than the real world situation does, or vice versa, then that belief can play a useful role in using the experiment to update expectations about what will happen in the real world. If optimality is more favored in the experiment, perhaps because of a simply constructed choice context, then one should expect decisions to be less optimal in the real world situation than they were in the experiment. If, on the contrary, optimality is more favored in the real world situation, perhaps because of higher stakes and the greater experience of real world emission credit traders, then one should expect decisions to be more optimal in the real world situation than in the experiment.
Applying this to the experiment by Buckley, Mestelman, and Muller, for
example, one might conclude that their finding of irrationally low emissions and high permit banking in their experimental tradable performance standards market is not a reliable indicator that the same will occur in a real world tradable performance standard market. Naturally, it may be useful to consider the above in choosing or designing an experiment. For example, an experiment that plausibly favors optimal decision making more than does some important comparable real world situation may not be able to establish convincingly that outcomes in that real world situation will be within some tolerance of the optimum, even if the outcomes are well within that tolerance in the experiment. However, such an experiment might serve other purposes such as validating underlying predictions of theory, indicating that outcomes will be far from the theoretical optimum, identifying unanticipated outcomes or generating new hypotheses.
A few additional experimental research ideas

These five chapters sample from a set of topics related to tradable permits that might be researched experimentally. We have mentioned a few elsewhere in this commentary. Here are a few more.
•	What auction designs will best achieve the goals of government officials, such as maximizing perceptions of fairness, minimizing the cost of participation, minimizing "mistakes" by participants, and maximizing economic efficiency and government revenue?
•	To what extent are different tradable permit market designs susceptible to the following: market power; market "bubbles"; and permit or final-product prices that are inefficiently high, low, or volatile? (For a possible empirical example, see Joskow and Kahn 2002.)
•	What is the performance of different market designs for unconventional tradable permits, especially as such designs more precisely target environmental measures but in doing so introduce increasing levels of complexity? One example is locational ambient pollution permits (see Montgomery 1972).
Conclusion

As a set, the chapters in this Part illustrate the various ways that experimental methods can make a unique contribution to economic analysis. To various degrees, we see these chapters testing theoretical predictions, testing notions that might go beyond the limits of established theoretical models, generating new hypotheses, and demonstrating theoretical findings for policy makers, regulators,
and stakeholders. As Buckley, Mestelman, and Muller point out, experiments can also be reused as participatory learning exercises for policy makers and stakeholders. The boundaries of the experimental framework necessitate caution in formulating general conclusions about real institutions outside the experimental context, but also empower the researcher to speak with confidence about specific aspects of a problem. How robust is each finding? Does it survive variations in the margins of the analysis, such as the institutional details of constraints and framings, especially variations that may apply outside the laboratory? This is an issue facing the researcher in any context, but experimental methods are especially well suited to test the robustness of theory by varying the margins of the analysis.
Note

1 The authors interpret the theory to say that if the emission rate is adjusted as output changes so as to maintain a fixed aggregate level of emissions, this raises industry costs "due to unnecessarily tight restrictions on emitting firms." We would characterize this phenomenon differently. The output subsidy implicit in the baseline-and-credit approach leads to more output than is efficient, which is the source of inefficiency in a general equilibrium context (Goulder et al. 1999).
References

Burtraw, Dallas, Karen Palmer, Ranjit Bharvirkar, and Anthony Paul, 2001. "The Effect of Allowance Allocation on the Cost of Carbon Emission Trading." Resources for the Future Discussion Paper 01–30.
Fischer, Carolyn, 2001. "Rebating Environmental Policy Revenues: Output-based Allocation and Tradable Performance Standards." Resources for the Future Discussion Paper 01–22.
Goulder, Lawrence H., Ian W.H. Parry, Roberton C. Williams III, and Dallas Burtraw, 1999. "The Cost-Effectiveness of Alternative Instruments for Environmental Protection in a Second-Best Setting." Journal of Public Economics 72(3): 329–360.
Joskow, Paul and Edward Kahn, 2002. "A Quantitative Analysis of Pricing Behavior in California's Wholesale Electricity Market During Summer 2000: The Final Word." Mimeo dated 4 February 2002. Online, available at: econ-www.mit.edu/faculty/download_pdf.php?id=548 (accessed 12 September 2006).
Montgomery, W.D., 1972. "Markets in Licenses and Efficient Pollution Control Programs." Journal of Economic Theory 5(3): 395–418.
Part II
Common property and public goods
7
Communication and the extraction of natural renewable resources with threshold externalities
C. Mónica Capra and Tomomi Tanaka
Introduction

Non-binding communication, or cheap talk, has been associated with the resolution of coordination failures (see Cooper et al., 1992; Clark et al., 2001) and social dilemmas in both laboratory and field experiments (see Isaac and Walker, 1988; Ostrom and Walker, 1991; Ostrom et al., 1994; Cardenas et al., 2004). In simple coordination games, communication is expected to reduce uncertainty about what other players are likely to do and hence facilitate coordination on the better equilibrium. In social dilemma games, the reasons why communication works are still unclear. Perhaps communication results in an increased sense of group identity, an enhancement of normative orientations toward cooperation, or a need to seek verbal approval, or avoid reprimand, when promises of cooperation are fulfilled or violated.
In this chapter we use a simple neoclassical growth model with multiple equilibria to investigate the mechanism by which non-binding communication results in lower equilibrium resource extraction. We use a growth model because it provides an adequate dynamic framework for modeling extraction of a natural resource with threshold externalities. One specific example is fisheries. Consider a case in which the available stock of fish determines its regeneration rate. If fishing or extraction is such that the stock does not surpass the threshold level, then the regeneration rate is lower, which yields a lower equilibrium in which agents attain a lower overall resource stock and consumption. In contrast, when extraction is such that enough fish stock is left to pass the threshold, a higher regeneration rate is achieved, and in equilibrium higher resource stock and consumption can be attained. Thus, in this dynamic setup, there are two steady states, one of which is optimal.1
There are two innovative aspects to this chapter. First, we model the extraction of a natural renewable resource using a coordination game environment rather than the typical prisoners' dilemma environment. Second, we use a dynamic setup to see the effects of communication on equilibrium selection. Most experimental papers on communication and coordination have analyzed
static games. In contrast, by creating a dynamic environment, we make the decision context much more realistic. We implement communication by allowing subjects to send unrestricted messages to each other in real time, before any extraction decisions are made. By analyzing the content of the messages, we are interested in discovering the circumstances under which communication works. Thus, our study attempts to answer the following question: what “kind” of messages can eliminate strategic uncertainty in a dynamic environment? The data from our experiments suggest that messages that contain positive feedback or verbal approval are important for successful coordination.
Experimental design

Our design is based on that of Capra et al. (2006), who add institutions – rules that govern subjects' choices – to the basic Ramsey–Cass–Koopmans (RCK) neoclassical growth model with a resource externality.2 Capra et al. (2006) use three institutions in their paper: communication, voting, and a hybrid treatment in which subjects were allowed to communicate and vote. In this chapter, we discuss the procedures and the results from the communication treatment. In addition, we study aspects of the content of messages that have not been included in previous work.

Theoretical framework

In the neoclassical growth model a representative agent maximizes his lifetime utility, given by

$$\sum_{t=0}^{\infty} (1 + \rho)^{-t}\, U(C_t),$$
subject to the resource constraint $C_t + K_{t+1} \le A F(K_t) + (1 - \delta)K_t$, where $U$ is the utility of consumption, $C_t$, at time $t$. Technology is given by $A F(K_t)$, and $K_t$ is the capital stock (the amount of the resource). The parameters $\rho$ and $\delta$ represent the discount and depreciation rates, respectively. In our resource extraction environment, $K$ represents a renewable resource such as fish. With such a perishable resource, there is 100 percent depreciation (i.e. $\delta = 1$). Multiple steady states result from a threshold externality. This is implemented by letting the parameter $A$ take different values depending on the amount of the resource that is available after consumption. If the resource stock is higher than a threshold level, $\bar{K}$, then the value of $A$ is $\bar{A}$. In contrast, if $K$ is lower than the threshold $\bar{K}$, then $A$ is equal to $\underline{A}$, with $\bar{A} > \underline{A}$.
To implement infinite horizons in the laboratory, we used a random ending rule: at any given period, the probability of continuation was 0.80, which is equivalent to a discount parameter $\rho$ of 0.25 (since $1/(1 + \rho) = 0.80$).
Table 7.1 Parameters of the model

$U(C_t) = 400C_t - 2C_t^2$
$F(K_t) = K_t^{0.5}$
$\bar{K} = 31$
$A = \underline{A} = 7.88$ if $K < 31$; $A = \bar{A} = 16.77$ if $K \ge 31$
$\rho = 0.25$
$\delta = 1$
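The two steady states implied by these parameters can be verified numerically. The sketch below is our own illustration, not the authors' software: with $\delta = 1$, the steady-state Euler condition of the continuous model is $A F'(K^*) = 1 + \rho$, and the small gaps between its output and the equilibrium values reported next presumably reflect the discrete, heterogeneous-agent implementation used in the laboratory.

```python
# A minimal sketch (ours): steady states of the RCK model under the
# Table 7.1 parameters. With delta = 1 the steady-state Euler condition is
# A * F'(K*) = 1 + rho, where F(K) = K**0.5, so K* = (0.5 * A / (1 + rho))**2,
# and feasibility gives C* = A * K***0.5 - K*.

RHO = 0.25
A_LOW, A_HIGH, K_BAR = 7.88, 16.77, 31.0

def steady_state(a):
    """Return (K*, C*) for productivity level a."""
    k_star = (0.5 * a / (1.0 + RHO)) ** 2   # from a * 0.5 * K**(-0.5) = 1 + rho
    c_star = a * k_star ** 0.5 - k_star     # consumption that holds K constant
    return k_star, c_star

for label, a in (("below threshold", A_LOW), ("above threshold", A_HIGH)):
    k, c = steady_state(a)
    print(f"{label} (A = {a}): K* = {k:.1f}, C* = {c:.1f}; threshold = {K_BAR}")
# Prints stocks of roughly 9.9 and 45.0 -- close to the equilibrium stocks of
# 9 and 45 reported below; the consumption values differ slightly because the
# experiment uses discrete units and heterogeneous agents.
```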
For the parameter values described in Table 7.1, and under the assumption that markets are competitive, it is possible to derive the equilibrium levels of resource consumption, $C^*$, and resource stock, $K^*$. In our experiments, when the threshold $\bar{K}$ was surpassed, $C_H^* = 70$ and $K_H^* = 45$; when it was not, $C_L^* = 16$ and $K_L^* = 9$.3

Communication

In our experiments, subjects were allowed to send unrestricted messages to each other in real time in a "chat room" environment before making trading and consumption decisions. There are advantages and disadvantages to this form of communication vis-à-vis email or face-to-face conversation. Clearly, email is not as effective as a chat room because communication is not in real time; there is a lag between the time a message is composed and sent and the time it is received and read. On the other hand, a chat room environment is not as realistic and "dynamic" as face-to-face communication. When people can talk and see each other, they can use body language and voice intonation to add content to their messages. However, these body and tone messages may be hard for the experimenter to interpret and even to notice. In a chat room environment, the experimenter has a less ambiguous way to record and interpret messages. We believe that the benefits from the added control of the computer setup outweigh the costs of sacrificing realism. In addition, there is some evidence that real time computer communication (chat room) is as effective as face-to-face communication. For example, Rocco (1996) showed that face-to-face communication and chats are equally effective at resolving social dilemmas.

Experimental procedures

We organized a total of six sessions; three sessions (sessions one, two, and three) were conducted at Emory University and three sessions (sessions four, five, and six) were conducted at Caltech. In each session, five subjects participated in the experiment. The sessions lasted no more than 3 hours and earnings ranged from $11.56 to $44.24. Because of the random ending rule, each session consisted of several "horizons." A horizon represents a lifetime, and a lifetime consists of one or more periods.4 Table 7.2 below shows the number of horizons and periods for each session.
Table 7.2 Number of periods per horizon

Session    Horizon (periods)
1          H1(7), H2(7)
2          H1(8), H2(1), H3(3), H4(5), H5(2)
3          H1(3), H2(7), H3(6), H4(4)
4          H1(6), H2(1), H3(1), H4(2), H5(4)
5          H1(3), H2(4), H3(1), H4(3), H5(6)
6          H1(11), H2(2), H3(5), H4(2)
At the beginning of each session, subjects sat at their computer terminals. After the instructions were read aloud, subjects participated in a three-period practice exercise that did not count towards their earnings. In each horizon, each subject was endowed with five units of the resource and 10,000 experimental tokens (called Yen). Thus, the initial conditions were such that the threshold level of the resource was not surpassed (i.e. there was a total of 25 units of the resource at the beginning of each horizon).
In each of our sessions, five subjects participated in multiple periods, and in each period they faced a two-stage decision problem (stage one and stage two), plus a communication stage that preceded all decisions. In the communication stage, before stage one of the experiment, the computer displayed a chat room screen, where participants could chat with each other in real time. Subjects were given 180 seconds to chat, but in all sessions communication lasted less than 2 minutes.
In stage one, participants traded units of the resource in a call market; trading determined the amount of the resource available to each of them in the current period. When the call market opened, subjects were asked to submit their demand schedules by specifying the quantity and price of each unit of the resource. The aggregate demand was then derived by adding each subject's individual demand. The market price of the resource was determined by the interaction between the aggregate demand and the aggregate supply. The latter was fixed (perfectly inelastic) at a quantity equal to the aggregate amount of the resource. For each unit bought in the market, an amount equal to the market price was subtracted from the initial endowment of tokens. For each unit sold, the subject earned an amount equal to the market price.
In our experiments, subjects were heterogeneous and differed in their valuations of the resource and their individual production functions. Because they were not identical, they benefited from trading in the market. The individual production functions and the aggregate production function are shown in Figure 7.1; the individual demand schedules can be found in the instructions (available upon request). Notice that the production functions were lower when the threshold level of the resource was not attained. The last panel of Figure 7.1 shows the aggregate production function and the kink generated by the resource externality.
Figure 7.1 Production functions (agents 1–5 and aggregate): output as a function of units of input, with separate above-threshold and below-threshold branches.
Results In Lei and Noussair (2003) and Capra et al. (2006) an economy without institutions (e.g. no communication) unambiguously converges to the “inferior” equilibrium. This renders lower earnings for all participants in the economy. In our experiment, where subjects are allowed to communicate freely in a chat room environment, the results are mixed. Figure 7.2 shows the average amount of the resource in each horizon for all six sessions. In the first horizon (H1) of the first session, for example, there were seven periods. Weighting each period equally, the average resource amount remaining after consumption was 10.14 units; just above the low equilibrium level of nine units. In this horizon, the threshold was not surpassed. In contrast, in the second horizon (H2), the average amount of the resource across periods left after consumption was 71.29 units. This amount is higher than the threshold (31 units), and higher than the high equilibrium level of 45 units. From Figure 7.2 we observe that in sessions one and two, coordination was partially successful because the threshold level of the resource was surpassed in 80
Figure 7.2 Average resource amount (all sessions), by horizon, plotted against the high equilibrium, the threshold, and the low equilibrium.
some, but not all, horizons. In sessions three and four, subjects were unable to coordinate their actions so as to restrict extraction: averaging across periods, in none of the horizons were they able to pass the threshold. In contrast, in sessions five and six, subjects successfully restrained consumption in all horizons. This divergent pattern tells us that communication does not resolve strategic uncertainty unambiguously. The mere existence of an institution that allows subjects to communicate may be a necessary condition for successful coordination, but it is clearly not a sufficient one. There are two reasons why communication may "fail": (1) there is no awareness, or expression of awareness, of the need to coordinate actions; that is, no subject brings up this issue in the conversation; or (2) there is no common knowledge of awareness; that is, subjects do not know that all know that too much extraction is worse for all. By looking at the chat transcripts, we hope to determine whether these two conditions, or additional ones, are sufficient for successful coordination. The next subsection reports the analysis of the transcripts.

Analysis of the chat transcripts

The basic requirement for communication to work is that people exchange information relevant to the coordination problem, or that they share knowledge. In our setup, subjects face a tradeoff: consume today or save for tomorrow. However, saving only makes sense when enough people save enough of the resource. Thus, when chatting, it is important that subjects express, through relevant messages, the need to coordinate actions so as to restrain extraction and make sure that they pass the threshold.
In our experiments, there were two sessions where coordination failed, sessions three and four, but they failed for different reasons. In session three, there was little evidence of awareness or of expression of awareness. The left panel of Figure 7.3 graphs the resource stock left over after extraction in session three. There were four horizons in this session; each horizon is separated in the figure by a space. Each marker represents the stock of the resource available in that period. The numbers above the markers represent the number of people whose messages contained language related to extraction, consumption, coordination, or anything else related to the coordination problem in that period (i.e. relevant messages). The horizontal line represents the threshold level of the resource.
All but two sentences in the session three transcripts lacked relevant messages. Most subjects in this session were either unaware of the coordination problem or unwilling to express awareness. There were only two instances where one subject expressed awareness of the problem and attempted to exchange information; this happened in periods seven and three of the second and last horizons, respectively. This is what the "aware" subject said:
H2, t7, subject three: K just keep getting smaller in the economy.
H4, t3, subject three: man . . . don't consume all your K . . . got to save some.
Figure 7.3 Resource stock and number of subjects with relevant messages in each period and horizon (left panel: session three; right panel: session four).
Clearly, the messages were not followed up with any relevant action or comment.
The reasons why communication failed in session four are quite different. As in session three, in none of the periods of session four were subjects able to pass the threshold (see the right panel of Figure 7.3). This happened despite the fact that there were two "aware" subjects who immediately expressed awareness. One of these subjects also took it upon himself to aggressively convince others of the benefits of restraining extraction. For example, in the first horizon of this session, out of 84 messages sent by all subjects in seven periods, 52 (62 percent) were sent by subject two. However, his messages had a weak echo. Only subject four followed up, and was responsible for 23 (27 percent) of the messages. Thus, two players were responsible for 89 percent of the messages; only nine messages (11 percent) were sent by subjects one, three, and five altogether. Subject one said: "hihi," "good," and "ok." In this case, we can argue that coordination failed because of the lack of common knowledge of awareness. Indeed, awareness of the benefits of cooperation is not enough; subjects must know that others know. Thus, successful coordination should require that most players express their agreement with the proposal to restrain extraction or to keep the resource above the threshold level. By communicating their agreement through relevant messages, awareness becomes common knowledge and strategic uncertainty should be reduced.
However, are awareness and common knowledge of awareness sufficient conditions for eliminating strategic uncertainty in our environment? Figure 7.4 shows the stock of the resource in sessions one and two. In the first horizon of session two, there was little relevant content in the messages sent by the subjects. The horizon converged to the low equilibrium because communication failed, as expected. In the third horizon, there was an attempt to coordinate. In the first period of this horizon, one subject expressed awareness and proposed lowering extraction; two other subjects expressed agreement with this proposal by sending relevant messages. The subjects were successful at passing the threshold, but their communication later collapsed, which resulted in more extraction. Thus, awareness and agreement were not enough to sustain coordination in session two of our experiment.
In contrast, although communication in the first horizon of session one failed (see the right panel of Figure 7.4), subjects were able to restrain extraction in all
Figure 7.4 Resource stock and number of subjects with relevant messages in each period and horizon (left panel: session two; right panel: session one).
periods of the second horizon. In the second horizon subjects not only expressed awareness and agreed on a proposed strategy, they also acknowledged that the strategy worked. We have included below an excerpt of the communication that shows the kinds of messages that imply acknowledgment (i.e. positive feedback). This conversation followed successful coordination in the first period of the second horizon (H2, t1).
t2, subject four: SEE!
t2, subject three: good work.
t2, subject two: wanna do it again?
t2, subject three: let's let it keep growing.
Again, the positive feedback acknowledging the benefits of coordinating actions, in addition to awareness and majority agreement, seems to have been responsible for successful communication in the second horizon of session one. However, if acknowledgment is important for successful coordination, we should observe a similar pattern in the sessions where communication worked. The left and right panels of Figure 7.5 show the resource stocks in each period and horizon of sessions five and six, respectively. In these sessions, subjects were able to maintain the average resource above the threshold in all of the horizons, and all three ingredients – awareness, majority agreement to coordinate, and acknowledgment of the benefits of coordination – were present. In horizon one of session five, for example, the sequence of messages was the following: in the first period, subjects exchanged information
Figure 7.5 Resource stock and number of subjects with relevant messages in each period and horizon (left panel: session five; right panel: session six).
about the benefits of coordinating actions, but the resource did not pass the threshold; then, in the second period, subject four said: "umm looks like we didn't get above 30 this time again." The rest of the period two conversation is shown below:
t2, subject three: nowhere near.
t2, subject three: did we at least increase total K?
t2, subject four: def. for the first round of a set (we'll probably do this again), should try to keep and not consume more than one or two.
t2, subject four: barely.
t2, subject five: yes, by 1.
t2, subject three: so next round we try to keep?
t2, subject five: yes, lets see if we can trust another.
t2, subject two: don't consume the k.
t2, subject five: all keep.
t2, subject four: well it's round two, probabilistically not nearly as good as round 1.
t2, subject five: nonsense, its already round 2.
t2, subject five: this is markov.
t2, subject four: but yeah, keeping lots is a good strategy.
Notice that most subjects also express agreement with the proposal not to consume too much; only subject one did not express his agreement in period two. However, in period one, this subject did not dissent when player three advised keeping the resource high and asked: "no dissent?" Finally, in the third period of the horizon, players expressed satisfaction with the results. For example, subject five said "awesome" and "I think we know what we should do by now." This is acknowledgment of success, or positive feedback.
A similar pattern of messages was observed in session six. After successful coordination in the first period of the first horizon, subject four said "yay," "awesome guys," subject one said "wow," and other subjects expressed similar messages. An interesting issue here is that once subjects surpassed the threshold for a couple of periods, they no longer needed to keep sending messages that included consumption information, which explains the many zeroes in the last horizons. Below is an excerpt of the process preceding the drop in relevant messages.
t1, subject five: I think we know what we should do by now.
t1, subject one: yep.
t1, subject four: right.
t1, subject two: yes.
t1, subject three: wake me up when it starts.
t1, subject three: drill = consume 1?
t1, subject three: whee.
t1, subject four: sure.
t1, subject five: right.
t2, subject three: zzz.
To test our hypothesis that positive feedback matters, we use a random effects regression to estimate the effects of different kinds of communication on the level of the resource stock. Our dependent variable is the total resource stock, Kt, in period t. The independent variables are (1) the total resource stock in the previous period t – 1, (2) the total number of messages in period t, (3) the number of subjects with relevant messages in period t, (4) the number of subjects with relevant messages in the previous period t – 1, and (5) the number of positive feedback messages in period t.6 The third and fourth variables capture awareness and agreement. The regression pools all sessions and allows horizon-specific and session-specific disturbances; each session and each horizon is assumed to have a random effect. Table 7.3 shows the results of the regression.

Table 7.3 Random effects GLS estimation of the effects of communication on the levels of resource stock (R² = 0.90)

Independent variables                    Estimated coefficient    Std error    p > |t|
Previous K                               0.99                     0.04         0.00
Total messages                           –0.31                    0.14         0.02
Relevant messages                        2.08                     0.57         0.00
Relevant messages in previous period     0.81                     0.53         0.12
Positive feedback                        2.83                     0.78         0.00
Constant                                 0.90                     1.58         0.95

The total number of messages had a negative impact on the level of the resource stock. This happened because, once subjects established effective ways of coordinating, they tended to stop sending relevant messages. The number of subjects who sent messages related to coordination (i.e. language about extraction, consumption, coordination, or anything else related to the coordination problem in that period) has a positive effect on the resource stock. However, the number of subjects who sent relevant messages in the previous period does not significantly affect the level of the resource stock. This implies that coordination efforts need to be repeated at least until successful coordination is consolidated. Finally, expressions of approval or positive feedback have positive effects on the resource stock.
It is easy to see why awareness and agreement (common knowledge of awareness) are important in reducing strategic uncertainty; but why is acknowledgment that the strategy worked important in preventing coordination failure? We believe that subjects respond to both financial and non-financial incentives. Positive feedback in the form of acknowledging that they did well increases the benefits of successful coordination, and thus is an important motivator for subjects. Economists have recently realized that verbal reprimands or sanctions matter. Masclet et al. (2003), for instance, show that subjects respond to non-monetary punishment in the form of expressions of disapproval in voluntary contribution games. Brandts and Cooper (2006) find that communication is a more effective motivator than monetary incentives in weak-link games. Similarly, it is likely that subjects would respond to verbal (non-monetary) reward, or acknowledgment that they are doing the right thing, in a coordination game.
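As an illustration of this specification, the sketch below estimates a comparable mixed model in Python. It is our own example, not the authors' code: the synthetic data merely mimic the structure (and roughly the coefficients) of Table 7.3 so that the script runs end to end, and the variance-component formulation only approximates the session- and horizon-specific disturbances described above.

```python
# A minimal sketch of a random effects regression of resource stock K_t on
# lagged stock, message counts, and positive-feedback counts. Column names
# and the data-generating process are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 150
df = pd.DataFrame({
    "session": rng.integers(1, 7, n),        # six sessions
    "horizon": rng.integers(1, 6, n),        # up to five horizons
    "K_lag": rng.uniform(5, 80, n),          # resource stock in period t-1
    "total_msgs": rng.integers(0, 30, n),
    "relevant_msgs": rng.integers(0, 6, n),  # subjects with relevant messages
    "relevant_msgs_lag": rng.integers(0, 6, n),
    "pos_feedback": rng.integers(0, 4, n),
})
df["K"] = (0.99 * df.K_lag - 0.31 * df.total_msgs + 2.08 * df.relevant_msgs
           + 0.81 * df.relevant_msgs_lag + 2.83 * df.pos_feedback
           + rng.normal(0, 5, n))

# Session-level random intercepts, with horizon-within-session variance
# components approximating the horizon-specific disturbances.
model = smf.mixedlm(
    "K ~ K_lag + total_msgs + relevant_msgs + relevant_msgs_lag + pos_feedback",
    data=df, groups="session",
    vc_formula={"horizon": "0 + C(horizon)"},
)
print(model.fit().summary())
```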
Conclusion

In this chapter, we present results from an experiment that captures important features of renewable resource extraction. Our framework differs from that of others in that we adopt a dynamic coordination game environment, in which the amount of the resource in equilibrium depends on the ability of subjects to coordinate choices and reduce extraction. We study communication under this framework to determine under what conditions communication improves outcomes. In our experiment, we use a chat room environment, in which subjects are allowed to send unrestricted messages in real time before making decisions.

We observe that in two of the six sessions we organized, subjects failed to coordinate their actions to restrict extraction. In these sessions, choices converged to a low equilibrium (i.e. high resource extraction). By contrast, in two other sessions, subjects were perfectly successful in avoiding the bad equilibrium. In the remaining sessions, the results were mixed. These conflicting results motivated us to study the chat transcripts to better understand what kinds of messages communication must contain to be a sufficient condition for successful coordination.

We determine that there are three ingredients for successful coordination in our dynamic coordination environment. First is the expression of awareness of the benefits from reducing extraction and surpassing the threshold level of the resource. Second is the common knowledge of awareness that arises when the majority of subjects express agreement to reduce extraction (i.e. send relevant messages). The third ingredient is acknowledgment of successes. This last ingredient, unlike the previous ones, is less obviously connected (at least to economists) with reducing strategic uncertainty in coordination games. However, verbal rewards seem to matter in our dynamic environment.

If communication is considered as a policy alternative to help people resolve coordination problems, we suggest that communication be structured so that the three ingredients are satisfied. Indeed, a key lesson from our investigation, which is consistent with the findings in static coordination games (see Chaudhuri et al., 2001), is that the existence of communication alone cannot resolve coordination problems. It matters what "kind" of messages people send to each other.
Acknowledgments

We thank Charles Noussair, Colin Camerer, and Lauren Munyan.
Notes

1 The model was also studied by Lei and Noussair (2003) and Capra et al. (2006) in different contexts.
2 Lei and Noussair (2002) were the first to implement the RCK model in the laboratory.
3 For a more detailed derivation of the steady states, please refer to Lei and Noussair (2003).
4 Subjects were recruited to participate in a 3 hour session. If a horizon did not end by the end of the third hour, subjects were told that they could continue the experiment on another day or be paid for the decisions made by other subjects who would finish their horizon. Subjects were also told that new horizons started only if at least 30 minutes were left before the end of the session. In no session was it necessary to continue a horizon on another date.
5 To facilitate decision making, we provided subjects with an option to simulate different consumption choices and see how these choices would affect their earnings.
6 The positive feedback messages expressed in period t referred to choices made in the previous period(s).
References

Brandts, J. and D. Cooper, 2006. It is what you say not what you pay: an experimental study of manager–employee relationships in overcoming coordination failures. Forthcoming in the Journal of the European Economic Association.
Capra, C. M., T. Tanaka, C. F. Camerer, L. Feiler, V. Sovero, L. Wang, and C. Noussair, 2006. The impact of simple institutions in experimental economies with poverty traps. Emory University working paper, Atlanta, Georgia.
Cardenas, J. C., T. K. Ahn, and E. Ostrom, 2004. Communication and cooperation in a common-pool resource dilemma: a field experiment. In: Steffen Huck (ed.), Advances in Understanding Strategic Behaviour: Game Theory, Experiments, and Bounded Rationality: Essays in Honour of Werner Güth. New York: Palgrave, 258–286.
Chaudhuri, A., A. Schotter, and B. Sopher, 2001. Talking ourselves to efficiency: coordination in inter-generational minimum games with private, almost common and common knowledge of advice. C.V. Starr Center for Applied Economics, New York University, New York.
Clark, K., S. Kay, and M. Sefton, 2001. When are Nash equilibria self-enforcing? An experimental analysis. International Journal of Game Theory, 29 (4), 495–515.
Cooper, R., D. DeJong, R. Forsythe, and T. Ross, 1992. Communication in coordination games. Quarterly Journal of Economics, 107 (2), 739–771.
Isaac, R. M. and J. M. Walker, 1988. Communication and free-riding behavior: the voluntary contribution mechanism. Economic Inquiry, 26 (4), 585–608.
Lei, V. and C. Noussair, 2002. An experimental test of an optimal growth model. American Economic Review, 92 (3), 549–570.
Lei, V. and C. Noussair, 2003. Equilibrium selection in an experimental macroeconomy. Emory University working paper, Atlanta, Georgia.
Masclet, D., C. Noussair, S. Tucker, and M.-C. Villeval, 2003. Monetary and nonmonetary punishment in the voluntary contributions mechanism. American Economic Review, 93 (1), 366–380.
Ostrom, E. and J. Walker, 1991. Communication in a commons: cooperation without external enforcement. In: T. Palfrey (ed.), Laboratory Research in Political Economy. Ann Arbor, MI: University of Michigan Press, 287–322.
Ostrom, E., R. Gardner, and J. Walker, eds., 1994. Rules, Games and Common Pool Resources. Ann Arbor, MI: University of Michigan Press.
Rocco, E., 1996. Cooperative efforts in electronic contexts: the relevance of prior face-to-face interactions. Proceedings of the 2nd Americas Information Systems Conference, Phoenix, Arizona, 350–352.
8 Unilateral emissions abatement
An experiment

Bodo Sturm and Joachim Weimann
Introduction

Some of the most serious environmental problems can be characterized as international public good problems. The damage each country suffers depends on the aggregated emissions of harmful material and not (only) on local emissions. The most prominent example is the climate change expected from global CO2 emissions. It is not in the self-interest of an individual country to abate the Pareto efficient amount of emissions, because part of the total benefit generated by abatement cannot be internalized. Given this situation, environmental pressure groups often demand that local politicians take on a leading role and abate more emissions than is in the narrow self-interest of the country. Evidently, they expect that "a good example" may encourage other countries to join the coalition of abating countries in order to overcome the social dilemma situation all countries are confronted with. Furthermore, there is the hope that the leader himself can profit from the leading position. As an example, we can quote the German Federal Minister for the Environment (BMU, 2002):

We are the leader in international climate protection and we want to maintain our leading position . . . because climate protection creates new jobs and opportunities to export . . . climate protection policy generates not only environmental benefits but also economic profits. The leading position of German climate protection policy is profitable.

There are numerous examples of situations in which the leadership of one country may have the potential to induce cooperation by other countries. The pollution of the Baltic Sea and the acid rain problem between Finland and the former USSR are historical examples from the European region.1

It is not quite clear why countries should follow the good example of a leader even if it is not in their immediate self-interest. However, one may speculate that cooperative solutions are only accessible if players (countries) manage to coordinate on the efficient outcome.2 Therefore, leadership may serve as a coordination device. Alternatively, one may hope that the "good example" creates some kind of social norm; if followers ignore this norm, it may cause them emotional costs. Thus there are several
ways in which leadership may be able to induce cooperative behavior by the followers.

In a seminal paper, Hoel (1991) investigated the consequences of unilateral emissions abatement in a global public good game, i.e. abatement by a single country (say country 1) above the equilibrium level, under the assumption that all countries decide simultaneously and that all other countries behave fully rationally and selfishly. It turns out that unilateral emissions abatement leads to lower abatement by the other countries, but to greater total abatement. Total welfare increases if country 1 has the lowest marginal benefit from abatement.3 However, the assumption of selfish behavior by all countries except one rules out that "the good example" given by country 1 can infect other countries.4 Thus it remains an open question whether "leadership matters". In this chapter, we try to answer this question by testing the Hoel model experimentally.

The issue of whether abatement on the part of the leader has an influence on the behavior of the followers is analyzed in an experiment by van der Heijden and Moxnes (2000). However, there are some important differences between our design and theirs. In their experiment, subjects played a standard public bad game in which both the Pareto efficient outcome and the equilibrium solutions for the simultaneous and the mixed sequential-simultaneous decision protocols were boundary solutions.5 Therefore, they did not really test the Hoel model, which has interior solutions and allows deviations from the equilibrium in two directions. In van der Heijden and Moxnes, the leader had a significant influence on the other subjects (followers); however, this influence was not strong enough to make leadership profitable for the leader. Gächter and Renner (2003) confirm that, despite the significant correlation between the leader's and the followers' contributions, leadership does not pay for the leader. In their experiment, the presence of a leader in a repeated standard public good game did not enhance the overall level of cooperation relative to a situation without a leader. In a similar experiment by Güth et al. (2007), leadership had a positive but only weakly significant effect on the aggregated contributions to the public good. However, as before, leadership does not pay for the leader.

In order to test the Hoel model experimentally, two methodological questions have to be answered. First, one has to decide how the experiment should be framed. Either subjects are confronted with the game as it is, without any reference to environmental problems, or they are told a cover story that fits the story the Hoel model tells. We decided on the latter because we wanted to test whether the unilateral abatement of harmful material induces further activities; a fair test of the model should take into account that it is designed to deal with environmental problems. Second, one has to decide how to inform subjects. Once again, we decided to design the experiment analogously to the model. This means that we gave subjects all the information the agents in the model are assumed to have. Game theoretic models implicitly make very strong assumptions about what people know. Normally, not only the rules of the game but also the equilibrium solution and the consequences of all kinds of deviations from rational
behavior are assumed to be common knowledge. To test the model fairly, we had to ensure that these knowledge preconditions were fulfilled. Because the game has complicated interior equilibrium solutions, we could not assume that subjects would be able to compute the game theoretic prediction and the Pareto efficient solution in the laboratory during the experiment. We therefore decided to teach the subjects before the experiment and to demonstrate the game theoretic prediction as well as the Pareto solution. One may argue that this can influence subjects. But if this is the case, agents in the Hoel model should be influenced in the same way, because they possess the same information we gave our subjects. Not informing subjects fully would mean not testing the model but testing the ability of students to find rational strategies in complicated games. We would like to emphasize that this does not make the experiment a "recommended play" game in the sense of Brandts and MacLeod (1995): in their experiment, the recommendation of a particular strategy combination served as an equilibrium selection device. In our experiment, the Nash equilibrium is unique. Our aim was not to recommend one particular equilibrium among several existing equilibria but to make sure that the subjects fully understood that they were in a dilemma situation in which individual rationality leads to inefficient solutions.

The chapter is organized as follows. First, the model is outlined and the parameters and functions used in the experiment are introduced.6 The experimental design is then described. After this, the results are presented and discussed. Finally, conclusions are offered.
The laboratory version of the Hoel model

In the N-country version of the Hoel model, each country emits a global pollutant. Every country i, i = 1, . . . , N, chooses abatement $X_i$ and possesses benefit and cost functions $B_i(X)$ and $C_i(X_i)$, respectively, with $B_i' > 0$, $B_i'' < 0$, $C_i' > 0$, and $C_i'' > 0$. The sum of individual abatements is $X = \sum_{i=1}^{N} X_i$.

In the Nash equilibrium (NE) of the game with a simultaneous decision protocol, country i abates to the amount at which private marginal benefit equals marginal cost, i.e.

$$B_i'(X) = C_i'(X_i) \qquad (8.1)$$

In the Pareto optimum (PO), aggregated abatement is chosen such that social marginal benefit equals marginal cost, i.e.

$$B'(X) = C_i'(X_i) \quad \text{with} \quad B'(X) = \sum_{i=1}^{N} B_i'(X) \qquad (8.2)$$
Because social marginal benefit is above private marginal benefit, the NE abatement is smaller than the PO abatement, i.e. $X^{PO} > X^{NE}$.

In order to model unilateral abatement by country 1, Hoel assumes that this country derives some kind of extra marginal benefit from total abatement, measured by a parameter h. It is not important where this benefit comes from: it may be the case that the inhabitants of this country have altruistic preferences or simply like to see a "cleaner" world. For country 1, the total benefit from abatement then is

$$B_1(X) + hX \quad \text{with} \quad h > 0 \qquad (8.3)$$

Given (8.3), in the new NE for the simultaneous decision protocol it holds that

$$B_1'(X) + h = C_1'(X_1) \quad \text{and} \quad B_j'(X) = C_j'(X_j) \quad \text{with} \quad j = 2, \ldots, N \qquad (8.4)$$
The standard case (equation 8.1) is contained in this formulation with h = 0. Hoel shows that in the equilibrium described by (8.4) country 1 abates more and countries 2, . . . , N abate less than in the equilibrium given by (8.1), and that total abatement is greater in the case of unilateral abatement. He also shows that, under unilateral abatement, $B_j'(X) > B_1'(X)$ for all j = 2, . . . , N (i.e. country 1 has the lowest marginal benefit from abatement) is a sufficient condition for an increase in total welfare.

We are seeking an answer to the question of whether "leadership", i.e. the pure timing of action, matters. Therefore, it is necessary for us to analyze the implications of a change in the timing of action from a simultaneous decision protocol to a mixed sequential-simultaneous (in the following "sequential") decision protocol: country 1 decides first, the other countries are informed of this decision, and then decide simultaneously on their emission reduction. Although the cost and benefit functions remain unchanged, the shift from the simultaneous to the sequential decision protocol alters the game theoretic prediction of individual behavior. Country 1, the leader, now has a strategic advantage because it may choose the point on the best response function of the followers that maximizes its own profit. In the subgame perfect equilibrium (SPE) of the sequential game, country 1 abates less and country j abates more than in the NE of the simultaneous game.7 Aggregated abatement in the SPE is lower than in the NE, i.e. leadership generates a negative environmental effect.

In order to test the model experimentally, the cost and benefit functions have to be specified. Table 8.1 shows the specification of the model; in the appendix to this chapter we derive the corresponding NE, SPE, and PO solutions.

Table 8.1 Cost and benefit functions

Cost function of country i, i = 1, . . . , N        Ci(Xi) = cXi² + T
Benefit function of country 1                       B1(X) = b(AX − 0.5X²) + hX
Benefit function of country j, j = 2, . . . , N     Bj(X) = AX − 0.5X²

To test the Hoel model and the influence of leadership, we use a 2 × 2 factorial design varying the sequence of moves and the parameter h. In the simultaneous treatments (sim), all countries decide simultaneously on their abatement. In the sequential treatments (seq), country 1 decides first, the other countries are informed of this decision, and then simultaneously decide on their emission reduction. The variation of h in the simultaneous case is a direct test of the model. Table 8.2 summarizes the treatments.

Table 8.2 Experimental treatments

                    Sequence
Parameter h         sim                     seq
h = 0               T I (h = 0, sim)        T III (h = 0, seq)
h > 0               T II (h > 0, sim)       T IV (h > 0, seq)

To implement the four treatments, we had to specify the parameters of the functions in Table 8.1 in a way that ensures that the payoff functions are sufficiently steep. Table 8.3 summarizes the parameter values for T I to T IV and the abatements and payoffs subjects realize in the NE, SPE, and PO. The parameters were chosen so that the NE has a solution in integers; numbers had to be rounded for the SPE and PO solutions.8

The efficiency loss in the NE and the SPE is considerable in all treatments. For h = 0, the profit of country 1 in the PO is 49 percent (44 percent) above its profit in the NE (SPE), and the profit of country j in the PO is 57 percent (68 percent) above its profit in the NE (SPE). For h > 0, the profit of country 1 in the PO is 73 percent (68 percent) above its profit in the NE (SPE), and the profit of country j in the PO is 50 percent (62 percent) above its profit in the NE (SPE). Free-rider incentives are also considerable. If all the other countries choose the PO abatement in T I, country 1 (j) can increase its profit by 72 percent (16 percent) relative to the PO profit provided it chooses its best response. In the same situation in T II, country 1 (j) can increase its profit by 24 percent (16 percent). If all other countries choose their PO abatement, country j can increase its profit in T III (T IV) by 16 percent (19 percent) relative to the PO profit provided it chooses its best response.9

Given this version of the Hoel model, we can now formulate the two central hypotheses that follow from standard game theory and that will be checked experimentally.
Table 8.3 Summary of parameters, abatements, profits, and payments

T I and T III: b = 30/47, h = 0, A = 500, c = 3, T = 40,000, N = 5

                 Nash equilibrium (sim)        Subgame perfect equilibrium (seq)   Pareto optimum
Country          1        2,...,5   All        1        2,...,5   All              1        2,...,5   All
Abatement        30       47        218        18.44    48.16     211.07           79.45    79.45     397.23
Profits (LD)     11,707   38,611    166,151    12,123   36,212    156,971          17,488   60,782    260,606
Payments (EUR)   15.00    21.70                15.10    21.10                      16.40    27.20

T II and T IV: b = 30/47, h = 3000/47, A = 500, c = 3, T = 40,000, N = 5

                 Nash equilibrium (sim)        Subgame perfect equilibrium (seq)   Pareto optimum
Country          1        2,...,5   All        1        2,...,5   All              1        2,...,5   All
Abatement        40       46        224        24.59    47.54     214.75           81.63    81.63     408.16
Profits (LD)     24,974   40,564    187,230    25,713   37,619    176,188          43,200   60,788    286,312
Payments (EUR)   18.30    22.20                18.50    21.50                      22.80    27.20

Notes: 40,000 lab-dollars (LD) = 1 EUR; payments include the 12 EUR show-up fee. "Sim" refers to the simultaneous decision protocol and "seq" to the mixed sequential-simultaneous decision protocol.
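The equilibrium benchmarks in Table 8.3 can be reproduced directly from the closed-form solutions derived in the appendix to this chapter. A minimal sketch in Python (exact rational arithmetic via fractions; variable names are ours):

```python
from fractions import Fraction as F

A, c, N = 500, 3, 5
b = F(30, 47)

def benchmarks(h):
    D = N - 1 + 2 * c                        # = 10 with these parameters
    den_ne = 2*c*b + 2*(N - 1)*c + 4*c**2    # NE denominator
    den_spe = 2*c*b + D**2                   # SPE denominator
    x1_ne  = (2*c*b*A + h*D) / den_ne
    xj_ne  = (2*c*A - h) / den_ne
    x1_spe = (2*c*b*A + h*D) / den_spe
    xj_spe = (A*D - h) / den_spe
    xi_po  = (A*(b + N - 1) + h) / (N * (b + F(2*c, N) + N - 1))
    return [float(v) for v in (x1_ne, xj_ne, x1_spe, xj_spe, xi_po)]

print(benchmarks(F(0)))         # [30.0, 47.0, 18.44..., 48.15..., 79.45...]
print(benchmarks(F(3000, 47)))  # [40.0, 46.0, 24.59..., 47.54..., 81.63...]
```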
Hypothesis 1

The parameter h has a significant influence on the abatement and profit of country i, i = 1, . . . , 5, both in the simultaneous and the sequential treatments. If we define $X_i^T$ as the abatement of country i in treatment $T \in \{h = 0, h > 0\}$ for a given sequence of moves, we can formulate hypothesis 1 for country 1 and country j, j = 2, . . . , 5, as follows:

$$X_1^{h=0} < X_1^{h>0}, \quad X_j^{h=0} > X_j^{h>0}, \quad \text{and} \quad \sum_{i=1}^{5} X_i^{h=0} < \sum_{i=1}^{5} X_i^{h>0}.^{10}$$

If we use the same notation for profit $\pi_i^T$, we get:

$$\pi_i^{h>0} > \pi_i^{h=0}.^{11}$$

Hypothesis 2

The variable sequence has an influence on abatement both at the individual and the aggregate level. If we define $X_i^T$ as the abatement of country i, i = 1, . . . , 5, in treatment $T \in \{sim, seq\}$ for a given choice of the parameter h, we can formulate hypothesis 2 for country 1 and country j, j = 2, . . . , 5, as follows:

$$X_1^{sim} > X_1^{seq}, \quad X_j^{sim} < X_j^{seq}, \quad \text{and} \quad \sum_{i=1}^{5} X_i^{sim} > \sum_{i=1}^{5} X_i^{seq}.$$

If we use the same notation for profit $\pi_i^T$, we have to distinguish between country 1 and country j. We get:

$$\pi_1^{sim} < \pi_1^{seq}, \quad \pi_j^{sim} > \pi_j^{seq}, \quad \text{and} \quad \sum_{i=1}^{5} \pi_i^{sim} > \sum_{i=1}^{5} \pi_i^{seq}.^{12}$$
The most interesting point of hypothesis 2 is that the game theoretical prediction, that leadership results in less abatement for country 1 and a world with less aggregated abatement, is in direct contrast to the above-mentioned idea that leadership may be a solution to the social dilemma situation countries are confronted with.
Experimental design

Each of the four treatments was originally played by six groups of five subjects. After a preliminary analysis of the data, we decided to collect 12 additional independent observations in the sequential treatments, i.e. we have six independent observations for the simultaneous treatments T I and T II and 18 independent observations for the sequential treatments T III and T IV. All in all, 240 subjects participated in the experiment. There were 16 sessions, with three groups (15 subjects) playing the game in parallel; each session lasted about 1 hour. The sessions were conducted between December 2003 and May 2005 at the Magdeburg Experimental Laboratory (MaXLab). All subjects were undergraduate
economics students familiar with fundamental game theoretic concepts, i.e. the idea of best response functions, the Nash equilibrium, and its application to finitely repeated games. Each of the 48 groups played the game ten times, and subjects were informed about the number of repetitions. The experiment was fully computerized and anonymous.13 The subjects were seated in soundproof booths and had no contact before, during, or after the experiment.

The information given to the subjects was organized as follows. During the experiment, subjects were informed about their individual and aggregated abatement, the aggregated abatement of all the other countries, the individual profits of all countries, and the aggregated profit for all expired periods. The subjects received written instructions about the rules of the game, their role (country 1 or country 2, . . . , 5), the parameters, and the functional forms. Furthermore, their computers were equipped with a payoff simulator that had two elements. The first was the profit maximizing function.14 In the simultaneous treatments, each country could enter the expected abatement of the other countries, and its profit maximizing response was then computed. In the sequential treatments, country j used the same program as in the simultaneous treatments, but country 1, the leader, could enter its own abatement, and the profit maximizing response of the other countries was then computed. Additionally, the expected profit and total abatement were computed in both cases. The second element was the simulator, which subjects could use to evaluate the consequences of non-profit-maximizing actions: here the subjects could enter the expected abatement of the others and their own arbitrary abatement.15 The payoff simulator was identical for all subjects in a treatment and was visible on both the input and the output screen. Before the experiment started, questions were answered and all subjects played two test rounds against the computer. The subjects knew that in the test periods they were playing against four automated systems whose behavior would not change.

As already mentioned in the introduction, we decided to use the frame also employed in the Hoel model and to inform subjects comprehensively about the decision situation. For this purpose, we invited the subjects to attend a separate lesson held either earlier on the day of the experiment or on the day before. We conducted 15 lessons with 12 or 24 subjects, with anonymity within the groups being guaranteed by the procedure. At the beginning of the lessons, subjects were told to imagine that they were the head of a delegation from their country at an international conference on the abatement of a global pollutant.16 Given all the necessary information for their country (costs and benefits), they had to decide on the level of domestic abatement. Then the most important features of the decision situation were explained. First, we demonstrated the Nash equilibrium of the simultaneous game and the subgame perfect equilibrium of the sequential game graphically by means of best response functions. We showed that everybody was better off in the Pareto efficient solution but that there were strong incentives to deviate from it. Second, the idea of the underlying social dilemma was illustrated by an example with three countries. The abatement decisions in the equilibrium and the Pareto efficient solutions as well as the corresponding
payoff implications were depicted. At the end of the lesson, the input screen, the output screen, and the payoff simulator were shown and explained using the above-mentioned example. The lesson lasted about 1 hour. Subjects were informed that in the experiment countries would have different roles (country 1 and the other countries); however, the information on which role each subject would play, and on the timing of action, was only given in the experiment itself. At the beginning of the experiment, subjects received a show-up fee of 12 EUR and were told that possible negative payoffs had to be settled from this fee.
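For concreteness, the "profit maximizing response" reported by the simulator for a country j can be written down directly from the model. The following one-liner is our reconstruction under the T I to T IV parameters, not the laboratory software itself:

```python
def best_response_j(x_others, A=500.0, c=3.0):
    # Country j's profit-maximizing abatement, given the aggregated abatement
    # of the other four countries (interior first-order condition A - X = 2c*Xj;
    # negative solutions are truncated at zero).
    return max((A - x_others) / (1.0 + 2.0 * c), 0.0)

print(best_response_j(171.0))  # 47.0: country j's NE abatement in T I
```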
Results

Abatement

The payoffs per subject range from 14.40 EUR to 27.30 EUR including the show-up fee; the average payoff over all subjects is 21.70 EUR. Figure 8.1 displays the average abatement over all ten rounds for country 1, the other countries j, and all countries, with the PO, NE, and SPE values marked. We summarize our findings with respect to mean abatement (for country 1, country j, and in total) as follows.

Observation 1:
a The mean abatement of country 1 is, ceteris paribus, higher in the h > 0 treatments than in the h = 0 treatments. Abatement is higher in the sequential than in the simultaneous treatments. However, none of these differences between treatments is significant (exact two-sided M-W U-test, 5 percent level).17
b There is no significant difference between the treatments for the mean abatement of country j (M-W U-test, 5 percent level).
c The difference between the mean abatement and the SPE value in the sequential treatments is highly significant for country 1 (Wilcoxon signed-rank test, 1 percent level). However, there is no significant difference for country j (5 percent level).
d Total abatement is higher in the sequential treatments than in the simultaneous treatments, but the difference is not significant. However, the difference between total abatement and the SPE value in the sequential treatments is significant (Wilcoxon signed-rank test, 5 percent level).

The most striking observation is that country 1 does not reduce its abatement in the change from a simultaneous to a sequential decision protocol, as theory predicts, but raises it. Due to the high variance of individual behavior, the difference between treatments is not significant. However, we must still reject the hypothesis that country 1 abates at the level of the SPE values in the sequential treatments.
[Figure 8.1 Abatement per period. Mean abatement per period by treatment, with the NE, SPE, and PO benchmarks marked. Country 1: 40.3 (T I), 45.1 (T II), 47.4 (T III), 53.8 (T IV); country j: 51.7 (T I), 51.6 (T II), 51.0 (T III), 52.2 (T IV).]
In other words, the theoretical prediction for country 1 is supported by our data in the simultaneous treatments, but this is not the case in the sequential treatments. On the other hand, the behavior of country j does not vary much between the treatments, and the corresponding theoretical prediction is supported for all treatments.
[Figure 8.1 continued. Total abatement per period by treatment: 247.1 (T I), 251.5 (T II), 251.2 (T III), 262.4 (T IV), with the NE, SPE, and PO benchmarks marked.]
Although we have only a relatively small number of independent observations per cell of Table 8.2, it is interesting to look at the correlation of the mean abatement of countries 1 and j. Whereas the Spearman rank correlation yields rS = 0.200 (p = 0.704) and rS = 0.429 (p = 0.397) for the simultaneous treatments T I and T II, respectively, values of rS = 0.781 (p = 0.000) and rS = 0.546 (p = 0.029) are obtained for the sequential treatments T III and T IV, respectively; i.e. there is a strong positive correlation of the leader's abatement with the followers' abatements in both sequential treatments.
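A minimal sketch of how such rank correlations and per-treatment fits can be computed, using made-up group means in place of the experimental data:

```python
import numpy as np
from scipy import stats

# Hypothetical group means: leader abatement a1 and mean follower abatement aj
# for the 18 independent groups of one sequential treatment (illustration only).
rng = np.random.default_rng(0)
a1 = rng.uniform(20, 80, size=18)
aj = 25.0 + 0.5 * a1 + rng.normal(0, 5, size=18)

rho, p = stats.spearmanr(a1, aj)                  # rank correlation
slope, intercept, r, p_slope, se = stats.linregress(a1, aj)  # OLS: aj = b0 + b1*a1

print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
print(f"aj = {intercept:.1f} + {slope:.3f} * a1, R^2 = {r**2:.3f}")
```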
[Figure 8.2 Scatterplots for the seq-treatments: mean abatement of country j (Aj) against mean abatement of country 1 (A1) for the independent observations in T III and T IV, with the SPE and PO marked and fitted lines Aj = 25.7 + 0.575 A1 (adj. R² = 0.687, T III) and Aj = 28.2 + 0.446 A1 (adj. R² = 0.374, T IV).]
Table 8.4 Regression analysis (model: aj = β0 + β1 a1)

Treatment            b0      p(b0)    b1      p(b1)    adj. R²
T I (h = 0, sim)     51.2    0.001    0.010   0.928    0.002
T II (h > 0, sim)    24.9    0.168    0.552   0.147    0.446
T III (h = 0, seq)   25.7    0.000    0.575   0.000    0.687
T IV (h > 0, seq)    28.2    0.002    0.446   0.004    0.374

Note: aj (a1) is the mean abatement of country j (country 1) over rounds; N = 6 (sim) / 18 (seq) independent observations.
The simple linear regression in Table 8.4 supports this result and yields a significantly positive coefficient for the sequential treatments. The scatterplots for the sequential treatments in Figure 8.2 make this conclusion more clearly visible; the SPE and the PO abatements are indicated to provide a better orientation, and the lines of best fit are obtained from the regression analysis in Table 8.4.

Observation 2:
a There is a significant and positive correlation between the abatement of the leader and the abatement of the followers.
b An increase in the leader's abatement by one unit increases the total abatement of the countries j by about two units.

This observation is therefore an indication that (at least some) followers follow the leader's example and that the abatements of followers and leaders are positively correlated. Figure 8.3 shows the abatements in the four treatments round by round. The round-by-round analysis of the abatements confirms the findings of observation 1.

Observation 3:
a The hypothesis that the abatement of country 1 is equal to the NE/SPE values must be rejected in each of the 20 rounds of the sequential treatments. The hypothesis cannot be rejected for any round of the simultaneous treatments (Wilcoxon signed-rank test, 5 percent level).
b The same hypothesis must be rejected in only 12 of the 40 rounds of all treatments for country j; nine of these cases appear in the first four rounds. Remarkably, country j's abatement in the last round of both sequential treatments is significantly below the SPE value (Wilcoxon signed-rank test, 5 percent level).
c The hypothesis that the abatement of country j is equal to the PO level must be rejected in all 40 rounds of the treatments. The same hypothesis must be rejected for country 1 in 17 of the 20 rounds of the simultaneous treatments and in 17 of the 20 rounds of the sequential treatments (Wilcoxon signed-rank test, 5 percent level).

The downward pattern of abatement for both countries in Figure 8.3 suggests that subjects presumably learn the equilibrium strategy, despite the fact that they know the equilibrium solution before the experiment starts. We therefore split the mean abatement of both countries into two subsamples, one containing the first five periods and one containing the last five periods. Observation 4 summarizes our findings regarding the two subsamples.

Observation 4:
a The average abatement over the last five periods is, for both countries in all treatments, lower than the average abatement over the first five periods. This difference is significant except for country 1 in the simultaneous treatments (Wilcoxon matched-pairs signed-rank test, 5 percent level).
b The average abatement of country 1 over the last five periods is significantly above the SPE abatement in the sequential treatments (Wilcoxon signed-rank test, 1 percent level). There are no other significant differences between the abatement subsamples and the NE or SPE values.

Based on this observation, we may conclude that there is some learning of equilibrium behavior. Nevertheless, the surprising contradiction between the theoretical prediction regarding the abatement of country 1 and the corresponding behavior of this country as a leader persists over the course of the experiment.
Convergence and individual behavior

The overall impression from Figure 8.3 is that subjects show a clear tendency towards the equilibrium strategy during the course of the game and that this tendency is more pronounced in the simultaneous treatments than in the sequential treatments. In the sequential treatments, the downward trend to the SPE is interrupted by phases of constant, and even increasing, abatement, especially for country 1. In order to gain more insight into the way subjects adjust their behavior, we use the coefficient
$$\alpha_t = \frac{1}{N}\sum_{i=1}^{N}\frac{\left|X_{t,i} - X_{t,i}^{Eq}\right|}{X_{t,i}^{PO} - X_{t,i}^{Eq}}$$

as a simple measure of the deviation of individual behavior from individual rationality. Here $X_{t,i}$ is the individual abatement of country i in period t, $X_{t,i}^{Eq}$ is i's individual equilibrium (NE or SPE) abatement in period t, and $X_{t,i}^{PO}$ is i's individual PO abatement in period t. The coefficient $\alpha_t$ measures the mean absolute deviation of individual abatement from equilibrium abatement as a fraction of the difference between the equilibrium and the PO abatement; i.e. $\alpha_t$ summarizes the information about individual abatement behavior in Figure 8.3. We observe $\alpha_t = 0$ if all countries of a given type play their equilibrium abatement and $\alpha_t = 1$ if they all play their PO abatement.
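To make the definition concrete, here is a small sketch of how $\alpha_t$ could be computed for one period, using the SPE and PO benchmarks from Table 8.3 and hypothetical abatement choices:

```python
import numpy as np

def alpha(x, x_eq, x_po):
    # Mean absolute deviation from equilibrium abatement, expressed as a
    # fraction of the distance between equilibrium and PO abatement.
    return np.mean(np.abs(x - x_eq) / (x_po - x_eq))

# One hypothetical period of a sequential h = 0 group (country 1 first,
# then the four countries j), with SPE and PO values from Table 8.3:
x    = np.array([30.0, 45.0, 50.0, 47.0, 52.0])  # observed abatements
x_eq = np.array([18.44] + [48.16] * 4)           # SPE abatements
x_po = np.array([79.45] * 5)                     # PO abatements
print(round(alpha(x, x_eq, x_po), 3))            # alpha_t for this period
```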
[Figure 8.3 Abatement over periods: round-by-round mean abatement of country 1 and country j, with the NE and PO benchmarks; panels for T I (h = 0, sim) and T II (h > 0, sim).]
As expected, the $\alpha_t$ values in Figure 8.4 decrease, although not monotonically.
[Figure 8.3 continued: panels for T III (h = 0, seq) and T IV (h > 0, seq).]
At first glance, we observe that the $\alpha_t$ values are higher in the sequential treatments than in the simultaneous treatments for both countries over almost all rounds.18 The similarity between the $\alpha_t$ values for the simultaneous treatments, on the one hand, and for the sequential treatments, on the other, is striking. Furthermore, the decrease in the $\alpha_t$ values seems to be sharper for country 1 than for country j.
[Figure 8.4 Alpha: $\alpha_t$ per period for country 1 and country j in all four treatments.]

[Figure 8.5 Individual behavior of country 1 and j: per-period fractions of the three behavioral patterns, in four panels (country 1 sim, country 1 seq, country j sim, country j seq).]
Notes: The three patterns are (i) cooperative behavior, i.e. abatement between max(bR, Eq) and PO; (ii) abatement between min(bR, Eq) and max(bR, Eq), with "bR" as the individual best response to the aggregated abatement of the others in the current period and "Eq" as the individual abatement in NE or SPE (each with +/– 20 percent); and (iii) abatement below min(bR, Eq).
Because the NE and SPE are not at a boundary, $\alpha_t$ measures departures from equilibrium behavior in both directions. Abatements below the NE or SPE level may therefore serve as a means of punishing subjects who behave selfishly. In order to analyze the structure of the deviations from NE and SPE behavior, we classify the individual decisions of countries 1 and j into three groups: cooperative behavior, best response or NE/SPE behavior, and abatements below the best response or NE/SPE. The corresponding intervals are described in the notes to Figure 8.5, which shows the fractions of the three behavioral patterns in the simultaneous and sequential treatments. Although this classification serves only an illustrative purpose, several behavioral patterns are striking.19

Regarding country 1, Figure 8.5 confirms the observation that cooperative behavior is more frequent and rather stable in the case of leadership, i.e. leadership matters from the viewpoint of country 1. However, in the last rounds, abatement above the individually rational level becomes less frequent even in the sequential treatments. This indicates that the efforts to induce cooperation through leadership are, on average, not very successful. On the other hand, there is virtually no abatement below the individually rational level for country 1.

Although the behavior of country 1 differs considerably between the simultaneous and sequential treatments, the behavior of country j seems to be rather stable for both sequences of moves. Individually rational behavior prevails and becomes more frequent in the course of the experiment. On average, one-third of the decisions can be classified as cooperative behavior; however, this kind of behavior clearly becomes less important over the periods. Remarkably, a significant fraction of the abatement decisions of country j lies below the best response level. A reason for this behavior may be punishment or negative reciprocity on the part of the subjects in the role of country j. The proportion of this kind of deviation from individually rational behavior seems to be higher in the sequential treatments than in the simultaneous case. This may be a reaction of some subjects to the frustrating experience, made in the early rounds, that the high abatement of country 1 does not motivate the other countries, i.e. the countries j, to follow suit.
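In code, such a classification might look as follows. This is our reading of the interval definitions in the notes to Figure 8.5 (in particular the multiplicative +/– 20 percent bands), and the authors' exact boundary handling may differ:

```python
def classify(x, br, eq, tol=0.20):
    # Three-way classification of one abatement decision x.
    # br: best response to the others' aggregated abatement in this period
    # eq: individual NE (sim) or SPE (seq) abatement
    lo = min(br, eq) * (1 - tol)
    hi = max(br, eq) * (1 + tol)
    if x < lo:
        return "below best response"          # possible punishment behavior
    if x <= hi:
        return "best response / equilibrium"
    return "cooperative"                      # towards the PO abatement

# A follower in T III whose four co-players abate 171 in total:
print(classify(x=60.0, br=(500 - 171) / 7, eq=48.16))  # -> cooperative
```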
Profits and efficiency

Figure 8.6 shows the average profits earned by the subjects. These profits are not identical to the welfare measure used in the Hoel model because there the "extra profit" country 1 derives from abatements above the Nash level (measured by h) is not part of the welfare of country 1. The positive aggregated welfare effect of country 1's "over-abating" arises because the welfare loss suffered by this country is overcompensated for by the welfare gains of the other countries. We summarize our findings concerning the profits below.

Observation 5:
a Country 1 earns a significantly higher profit in the h > 0 treatments than in the h = 0 treatments (M-W U-test, 1 percent level). The profit in T III is significantly below the profit in T I, but there is no significant difference between the profits in T IV and T II (M-W U-test, 5 percent level). The hypothesis that the profit of country 1 is equal to the NE or SPE value can only be rejected for treatment T III, where the profit is below the SPE prediction (Wilcoxon signed-rank test, 5 percent level).
b Country j is able to increase its profits above the NE and SPE values in all treatments. The difference is significant in T I and in both sequential treatments (Wilcoxon signed-rank test, 5 percent level).
c Total profit is slightly above the NE and SPE values in all treatments. The difference is significant in T I and T IV (Wilcoxon signed-rank test, 5 percent level).20
[Figure 8.6 Profit per period. Mean profit per period in LD, with the NE, SPE, and PO benchmarks marked. Country 1: 13,116 (T I), 27,926 (T II), 9,359 (T III), 26,433 (T IV); country j: 43,557 (T I), 44,336 (T II), 41,610 (T III), 44,833 (T IV); total: 187,346 (T I), 205,269 (T II), 175,797 (T III), 205,764 (T IV).]
The picture we get based on this observation is quite clear. Although the abatement behavior of country 1 with leadership differs considerably from that without leadership, it turns out that country 1 is unable to increase its profits by using leadership. On the other hand, the followers are able to increase their profits above the equilibrium prediction.

Due to the asymmetry of profits in the equilibrium and the PO solution for the two types of countries, it is useful to normalize the realized profits in all treatments. We use the efficiency index Effi to facilitate the comparability of results. The index Effi measures the difference between the realized profit ($\pi_i^{real}$) and the equilibrium profit ($\pi_i^{Eq}$) as a fraction of the difference between the maximal possible profit in the PO solution ($\pi_i^{PO}$) and the equilibrium profit for a country i, i.e.
$$Eff_i = \frac{\pi_i^{real} - \pi_i^{Eq}}{\pi_i^{PO} - \pi_i^{Eq}}.$$
If a country i realizes its equilibrium profit, the efficiency is zero; if it achieves the socially optimal profit, the efficiency is equal to one. The efficiency index therefore permits a comparison of the ability of the two country types to realize
profits above the equilibrium. Table 8.5 depicts the results for country 1, country j, and all countries.

Table 8.5 Efficiency index

Treatment    T I (h = 0, sim)   T II (h > 0, sim)   T III (h = 0, seq)   T IV (h > 0, seq)
Country 1    0.241              0.162               –0.515               0.041
Country j    0.223              0.187               0.220                0.311
Total        0.224              0.182               0.182                0.269

We summarize our findings concerning the efficiency index below.

Observation 6:
a The overall efficiency is quite low and is not significantly enhanced in the game with leadership (M-W U-test, 5 percent level).
b Country 1 realizes a lower efficiency in both sequential treatments than in the simultaneous treatments. However, only the difference between the efficiency of T I and T III is significant (M-W U-test, 5 percent level).
c There is no significant difference between the efficiency values of the simultaneous and sequential treatments for country j.
d In the simultaneous treatments, the two countries do not differ in their ability to realize profits above the equilibrium. However, countries j achieve efficiency values that are significantly higher than those of country 1 in the sequential treatments (Wilcoxon signed-rank test, 1 percent level).

The interpretation of observation 6d is quite clear. In terms of efficiency, country 1 performs worse relative to country j when it becomes a leader, i.e. the ratio of realized supra-equilibrium profits to the maximal possible profits above the equilibrium is higher for the followers than for the leader. Neither the leader nor the followers are able to increase efficiency in a game with leadership.21
Discussion

Our data support the Hoel model for the treatments with a simultaneous decision protocol, at least for the second half of the ten rounds. It therefore seems fair to state that the Hoel model describes actual behavior surprisingly well in an environment where subjects act simultaneously, i.e. we cannot reject hypothesis 1. On the other hand, our findings for the simultaneous treatments are in line with the stylized facts of many public good experiments: abatement starts between the NE level and the PO level, decays during the course of the game, and then displays a final-round effect. Brosig et al. (2003) have shown that, in standard public good games, subjects try to coordinate their behavior in order to realize the efficient outcome, but that this coordination is only successful if all subjects stick to their promise to cooperate. Normally, this is not the case, and cooperation breaks down after a few periods. This line of reasoning seems consistent with our observations.

The most important question we sought to answer with this experiment was whether or not "leadership matters". Having a leader may open a way to solving the coordination problem just mentioned: if the leader starts each round with the PO abatement, this could serve as a kind of focal point for the followers. The observations from the abatement section show that, on average, the leaders take a chance and go ahead with a "good example".
[Figure 8.7 Classification of the 36 groups in the sequential treatments: successful leadership (5 groups), leadership with defection (28 groups), no leadership (3 groups).]
Notes: Successful leadership: country 1 chooses the PO abatement (+/– 20 percent) and all countries follow with the PO abatement (+/– 20 percent) in at least eight of the ten periods. Leadership with defection: country 1 chooses the PO abatement (+/– 20 percent) in one or more periods and at least one country j does not follow with the PO abatement (+/– 20 percent) in each of these periods. No leadership: country 1 does not choose the PO abatement (+/– 20 percent) in any period.
Country 1's abatement is significantly above the SPE abatement in the sequential treatments. However, followers, on average, react only a little to these efforts by the leader; their reaction is not sufficient to boost the profit of the leader above the equilibrium value. In this context, the identification of different behavioral types, an important result of experimental economics, is of particular relevance: average data may not be an appropriate means to analyze individual behavior.22 A simple analysis of the group-specific data (see Figure 8.7) shows three interesting behavioral patterns.

In contradiction to the game theoretic prediction, leaders try to lead and set a "good example": in 33 of 36 groups, country 1's abatement is near the PO abatement at least once over the periods.23 The reaction of the followers is quite mixed. Some groups (five of 33) succeed in abating at the PO level in at least eight of the ten rounds; one of these groups even plays the PO abatement in all rounds, including the last round.24 However, in most cases (28 of 33) the leader fails to induce cooperation, i.e. in these groups there are few followers who react cooperatively to the leader's signal. High efforts by the leader and the cooperative followers are exploited by the majority of defecting followers. This behavioral pattern explains why the mean profit of country 1 does not exceed the SPE values in the sequential treatments although the mean abatement of country 1 is significantly higher than the SPE values. Moreover, it explains why the mean profit of country j exceeds the SPE values in the sequential
treatments. The change in the efficiency values of countries 1 and j from the simultaneous to the sequential game serves as another illustration of our main finding: by assuming leadership, country 1 loses ground relative to country j. Furthermore, these observations are in line with the related experimental research on leadership discussed in the introduction. Based on these observations, we must reject hypothesis 2; beyond that, our data do not support the idea that leadership is an effective means of creating stable cooperation.
Conclusion

The primary objective of our experiment was to test the Hoel model and to analyze the influence of leadership. Since the external validity of results gained in singular laboratory experiments is restricted to the specific laboratory environment, we have to admit that we are not able to make any recommendations for environmental policy purposes.25 However, based on our results we may conclude, first, that the Hoel model describes individual behavior surprisingly well in an environment with a simultaneous decision protocol and, second, that leadership matters a great deal but is not able to increase the profit of the leader or to overcome the social dilemma situation all countries are confronted with. Only the followers who free ride at the expense of the leader, and the cooperative followers in cooperative groups, can increase their profits.

In particular, the experiments show that countries that want to increase their own profit and the total profit of the group by showing leadership should not put too much hope in the effectiveness of their good example. Even if some follow this example, the probability is high that other followers free ride and that cooperation breaks down very soon. All in all, leadership by itself does not seem to be an appropriate tool for overcoming social dilemma problems, even if leaders have a strong incentive to induce cooperative behavior.
Acknowledgments

Support by the Deutsche Forschungsgemeinschaft (DFG) and the CESifo, Munich, is gratefully acknowledged. The authors thank Jeannette Brosig, Hartmut Kliemt, Manfred Königstein, Thomas Riechmann, Martin Weber, and an anonymous referee for their helpful comments.
Appendix

In this appendix we derive the solutions for the Nash equilibrium of the simultaneous decision protocol, the subgame perfect equilibrium of the mixed sequential-simultaneous decision protocol, and the Pareto optimal solution, based on the specification used in the experiment.
Nash equilibrium (NE) for the simultaneous decision protocol

Country 1 maximizes the difference between the benefit and the cost of abatement, given the abatement of all other countries, i.e.

$$\max_{X_1} \pi_1 = b\left[A(X_1 + X_{-1}) - 0.5(X_1 + X_{-1})^2\right] + h(X_1 + X_{-1}) - \left(cX_1^2 + T\right) \quad \text{with} \quad X_{-1} = \sum_{j=2}^{N} X_j.$$

The reaction function of country 1 is

$$R_1(X_{-1}) = \frac{1}{2c + b}\left(bA + h - bX_{-1}\right).$$

Since all other countries j = 2, . . . , N are identical in their benefit and cost functions, we can substitute $X_{-1} = (N-1)X_j$. The reaction function

$$R_1(X_j) = \frac{1}{2c + b}\left(bA + h - (N-1)bX_j\right) \qquad (8.a.1)$$
describes the best response of country 1 to the abatement chosen by country j, $X_j$. Country j, j = 2, . . . , N, maximizes the difference between the benefit and the cost of abatement, given the abatement of all other countries, i.e.

$$\max_{X_j} \pi_j = A(X_j + X_{-j}) - 0.5(X_j + X_{-j})^2 - \left(cX_j^2 + T\right) \quad \text{with} \quad X_{-j} = \sum_{i \neq j} X_i.$$

The reaction function of country j is

$$R_j(X_{-j}) = \frac{1}{2c + 1}\left(A - X_{-j}\right).$$

Since country j and all other N − 2 countries k, k ≠ j and k ≠ 1, have identical benefit and cost functions, we can substitute $X_{-j} = (N-2)X_j + X_1$. The reaction function of j,

$$R_j(X_1) = \frac{1}{N - 1 + 2c}\left(A - X_1\right) \qquad (8.a.2)$$
describes the best response of country j to the abatement of country 1, $X_1$. The NE for the simultaneous decision protocol results from the intersection of the reaction functions (8.a.1) and (8.a.2), and the interior solution for the NE is

$$X_1^{NE} = \frac{2cbA + h(N - 1 + 2c)}{2cb + 2(N-1)c + 4c^2}, \qquad X_j^{NE} = \frac{2cA - h}{2cb + 2(N-1)c + 4c^2},$$

and

$$X^{NE} = X_1^{NE} + (N-1)X_j^{NE} = \frac{(b + N - 1)A + h}{N - 1 + 2c + b}.$$

For b = 1 and h = 0, we have the perfectly symmetric NE for the simultaneous decision protocol with

$$X_1^{NE} = X_j^{NE} = \frac{A}{N + 2c} \quad \text{and} \quad X^{NE} = \frac{NA}{N + 2c}.$$

For b < 1 and a sufficiently small h, we have $X_1^{NE} < X_j^{NE}$, i.e. the country with the smaller marginal benefit from abatement (here country 1) abates less than all other countries in the NE. We are interested in analyzing the effects of a marginal increase in h. It is easy to show that the results of the general model hold for our specification, i.e. $\partial X_1^{NE}/\partial h > 0$, $\partial X_j^{NE}/\partial h < 0$, and $\partial X^{NE}/\partial h > 0$. For profits, we can show that $\partial \pi_1^{NE}/\partial h > 0$, $\partial \pi_j^{NE}/\partial h > 0$, and $\partial \pi^{NE}/\partial h > 0$, i.e. payoffs increase with a marginal increase in h, which is here an element of the real payoff function of country 1. This effect is independent of the relative size of the marginal benefit of country 1.
Subgame perfect equilibrium (SPE) for the mixed sequential-simultaneous decision protocol

Country 1 maximizes the difference between the benefit and the cost of abatement, knowing that the N − 1 other countries will behave according to their best response function (8.a.2):

$$\max_{X_1} \pi_1 = b\left[A\left(X_1 + (N-1)R_j(X_1)\right) - 0.5\left(X_1 + (N-1)R_j(X_1)\right)^2\right] + h\left[X_1 + (N-1)R_j(X_1)\right] - \left(cX_1^2 + T\right),$$

i.e. country 1 can use its first mover advantage by choosing, via backward induction, the point on the followers' best response function that maximizes its profit. The solution for the subgame perfect equilibrium is

$$X_1^{SPE} = \frac{2cbA + h(N - 1 + 2c)}{2cb + (N - 1 + 2c)^2}, \qquad X_j^{SPE} = \frac{A(N - 1 + 2c) - h}{2cb + (N - 1 + 2c)^2},$$

and

$$X^{SPE} = X_1^{SPE} + (N-1)X_j^{SPE} = \frac{2cbA + A(N-1)(N - 1 + 2c) + 2ch}{2cb + (N - 1 + 2c)^2}.$$
We can show that $X_1^{SPE} < X_1^{NE}$, $X_j^{SPE} > X_j^{NE}$, and $X^{SPE} < X^{NE}$, i.e. the change from the simultaneous to the mixed sequential-simultaneous decision protocol leads to lower aggregated abatement. Regarding profits, the change from the simultaneous to the mixed sequential-simultaneous decision protocol leads to a higher profit for country 1, a lower profit for country j, and (for the chosen parameters) a lower aggregated profit. The influence of the parameter h on abatements and profits is the same as in the simultaneous case.

Pareto optimum (PO)

In the Pareto optimal allocation, total profit from abatement is maximized, i.e. the global planner has the following optimization problem:

$$\max_{X} \pi = \pi_1 + (N-1)\pi_j = (b + N - 1)\left(AX - 0.5X^2\right) + hX - \left(\frac{c}{N}X^2 + NT\right).$$

In the PO, all countries adjust their marginal abatement costs to the marginal social benefit from abatement. Since all countries have the same marginal abatement costs, $C_i' = 2cX_i$, in the PO all countries choose an equal abatement, independent of the protocol of play, given by

$$X_i^{PO} = \frac{A(b + N - 1) + h}{N\left(b + \dfrac{2c}{N} + N - 1\right)} \quad \text{and} \quad X^{PO} = \frac{A(b + N - 1) + h}{b + \dfrac{2c}{N} + N - 1}.$$

It is easy to show that $X^{PO} > X^{NE}$ holds. For abatement, it follows that $\partial X_i^{PO}/\partial h > 0$ and $\partial X^{PO}/\partial h > 0$. For profits, we can show that $\partial \pi_1^{PO}/\partial h > 0$, $\partial \pi_j^{PO}/\partial h > 0$, and therefore $\partial \pi^{PO}/\partial h > 0$.
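As a cross-check on these closed-form solutions, the leader's problem can also be solved numerically by maximizing $\pi_1$ along the followers' reaction function. A sketch with the h = 0 parameters from Table 8.3:

```python
import numpy as np

A, b, c, h, N, T = 500.0, 30/47, 3.0, 0.0, 5, 40_000.0

def Rj(x1):
    # Followers' best response (8.a.2)
    return (A - x1) / (N - 1 + 2*c)

def pi1(x1):
    # Leader's payoff when all N - 1 followers play Rj(x1)
    X = x1 + (N - 1) * Rj(x1)
    return b * (A*X - 0.5*X**2) + h*X - (c*x1**2 + T)

grid = np.linspace(0.0, 100.0, 1_000_001)
x1_star = grid[np.argmax(pi1(grid))]
print(round(x1_star, 2), round(Rj(x1_star), 2))  # 18.44 48.16, as in Table 8.3
```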
Notes

1 See Kaitala et al. (1992).
2 Brosig et al. (2003) were able to show in a standard public good experiment with communication that successful cooperation only occurs if subjects have the opportunity to coordinate their behavior by face-to-face communication. Thus coordination seems to be a necessary condition for cooperation.
3 Hoel also shows that this positive result no longer holds if the unilateral abatement is followed by international negotiations on emission reductions. Unilateral reductions weaken the position of the leading country. Therefore, total emission reduction after international negotiations may be lower in the case of ex ante reductions by the leading country compared to the case without unilateral abatement. We do not deal with negotiations on emission reduction in this chapter but only look at decentralized decisions about abatement.
4 Hoel explicitly points out that "I do not take up the question of whether such action from one country might lead to similar behavior from other countries" (p. 56).
5 See Andreoni (1995) for a public bad game.
6 A more complete version of the model can be found in the appendix.
7 See the appendix.
8 Subjects had to enter integers in the simultaneous treatments. We allowed for one decimal place in the sequential treatments.
9 We assume that country 1 has no free-rider option in the sequential treatments.
10 However, the hypothesis for country j is questionable due to the very small difference in the NE and SPE abatement of country j between treatments with h = 0 and h > 0 (see Table 8.3).
11 At this point, we have to discriminate between the increase in profits in our experiment and the positive welfare effect that Hoel describes in his model for the simultaneous game. The increase in profits is caused by a change in preferences due to parameter h, which is an element of the real payoff function of country 1. The positive welfare effect assumes that country 1 voluntarily abates more than in the equilibrium and the resulting welfare loss is compensated for by the gain of all other countries.
12 Here the same problem as in Note 10 appears. The difference in NE and SPE profits of country 1 between the simultaneous and sequential treatments is too small to get significant results (see Table 8.3).
13 We used z-Tree for programming. See Fischbacher (1999).
14 Instead of the term "best response" we use the term "profit maximizing response" because the latter is more neutral in our view.
15 Subjects who tried to maximize the collective profit given the expected abatement of the others could compute their "best response" with the help of the simulator.
16 From the viewpoint of game theory, the frame of the decision problem is irrelevant for individual behavior. On the other hand, if we assume that people have "green preferences", i.e. that they behave more cooperatively in a social dilemma with an environmental frame, our frame provides a "worst case" scenario for game theory.
17 If not otherwise stated, all the following tests are exact and two-sided.
18 However, significant differences for the αt values can only be observed for few periods.
19 We get a similar picture if we generate this classification based on "myopic best response" behavior, i.e. the individual best response to the aggregated abatement of the others in the previous period.
20 The difference is weakly significant for T III (Wilcoxon signed-rank test, 10 percent level).
21 However, we have to admit that the treatment T III seems to be an outlier as country 1 is not able to realize even the SPE profit.
22 See Weimann (1994) and Fischbacher et al. (2001).
23 In three of 36 groups we observe behavior near the SPE for both the leader and the followers.
24 Also see Figure 8.2 where these five groups can be easily identified.
25 See Sturm and Weimann (2006) for a detailed methodological discussion.
References
Andreoni, J. (1995): Warm-glow versus Cold-Prickle: The Effects of Positive and Negative Framing on Cooperation in Experiments, Quarterly Journal of Economics, 110, 1–21.
Barrett, S. (1994): Self-Enforcing International Environmental Agreements, Oxford Economic Papers, 46, 878–894.
BMU (2002): Regierungserklärung von Bundesumweltminister J. Trittin vor dem Deutschen Bundestag am 22. März 2002.
Brandts, J. and W.B. MacLeod (1995): Equilibrium Selection in Experimental Games with Recommended Play, Games and Economic Behavior, 11(1), 36–63.
Brosig, J., A. Ockenfels, and J. Weimann (2003): The Effect of Communication Media on Cooperation, German Economic Review, 4, 217–242.
Fischbacher, U. (1999): z-Tree: A Toolbox for Readymade Economic Experiments, Working Paper No. 21, University of Zurich.
Fischbacher, U., S. Gächter, and E. Fehr (2001): Are People Conditionally Cooperative? Evidence from a Public Goods Experiment, Economics Letters, 71, 397–404.
Gächter, S. and E. Renner (2003): Leading by Example in the Presence of Free-Rider Incentives, Working Paper, University of St. Gallen.
Güth, W., M. Levati, M. Sutter, and E. van der Heijden (2007): Leading by Example With and Without Exclusion Power in Voluntary Contribution Experiments, Journal of Public Economics, 91, 1023–1042.
Hoel, M. (1991): Global Environmental Problems: The Effects of Unilateral Actions Taken by One Country, Journal of Environmental Economics and Management, 20, 55–70.
Kaitala, V., M. Pohjola, and O. Tahvonen (1992): Transboundary Air Pollution and Soil Acidification: A Dynamic Analysis of an Acid Rain Game between Finland and the USSR, Environmental and Resource Economics, 2, 161–181.
Sturm, B. and J. Weimann (2006): Experiments in Environmental Economics and some Close Relatives, Journal of Economic Surveys, 20, 419–457.
Van der Heijden, E. and E. Moxnes (2000): Leadership and Following in a Public Bad Experiment, Working Paper, SNF Report No. 13/00, Bergen.
Weimann, J. (1994): Individual Behavior in a Free Riding Experiment, Journal of Public Economics, 54, 185–200.
9 Voluntary contributions with multiple public goods

Todd L. Cherry and David L. Dickinson
Introduction

The provision of public goods is a thoroughly examined issue in experimental and environmental economics. Experimental research has highlighted many critical issues affecting the ability of a collective to provide public goods, such as how contributions are affected by the size of the collective (Isaac and Walker, 1988), the returns from the public good (Fisher et al., 1995), punishment and rewards (Fehr and Gächter, 2000; Dickinson, 2001), member heterogeneity (Chan et al., 1999; Cherry et al., 2005), and many others (see Ledyard, 1995, for a review). An area that has received little attention is how individual contributions are affected by the presence of multiple, competing public goods. The lack of attention directed to this issue is surprising considering life outside the laboratory generally has people contemplating a variety of public goods (Cornes and Itaya, 2003), with the alternatives sometimes being diverse (e.g. green energy programs, neighborhood clean-up programs, etc.) or quite similar (e.g. three similar land conservation groups seeking your financial support). Consequently, the question arises of how the presence of multiple public goods will affect individual contributions, or free riding, in the laboratory.

Previous experimental work has consistently shown individuals deviate from the dominant strategy to make zero contributions in a linear public good game. Evidence shows that people generally make positive though suboptimal contributions to the public good. But the consensus has arisen almost entirely from experimental settings that present a single public good. In this chapter, we extend the literature by examining whether the presence of more than one public good option affects total contribution levels.

Any evidence of reduced or destabilized contributions in the face of multiple options has significant implications. For example, if multiple public goods options are shown to reduce contributions, then a reduction in charity options (e.g. merging charities) could increase total donations to charities. On the other hand, a larger number of charity choices may be advisable if a larger number of options is shown to increase overall giving. Charitable organizations are but one example of public goods funded through voluntary contributions. But, given that US charitable organizations have annual revenues of around $600 billion, anything that may increase
(or decrease) contribution levels by even a small amount would have a sizable monetary implication.

Existing research has quite thoroughly examined the voluntary contributions mechanism (VCM) for provision of public goods. Theoretical examinations of multiple public goods environments are less common (Kemp, 1984; Bergstrom et al., 1986; Cornes and Schweinberger, 1996; Cornes and Itaya, 2003). Experimental economists have only recently become interested in examining the behavioral effects of having multiple public goods options. Andreoni and Petrie (2004), Blackwell and McKee (2003), and Moir (2005) all include multiple public goods environments in their experimental research, though they each have a clearly distinct focus.

Andreoni and Petrie (2004) study contribution levels when subjects are identifiable or not to other members of their group. They include a treatment in which subjects may contribute to either a broadcast public good and/or an anonymous public good. The marginal monetary incentives to contribute are identical for both public goods, which differ only in their information level. This treatment with two public goods options increased total contributions to public goods (i.e. increased efficiency) relative to their most efficient single public good treatment, though most subjects chose to contribute to the non-anonymous broadcast public good.

Blackwell and McKee (2003) examine a multiple public goods environment with a local versus global public good competing for subject contributions. They speculate that a variety of factors likely contribute to their results, including altruism, learning, coordination, and possibly reciprocity. A main conclusion of their research is that there may be complementarities in voluntary contributions across multiple public goods in environments such as theirs, where marginal incentives to contribute differ across public goods. That is, rising marginal benefits to global public goods seem to induce increased contributions, but subjects still maintain contribution levels to their local group public good.

Moir (2005) examines multiple public goods in the context of Morgan's (2000) lottery provision mechanism for funding public goods. Expanding on the experimental environment of Morgan and Sefton (2000), he introduces a second public good along with rank-ordered social desirability of the two public goods. While lottery funding of public goods is shown to be efficient under risk neutrality, Moir finds that a second, less socially desirable, public good can decrease efficiency if funded through a lottery mechanism. This results from the fact that using the lottery for the less socially desirable public good appears to siphon contributions away from the more socially desirable public good, a result that does not occur when the public goods are both funded through a standard voluntary contributions mechanism.

Though researchers have now begun to incorporate multiple (i.e. two) public goods into various experimental treatments, none of the existing research has examined the pure effect of increasing the number of public goods options. This is the goal in the present chapter. We start with a traditional single public good VCM environment. Our Nash prediction in each stage of our five-round
baseline treatment is complete free riding – zero contributions to the public good. We nevertheless expect a positive level of voluntary contributions given the wealth of previous research documenting such over-contribution relative to the Nash outcome for this particular setup.

In a second treatment, we have three public goods that are identical to the public good option in the single public good treatment – call this treatment multiple homogeneous. These three public goods are identical in every way, and so a comparison of contribution tendencies across these two treatments yields a measure of the pure effect of going from a single to a multiple public goods environment. If individuals derive utility from coordinating contributions to the same public good, then multiple options make coordination more difficult in the absence of any communication, and so total giving might decline. On the other hand, reframing the VCM environment as one with multiple options may induce additional contributions if individuals consider it important to fund each public good at some level. Thus, the effect of multiple and identical public goods options is ultimately an empirical question.

We also examine a third treatment, multiple heterogeneous, where we make the multiple public goods options heterogeneous by altering the marginal incentive to contribute across three public goods options – for one of the public goods options, the marginal incentive to contribute is a monotonically increasing function of the number of total group contributions, indicating increasing marginal benefits of contributing. All environments share the traditional Nash free riding prediction (in the absence of any utility of group participation) and Pareto efficiency at full contributions.1 However, in our third treatment one of the three public goods options is strictly dominated in the sense that the marginal incentive to contribute, as defined by the marginal per capita return (MPCR) in Isaac et al. (1984), to this good is strictly less than the marginal incentive to contribute towards a second public good. Thus, the multiple heterogeneous treatment allows us to examine subject rationality and expectations, and our results indicate that coordination on the variable-MPCR public good occurs in a very deliberate and rational way.
Theory and experimental design

We examine the issue of multiple public goods in a linear public goods environment. In this framework, each individual i divides her endowment E_i between contributions x_i^j to the jth public good account, with

0 \le \sum_{j=1}^{m} x_i^j \le E_i,

and a private good,

E_i - \sum_{j=1}^{m} x_i^j.
The n members of a group make their contribution decisions to the m public goods independently and simultaneously, and the monetary payoff \pi_i^o for each member i is

\pi_i^o = E_i - \sum_{j=1}^{m} x_i^j + \sum_{j=1}^{m} \alpha^j X^j,    (9.1)

in which 0 < \alpha^j < 1 < n\alpha^j, where \alpha^j is the marginal per capita return (Isaac et al., 1984) from a contribution to the jth public good, and

X^j = \sum_{k=1}^{n} x_k^j,
where X^j is the total contribution to the jth public good. The constraint on \alpha^j ensures that the individually optimal contribution to the public good is zero, although the socially optimal outcome is achieved when all group members contribute their entire endowments to the public good.

Thirty-two students from Appalachian State University participated in a public goods experiment consisting of three treatments and two sessions. The sessions consisted of ten rounds, with treatments varying within and across sessions. The first five rounds of each session presented parameters from one of the three treatments, while the final five rounds presented parameters from a different treatment. Subjects were placed in groups of four, and each group participated in two of the three treatments. Table 9.1 provides an overview of the experimental design with treatment parameters and ordering.

Table 9.1 Experimental design

Number and MPCR of public accounts by treatment
  Treatment                Public accounts            MPCR
  Baseline                 One account (B)            Linear, 0.6
  Multiple homogeneous     Three accounts (B, C, D)   Linear, 0.6 (each account)
  Multiple heterogeneous   Three accounts (B, C, D)   (B) Linear, 0.6; (C) Linear, 0.3; (D) Nonlinear, [0.45, 1.04]^a

Treatment ordering and participation by group
                   Rounds 1–5                Rounds 6–10
  Groups 1 and 2   Baseline                  Multiple homogeneous
  Groups 3 and 4   Multiple homogeneous      Multiple heterogeneous
  Groups 5 and 6   Baseline                  Multiple heterogeneous
  Groups 7 and 8   Multiple heterogeneous    Multiple homogeneous

Note
a See Figure 9.1.
Treatment 1: baseline. The baseline treatment follows the well-established literature in linear public good games. Subjects were endowed with 15 tokens and randomly assigned to groups of four. Subjects simultaneously and privately decided how to allocate their endowment between two accounts – a private account A that generated a 1 cent return to the individual for every token she placed in this account, or a public account B that generated a 0.6 cent return to each group member for every token the group placed in this account. Individual contributions were recorded, and payoffs were calculated and announced privately to each subject. Subsequent rounds followed, each with subjects receiving an initial endowment, making the allocation decision, and learning the subsequent outcomes and payoffs. Group affiliation remained unchanged across the session, but subjects did not know the identities of other group members. After the final round, accumulated individual payoffs were totaled and subjects were paid in private as they departed individually from the laboratory.

Treatment 2: multiple homogeneous. The second treatment introduced two additional public accounts to the allocation decision. Subjects in this treatment therefore faced the decision of how to allocate their 15 token endowment between four accounts: one private (A) and three public (B, C, and D). The return from the private account remained unchanged from the baseline treatment. The three public accounts provided the same return, with the marginal per capita return being identical to the baseline public account. Therefore, in this treatment, the payoff structure did not change – only the number of public goods changed.

Treatment 3: multiple heterogeneous. The final treatment introduced variable payoff structures to the multiple public goods treatment. Subjects decided how to allocate their endowment between one private account (A) and three public accounts (B, C, and D), but the payoff structure varied across the public accounts. The return from the private account remained unchanged from the previous treatments, but the three public accounts provided marginal per capita rates of return of MPCR = 0.6 (account B), MPCR = 0.3 (account C), and MPCR ∈ [0.45, 1.04] as represented in Figure 9.1 (account D). Group account payoff tables provided to subjects are available upon request.
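To make the incentives concrete, the following is a minimal sketch (ours, not the authors') of the payoff in equation (9.1), using the baseline parameters just described (15-token endowment, 1 cent per token kept, MPCR of 0.6):

```python
def payoff_cents(endowment: int, own_contribs: list, group_totals: list,
                 mpcrs: list) -> float:
    """Equation (9.1): cents kept in the private account plus the
    MPCR-weighted return from each public account (the group totals
    include the subject's own tokens)."""
    kept = endowment - sum(own_contribs)
    return kept + sum(a * X for a, X in zip(mpcrs, group_totals))

# Baseline treatment: one public account with MPCR 0.6 and a 15-token
# endowment. Keeping 10 tokens and contributing 5 while the group as a
# whole contributes 20:
print(payoff_cents(15, [5], [20], [0.6]))  # 10 + 0.6 * 20 = 22.0 cents
```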
Figure 9.1 MPCR by group account for multiple heterogeneous treatment (MPCR for accounts B, C, and D plotted against group contributions as a percentage of total tokens).

Results

Figure 9.2 provides an overview of average group contributions by treatment. We first review results from the single public good baseline treatment. As in previous experimental research, we observe initial contribution levels significantly above the theoretically predicted zero that then decline with repetition. Subjects contributed 63 percent of their endowments in the first round and 54 percent in the final round.

Results from the multiple public good treatments indicate that the frame of additional public goods positively impacts total contribution levels. In the multiple homogeneous treatment, contributions were 79 percent of endowments in the first round and fell to 65 percent in the final round – significantly greater than the baseline (t = 3.41; p < 0.001). In the multiple heterogeneous treatment, contributions were 75 percent of endowments in the first round and fell to 71 percent in the final round – significantly greater than the baseline (t = 4.96; p < 0.001).

Figure 9.2 Total contributions to group accounts by treatment (baseline, multiple homogeneous, and multiple heterogeneous, by round). Note: Contributions measured as a percentage of total group endowment.

We confirm these unconditional results by estimating the treatment effects on individual contributions using a two-way fixed effects panel model that controls for subject and round specific effects. Table 9.2 presents the results for the total contributions model (column 1). Results reveal total contributions were significantly greater in both the multiple public good treatments than in the single public good baseline. Estimated coefficients indicate that subjects contributed 2.8 more tokens to public goods in the multiple homogeneous treatment than in the baseline treatment, and 3.58 more tokens in the multiple heterogeneous treatment.

Table 9.2 Individual contributions: two-way fixed effect estimates^a

                           Coefficient   p-value
  Constant                 9.38          0.000
  Baseline                 –             –
  Multiple homogeneous     2.80          0.001
  Multiple heterogeneous   3.58          0.000
  F                        2.56          0.004
  N                        320

Note
a Individual- and round-specific effects.

Estimates associated with the multiple homogeneous treatment, which presents three identical public goods, suggest a framing effect that significantly increases total contributions.
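For readers who wish to replicate this specification, a regression of this kind could be run as follows. The sketch is ours, not the authors'; the file name and variable names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout: one row per subject-round decision, with
# 'tokens' (total public-account contribution), 'treatment' (baseline,
# multi_homo, or multi_hetero), and subject/round identifiers.
df = pd.read_csv("contributions.csv")

# Two-way fixed effects: treatment dummies plus subject- and
# round-specific intercepts, with the baseline as the omitted category.
model = smf.ols(
    "tokens ~ C(treatment, Treatment(reference='baseline'))"
    " + C(subject) + C(round)",
    data=df,
).fit()
print(model.summary())
```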
We now examine the allocation of contributions across the three competing public good accounts in the multiple account treatments. Figure 9.3 presents the allocation of contributions in the multiple homogeneous treatment and reveals no significant coordination by subjects. As one may expect, the homogeneity of marginal per capita rates of return fails to provide any direction to subjects. The proportions of total (public account) contributions going to public goods B, C, and D were 31, 28, and 41 percent in the first round and 32, 17, and 51 percent in the final round.

Figure 9.3 Group contributions across homogeneous competing group accounts (accounts B, C, and D and their sum, by round). Note: Contributions measured as a percentage of total group endowment.

The allocation of contributions in the multiple heterogeneous treatment is illustrated by Figure 9.4. Recall the MPCR for public account B is strictly greater than that of public account C, and public account D provides a higher MPCR if contributions exceed 25 percent of total endowments. We therefore should expect subjects to strictly prefer public account B to C, and to prefer D if they expect total contributions to be at least 25 percent of the endowments.

Figure 9.4 Group contribution across heterogeneous competing group accounts (accounts B, C, and D and their sum, by round). Note: Contributions measured as a percentage of total group endowment.
Observed behavior reveals subjects understood the game and rationally responded to the heterogeneous incentives of the multiple public accounts. As Figure 9.4 shows, the proportion of contributions going to public good D was 63.9 percent in the first round, with accounts B and C receiving 26.4 and 9.7 percent. In the final round, 100 percent of contributions went to public account D. Subjects therefore appear quite able to understand and respond to the relative payoffs in this multiple public goods setting, and they also seem to correctly expect others to identify and contribute to account D.
Conclusions

The goal of this chapter is to fill a void in the literature on public goods experiments and contributions in multiple public goods environments. As experimental economists have started to become interested in these decision environments, it is important to understand the pure effect on cooperation of moving from a single to a multiple public goods environment. Our results show clear evidence that total contributions and efficiency increase when the number of public goods options is increased from one to three in an otherwise standard VCM environment. An additional treatment that we examine also highlights the role that rationality and expectations play in the subjects' giving behavior. We uncover no evidence of confusion among subjects, at least conditional upon the decision to contribute.2 Subjects also appear to learn across rounds, as is evidenced by convergence towards a single public good account when it is more efficient to do so.

The implications of this research extend beyond the laboratory. Individuals are often in the position of choosing how to distribute their wealth among private versus public goods options. Our results indicate that more options for otherwise similar public goods will increase total dollar contributions towards
the larger cause. For example, multiple options for providing relief following an environmental disaster are predicted to increase the total amount of voluntary giving among private citizens. Of course, disaster relief agencies may have private objectives that do not fully align with the public goal of providing the public good, and so the real world incentive environment is undoubtedly more complex than what we study in this chapter. Nevertheless, we have separated and identified an important effect that has been previously hidden in the literature. At least in the context of voluntary giving, this effect of multiple options seems to indicate that more is better.
Acknowledgments

We would like to thank the participants of the 2005 Experimental Economics and Public Policy Workshop at Appalachian State University. Stephan Kroll, Timothy Perri, Mike McKee, and two anonymous referees provided helpful comments.
Notes
1 Once contributions reach 93 percent of the total group endowment, the variable MPCR account in multiple heterogeneous reaches MPCR = 1. So, at the extreme, if expectations are high that group members will contribute nearly all of their tokens to this one particular group account, there is a private marginal incentive to contribute.
2 Our design is not capable of determining whether the initial decision to "cooperate" is due to confusion or subject preference. Andreoni (1995) finds some evidence of both kindness and confusion, although our evidence on rational choice among public goods options in our heterogeneous accounts treatment leads us to minimize the potential effect of confusion in our data generation.
References
Andreoni, James. 1995. "Cooperation in public-goods experiments: kindness or confusion?" American Economic Review, 85(4): 891–904.
Andreoni, James and Ragan Petrie. 2004. "Public goods experiments without confidentiality: a glimpse into fund-raising." Journal of Public Economics, 88(7–8): 1605–1623.
Bergstrom, T.C., L. Blume, and H. Varian. 1986. "On the private provision of public goods." Journal of Public Economics, 29(1): 25–49.
Blackwell, Calvin and Michael McKee. 2003. "Only for my own neighborhood? Preferences and voluntary provision of local and global public goods." Journal of Economic Behavior and Organization, 52(1): 115–131.
Chan, Kenneth S., Stuart Mestelman, Robert Moir, and R. Andrew Muller. 1999. "Heterogeneity and the voluntary provision of public goods." Experimental Economics, 2(1): 5–30.
Cherry, Todd L., Stephan Kroll, and Jason F. Shogren. 2005. "The impact of endowment heterogeneity and origin on public good contributions: evidence from the lab." Journal of Economic Behavior and Organization, 57(3): 357–365.
Cornes, Richard and A.G. Schweinberger. 1996. "Free riding and the inefficiency of the private production of pure public goods." Canadian Journal of Economics, 29(1): 70–91.
Cornes, Richard and Jun-ichi Itaya. 2003. "Models with two or more public goods." University of Nottingham Discussion Paper 03/21.
Dickinson, David L. 2001. "The carrot vs. the stick in work team motivation." Experimental Economics, 4(1): 107–124.
Fehr, E. and S. Gächter. 2000. "Cooperation and punishment in public goods experiments." American Economic Review, 90(4): 980–994.
Fisher, J., R.M. Isaac, J. Schatzberg, and J. Walker. 1995. "Heterogeneous demand for public goods: behavior in the voluntary contributions mechanism." Public Choice, 85(3–4): 249–266.
Isaac, R. Mark and James M. Walker. 1988. "Group size effects in public goods provision: the voluntary contributions mechanism." Quarterly Journal of Economics, 103(1): 179–199.
Isaac, M., J. Walker, and S. Thomas. 1984. "Divergent evidence on free riding: an experimental examination of possible explanations." Public Choice, 43(1): 113–149.
Kemp, M.C. 1984. "A note on the theory of international transfers." Economics Letters, 14(2–3): 259–262.
Ledyard, J.O. 1995. "Public goods: a survey of experimental research." In John H. Kagel and Alvin E. Roth (eds.), Handbook of Experimental Economics, Princeton: Princeton University Press, pp. 111–194.
Moir, Robert. 2005. "Multiple public goods and lottery fundraising." Working Paper, Department of Social Sciences (Economics), University of New Brunswick.
Morgan, John. 2000. "Financing public goods by means of lotteries." Review of Economic Studies, 67(234): 761–784.
Morgan, John and Martin Sefton. 2000. "Funding public goods with lotteries: experimental evidence." Review of Economic Studies, 67(234): 785–810.
10 Can public goods experiments inform policy? Interpreting results in the presence of confused subjects

Stephen J. Cotten, Paul J. Ferraro, and Christian A. Vossler
Introduction

Public policy is frequently used to induce individuals to contribute to public goods when it may be in their private interests to free-ride off the contributions of others. To explore how individuals behave in various public goods decision settings and to gain insights into how institutions might be better designed to encourage the provision of public goods, economists employ laboratory experiments. The cornerstone of experimental investigations on the private provision of public goods is the voluntary contributions mechanism (VCM). Understanding behavior in experimental implementations of the VCM game is critical for the work of economists with institutional and policy-oriented interests.

The standard linear VCM experiment places individuals in a context-free setting where the public good, which is non-rival and non-excludable in consumption, is simply money. Participants are given an endowment of "tokens" to be divided between a private account and a public account. Contributions to the private account are converted to cash and given to the individual. Contributions to the public account yield a cash return to all group members, including the contributor. If the marginal return from contributing a token to the public account is less than the value of a token kept in the private account, but the sum of the marginal returns to the group is greater than the value of a token kept, the individually rational contribution is zero (i.e. the individual free rides) while the social optimum is realized when everyone contributes their entire endowment to the public account.

In single-round VCM experiments where a public good contribution rate of zero is the unique Nash equilibrium, subjects contribute at levels far above this: on average, 40–60 percent of endowments. In repeated-round VCM experiments, contributions start in the range of 40–60 percent but then decay towards zero (ending around 10 percent of endowments on average). Thus, there seem to be motives for contributing that outweigh the incentive to free ride.
Possible motives underlying contributions include: (1) "pure altruism" (sometimes called "inter-dependent utility"), which describes a situation in which an individual's utility function is a function of his own payoff and the payoffs of her group members; (2) "warm-glow" (often called "impure altruism"; Andreoni, 1990), which describes a situation in which an individual gains utility from the simple act of contributing to a publicly spirited cause; and (3) "conditional cooperation" (Andreoni, 1988; Fischbacher et al., 2001), which is a predisposition to contribute in social dilemmas but to punish by revoking contributions when significant free riding behavior is observed. A fourth motive, which the VCM literature often ignores but we are particularly interested in, is "confusion." We define confusion as behavior that stems from the failure of an individual to identify the dominant strategy of zero contributions. More broadly, confusion behavior results from a failure of individuals to discern the nature of the game, such that they do not understand how to utility-maximize in the context of the game.

Investigations into the identification and relative importance of various motives for contributions in the VCM game and closely related games have led to conflicting conclusions. For example, Palfrey and Prisbrey (1997) find statistical evidence of warm-glow but no evidence of pure altruism; Goeree et al. (2002) find the opposite. Fischbacher et al. (2001) and Fischbacher and Gächter (2004) find no evidence of pure altruism or warm-glow, but find significant conditional cooperation.

Efforts to compare public goods contributions across different subpopulations have likewise led to mixed results. A particularly well-studied issue is whether contributions behavior differs between men and women (Eckel and Grossman, 2005). Brown-Kruse and Hummels (1993) find that men contribute more than women. Nowell and Tinkler (1994) find females are more cooperative. Cadsby and Maynes (1998) find no significant differences between men and women.

Particularly troubling is the apparent lack of correspondence between contributions behavior in experimental and naturally occurring settings. Whereas many studies find that economics students or economists are less likely to contribute to public goods in experiments (e.g. Marwell and Ames, 1981; Cadsby and Maynes, 1998), attempts at externally validating this claim yield contradictory results (Yezer et al., 1996; Laband and Beil, 1999; Frey and Meier, 2004). Laury and Taylor (forthcoming) use behavior in a one-shot VCM experiment to predict behavior in a situation in which individuals can contribute to an urban tree planting program, a public good. Using the empirical approach of Goeree et al. to estimate a pure altruism parameter for each subject, the authors find little or no relationship between subjects' altruism parameters and subjects' contributions to urban tree planting.

There are several possible reasons for these puzzling results, including differences in the design, implementation, and participants used in these experiments. We focus on an alternative explanation: confusion confounds the interpretation of behavior in public goods experiments. This chapter presents results from one new experiment and two previous experiments that use the
“virtual-player method,” a novel methodology for detecting confusion through a split-sample design where some participants play with non-human players (automata) that undertake predetermined strategies or choices. Each experiment involves a slightly different public goods game and a different subject pool (with presumably differing abilities). The level of confusion in all experiments is both substantial and troubling. These experiments provide evidence that confusion is a confounding factor in investigations that discriminate among motives for public goods contributions, in studies that compare behavior across subpopulations, in research that assesses the external validity of experiments, and in attempts to use experimental results to improve policy design. We conclude by proposing ways to mitigate confusion in standard public goods experiments, and present results from a pilot study that uses “context-enhanced” instructions.
Prior evidence of confusion in public goods experiments

Andreoni (1995) was the first to identify and test the hypothesis that confusion plays an important role in the contributions decisions of participants in public goods games. Specifically, Andreoni (p. 893) hypothesizes that

the experimenters may have failed to convey adequately the incentives to the subjects, perhaps through poorly prepared instructions or inadequate monetary rewards, or simply that many subjects are incapable of deducing the dominant strategy through the course of the experiment.

To test his confusion hypothesis, Andreoni developed a VCM-like game that fixes the pool of payoffs and pays subjects according to their contributions to the public good. The person who contributes the least is paid the most from the fixed pool. Thus contributions to the "public good" in this game do not increase aggregate benefits, but merely cost the contributor and benefit the other group members. Andreoni uses behavior from the ranking games to infer that both other-regarding behavior and confusion are "equally important" motives in the VCM.

Houser and Kurzban (2002) continued Andreoni's (1995) work with a clever experimental design that includes: (1) a "human condition," which is the standard VCM game; and (2) a "computer condition," which is similar to a standard VCM game except that each group consists of one human player and three non-human computer players (which we refer to as "virtual players"), and the human players are aware they are playing with computer players. In each round the aggregate computer contribution to the public good is three-quarters of the average aggregate contribution observed for that round in the human condition. By making the reasonable assumption that other-regarding preferences and confusion are present in the human condition, but only confusion is present in the computer condition, Houser and Kurzban find that confusion accounts for 54 percent of all public good contributions in the standard VCM game.
Ferraro et al. (2003) independently developed a similar design with virtual players, and applied it to a single-round VCM game. They find that approximately 54 percent of contributions are due to confusion. Ferraro and Vossler (2005) extend this design to the multi-round VCM, where they find that 52 percent of contributions across rounds stem from confusion. However, unlike Houser and Kurzban, they present evidence showing that this confusion does not decline with experience. This difference stems from the atypical behavior that Houser and Kurzban's all-human condition exhibits (little decline in contributions) and two other aspects of their design that make it difficult to directly compare the human and computer conditions.1

Palfrey and Prisbrey (1997) developed an experimental design that, when combined with a few behavioral assumptions, allows the authors to separate the effects of pure altruism, warm-glow, and confusion. Their design changes the standard VCM game by randomly assigning different rates of return from private consumption each round, which enables the measurement of individual contribution rates as a function of that player's investment costs. The authors conclude that (p. 842) "altruism played little or no role at all in the individual's decision and, on the other hand, warm-glow effects and random error played both important and significant roles." While no point estimate was given of the proportion of contributions stemming from confusion in their experiment, the authors use their model results to predict that "well over half" of contributions in the seminal VCM experiments by Isaac et al. (1984) are attributable to error.

Goeree et al. (2002) use a VCM design in which group size is either two or four and the "internal" return of a subject's contribution to the public good to the subject may differ from the "external" return of the same contribution to the other group members. The authors estimate a logit choice model of noisy decision making with data from a series of one-shot VCM games (no feedback) in which the internal and external returns are varied. They find that coefficients corresponding to pure altruism and decision error are both positive and significant. Similar to Palfrey and Prisbrey, no estimate of the fraction of contributions due to confusion is given.

Fischbacher and Gächter (2004) design an experiment to test specifically for the presence of conditional cooperation. In Fischbacher and Gächter's "P-experiment," they ask subjects to specify, for each average contribution level of the other group members, how much they would contribute to the public good. By comparing the responses in this experiment with those in their "C-experiment," which is a standard VCM game with four-person groups, Fischbacher and Gächter argue that most contributions come from conditional cooperators. They find no evidence of pure altruism or warm-glow (no subjects stated they would contribute if other group members contributed zero). In contrast to previous work, they claim confusion accounts for a smaller fraction of observed contributions to the public good, "at most 17.5 percent" (p. 3).

Overall, four of the five studies above that assess magnitude find that about half of all contributions stem from confusion. This conclusion is alarming. In response to the fifth study, Ferraro and Vossler point out that Fischbacher and
Gächter’s characterization of conditional cooperators also describes the behavior of confused “herders” who simply use the contributions of others as a signal of the private payoff-maximizing strategy. As such, the proportion of confusion contributions (17.5 percent) found by Fischbacher and Gächter may be best characterized as a lower bound estimate. Despite this dispute, the research on confusion in public goods experiments can be succinctly summarized: every study that looks for confusion finds that it plays a significant role in observed contributions.
The virtual-player method

The virtual-player method discriminates between confusion and other-regarding behavior in single-round public goods experiments, and discriminates between confusion and other-regarding behavior or self-interested strategic play in multiple-round experiments (see Ferraro et al. (2003) for other applications). The method relies on three important features: (1) the introduction of non-human, virtual players (i.e. automata) that are preprogrammed to exercise decisions made by human players in an otherwise comparable treatment; (2) a split-sample design where each participant is randomly assigned to play with humans (the "all-human treatment") or with virtual players (the "virtual-player treatment"); and (3) a procedure that ensures that human participants understand how the non-human, virtual players behave.

The virtual-player method makes each human subject aware that he or she is grouped with virtual players that do not receive payoffs and that make decisions that are exogenous to those of the human. Thus the method neutralizes the other-regarding components of the human participant's utility function and the motives for strategic play.2 Thus, as long as participants understand their decision environment, any contributions made by humans in virtual-player groups can be attributed to confusion in the linear VCM game.

The random assignment of participants to an all-human group or a virtual-player group allows the researcher to net out confusion contributions by subtracting contributions from (human) participants in the virtual-player treatment from contributions in the all-human treatment. In single-round experiments where the decisions of other players are not known ex ante, the contributions from virtual players should have no effect on human contributions, nor should they confound any comparison between all-human and virtual-player treatments. Thus, one can argue that randomly selecting the profile of any previous human participant, with replacement, as the contribution profile for a virtual player suffices to ensure comparability.

However, in the typical multiple-round public goods game where group contributions levels are announced after each period, it is important to exercise additional control, as the history of play may affect contributions in the virtual-player treatment. Indeed, Ferraro and Vossler find that confused individuals use past contributions of virtual players as signals of how much to contribute. The additional control comes by establishing that each human in the all-human treatment
Public goods experiments and policy 199 has a human “twin” in the virtual-player treatment: each twin sees exactly the same contributions by the other members of her group in each round. Thus, the only difference between the two treatments is that the player in the virtual-player treatment knows she is playing with preprogrammed virtual players, not humans. To illustrate, consider a game that involves repeated interactions with groups consisting of three players. Participants in a group in the all-human treatment are labeled as H1, H2, and H3. Subject V1 in the virtual-player treatment plays with two virtual players: one that makes the same choices H2 made in the all-human treatment, and the other that makes the same choices H3 made. Likewise, player V2 plays with two virtual players, one playing exactly like human subject H1 and the other playing exactly like H3. And so on. Note that having an imbalance between all-human and virtual-player treatments, which would occur if some participants do not have a “twin” or if a player in one treatment has multiple twins in other, confounds comparisons. To ensure that participants in the virtual-player treatment believe the virtualplayer contributions are truly preprogrammed and exogenous, each subject has a sealed envelope in front of her. The participants are told that inside the envelope are the choices for each round from the virtual players in their groups. At the end of the experiment, they can open the envelope and verify that the history of virtual group member contributions that they observed during the experiment is indeed the same as in the envelope. The subjects are informed that the reason we provide this envelope is to prove to them that there is no deception: the virtual players behave exactly as the moderator explained they do. Post-experiment questionnaires are useful at assessing whether participants fully understand the nature of virtual players.
Application of the virtual-player method to the Goeree, Holt, and Laury experiment

The experiment of Goeree, Holt, and Laury (hereafter referred to as "GHL") is a variant of the static linear VCM game that endeavors to test the significance and magnitudes of contributions stemming from pure altruism and warm-glow. Each participant decides how to allocate 25 tokens between a private and a public account in each of ten "one-shot" decision tasks (referred to as "choices" in instructions), without feedback, where the internal (m_I) and external (m_e) rates of return, and group size (n), vary across tasks. For each decision task, a token kept in the private account yielded 5 cents. The internal rate of return refers to the marginal return to oneself from a token contributed to the public account, and ranged from 2–12 cents. The external rate of return refers to the marginal return to other players from one's contribution to the public account, and was either 2 or 4 cents. Group size was either two or four players. Formally, the profit function of individual i (in cents) for a particular decision task is given by
\pi_i = 5(25 - x_i) + m_I x_i + m_e \sum_{j \ne i} x_j ,
where x_i ∈ [0, 25] denotes public account contributions from player i. Since the internal rate of return in GHL is always lower than the value of a token kept, it is still the individual's dominant strategy to contribute nothing. The sum of the external and internal rates of return is always greater than 5 cents, so that full endowment contribution maximizes group earnings. In the typical one-shot VCM, the external and internal rates of return are equal (m_I = m_e), i.e. all players receive the same return from the public good. The rates are varied in the GHL design because participants exhibiting pure altruism should increase their contributions when the external return or the group size increases. Such systematic correlations should be identifiable by observing patterns in individual contributions across the various decision settings. If considerable contributions are observed, but they show little correlation with external return and group size, the conjecture is that contributions are largely attributable to warm-glow.

We replicate the GHL experiment using the virtual-player method to explore whether conclusions drawn from the original study are robust after quantifying and netting out confusion contributions. We made two small changes in the way subjects were grouped and paid. GHL assign subjects to two- and four-member groups by selecting marked ping-pong balls after all decisions are made. We pre-assign participants to two- and four-member groups based on their subject ID number. This is important for virtual-player sessions as it allows us to give each participant an envelope with the aggregate contributions of other players as well as earnings from virtual-player contributions for each possible decision selected. The pre-assignment into groups shortens the length of both all-human and virtual-player treatments. GHL randomly choose only one of the ten decisions to be binding using the roll of a ten-sided die and use a second, unrelated experiment to supplement earnings. Rather than engage our participants in a second experiment, we pay participants based on three randomly chosen decisions instead of one. This change increases the saliency of each decision.

Experiment instructions are presented both orally and in writing. The instructions for all-human and virtual-player treatments are available from the authors on request. The all-human instructions are from GHL, with minor revisions. The virtual-player instructions are similar with the exception of emphasizing that participants are matched with virtual players, whose contributions are predetermined. As in GHL, participants make decisions via paper and pencil. Decision sheets are identical to GHL. Following the decisions, a post-experiment questionnaire is given to collect basic demographic information as well as to assess understanding of the experimental design and decision tasks.

A total of 53 participants were recruited from a pool of undergraduate student volunteers at the University of Tennessee in the Spring of 2005. Of these, 23 students participated in the all-human treatment, which serves as a replication of the GHL design, whereas 30 students participated in the virtual-player treatment.3 Experiment sessions consisted of groups ranging from four to 12 people, and participants were visually isolated through the use of dividers. Matching was anonymous; subjects were not aware of the identity of the other members of
Public goods experiments and policy 201 their group(s). All sessions took place in a designated experimental economics laboratory. Earnings ranged from $8 to $15 and the experiment lasted no more than 1 hour.
Results

Goeree, Holt, and Laury application

Table 10.1 presents mean and median contributions from the all-human treatment, which serves as a replication of the GHL study. The pattern of contributions in relation to design factors is quite similar between this study and the GHL study, with contributions generally increasing with respect to external return and group size. This suggests that pure altruism is an important motive.

Table 10.1 GHL application, all-human treatment results

  Decision task     1    2     3     4    5    6    7    8    9    10
  Group size        4    2     4     4    2    4    2    2    4    2
  Internal return   4    4     4     2    4    4    2    4    2    4
  External return   2    4     6     2    6    4    6    2    6    12
  Mean              9.2  10.1  10.8  5.2  9.7  9.9  6.5  5.2  8.7  12.3
  Median            5    10    11    4    9    9    5    3    6    12

To quantify formally the magnitude of altruism and warm-glow, GHL consider different theoretical specifications for individual utility and estimate utility function parameters using a logit equilibrium model. For the sake of parsimony, we refer the interested reader to the GHL study for details. We estimate logit equilibrium models with our data and concentrate on interpretations of estimated parameters and comparisons of parameters across treatments.

Estimated logit equilibrium models are presented in Table 10.3 for all-human and virtual-player treatments. The "altruism" model considers the altruism motive but not warm-glow, the "warm-glow" model considers warm-glow but not altruism, and the "combined" model considers both motives. Consistent with the contributions pattern observed in Table 10.1, the logit equilibrium model results for the all-human treatment suggest that pure altruism is an important motive. In particular, the parameter α is a measure of pure altruism, and we find this parameter to be statistically different from zero using a 5 percent significance level. Our estimates suggest that a participant is willing to give up between 5 cents ("altruism" model) and 15 cents ("combined" model) in order to increase another person's earnings by $1. The parameter g measures warm-glow, which we find to be insignificant. The parameter µ is an error parameter. While µ measures dispersion and does not indicate the magnitude of confusion contributions, statistical significance of this parameter does indicate decision error is
present (Goeree et al.). Estimates of µ are indeed statistically different from zero at the 5 percent level for each specification. Overall, the main conclusions drawn from GHL carry over in our all-human treatment model: pure altruism and confusion are important motives behind contributions, whereas warm-glow is not.

We now discuss the outcome from the virtual-player treatment and present two main results about the role of confusion.

Result 1: Positive contributions stem largely from confusion, and subjects use experimental parameters as cues to guide payoff-maximizing contributions, leading to behavior that mimics behavior motivated by pure altruism.

Contributions in the virtual-player treatment, presented as Table 10.2, are generally smaller than in the all-human treatment but not strikingly so. Specifically, mean contributions across all decision tasks are 6.7 tokens or 27 percent of endowment in the virtual-player treatment, as compared to 8.8 tokens or 35 percent in the all-human treatment. Put another way, virtual-player contributions are approximately 75 percent of all-human contributions. Assuming that other-regarding preferences and confusion are present in the all-human treatment, but that only confusion exists in the virtual-player treatment, this suggests that an alarming 75 percent of all-human treatment contributions stem from confusion.

Table 10.2 GHL application, virtual-player treatment results

  Decision task     1    2    3    4    5    6    7    8    9    10
  Group size        4    2    4    4    2    4    2    2    4    2
  Internal return   4    4    4    2    4    4    2    4    2    4
  External return   2    4    6    2    6    4    6    2    6    12
  Mean              6.1  6.9  9.1  2.7  7.7  7.7  4.4  4.1  7.2  10.8
  Median            5    6    7    0    7.5  5.5  2    3.5  5    10.5

Perhaps more startling is the observed correspondence between all-human and virtual-player treatment contributions across decision tasks, as illustrated in Figure 10.1. From Figure 10.1, one observes that subjects in the virtual treatment alter their contributions based on the same stimuli as subjects in the all-human treatment; the two response patterns are parallel, such that the differences between the two sets of contributions are approximately equal across decision tasks.

Figure 10.1 GHL application, comparison of all-human and virtual-player contributions (mean contributions in tokens by decision task for the all-human treatment, the virtual-player treatment, and their difference).

Table 10.3 GHL application, estimated logit equilibrium models

           All-human treatment                 Virtual-player treatment
           Altruism    Warm-glow   Combined    Altruism    Warm-glow   Combined
  α        0.054*      –           0.148*      0.034*      –           0.163*
           (0.021)                 (0.064)     (0.014)                 (0.050)
  g        –           –0.470      –1.583      –           –1.231      –2.383*
                       (0.769)     (1.059)                 (0.796)     (0.987)
  µ        19.310*     32.382*     28.269*     11.914*     24.801*     21.132*
           (3.447)     (11.628)    (9.054)     (1.460)     (7.150)     (5.311)
  Log-L    –671.497    –673.071    –668.718    –824.510    –823.148    –813.308
  N        230         230         230         300         300         300

Notes: Standard errors in parentheses. * indicates parameter is statistically different from zero at the 5 percent level.

Turning to the logit equilibrium models estimated from virtual-player treatment data (Table 10.3), we find that estimated pure altruism parameters are statistically different from zero. In particular, we find that a participant is willing to give up between 4 cents ("altruism" model) and 16 cents ("combined" model) in order to increase a virtual player's earnings by $1. Using the estimated "altruism" and "combined" models from the two treatments, we test for equality of altruism parameters between the two (leaving other parameters unconstrained) using Wald Tests. For both specifications we fail to reject the hypothesis of equal
altruism parameters ("altruism" model: χ² = 0.647, p = 0.421; "combined" model: χ² = 0.033, p = 0.855). Of course, by design, participants in the virtual-player treatment are not exhibiting pure altruism, unless one believes pure altruism includes preferences over the utility of fictional automata.
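To see why a logit choice rule can attribute such cue-following to altruism, note that with the linear utility above, the only part of expected utility that varies with one's own contribution x is (m_I − 5 + α(n − 1)m_e + g)x, so logit choice probabilities are proportional to exp(U(x)/µ). The sketch below is ours and deliberately simplified relative to the estimation GHL actually use; the parameter defaults are the all-human "altruism" estimates from Table 10.3:

```python
import numpy as np

def mean_logit_contribution(m_i: float, m_e: float, n: int,
                            alpha: float = 0.054, g: float = 0.0,
                            mu: float = 19.31, endowment: int = 25) -> float:
    """Mean contribution implied by a logit choice rule for the linear
    utility: only the part of utility varying with one's own tokens x
    matters, i.e. (m_i - 5 + alpha*(n - 1)*m_e + g) * x, where 5 cents
    is the value of a token kept."""
    x = np.arange(endowment + 1)
    u = (m_i - 5 + alpha * (n - 1) * m_e + g) * x
    p = np.exp((u - u.max()) / mu)  # subtract the max for stability
    p /= p.sum()
    return float(np.dot(x, p))

# Decision tasks 4 and 6 (n = 4; returns 2/2 versus 4/4): the predicted
# mean contribution rises with the returns, as in both treatments.
print(mean_logit_contribution(2, 2, 4), mean_logit_contribution(4, 4, 4))
```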
Why would virtual-player participants respond to the same stimuli as all-human treatment participants? Confused subjects are using the changes in the parameters across decision tasks as a cue of how to behave. The altruism parameter is picking up confusion about the role of the external return in the subject's private payoff function. In a confusing situation, most people look for cues to direct them towards the optimal behavior. In the GHL experiment, subjects have to make ten contribution decisions for which the internal and external rates of return, and group size, are all changing. It should not be surprising that a confused subject will infer meaning from the changes in these parameters and decide that her behavior ought to change in response to them. This behavior is similar in spirit to the "herding" behavior found in the Ferraro and Vossler dynamic VCM experiment. In this experiment, subjects use past group member contributions (human or non-human) as a cue of how to choose their own optimal responses.

Recall that Laury and Taylor (forthcoming) run subjects through a GHL experiment and then ask participants to contribute to an urban tree-planting program. Subjects with positive altruism parameters are found to be less likely to contribute to the naturally occurring public good, even after controlling for experimental earnings and subject demographics and attitudes. Based on our logit equilibrium model results and the observed correspondence of both virtual-player and all-human treatment contributions to changes in the marginal per capita return (MPCR), it is quite likely confusion confounds their comparison.

Result 2: The common observation in public goods experiments that contributions increase with increases in the marginal per capita return likely results from subject confusion rather than altruism or expectations about the minimum profitable coalition.

Decision Task 4 and Decision Task 6 involve a group size of four. The internal return is equal to the external return, but these returns increase from two to four across the two tasks. Thus, the lone design difference is analogous to a change in the MPCR in standard VCM experiments. In particular, the MPCR doubles from 0.4 to 0.8 from Decision Task 4 to Decision Task 6. A "stylized fact" from the experimental public goods literature is that an increase in the MPCR increases contributions, which has been attributed to altruism and "minimum profitable coalitions" (Cox and Sadiraj, 2005).4 In the all-human treatment, mean contributions are 5.2 in Decision Task 4 and 9.9 in Decision Task 6 – an increase of 4.7 tokens – which is consistent with the results on MPCR changes in the literature. In the virtual-player treatment, contributions go from 2.7 to 7.7, which is a nearly identical change of 5.0 tokens. Thus our results are consistent with the "MPCR effect" being related to confusion.

Such pervasive evidence of confusion may cause readers to doubt the validity of the virtual-player method. In addition to the emphases placed in the instructions and the use of the sealed envelope, we also used a post-experiment questionnaire. We asked all subjects to answer the following question:

If all you cared about was making as much money as possible for yourself, how many tokens should you have invested in each decision? (you may not have cared about making as much money as possible for yourself, but if you did, what is the correct answer?).
Subjects were aware that they would be paid $1.50 for a correct answer. A total of 13 out of 53 answered this question incorrectly, suggesting that 25 percent of respondents were unable to discern the dominant strategy of zero contributions even after participating in the experiment (note that this is a lower bound, given that some subjects may only realize the correct answer after being asked the question and, as noted in Ferraro and Vossler, other subjects who erroneously believe they are playing an assurance game will often answer “zero” to this question). For those in the all-human treatment we asked respondents to state the contribution level that would have maximized group earnings. All participants correctly stated 25 tokens, or full endowment. Thus, it appears that an important issue with the public goods game is that some self-interested individuals are simply not able to deduce the dominant strategy. Since decision errors can only be made in one direction (contributions are non-negative), this confusion necessarily leads to what looks like other-regarding behavior.

Ferraro, Rondeau, and Poe

We draw from previous experiments to strengthen our arguments about the confusion problem in public goods experiments. The first experiment is from Ferraro, Rondeau, and Poe (2003), who use the virtual-player method to study behavior in a single-round VCM-like game. Group size is 21, individual endowment is $12, the MPCR is $0.07, and there is a cap on returns from the public good of $7 each. Thus, while the social optimum is for the group to contribute $100 (divided equally, this is $4.76 each) – rather than full endowment – the dominant strategy is still for the individual to contribute nothing. This study uses “Ivy League” Cornell University undergraduates from an introductory economics class, all of whom have prior experience in experiments. Total sample size is 85. As stated by Ferraro et al. (p. 103), “our subject pool can be considered an ‘extreme’ environment in which to search for altruistic preferences: subjects were ‘economists in training,’ operating in an environment in which self-interest was being reinforced.”

Results from this experiment are presented in Table 10.4. Using the entire sample, all-human treatment contributions are $2.14 and virtual-player treatment contributions are $1.16, such that we estimate confusion accounts for 54 percent of contributions. Thus, even with some of the world’s brightest young individuals as subjects, it appears as though confusion contributions are quite substantial.

As discussed in the Introduction, public goods experiments are often used to make inferences about the behaviors of subgroups in the population (by gender, race, culture, etc.). We use raw data from this experiment to analyze behavior further according to gender (not reported in the original article). Table 10.4 presents mean contributions by gender and treatment. Based on the all-human treatment results, contributions from females are $0.92 higher than males, and this difference is statistically significant using a Mann-Whitney test (p = 0.07). However, virtual-player treatment contributions are also larger for females, by $1.24 (p < 0.01).
Table 10.4 Ferraro et al. (2003) VCM experiment, mean contributions

                             All     Males only   Females only
All-human treatment          2.14    1.77         2.69
Virtual-player treatment     1.16    0.84         2.08
Difference                   0.98    0.93         0.61
% confusion contributions    54%     47%          78%
Thus, most of the purported difference between genders disappears when confusion contributions are removed ($0.93 for males versus $0.61 for females). What appears to be a gender effect is likely a gender-based difference in confusion for this specific sample.

Ferraro and Vossler

The other prior experiment we draw upon is from Ferraro and Vossler (2005), who apply the virtual-player method to the dynamic VCM game. They use an archetypal multiple-round VCM game with a group size of four, an MPCR of 0.5, and feedback on group contributions after each round. Subjects are undergraduate students from Georgia State University. The sample consists of 160 subjects: 80 in an all-human treatment and 80 in a virtual-player treatment.5 Figure 10.2 presents mean contributions (measured as a percentage of endowment) by round for the all-human and virtual-player treatments.
Figure 10.2 Ferraro and Vossler (2005) experiment, mean contributions (percentage of endowment) by round, all-human and virtual-player treatments.
The first observation is that confusion contributions are considerable. With participants contributing 32.5 percent and 16.8 percent of endowment in the all-human and virtual-player treatments, respectively, this suggests 52 percent of total contributions in the standard VCM game stem from confusion. Second, while virtual-player treatment contributions do decrease over rounds, average contributions still amount to 10 percent of endowment in round 25.

Ferraro and Vossler carefully analyze the data using a dynamic pooled time-series model and find that the reduction in contributions in the virtual-player treatment is largely driven by the decline in observed contributions from virtual players in previous rounds. Thus, the standard decay in VCM experiments over rounds is not due to learning the dominant strategy or a reduction in confusion. Instead, similar to the correlation between MPCR and confusion contributions in our GHL application, confused participants in the virtual-player treatment simply use any available cue to help determine contributions. As additional validation of this result, Ferraro and Vossler report responses to a question, similar to the one in our GHL experiment, concerning what the purely self-interested, dominant strategy is. They find that 30 percent of respondents are unable to determine the dominant strategy of zero contributions, and additional evidence from the post-experiment questionnaire and focus groups suggests that this proportion is a lower bound.
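These confusion shares are simply the ratio of mean virtual-player contributions (which are confusion by construction) to mean all-human contributions. A minimal sketch of the calculation, using the reported means (not the authors’ code; variable names are ours, and ratios of rounded means reproduce the published percentages only up to rounding):

```python
# Confusion share = mean virtual-player contributions / mean all-human
# contributions, using the published treatment means.
experiments = {
    "Ferraro, Rondeau, and Poe (all)":     (2.14, 1.16),   # dollars
    "Ferraro, Rondeau, and Poe (males)":   (1.77, 0.84),
    "Ferraro, Rondeau, and Poe (females)": (2.69, 2.08),
    "Ferraro and Vossler (all rounds)":    (32.5, 16.8),   # % of endowment
}

for name, (all_human, virtual_player) in experiments.items():
    share = virtual_player / all_human
    print(f"{name}: {share:.0%} of contributions attributable to confusion")
```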
Discussion

Through the course of three different applications of the virtual-player method, we find that at least half of contributions in public goods games stem from confusion. This finding in itself may not seem alarming, given that decision errors are likely rather commonplace in many economics experiments. Unfortunately, these confusion contributions do not simply amount to harmless statistical noise. In particular, we have shown that confusion contributions are sensitive to changes in design parameters, distort inferences about the role of other-regarding preferences, and confound comparisons between subpopulations. Furthermore, confusion does not simply go away over the course of many repeated rounds. Overall, these results call into question the internal and external validity of this line of experimentation.

Do our results suggest we should just stop drawing inferences from public goods experiments? Certainly not, but they do suggest we need to rethink how these experiments are implemented. As a starting point for discussion, recall that Andreoni (1995) cites three potential causes of confusion contributions: (1) inadequate monetary rewards; (2) poorly prepared instructions; and (3) the inability of participants to decipher the dominant strategy.

Are inadequate monetary rewards a problem? We think not. The experiments discussed here involve payoffs that are on average much higher than student wages for this time commitment. Further, in an investigation of “house money” effects, Clark (2002) found that having subjects play the VCM game with their own money had no discernible effect on their behavior. The results of Clark are
consistent with the presence of a substantial number of individuals who are not clear about the appropriate strategy conditional on their preferences.

Are instructions “poorly prepared”? For the experiments discussed that use the virtual-player method, the instructions are standard in experimental economics. The decision settings are presented using neutral language, effort is made to avoid context, and subjects go through simple exercises to assess their understanding of payoff computations. From our experience, the vast majority are quite capable of performing the necessary payoff calculations. Thus, while our instructions – and instructions for public goods experiments in general – are not necessarily poorly prepared, the inability of individuals to decipher the dominant strategy does suggest the need to modify how the game is explained. Responses from the post-experiment questionnaires we used, as well as behavior, suggest that at least 30 percent of respondents simply are not able to figure out the dominant strategy of zero contributions. Ferraro and Vossler report from a post-experiment focus group that just one-quarter of participants were able to figure out the dominant strategy by reading the instructions. This has important consequences for the external validity of the experiment unless one can show confusion has similar effects and magnitudes in “real world” contributions. We believe, however, that when faced with a naturally occurring contributions decision, people recognize the tension between privately beneficial free riding and socially beneficial contributions.

Our results thus call into question the standard, “context-free” instructions used in public goods games. Standard instructions for this type of experiment use neutral language and do not reveal that the experiment is about public goods or that participants are being asked to make a contributions-like decision. Indeed, the focus group of Ferraro and Vossler reveals that many participants thought they were playing some sort of assurance game.6 We share the sentiment of Loewenstein (1999, p. F30), who suggests, “Subjects may seem like zero intelligence agents when they are placed in the unfamiliar and abstract context of an experiment, even if they function quite adequately in familiar settings.”

Our experimental evidence suggests that a bit of context could go a long way. In particular, since many subjects cannot figure out the dominant strategy (but all our GHL experiment participants figured out the social optimum), perhaps we can clue them in without altering their preferences for the public good. For instance, we could explain to participants that we are asking them for voluntary contributions for a public good and that the public good is simply an amount of money that gets distributed throughout the group. Subjects can be informed that it is perfectly reasonable to give nothing.

As a pilot study, one of the authors used such context-enhanced instructions in a standard, ten-round VCM experiment run in two sections of an undergraduate environmental economics course at the University of Tennessee in September 2005. These instructions are available from the authors on request. The experiment was being used to illustrate the free-riding phenomenon (before the concept was formally introduced). After students read the instructions, but
before contribution decisions were made, the students were asked to write down the dominant strategy. Only three of 25 students (12 percent) failed to identify the dominant strategy of zero contributions (the mean response was 0.4 tokens). This figure is considerably below those from comparable, context-free experiments: 30 percent from the post-experiment questionnaire in our GHL experiment, and the estimate from Ferraro and Vossler that three-quarters could not deduce the dominant strategy prior to the experiment. The pattern of contributions is quite similar in both class sections: contributions start at about 50 percent and fall to 40 percent by round 10. This rate of decay is quite low for a VCM experiment, but the results are consistent with expectations based on our virtual-player treatment results. Confused, herding individuals are going to follow the group trend, and so any reduction in contributions in early rounds causes a downward spiral: conditional cooperators get an exacerbated signal of free riding and reduce their contributions, herders then further reduce theirs, and so on. Without the herders, the decay in average contributions over time should be relatively less steep.

While there are likely tradeoffs associated with adding even generic context, namely that it could systematically alter participant preferences for the welfare of others, it appears that investigation into instruction-based modifications is warranted. Consistent with our conjecture, the findings of Oxoby and Spraggon (this volume) suggest that confusion may also be reduced by providing a payoff table showing subjects’ payoffs given their decisions and the decisions of others. Note that the standard VCM instructions provide information only on the payoffs associated with each level of group contributions. The value of instruction enhancements could be tested using the virtual-player method, through survey questions with monetary rewards for correct answers, through debriefing sessions, and through external validity tests.

In conclusion, we believe that public goods experiments will continue to play an important role in testing economic theory and designing public policies. However, they cannot achieve their full potential as long as they are implemented in a way that leaves many subjects oblivious to the social dilemma that experimentalists are trying to induce. Without innovation in the design of these experiments, our ability to draw inferences about behavior in collective action situations, and about the effects of alternative institutional arrangements that induce private contributions to public goods, will continue to be impaired.
Notes
1 The two potential design flaws are: (1) human subjects in the computer condition observe their group members’ aggregate contribution before they make their decision in a round, as opposed to after they make their decision, as in the human condition; and (2) the automata contribute the average of what human condition members contributed. If the history of contributions affects both confused and other-regarding subjects, and if participants behave differently when the contributions of other players are known ex ante, then such changes in design affect the comparability of the two conditions. Indeed, the intent of our virtual-player method is simply to have participants play with virtual players and not change any other aspect of the game.
2 A similar use of virtual players was employed by Johnson et al. (2002) in a sequential bargaining game.
3 In one session, a graduate student was asked to participate as a last-minute measure to make the total number of participants divisible by four. This individual was subsequently dropped from the data set. Due to the nature of the game, the inclusion of this person should have no impact on the contribution level of the undergraduate participants.
4 Davis and Holt (1993, p. 332) define a “minimal profitable coalition” as “the smallest collection of participants for whom the return from contributions to the [public account] exceed the return from investing in the private [account].”
5 We only report their “VI” and “HI” treatments.
6 An Assurance Game (also known as the Stag Hunt) is a game in which there are two pure strategy equilibria and both players prefer one equilibrium (payoff dominant) to the other. The less desirable equilibrium, however, has a lower payoff variance over the other player’s strategies and thus is less risky (it is risk dominant). In the case of the VCM game, some subjects erroneously view contributing their entire endowment as the most desirable strategy when everyone else in the group contributes their endowments too. Subjects described this decision as “risky” because it leads to low payoffs if other players do not contribute their endowments. Contributing zero was viewed as a payoff-inferior choice but “less risky.” These subjects were unable to infer the dominant strategy in the VCM game.
References
Andreoni, J., 1988. Why Free Ride? Strategies and Learning in Public Goods Experiments. Journal of Public Economics, 37 (3), 291–304.
Andreoni, J., 1990. Impure Altruism and Donations to Public Goods: A Theory of Warm-Glow Giving. Economic Journal, 100 (401), 464–477.
Andreoni, J., 1995. Cooperation in Public-goods Experiments: Kindness or Confusion? American Economic Review, 85 (4), 891–904.
Brown-Kruse, J. and D. Hummels, 1993. Gender Effects in Laboratory Public Goods Contributions: Do Individuals Put their Money Where their Mouth Is? Journal of Economic Behavior and Organization, 22 (3), 255–268.
Cadsby, C.B. and E. Maynes, 1998. Gender and Free Riding in a Threshold Public Goods Game: Experimental Evidence. Journal of Economic Behavior and Organization, 34 (4), 603–620.
Clark, J., 2002. House Money Effects in Public Good Experiments. Experimental Economics, 5 (3), 223–237.
Cox, J. and V. Sadiraj, 2005. Social Preferences and Voluntary Contributions to Public Goods. Paper presented at the Conference on Public Experimental Economics, Georgia State University.
Davis, D.D. and C.A. Holt, 1993. Experimental Economics. Princeton, NJ: Princeton University Press.
Eckel, C. and P. Grossman, 2005. Differences in the Economic Decisions of Men and Women: Experimental Evidence. In Handbook of Experimental Results, Volume 1, edited by C. Plott and V. Smith. New York: Elsevier.
Ferraro, P.J. and C.A. Vossler, 2005. The Dynamics of Other-regarding Behavior and Confusion: What’s Really Going on in Voluntary Contributions Mechanism Experiments? Experimental Laboratory Working Paper Series #2005–001, Department of Economics, Andrew Young School of Policy Studies, Georgia State University.
Ferraro, P.J., D. Rondeau, and G.L. Poe, 2003. Detecting Other-regarding Behavior with Virtual Players. Journal of Economic Behavior and Organization, 51, 99–109.
Fischbacher, U. and S. Gächter, 2004. Heterogeneous Motivations and the Dynamics of Free Riding in Public Goods. Working Paper, Institute for Empirical Research in Economics, University of Zurich.
Fischbacher, U., S. Gächter, and E. Fehr, 2001. Are People Conditionally Cooperative? Evidence from a Public Goods Experiment. Economics Letters, 71 (3), 397–404.
Frey, B. and S. Meier, 2004. Social Comparisons and Pro-Social Behavior: Testing “Conditional Cooperation” in a Field Experiment. American Economic Review, 94 (5), 1717–1722.
Goeree, J., C. Holt, and S. Laury, 2002. Private Costs and Public Benefits: Unraveling the Effects of Altruism and Noisy Behavior. Journal of Public Economics, 83 (2), 255–276.
Houser, D. and R. Kurzban, 2002. Revisiting Kindness and Confusion in Public Goods Experiments. American Economic Review, 92 (4), 1062–1069.
Isaac, R.M., J. Walker, and S. Thomas, 1984. Divergent Evidence on Free Riding: An Experimental Examination of Possible Explanations. Public Choice, 43 (2), 113–149.
Johnson, E.J., C. Camerer, S. Sen, and T. Rymon, 2002. Detecting Failures of Backward Induction: Monitoring Information Search in Sequential Bargaining. Journal of Economic Theory, 104 (1), 16–47.
Laband, D.N. and R.O. Beil, 1999. Are Economists More Selfish than Other “Social” Scientists? Public Choice, 100 (1–2), 85–101.
Laury, S. and L. Taylor, Forthcoming. Altruism Spillovers: Does Laboratory Behavior Predict Altruism in the Field? Journal of Economic Behavior and Organization.
Loewenstein, G., 1999. Experimental Economics from the Vantage Point of Behavioural Economics. Economic Journal, 109 (453), F25–F34.
Marwell, G. and R.E. Ames, 1981. Economists Free Ride, Does Anyone Else? Journal of Public Economics, 15 (3), 295–310.
Nowell, C. and S. Tinkler, 1994. The Influence of Gender on the Provision of a Public Good. Journal of Economic Behavior and Organization, 25 (1), 25–36.
Oxoby, R.J. and J. Spraggon, 2007. The Effects of Recommended Play on Compliance with Ambient Pollution Instruments. In Experimental Methods in Environmental Economics, edited by T.L. Cherry, S. Kroll and J.F. Shogren (in this volume).
Palfrey, T.P. and J.E. Prisbrey, 1997. Anomalous Behavior in Public Goods Experiments: How Much and Why? American Economic Review, 87 (5), 829–846.
Yezer, A.M., R.S. Goldfarb, and P.J. Poppen, 1996. Does Studying Economics Discourage Cooperation? Watch What We Do, Not What We Say or How We Play. Journal of Economic Perspectives, 10 (1), 177–186.
11 Spies and swords
Behavior in environments with costly monitoring and sanctioning
Rob Moir
Introduction

Depletion of resources is a serious global concern. Fisheries on both Canadian coasts have, at various times, been closed and thousands of people put out of work. Excessive numbers of whale watchers have led to a reduction in the number of whales returning to – and consequently declining biodiversity of – the Bay of Fundy (Hawkins, 1998: B1). Groundwater in the American Midwest is used faster than it is replenished. Fuelwood lots and grazing lands suffer from severe depletion around the world. To understand that these issues are a problem, one need only look at the socio-economic impacts upon the relevant population of users. To understand how these issues became a problem, however, the interaction between individuals, groups, and the resource must be examined. In this chapter, the specific effects of costly monitoring of individuals and the role of endogenous sanctions are examined.

Common-pool resources (CPRs) like those above have a rich theoretical background (Gordon, 1954; Hardin, 1968; Bromley, 1992). Each resource is characterized by costly exclusion and subtractable resource use (Ostrom, 1992a). Despite Hardin’s dire “Tragedy of the Commons” prediction of complete resource depletion, some, though not all, CPRs are successfully managed through a system of communal ownership and management.1 This raises the question: why do only some communally managed CPRs generate the cooperation necessary to succeed?

Economic cooperation in games of collective action has been explained as an outcome of evolution (Bergstrom, 2003; Huberman and Glance, 1993), theories of repeated games (Kreps et al., 1982; Abreu, 1988; Seabright, 1993; Dutta and Sundaram, 1993), joint supply dependency (Hechter, 1987; Caputo and Lueck, 1994; Lueck, 1994), cultural norms (Platteau, 1994a&b; Sethi and Somanathan, 1996; Ostrom, 1998), and preferences for fairness (Fehr and Schmidt, 1999; Fehr and Gächter, 2000a). Institutional economics posits that rules and institutions can align individual self-interest with a more collective group goal. Ostrom, Gardner, and Walker (1994 – hereafter OGW) provide data from an economic experiment showing that the ability to impose welfare-reducing sanctions upon others reduces over-appropriation from a CPR.2 These sanctions are
welfare reducing as they are costly to both the individual imposing the sanction and the individual sanctioned, and the monies are removed from the economy.3

In this chapter, I focus quite narrowly upon the institutional topic of self-governance of CPRs – can rules guide a group of individuals to control their joint use of a CPR, and how do such rules affect CPR appropriation? I present the results from an experiment designed specifically to examine the separate and joint roles of costly monitoring of individuals and costly sanctioning in a CPR environment.4 In the seminal CPR experiment, OGW did not specifically include monitoring in their design. Monitoring, however, is a key design variable in organizing a communally managed irrigation system (Ostrom, 1992b).5 Monitoring has led to difficulties in formulating fishing policies in Chile (Peña-Torres, 1997), and is crucial in determining the degree of successful management of CPRs in Maine (Acheson, 1988), Japan (McKean, 1992), Niger (Thomson et al., 1992), Brazil (Cordell and McKean, 1992), and India (Wade, 1992; Blaikie et al., 1992).

The remainder of this chapter is structured in the following manner. Next, the results of relevant economic experiments are summarized. Particular attention is paid to related CPR experiments by OGW and Casari and Plott (2003) and to sanctioning experiments from the public goods literature (e.g., Fehr and Gächter, 2000b). Following this, the experiment design, treatment conditions, and predictions are outlined. The results are then presented, with the discussion and conclusions forming the final part of the chapter.
Previous experiments

In baseline CPR experiments – that is, experiments with no external rules or structures – subjects typically appropriate more than the Nash equilibrium prediction (Walker et al., 1990). With repetition, appropriation tends towards the Nash equilibrium but falls far short of socially optimal appropriation.6, 7

OGW (1994) systematically address institutional issues of self-governance of a CPR in an experiment. Sessions consist of a baseline treatment followed by a sanctioning treatment, permitting a within-subject analysis. Treatment variables include varying sanction costs, varying sanction sizes, varying the right to communicate, and allowing subjects to vote on the adoption of a sanctioning mechanism. Both sanctions and communication significantly reduce CPR appropriation, contrary to the game-theoretic prediction that neither treatment should cause a change. The effect of sanctioning is largest when groups are permitted to choose whether or not to adopt a sanction rule, and opt to do so. In these cases, cheating is minimal, few sanctions are imposed, and resource use is almost optimal.8

The OGW design raises several important methodological issues. First, sanctions are not explicitly “rules based” in that subjects could sanction each other even if no “rule” is broken.9 Second, sanction levels are fixed for a session, and subjects are permitted to sanction only one individual per period, no matter the number of offenders. While the choice to sanction is endogenous, the size of the sanction and the number of sanctions to impose is exogenous. Third, because
communication takes place face-to-face, it is impossible to separate social sanctioning from the direct effects of pecuniary sanctions. Fourth, subjects in the OGW experiment are presented with “rough” payoff tables, which are known to change behavior in public goods experiments (Saijo and Nakamura, 1995). Finally, the role that monitoring plays in the OGW experiment is not explored, as subjects freely receive information on individual investments in CPR appropriation at the end of each period but before sanctioning takes place. However, as described in Acheson (1988), monitoring is costly: either a person has to spend time watching the activities of others, or an independent monitor must be hired and paid for.

Schmitt et al. (2000) examine uncertain monitoring in a CPR environment. Eight people are formed into a group. Six of them engage in face-to-face communication before making contribution decisions, while the other two remain isolated from the group and each other. At the end of each period, subjects receive information on aggregate CPR appropriation. It is difficult for the six communicating members of a group to determine if departures from any agreements made in the communication phase are due to cheating from within the communicating subgroup or because of outsiders’ actions. Uncertainty in monitoring reduces the significant efficiency-enhancing effects of face-to-face communication exhibited in OGW.

Casari and Plott (2003) implement an OGW design to study the ‘Carte di Regola’ mechanism used to regulate CPR appropriation in the Italian Alps from the thirteenth to the nineteenth century. While their design is similar to both OGW and the design used in the present experiment, and implements a costly monitoring/fining rule, there are important differences (see Casari and Plott: Table 1, p. 220 for a summary). Most importantly, subjects are aware of group appropriation before choosing to monitor, monitoring and fining are modeled as a single activity, fines are fixed to be proportional to the deviation from an exogenously determined appropriation level, and the proceeds from the fines are returned to the individual doing the monitoring. Casari and Plott implement weak and strong versions of fines and compare these treatment results to a baseline condition. While both versions of fines increase efficiency, strong fines are more effective.

A number of public goods experiments focus solely on monitoring and typically find that monitoring improves voluntary contributions and efficiency (Palfrey and Rosenthal, 1994; Cason and Khan, 1999). This effect may be due to social sanctioning – the effect of having individual contributions and identity revealed to the group (Gächter and Fehr, 1999; Andreoni and Petrie, 2004). Holcomb and Nelson (1991 and 1997) examine the role of perfect versus imperfect monitoring in a duopoly environment and show that cartel arrangements break down only after significant evidence of overproduction in the imperfect monitoring case. In all of these experiments monitoring remains costless.

Likewise, monitoring is costless in public goods experiments with rewards and sanctions. While both rewards and sanctions increase contributions, sanctions
are more effective at increasing and sustaining contributions to the public good, but sanctioning costs reduce efficiencies to baseline levels (Fehr and Gächter, 2000b; Sefton et al., 2005; Walker and Halloran, 2004). Grouping subjects according to behavioral type influences net efficiency when sanctions are available (Yamagishi, 1986; Ones and Putterman, 2007).
The experiment

Design

The current experiment is based upon the OGW (1994) framework. A fixed number of subjects (n = 8) are endowed with a fixed number of tokens (w = 20) each period. Subjects simultaneously select how to invest their tokens in a private activity ($x_i$; referred to as Market 1), a CPR appropriation activity ($g_i$; referred to as Market 2), and in monitoring other individuals ($m_i$). It costs one token to obtain perfect information about the Market 2 investment of another individual in the current period.10 Let $m_{ij}$ represent subject i’s monitoring of subject j and $m_i$ be the sum of i’s monitoring activities in a period.11

Sanctions are expressed in laboratory dollars, and subjects are required to pay one-half the value of the sanction in order to levy it against another individual. Unlike OGW, the size of a particular sanction is endogenously chosen by the subject, as is the number of sanctions to levy in a particular period. Let $s_{ij}$ be subject i’s sanction of subject j and $s_i$ be the sum of i’s sanctioning activities in a given period, so that i’s sanctioning costs are equal to $0.5s_i$. Further define $\bar{s}_i$ as the sum of all sanctions levied against subject i in a period. With these terms in mind, the period payoff in lab-dollars to subject i is

$$\pi_i = 3.68x_i + 80g_i - 0.53g_iG - 0.5s_i - \bar{s}_i$$

where $G = \sum_i g_i$ is the aggregate investment in CPR appropriation. Subject i is constrained by his endowment, $w_i = x_i + g_i + m_i$. With homogeneous individuals, the aggregate payoff function is

$$\Pi = 3.68X + 80G - 0.53G^2 - 1.5S \qquad (11.1)$$

where capitalized terms are aggregates and the social budget constraint is $W = X + G + M$.12

Face-to-face communication, both repeated and one-shot, in collective action experiments greatly enhances efficiency. Communication permits coordination (though not necessarily at the optimal investment level), the creation of “rules”, and an opportunity for social sanctioning. Name calling, even if it is ambiguous (e.g. “some greedy idiot is ruining it for us all”), can lead to changes in behavior. Holcomb and Nelson (1991 and 1997) have subjects anonymously pass notes to their partner, while Isaac et al. (1985) provide subjects with the individual action necessary to achieve a socially optimal outcome.13
In order to mitigate social sanctions while still providing a focal point for the sanctioning rule, I adopt a variant of the Isaac et al. procedure. All sessions are divided into two parts. Subjects are informed that part 1 of the session lasts five periods, at which point new instructions describing part 2 are read aloud. After period five the following passage is read:

You may have noticed in part 1, that if the other members of the group restrict their allocation of tokens to Market 2, your payoff increases. In fact, if each member of the group restricts their allocation to Market 2 to 9 tokens, then the group payoff (yours plus everyone else’s) will be maximized. However, if everyone else is allocating 9 tokens to Market 2, then your payoff will increase if you allocate more tokens to Market 2.

Subjects are then informed that the remainder of the experiment will last ten periods. Similar to the OGW stage-game terminology, this information phase is denoted as “I” when describing the entire game.

Treatment conditions

In the baseline treatment, subjects are required to select how much to invest in Market 2 (CPR appropriation), and the remaining endowment is invested in Market 1. Using OGW terminology, I call this game “X”. The entire game is expressed by X, X, X, X, X, I, X, X, X, X, X, X, X, X, X, X. At the end of each period, a subject knows her own investment in Market 2, the aggregate investment in Market 2, and her period payoff.

In the monitoring treatment, subjects complete a monitoring phase (“M”) before making their investment decisions. Each subject selects the subject number of the individuals she wishes to monitor, knowing that for each individual she chooses to monitor she has one less token to use in making her own investment decision. After the monitoring stage, subjects choose their investments in Markets 1 and 2. The entire game is expressed by X, X, X, X, X, I, M-X, M-X, M-X, M-X, M-X, M-X, M-X, M-X, M-X, M-X. At the end of each period, a subject knows her own investment in Market 2, the aggregate investment in Market 2, her period payoff, and the Market 2 investment decisions of any individuals she chooses to monitor.

The sanctioning treatment follows the same procedure as the monitoring treatment described above. Prior to the sanctioning phase (identified as “S”), a subject knows her own investment in Market 2, the aggregate investment in Market 2, her period payoff (before sanctions), and the investment decisions of any individuals she chooses to monitor. In the sanctioning stage, the subject is permitted to sanction, at the known cost of one-half the value of the sanction, any individuals she monitors and catches investing more than nine tokens in Market 2. A subject can levy any size sanction against any number of offenders as long as her sanctioning costs do not exceed her current period payoff. Subjects cannot earn negative payoffs in a period unless they are
Spies and swords 217 sanctioned by others. These sanctioning rules are enforced by computer software. The entire game is expressed by, X, X, X, X, X, I, M-X-S, M-X-S, M-X-S, M-X-S, M-X-S, M-X-S, M-X-S, M-X-S, M-X-S, M-X-S. At the end of each sanctioning stage, a subject knows her own investment in Market 2, the aggregate investment in Market 2, her period payoff, the investment decisions of any individuals she chooses to monitor, any sanctions she imposes, and the total sanctions levied against every member in the group. Sanctions become public knowledge at the end of each period. Subjects are aware of the numerical identity of all sanctioned individuals and the amount each is sanctioned.14
Model solutions

The unique symmetric Nash equilibrium of the one-shot game X involves each subject investing 16 tokens in Market 2 and earning 209.28 lab-dollars per period. When the resource stock is time-independent and constant, the unique symmetric equilibrium of the one-shot game (X) is a symmetric subgame perfect equilibrium (SSPE) of the repeated game (Selten, 1973). Extending the game to include costly sanctioning does not alter the SSPE prediction, and zero sanctions are predicted (OGW, 1994; Sethi and Somanathan, 1996). It is a simple corollary that if monitoring is costly and necessary for sanctioning, then monitoring will not take place. Thus the SSPE in each of the three treatments is for each subject to invest 16 tokens in CPR appropriation. This result is invariant to the inclusion of the information phase that takes place after period five.

Resource appropriation at the SSPE exceeds the socially optimal level. At the symmetric social optimum (obtained by maximizing equation 11.1), each subject invests nine tokens in Market 2 and earns a period payoff of 417.04 lab-dollars.15 If seven subjects invest nine tokens each in Market 2, the remaining subject maximizes his period profit by investing all 20 tokens, for a payoff of 720.20 lab-dollars, thus making cooperation difficult to sustain.
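These benchmarks follow directly from the payoff function; the derivation, which the chapter does not spell out, is a one-line first-order condition:

```latex
% Substituting x_i = 20 - g_i (with no monitoring or sanctions) into
% \pi_i = 3.68 x_i + 80 g_i - 0.53 g_i G, where G = g_i + G_{-i}:
\begin{align*}
\frac{\partial \pi_i}{\partial g_i} &= -3.68 + 80 - 0.53\,(2g_i + G_{-i}) = 0
  \;\Rightarrow\; g_i = \frac{76.32}{0.53 \times 9} = 16
  \quad (\text{symmetry: } G_{-i} = 7g_i),\\[4pt]
\frac{\partial \Pi}{\partial G} &= -3.68 + 80 - 1.06\,G = 0
  \;\Rightarrow\; G = \frac{76.32}{1.06} = 72, \text{ i.e. } g_i = 9 \text{ each}.
\end{align*}
```

Evaluating the payoff function at these two points gives the 209.28 and 417.04 lab-dollar payoffs quoted above.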
Predictions

In this chapter, I focus upon aggregate results – analysis of individual data is left for another paper. Three summary variables are calculated and form the basis of comparison. First, define the change in CPR appropriation as

$$\Delta G_{kvt} = G_{kvt} - \text{avg}(G_{kv(1-5)}) \qquad (11.2)$$

where G is the aggregate investment in CPR appropriation, k = (B, M, or S) depending on whether it is a baseline, monitoring, or sanctioning session, v is the session number, and t > 5 is the period number. The final term, $\text{avg}(G_{kv(1-5)})$, is the average aggregate investment in the five periods prior to the communication phase. It “corrects” for any inherent cooperation within a group and concentrates upon the change in CPR appropriation because of the change in treatment. This is the primary variable for analysis.

Summary efficiency results are also presented. The net efficiency gain over Nash in each period is calculated as

$$NetE = (\Pi - \Pi^{*})/(\Pi^{O} - \Pi^{*}) \qquad (11.3)$$

where Π is the aggregate profit at the end of a period and the superscripts * and O refer to the aggregate profit realized at the Nash equilibrium and socially optimal levels of appropriation respectively.16 Gross efficiency adds back monitoring costs (3.68 lab-dollars per token invested), sanctioning costs, and the sanctions:

$$GrsE = ([\Pi + 1.5S + 3.68M] - \Pi^{*})/(\Pi^{O} - \Pi^{*}). \qquad (11.4)$$
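For concreteness, the three summary measures can be written as short functions (a sketch with variable names of our choosing, not the chapter’s code; the aggregate Nash and optimum profits are eight times the per-subject payoffs derived earlier):

```python
# Summary measures from equations 11.2-11.4 for n = 8 subjects.
PROFIT_NASH = 8 * 209.28   # aggregate profit at the Nash equilibrium
PROFIT_OPT = 8 * 417.04    # aggregate profit at the social optimum

def delta_g(g_t, g_first_five):
    """Eq. 11.2: period appropriation net of average pre-treatment appropriation."""
    return g_t - sum(g_first_five) / len(g_first_five)

def net_efficiency_gain(profit):
    """Eq. 11.3: fraction of the Nash-to-optimum profit gap actually realized."""
    return (profit - PROFIT_NASH) / (PROFIT_OPT - PROFIT_NASH)

def gross_efficiency_gain(profit, sanctions, monitoring_tokens):
    """Eq. 11.4: as eq. 11.3, but adding back sanction and monitoring costs."""
    adjusted = profit + 1.5 * sanctions + 3.68 * monitoring_tokens
    return (adjusted - PROFIT_NASH) / (PROFIT_OPT - PROFIT_NASH)
```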
$\Delta NetE_{kvt}$ and $\Delta GrsE_{kvt}$ are defined as in equation 11.2. Given the SSPE prediction expressed earlier, the within-treatment prediction is that the passage read before period six will not change CPR appropriation behavior as compared to the first five periods.

Prediction 1: The information phase introduced before period six does not significantly alter CPR appropriation from the average behavior in the first five periods.

The alternative hypothesis tested is that the treatment reduces CPR appropriation. This prediction is summarized in Table 11.1, column 1. Because the information phase occurred in all three treatments, it is necessary to compare across treatments to test for treatment effects.

Prediction 2: CPR appropriations are not affected by the addition of monitoring or monitoring in conjunction with sanctioning. Moreover, the addition of sanctioning to the monitoring game does not affect CPR appropriation.

The alternative hypotheses tested in these cases propose that the treatment reduces CPR appropriation and increases cooperation. These predictions are summarized in Table 11.1, columns 2 and 3 respectively.

The right to sanction has monetary value. Simply knowing that others can watch you may not affect your behavior; knowing that you may be punished if you are caught over-appropriating might. If sanctions serve to reduce CPR appropriation, then payoffs increase. Because monitoring is required in order to sanction, monitoring becomes more valuable in the sanctioning sessions.17 However, the SSPE prediction argues that neither monitoring nor sanctioning will take place.
Table 11.1 Group contribution predictions

              Within treatment       Vs. baseline                Sanctioning vs. monitoring
Baseline      H0: ∆G_Bvt = 0
              HA: ∆G_Bvt < 0
Monitoring    H0: ∆G_Mvt = 0         H0: ∆G_Mvt = ∆G_Bvt
              HA: ∆G_Mvt < 0         HA: ∆G_Mvt < ∆G_Bvt
Sanctioning   H0: ∆G_Svt = 0         H0: ∆G_Svt = ∆G_Bvt         H0: ∆G_Svt = ∆G_Mvt
              HA: ∆G_Svt < 0         HA: ∆G_Svt < ∆G_Bvt         HA: ∆G_Svt < ∆G_Mvt
Prediction 3: Total levels of monitoring in the monitoring-and-sanctioning sessions are no different from total monitoring levels in the monitoring-only sessions.

The alternative hypothesis tested is that the level of monitoring increases in the sanctioning sessions.
Procedure

In total, 120 students from McMaster University participated in 15 separate sessions – five groups of eight in each of three treatments. Eight subjects at a time were seated at sheltered computer terminals to ensure privacy and were aware that all members in the room were identical in terms of payoffs. However, as subjects were identified only by a number that was private knowledge, anonymity of all actions was ensured. Instructions for part one of the experiment were read aloud and questions were answered.18 Subjects were informed that there were five periods in part one of the session, after which a new set of instructions would be distributed. Subjects were also informed that their earnings were cumulated across both parts of the session.

In addition to a fully detailed payoff table that allowed subjects to determine the payoff associated with their own Market 2 investment and the combined investment decisions of all others, subjects were provided with a scenario calculator on their computer screen. Subjects could enter hypothetical values for both their own investment and the investments of others, and the resulting payoff was displayed.19 After period five, subjects were read the part two instructions and informed that after ten more periods the experiment would end.

At the end of the experiment subjects were paid privately according to the pre-announced exchange rate of 157 lab-dollars to $1 Canadian. If the Nash equilibrium were played each period, subjects would earn $20, whereas if the social optimum were reached each period, subjects would earn approximately $40. Average subject earnings for 1.5 hours of work were $24.25 (approximately double the minimum wage at the time of the experiment).
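The $20 and $40 benchmarks can be checked directly from the per-period payoffs (a quick sketch, assuming all 15 periods are paid at the stated exchange rate):

```python
# Benchmark earnings over the 15 periods, converted at 157 lab-dollars per CAD.
PERIODS, LAB_PER_CAD = 15, 157
print(round(PERIODS * 209.28 / LAB_PER_CAD, 2))  # Nash each period    -> 19.99, i.e. ~$20
print(round(PERIODS * 417.04 / LAB_PER_CAD, 2))  # optimum each period -> 39.84, i.e. ~$40
```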
Results

Summary results are presented in Table 11.2 and Figures 11.1 through 11.3. In Table 11.2 and Figure 11.1, it is evident that CPR appropriation starts between the Nash equilibrium and the social optimum and trends towards the Nash prediction over the first five periods. Despite an initial “information” effect in period six, the baseline and monitoring sessions quickly return to the Nash equilibrium. There is a significant reduction in the appropriation levels in the sanctioning sessions.

From period six onward, the monitoring treatment exhibits the lowest gross and net efficiency gains over Nash. The sanctioning treatment always shows a higher gross efficiency gain than the baseline after period six, a reflection of the decrease in CPR appropriation in the sanctioning sessions. However, once monitoring and sanctioning costs are accounted for, the net efficiency gains over Nash in the sanctioning sessions are not too different from those in the baseline treatment.

The SSPE prediction is that neither monitoring nor sanctioning will occur. Instead, it is evident in both the monitoring and sanctioning sessions that monitoring took place (see Table 11.2, columns headed by M). In some cases, more than ten tokens on average were devoted to monitoring in a single period. The average level of monitoring declines more quickly in the monitoring-only treatment than in the sanctioning treatment. The large difference between the gross and net efficiency measures in the sanctioning sessions (see Table 11.2 and Figures 11.2 and 11.3) indicates that sanctions were used extensively, even in the last period.
Figure 11.1 Aggregate CPR appropriations by treatment (panels: baseline; monitoring; monitoring and sanctioning; aggregate appropriation per period by group, with median spline).
Table 11.2 Summary data (averages across treatments)¹

          Baseline          Monitoring                        Sanctioning
Period    G²      GrsE³,⁴   G       M⁵     NetE     GrsE      G       M      NetE     GrsE
1         105.6   0.60      105.0   n/a    0.65     0.65      116.4   n/a    0.36     0.36
2         119.0   0.27      117.8   n/a    0.33     0.33      127.0   n/a    0.01     0.01
3         121.4   0.21      122.4   n/a    0.17     0.17      125.8   n/a    0.04     0.04
4         114.2   0.40      131.2   n/a    –0.13    –0.13     127.2   n/a    –0.02    –0.02
5         122.8   0.12      120.0   n/a    0.21     0.21      132.4   n/a    –0.22    –0.22
6         106.2   0.60      118.8   10.2   0.25     0.27      104.4   9.4    0.43     0.62
7         112.6   0.43      126.8   10.4   0.00     0.02      104.8   10.2   0.25     0.61
8         119.6   0.26      124.2   8.4    0.11     0.12      104.0   11.4   0.25     0.65
9         119.0   0.27      120.0   5.2    0.24     0.26      108.2   9.6    0.23     0.57
10        123.8   0.10      125.4   4.0    0.06     0.07      111.8   10.0   0.14     0.47
11        116.8   0.34      131.2   3.2    –0.13    –0.13     111.0   7.6    0.04     0.49
12        118.6   0.29      122.4   5.4    0.16     0.17      103.2   7.4    0.42     0.65
13        122.6   0.18      124.6   3.0    0.10     0.11      110.4   5.4    0.27     0.40
14        125.0   0.10      128.6   1.8    –0.04    –0.04     115.2   5.0    0.19     0.39
15        120.2   0.24      126.2   0.4    0.06     0.06      116.4   4.2    0.27     0.35

Notes
1 These values are the average values for the five sessions within each treatment.
2 Recall that the Nash prediction involves G = 128 and the social optimum involves G = 72.
3 The efficiency calculations (see equations 11.3 and 11.4) account for gains as a fraction of that possible over-and-above the Nash equilibrium.
4 NetE and GrsE are identical in the baseline treatment as monitoring and sanctioning are unavailable.
5 Monitoring was not available (n/a) in the first five periods of the monitoring and the sanctioning sessions.
Figure 11.2 Gross efficiency gain over Nash by treatment (panels: baseline; monitoring; monitoring and sanctioning; per-period gains by group, with median spline).
Figure 11.3 Net efficiency gain over Nash by treatment (panels: baseline; monitoring; monitoring and sanctioning; per-period gains by group, with median spline).
Thus, despite the fact that CPR appropriation levels were approximately Nash in the monitoring sessions, the SSPE prediction receives at best only moderate support. Moreover, broad support for the success of sanctioning in reducing CPR appropriation is evident.

The remaining results address the specific predictions from the previous section. Due to data limitations – small sample size, time dependence, and uncertain data distribution – the exact randomization test for differences in means (Moir, 1998) is used as the primary tool of analysis, as it imposes few distributional assumptions.20

Result 1: Other than a small initial effect in period six, CPR appropriation is not reduced in either the baseline or monitoring treatments. However, in the sanctioning treatment, CPR appropriation is significantly reduced.

This result is based upon the change in appropriation as defined in equation 11.2, which is net of any cooperation exhibited by the group in the first five periods. In each of the three cases, it is reasonable to expect that the introduction of the treatment (information; information and monitoring; information, monitoring, and sanctioning) might actually reduce CPR appropriation and increase efficiency. However, this seems to be the case only in the sanctioning treatment. Support for this result is presented in Table 11.3.

Consider the baseline treatment. While we can reject the null of no change in appropriation in favor of an alternative that predicts a decrease in appropriation in period six (p-value = 0.004), this result does not carry over when data, either averaged over periods six to 15 or from period 15 only, are considered (p-values of 0.702 and 0.742 respectively). Similar results are evident in the calculation of changes in gross efficiency gain over Nash (see Table 11.3). Simply informing the subjects of the optimal symmetric solution to the problem does not significantly affect their long-term behavior.
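With five session-level observations per treatment, the exact randomization test enumerates all C(10,5) = 252 ways of reassigning the pooled observations to two groups of five, so the smallest attainable one-sided p-value is 1/252 ≈ 0.004, which is exactly the value reported for the strongest results in Tables 11.3 and 11.4. A minimal sketch of the test (illustrative data, not the chapter’s code):

```python
# Exact randomization (permutation) test for a difference in means, one-sided.
from itertools import combinations

def exact_randomization_p(treatment, control):
    """P(mean difference as low as observed) over all label reassignments."""
    pooled = treatment + control
    n, k = len(pooled), len(treatment)
    observed = sum(treatment) / k - sum(control) / (n - k)
    splits = list(combinations(range(n), k))
    extreme = 0
    for idx in splits:
        group = [pooled[i] for i in idx]
        rest = [pooled[i] for i in range(n) if i not in idx]
        if sum(group) / k - sum(rest) / (n - k) <= observed:
            extreme += 1
    return extreme / len(splits)

# Hypothetical session-level Delta-G values, one per session (five per treatment):
sanctioning = [-20.0, -15.0, -18.0, -12.0, -19.0]
baseline = [2.0, -1.0, 3.0, 0.0, 1.0]
print(exact_randomization_p(sanctioning, baseline))  # 1/252 ~ 0.004
```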
Table 11.3 Within treatment comparisons

Null         Alternative    Period(s)    Baseline    Monitoring    Sanctioning
                                         p-value¹    p-value       p-value
∆G = 0       ∆G < 0         avg. 6–15    0.702       1.000         0.004
                            6            0.004       0.460         0.024
                            15           0.742       1.000         0.024
∆NetE = 0    ∆NetE > 0      avg. 6–15    –           1.000         0.004
                            6            –           0.460         0.024
                            15           –           0.980         0.103
∆GrsE = 0    ∆GrsE > 0      avg. 6–15    0.583       1.000         0.004
                            6            0.004       0.381         0.004
                            15           0.703       0.980         0.024

Note
1 The p-values were calculated using a one-sided exact randomization test for differences in means (Moir, 1998). In each pairwise comparison, five data points from each treatment were used – one for each session within a treatment. Very low p-values are associated with rejection of the null in favor of the alternative. Very high values suggest that an opposite alternative hypothesis is supportable.
Once monitoring is available, however, subject behavior changes, though surprisingly in the opposite direction to that hypothesized. CPR appropriation does not significantly decline in period six (p-value = 0.460). The average appropriation between periods six and 15 does not decline when compared to the first five periods (p-value = 1.000), nor does appropriation in the final period (p-value = 1.000). These exceptionally high p-values indicate that the opposite alternative hypothesis – that CPR appropriation would increase – is supportable. Again, similar results occur when using net or gross efficiency gain data. Monitoring alone increases appropriation from the CPR.

Sanctioning is effective in reducing CPR appropriation. On average, groups reduced their allocation to CPR appropriation by 16.8 tokens – over two tokens per subject. The p-values reported in Table 11.3 indicate that in all cases the null hypothesis of no change in appropriation is rejected in favor of the alternative that appropriation decreases (the highest p-value is 0.024). This result extends to the analysis of efficiency values. An interesting comparison can be made between the p-value of the change in net efficiency gain in period 15 (p-value = 0.103) and the p-value of the change in gross efficiency gain in the same period (p-value = 0.024). The considerably higher p-value for the change in net efficiency gain suggests that a large number of sanctions were levied, even in the last period when no further interaction is to take place. This certainly rejects the SSPE prediction and suggests that sanctioning might contain elements of punishment for offenses against moral norms.21

In summary, information on its own does not change CPR appropriation. Information combined with costly monitoring increases appropriation.
Information combined with costly monitoring and costly sanctioning decreases appropriation from the CPR. Given these results, the across-treatment results are not all that surprising.

Result 2: Monitoring, on its own, does not significantly reduce CPR appropriation when compared to the baseline treatment. Sanctioning, on the other hand, significantly reduces CPR appropriation when compared to either the baseline or monitoring sessions.

Support for this result is found in Table 11.4. Consider section 1 of Table 11.4 (monitoring vs. baseline). All p-values are in excess of 0.65; in other words, monitoring did not reduce appropriation beyond the reduction already evident in the baseline sessions.22 Monitoring, with no formal method of sanctioning, exhibits a detrimental effect in this CPR environment. Subjects did not seem to be motivated by a moral aversion to being caught cheating. Instead, monitoring enabled them to see what others were doing and react by increasing their own appropriation rates. Perhaps they felt justified in increasing their appropriation knowing that they were not the only over-appropriators. Maybe they did not want to be the only “loser”, cooperating and earning lower profits while everyone else gained. It is also possible subjects attempted to police the group by over-appropriating (allocating more tokens to Market 2) as a method of punishing non-cooperators, and that this signal went unheeded or caused others to reply in kind.
Table 11.4 Across treatment comparisons

Comparison                    Null            Alternative     Period(s)    p-value¹
Monitoring vs. baseline       ∆G_M = ∆G_B     ∆G_M < ∆G_B     avg. 6–15    0.837
                                                              6            0.929
                                                              15           0.659
Sanctioning vs. baseline      ∆G_S = ∆G_B     ∆G_S < ∆G_B     avg. 6–15    0.004
                                                              6            0.091
                                                              15           0.052
Sanctioning vs. monitoring    ∆G_S = ∆G_M     ∆G_S < ∆G_M     avg. 6–15    0.004
                                                              6            0.016
                                                              15           0.008
Sanctioning vs. monitoring    M_S = M_M       M_S > M_M       avg. 6–15    0.012
                                                              6            0.595
                                                              15           0.012

Note
1 The p-values were calculated using a one-sided exact randomization test for differences in means (Moir, 1998). In each pairwise comparison, five data points from each treatment were used – one for each session within a treatment. Very low p-values are associated with rejection of the null in favor of the alternative. Very high values suggest that an opposite alternative hypothesis is supportable.
Sanctioning significantly reduces CPR appropriation when compared to either the baseline or monitoring treatments.23 In sections 2 (sanctioning vs. baseline) and 3 (sanctioning vs. monitoring) of Table 11.4, all p-values are significant at the 10 percent level. This result is evident to a lesser degree when net efficiency gain is analyzed and is clearly evident when gross efficiency gain is analyzed (see Table 11.2). As compared to the monitoring treatments, subjects seem to concentrate less on over-appropriation and more on pecuniary punishment as a form of control in this environment. Punishments were not always trivial. Of the 50 periods in which sanctioning was available, 20 sanctions were levied that exceeded $1 (Canadian) in value.24 In one case a single subject received sanctions totaling over $9 in one period. In ten cases subjects were willing to absorb single-period sanction costs in excess of $1 (the largest being $2.79) to impose a sanction.

Result 3: Monitoring is used significantly more often in the sanctioning sessions than in the monitoring-only sessions.

Support for this result can be found in Table 11.4 (section 4) and Figure 11.4. Except for period six (in which the p-value is 0.595), we can reject the null of equal monitoring in favor of increased monitoring under the sanctioning treatment (p-value = 0.012). Under both treatments, subjects initially make approximately equal investments in monitoring. Likewise, there is a noticeable decline in monitoring in both treatments, but the decline is not as severe in the sanctioning sessions.
Figure 11.4 Group monitoring levels by treatment (panels: monitoring; monitoring and sanctioning; total monitoring per period by group, with median spline).
Discussion and conclusion

This experiment was designed to examine the role of costly monitoring and costly sanctioning as self-governance devices to control excessive use of a CPR. The design follows that of OGW (1994) with a few important exceptions. Specifically, the method of communication was altered, monitoring was costly, and sanctions were endogenously chosen within a framework of rules specified by the experimenter.

Using the information passage is not equivalent to the face-to-face communication that took place in OGW. In the baseline treatment, CPR appropriation rates approximate the Nash equilibrium prediction both before and after the reading of the passage. Face-to-face communication seems to have longer-lasting effects in OGW. To this extent, I believe that face-to-face communication has three basic effects:

i coordination (focus on a specific appropriation level or plan),
ii social sanctioning and reinforcement of moral norms, and
iii formulation of group solidarity or group pride.

In this experiment, the reading of the passage was meant to mimic (i) while avoiding (ii). The third effect can be thought of in the following way: because the group has put effort into finding an appropriation plan, each member may be more willing to cooperate with the plan. In an experiment, this may manifest itself as the subjects devising plans to get the most money from the researcher.25 It seems that the three elements in conjunction provide the cooperative effects we associate with communication.

Costly monitoring is an important element in modeling CPR appropriation behavior. It is empirically false to model monitoring as costless: villagers must give up time to patrol CPR use or must hire guards to do so. However, monitoring on its own is not enough, at least with this subject pool, to curtail over-appropriation from the CPR. In fact, in this experiment, monitoring alone leads to increased CPR appropriation. This means that the results of many of the collective action experiments in public goods and oligopolies that involve costless information about the decisions of others cannot be generalized as we once thought. Monitoring may alter people’s behavior, but the effect can lead to excessive competition if sanctions are not available.26

Finally, an enforceable sanctioning rule is an effective device in controlling CPR appropriation. Even in this harsh environment, in which cheating on group cooperation pays almost $2 Canadian per period, sanctioning leads to significant decreases in CPR appropriation. Despite theoretical claims that monitoring and sanctioning should not take place, subjects make extensive use of both devices. The large amount of sanctioning in this experiment, as compared to the OGW endogenous sanctioning result, may be attributed to a combination of costly monitoring and the lack of important elements of communication.
Nevertheless, the theoretical prediction that sanctioning is a useless governance device is inconsistent with experimental evidence and empirical observation, and should be altered. That said, the net efficiency gain over the Nash prediction is not significantly improved in the sanctioning treatment when compared to the baseline. While the resource is better managed, overall economic welfare is not necessarily improved.

An important policy implication can be drawn from these results. Monitoring, without the ability to impose sanctions, social or pecuniary, can lead to CPR appropriation rates that exceed levels observed when only minimal monitoring is available (e.g. when resource users can only observe aggregate extraction rates). The Pacific salmon fishery on the North American West Coast is a good example of just such a policy breakdown. In 1997 a dispute arose between Canada and the United States regarding conservation efforts, and this brought the Pacific Salmon Treaty into question. During the treaty negotiations Canadian and American fishers were able to monitor each other’s fishing effort (if not the actual catch itself), but with no treaty in place there were no rules to be broken and no principle for imposing sanctions. This resulted in a race for capture in both countries and severe depletion of the salmon stock.

There are further insights to be gathered both from this data and from future experiments. With the current data a more detailed analysis of individual appropriation, monitoring, and sanctioning behavior can be performed using a game-theoretic approach like that outlined in Weissing and Ostrom (1991 and 1993). When the environment is expanded to include costly monitoring or an enforced sanctioning rule, the strategies available to subjects increase in an almost exponential fashion. Many subjects commented that they cooperated long enough to establish a reputation, dumped all their tokens into CPR appropriation for one period, and then returned to cooperative behavior. These subjects used reputation to lull others into cooperative strategies and then earned high profits by defecting. Other subjects commented that they would have been more eager to cooperate had they had the chance to engage in face-to-face communication; according to these subjects, actions mediated by the computer and the experimenter remove the personal interaction. Still, the introduction of an enforced sanction rule did serve the purpose of decreasing CPR appropriation. Another line of research involves analyzing this data to see if the endogenously determined level of sanctions is increasing in the size of the offense.

Future experiments using the OGW design should look at how changes in the costs of monitoring and sanctioning affect resource appropriation. The software used for this experiment allows for these costs and the sanctioning rules to be modified on-the-fly, so subject-selected values can be implemented. This will allow subjects to select their governance devices under various constitutional orders and allow research to continue in the important field of institutional economics. These future studies have the potential to enhance understanding of CPR governance and, more generally, the governance of individual behavior in collective action environments.
Acknowledgments

I appreciate the helpful comments of Bram Cadsby, David Feeny, Stuart Mestelman, R. Andrew Muller, Elinor Ostrom, and Tatsuyoshi Saijo, as well as four anonymous referees. All remaining errors are my own.
Notes

1 Bromley et al. (1992) provides a large number of case studies that attest to both the successes and failures of communally managed CPRs. In particular, Berkes (1992) provides an interesting analysis of fisheries in Turkey, and is able to ascertain some of the variables that affect success.
2 Ostrom et al. (1994) summarize and expand upon experiment results first reported in Ostrom et al. (1992).
3 I use the word “sanction” to mean a punitive device, the value of which is not returned to the community. A good example of this would be fishers destroying the nets and traps of uncooperative individuals or outsiders (Acheson, 1988). I will use “fine” when any money is returned to the economy, either directly to the individual catching a transgressor or to the community at large.
4 As an extension of OGW, this chapter focuses specifically upon individuals monitoring other individuals, just as OGW permitted individuals to sanction each other. Acheson (1988) describes a lobster fishery in Maine in which fishermen self-police; to a certain extent, they observe each other’s fishing behavior and mete out punishment as required. A different line of research examines the role of collective monitoring when individual actions are too costly to observe (e.g. Taylor et al., 2004).
5 In Ostrom (1992b: 69–76), monitoring is listed as a key design principle, distinct from sanctioning. There are three main reasons for this. First, monitoring is costly in terms of time and resources. Second, monitoring may be a difficult procedure if the damage from overuse is not immediately apparent. Third, monitoring allows individual transgressors to be caught and possibly punished. It is possible to design Dr. Strangelove doomsday rules that involve group-wide punishment for any infraction (e.g. all boats are destroyed if anyone overfishes), but such rules are not fair, and may lead to highly inefficient outcomes if the punishment is ever used.
6 This result is contrary to the general finding in voluntary contributions to public goods experiments, where subjects seem to show cooperation in excess of the Nash equilibrium prediction but, over time, collapse towards the Nash equilibrium. This difference in results may be due to the differences in the externality – public good experiments have a positive externality while CPR experiments have a negative externality – or because of income effects (Ledyard, 1995). Ito et al. (1995) suggest that subjects may be interested in share maximization or difference maximization. Another possible explanation may be that the difference in payoffs at the Nash equilibrium and the social optimum has been large in public good experiments whereas in CPR experiments it has typically been small. This hypothesis has not been tested.
7 A broader range of CPR experiments are summarized in Moir (1996), Ostrom (1998), and an earlier version of this chapter. Here I concentrate upon collective action experiments in which sanctioning and/or monitoring play a significant role.
8 Subjects in this final treatment were drawn from a pool of subjects experienced with an imposed sanctioning rule. It is interesting to note that in the sessions in which subjects chose to adopt a sanctioning rule and set its level, the sanctions were rarely imposed. Nonetheless, cooperation remained high. This suggests that not only is the ability to sanction important, but how rules arise through group discussion (e.g. the agreement upon an appropriation level and the selection of a sanctioning scheme) also affects the outcome.
9 OGW state “[t]here is a nontrivial amount of sanctioning that can be classified as error, lagged punishment, or ‘blind’ revenge” (p. 176) in their exogenously imposed sanctioning, no-communication sessions. Similar instances occurred in communication sessions (pp. 188–189) and were concerns expressed by subjects (p. 190).
10 The cost of monitoring is the investment opportunity forgone. Both the Nash equilibrium and the socially optimal solution to this problem are independent of endowment, so the “cost” of monitoring is equal to the return from investment in the private market (3.68 lab-dollars).
11 With seven other individuals, a person could devote up to seven tokens to monitoring. However, because subjects were always told the aggregate investment in CPR appropriation (G) and knew their own appropriation (g_i), then with only six tokens devoted to monitoring they could deduce the appropriation of the remaining individual.
12 Solving equation 11.1 for A = 0 when M = S = 0 implies that G > 151. Thus when G > 151, aggregate profits are negative and the resource is depleted. Although total endowment is 160 and would permit negative aggregate profits, the largest subset of individuals (seven) commands only 140 tokens and thus cannot cause aggregate profits to be negative on its own. OGW results suggest that when a subset of individuals can cause negative aggregate profits, the disequilibrium effects are large and subjects become extremely competitive over CPR appropriation.
13 Holcomb and Nelson, in a duopoly environment, find that this note passing facilitates tacit collusion between subjects, whereas Isaac et al., in a public good environment, find that cooperation is not noticeably enhanced.
14 This is a reasonable model of society. A lobster fisherman may be unaware of infractions upon another boat (monitoring information is private), but cut trap lines and the severity of the damage to equipment will quickly become public knowledge.
15 Unlike the OGW design, here the symmetric optimal individual investment is an integer.
16 The calculation of the benchmark values A^O and A* assumes that M = S = 0.
17 Seabright (1993: 121) argues that history-dependent threats may allow groups to achieve efficient outcomes, “but if the outcomes are achieved, the threats do not need to be exercised, so we may never see any history-dependence in observed behavior”. Devices that embody such threats (e.g. monitoring and sanctioning) may thus serve to enforce efficient behavior while remaining relatively unused. Fehr and Schmidt (1999) suggest that a model of inequity aversion could explain why sanctions are used and why they improve efficiency. Fehr and Gächter (2000a) suggest that positive and negative reciprocity can explain the use of sanctions, while Fehr and Falk (2002) extend the model to include an agent’s desire to avoid social disapproval (or gain social approval). Cooper and Stockman (2002) conduct a sequential contribution step-level public good experiment. They suggest that the latter three behavioral models do not explain their data.
18 Instructions are available from the author upon request.
19 This experiment was conducted at the McMaster Experimental Economic Laboratory (Online, available at: //socserv.mcmaster.ca/econ/mceel/home.htm) using a Novell network and the Windows operating system. The software was designed by the author and Andrew Muller, using the Delphi programming language.
20 These tests were conducted using software that allows me to calculate p-values for one-tail hypothesis testing. When testing against zero, a second sample of five observations, all equal to zero, was used. The lowest possible p-value is 0.004, which means that all observations “agreed with” the alternative hypothesis. The highest possible p-value of 1.000 implies that all observations directly contradicted the alternative hypothesis. For instance, if the alternative hypothesis was that appropriation rates should fall, a p-value of one indicates that all observations in the sample show an increase in appropriation rates. Strictly speaking, we fail to reject the null in these cases. However, it suggests that the opposite alternative hypothesis, directly opposing the one actually tested, could be supported.
21 Subjects were aware that the experiment would finish at the end of period 15. Given that sanctions are costly, any sanction a subject levies will reduce his cumulative profit and cannot lead to future cooperation in this game, as it has ended. Perhaps this result is further evidence of the limitation of simple game theory in ultimatum games (see Roth, 1995 for a survey).
22 In addition to the exact randomization test, a parametric regression, G_t = β_0 + β_1 M + β_2 S + β_3 t + ε_t, was conducted in which M = 1 for monitoring-only sessions (and 0 otherwise), S = 1 for sanctioning sessions (and 0 otherwise), and t = {1, 2, . . ., 10} is the period number normalized at 1 for period six. The monitoring coefficient is positive (3.70) and significant (p-value = 0.053) while the sanctioning coefficient is negative (−18.66) and significant (p-value = 0.000). The adjusted R² for this cross-sectional time series is 0.536. Additional information can be found in Moir (1996).
23 The difference in the payoff function between this experiment and OGW leads to an important difference in the gains from cheating when all others are investing in CPR appropriation at the optimal level. In OGW, a payoff-maximizing reaction leads to a potential increase in earnings of $1.10 in a period, while in this design potential earnings increase by almost $2.00 in a period. The monetary gain to cheating is greater in this experiment and yet sanctioning is still effective.
24 To put this in perspective, recall that the average subject payoff using the results of all subjects in all sessions was $24.25, or approximately $1.62 per period. A sanction of $1 reduces this average period payoff by almost 62 percent.
25 This issue was pointed out to me by Elinor Ostrom at a meeting of the Economic Science Association. It was further reinforced by the comments of a subject who was participating in an unrelated experiment at McMaster. Her comments amounted to (I paraphrase), “I am glad we talked in the hall without the experimenter. It became a challenge to see how much money we could take from the experimenter.” Communication clearly led to group solidarity in this instance.
26 This may have a detrimental effect in CPR or public good environments, as it may increase free-riding behavior, but it may have a socially desirable effect in the case of oligopolies by increasing output and reducing prices (Holcomb and Nelson, 1991 and 1997).
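To make the mechanics of the exact randomization test in note 20 concrete, the following minimal sketch (in Python) enumerates all label reassignments for two samples of five and computes a one-tailed p-value. The data values are invented for illustration; only the design (five session-level observations tested against a second sample of five zeros, with a minimum attainable p-value of 1/252 ≈ 0.004) follows the note.

from itertools import combinations

def exact_randomization_pvalue(sample_a, sample_b):
    """One-tailed exact randomization (Fisher) test on the difference in
    group means: the p-value is the share of all label reassignments
    whose mean difference is at least as large as the observed one."""
    pooled = sample_a + sample_b
    n, k = len(pooled), len(sample_a)
    observed = sum(sample_a) / k - sum(sample_b) / (n - k)
    at_least_as_large = 0
    total = 0
    for idx in combinations(range(n), k):
        group_a = [pooled[i] for i in idx]
        group_b = [pooled[i] for i in range(n) if i not in idx]
        diff = sum(group_a) / k - sum(group_b) / (n - k)
        if diff >= observed:
            at_least_as_large += 1
        total += 1
    return at_least_as_large / total

# Illustrative (not Moir's) session-level changes, tested against zeros.
changes = [4.0, 7.5, 2.0, 5.5, 3.0]
print(exact_randomization_pvalue(changes, [0.0] * 5))  # 1/252, about 0.004

With five observations per group there are C(10, 5) = 252 reassignments, which is why 1/252 ≈ 0.004 is the smallest p-value the design can produce.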
References

Abreu, D., 1988. On the theory of infinitely repeated games with discounting. Econometrica, 56 (2), 383–396.
Acheson, J.M., 1988. The Lobster Gangs of Maine. Hanover, NH: University Press of New England.
Andreoni, J. and Petrie, R., 2004. Public goods experiments without confidentiality: a glimpse into fund-raising. Journal of Public Economics, 88 (7–8), 1605–1623.
Bergstrom, T., 2003. The algebra of assortative encounters and the evolution of cooperation. International Game Theory Review, 5 (3), 211–228.
Berkes, F., 1992. Success and failure in marine coastal fisheries of Turkey. In: D.W. Bromley, D. Feeny, M. McKean, P. Peters, J. Gilles, R. Oakerson, C.F. Runge, and J. Thomson (eds.) Making the Commons Work: Theory, Practice and Policy. San Francisco, CA: Institute for Contemporary Studies Press, 161–182.
Blaikie, P., Harriss, J., and Pain, A., 1992. The management and use of common-property resources in Tamil Nadu, India. In: D.W. Bromley, D. Feeny, M. McKean, P. Peters, J. Gilles, R. Oakerson, C.F. Runge, and J. Thomson (eds.) Making the Commons Work: Theory, Practice and Policy. San Francisco, CA: Institute for Contemporary Studies Press, 247–264.
Bromley, D.W., 1992. The commons, property, and common-property regimes. In: D.W. Bromley, D. Feeny, M. McKean, P. Peters, J. Gilles, R. Oakerson, C.F. Runge, and J. Thomson (eds.) Making the Commons Work: Theory, Practice and Policy. San Francisco, CA: Institute for Contemporary Studies Press, 3–15.
Bromley, D.W., Feeny, D., McKean, M., Peters, P., Gilles, J., Oakerson, R., Runge, C.F., and Thomson, J. (eds.), 1992. Making the Commons Work: Theory, Practice and Policy. San Francisco, CA: Institute for Contemporary Studies Press.
Caputo, M. and Lueck, D., 1994. Modeling common property ownership as a dynamic contract. Natural Resource Modeling, 8 (3), 225–245.
Casari, M. and Plott, C.R., 2003. Decentralized management of common property resources: experiments with a centuries-old institution. Journal of Economic Behavior and Organization, 51 (2), 217–247.
Cason, T. and Kahn, F., 1999. A laboratory study of voluntary public goods provision with imperfect monitoring and communication. Journal of Development Economics, 58 (2), 533–552.
Cooper, D.J. and Stockman, C.K., 2002. Learning to punish: experimental evidence from a sequential step-level public goods game. Experimental Economics, 5 (1), 39–51.
Cordell, J. and McKean, M., 1992. Sea tenure in Bahia, Brazil. In: D.W. Bromley, D. Feeny, M. McKean, P. Peters, J. Gilles, R. Oakerson, C.F. Runge, and J. Thomson (eds.) Making the Commons Work: Theory, Practice and Policy. San Francisco, CA: Institute for Contemporary Studies Press, 183–205.
Dutta, P.K. and Sundaram, R.K., 1993. The tragedy of the commons? Economic Theory, 3 (3), 413–426.
Fehr, E. and Schmidt, K.M., 1999. A theory of fairness, competition and cooperation. Quarterly Journal of Economics, 114 (3), 817–868.
Fehr, E. and Gächter, S., 2000a. Fairness and retaliation: the economics of reciprocity. Journal of Economic Perspectives, 14 (3), 159–181.
Fehr, E. and Gächter, S., 2000b. Cooperation and punishment in public goods experiments. American Economic Review, 90 (4), 980–994.
Fehr, E. and Falk, A., 2002. Psychological foundations of incentives. European Economic Review, 46 (4–5), 687–724.
Gächter, S. and Fehr, E., 1999. Collective action as a social exchange. Journal of Economic Behavior and Organization, 39 (4), 341–369.
Gordon, S., 1954. The economic theory of a common-property resource: the fishery. Journal of Political Economy, 62 (2), 124–142.
Hardin, G., 1968. The tragedy of the commons. Science, 162 (December), 1343–1348.
Hawkins, M., 1998. Too many chasers. Times Globe (Saint John, NB), 28 August, p. B1.
Hechter, M., 1987. Principles of Group Solidarity. Berkeley, CA: University of California Press.
Holcomb, J.H. and Nelson, P.S., 1991. Cartel failure: a mistake or do they do it to each other on purpose? Journal of Socio-Economics, 20 (3), 235–249.
Holcomb, J.H. and Nelson, P.S., 1997. The role of monitoring in duopoly market outcomes. Journal of Socio-Economics, 26 (1), 79–94.
Huberman, B.A. and Glance, N.S., 1993. Diversity and collective action. In: H. Haken and A. Mikhailov (eds.) Interdisciplinary Approaches to Nonlinear Systems. New York, NY: Springer Press, 44–64.
Isaac, R.M., McCue, K.F., and Plott, C.R., 1985. Public goods provision in an experimental environment. Journal of Public Economics, 26 (1), 51–74.
Ito, M., Saijo, T., and Une, M., 1995. The tragedy of the commons revisited: identifying behavioral principles. Journal of Economic Behavior and Organization, 28 (3), 311–335.
Kreps, D., Milgrom, P., Roberts, J., and Wilson, R., 1982. Rational cooperation in the finitely repeated prisoner’s dilemma. Journal of Economic Theory, 27 (2), 245–252.
Ledyard, J., 1995. Public goods: a survey of experimental research. In: J.H. Kagel and A.E. Roth (eds.) The Handbook of Experimental Economics. Princeton, NJ: Princeton University Press, 111–194.
Lueck, D., 1994. Common property as an egalitarian share contract. Journal of Economic Behavior and Organization, 25 (1), 93–108.
McKean, M., 1992. Traditional commons land in Japan. In: D.W. Bromley, D. Feeny, M. McKean, P. Peters, J. Gilles, R. Oakerson, C.F. Runge, and J. Thomson (eds.) Making the Commons Work: Theory, Practice and Policy. San Francisco, CA: Institute for Contemporary Studies Press, 63–98.
Moir, R., 1996. The analysis of cooperation in collective action games. PhD Thesis. Hamilton, ON: McMaster University.
Moir, R., 1998. A Monte-Carlo analysis of the Fisher randomization technique: reviving randomization for experimental economists. Experimental Economics, 1 (1), 87–100.
Ones, U. and Putterman, L., 2007. The ecology of collective action: a public goods and sanctions experiment with controlled group formation. Journal of Economic Behavior and Organization, 62 (4), 495–521.
Ostrom, E., 1992a. The rudiments of a theory of the origins, survival and performance of common-property institutions. In: D.W. Bromley, D. Feeny, M. McKean, P. Peters, J. Gilles, R. Oakerson, C.F. Runge, and J. Thomson (eds.) Making the Commons Work: Theory, Practice and Policy. San Francisco, CA: Institute for Contemporary Studies Press, 293–318.
Ostrom, E., 1992b. Crafting Institutions for Self-Governing Irrigation Systems. San Francisco, CA: Institute for Contemporary Studies Press.
Ostrom, E., 1998. A behavioral approach to the rational choice theory of collective action. American Political Science Review, 92 (1), 1–22.
Ostrom, E., Walker, J., and Gardner, R., 1992. Covenants with and without a sword: self-governance is possible. American Political Science Review, 86 (2), 404–417.
Ostrom, E., Gardner, R., and Walker, J., 1994. Rules, Games, and Common-pool Resources. Ann Arbor, MI: University of Michigan Press.
Palfrey, T.R. and Rosenthal, H., 1994. Repeated play, cooperation and coordination: an experimental study. Review of Economic Studies, 61 (3), 545–565.
Peña-Torres, J., 1997. The political economy of fishing regulation: the case of Chile. Marine Resource Economics, 12 (4), 253–280.
Platteau, J.P., 1994a. Behind the market stage where real societies exist – Part I: the role of public and private order institutions. Journal of Development Studies, 30 (3), 533–577.
Platteau, J.P., 1994b. Behind the market stage where real societies exist – Part II: the role of moral norms. Journal of Development Studies, 30 (4), 753–817.
Roth, A.E., 1995. Bargaining experiments. In: J.H. Kagel and A.E. Roth (eds.) The Handbook of Experimental Economics. Princeton, NJ: Princeton University Press, 253–348.
Saijo, T. and Nakamura, H., 1995. The “spite” dilemma in voluntary contribution mechanism experiments. Journal of Conflict Resolution, 39 (3), 535–560.
Schmitt, P., Swope, K., and Walker, J., 2000. Collective action with incomplete commitment: experimental evidence. Southern Economic Journal, 66 (4), 829–854.
Seabright, P., 1993. Managing local commons: theoretical issues in incentive design. Journal of Economic Perspectives, 7 (4), 113–134.
Sefton, M., Shupp, R., and Walker, J., 2005. The effect of rewards and sanctions in provision of public goods. Manuscript: Ball State University. Online, available at: http://web.bsu.edu/cob/econ/research/papers/ecwps054shupp.pdf.
Selten, R., 1973. A simple model of imperfect competition where 4 are few and 6 are many. International Journal of Game Theory, 2 (3), 141–201.
Sethi, R. and Somanathan, E., 1996. The evolution of social norms in common property resource use. American Economic Review, 86 (4), 766–788.
Taylor, M.A., Sohngen, B., Randall, A., and Pushkarskaya, H., 2004. Group contracts for voluntary nonpoint source pollution: evidence from experimental auctions. American Journal of Agricultural Economics, 86 (5), 1196–1202.
Thomson, J., Feeny, D., and Oakerson, R., 1992. Institutional dynamics: the evolution and dissolution of common-property resource management. In: D.W. Bromley, D. Feeny, M. McKean, P. Peters, J. Gilles, R. Oakerson, C.F. Runge, and J. Thomson (eds.) Making the Commons Work: Theory, Practice and Policy. San Francisco, CA: Institute for Contemporary Studies Press, 129–160.
Wade, R., 1992. Resource management in South Indian villages. In: D.W. Bromley, D. Feeny, M. McKean, P. Peters, J. Gilles, R. Oakerson, C.F. Runge, and J. Thomson (eds.) Making the Commons Work: Theory, Practice and Policy. San Francisco, CA: Institute for Contemporary Studies Press, 207–228.
Walker, J., Gardner, R., and Ostrom, E., 1990. Rent dissipation in a limited-access common-pool resource: experimental evidence. Journal of Environmental Economics and Management, 19 (3), 203–211.
Walker, J.M. and Halloran, M.A., 2004. Rewards and sanctions and the provision of public goods in one-shot settings. Experimental Economics, 7 (3), 235–247.
Weissing, F. and Ostrom, E., 1991. Irrigation institutions and the games irrigators play: rule enforcement without guards. In: R. Selten (ed.) Game Equilibrium Models II: Methods, Morals, and Markets. Berlin: Springer-Verlag, 188–262.
Weissing, F. and Ostrom, E., 1993. Irrigation institutions and the games irrigators play: rule enforcement on government- and farmer-managed systems. In: F.W. Scharpf (ed.) Games in Hierarchies and Networks: Analytical and Empirical Approaches to the Study of Governance Institutions. Boulder, CO: Westview Press, 387–428.
Yamagishi, T., 1986. The provision of a sanctioning system as a public good. Journal of Personality and Social Psychology, 51 (1), 110–116.
12 Discussion
Common property and public goods
Catherine L. Kling
Part II of this book, entitled “Common property and public goods,” contains five chapters that employ experimental methods to study how alternative incentives with respect to public goods and common-pool resources affect the behavior of subjects. How useful are the findings from these chapters in advancing the field of environmental economics? It is my task to take a stab at this question and to opine, more generally, on ways in which experimental studies could be adapted to be more useful.

As an environmental economist with a strong interest in non-market valuation, I find a fun paradox in this question, as it appears to me that some of the earliest work in experimental economics was done by environmental economists and that this early body of work identified some key issues that helped spawn the behavioral economics literature. After all, what were the early contingent valuation studies (a method that was firmly established as an important tool by the mid-1970s) but experimental studies in the field? The WTP/WTA divergence was first observed in these studies (for example Brown and Hammack 1973); further, as early as 1979, Bishop and Heberlein were experimenting with real money payments, and numerous environmental economists were studying interviewer effects, sequencing, alternative payment vehicles, and the effects of many other institutional features of markets and the environmental goods being valued. Perhaps it is appropriate that we have come full circle to ask what the current research in experimental economics is contributing to the field of environmental economics and how those contributions might be further strengthened.

In considering this question, it is surely important to remind ourselves that there are many ways in which we can learn from any one study and that any one study rarely provides all the answers to any question. Indeed, some experimental findings may have very immediate and direct policy relevance; others provide insight into responses to incentives under specific institutions, but only in a highly controlled environment that will be hard to extrapolate beyond the experimental setting. Nonetheless such contributions help identify key drivers, which may ultimately have direct policy relevance. The chapters in this part provide a rich base under which to consider this continuum.

Capra and Tanaka take us to the world of decision making in an explicitly dynamic setting in the purview of a classic common property problem. They
motivate their experiment as being relevant to the case of a renewable fishery where the current period’s fish stock is a function of last period’s stock in a discontinuous manner: if the previous period’s stock is less than 31, there is less than full replacement, and the stock size and consumption potential decline in the next period. Above 31, the stock size increases and consumption potential increases commensurately.1 Interestingly, they seek to assess the type of communication that is most effective in solving the common property problem. By analyzing the script of a chat room that allowed communication between the five subjects in each session, they conclude that three forms of communication were necessary to achieve the optimum: (1) expression of awareness of the threshold effect, (2) recognition of this by the majority of the subjects, and (3) acknowledgment of success.

What do we learn from this experimental setting for regulation in fisheries or other common property resources? The focus of this work on the form and content of the communication between the subjects mirrors the important literature in fisheries related to information sharing (or lack thereof!) and control. The authors have indeed focused on a question that resource economists understand to be important. However, in addition to the small size of the subject pool (it would be a rare commercial fishery with only five fishermen), there is the difficulty of interpreting causation in their work. While they make a compelling case that when subjects solve the coordination problem, there is increased communication of the sort they identify, it is not clear whether this communication is the cause or the result of this successful coordination. Indeed, the third identified category, acknowledgment of success, can only occur after successful cooperation and seems quite likely to be directly caused by that success.

Rob Moir also studies the behavior of subjects deciding how much of a common property resource to exploit in a dynamic setting. In his experimental design, however, subjects cannot communicate, but in one treatment they can expend resources to learn about the level of use of the resource by other individuals in the experiment, and in another treatment subjects can monitor individuals and punish (sanction) those whom they have monitored. He finds evidence that monitoring without sanctioning does not move the group towards social efficiency (in fact it moves further towards the Nash solution), but that monitoring in conjunction with sanctioning does.

Moir argues that these results are relevant for fisheries, using the Pacific salmon fishery as an example. Citing a dispute between the US and Canada that resulted in non-enforcement of the relevant treaty, he argues that anglers monitored one another but were not able to impose sanctions, and that significant overfishing occurred as a result. While this analogy is compelling, one would not want to jump to the conclusion that a policy recommendation of enabling anglers to punish one another is the solution. However, the identification of the lack of consequences associated with monitoring as a key problem does help us identify the sources of market failure and the directions that effective policy
could take: for example, don’t bother monitoring if you aren’t going to put some teeth behind it.

Cherry and Dickinson study the consequences of offering subjects multiple public goods in a voluntary contribution mechanism (VCM), with an eye toward how such an offering affects the total amount of public good provision. In one treatment, they replicate the standard finding that positive, but suboptimal, public good provision comes from a VCM; in a second treatment they offer three identical public goods (the “multiple homogeneous” treatment); and in a third treatment, subjects can contribute to three different public goods (that differ in the marginal benefits of contribution). They find that offering multiple public goods increases the total contribution to that public good.

What does such an experiment teach us about the design of efficient provision mechanisms for public goods? The authors indicate that:

Our results indicate that more options for otherwise similar public goods will increase total dollar contributions towards the larger cause. For example, multiple options for providing relief following an environmental disaster is predicted to increase the total amount of voluntary giving among private citizens.

In my view, this conclusion reaches too far for at least two reasons. First, they are comparing one public good with three identical public goods: there is no way to know whether the effect they find will be of the same sign when moving from a baseline other than one. Diminishing total contributions might set in when moving from three to four, six to seven, or any other baseline level of provision. Second, recall that the “public good” studied in this context is the contribution of a token that returns 0.6 of its value to each of the group’s members. If there is one thing that we have learned in over 30 years of non-market valuation studies with respect to the value of public goods, it is that context matters. The value of a change in the provision of a public good varies with its initial condition; the extent, form, and specific type of change proposed; the payment mechanism used to fund the provision; the number and quality of substitutes available; the time needed for the change to occur; etc. To conclude that what is learned in the context of a single experiment for a good that is quite sterile in its context will apply uniformly to all situations of public good provision overreaches.

Having thus criticized the conclusions, I hasten to add that the results are likely to hold in some cases and for some/many goods. Indeed, they are quite consistent with the simple observation that most charitable causes do indeed have multiple options for provision – there are multiple charitable organizations that collect funds for cancer research, homeless populations, aid to Africa, preservation of natural habitats, diabetes research, support for endangered species, and almost any charitable cause that comes to mind.

Cotten et al. directly raise and assess a potential limitation concerning the relevance of experimental findings regarding voluntary contribution
mechanisms for environmental policy. They consider the disturbing question of whether subjects are truly responding to the incentives of the experimental setting or are merely confused. Their experimental design, which randomly matches some human subjects with a preprogrammed computer that will not earn rewards for anyone, provides striking findings supporting the contention that contributions to the public good in this context cannot be fully explained by “other-regarding” behavior. Their inference that this unexplained behavior is due to “confusion” may be correct, but it may also be that other explanations can account for it. For example, it may be that this is really a test of whether the incentives are large enough to motivate subjects to consider seriously the consequences of their actions. While the authors note that their payment levels are in line with typical wage rates for students, this alone does not prove that the rewards are high enough. Indeed, the fact that subjects do not respond to the incentive in an apparently logical way could be interpreted as prima facie evidence that inadequate incentives exist for full engagement in the activity. As other researchers consider these questions, additional possible explanations may arise; nonetheless, the study raises considerable concern about the ability to apply directly the findings of public good experiments to environmental policy.

In a study using international emissions abatement as the motivating public good, Sturm and Weimann construct an experiment to test whether laboratory subjects behave as predicted by Hoel’s (1991) model of unilateral emissions abatement. Hoel predicts that abatement by a single country above the equilibrium level will result in greater total abatement, although lower abatement levels by the other countries. Sturm and Weimann test this hypothesis as well as one based on a variation of the game that has one player moving first. In the latter setting, their goal is to determine whether leadership matters.

Their findings largely confirm Hoel’s predictions – unilateral emissions abatement does generate lower total emissions. Their more interesting findings come in their sequential treatments, where they find that an appointed leader will usually (33 out of 36 times!) choose a higher abatement level rather than the lower abatement predicted by the Nash equilibrium. Their findings suggest that leadership does indeed matter, and in a socially optimal direction.

How relevant are these findings for predicting governments’ actions in international pollution abatement? The authors argue that there is little basis for making a claim to relevance: because “the external validity of results gained in singular laboratory experiments is restricted to the specific laboratory environment, we have to admit that we are not able to make any recommendations for environmental policy purposes.” While this modest assessment is refreshing, experiments such as this provide important insight that can eventually yield clear policy advice.

Each of these studies has clearly contributed to our understanding of environmental economics, although they represent very different locations on the spectrum of immediate relevance. While each contributes valuable information, do I have any suggestion as to how these studies might be of even greater relevance
to environmental economics? The degree to which the results of an experiment are relevant to a particular question or problem depends on the degree to which it captures the important features of the problem. This is, of course, the standard conundrum of any model: simplify (assume away) too much and one is left with a sterile model (experiment) that cannot usefully inform policy; simplify too little and one is left with an overly complex model that does not identify the key policy variables or responses. Thus, the ticket is to get the “right” amount of context, whether that is the right amount of realistic detail and financial incentives in an experiment, the right amount of information to provide respondents in a stated preference study, or the right number and definition of explanatory variables in an econometric analysis.

In my view, many experiments could move along the continuum towards the more immediate policy relevance side by adding more relevant context to their studies. This can entail describing (as some of the experiments in this group of five did) the good in non-generic terms; it can entail increasing the payoffs to subjects; and it can entail using subject pools that are directly relevant to the questions being studied (fishermen, environmental group representatives, etc.). There are of course many other ways in which relevant context can be added.

I readily admit that this view is hardly novel; indeed, I suspect that Erica Jong summarized it very well when she noted that “advice is what we ask for when we already know the answer but wish we didn’t.” It is hard work to add context, and it requires understanding a great deal about the institutions and history of a particular environmental problem to get it right. While this may make the job of the experimentalist more difficult, it has the potential for large dividends in increasing the direct relevance of the findings of experiments to environmental policy.
Note
1 While the authors’ use of two different values for “A” in their production function makes the threshold between rising and declining stocks discontinuous and larger than it would be for a single value, a single value would still generate the key threshold effects for their functional form. Specifically, stocks would decline in later periods when Kt < A2, and increase otherwise.
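To make the threshold dynamics behind this note concrete, here is a minimal sketch (in Python) of a stylized renewal rule of this kind. The threshold of 31 comes from the discussion of Capra and Tanaka above; the growth and decay rates and the harvest path are invented for illustration and are not the authors’ parameters.

def next_stock(stock, harvest, threshold=31, growth=1.2, decay=0.8):
    """Stylized threshold renewal: escapement above the threshold more
    than replaces itself; escapement at or below it shrinks (less than
    full replacement), so the stock ratchets down once the line is crossed."""
    escapement = max(stock - harvest, 0.0)
    rate = growth if escapement > threshold else decay
    return rate * escapement

stock = 40.0
for period in range(1, 9):
    stock = next_stock(stock, harvest=8.0)  # constant group extraction
    print(period, round(stock, 1))
# The stock grows in period 1, falls below the threshold in period 2,
# and then collapses: small changes in harvest flip the whole trajectory.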
References

Bishop, R. and T. Heberlein. (1979) “Measuring Values of Extra-Market Goods: Are Indirect Measures Biased?” American Journal of Agricultural Economics 61: 926–930.
Brown, Gardner and Judd Hammack. (1973) “Dynamic Economic Management of Migratory Waterfowl.” Review of Economics and Statistics 55: 73–82.
Capra, C. Mónica and Tomomi Tanaka. “Communication and the Extraction of Natural Renewable Resources with Threshold Externalities” in this volume.
Cherry, Todd L. and David L. Dickinson. “Voluntary Contributions with Multiple Public Goods” in this volume.
Cotten, Stephen J., Paul J. Ferraro, and Christian A. Vossler. “Can Public Goods Experiments Inform Policy? Interpreting Results in the Presence of Confused Subjects” in this volume.
Hoel, Michael. (1991) “Global Environmental Problems: The Effects of Unilateral Actions Taken by One Country.” Journal of Environmental Economics and Management 20: 55–70.
Moir, Rob. “Spies and Swords: Behavior in Environments with Costly Monitoring and Sanctioning” in this volume.
Sturm, Bodo and Joachim Weimann. “Unilateral Emissions Abatement: An Experiment” in this volume.
Part III
Regulation and compliance
13 Managerial incentives for compliance with environmental information disclosure programs
Mary F. Evans, Scott M. Gilpatric, Michael McKee, and Christian A. Vossler
Introduction

Publicly reported information on the environmental behavior of firms can increase the efficacy of private markets as a mechanism to control environmental malfeasance through liability for harm, consumer demand response, and shareholder reaction. Within the realm of environmental policy, examples exist of both mandatory information disclosure programs, such as the EPA’s Toxics Release Inventory (TRI), and voluntary programs, such as Energy Star (see US EPA, 2001). In the case of mandatory information disclosure programs, firms are required to report information that is potentially damaging to them. Thus, an understanding of firm incentives under such programs is essential to evaluating their performance, improving their design, and motivating the emergence of new programs.

A number of factors have the potential to alter the effectiveness of environmental information disclosure programs in encouraging firms to adopt desirable behaviors. These factors include features related to program design, such as the timing of information release; firm characteristics, such as size; and the existence of complementary policies, such as liability rules. These factors may affect the quantity of firm emissions, the firm’s decision of whether to comply with reporting requirements, and the accuracy of reported emissions. Some of the factors that may affect the firm’s pollution and/or reporting decision, such as financial status, compliance costs, and history of detected violations, have received limited attention in the literature (see Shavell, 1984; Beard, 1990; Larson, 1996; Harrington, 1988; Helland, 1998).

To date, the literature has primarily focused on pollution and/or reporting decisions at the firm level. We seek to build on this literature by examining the pollution/reporting decision of individuals within the firm. We argue that a firm’s internal organizational structure alters the incentives faced by decision makers and therefore has the potential to affect their compliance decisions. We adapt a model developed by Gilpatric (2005) to examine these incentives and test the resulting hypotheses using experimental data.

The next part of the chapter motivates our work with an overview of the information disclosure and tournament literatures. Following that, we first
examine opportunities for malfeasance in the context of information disclosure programs; then we turn to the firm’s organizational structure and present a model where the incentives of lower-level or division managers, who report to an owner-manager, are determined by a rank-order tournament. From this model, we derive testable hypotheses of behavior that vary with the payoffs received by division managers (based on rank and whether a manager is found to have engaged in malfeasance), the probability that malfeasance is detected, and the penalty imposed on a manager caught engaging in malfeasance. We test these predictions using laboratory experiments and report results. In the final part of the chapter, we offer some conclusions and discuss the next steps in this line of research.
Motivation

Previous studies of information disclosure programs have focused primarily on investigating two empirical questions. First, what is the reaction of investors to the release of information regarding a firm’s environmental performance (Hamilton, 1995; Khanna et al., 1998; Konar and Cohen, 2001)? Khanna et al. (1998) list several motivations for this line of research. For example, investors may expect firms with poor environmental performance to face increased future compliance costs and greater risk of liabilities. In addition, investors may perceive poor environmental performance as an indication of inefficient input use. Regardless of the reason, investors react to information concerning the firm’s financial health such that the value of the firm (share prices) tends to fall when adverse environmental information (such as TRI reports) is made publicly available.1

The second question relates to the effect of public information disclosure on subsequent firm environmental performance. Using data from the TRI, Konar and Cohen (1997) find that future emissions were lower among firms with the largest stock price decreases on the day of the information release.

Empirical analyses of compliance with information disclosure programs are less common due to the lack of detailed compliance data and the difficulties associated with detecting non-compliance through misreporting rather than through failing to report at all. Estimates from the Government Accountability Office (until 2004 known as the General Accounting Office, GAO) suggest that approximately a third of facilities subject to reporting under the TRI during its initial years failed to report (GAO, 1991). However, an analysis of TRI compliance of facilities in Minnesota by Brehm and Hamilton (1996) suggests that ignorance of the requirements of the regulation may better explain violations (measured as failure to report) than evasion. Their analysis suggests that facility size may be an important factor in compliance. They find that both the smallest hazardous waste generators in their sample, which they argue have lower compliance costs, and the firms with the largest sales volumes, which they maintain are more likely to employ a dedicated environmental staff, are less likely to violate. Brehm and Hamilton also find that subsidiaries of larger companies are less likely to violate, perhaps because the larger company provides
environmental and legal staff to the subsidiaries. They maintain that this finding supports the argument that firms with more information (less ignorance) are more likely to comply. However, it seems reasonable that access to parent company environmental and legal staff may also reduce compliance costs, thus increasing the likelihood of compliance.

While a potentially important consideration, to our knowledge the literature has overlooked the possible role of a firm’s internal organizational structure in creating a divergence between manager incentives and the objectives of an information disclosure program. Consider that many internal reward structures (including promotion ladders) imply that division managers are playing a rank-order tournament game. Lazear and Rosen (1981) show that such mechanisms can induce efficient behavior when all managerial actions directed toward winning the game are in the form of productive effort. However, if purported performance is improved via fraudulent reporting or other malfeasance, such as cost savings through higher and unreported toxics releases, compensation mechanisms based on tournament structures can induce managers to engage in such malfeasance.

A substantial literature has compared the efficiency of tournaments with alternative compensation schemes, such as piece rates, with regard to such factors as the risk-aversion of workers and the flexibility of the incentive framework to environmental uncertainty (e.g. Nalebuff and Stiglitz, 1983). However, little work has explored incentives in a tournament setting when workers choose not solely how much work effort to exert, but some other aspect of the work that is undertaken. Managers may be able to influence mean output through choices other than work effort, such as the choice of production process or regulatory compliance. When monitoring is imperfect, it is likely that manager incentives are not perfectly aligned with those of the firm because there are opportunities to increase the probability of winning the tournament by engaging in activities that do not serve the firm’s interest. In general, this type of malfeasance may take the form of a manager increasing division profits by illegally dumping waste, failing to maintain equipment adequately, or manipulating accounts to show larger current revenues at the expense of future revenues. All such activities may increase the manager’s output as observed by his employer, while imposing potentially large future liabilities on the firm.2

In the context of compliance with information disclosure programs (and other regulatory mandates), the program may have sufficient sanctions for non-compliance such that compliance is optimal at the firm level, assuming the firm can costlessly monitor manager behavior. However, to achieve full compliance it may be very costly to monitor the behavior of division managers.3 To the extent that non-compliance may improve managers’ apparent output or productivity and such behavior is costly to observe, firms face a tradeoff, as compensation schemes that encourage greater managerial effort also generate an incentive for non-compliance.

The most significant line of research regarding malfeasance in tournament settings involves the exploration of “influence activities”: behavior that arises
when workers can influence the choice of superiors regarding who is promoted or otherwise rewarded in an organization through actions that are nonproductive, ranging from ingratiation to bribery and sabotage of competitors (Milgrom and Roberts, 1988; Prendergast and Topel, 1996; Kim et al., 2002; and Chen, 2003). Such behavior is costly to the firm because it dulls a worker’s incentives to exert productive effort to win the tournament. The malfeasance we discuss here differs from influence activities because it does not derive from an agency conflict (i.e. the fact that the individual making decisions about whom to promote or otherwise reward benefits from the behavior at the expense of the firm). Malfeasance in the form of non-compliance with regulatory mandates, including failing to disclose information accurately, imposes direct costs on the firm that may significantly exceed those resulting from dulled incentives. Environmental malfeasance of course also entails important social costs that do not arise from influence activities within a firm and that are clearly of significant concern to regulators.
Firm organizational structure and information disclosure

Non-compliance and malfeasance in the context of information disclosure

While the compliance literature has relied primarily on a framework that focuses on firm-level decision making, the firm may be an inappropriate unit of account. The firm may wish to limit emissions to avoid associated penalties and potential liability. However, any internal organizational structure that includes incentive-based compensation, in which managers’ payments depend on their output, may provide managers with incentives to engage in malfeasance. Gilpatric (2005) constructs such a model of the firm to examine the general case of corporate governance. Here, we define malfeasance as behavior that is inconsistent with the firm’s objectives. If managers can increase their apparent output (such as the profits from their division) by increasing emissions or reducing care (and thus increasing the probability of accidental emissions), and if this behavior is sufficiently costly for the firm to monitor and prevent such that monitoring is imperfect, then any compensation that rewards managers for higher output will generate both the intended incentive for them to exert greater work effort and an incentive to engage in malfeasance.

We focus on the incentives generated by a rank-order tournament compensation scheme (such as promotion ladders) for two reasons: (1) competing for promotion, bonuses, or other rewards is perhaps the most ubiquitous incentive mechanism within firms, and (2) tournaments have the characteristic that players’ incentives to “cheat” depend not on the absolute gain from doing so (as would be true for piece-rate compensation, for example) but on the advantage cheating provides relative to competitors. Therefore, in an evenly matched tournament individuals can face a strong incentive to cheat even if doing so achieves only a small output gain, if this is sufficient to increase significantly
Managerial incentives for compliance 247 their probability of winning. The opportunities for malfeasance that arise when we extend the model to include internal organization may result in higher levels of overall emissions and/or more frequent misreporting as divisions compete to reduce current production costs. Let x represent the firm’s (owner-manager’s) optimal total emissions level. Let z represent the level of emissions that is optimal (at the firm level) to report to the environmental authority with z ≤ x. Assume that the firm is composed of N divisions, each of which has a designated manager with the responsibility of reporting emissions for his division to the owner-manger. Let zˆ i represent emissions reported by the ith division manager and xˆi represent the optimal level of emissions for division i from the perspective of the division manager. The owner-manager reports firm-level emissions to the environmental authority as required by the information disclosure program. In order to focus on the effect of division manager-level decision making, we assume that the owner-manager reports N
ẑ = ∑_{i=1}^{N} ẑ_i
to the environmental authority.4 By considering the emissions and reporting decisions of lower-level managers, we introduce several opportunities for non-compliance. First, as shown above, non-compliance may result from behavior on the part of the owner-manager. We do not explicitly model this form of non-compliance here. Second, even if the owner-manager wishes to report the level of emissions truthfully, malfeasance on the part of division managers may prevent him from doing so. Managers are said to be engaging in malfeasance or cheating if they (1) emit more than is optimal from the firm’s perspective, and/or (2) fail to report their actual emissions. Table 13.1 illustrates the possible cheating and non-compliance cases, where
ẑ = ∑_{i=1}^{N} ẑ_i
represents the level of reported emissions based on the division manager reports and
x̂ = ∑_{i=1}^{N} x̂_i
is actual emissions of the firm. In the first three cases, division managers are cheating or engaging in malfeasance. In cases 2 and 3 the firm is misreporting its emissions and therefore is non-compliant with the information disclosure program. Note that even in cases 1 and 4 where the firm is compliant with the reporting requirements, the level of emissions need not equal the socially optimal level.
Table 13.1 Potential cheating and non-compliance cases

Case  Relationship between x̂ and x  Relationship between ẑ and x̂  Are managers cheating?  Is firm compliant with reporting requirement?
1     x̂ > x                         ẑ = x̂                         Yes                     Yes
2     x̂ > x                         ẑ < x̂                         Yes                     No
3     x̂ = x                         ẑ < x̂                         Yes                     No
4     x̂ = x                         ẑ = x̂                         No                      Yes
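The logic of Table 13.1 can be restated as a simple decision rule, sketched below in Python. The numbers and variable names (x_firm for the firm optimum x) are ours, for illustration only; the rule just encodes the table: managers cheat if they over-emit or under-report, and the firm is compliant only when reports match actual emissions.

def classify(x_hat, z_hat, x_firm):
    """Return (managers_cheating, firm_compliant) as in Table 13.1."""
    cheating = x_hat > x_firm or z_hat < x_hat
    compliant = z_hat == x_hat
    return cheating, compliant

# The four cases of Table 13.1 with x_firm = 10 (illustrative units).
for case, (x_hat, z_hat) in enumerate([(12, 12), (12, 8), (10, 8), (10, 10)], 1):
    print(case, classify(x_hat, z_hat, x_firm=10))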
By adapting the model of Gilpatric (2005), we derive hypotheses regarding the likelihood of cheating on the part of managers who are playing a rank-order tournament game in terms of their financial compensation. We make the following assumptions. Division managers are directed to emit no more than x_i, where x_i represents the optimal level of emissions for division i from the perspective of the owner-manager. They are able to improve their output by increasing emissions up to a level of x̂_i. It is costly for the firm to audit the behavior of division managers, and it does so with probability η. If managers are found to have “cheated” by emitting more than x_i or by misreporting, they are disqualified from winning the tournament (e.g. being promoted) and may also face additional sanction (e.g. being fined or fired). Because managers face the same penalty if found to have cheated regardless of the magnitude of cheating, there is no marginal deterrent, and the manager’s decision reduces to choosing x_i as directed by the firm or cheating by choosing x̂_i. In this setting malfeasance always consists of both emitting more than is optimal for the firm and failing to truthfully report emissions (case 2 above). In what follows, we focus exclusively on the second case in Table 13.1 above, leaving additional discussion and experimental testing of the remaining cases to future research.

Malfeasance with managerial compensation based on tournament payoffs

Gilpatric (2005) develops a model of cheating in a tournament in which identical contestants first choose effort and then, after observing their opponents’ effort, choose whether to cheat. Cheating is modeled as simply increasing output by a constant. The model developed in that paper shows how the likelihood of cheating depends on the payoffs at stake in the tournament, the variance of output, the probability of cheating being detected, the number of contestants, and the penalty associated with being found to have cheated. The direction of these effects is generally quite intuitive. The probability of cheating decreases as the probability of detection grows, the gain from cheating decreases, or the cost of being caught cheating increases. However, it remains an important question whether these effects are observed empirically and how well the model captures behavior. One expects that a greater likelihood of cheating being detected or stiffer penalties if caught will deter cheating to some degree, but increasing our understanding of
exactly how behavior responds to changes in the competitive framework is quite valuable for understanding how competitive incentive systems can elicit effort while minimizing malfeasance and monitoring costs. In this chapter we consider a special case of the model in which three contestants compete in a rank-order tournament. Contestants play only the second stage of the game, in which they choose whether or not to "cheat".5 Players choose a distribution of output, denoted y, from two distributions, a "high" distribution and a "low" distribution. Cheating entails choosing the high distribution. Here we illustrate this application of the model and show how the predicted probability of cheating is derived conditional on the underlying parameters of the model. We first develop some notation. An "audit" of contest behavior occurs with probability η, and if an audit occurs all contestants who cheated are discovered to have done so. This parameter represents the intensity of monitoring activity undertaken by the tournament sponsor.6 If a player is found to have cheated he faces two possible types of sanctions: (1) a cheating player is disqualified from winning the tournament and receives the payoff associated with finishing last; (2) the player may face an additional "outside" penalty in excess of any compensation at stake in the tournament. Outside penalties represent such factors as a negative reputation arising from being found to have cheated. Let r represent the outside penalty imposed on a player caught cheating. The contestant with the highest output who is eligible to win (i.e. not caught cheating) receives payoff w1, those who do not win but are not caught cheating receive w2, and a player caught cheating receives w2 − r. Let s represent the payoff spread, w1 − w2. To solve for strategies as a function of the tournament parameters, we first identify the minimum probability of audit that will fully deter cheating. This is found by deriving the audit probability such that, if a contestant believes his opponents will not cheat, he is indifferent between cheating and not cheating (i.e. his expected payoffs are identical). If the audit probability is greater than this value, which we will denote ηa, not cheating is a dominant strategy. Let P(.,.) represent player i's probability of finishing first (but not necessarily receiving w1, since this probability does not account for the possibility of disqualification if cheating). The first argument of P denotes the action of player i and the second argument gives the action of his opponents. Then player i's expected payoff if he cheats when his opponents do not is (1 − η)P(C, NC)s + w2 − ηr, whereas player i's expected payoff if he does not cheat when his opponents do not is P(NC, NC)s + w2. In this symmetric contest, P(NC, NC) = 1/N. Finding P(C, NC) is rather more complicated. In general, player i's probability of having the highest draw, when he receives a draw from density function f(y) and faces N − 1 opponents, k of whom cheat, and where an opponent's output has distribution function G(y) if he does not cheat and H(y) if he cheats, is
$$P_i = \int f(y)\,(G(y))^{N-1-k}\,(H(y))^{k}\,dy.$$
We can now set the expected payoff from cheating equal to that from not cheating to solve for ηa. The model predicts that cheating will be fully deterred (i.e. a player's dominant strategy is not to cheat) if the probability of detection is at least

$$\eta_a = \frac{P(C, NC) - 1/N}{P(C, NC) + r/s}. \qquad (13.1)$$
We can employ similar calculations to find the audit probability below which cheating is a dominant strategy, which we denote ηb . This is the value where player i is indifferent between cheating and not cheating if he believes all his opponents will cheat. Cheating will be a dominant strategy if the probability of audit is less than
ηb =
[P(C , C )− P(NC , C )] [P(C , C )− P(NC , C )]+ (S + r )/ S
.
(13.2)
Note that ηb < ηa. For audit probabilities between ηb and ηa there is a unique symmetric equilibrium in which each player cheats with probability ρ such that players' expected payoffs from cheating and not cheating are equal. In other words, players are indifferent between cheating and not cheating for audit probabilities in this range. Note that the more a contestant believes his opponents will cheat, the lower the payoff he receives from cheating and the higher the payoff from not cheating. This yields the existence of a symmetric mixed strategy equilibrium toward which behavior should converge over time if the game is played repeatedly and players update their beliefs regarding the rate of cheating among other contestants. If contestants conclude that their opponents are cheating more frequently than with probability ρ, they will do better not to cheat; if they conclude that opponents are cheating less frequently, they do better to cheat. We now illustrate how we solve for the equilibrium probability of cheating, ρ, as a function of the tournament parameters. When player i faces N − 1 opponents, the probability that k of them cheat, given that each opponent cheats with probability ρ, is defined by the binomial function b(k, N − 1, ρ). For expositional ease and consistency with our experimental application, let N = 3. In this context, P(.,.,.) continues to represent the probability that player i wins the tournament. However, now the first argument represents player i's strategy and the second and third arguments denote his opponents' respective strategies. If i does not cheat given that each of his opponents cheats with probability ρ, then his expected payoff is
$$(1-\eta)s\left[b(0,2,\rho)P(NC,NC,NC) + b(1,2,\rho)P(NC,NC,C) + b(2,2,\rho)P(NC,C,C)\right]$$
$$+\ \eta s\left[b(0,2,\rho)P(NC,NC,NC) + b(1,2,\rho)P(NC,NC) + b(2,2,\rho)\right] + w_2.$$
In the absence of an audit, the probability of winning (the first bracketed term) is the sum of the probabilities of winning given each possible combination of cheating and non-cheating opponents, with each term weighted by the probability of that occurrence. The second bracketed term, indicating the probability of winning if an audit occurs, is similar except that cheating opponents are disqualified, so the P terms are quite different (and in the final case, where both opponents cheat, player i wins with probability one if there is an audit). We can similarly find that, in this context, the expected payoff to player i if he cheats is
$$(1-\eta)s\left[b(0,2,\rho)P(C,NC,NC) + b(1,2,\rho)P(C,NC,C) + b(2,2,\rho)P(C,C,C)\right] - \eta r + w_2.$$
Setting these two expressions equal to each other (as they must be in equilibrium) and rearranging we have
⎧b(0,2, ρ) [P(C , NC , NC )− P(NC , NC , NC )]⎫ (1 − η)(S )⎪⎨+ b(1,2, ρ)[P(C , NC , C )− P(NC , NC , C )] ⎪⎬ = ⎪ ⎪+ b(2,2, ρ)[P(C , C , C )− P(NC , C , C )] ⎭ ⎩ ⎧b(0,2, ρ) P(NC , NC , NC ) ⎫ η (S )⎨ ⎬ + ηr ⎩+ b(1,2, ρ) P(NC , NC )+ b(2,2, ρ )⎭
(13.3)
Solving this equation for ρ provides an expression for the equilibrium probability of cheating given values for η, r, and s. In the next section, we discuss the results of experiments designed to test hypotheses that stem from the theoretical model.
Laboratory experiments

Testing the theoretical model with field data is clearly problematic for a variety of reasons. Most important of these, perhaps, is that it is impossible to know for certain how much cheating takes place in any context without perfect monitoring of behavior. It is the absence of such monitoring, of course, that describes the circumstances the model seeks to capture. Economics experiments allow us to control the parameters of the competition and observe all behavior by contestants to learn whether they respond as theory predicts. A key to using the results of laboratory experiments to inform the policy debate is the precept of parallelism (Smith, 1982; Plott, 1987; Cummings et al., 2001). We establish parallelism by ensuring that the essential features of the field environment are captured in the laboratory. The experiments designed for this line of research focus on the strategic elements of the theory: we test behavioral arguments.

Experimental design

Our laboratory experiments are designed to test the responsiveness of the frequency of cheating to changes in the probability of audit and the imposition of
an outside penalty for managers caught cheating. Participants are randomly assigned to three-player groups and play the role of division managers in a rank-order tournament. The decision faced by each participant is whether to receive an output draw from a "low" distribution, y ~ U[15,45], or a "high" distribution, y ~ U[22,52]. The choice of a draw from the high distribution corresponds with the decision to cheat, for example by emitting more than permitted in order to increase productivity but falsely reporting lower emissions. As in the model described in the previous section, the group faces a random audit with probability η. If an audit occurs, cheaters are caught with a probability of one and are disqualified from the tournament. Further, cheaters face an outside penalty, r, which is equal to zero or five. The eligible (i.e. non-disqualified) participant with the highest output wins the tournament and receives the highest payoff. In particular, the winner receives a payoff of 19 lab-dollars and other participants receive 7 lab-dollars, less any penalty if they are disqualified. Using the experiment parameters above, we can apply the formulas from the theory section to obtain values for the win probabilities of player i. In the case where i cheats and his opponents do not, we have f(y) = 1/30 on [22,52], with G(y) = (y − 15)/30 for 22 ≤ y < 45 and G(y) = 1 for y ≥ 45. Thus, the win probability for player i given his opponents do not cheat is
$$P(C, NC) = \int_{22}^{45} \left(\frac{1}{30}\right)\left(\frac{y-15}{30}\right)^{2} dy + \frac{7}{30} \approx 0.562.$$
In the case where i does not cheat and his opponents do, we have f(y) = 1/30 on [15,45] and H(y) = (y − 22)/30 for 22 ≤ y ≤ 45, with H(y) = 0 for y < 22, such that the win probability for player i is
$$P(NC, C) = \int_{22}^{45} \left(\frac{1}{30}\right)\left(\frac{y-22}{30}\right)^{2} dy \approx 0.150.$$
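Both integrals are easy to verify numerically; a minimal sketch, assuming scipy is available (the distributions are the U[15,45] and U[22,52] output draws defined above):

```python
from scipy.integrate import quad

# Output CDFs: low draw y ~ U[15,45], high (cheating) draw y ~ U[22,52]
G = lambda y: min(max((y - 15) / 30, 0.0), 1.0)   # CDF of U[15,45]
H = lambda y: min(max((y - 22) / 30, 0.0), 1.0)   # CDF of U[22,52]

# P(C,NC): cheater's density is 1/30 on [22,52]; both opponents draw from G
p_c_nc = quad(lambda y: (1 / 30) * G(y) ** 2, 22, 52, points=[45])[0]
# P(NC,C): non-cheater's density is 1/30 on [15,45]; both opponents draw from H
p_nc_c = quad(lambda y: (1 / 30) * H(y) ** 2, 15, 45, points=[22])[0]

print(round(p_c_nc, 3), round(p_nc_c, 3))   # 0.562 0.15
```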
With P(C,NC) and P(NC,C) in hand, we can solve for ηa and ηb using formulas (13.1) and (13.2), respectively. For the case where r = 0, we have ηa ≈ 0.407 and ηb ≈ 0.155. When r = 5, we have ηa ≈ 0.234 and ηb ≈ 0.114. Similarly, using expression (13.3) and the parameters of each experimental session, we can calculate the predicted frequency of cheating, ρ, in a mixed strategy equilibrium where one exists (i.e. where the session is not designed to elicit a dominant strategy). We investigate audit probabilities that fall inside and outside the interval [ηb, ηa]. In particular, for r = 0, we include treatments corresponding to η = 0.1, 0.2, 0.32, and 0.5. As the audit probability 0.1 is less than ηb and 0.5 is greater than ηa, for these parameter values there is a dominant strategy to cheat and not to cheat, respectively. Audit probabilities of 0.2 and 0.32 give rise to unique mixed strategy equilibria. For r = 5, we include treatments corresponding to η = 0.2 and 0.3. With η = 0.2 there is a unique mixed strategy equilibrium, whereas for η = 0.3 there is a dominant strategy not to cheat. The unique mixed strategy equilibria are solved for using formula (13.3). The design parameters and predicted cheating probabilities for the six treatments are summarized in Table 13.2.
Table 13.2 Design parameters by treatment

Treatment | N per contest | Audit prob. (η) | Payoffs (win, not win, ineligible) | Payoff spread (s) | Penalty (r) | Predicted prob. of cheating (ρ)
1 | 3 | 0.10 | (19, 7, 7) | 12 | 0 | 1.00
2 | 3 | 0.20 | (19, 7, 7) | 12 | 0 | 0.76
3 | 3 | 0.32 | (19, 7, 7) | 12 | 0 | 0.29
4 | 3 | 0.20 | (19, 7, 2) | 12 | 5 | 0.27
5 | 3 | 0.30 | (19, 7, 2) | 12 | 5 | 0.00
6 | 3 | 0.50 | (19, 7, 7) | 12 | 0 | 0.00
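The predicted cheating probabilities in the final column of Table 13.2 can be reproduced numerically. The sketch below is our own illustration, not the authors' code: it computes the three-player win probabilities by numerical integration and solves the indifference condition (13.3) for ρ with a root finder, returning the corner predictions when no interior solution exists.

```python
from scipy.integrate import quad
from scipy.optimize import brentq

LOW, HIGH = (15, 45), (22, 52)   # supports of the uniform "low" and "high" draws

def cdf(y, support):
    lo, hi = support
    return min(max((y - lo) / (hi - lo), 0.0), 1.0)

def win_prob(own, opp1, opp2):
    """Probability that player i's draw beats both opponents' draws."""
    lo, hi = own
    integrand = lambda y: (1 / (hi - lo)) * cdf(y, opp1) * cdf(y, opp2)
    breaks = sorted({p for s in (opp1, opp2) for p in s if lo < p < hi})
    return quad(integrand, lo, hi, points=breaks or None)[0]

P_C_NCNC = win_prob(HIGH, LOW, LOW)    # ~0.562, as derived above
P_C_NCC = win_prob(HIGH, LOW, HIGH)
P_NC_NCC = win_prob(LOW, LOW, HIGH)
P_NC_CC = win_prob(LOW, HIGH, HIGH)    # ~0.150, as derived above

def predicted_rho(eta, r, s=12.0):
    """Equilibrium cheating probability from condition (13.3)."""
    def gap(rho):   # expected payoff from cheating minus not cheating
        b = [(1 - rho) ** 2, 2 * rho * (1 - rho), rho ** 2]   # b(k, 2, rho)
        lhs = (1 - eta) * s * (b[0] * (P_C_NCNC - 1 / 3)
                               + b[1] * (P_C_NCC - P_NC_NCC)
                               + b[2] * (1 / 3 - P_NC_CC))
        rhs = eta * s * (b[0] / 3 + b[1] / 2 + b[2]) + eta * r
        return lhs - rhs
    if gap(0.0) <= 0:   # audit probability above eta_a: never cheat
        return 0.0
    if gap(1.0) >= 0:   # audit probability below eta_b: always cheat
        return 1.0
    return brentq(gap, 0.0, 1.0)

for eta, r in [(0.10, 0), (0.20, 0), (0.32, 0), (0.20, 5), (0.30, 5), (0.50, 0)]:
    print(eta, r, round(predicted_rho(eta, r), 2))
# 1.0, 0.76, 0.29, 0.27, 0.0, 0.0 -- matching Table 13.2
```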
Although our theoretical model describes a one-shot game, we allow for possible learning through repeated play over T identical decision periods. Repetition appears to be important here given that some equilibrium predictions are predicated on mixed strategies. To thwart motivations for strategic play and efforts at tacit coordination, in each period participants are randomly and anonymously reassigned to tournament groups. Experiment instructions are presented both orally and in writing. Decisions are made via laptop computers using software programmed in z-Tree (Fischbacher, 1999). Note that while we make analogies here to managerial decisions on environmental compliance, the instructions use neutral language. The decision to receive a high draw is not framed as cheating or malfeasance, so as not to engender uncontrolled payoffs associated with the ethical costs of cheating. Similarly, we characterize the audit simply as a computer "check" of which distribution was chosen. After each period, the participant receives feedback on: (1) his output; (2) his output rank; (3) whether there was an audit; (4) whether he was disqualified; (5) how many opponents were disqualified; and (6) his payoff. A total of 96 undergraduate student subjects at the University of Tennessee participated in experiments in the summer and fall of 2005. Participants were drawn from a large pool of volunteers and represent a wide range of academic majors. The experiments were conducted in a designated experimental economics laboratory. Sessions consisted of nine to 15 people, and participants were visually isolated through the use of dividers. Matching was anonymous; subjects were not aware of the identity of the other members of their group. Each session lasted 30 to 60 minutes, and subjects received average compensation of approximately $15. Due to time considerations, sessions consisted of either 20 or 30 decision periods.

Experiment results

The results are summarized in Table 13.3.
Table 13.3 Observed cheating in experiments

Treatment | No. of subjects | No. of periods | Observed prob. of cheating | Predicted prob. of cheating | Wilcoxon test: observed vs. predicted (z-statistic)
1 | 15 | 30 | 0.74 | 1.00 | −3.26
2 | 18 | 20 | 0.63 | 0.76 | −1.55
3 | 18 | 20 | 0.42 | 0.29 | 2.16
4 | 18 | 20 | 0.53 | 0.27 | 2.94
5 | 12 | 30 | 0.54 | 0.00 | 3.07
6 | 15 | 30 | 0.46 | 0.00 | 3.41
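The nonparametric tests reported in Table 13.3, and the two-sample comparison discussed below, can be run with standard scipy routines. A sketch of how this might look; the per-subject cheating frequencies here are hypothetical placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon, ranksums

# Hypothetical per-subject cheating frequencies for two treatments
freq_t2 = np.array([0.55, 0.70, 0.60, 0.65, 0.75, 0.50])   # placeholder data
freq_t3 = np.array([0.35, 0.50, 0.40, 0.45, 0.30, 0.55])   # placeholder data

# One-sample signed-rank test of observed frequencies against the predicted rho
w_stat, p_val = wilcoxon(freq_t2 - 0.76)
print(w_stat, p_val)

# Two-sample Wilcoxon rank-sum test comparing two treatments (returns a z-statistic)
z_stat, p_val = ranksums(freq_t2, freq_t3)
print(z_stat, p_val)
```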
As we observe, the subjects do not behave exactly as the theory predicts. In particular, using Wilcoxon tests in which the unit of measurement is an individual's cheating frequency over all decision periods, we find that predicted and observed cheating are statistically different for all treatments at the 5 percent significance level, with the exception of treatment 2. Note that observed cheating probabilities vary very little across rounds, so the results of the statistical tests do not depend on which periods are considered. Nevertheless, our results are generally supportive of the theory in that it correctly predicts responses to changes in the audit probability. For example, actual cheating drops from 63 percent (treatment 2) to 42 percent (treatment 3) when the audit probability increases from 20 percent to 32 percent. This difference is statistically significant using a two-sample Wilcoxon test (z = 2.10, prob. = 0.036). The effect of an outside penalty appears to be less pronounced. For instance, with η = 0.2, the penalty decreases cheating by 0.10. However, for η = 0.3, cheating actually increases by 0.12. In both cases, the effect of the penalty is not statistically significant. Overall there is a tendency toward indifference between cheating and not cheating, with observed rates of cheating below the predicted level when the theory predicts cheating the majority of the time (treatments 1 and 2) and observed rates above the predicted level when the theory predicts cheating a minority of the time (treatments 3 to 6).

We turn now to a more formal analysis of individual behavior and estimate a probit model of the decision to cheat. To account for unobserved subject heterogeneity, and to allow for possible distributional mis-specification, we estimate the parameter covariance matrix using White's robust "sandwich" estimator adjusted for clustering at the individual level. As participant behavior may be influenced by experience in prior periods, and in particular the feedback received after each period, the model controls for factors related to the history of play as well as policy parameters. In terms of policy variables, we include an indicator for the presence of a penalty and a variable corresponding to the audit probability. Feedback variables include the proportion of prior "wins by opponent disqualification", whereby the participant won only as the result of competitors with higher output being disqualified, and the proportion of prior "wins by cheating", whereby the participant won as a result of cheating. These variables correspond to signals of how background win probabilities change conditional on the decisions of other players. As the exogenous audit probability and the two subjective win probabilities may also have more pronounced short-term effects, we include three indicator variables corresponding to whether there was an audit, whether the subject won by disqualification, and whether the subject won by cheating in the previous period.
Table 13.4 presents our estimated probit coefficients and corresponding marginal effects. As predicted, participants respond to a higher audit probability by reducing their probability of cheating. However, the marginal effect suggests that changes in the audit probability have a less pronounced effect on cheating than theory predicts. In particular, consider the mean audit probability across all non-penalty treatments, about 0.28, with an associated cheating probability of roughly 0.5. If the audit probability is increased (decreased) by about 0.13, the theory predicts a decrease (increase) in cheating of about 0.5, whereas the model suggests that observed cheating would decrease (increase) by only 0.06. If an individual was audited in the previous period, the model suggests an effect inconsistent with theory: he is more likely to cheat in the current period. This is the oft-observed "gambler's fallacy" behavior, the presence of which at least partially explains why observed cheating is lower than predicted for low audit probabilities and higher than predicted for high audit probabilities. In particular, according to the estimated marginal effect, the presence (absence) of an audit in the previous period increases (decreases) the probability of cheating by 0.185. Consistent with our non-parametric test results, the presence of the penalty has no statistically significant effect on the cheating probability. This result is surprising, but has a parallel in the law and economics literature, where some studies find that increased penalties for criminal offenses (such as the death penalty) have little or no deterrent effect on crime rates (Katz et al., 2003).

Table 13.4 Probit model results

Variable | Coefficient (robust standard error) | Marginal effect (robust standard error)
Penalty | −0.115 (0.116) | −0.045 (0.046)
Audit probability | −1.239* (0.409) | −0.488* (0.160)
Audit in previous period | 0.485* (0.107) | 0.185* (0.039)
Proportion of wins by opponent disqualification in prior periods | −1.993* (0.758) | −0.785* (0.299)
Win by opponent disqualification in previous period | −0.612* (0.152) | −0.239* (0.056)
Proportion of wins by cheating in prior periods | 2.160* (0.321) | 0.851* (0.127)
Win by cheating in previous period | 0.131 (0.092) | 0.051 (0.036)
Constant | 0.109 (0.162) |
Wald χ² (7 d.f.) | 131.20* |
Pseudo R² | 0.118 |
Number of observations | 2244 |

Note: An asterisk indicates the parameter is statistically different from zero at the 5 percent level.
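A probit of this form, with subject-clustered "sandwich" standard errors and average marginal effects, can be estimated with statsmodels. The sketch below is ours; the data file and column names are hypothetical stand-ins for the authors' subject-period panel.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per subject-period decision
df = pd.read_csv("tournament_decisions.csv")   # assumed columns used below

model = smf.probit(
    "cheat ~ penalty + audit_prob + audit_prev"
    " + prop_wins_disq + win_disq_prev + prop_wins_cheat + win_cheat_prev",
    data=df,
)
# Cluster the covariance matrix at the subject level (robust sandwich estimator)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["subject_id"]})
print(res.summary())
print(res.get_margeff().summary())   # average marginal effects
```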
Additionally, the proportion of wins by opponent disqualification and the proportion of wins by cheating statistically decrease and increase, respectively, the probability of cheating. There is nearly a one-to-one relationship, which is quite rational, between changes in these subjective win probabilities and changes in cheating probabilities. In particular, for a 0.1 increase in the proportion of wins by disqualification, the model estimates that cheating decreases by 0.08. Similarly, a 0.1 increase in the proportion of wins by cheating corresponds with an increase in cheating of 0.09. As these ceteris paribus interpretations are possibly confounded by the presence of the lagged indicator variables associated with the two subjective probability variables, we note that the changes in cheating become 0.1 and 0.09 when the indicator variables are excluded from the model. Finally, we note that a participant who wins by disqualification is even more likely not to cheat in the following period.
Implications and extensions

This chapter takes preliminary steps toward improving the design and implementation of environmental information disclosure programs. By examining manager-level emissions and compliance decisions, we obtain predictions about the firm characteristics, namely features of organizational structure, that induce greater non-compliance with environmental regulations and information disclosure programs. One implication of our model is that the optimal intensity of regulatory enforcement efforts depends on the magnitude of monitoring and enforcement within firms. Firms with managerial compensation systems that generate strong competitive incentives for cheating despite firm-level compliance with disclosure requirements and other regulatory mandates, and which have little internal monitoring of managerial behavior, merit greater regulatory scrutiny than those that more intensively monitor internal behavior. One might also draw an analogy with enforcement of another type of information disclosure mandate, that of financial disclosure. To a large extent, financial disclosure requirements are implemented through the requirement that publicly traded firms be subject to an independent audit by an outside auditor. Audits of behavior by the regulators (i.e. the SEC) are very infrequent. Clearly this system is imperfect, as the recent high-profile accounting scandals involving Enron, WorldCom, and others make clear. Nevertheless, it remains true that mandating credible monitoring of internal firm behavior, such as through independent auditing, may be an effective means of increasing compliance with information disclosure and other regulatory requirements. Our results can inform the debate on the efficient design of auditing procedures for verification of the information reported by firms as required under a mandatory disclosure program. The similarity of the underlying decision structure between compliance with corporate tax regulations and environmental reporting regulations suggests that we can gain some insight on how to design appropriate audits from the tax compliance literature. As Alm and McKee (1998) have argued, we can
learn a great deal about managerial decisions (especially regulatory compliance decisions) from the extensive research on tax compliance. The reporting requirements under the various environmental regulations are applied at the firm level. Our analytical framework examines behavior at the sub-firm (e.g. division) level. However, our work suggests that there are systematic links between the organizational structure of the firm and its overall environmental malfeasance and reporting behavior. Specifically, our work suggests that the method of compensating divisional leaders and the number and size of divisions will affect the firm's overall level of compliance. For many publicly held firms, the general form of the compensation structure will be public information, as will be the divisional structure. Our discussion of the effects of rank-order tournament compensation schemes on managerial reporting incentives provides some simple insights for the design of an auditing program. First, one can improve the efficiency of the audit process through the use of systematic or endogenous audits (selecting firms based on observable characteristics). Stranlund and co-authors (Murphy and Stranlund, 2006, 2007; Stranlund and Dhanda, 1999) argue that compliance in an emissions trading environment is independent of firm characteristics. This finding suggests that systematic audit rules will not be productive. However, the emissions trading environment differs from simple information disclosure environments. In the case of emissions trading, targeted enforcement does not enhance efficiency because the market for permits yields an equilibrium price such that there are no differential incentives to evade. No such market arises in response to mandatory information disclosure programs. For these programs, evasion costs and benefits do differ across firms at the margin, and these differences may be reflected in observable firm characteristics, as suggested by our theoretical development and experimental results. The structure of the internal organization, in particular the managerial incentives, will affect the propensity to emit and to underreport. Second, considerable research on tax compliance behavior (e.g. Alm and McKee, 2004, 1998; Alm et al., 2004, 1992, 1993; Chen and Chu, 2005) has shown that individuals and firms respond in predictable ways to the elements of audit regimes. If the results of this literature apply to the setting of compliance with information disclosure programs, then we would expect increased enforcement effort (such as the use of penalties and random audits) to increase compliance. Of particular relevance to information disclosure programs is the lag in the audit process. Even if compliance with the reporting requirement is perfect (the firm reports exactly what is released), the owner-manager could benefit from releases that lower the cost of production if the releases are reported to the public with a sufficient lag. A sufficiently long lag may allow the owner-manager to realize his payoff from the assets owned and exit the firm prior to the release of the information and the subsequent negative effect on the firm's value. This suggests that the reporting period should be shortened and audit resources optimized through the use of staggered reporting dates. In this way, the information concerning emissions would be provided to the market in a timely fashion and the anticipated effects on share values realized quickly.
An important distinction between tax compliance and compliance with information reporting requirements is that, in some cases, non-compliance with reporting requirements could result in damages that are not easily reversed. In the case of income tax evasion, the evader can make the government whole through the payment of back taxes and interest. The government may also argue for the imposition of additional fines given the incomplete detection and punishment regimes (much like punitive damages in tort litigation). In the case of noncompliance with environmental reporting requirements, ex post actions will not likely make the public whole. However, the liability system may apply additional penalties on firms that have violated the regulatory standards and failed to comply with reporting requirements. Information disclosure programs, such as the TRI, have the potential to achieve significant improvements in the environmental behavior of firms. The extent to which this potential is realized depends on the extent to which the information is accurate and timely. All firms will wish to report only information that casts them in a favorable light and must be “encouraged” to provide truthful and timely information. Our research is directed to improving the performance of information disclosure programs through both the identification of firm characteristics that are more likely to be correlated with environmental malfeasance and incomplete information disclosure, as well as the identification of the properties of information disclosure programs that enhance compliance. The former investigations will suggest ways the audit regimes can be improved while the latter will suggest design elements of the information disclosure program.
Acknowledgments

We thank participants at Appalachian State University's Experimental Economics Workshop and the 2005 ESA meetings in Montreal for comments on earlier versions of this research. David Bruner programmed the experiments.
Notes

1 Of course the market will anticipate positive emissions levels in many cases, and the reaction of the market will depend on the extent to which reported emissions differ from expectations.
2 Nalebuff and Stiglitz (1983) term this issue the influence of prize on choice of technique, but do not model the problem. Stiglitz and Weiss (1981) address a related problem of the influence of the interest rate in bank lending on the risk involved in projects undertaken by borrowers.
3 Of course the cost of monitoring the regulatory compliance behavior of managers can be thought of as simply one aspect of the total cost of regulatory compliance for the firm. Our point here is to separate the costs of implementing full compliance within the managerial incentive system from the direct costs of regulatory compliance (such as costs associated with using "cleaner" production technology).
4 Internal environmental auditing procedures may increase the validity of this assumption to the extent that the existence of internal records of division managers' reports discourages the owner-manager from choosing to report a level of emissions inconsistent with
these reports. Anton et al. (2004) suggest that elements of the internal organization of firms' environmental programs are important in explaining TRI emissions.
5 Gilpatric (2005) finds that in equilibrium all players choose identical effort in the first stage of the tournament and therefore play a symmetric cheating game in the second stage. Our focus here is on testing predicted behavior in this symmetric cheating game. Clearly players may not choose identical effort levels, and other circumstances may well occur that render contestants unequal when choosing whether to cheat, but we leave the study of behavior arising in such a setting to future research.
6 Gilpatric (2005) discusses how behavior differs when audits are independent and shows that correlated audits as discussed here (in which all players are audited or none are) more effectively deter cheating than independent audits of equal probability.
References

Alm, J. and M. McKee, 1998. Extending the Lessons of Laboratory Experiments on Tax Compliance to Managerial and Decision Economics. Managerial and Decision Economics, 19 (4–5), 259–275.
Alm, J. and M. McKee, 2004. Tax Compliance as a Coordination Game. Journal of Economic Behavior and Organization, 54 (3), 297–312.
Alm, J., B.R. Jackson, and M. McKee, 1992. Institutional Uncertainty and Taxpayer Compliance. American Economic Review, 82 (4), 1018–1026.
Alm, J., M.B. Cronshaw, and M. McKee, 1993. Tax Compliance with Endogenous Audit Selection Rules. Kyklos, 46 (1), 27–45.
Alm, J., B.R. Jackson, and M. McKee, 2004. Audit Information Dissemination, Taxpayer Communication, and Tax Compliance: An Experimental Investigation of Indirect Audit Effects. Presented at the 2004 ESA meetings in Tucson, AZ.
Anton, W.R.Q., G. Deltas, and M. Khanna, 2004. Incentives for Environmental Self-Regulation and Implications for Environmental Performance. Journal of Environmental Economics and Management, 48 (1), 632–654.
Beard, T.R., 1990. Bankruptcy and Care Choice. RAND Journal of Economics, 21 (4), 626–634.
Brehm, J. and J.T. Hamilton, 1996. Non-compliance in Environmental Reporting: Are Violators Ignorant, or Evasive, of the Law? American Journal of Political Science, 40 (2), 444–477.
Chen, K., 2003. Sabotage in Promotion Tournaments. Journal of Law, Economics, and Organization, 19 (1), 119–140.
Chen, K.-P. and C.Y.C. Chu, 2005. Internal Control versus External Manipulation: A Model of Corporate Tax Evasion. RAND Journal of Economics, 36 (1), 151–164.
Cummings, R., M. McKee, and L. Taylor, 2001. To Whisper in the Ears of Princes: Laboratory Economic Experiments and Environmental Policy. In: H. Folmer, H.L. Gabel, S. Gerking, and A. Rose, eds., Frontiers of Environmental Economics. Northampton, MA: Edward Elgar, pp. 121–147.
Fischbacher, U., 1999. z-Tree: Zurich Toolbox for Readymade Economic Experiments. IEW Working Paper 21, University of Zurich.
GAO (General Accounting Office), 1991. Toxic Chemicals: EPA's Toxics Release Inventory is Useful but Can be Improved. Washington, DC: General Accounting Office.
Gilpatric, S., 2005. Malfeasance in Tournaments. Working Paper, University of Tennessee, Department of Economics.
Hamilton, J.T., 1995. Pollution as News: Media and Stock Market Reactions to the Toxics Release Inventory Data. Journal of Environmental Economics and Management, 28 (1), 98–113.
Harrington, W., 1988. Enforcement Leverage When Penalties Are Restricted. Journal of Public Economics, 37 (1), 29–53.
Helland, E., 1998. The Enforcement of Pollution Control Laws: Inspections, Violations, and Self-Reporting. Review of Economics and Statistics, 80 (1), 141–153.
Katz, L., S.D. Levitt, and E. Shustorovich, 2003. Prison Conditions, Capital Punishment, and Deterrence. American Law and Economics Review, 5 (2), 318–343.
Khanna, M., W.R.H. Quimio, and D. Bojilova, 1998. Toxics Release Information: A Policy Tool for Environmental Protection. Journal of Environmental Economics and Management, 36 (3), 243–266.
Kim, S., C. Qin, and Y. Yu, 2002. Bribery in Rank-Order Tournaments. Working Paper, University of California Santa Barbara, Department of Economics.
Konar, S. and M.A. Cohen, 1997. Information as Regulation: The Effect of Community Right to Know Laws on Toxic Emissions. Journal of Environmental Economics and Management, 32 (1), 109–124.
Konar, S. and M.A. Cohen, 2001. Does the Market Value Environmental Performance? Review of Economics and Statistics, 83 (2), 281–289.
Larson, B., 1996. Environmental Policy Based on Strict Liability: Implications of Uncertainty and Bankruptcy. Land Economics, 72 (1), 33–42.
Lazear, E.P. and S. Rosen, 1981. Rank-Order Tournaments as Optimum Labor Contracts. Journal of Political Economy, 89 (5), 841–864.
Milgrom, P. and J. Roberts, 1988. An Economic Approach to Influence Activities in Organizations. American Journal of Sociology, 94, S154–S179.
Murphy, J.J. and J.K. Stranlund, 2006. Direct and Market Effects of Enforcing Emissions Trading Programs: An Experimental Analysis. Journal of Economic Behavior and Organization, 61 (2), 217–233.
Murphy, J.J. and J.K. Stranlund, 2007. A Laboratory Investigation of Compliance Behavior Under Tradable Emissions Rights: Implications for Targeted Enforcement. Journal of Environmental Economics and Management, 53 (2), 196–212.
Nalebuff, B. and J. Stiglitz, 1983. Prices and Incentives: Towards a General Theory of Compensation and Competition. Bell Journal of Economics, 14 (1), 21–43.
Plott, C.R., 1987. Dimensions of Parallelism: Some Policy Applications of Experimental Methods. In: A.E. Roth, ed., Laboratory Experimentation in Economics. Cambridge: Cambridge University Press, pp. 193–219.
Prendergast, C. and R. Topel, 1996. Favoritism in Organizations. Journal of Political Economy, 104 (5), 958–978.
Shavell, S., 1984. A Model of the Optimal Use of Liability and Safety Regulation. RAND Journal of Economics, 15 (2), 271–280.
Smith, V.L., 1982. Microeconomic Systems as an Experimental Science. American Economic Review, 72 (5), 923–955.
Stiglitz, J. and A. Weiss, 1981. Credit Rationing in Markets with Imperfect Information, Part I. American Economic Review, 71 (3), 393–410.
Stranlund, J.K. and K.K. Dhanda, 1999. Endogenous Monitoring and Enforcement of a Transferable Emissions Permit System. Journal of Environmental Economics and Management, 38 (3), 267–282.
US EPA, 2001. The United States Experience with Economic Incentives for Protecting the Environment, EPA-240-R-01-001. Washington, DC: US EPA.
14 An investigation of voluntary discovery and disclosure of environmental violations using laboratory experiments

James J. Murphy and John K. Stranlund
Introduction

State and federal self-discovery and disclosure rules seek to encourage greater compliance with environmental regulations by reducing penalties for violations that are voluntarily discovered and reported to authorities. For example, the EPA's Audit Policy reduces penalties "for regulated entities that voluntarily discover, promptly disclose, and expeditiously correct noncompliance."1 Concurrent with the implementation of rules for voluntary discovery and disclosure of environmental violations over the last decade or so, a significant body of literature emerged that examines the conceptual properties of these rules (e.g., Malik, 1993; Kaplow and Shavell, 1994; Innes, 1999, 2001a, 2001b; Pfaff and Sanchirico, 2000). Taken as a whole, this literature is noncommittal on the question of whether voluntary disclosure policies are worthwhile complements to conventional enforcement strategies. In fact, provided that the predictions about the performance of voluntary disclosure policies hold up under empirical scrutiny, it is clear that whether these schemes are worthwhile will depend upon the specifics of particular regulatory settings. Unfortunately, empirical analyses of the performance of voluntary disclosure policies are limited to just a few examinations of the effects of existing state and federal discovery and disclosure rules. For example, Stafford (2005) finds evidence that the EPA's Audit Policy and state audit policies have had a positive effect on compliance among hazardous waste facilities. Pfaff and Sanchirico (2004) examine the effects of the Audit Policy on the number and form of self-disclosed violations and find that the policy has encouraged self-discovery and disclosure of violations, but these reported violations are minor in comparison with the violations uncovered by conventional EPA audits. While econometric studies with field data are critical for understanding the effectiveness of existing policies, data limitations and the inability to vary these policies in a controlled setting can preclude direct tests of theoretical predictions. Moreover, experiments provide direct control over the parameters of interest, which allows researchers to perform sensitivity analyses that may not be possible outside of the laboratory. Therefore, in this chapter we report the results
of a series of experiments designed to test fundamental hypotheses about the performance of voluntary discovery and disclosure policies. In particular, we address the following questions: How well do these policies perform in terms of motivating firms to voluntarily investigate whether they are in violation of an environmental standard and to disclose any violations they discover? How do voluntary discovery and disclosure policies affect the care that firms exercise to prevent environmental violations? Relative to conventional enforcement strategies, what are the effects of these policies on enforcement effort and environmental quality? We designed and conducted a series of experiments with seven treatments. Using a within-subject design, each subject participated once in each of the seven treatments. All experiments began with a conventional enforcement model as the baseline treatment. In this treatment subjects were responsible for making a costly decision about the level of care taken to reduce the likelihood of a violation occurring. Subjects did not incur any costs if a violation occurred; however, they were audited with a known, exogenous probability, and they were penalized if a violation was discovered. The elements of the conventional enforcement treatment were contained in the other six treatments. Each of the other treatments gave subjects the opportunity to voluntarily disclose their violations under different conditions. These treatments varied according to the penalty for voluntarily disclosed violations and whether it was costly for the subjects to determine their compliance status. Subjects responded strongly to the disclosure incentive. In each of our treatments involving an opportunity for voluntary disclosure, a significant number of subjects chose to disclose. As expected, the number of disclosers tended to fall as the automatic penalty for reported violations was increased. The policy significance of inducing a significant number of voluntary disclosures is well known: relative to a conventional enforcement strategy, the government can reduce the effort it expends to detect violations because it can focus these efforts on the subset of firms that do not disclose a violation (Malik, 1993; Kaplow and Shavell, 1994). However, reducing the penalty for disclosed violations to motivate more self-reporting also reduced the care that the subjects took to avoid these violations. Thus, we find strong evidence of a tradeoff between increased violation disclosures and reduced environmental quality. This does not mean, however, that every disclosure policy results in lower environmental quality. In fact, under the condition that subjects did not have to pay to discover their compliance status, we find that it is possible to induce a significant number of violation disclosures without affecting the deterrence of a conventional enforcement strategy (Kaplow and Shavell, 1994). However, attempting to induce increasing numbers of voluntary disclosures will at some point result in less deterrence relative to a conventional enforcement strategy and, hence, will eventually lead to reduced environmental quality. We also find strong support for a hypothesis of Malik (1993) that, relative to conventional enforcement, disclosure policies will result in more violations being
sanctioned, but fewer of these sanctions are for violations that are uncovered by the government. Sanctioning violations is likely to be costly. If the costs of sanctioning voluntarily disclosed violations are roughly equal to the costs of sanctioning violations that the government uncovers, then voluntary disclosure policies will tend to increase sanctioning costs. However, because fewer sanctions are applied to violations that the government uncovers, Malik (1993) argues that a voluntary disclosure policy may decrease sanctioning costs, in spite of the increase in the number of sanctions applied, because punishing disclosed violations is probably less expensive than punishing violations that the government uncovers. A firm that voluntarily discloses a violation is essentially admitting liability for being noncompliant. This admission can reduce the burden on the government to produce sufficient evidence for a finding of liability. Moreover, a firm that voluntarily admits liability is less likely to engage in costly efforts to challenge or otherwise avoid the imposition of a penalty.2 Although our results are largely consistent with the qualitative predictions of the existing theory regarding the role of voluntary disclosure in regulatory enforcement, we do observe one unanticipated effect. For each disclosure policy we examined, subjects who chose not to disclose their violations when given the opportunity to do so tended to exercise more care to avoid violations than under a conventional enforcement strategy. This is unexpected because a subject who chooses not to disclose a violation opts to face the identical random monitoring and penalty as under conventional enforcement. That the addition of a voluntary disclosure policy to a conventional enforcement strategy tended to induce nonreporters to exercise more care likely suggests a framing effect associated with the opportunity to voluntarily disclose violations. It is important to ask whether this effect is simply an artifact of the laboratory setting, or if there is some reason to believe that regulated firms would likely behave in this way. Since we see no reason to expect that this framing effect would hold in non-laboratory regulatory settings, our view is that it is probably limited to the laboratory. Despite this framing effect, our work provides strong empirical evidence of the fundamental tradeoffs inherent in voluntary discovery and disclosure policies. Thus, the policy significance of our work is clear. Both the theoretical underpinnings of this work and our experimental tests make it clear that any conclusions about the relative benefits and costs of voluntary disclosure policies will require detailed knowledge of monitoring costs, sanctioning costs, the harm caused by environmental violations, and firms' costs of internal audits to determine their compliance status. Therefore, it is likely that the question of whether disclosure policies are a worthwhile complement to regular environmental enforcement will have to be answered on a case-by-case basis.
Theory and hypotheses

The theoretical underpinnings of our study are drawn from a simple model of an industry composed of n identical risk neutral firms.3 Each firm chooses a level of care to reduce the probability, p, of a violation of an environmental standard.
Each has a profit function v(p), with v′(p) > 0 and v″(p) < 0. Under conventional enforcement of the standard, firms do not have an opportunity to disclose their violations. The government randomly audits a subset of firms so that the probability that any firm will be audited is π. Uncovered violations are punished with a monetary penalty φ. A risk neutral firm chooses the probability that a violation occurs to maximize its expected profit, V(p, πφ) = v(p) − pπφ. The interior choice of the probability of a violation is p*(πφ), which is the implicit solution to v′(p) − πφ = 0. The main hypotheses of our work are devoted to examining the behavior of firms when voluntary disclosure rules are added to an existing conventional enforcement strategy, and the policy implications that flow from these behavioral hypotheses.4 All of the behavioral hypotheses are focused on the reporting and care decisions of firms under disclosure rules that would leave risk neutral firms indifferent between voluntarily disclosing their violations and choosing instead to face the random monitoring and penalty of conventional enforcement. These disclosure rules provide useful benchmark policies from which we can derive the relative merits of voluntary disclosure policies. Moreover, we conducted additional experiments to provide sensitivity analyses around these benchmarks. An obvious starting point is to ask whether a disclosure policy can motivate noncompliant firms to report voluntarily their violations to the government, and whether this has any effect on deterrence. Suppose that firms are given the opportunity to disclose voluntarily their violations to the government, and those that do so are penalized φd < φ automatically (the subscript d indicates voluntary disclosure). Under the common theoretical assumption that firms always disclose their violations when they are indifferent about doing so, a risk neutral firm will disclose a violation if and only if the automatic penalty for a disclosed violation does not exceed the expected penalty it faces if it fails to report the violation; that is, disclosure occurs if and only if φd ≤ πφ. From a theoretical perspective, setting φd = πφ so that risk neutral firms are indifferent between disclosing their violations and not doing so implies that all violations will be disclosed. Of course, in a laboratory setting we do not expect that indifferent subjects will always choose to disclose, nor do we expect that all subjects are risk neutral. Nevertheless, we test the following hypothesis:

Hypothesis 1: Any voluntary disclosure policy that leaves risk neutral firms indifferent between disclosing their violations and not disclosing them will motivate a significant number of voluntary disclosures.

When risk neutral firms have costless and perfect information about their compliance status, setting φd = πφ yields the care choice p*(φd) = p*(πφ), which implies no effect on deterrence. Therefore, we have:

Hypothesis 2: Suppose that firms know their compliance status without a costly self-audit. A voluntary disclosure policy that leaves risk neutral firms indifferent
between disclosing their violations and not disclosing them will not change the care that disclosers take to avoid violations.

However, complex regulations and production technologies may make it difficult for firms, particularly large firms, to determine whether they are in compliance with environmental standards without undertaking a costly self-audit of their operations (Pfaff and Sanchirico, 2000). With costly discovery, Innes (2001b) has shown that inducing voluntary discovery and disclosure of violations requires that the certain penalty for disclosed violations be reduced to compensate firms for their discovery costs. However, doing so weakens deterrence. To demonstrate this result, suppose that a firm incurs a cost c to discover whether it has violated the standard. If the firm does not invest in discovery, then its expected payoff is the same as under conventional enforcement, that is, V(πφ) = v(p*(πφ)) − p*(πφ)πφ. However, if the firm has invested in discovery and has discovered a violation, then the firm will report the violation if φd ≤ πφ. Assuming that this holds, a firm's choice of violation probability is p*(φd) if it invests in self-discovery, and its expected payoff is V(φd) − c, where V(φd) = v(p*(φd)) − p*(φd)φd. Clearly, the firm is indifferent to discovery and disclosure if V(φd) − c = V(πφ). Since this requires V(φd) > V(πφ), the penalty for disclosed violations must be strictly lower than the expected penalty under conventional enforcement, thereby weakening deterrence. Consequently, p*(φd) > p*(πφ), and we have:

Hypothesis 3: If firms must incur a cost to discover their compliance status, then a voluntary disclosure policy that leaves risk neutral firms indifferent between voluntarily discovering and disclosing their violations and facing a conventional enforcement strategy will motivate disclosers to decrease the care they take to avoid violations.

Our final behavioral hypothesis deals with firms that choose not to disclose their violations. Obviously, firms that choose not to disclose their violations are simply choosing to face the conventional enforcement strategy. Therefore, the addition of a voluntary disclosure rule to a conventional enforcement strategy should have no effect on the care that non-disclosers take to avoid violations. Of course, this is true of any voluntary disclosure rule, leading to:

Hypothesis 4: Add any voluntary disclosure rule to a conventional enforcement strategy. Those firms that choose not to report their violations will not change the care they take to avoid violations.

From hypotheses 1 through 4 follow several important policy implications that reveal the relative merits of voluntary disclosure policies. First, if Hypotheses 2 and 4 hold, Kaplow and Shavell (1994) have shown:

Policy Implication 1: If firms know their compliance status without a costly self-audit, then a voluntary disclosure policy that leaves risk neutral firms indifferent
between disclosing their violations and not disclosing them will not change the expected number of violations. But if discovery is costly and a disclosure rule motivates a significant number of firms to report their violations (Hypothesis 1), and if Hypotheses 3 and 4 hold, then we have the following result due to Innes (2001b): Policy Implication 2: If firms must incur a cost to discover their compliance status, then a voluntary disclosure policy that leaves risk neutral firms indifferent to voluntary discovery and disclosure will increase the expected number of violations. While a disclosure rule may or may not reduce deterrence depending on whether firms must undertake a costly self-audit to determine their compliance status, these rules will always allow the government to reduce its monitoring effort if they motivate a significant number of voluntary disclosures (Malik, 1993; Kaplow and Shavell, 1994). Since the government does not need to audit those that disclose their violations, it can focus its monitoring effort on the subset of firms that do not report a violation. Clearly, maintaining the same level of deterrence for those who do not report a violation requires fewer audits. This result holds regardless of whether firms must conduct a costly self-audit. Thus, if a significant number of firms are motivated to disclose their violations (Hypothesis 1), then we have: Policy Implication 3: A voluntary disclosure policy that leaves risk neutral firms indifferent to disclosing their violations will reduce the number of audits that are required to maintain the same level of deterrence for those that do not disclose their violations. While we expect that adding a voluntary disclosure policy to a conventional enforcement strategy will allow the government to reduce its monitoring effort, we also expect that more violations will be sanctioned. If firms do not have to pay to determine their compliance status, then Policy Implication 1 asserts that the expected number of violations will be unchanged. However, the voluntarily disclosed violations are sanctioned with certainty, whereas without the disclosure opportunity these violations would only be sanctioned with the probability of an audit. Thus, if a significant number of violations are disclosed (Hypothesis 1) when firms know their compliance status without cost, then a voluntary disclosure policy that leaves risk neutral firms indifferent to disclosing their violations and not doing so will increase the total number of sanctioned violations. This move toward more sanctions is reinforced when firms must audit themselves to determine their compliance status, simply because deterrence is weaker and the expected number of violations increases (Policy Implication 2). Thus, we have: Policy Implication 4: Any voluntary disclosure policy that leaves risk neutral firms indifferent about disclosing their violations and not doing so will increase the expected number of total sanctions.
Since sanctioning noncompliant firms is likely to be costly, it is possible that a voluntary disclosure rule could increase enforcement costs if the additional costs of sanctioning a higher number of violations outweigh the reduction in monitoring costs (Kaplow and Shavell, 1994). However, the effect on sanctioning costs is complicated by the fact that voluntary disclosure policies induce a shift from penalizing violations that are uncovered by the government to penalizing voluntarily disclosed violations. Malik (1993) argues that punishing violations that are voluntarily disclosed is probably cheaper than punishing violations that the government uncovers, because punishments for disclosed violations require less evidence and are less likely to be challenged. Although more violations are punished, fewer sanctions are levied for violations that are uncovered by the government. This is a simple consequence of Hypotheses 1 and 4: firms that do not disclose their violations choose the same level of care to prevent their violations as under conventional enforcement (Hypothesis 4), but there are fewer violations for the government to uncover because only a subset of firms choose not to disclose their violations (Hypothesis 1). Hence:

Policy Implication 5: A voluntary disclosure policy that leaves risk neutral firms indifferent to disclosing their violations will reduce the expected number of sanctions that are levied on undisclosed violations.

Before we move on to our experiments, it is worth saying a few words about risk preferences. The theory we present, the hypotheses we test, and the implications that follow from these hypotheses are all based on the assumption that agents are risk neutral. We refrain from developing and testing a theory with more flexible risk preferences because the main motivation of this work is to test hypotheses from the existing theory of self-reporting in law enforcement. To our knowledge there is no theoretical work in this area that allows for flexible risk preferences. Thus, developing new theory to account for non-neutral risk preferences is beyond the scope of this chapter. Despite its limitations, the existing theory of voluntary disclosure policies with risk neutral firms provides a useful benchmark from which to judge the qualitative effects of disclosure policies on the variables of real policy significance, that is, the effects of these policies on overall deterrence and government enforcement efforts. Nevertheless, we do believe that all experimental studies that examine compliance behavior in various settings could benefit from information about subjects' risk preferences.5
Experimental design

Our experiments were designed to test the hypotheses and policy implications presented in the previous section, and were conducted in a computer laboratory using software specifically developed for this research. In all treatments, subjects were responsible for making a production decision that yielded earnings, v. When they produced, there was a probability, p, that a violation would occur.6 Subjects could reduce the likelihood of a violation, but this was costly in terms of foregone production earnings, in particular, v(p) = 3.60 − [0.55/(0.30 + p)]. The computer screen presented each subject with a table that displayed all the possible violation probability/production earning combinations in 0.05 increments between 0.05 and 0.95. Table 14.1 summarizes the experimental design.

Table 14.1 Experimental design

Conventional enforcement   Voluntary disclosure only   Voluntary disclosure with costly discovery
CE                         D-H ($2.35)                 CD-H ($1.50)
                           D-I ($1.50)                 CD-I ($0.97)
                           D-L ($0.97)                 CD-L ($0.60)

Note: The conventional enforcement penalty in all treatments is $2.50. The reduced penalty, φd, for voluntary disclosure is shown in parentheses.

The conventional enforcement treatment (CE) formed the baseline; the remaining six treatments built upon CE such that all features of CE were common throughout the experiment. Under the CE treatment, each subject knew that they would be audited with probability π = 0.6. If a violation occurred and was uncovered by an audit, then the subject incurred a fine of φ = $2.50. Subjects in this treatment did not have an opportunity to disclose voluntarily their violations.

The middle column of Table 14.1 contains our voluntary disclosure only treatments: D-H, D-I, and D-L. In these treatments, subjects knew automatically and without cost whether a violation occurred. These treatments were identical to the CE treatment, except that subjects had the option to disclose voluntarily whether a violation occurred. If a subject chose not to disclose a violation, then she faced the identical enforcement strategy as CE (0.6 audit probability, $2.50 fine). However, if she chose to voluntarily disclose a violation, then she automatically paid a reduced fine, φd. The level of this fine is the distinguishing factor among the D-H, D-I, and D-L treatments and is shown in parentheses next to the treatment labels in Table 14.1. In treatment D-I, the automatic penalty for a voluntarily disclosed violation (φd = $1.50) was set such that a risk neutral subject would be indifferent between disclosing a violation if one occurred and facing the uncertainty of the conventional enforcement strategy. Note that this penalty equals the expected penalty under conventional enforcement; that is, πφ = φd = $1.50. To examine the responsiveness of the subjects to the voluntary disclosure incentive, we chose a higher disclosure penalty of $2.35 for the D-H treatment and a lower disclosure penalty of $0.97 for the D-L treatment.

The final column of Table 14.1 contains our voluntary disclosure with costly discovery treatments: CD-H, CD-I, and CD-L. These treatments were the same as the voluntary disclosure only treatments, except that subjects did not know whether a violation occurred unless they paid $0.20 to find out. Those who chose not to pay the cost of self-discovery could not voluntarily disclose a
violation and therefore faced the identical enforcement strategy as CE. Like the voluntary disclosure only treatments, the costly discovery treatments varied according to the penalty for disclosed violations. In treatment CD-I, the automatic penalty for a disclosed violation was φd = $0.97. In theory, this disclosure penalty makes a risk neutral subject indifferent between discovering and disclosing a violation, or facing the conventional enforcement strategy. Notice that this automatic penalty for a disclosed violation in the CD-I treatment is lower than the expected penalty under conventional enforcement (πφ = $1.50). This is necessary to motivate subjects to invest in self-discovery. As with the voluntary disclosure only treatments, we chose a higher disclosure penalty for CD-H and a lower penalty for CD-L to examine the responsiveness of subjects to the disclosure incentive.

For the six treatments that included the option to voluntarily disclose a violation, we used the strategy method to ensure that we had an observation for each subject’s disclosure decision regardless of whether a violation occurred. Before it was revealed whether a violation occurred, subjects had to decide whether they would commit to disclosing voluntarily a violation if one occurred, or face the uncertainty of random audits and potential penalties under conventional enforcement. Conceptually, since the disclosure decision was not costly, forcing subjects to commit to this decision at the outset should not affect their behavior.

A total of 180 students were recruited from the student population at the University of Massachusetts, Amherst. Subjects were paid $5 for agreeing to participate and showing up on time, and were then given an opportunity to earn additional money in the experiment. These additional earnings ranged between $10.55 and $18.27, with a mean of $14.88 (σ = 1.49). Earnings were paid in cash at the end of each experiment. Each experiment lasted about an hour and a half. Subjects were given a copy of the instructions that the experimenter then read aloud.7 The experimenter used an overhead projector to demonstrate the software while the subjects performed the same tasks on their individual computers. It took about 30 minutes to complete the instructions and answer any questions. The same experimenter conducted all sessions.

Every subject participated in all seven treatments, starting with conventional enforcement as the baseline. The remaining six treatments were presented in one of six sequences using a Latin Square design to control for possible order effects; 30 subjects participated in each of the six sequences. The sequences of treatments are provided in Table 14.2. Within a sequence (i.e., a row in Table 14.2) there were seven stages, one for each treatment. A stage consisted of three practice rounds, followed by one “real money” round. The parameters for the practice and real rounds were the same; data from the practice rounds were discarded. Thus, each subject generated seven observations, one for each treatment. The Latin Square was constructed such that each treatment appears once in each sequence, once in each stage (i.e., a column in Table 14.2), and each treatment precedes and follows every other treatment one time.
Table 14.2 Sequence of treatments using a Latin Square

Sequence ID   Stage
              1    2      3      4      5      6      7
A             CE   D-H    D-I    CD-L   D-L    CD-I   CD-H
B             CE   D-I    D-L    D-H    CD-H   CD-L   CD-I
C             CE   D-L    CD-H   D-I    CD-I   D-H    CD-L
D             CE   CD-H   CD-I   D-L    CD-L   D-I    D-H
E             CE   CD-I   CD-L   CD-H   D-H    D-L    D-I
F             CE   CD-L   D-H    CD-I   D-I    CD-H   D-L
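As a quick sanity check (ours, not the authors’), the balance properties claimed for Table 14.2 can be verified mechanically:

```python
# A minimal sketch verifying the balanced Latin Square in Table 14.2.
SEQUENCES = {
    "A": ["CE", "D-H", "D-I", "CD-L", "D-L", "CD-I", "CD-H"],
    "B": ["CE", "D-I", "D-L", "D-H", "CD-H", "CD-L", "CD-I"],
    "C": ["CE", "D-L", "CD-H", "D-I", "CD-I", "D-H", "CD-L"],
    "D": ["CE", "CD-H", "CD-I", "D-L", "CD-L", "D-I", "D-H"],
    "E": ["CE", "CD-I", "CD-L", "CD-H", "D-H", "D-L", "D-I"],
    "F": ["CE", "CD-L", "D-H", "CD-I", "D-I", "CD-H", "D-L"],
}
TREATMENTS = {"D-H", "D-I", "D-L", "CD-H", "CD-I", "CD-L"}
rows = list(SEQUENCES.values())

# Every sequence starts with the CE baseline.
assert all(row[0] == "CE" for row in rows)
# Each disclosure treatment appears once per sequence (row) ...
assert all(set(row[1:]) == TREATMENTS for row in rows)
# ... and once per stage (column).
assert all({row[s] for row in rows} == TREATMENTS for s in range(1, 7))
# Each treatment immediately precedes (and hence follows) every other
# treatment exactly once: 6 treatments x 5 successors = 30 distinct pairs.
pairs = [(row[i], row[i + 1]) for row in rows for i in range(1, 6)]
assert len(set(pairs)) == len(pairs) == 30
print("Table 14.2 passes all balance checks.")
```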
Results

The tests of the behavioral hypotheses and their policy implications that we specified in the second part of the chapter, as well as sensitivity analyses with respect to the disclosure incentive, are conducted with the data in Table 14.3. The second column of this table contains the mean violation probability by disclosure decision for each treatment, and the third column contains the number of individuals who did and did not commit to voluntary disclosure.

To show how we calculated the remaining values in Table 14.3, define these variables in the following way: for a particular treatment, p̄d and p̄nd are the mean violation probabilities for those who did and did not commit to disclosing their violations, respectively; nd is the number of subjects who committed to disclosure and, given 180 subjects, 180 − nd is the number of subjects who chose not to commit to disclosure. Using the mean violation probabilities and the numbers of individuals who did and did not commit to disclosing their violations, we calculated the expected number of violations for those who committed to disclosure, p̄d·nd, and the expected number of violations for those who did not, p̄nd·(180 − nd). These values are reported in the fourth column of Table 14.3. Note that the expected number of violations under CE is simply the mean violation probability for this treatment times the number of subjects.

Table 14.3 also includes the expected number of audits necessary to maintain the π = 0.6 audit probability for those who did not report a violation. To calculate these values, note first that individuals who committed to disclosure might have been subject to an audit if they did not experience a violation and, hence, did not submit a violation report. For a particular treatment, the expected number of individuals who committed to disclosure but were subject to an audit because a violation did not occur is (1 − p̄d)nd. Obviously, all who did not commit to disclosure were subject to an audit. Therefore, the expected number of audits required to maintain the π = 0.6 probability of a random audit for those who did not disclose a violation is π[(1 − p̄d)nd + (180 − nd)] = (0.6)(180 − p̄d·nd). These values are reported in the fifth column of Table 14.3. The required number of audits under CE is simply πN = (0.6)(180) = 108.
Table 14.3 Mean violation probabilities, expected numbers of violations, and expected numbers of enforcement actions

Treatment              Mean violation   N     Expected no.    Expected no.   Expected no.
(disclosure penalty)   probability            of violations   of audits      of fines

CE                     0.509            180    91.6           108.0           55.0
D-H ($2.35)            0.448            180    80.5            96.7           55.9
  Disclose             0.497             38    18.9                           18.9
  Not disclose         0.434            142    61.6                           37.0
D-I ($1.50)            0.497            180    89.5            82.7           70.5
  Disclose             0.534             79    42.2                           42.2
  Not disclose         0.468            101    47.3                           28.4
D-L ($0.97)            0.611            180   109.8            61.9           96.6
  Disclose             0.656            117    76.8                           76.8
  Not disclose         0.525             63    33.1                           19.8
CD-H ($1.50)           0.467            180    84.1            86.3           64.9
  Disclose             0.547             66    36.1                           36.1
  Not disclose         0.421            114    48.0                           28.8
CD-I ($0.97)           0.536            180    96.4            73.3           81.0
  Disclose             0.616             94    57.9                           57.9
  Not disclose         0.448             86    38.5                           23.1
CD-L ($0.60)           0.650            180   117.0            51.6          107.8
  Disclose             0.701            134    93.9                           93.9
  Not disclose         0.502             46    23.1                           13.9
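The derived columns of Table 14.3 follow mechanically from the mean violation probabilities and the split of subjects; the following sketch (ours) reproduces them up to rounding:

```python
# Sketch reproducing the derived columns of Table 14.3 from the mean
# violation probabilities and the number of committed disclosers.
PI, N = 0.6, 180  # audit probability and number of subjects

# treatment: (mean p, disclosers; mean p, non-disclosers; no. of disclosers)
DATA = {
    "D-H":  (0.497, 0.434, 38),
    "D-I":  (0.534, 0.468, 79),
    "D-L":  (0.656, 0.525, 117),
    "CD-H": (0.547, 0.421, 66),
    "CD-I": (0.616, 0.448, 94),
    "CD-L": (0.701, 0.502, 134),
}

for name, (p_d, p_nd, n_d) in DATA.items():
    viol_d = p_d * n_d              # expected violations, disclosers
    viol_nd = p_nd * (N - n_d)      # expected violations, non-disclosers
    audits = PI * (N - viol_d)      # audits keeping pi = 0.6 for non-reporters
    fines = viol_d + PI * viol_nd   # disclosed fines are certain; undisclosed
                                    # fines require an audit
    print(f"{name}: violations {viol_d + viol_nd:5.1f}, "
          f"audits {audits:5.1f}, fines {fines:5.1f}")
```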
Finally, we calculated the expected numbers of fines levied on disclosed and undisclosed violations. Since fines are levied on all reported violations, the expected number of fines for disclosed violations in a particular treatment equals the expected number of violations of those who committed to disclosure, p̄d·nd. On the other hand, fines for undisclosed violations are levied with the probability of an audit. Thus, the expected number of fines for undisclosed violations is the expected number of these violations times the audit probability; that is, (0.6)(p̄nd)(180 − nd). In the final column of Table 14.3 we report the expected numbers of fines for disclosed and undisclosed violations for each treatment, as well as their sums. The expected number of fines under CE is the expected number of violations for this treatment times the audit probability.

Before discussing the results contained in Table 14.3, it is worth noting that, as expected, the observed outcomes do not match specific predictions about violation probabilities and reporting choices based on a model of risk neutral, expected payoff-maximizing agents. Violation probabilities tended to be higher than what a risk neutral subject would be expected to choose. Furthermore, no risk neutral subject would choose to disclose a violation in the D-H and CD-H treatments, yet a non-trivial minority of subjects did so. Similarly, every risk neutral subject would choose to disclose their violations in the D-L and CD-L treatments, but a non-trivial minority of subjects chose not to do so.
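For concreteness, the risk neutral benchmark choices can be computed on the 0.05 grid used in the experiment; this is our own illustration using the chapter’s parameters, not the authors’ software:

```python
# Risk neutral benchmark on the experiment's 0.05 grid (our sketch).
def v(p):
    # Production earnings as a function of the violation probability p.
    return 3.60 - 0.55 / (0.30 + p)

GRID = [round(0.05 * k, 2) for k in range(1, 20)]  # 0.05, 0.10, ..., 0.95

def best(penalty_per_p, fixed_cost=0.0):
    # Grid point maximizing expected payoff v(p) - penalty_per_p*p - fixed_cost.
    payoff = lambda p: v(p) - penalty_per_p * p - fixed_cost
    p_star = max(GRID, key=payoff)
    return p_star, payoff(p_star)

print("CE:             p = %.2f, payoff = %.3f" % best(0.6 * 2.50))  # 0.30, 2.233
print("D-I discloser:  p = %.2f, payoff = %.3f" % best(1.50))        # 0.30, 2.233
print("CD-I discloser: p = %.2f, payoff = %.3f" % best(0.97, 0.20))  # 0.45, 2.230
```

The benchmark violation probability is about 0.30 under CE and D-I, and 0.45 for a committed discloser under CD-I (the $0.97 penalty roughly equalizes the optimized expected payoffs), well below the observed means in Table 14.3.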
All of our hypotheses and their policy implications entail pairwise comparisons of the treatment effects of each disclosure treatment relative to conventional enforcement. Because each subject participated first in the conventional enforcement treatment and then once in each of the six disclosure treatments, we preserve the within-subject comparison by using the non-parametric Wilcoxon signed-rank test for matched pairs.8

Behavioral hypotheses

Hypothesis 1 holds as expected. In both the D-I and CD-I treatments, there is a roughly even split between the number of people who committed and who did not commit to disclosing their violations (79:101 for D-I and 94:86 for CD-I). This confirms our expectation that a significant number of subjects would choose to disclose their violations under disclosure policies that make risk neutral individuals indifferent between disclosure and non-disclosure.

Now consider the impacts of voluntary disclosure on the care taken by individuals to prevent violations in the D-I and CD-I treatments relative to their choice under CE (conventional enforcement). When self-discovery is costless, as in D-I, there should be no change in the choice of violation probabilities by those who committed to disclosing their violations (Hypothesis 2). Our results are consistent with this hypothesis: for those who committed to disclosure in D-I, their mean violation probability (0.534) is not statistically different from their mean violation probability under CE (0.518, p = 0.61).9 However, when it is costly for subjects to discover whether a violation occurred, as in CD-I, the disclosure penalty must be reduced below the expected penalty under conventional enforcement in order to induce discovery. Since deterrence is weaker for those who choose to discover and disclose, we should observe higher violation probabilities for these individuals relative to their choices under conventional enforcement (Hypothesis 3). As predicted, the mean violation probability for those subjects who committed to discovery and disclosure under CD-I (0.616) is significantly higher than the mean violation probability made by these same subjects under CE (0.506, p = 0.01).

Although those subjects who committed to disclosing their violations under the D-I and CD-I treatments behaved as theory predicts, those who did not commit to disclosing their violations behaved unexpectedly. Hypothesis 4 asserts that these individuals should not change their violation probabilities when a disclosure policy is added to a conventional enforcement strategy, simply because choosing not to disclose their violations means they are choosing to face the unchanged conventional enforcement strategy. Instead, in all six voluntary disclosure treatments, those subjects who did not commit to disclosure tended to choose lower violation probabilities than under CE. Let ∆t be the mean change in violation probability from CE for those who did not commit to disclosure in voluntary disclosure treatment t, and let p be the result of a Wilcoxon signed-rank test of the hypothesis that there is no change. We observe: ∆D-H = −0.053 with p = 0.00; ∆D-I = −0.035 with p = 0.02; ∆D-L = −0.02 with p = 0.64;
∆CD-H = −0.079 with p = 0.00; ∆CD-I = −0.065 with p = 0.01; and ∆CD-L = −0.047 with p = 0.24. Clearly we reject the hypothesis of equal violation probabilities in four of the six voluntary disclosure treatments (including D-I and CD-I), which is inconsistent with Hypothesis 4. These reductions are surprising, because the incentives for exercising care to prevent violations are unchanged by the introduction of a voluntary disclosure policy if one does not intend to disclose a violation. Hence, these reductions suggest a framing effect that is due to the introduction of a voluntary disclosure option that, for some reason, motivated non-disclosers to choose lower violation probabilities.

Policy implications

In the absence of any framing effects, all of the policy implications follow directly from the behavioral hypotheses. However, the framing effect (Hypothesis 4) is a potentially complicating factor, so we need to determine whether the policy implications hold despite this effect.

Policy Implication 1 asserts that the expected number of violations under treatment D-I should not be different from that number under CE. This is precisely what we observe despite the framing effect for those who chose not to disclose. Table 14.3 shows that the mean violation probability (and therefore the expected number of violations) under CE (0.509) is about the same as the mean violation probability for both disclosers and non-disclosers under D-I (0.497), and this difference is not statistically significant (p = 0.23). This implies that the expected numbers of violations in these treatments are not significantly different (91.6 under CE vs. 89.5 under D-I). Thus, it appears that it is possible to add a disclosure opportunity to a conventional enforcement strategy without affecting deterrence – at least as long as there are no discovery costs.

However, when subjects incur a cost to discover whether a violation occurred under CD-I, the expected number of violations should be higher than under CE (Policy Implication 2) because those that disclose their violations increase their violation probabilities (Hypothesis 3) while those that do not disclose should choose the same violation probabilities (Hypothesis 4). We have already seen that the disclosers in CD-I significantly increased their violation probabilities by 0.11, on average, from their violation probabilities under CE. However, the framing effect we have identified led non-disclosers to reduce their violation probabilities by 0.065 on average. Overall, the mean violation probability under CD-I is a bit higher than under CE (+0.027), but this difference is not statistically significant (p = 0.93). Thus our data do not support Policy Implication 2.

Despite our failure to support Policy Implication 2, we have good reasons to continue to expect that disclosure policies when firms must invest in self-discovery would lead to less overall deterrence in non-laboratory settings. First, our view is that it is hard to justify a belief that the framing effect that led to our failure to support Policy Implication 2 would actually motivate firms in the field. Second, our results strongly support the hypothesis that inducing costly discovery and voluntary disclosure will lead to weaker deterrence for those who
choose to discover and disclose. Thus, assuming that the framing effect for non-disclosers is unlikely to hold outside the laboratory, we believe that our results do justify a continued expectation of weaker deterrence when firms are given the opportunity to discover and disclose their violations voluntarily.

Policy Implication 3 asserts that if a voluntary disclosure policy motivates a significant number of violation disclosures, then fewer government audits are required to maintain the same level of deterrence for those that do not disclose. In fact, it is impossible for the expected number of audits to increase: as long as some firms disclose their violations, the remaining subset that are subject to random audits will necessarily be smaller than under CE. This is precisely what we observe; since a significant number of subjects voluntarily disclosed their violations under both D-I and CD-I (consistent with Hypothesis 1), as shown in Table 14.3, the expected number of audits required to maintain a 0.6 audit probability for those who did not report a violation under D-I (82.7) and CD-I (73.3) is significantly lower than under CE (108).

Although these results indicate that voluntary disclosure policies can lead to reduced monitoring effort, Policy Implication 4 suggests that the expected number of fines, and possibly sanctioning costs, could increase. This result necessarily holds if some violations are voluntarily disclosed (Hypothesis 1) and therefore automatically sanctioned, and if there is no change in violation probabilities of those who choose not to disclose (Hypothesis 4). Despite our observation that individuals who did not commit to disclosure tended to reduce their violation probabilities relative to conventional enforcement, as shown in Table 14.3, the expected total number of fines under D-I (70.5) and under CD-I (81.0) are significantly greater than the expected number of fines under CE (55.0).

While voluntary disclosure policies are likely to result in a greater number of sanctions, Policy Implication 5 asserts that the number of the potentially more costly fines for undisclosed violations will be smaller. Since a significant number of subjects chose not to disclose their violations in the D-I and CD-I treatments and there is only a small reduction in their mean violation probabilities in these treatments, it is not surprising that the results in Table 14.3 support this policy implication. Of the 70.5 expected fines under D-I, only 28.4 are for violations that are uncovered by a government audit. Similarly, under CD-I, only 23.1 of the 81.0 expected fines are for undisclosed violations. Both of these are significantly lower than the 55.0 expected fines under CE.

Sensitivity analysis

Our behavioral hypotheses, their policy implications, and, hence, the discussion thus far have focused on the two treatments that were parameterized such that risk neutral agents would be indifferent about committing to disclosing their violations (D-I and CD-I). To examine the sensitivity of our results to the incentive for voluntary disclosure, we varied the reduced penalty for voluntarily reported violations.
Our results suggest that the commitment to disclose a violation is accompanied by a potentially substantial decrease in the care that individuals took to prevent violations. Table 14.3 shows that within each of the six treatments that allow voluntary violation disclosure, the mean violation probability (and corresponding expected number of violations) is higher for those who committed to disclosure than for those who did not (e.g. 0.656 vs. 0.525 for D-L). As one would expect, this difference is not statistically significant for the D-I treatment (p = 0.14 using a Mann-Whitney test for unmatched pairs).10 However, in four of the other five cases, this difference is significant at the 1 percent level: only for D-H is this difference not significant (p = 0.26).

Moreover, the mean violation probabilities of those who committed to disclosing their violations increase rather rapidly as the penalty for disclosed violations is reduced. For the disclosure only treatments, observe that the mean violation probability for those who committed to disclosure increases from 0.497 to 0.534 and then to 0.656 as the penalty for disclosed violations is reduced. A Mann-Whitney test making the pairwise comparison of treatments D-H and D-L indicates that this increasing trend is statistically significant (p = 0.00).11 A similar pattern occurs in the costly discovery treatments.12 Since the number of subjects who committed to disclose their violations also increases as the disclosure penalty is reduced, the expected number of violations of those who committed to disclosure also increases quickly. For the disclosure only treatments, these values increase from 18.9 for D-H to 42.2 for D-I, and on to 76.8 for D-L. The same pattern holds for the costly discovery treatments.

Largely because the number of subjects who committed to disclosing their violations and their violation probabilities both increase as the disclosure penalty is reduced, the expected total number of violations also increases. From Table 14.3, the mean violation probability for disclosers and non-disclosers combined increases in the disclosure only treatments from 0.448 to 0.497 and then to 0.611 as the disclosure penalty is reduced. Again, this overall trend is highly significant: the p-value for a Mann-Whitney test comparing D-H to D-L is 0.00.13 Perhaps more revealing is the resulting increase in the expected number of violations from 80.5 to 89.5 and then to 109.8 for the disclosure only treatments. These patterns hold for the costly discovery treatments as well.14 While the expected number of violations increases as the disclosure penalty is reduced, the expected number of required audits falls – from 96.7 to 82.7 and then to 61.9 for the disclosure only treatments and similarly for the costly discovery treatments.

Our results clearly suggest an important tradeoff inherent in voluntary discovery and disclosure policies: reducing the penalty that firms automatically pay if they voluntarily disclose a violation effectively induces more of them to report their violations, thereby conserving monitoring effort, but these lower penalties also provide an incentive for firms to exercise less care in avoiding violations. Of course, whether the reduction in monitoring effort results in a reduction in total enforcement costs also depends upon the impact on sanctioning costs. The results in Table 14.3 show that reducing the penalty for disclosed violations
increases the total number of fines in the disclosure only treatments from 55.9 to 70.5 and then to 96.6. The same pattern holds for the costly discovery treatments. However, the number of fines levied on undisclosed violations falls as the disclosure penalty is reduced: 37 to 28.4 and then to 19.8 in the disclosure only treatments, with a similar pattern holding for the costly discovery treatments. Thus, although lower disclosure penalties led to more penalties being levied, fewer penalties were levied on undisclosed violations. Clearly, how total enforcement costs change with a greater incentive for voluntary discovery and disclosure depends on the relative costs of monitoring and sanctioning disclosed and undisclosed violations.
Conclusions

A key conclusion of our study is that, when firms know their compliance status without cost, it is possible to motivate a significant number of voluntary violation disclosures without adversely affecting environmental quality. In this case, whether voluntary disclosure policies are worthwhile depends solely on their impact on government enforcement costs. While voluntary disclosure policies can reduce government efforts to monitor the compliance behavior of firms, their impact on the costs of sanctioning noncompliant firms depends on the relative costs of sanctioning voluntarily disclosed violations and sanctioning violations that the government uncovers.

However, when firms must undertake costly self-audits to determine their compliance status, we doubt that adding a voluntary disclosure policy to an existing conventional enforcement strategy will leave deterrence unaffected. Although our results in this case fail to provide unequivocal support for the hypothesis that disclosure policies will lead to more violations when firms’ self-audits are costly, this failure is due solely to a framing effect that we doubt would persist in field settings of environmental enforcement.

Generally, our work highlights some of the essential tradeoffs inherent in voluntary discovery and disclosure policies. Namely, motivating an increasing number of violation disclosures is associated with increasing incidences of noncompliance, worsening environmental quality, decreasing government monitoring effort, and more sanctions, fewer of which are for violations that the government uncovers. Consequently, it appears that there is little theoretical or empirical justification to warrant general support for, or opposition to, voluntary discovery and disclosure policies. Moreover, any conclusion about the benefits and costs of voluntary disclosure policies will require detailed knowledge of the harm caused by environmental violations, the costs of monitoring firms and sanctioning violations, as well as firms’ costs of auditing themselves to determine their compliance status. Clearly, whether voluntary discovery and disclosure policies are an efficiency-enhancing complement to conventional environmental enforcement will have to be determined on a case-by-case basis.

While we have examined many of the essential aspects of voluntary disclosure policies, there are others that we have not considered, but that can be
addressed with straightforward modifications of our experimental designs. For example, our framework can easily be adapted to examine Innes’ (1999 and 2001a) claims that there are additional benefits to voluntary discovery and disclosure policies when firms are required to undertake costly remediation (e.g. clean-up of spills), or when they are able to engage in costly efforts to avoid government detection and punishment of their violations. Likewise, the claims of several authors that self-discovery and disclosure rules might not be as effective as hoped because firms fear that the information they discover might improve the government’s own monitoring efforts (Pfaff and Sanchirico, 2000; Mishra et al., 1997) can and should be examined within our framework.

Finally, while our study was motivated by voluntary discovery and disclosure policies to support compliance with environmental regulations, our results apply more broadly. Many environmental regulations require that firms report their compliance status to regulators. Although our experiments, and most of the literature on self-reporting in law enforcement, have focused on voluntary reporting, the tradeoffs that we highlight will also manifest themselves when reporting is mandatory. In addition, the use of disclosure policies extends well beyond environmental policies to regulations concerning occupational health and safety, product safety, and federal sentencing guidelines (Kaplow and Shavell, 1994; Innes, 2001b). Our results apply to these contexts as well.
Acknowledgments

Primary funding for this research was provided by the US EPA – Science to Achieve Results (STAR) Program grant #R829608. Additional support was provided by the Cooperative State Research, Education and Extension Service, US Department of Agriculture, Massachusetts Agricultural Experiment Station, and the Department of Resource Economics under Project No. MAS00871, and by the Center for Public Policy and Administration, University of Massachusetts Amherst. Maria Alejandra Velez and Elizabeth Gonzalez provided outstanding research assistance. We are also grateful to Jay Shimshack, Jason Shogren, and an anonymous referee for useful comments and suggestions.
Notes

1 US EPA (2000). We follow the terminology used by the EPA. “Discovery” refers to costly efforts by regulated entities to discover whether they are in violation of an environmental regulation. Some also call these actions self-audits. “Disclosure” means voluntary reporting of violations to the authorities. In the related economics literature this is usually referred to as self-reporting.
2 Innes (1999) has argued that the fact that voluntary disclosure policies will tend to lead to a greater number of sanctioned violations could also imply improved environmental quality if firms are required to correct the harm caused by their violations.
3 Innes (2001b) provides a comprehensive review of the theoretical literature on voluntary discovery and disclosure policies. The model presented here is essentially the same as the one he employs to motivate his review.
4 Thus, conventional enforcement is held fixed throughout this chapter. Moreover, we do not consider the optimal design of voluntary discovery and disclosure rules, choosing instead to focus on the qualitative effects of adding various disclosure rules to an existing, fixed conventional enforcement strategy. For the design of optimal discovery and disclosure rules see Innes (2001b).
5 Unfortunately there is no consensus about how to elicit these preferences. Instruments such as that presented by Holt and Laury (2002) may be useful, but there is evidence that risk preferences may be domain specific and not stable across institutions (Isaac and James, 2000). Therefore it is unclear whether risk preferences elicited with the Holt/Laury mechanism would be robust in predicting behavior in other settings. We believe that this is an important area for future research for those who investigate compliance behavior in experimental settings.
6 To avoid the possibility of introducing unwanted biases, we framed the experiments as a production decision in which subjects chose the probability of an unspecified accident, instead of the probability of a violation of an environmental standard. We will continue to speak of violations throughout the chapter, even though the subjects’ actions were about preventing and possibly disclosing accidents.
7 Instructions are available upon request from the author.
8 For each hypothesis, a matched pair t-test yielded the same conclusions.
9 Note that in Table 14.3 the mean violation probability under CE for all 180 subjects is 0.509. The mean violation probability under CE for the 79 subjects who committed to disclosure under D-I is 0.518.
10 This follows simply by combining Hypotheses 2 and 3.
11 A pairwise comparison of the violation probabilities under D-H and D-I for disclosers only is not significant (p = 0.50), but a comparison of these values under D-I and D-L is significant (p = 0.00).
12 The results of Mann-Whitney tests for the violation probabilities of disclosers only are as follows: CD-H vs. CD-L (p = 0.00), CD-H vs. CD-I (p = 0.12), CD-I vs. CD-L (p = 0.02).
13 A pairwise comparison of the violation probabilities under D-H and D-I for all subjects is significant (p = 0.05), as is a comparison of these probabilities under D-I and D-L (p = 0.00).
14 The results of Mann-Whitney tests for the violation probabilities of all subjects are as follows: CD-H vs. CD-L (p = 0.02), CD-H vs. CD-I (p = 0.00), CD-I vs. CD-L (p = 0.00).
References

Holt, C.A. and Laury, S.K., 2002. Risk Aversion and Incentive Effects. American Economic Review, 92 (5), 1644–1655.
Innes, R., 1999. Remediation and Self-Reporting in Optimal Law Enforcement. Journal of Public Economics, 72 (3), 379–393.
Innes, R., 2001a. Violator Avoidance Activities and Self-Reporting in Optimal Law Enforcement. Journal of Law, Economics, and Organization, 17 (1), 239–256.
Innes, R., 2001b. Self Enforcement of Environmental Law. In: A. Heyes, ed. The Law and Economics of the Environment. Cheltenham: Edward Elgar, pp. 150–184.
Isaac, R.M. and James, D., 2000. Just Who Are You Calling Risk Averse? Journal of Risk and Uncertainty, 20 (2), 177–187.
Kaplow, L. and Shavell, S., 1994. Optimal Law Enforcement with Self-Reporting of Behavior. Journal of Political Economy, 102 (3), 583–606.
Malik, A., 1993. Self-Reporting and the Design of Policies for Regulating Stochastic Pollution. Journal of Environmental Economics and Management, 24 (3), 241–257.
Mishra, B.K., Newman, D.P., and Stinson, C.H., 1997. Environmental Regulations and Incentives for Compliance Audits. Journal of Accounting and Public Policy, 16 (2), 187–214.
Pfaff, A. and Sanchirico, C.W., 2000. Environmental Self-Auditing: Setting the Proper Incentives for Discovery and Correction of Environmental Harm. Journal of Law, Economics and Organization, 16 (1), 189–208.
Pfaff, A. and Sanchirico, C.W., 2004. Big Field, Small Potatoes: An Empirical Assessment of EPA’s Self-Audit Policy. Journal of Policy Analysis and Management, 23 (3), 415–432.
Stafford, S.L., 2005. Does Self-Policing Help the Environment? EPA’s Audit Policy and Hazardous Waste Compliance. Vermont Journal of Environmental Law, 6. Online, available at: vjel.org/articles/articles/Stafford11FIN.htm (accessed 14 September 2007).
US Environmental Protection Agency, 2000. Incentive for Self-Policing, Discovery, Disclosure, Correction and Prevention of Violations. Federal Register, 65 (70), 19618–19627. Online, available at: epa.gov/compliance/incentives/auditing/auditpolicy.html (accessed 14 September 2007).
15 Congestion pricing and welfare
An entry experiment

Lisa R. Anderson, Charles A. Holt and David Reiley
Introduction

One of the most persistent problems facing cities is freeway congestion. Despite the billions of dollars spent every year to add capacity, the number of commuters appears to be growing at an even faster rate. In some cases, travel on city streets is an attractive alternative. Many commuters face the daily dilemma of taking a predictable, but slower, route or risking hours of gridlock on a potentially faster freeway. Highway congestion was once just a problem for large cities like Los Angeles and Washington, DC, but it is increasingly affecting smaller metropolitan areas. In addition to the political pressure traffic puts on city planners and other elected officials, congestion imposes huge welfare costs on commuters. Instead of providing innovative solutions to this problem, technological advances have spawned new areas where congestion must be managed, like cell phones and the internet.1

We study the problem of congestion in the context of a binary choice game. Subjects must choose between a “safe” route with a fixed payoff and a “risky” route for which the payoff depends on the number of other users. When subjects must simultaneously choose between the safe and risky options, there is significant congestion, resulting in large welfare losses, even when subjects make the same decision for as many as 60 rounds. We test the effectiveness of a user fee (i.e. a toll) in this environment and find that it is effective at reducing congestion. However, the simultaneous nature of the decision still results in a high variance in the number of entrants, which is costly from a welfare perspective. We also test information provision as a policy option and find that it reduces the number of entrants and the variance.
Ducks and “magic”

Yogi Berra once remarked: “Nobody goes there anymore. It’s too crowded.” The humorous contradiction in this comment raises the issue of how individuals actually respond to congestion. The solution is suggested by a famous animal foraging experiment, reported by Harper (1982), in which two people stood on
opposite banks of a duck pond in the Cambridge University Botanical Garden and began throwing out five-gram bread balls at fixed intervals. The payout rate was twice as high on one bank (every ten seconds instead of every 20 seconds). The flock of ducks sorted themselves to equalize expected payoffs, as measured in grams per minute. Moreover, a change in the interval times resulted in a new equilibrium within about 90 seconds, which is less time than it would take for most ducks to obtain a single bread ball. As Paul Glimcher (2002) notes, the ducks were in constant motion, with some switching back and forth, even after equilibrium was reached. This stochastic element could offer an evolutionary advantage if it hastens the adjustment to changes in payoff conditions.

Entry and congestion problems arise often when choices are decentralized, as in the decisions of individuals concerning whether to congregate in a potentially crowded bar. This latter case is known in the literature as the “El Farol” dilemma, named after a popular bar in Santa Fe (Morgan et al., 1999). Psychologists and economists have conducted a series of similar binary choice experiments with congestion effects, beginning with Kahneman (1988), who observed the payoff equalization and remarked: “To a psychologist, it looks like magic.” There have been a number of subsequent experiments in which observed behavior tends to equate payoffs, e.g. Ochs (1990). This successful coordination has been explained in terms of adaptation and learning (Meyer et al., 1992; Erev and Rapoport, 1998).

However, some experiments have produced too much entry into the more congestion-prone activity. For example, Fischbacher and Thöni (forthcoming) conducted an experiment in which each entrant essentially gets a single lottery ticket with an equal chance of winning a money prize, so the expected payoff for entry is a decreasing function of the number of entrants. There was excess entry, which was more severe with large numbers of potential entrants. Camerer and Lovallo (1999) conclude that entry can be affected by overconfidence, since they observe over-entry when post-entry payoffs depend on a skill-based trivia competition, but not otherwise. Goeree and Holt (2005) provide a unified treatment of some (but not all) of these disparate results, using the argument that exogenous random noise in behavior may tend to pull entry rates towards one half, which would result in over-entry when the theoretical prediction is less than half and under-entry otherwise. This systematic (“inverse S”) pattern of over- and under-entry is reported by Sundali et al. (1995).

To summarize, the general result is that the amounts of over-entry and under-entry are small, and that theoretical predictions are fairly accurate, as long as they are not extreme. The experiment reported in this chapter will focus on the welfare consequences of congestion, and on factors that may increase efficiency. In other words, the focus is on how to improve the lives of the ducks.
A stylized model of congestion

Consider a group of N commuters who must choose between a slow reliable route and a faster, but potentially congested freeway, bridge, or tunnel. Commuting time
is fixed on the safe route and is an increasing function of traffic on the risky route. Suppose that the average payoff for an entrant is decreasing in the number of entrants: A − Bx, where x is the number of entrants and A and B are positive parameters. The payoff from taking the reliable route is C, which represents the opportunity cost for an entrant. With free entry, average payoffs are equalized if A − Bx = C, or equivalently, if x = (A − C)/B. This equal payoff equilibrium outcome is not socially optimal, since entrants do not consider the effects of their own entry decisions on the other entrants. To see this, note that the total payoff with x entrants and N − x non-entrants is: x(A − Bx) + (N − x)C, which is maximized when the marginal social value equals the marginal social cost: A − 2Bx = C, or when x = (A − C)/2B. It follows from these calculations that the optimal rate of entry is half of the equilibrium entry rate in this linear model.

Consider an example in which the payoff for the safe route is $0.50 and the payoff for the risky route is $4.50 minus $0.50 times the number of entrants (C = 0.50, A = 4.50, and B = 0.50). Table 15.1 shows entry payoffs for the risky route with a total of 12 commuters.

Table 15.1 Payoff for the risky route

Number of entrants           1     2      3      4      5      6      7     8     9     10     11      12
Average payoff per entrant   4.00  3.50   3.00   2.50   2.00   1.50   1.00  0.50  0.00  –0.50  –1.00   –1.50
Total earnings for all       9.50  12.00  13.50  14.00  13.50  12.00  9.50  6.00  1.50  –4.00  –10.50  –18.00

Notice that the payoff for the risky route is equal to the payoff for the safe route (at $0.50) when two-thirds of the commuters enter the risky route. Hence, the free entry equilibrium prediction is for eight of the 12 commuters to take the risky route, as can be verified by substituting the payoff parameters into the formula for equilibrium entry derived earlier.

Figure 15.1 shows the locations of the free entry equilibrium and socially optimal entry levels. The marginal (private or social) cost is $0.50, as shown by the horizontal dotted line. This marginal cost is the payoff from taking the safe route, and it equals the individual payoff from entry (“average payoff”) when there are eight entrants, which constitutes an equilibrium. Individual entrants do not consider the cost of entry on other users of the risky route, so to them, the marginal private benefit from entry is just the average payoff, shown by the solid line in the figure. But entry imposes costs on others, so the marginal social benefit of entry (shown by the dashed line) is below the average payoff line. The marginal social benefit line is steeper than the average payoff line because, as the number of entrants increases, each additional entrant causes the value of entry to fall by $0.50 for a larger number of people. The socially optimal number of entrants occurs where marginal social
benefit is equal to the marginal private cost, at four entrants in this example. Since the marginal private benefit of $2.50 exceeds the marginal social benefit of $0.50 by $2 at this point, an entry fee of $2 corrects the externality. In the figure, the effect of a $2 fee would be to shift the average payoff line down by $2 in a parallel manner, so that the intersection with the marginal cost line occurs at the optimal entry level of four.

Figure 15.1 Benefits and costs of the risky route (average payoff, marginal cost, and marginal social benefit plotted against the number of entrants, 0–12, with the equilibrium at eight entrants and the optimum at four).

The experiment to be discussed in the next part of the chapter will evaluate the effects of both exogenous and endogenously determined entry fees.

The equal average payoff equilibrium that results from free entry is closely related to the notion of a Nash equilibrium. To see this, note that there is an asymmetric Nash equilibrium in which exactly eight people enter, since it follows from Table 15.1 that a ninth entrant would earn zero, which is less than the $0.50 payoff from staying out. Conversely, with eight entrants, each earns $0.50, so none can be better off by taking the safe route. There is, however, another Nash equilibrium with seven entrants, each earning $1, since a non-entrant who attempts to enter will drive the number of entrants up to eight and hence will only earn $0.50. (In the free entry “competitive” approach, this non-entrant would enter anyway, not realizing that the act of entry will reduce the payoffs from entry.)

Alternatively, consider a symmetric Nash equilibrium in mixed strategies, with the probability of entry denoted by p. This probability must be set to ensure that if the other 11 people enter with probability p, then the expected entry payoff for the remaining person exactly equals the exit payoff of $0.50. This person’s expected entry payoff can be calculated as a function of p using the formula for the density of a binomial distribution with N = 11. It is straightforward to show that the expected entry payoff is $0.50 when p is 7/11, which is approximately 0.64. The difference between this number and the two-thirds entry rate that equalizes expected payoffs is due to the fact that the number of entrants is finite. To see the intuition, think of an individual player who is
considering entry or not. To be willing to randomize, the person must have the same expected payoff from entry (with probability 1) and exit. The player would be indifferent if exactly eight entrants are expected (that person and seven others). In order to get seven entrants out of the 11 others with a binomial distribution, the probability of entry must be 7/11. Another perspective is to think about why randomizing with probability 2/3 is not an equilibrium. If the other 11 players randomize with probability 2/3, then it turns out that the remaining player would prefer not to enter. This is because a given player faces an expected 2/3(11) = 7.33 other entrants, so for that player to enter would create total expected entry of 8.33, which is more than the entry level that equates the payoffs from entry and exit. Indeed, with 11 other players each entering with probability 2/3, it turns out that an entrant earns an expected payoff of $0.33 rather than the $0.50 that could be earned from exit. In the symmetric mixed-strategy equilibrium with all 12 players choosing entry with probability 7/11, each player earns an expected payoff of $0.50, exactly the same amount as the exit payoff.

To summarize, a Nash equilibrium in mixed strategies with risk-neutral players is for entry to occur with probability 0.64, and a “free-entry” equilibrium that equates expected payoffs involves an entry rate of 0.67. For large numbers of players, these two approaches would be equivalent, and for the parameters used in the experiment, the predictions are quite close. Therefore, we will use the free entry prediction of 2/3 as the prediction, except as noted below.
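Because the average payoff is linear, these benchmarks are easy to reproduce. The following is a small illustration of the model in our own code, with the parameters taken from Table 15.1 (this is not the authors’ software):

```python
# Free-entry, socially optimal, and mixed-strategy benchmarks for the
# linear congestion model (a sketch; A, B, C, N from the text).
A, B, C, N = 4.50, 0.50, 0.50, 12

def avg_payoff(x):          # payoff per entrant with x entrants
    return A - B * x

def total_earnings(x):      # entrants plus (N - x) safe-route payoffs
    return x * avg_payoff(x) + (N - x) * C

free_entry = (A - C) / B        # 8: average entry payoff equals C
optimum = (A - C) / (2 * B)     # 4: marginal social benefit equals C

# Mixed-strategy Nash: entering against 11 others who each enter with
# probability p yields A - B*(1 + 11p) in expectation; setting this
# equal to C gives p = 7/11.
p_nash = (A - B - C) / (11 * B)

print(free_entry, optimum, round(p_nash, 3))   # 8.0 4.0 0.636
print(total_earnings(8), total_earnings(4))    # 6.0 14.0
```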
A congestion experiment

Subjects were recruited from undergraduate classes at the College of William and Mary and at the University of Virginia. There were 12 sessions with 12 participants in each. The experiment was conducted using the Market Entry program on the Veconlab website (online, available at: veconlab.econ.virginia.edu/admin.htm). In each round, subjects faced a binary choice to enter the market (i.e. the risky route) or not. As described above, subjects earned a sure $0.50 payoff in each round they did not enter. The payoff for entry in a given round was determined by the total number of entrants in that round according to the following formula: $4.50 − 0.50x, where x denotes the total number of entrants. (The only exception was in the first session, where the payoffs were all doubled, since the session involved only 20 rounds.) Sessions lasted about an hour and the average person earned about $0.50 per round over 30–60 rounds, depending on the treatment, plus a $6 show-up payment. The treatment parameters for all sessions and the resulting entry rates are listed in Table 15.a.1 in the Appendix to this chapter. The data for all sessions are online, available at: people.virginia.edu/~cah2k/data/.

Figure 15.2 shows results from a session in which subjects made this entry decision for 60 rounds. While the average entry rate is close to the prediction of 2/3, there is significant variation from round to round. Even with 60 rounds
of play, the noise does not subside. Although this was the longest session we ran, the amount of variation shown here is typical of the sessions with shorter durations.

Figure 15.2 An entry game session with 60 rounds (me070804): the entry rate by round, plotted against the equilibrium rate of 2/3.

As mentioned in the previous section, there is a symmetric Nash equilibrium for this game, in which each person enters with a probability of 7/11, or 0.64. Using a binomial distribution (p = 0.64 and N = 12), the probabilities associated with each number of entrants can be calculated, as shown by the gray bars in Figure 15.3, which show a mode at eight entrants. The frequencies of the actual numbers of entrants for the 60 round session (in Figure 15.2) are represented by the black bars in Figure 15.3, which indicates that outcome variability is about what would be expected.

As noted above, free entry yields inefficient outcomes since entrants do not take the social cost of over-entry into account. With a linear average payoff line, the total payoff will be quadratic and concave, and variance will reduce average payoffs. This effect is illustrated in the bottom row of Table 15.1. At the equilibrium level of eight entrants, the earnings are $0.50 for each person, whether or not they enter, so total earnings are $6. Now consider how social welfare changes with some variance in the entry rate. If the number of entrants is seven in one round and nine in the next, the average entry rate is consistent with the theoretical prediction of eight. However, the average total earnings for these two periods is ($9.50 + $1.50)/2 = $5.50. As the variance grows, the welfare loss grows at an increasing rate. For example, with entry at six in one period and ten in the next period, the average of the two earnings amounts is ($12.00 − $4.00)/2 = $4.

Result 1: There is considerable variation in the entry level, even over long periods of play, which results in large efficiency losses.
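The concavity argument behind Result 1 can be seen in a few lines of code (our sketch, using the Table 15.1 parameters):

```python
# Mean-preserving spreads around the equilibrium of eight entrants lower
# average welfare because total earnings are concave in x (our sketch).
def total_earnings(x, A=4.50, B=0.50, C=0.50, N=12):
    return x * (A - B * x) + (N - x) * C

for lo, hi in [(8, 8), (7, 9), (6, 10)]:
    print((lo, hi), (total_earnings(lo) + total_earnings(hi)) / 2)
# -> 6.0, 5.5, 4.0: the welfare loss grows at an increasing rate.
```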
Figure 15.3 Predicted and observed distributions of entry outcomes for a session with 60 rounds (me070804): frequency of each number of entrants (0–12), Nash prediction versus data.
Congestion tolls

A common policy approach to congestion is to tax freeway use via a toll. Figure 15.4 shows results from a session with the optimal user fee of $2 per entrant. Note from Table 15.1 that the private benefit from entry is $2.50 at the socially optimal entry level of four. Imposing this $2 cost on entry reduces the private benefit to $0.50, thus moving the free entry equilibrium prediction to the optimal level. Entry rates quickly fell when the user fee was imposed; the average entry rate was 34 percent with the fee.

Figure 15.4 Entry game with $2 entry fee (me062904): entry rates by round with and without the fee, relative to the equilibrium prediction.

Overall, the optimal entry fee was imposed in parts of six sessions. In two of the sessions, the revenue collected from the fee was split equally between the 12
subjects. In the other four sessions, the entry fee revenue was not rebated to the subjects. The success of the entry fee did not depend on whether or not it was rebated to subjects. In the two sessions with the rebate, the average entry rates were 33 percent and 38 percent, and in the sessions without the rebate, the average entry rates were 34 percent, 35 percent, 35 percent, and 35 percent. Notice that the rebate may reduce the variance around the optimal rate, but there is still considerable noise in the data, and social welfare is not maximized in these sessions. The overall effect of imposing an entry fee, with or without rebate, is substantial and clear; all of the entry rates listed above with the fee are below the rates for the sessions with no-fee treatments: 69 percent, 65 percent, 63 percent, 66 percent, 68 percent, 63 percent, and 62 percent. There is no overlap in these entry rates by treatment, so the result would be highly significant on the basis of standard non-parametric tests.

Result 2: With an optimal user fee, entry is reduced to the socially optimal level on average, but there is still noise.
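In the linear model, the $2 toll is both welfare-maximizing and revenue-maximizing, which can be checked directly (our illustration, not the authors’ code):

```python
# Free-entry response to an entry fee F, and the revenue-maximizing fee,
# in the linear model with A = 4.50, B = 0.50, C = 0.50 (our sketch).
A, B, C = 4.50, 0.50, 0.50

def entrants(F):
    # Free entry equates the net entry payoff A - B*x - F with C.
    return (A - C - F) / B

def revenue(F):
    return F * entrants(F)

fees = [0.25 * k for k in range(17)]   # $0.00, $0.25, ..., $4.00
best_fee = max(fees, key=revenue)
print(best_fee, entrants(best_fee))    # -> 2.0 4.0
# The revenue-maximizing fee of $2 induces the socially optimal four
# entrants, anticipating the voting result later in the chapter.
```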
Information and coordination

Despite the success of user fees at moving usage towards the socially optimal level, the variance in entry still persists in these sessions. If entry decisions are not made simultaneously, then coordination may be facilitated by improved information about current conditions. Much like rush hour traffic reports, we made information about prior entry available to subjects as they were making decisions. Specifically, the number of entrants at any given point in time was displayed on all of the computer screens. People were allowed to enter in any order.

Figure 15.5 shows results from a typical session with endogenous entry order and the provision of prior entry information. The combination of the entry fee and the information in the last half of the session resulted in the socially optimal entry rate in 16 of 20 rounds of play. In most cases, over-entry was the result of two players clicking the “enter” button at precisely the same moment.

Even without an entry fee, the provision of prior entry information tended to reduce the variance of entry rates. There were seven sessions that began with ten or more rounds of a “no-view” treatment, and five sessions that began with ten or more rounds of a “view” treatment that posted the number of prior entrants on each screen update. There was considerable variance in all of the no-view treatments, and the pattern with the view treatment was basically a flat line with an occasional “blip” as seen in Figure 15.5. The only exception was in Session 9, where only four of the first ten entry rates were at the same level (0.67), but even in this case, the overall variance of entry rates was relatively low. The variances for the first ten rounds of each of the 12 sessions are shown in Table 15.2. All of the variances are less than 0.01 for the view treatments, and are greater than 0.01 for the no-view treatments. This difference is significant at the 0.01 level using standard non-parametric tests.
Figure 15.5 Entry game with information about other entrants (me071504): entry rates by round with no entry fee and information, and with a $2 entry fee and information, relative to the equilibrium.
Table 15.2 Variances of entry rates in the first ten rounds, by view treatment

Session     1      2      3      4      5      6      7      8      9      10     11     12
Variance    0.022  0.035  0.042  0.017  0.011  0.031  0.001  0.004  0.006  0.001  0.002  0.046
Treatment   No     No     No     No     No     No     View   View   View   View   View   No
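The “standard non-parametric test” can be reproduced from the table; here is one way to run it (our sketch; it assumes SciPy is available, which the chapter does not mention):

```python
# Rank-based comparison of first-ten-round variances, view vs. no-view
# (data transcribed from Table 15.2; the SciPy dependency is our assumption).
from scipy.stats import mannwhitneyu

no_view = [0.022, 0.035, 0.042, 0.017, 0.011, 0.031, 0.046]  # sessions 1-6, 12
view = [0.001, 0.004, 0.006, 0.001, 0.002]                   # sessions 7-11

stat, p = mannwhitneyu(no_view, view, alternative="greater")
print(stat, p)  # the two samples do not overlap, so p is well below 0.01
```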
Result 3: Information about prior entrants reduces noise, which increases welfare. When combined with the optimal user fee, social welfare is maximized in most rounds.
Voting and endogenous entry fees
Consider the effect of an entry fee, F, that is paid by each entrant. One issue is whether the fee setter has an incentive to set an optimal fee. Let the total earnings of entrants, the "surplus," be denoted by S(x), with S″(x) < 0. When there are x entrants, the average earnings per entrant are given by S(x)/x. (If the surplus is a quadratic concave function, this yields the linear average payoff model considered previously in this chapter.) The total earnings of the group as a whole are represented by S(x) + (N − x)C, which is maximized by equating marginal surplus to marginal cost: S′(x) = C. In contrast, the free entry equilibrium that equates average payoffs from entry and exit is determined by the equation S(x)/x = C + F. Multiply both sides of this equation by x to obtain an expression for the total entry fee revenue: xF = S(x) − Cx, which is maximized when S′(x) = C, i.e. when the marginal value of the surplus equals the marginal cost. Thus the revenue-maximizing fee under free entry is the efficient fee that
maximizes total earnings for this model. As noted previously, the optimal fee for the parameters used in the experiment is $2, which internalizes the externality at the optimal level of entry. One way to provide subjects in the experiments with the incentive to adopt an optimal entry fee is to split the fee revenues equally, since the fee that maximizes total fee revenue will also maximize the 1/N share of this revenue. Figure 15.6 shows results from a session in which subjects were allowed to vote on an entry fee, with all fee revenue divided equally among participants, whether or not they entered. Voting sessions started with ten rounds of decision making with no fee. At the end of those ten rounds, one subject was randomly chosen to be the "chair" and the following instructions were read aloud:
Now everybody should come to the front of the room, and we will have a meeting to discuss whether or not to require people who enter the market to pay an entry fee, and if so, how much the fee should be. All fees collected will be totaled and divided equally among the 12 participants, regardless of whether or not they entered. To facilitate this discussion, we will use a random device (throw of a die) to choose a person to chair the meeting. This person will call on people to speak, and then when someone makes a motion that is seconded, the chair will count the votes. The chair may vote to break a tie. Once the fee is selected, it will be entered into the computer and will be in effect for the next 10 rounds, after which we may meet again to decide on a fee for the 10 rounds that follow. You are free to discuss any aspect of the process, except that you cannot talk about who enters and who does not. Let me stress two things: all fees collected get divided up equally among all participants, those who entered and those who did not.
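A small sketch makes the fee logic concrete. We assume the linear average-payoff form suggested by the parameters A, B, and C reported in Table 15.a.1 (x entrants each earn A − Bx; non-entrants earn C); the functional form is our reading of the design rather than a formula quoted in the chapter:

```python
# Entry-game benchmarks under an assumed linear average-payoff specification.
# (A, B, C) = (4.5, 0.5, 0.5) and N = 12 are taken from Table 15.a.1.
A, B, C, N = 4.5, 0.5, 0.5, 12

x_free = (A - C) / B       # free entry: A - B*x = C            -> 8 entrants (rate 0.67)
x_opt = (A - C) / (2 * B)  # maximize x*(A - B*x) + (N - x)*C   -> 4 entrants (rate 0.33)
fee = A - B * x_opt - C    # fee that decentralizes the optimum -> 2.0, the $2 fee

# The revenue argument in the text: fee revenue x*(A - B*x - C) also peaks
# at x_opt, so a revenue-maximizing fee setter chooses the efficient fee.
revenue = {x: x * (A - B * x - C) for x in range(N + 1)}
assert max(revenue, key=revenue.get) == x_opt
```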
[Figure 15.6 Entry game with voting on entry fee (me063004). The figure plots the entry rate (0 to 1) against the round (0 to 40) for three fee regimes: no entry fee, entry fee = $1, and entry fee = $2.]
The chair presided over group discussions of the fee. Anyone could propose a fee and call for a vote. Majority rule determined whether the proposed fee would be enacted for ten rounds, followed by another vote on an entry fee for the next ten rounds. In this particular session, an entry fee of $1 was proposed and passed with very little discussion beforehand. In a second meeting following round 20, someone proposed an entry fee of $2, but it received only four votes. A proposal of $0.50 also failed to pass, with only three votes. A motion to keep the $1 fee passed with a majority of votes. Subsequently, fees of $3 and $1.50 were proposed and rejected. Finally, the $2 fee was re-proposed and passed with seven of 12 votes. In another session, the subjects started with a $1 fee and adjusted it to $2 during the second round of voting. However, they lowered it to $1.75 in the third round of voting and to $1.60 in the fourth round of voting. In the last round of voting they increased the fee to $1.80.
Result 4: Subjects have some success at finding the optimal user fee with discussion and voting.
Summary
We present results from a binary choice experiment based on a stylized model of congestion. On average, entry behavior is approximately at a level that equalizes expected payoffs, but these near-equilibrium entry rates are inefficiently high. Moreover, the variability of entry from round to round introduces another source of inefficiency. By charging the optimal entry fee, outcomes move closer to the socially optimal level, but there is still some under- and over-entry. The combination of the optimal entry fee and information about current entrants moves behavior very close to the socially optimal outcome. The linear congestion function used in this experiment is, for some purposes, a little too forgiving: in actual traffic, small increases in usage often have "snowball effects" that increase congestion dramatically. An interesting extension would be to use congestion functions with nonlinear and stochastic elements. This modification would tend to add outcome variability, even in settings with many more potential entrants. Also, the information-based and fee-based allocation mechanisms implemented in this experiment would have an additional efficiency-enhancing role if individuals differed in their values for lowered congestion and faster commutes, as shown by Plott (1983).
Acknowledgments We gratefully acknowledge financial support from the National Science Foundation (SBR-0094800) and the Bankard Fund at the University of Virginia. We wish to thank Angela M. Smith for research assistance.
Appendix

Table 15.a.1 Sessions, treatments, and data averages

Session 1, me062304, UVA (A, B, C = 9.0, 1.0, 1.0; N = 12)
  Rounds 1–10: entry fee 0.00 (share 0), no view, no voting; predicted entry rate 0.67, average 0.69
  Rounds 11–20: entry fee 4.00 (share 1/12), no view, no voting; predicted 0.33, average 0.38

Session 2, me062404, UVA (A, B, C = 4.5, 0.5, 0.5; N = 12)
  Rounds 1–20: entry fee 0.00 (0), no view, no voting; predicted 0.67, average 0.65
  Rounds 21–40: entry fee 2.00 (1/12), no view, no voting; predicted 0.33, average 0.35

Session 3, me062904, UVA (A, B, C = 4.5, 0.5, 0.5; N = 12)
  Rounds 1–20: entry fee 0.00 (0), no view, no voting; predicted 0.67, average 0.63
  Rounds 21–40: entry fee 2.00 (0), no view, no voting; predicted 0.33, average 0.34

Session 4, me063004, UVA (A, B, C = 4.5, 0.5, 0.5; N = 12)
  Rounds 1–10: entry fee 0.00 (0), no view, no voting; predicted 0.67, average 0.66
  Rounds 11–20: entry fee 1.00 (1/12), no view, voting; predicted 0.50, average 0.57
  Rounds 21–30: entry fee 1.00 (1/12), no view, voting; predicted 0.50, average 0.52
  Rounds 31–40: entry fee 2.00 (1/12), no view, voting; predicted 0.33, average 0.35

Session 5, me070104, UVA (A, B, C = 4.5, 0.5, 0.5; N = 12)
  Rounds 1–11: entry fee 0.00 (0), no view, no voting; predicted 0.67, average 0.68
  Rounds 12–20: entry fee 1.00 (1/12), no view, voting; predicted 0.50, average 0.57
  Rounds 21–30: entry fee 2.00 (1/12), no view, voting; predicted 0.33, average 0.34
  Rounds 31–40: entry fee 1.75 (1/12), no view, voting; predicted 0.38, average 0.37
  Rounds 41–50: entry fee 1.60 (1/12), no view, voting; predicted 0.40, average 0.42
  Rounds 51–60: entry fee 1.80 (1/12), no view, voting; predicted 0.37, average 0.38

Session 6, me070804, UVA (A, B, C = 4.5, 0.5, 0.5; N = 12)
  Rounds 1–60: entry fee 0.00 (0), no view, no voting; predicted 0.67, average 0.63

Session 7, me071404, W&M (A, B, C = 4.5, 0.5, 0.5; N = 12)
  Rounds 1–20: entry fee 0.00 (0), view, no voting; predicted 0.67, average 0.66

Session 8, me071504, W&M (A, B, C = 4.5, 0.5, 0.5; N = 12)
  Rounds 1–20: entry fee 0.00 (0), view, no voting; predicted 0.67, average 0.67
  Rounds 21–40: entry fee 2.00 (1/12), view, no voting; predicted 0.33, average 0.35

Session 9, me072104*, W&M (A, B, C = 4.5, 0.5, 0.5; N = 12)
  Rounds 1–20: entry fee 0.00 (0), view, no voting; predicted 0.67, average 0.67

Session 10, me072804, UVA (A, B, C = 4.5, 0.5, 0.5; N = 12)
  Rounds 1–20: entry fee 0.00 (0), view, no voting; predicted 0.67, average 0.67
  Rounds 21–40: entry fee 2.00 (0), view, no voting; predicted 0.33, average 0.35

Session 11, me111004, UVA (A, B, C = 4.5, 0.5, 0.5; N = 12)
  Rounds 1–15: entry fee 0.00 (0), view, no voting; predicted 0.67, average 0.68

Session 12, me111504, UVA (A, B, C = 4.5, 0.5, 0.5; N = 12)
  Rounds 1–20: entry fee 0.00 (0), no view, no voting; predicted 0.67, average 0.62
  Rounds 21–40: entry fee 2.00 (0), no view, no voting; predicted 0.33, average 0.35

Note
* The database name for this session was a temporary name, not me072104.
Note
1 Experiments motivated by internet congestion issues are reported in Chen et al. (2007) and Friedman and Huberman (2004).
References
Camerer, Colin, and D. Lovallo (1999) "Overconfidence and Excess Entry: An Experimental Approach," American Economic Review, 89 (March), 306–318.
Chen, Yan, Laura Razzolini, and Theodore L. Turocy (2007) "Congestion Allocation for Distributed Networks: An Experimental Study," Economic Theory, 33 (1), 121–143.
Erev, Ido, and Amnon Rapoport (1998) "Coordination, 'Magic,' and Reinforcement Learning in a Market Entry Game," Games and Economic Behavior, 23 (May), 146–175.
Fischbacher, Urs, and Christian Thöni (forthcoming) "Inefficient Excess Entry in an Experimental Winner-Take-All Market," Journal of Economic Behavior and Organization.
Friedman, Daniel, and Bernardo Huberman (2004) "Internet Congestion: A Laboratory Experiment," in Proceedings of the ACM SIGCOMM Workshop on Practice and Theory of Incentives in Networked Systems, Portland, Oregon, USA, 177–182.
Glimcher, Paul W. (2002) "Decisions, Decisions, Decisions: Choosing a Biological Science of Choice," Neuron, 36 (2), 223–232.
Goeree, Jacob K., and Charles A. Holt (2005) "An Explanation of Anomalous Behavior in Models of Political Participation," American Political Science Review, 99 (2), 201–213.
Harper, D. G. C. (1982) "Competitive Foraging in Mallards: 'Ideal Free' Ducks," Animal Behavior, 30 (2), 575–584.
Kahneman, Daniel (1988) "Experimental Economics: A Psychological Perspective," in Bounded Rational Behavior in Experimental Games and Markets, R. Tietz, W. Albers, and R. Selten, eds., New York: Springer-Verlag, 11–18.
Meyer, Donald J., John B. Van Huyck, Raymond C. Battalio, and Thomas R. Saving (1992) "History's Role in Coordinating Decentralized Allocation Decisions: Laboratory Evidence on Repeated Binary Allocation Games," Journal of Political Economy, 100 (April), 292–316.
Morgan, Dylan, Anne M. Bell, and William A. Sethares (1999) "An Experimental Study of the El Farol Problem," Discussion Paper, presented at the Summer ESA Meetings, Tucson.
Ochs, Jack (1990) "The Coordination Problem in Decentralized Markets: An Experiment," Quarterly Journal of Economics, 105 (May), 545–559.
Plott, Charles R. (1983) "Externalities and Corrective Policies in Experimental Markets," Economic Journal, 93 (369), 106–127.
Sundali, James A., Amnon Rapoport, and Darryl A. Seale (1995) "Coordination in Market Entry Games with Symmetric Players," Organizational Behavior and Human Decision Processes, 64 (2), 203–218.
16 Social preferences in the face of regulatory change J. Gregory George, Laurie T. Johnson, and E. Elisabet Rutström
Introduction
Economic analyses of regulatory solutions to social dilemmas focus more often on efficiency than on distributional consequences. In this chapter, we show that individuals hold both reciprocal and distributional preferences over alternative regulatory solutions to social dilemmas in a laboratory experiment. Increased attention to distributional consequences may therefore be called for. The importance of distributional concerns can be seen in several solutions to distributional conflicts that communities devise. In irrigation and fishery commons, for example, appropriation rules stipulating equal usage are common, and lotteries have been used to assign rights at some of the most productive fishery spots. When access rights in fishing commons are violated, the violations have sometimes been met with negative reciprocal acts resulting in property damage and, in some cases, even homicide, with serious distributional consequences (Schlager, 1994). Additionally, in irrigation systems, users are often given equal time slots to extract water (Tang, 1994). In other social dilemmas, such as pollution emissions, the externality has broader geographical reach and a larger number of involved parties, making it harder to reach voluntary equitable agreements. The allocation of pollution permits has generated controversy, leading to intensive lobbying by interested parties (Tietenberg and Johnstone, 2004). These controversies often revolve around distributional issues. At issue has been whether permit allocations should be made to incumbent firms according to some historic performance and needs, using so-called "grandfathering" allocation schemes (Harrison, 2004; Ellerman, 2004), versus having the government auction pollution permits. In the latter case, the rents are transferred from permit holders to the government conducting the auction. There is a growing experimental literature regarding what is often referred to as "social preferences," inspired in part by such distributional conflicts. In a number of experiments, subjects have been found to be motivated not only by self-interest, but also by a concern for payoffs to others (Charness and Rabin, 2002). For example, social preferences may be based on unconditional
distributional preferences (Bolton and Ockenfels, 2000; Fehr and Schmidt, 1999), or on reciprocal motivations (Hoffman et al., 1994; Rabin, 1993; Charness and Rabin, 2002).1 This chapter contributes to this literature by testing how distributional choices depend on whether participants have a common interactive history in a social dilemma. Distributional preferences are elicited in a non-interactive setting, but they may be affected by past interactions. Our design builds on that of Johnson, Rutström, and George (2006) (henceforth, JRG). JRG investigate social preferences in a laboratory experiment. They introduce a two-stage design, where the first stage generates a common interactive history and the second stage elicits distribution choices. This common history is one of a social dilemma, namely, a negative externality game. The distribution choices are elicited during a regulatory change that introduces solutions to this social dilemma. The regulation choices offered all guarantee that the social optimum is achieved, but they differ in their distributional consequences. Incentive compatibility is ensured by making distribution choices costly to the individual. The inspiration behind the design of the experiment is the proposal for a CO2 tradable permit market that includes a mechanism for redistribution of the scarcity rents, known as "Sky Trust."2 According to this proposal, carbon emission permits would be sold to companies and the income distributed to US citizens in the form of equal dividends. The US Congressional Budget Office (2000) evaluated such a redistribution mechanism as one of several ways in which the government can distribute revenues from permit sales. Policy proposals such as Sky Trust are based on the premise that preferences over the distribution of the scarcity rents of the licenses exist and that voters (and interest groups) care about the way in which the policy solution distributes these licenses. When the agents affected by the regulatory change have a history together, such as in the JRG experiments, preferences over the distributional consequences may reflect not just self-interest or unconditional distributional preferences, but also reciprocity. JRG control for the influence of self-interest and focus on the latter two as motivations for redistribution choices. Subjects are offered two choices: the default choice (which is free of charge) is to maintain the income distribution that was generated during the first stage but at the socially optimal aggregate income level, and the other choice is a costly alternative implementing an egalitarian income distribution at the same (socially optimal) income level. They find that the choices are consistent with some mixture of distributional and reciprocal preferences. The propensity to select the egalitarian option is increasing in the tendency for an individual to cooperate during the first stage of the game. JRG argue that this is consistent with some form of reciprocal preferences. Nevertheless, the observed choice pattern reveals that there may be additional reasons for choosing the egalitarian trust fund based on unconditional distributional preferences. Participants express some willingness to pay for redistribution independent of the performance of the group during stage one, which cannot be explained based on reciprocity alone. In this chapter we extend the JRG experimental design in a way that more clearly allows us to infer distributional and reciprocal preferences separately.
We introduce a new experimental design that rules out reciprocal motivations, keeping other aspects of the experiment the same, thus allowing us to identify the presence of unconditional distributional preferences. We then pool the data from these new experiments with that of JRG and estimate the additional motivation to redistribute that is introduced by reciprocal preferences based on the common history. We conclude that both reciprocal and distributional preferences play a role. In particular, unconditional distributional preferences are strongest among low income group members, implying that empathy towards others who are similarly positioned may be an ingredient in social preferences. This motivational pattern could not be inferred from the JRG data alone, which demonstrates the value of removing reciprocal motivations altogether. In addition, reciprocal preferences are stronger in groups that were less successful at cooperating. Lower earning individuals in particular are more likely to take from the rich and give to other poor when they have experienced a relatively uncooperative group, even though they have to pay to enable such redistributions.
Experimental design
The experiment in JRG consists of two stages, where in stage one subjects interact in a negative externality game in groups of six over ten rounds. Matchings into groups are anonymous and subjects remain in the same group for the entire experiment. All initial positions in the game are symmetric, but aggregate earnings become unequally distributed across both groups and individuals within a group due to variations in play during the ten rounds. In every period of the game each player chooses from an identical list of discrete activity levels that are increasing in private earnings and external costs. Table 16.1 displays the activities and the accompanying costs and earnings. The second stage consists of the introduction of two regulatory solutions to the externality problem, both constraining activities so that the social optimum is imposed. The solutions differ only in the implied distribution of income. Since the regulatory choice defines the activity choices of any subsequent interactive game, there is no need to play it out. Subjects are simply paid the earnings that are determined by the regulatory choice. A default solution, with future earnings distributed in proportion to each subject's aggregate earnings in stage one, can be implemented at no cost. This default solution models a situation where pollution permits are allocated according to "demonstrated needs," and where the rents of the permits are distributed in proportion to past profits. Subjects are offered the opportunity to exchange this default for an alternative solution, but only at a cost. This alternative implements an egalitarian earnings distribution, intended to model a public trust fund, where permits are auctioned out and the proceeds are redistributed to everyone in such a way that the future earnings distribution is perfectly egalitarian. The language in the instructions is context neutral with no reference to pollution, pollution permits, or public trust funds.
Table 16.1 Per period activity table for stage one (payoffs in cents)

Activity choice   Private earnings   Social cost   Social welfare*   SW per person*
a                 255                10            1170              195
b                 265                10            1230              205
c                 275                10            1290              215
d                 285                10            1350              225
e                 290                12            1308              218
f                 295                13            1302              217
g                 307                18            1194              199
h                 326                25            1056              176
i                 337                27            1050              175
j                 348                29            1044              174
k                 365                38            822               137
l                 378                49            504               84
m                 387                57            270               45
n                 390                59            216               36
o                 393                61            162               27
p                 394                62            132               22
q                 395                64            66                11
r                 396                66            0                 0
Notes Only the first three columns were shown to subjects in the experiment. * Social Welfare (SW) is defined as private earnings minus social cost for the agent’s activity choice, minus the sum of the social costs of the other five group participants. In this table SW is calculated based on every player making the same activity choice. The social optimum is activity choice “d.”
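The welfare columns of Table 16.1 can be reproduced from its first three columns under the convention stated in the table notes (all six group members make the same activity choice). A brief sketch:

```python
# Checking Table 16.1: SW per person = private earnings - 6 * social cost
# (own cost plus the five other members' costs), and the "Social welfare"
# column is six times that. Only a few rows are reproduced here.
activities = {  # choice: (private earnings, social cost), in cents
    "a": (255, 10), "d": (285, 10), "e": (290, 12),
    "k": (365, 38), "r": (396, 66),
}

for choice, (earnings, cost) in activities.items():
    sw_per_person = earnings - 6 * cost
    print(choice, 6 * sw_per_person, sw_per_person)

# Output matches the table (e.g. "d": 1350 and 225), confirming that "d"
# is the social optimum.
```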
The public choice mechanism used in the second stage is based on a three-step procedure. First, all subjects are asked to express whether they prefer the default or the alternative regulatory solution. Those who prefer the alternative are then asked to express their willingness to pay (WTP) in percentage terms of their stage one earnings. To make the WTP elicitation incentive compatible, the Becker–DeGroot–Marschak (henceforth, BDM) lottery mechanism (Becker et al., 1964) is used. Subjects submit a bid that is compared to a randomly selected value from a commonly known uniform distribution. Bids are restricted to be in percentages of stage one earnings, with a lowest bid of 1 percent and a highest bid of 100 percent.3 A bingo cage that contains 100 balls numbered one to 100 is used to select the random value. If the subject's bid is higher than (or equal to) this randomly drawn value, the bid becomes binding. Incentive compatibility is assured since the subject pays the randomly drawn value rather than the bid, removing any incentives to bid above or below his true value. In the third step, one randomly selected subject gets his stated preference implemented for his group if his bid equals or exceeds the drawn value, paying an amount equal to the drawn value. An essential feature of the design is that an individual's policy choice in stage two does not affect his or her own earnings in this stage, only the earnings of the other five group members. This is similar to Engelmann and Strobel (2004) and the multi-person experiments in Charness and
Rabin (2002). JRG elicit choices from all subjects and select one of the six group members at random at the end of the experiment as the one whose decision will be imposed on the group and the one who will pay for it. The "random dictator" receives a fixed payment equal to what each group member would get if the alternative solution were selected. This guarantees that the alternative solution creates an equal income distribution in stage two, and that earnings for the random dictator will be the same regardless of which solution he chooses. With this design, revealed preferences reflect neither a subject's concern over stage two group efficiency, nor own-income comparisons across the two solutions. Table 16.2a illustrates typical distribution choices, with numbers taken from two of the experimental groups.4 The first column shows the earnings position a subject attained in stage one, and the second contains his share of total stage one group income, from highest to lowest. The third column shows corresponding earnings under the default (grandfathering) scheme, and the last column shows earnings under the alternative (trust) scheme. Both of the schemes have the same aggregate group income for all groups ($135), therefore eliminating efficiency as a motivation for choice.5 This income is set equal to ten periods of optimal play in the stage one game, so that incentives in stages one and two are commensurate. The alternative solution gives the same earnings to all group members, whereas the default gives incomes that are proportional to the incomes earned during stage one.6
Participant’s share of stage one group earnings (%)
Default solution payments
Alternative solution payments
JRG, Group 3 Session 3 1 23.2 2 22.5 3 21.5 4 17.0 5 13.9 6 2.0 Total 100.0
$31.29 $30.32 $29.00 $22.89 $18.74 $2.76 $135.00
$22.50 $22.50 $22.50 $22.50 $22.50 $22.50 $135.00
JRG, Group 4 Session 9 1 21.3 2 21.2 3 15.2 4 14.4 5 14.3 6 13.6 Total 100.0
$28.76 $28.62 $20.52 $19.44 $19.31 $18.36 $135.00
$22.50 $22.50 $22.50 $22.50 $22.50 $22.50 $135.00
Table 16.2b Selected distributions for the one-stage experiment

GroupEff values of the 28 JRG groups: 11.41, 11.91, 16.51, 17.87, 18.50, 19.30, 20.22, 23.19, 23.82, 24.26, 25.38, 25.41, 26.93, 27.38, 29.11, 29.96, 30.29, 31.42, 32.22, 33.96, 34.11, 35.67, 35.75, 36.72, 36.90, 37.15, 37.89, 38.10. Fourteen of these distributions were applied in the new experiment ("applied = YES" in the original table).

Note
A complete table can be found in the excel file "distribution tables.xls" in the digital archive at ExLab (online, available at: exlab.bus.ucf.edu). GroupEff is calculated as the combined earnings by all six members of a group divided by the maximum possible $135.
Our new experiment is identical to the second stage of the JRG experiment, but it has no first stage game. Instead, the default, or "grandfathered," distributional choice is populated by randomly selected representative distributions from the JRG experiment, as shown in Table 16.2b. Each group in the new experiment received an initial income distribution that had been generated by some group during stage one of JRG. After subjects are randomly assigned to groups and income positions, the second stage is implemented exactly as it was in JRG. By eliminating the common history, all choices over the two alternatives should be based on distributional preferences and not on reciprocity, since there are no previous actions that can be rewarded or punished.
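For concreteness, a minimal simulation of the BDM step described earlier is sketched below; the function and variable names are illustrative rather than taken from the experimental software:

```python
# A minimal BDM simulation: bids and draws are whole percentage points
# (1-100) of stage one earnings; a bid at or above the draw is binding at a
# price equal to the draw, not the bid.
import random

def bdm_outcome(bid_percent, stage_one_earnings):
    draw = random.randint(1, 100)  # the bingo-cage draw
    if bid_percent >= draw:
        # The price paid never depends on the bid itself, so bidding one's
        # true value is optimal: bidding above it risks overpaying, bidding
        # below it risks losing a profitable trade.
        return True, (draw / 100) * stage_one_earnings
    return False, 0.0

binding, price = bdm_outcome(bid_percent=40, stage_one_earnings=6.20)
```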
Results
We combine the data from the new experiments reported here with the data from JRG. The data from JRG is based on 28 groups of six individuals, whereas the
new data is based on observations of 14 groups, also of six individuals. Thus our total sample size is 252. Table 16.3a summarizes results from the two experiments. The first three rows display the distribution choices made in the one stage experiment, followed by a summary of the behavior in stage one and the choices in stage two in the two stage experiment. The data from JRG (the two stage experiment) displays a great deal of variation in the cooperativeness of subjects during the interactive game. The earnings rank of a person within a group, Rank, measures how uncooperative the individual was in stage one. It is calculated as the percentage of the group income that the individual received, so it is normalized by group income, or efficiency. Rank varies from 1 percent to 24 percent in our sample with a mean of 17 percent. Perfect equality would have resulted in uniform ranks of 17 percent. We use two variables to capture the extent to which groups are cooperative in the JRG data, GroupEff and GroupVar. GroupEff measures the extent to which the group was able to achieve the goal of cooperation, namely, efficiency. It is measured as the percentage of the maximum possible group earnings ($135) that was achieved in stage one. For the data from the new experiment, this variable simply captures the randomly allocated group income level. GroupVar measures the dispersion, or inequality, of stage one earnings within a group (in dollars), showing, for the JRG data, how well group members coordinated in their action choices. It is measured as the variance in earnings within the group. Groups where only a few individuals cooperate are not considered to be as cooperative as groups where most or all members cooperate, even if they achieve the same efficiency level.

Table 16.3a Characteristics of subject pool from each experiment: characterization (N = 168), stage one and stage two results

One stage experiment    Mean    Median   Standard deviation   Minimum   Maximum
Propensity              0.48    0.00     0.50                 0.00      1.00
%WTP                    44.60   45.00    29.26                1.00      100.00
$WTP                    2.33    2.02     1.51                 0.05      5.43

Two stage experiment
Propensity              0.44    0.00     0.50                 0.00      1.00
%WTP*                   44.40   37.00    34.45                1.00      100.00
$WTP*                   2.11    1.49     1.84                 0.07      8.70
Rank (percentage)       16.70   17.20    4.40                 1.20      24.20
GroupEff                27.50   28.20    7.88                 11.40     38.10
GroupVar                2.80    1.50     4.22                 0.30      19.20
Rank-GrEff              4.60    4.50     1.80                 0.40      8.80
Rank-GrVar              46.60   22.40    82.70                3.10      465.00
Note * These statistics are calculated based only on those who actually selected the trust and therefore bid a non-zero amount.
Table 16.3b Characteristics of subject pool from each experiment: demographic questionnaire responses

                                         Macon State College    UCF (N = 96)    UCF (N = 84)
                                         (N = 72), two-stage    two-stage       one stage
Male                                     42%                    53%             55%
White                                    71%                    64%             73%
African American                         15%                    9%              7%
Hours worked per week                    27 hours               21 hours        13.5 hours
Hourly pay                               $15                    $9              $6
Highest expected education: Bachelor's   75%                    70%             61%
GPA                                      3.0                    2.3             2.9
Age                                      19                     21              28
Coordination is therefore an important property of cooperation. Controlling for GroupVar, we take GroupEff as a good measure of cooperation history. The same is not true in reverse, however. The coefficient on GroupVar can reflect not only a reaction to group cooperation history, but also the extent to which preferences are based on the degree of inequality in the default choice, since the unequal distribution of the default choice in stage two is identical to the final income distribution in stage one. Thus, GroupVar characterizes not only the degree of coordination in stage one, but also the extent to which the grandfathering allocation results in an unequal income distribution. GroupVar varies between 30 cents and $19.00, with a mean of $2.80.7 GroupEff varies between 11 percent and 38 percent with a mean of 28 percent. Efficiency is therefore quite low, resulting in stage one earnings that average only $6.20 per participant. The lowest income was $0.50 and the highest was $12.00, which can be compared to the maximum possible individual earnings of $22.50 in the social optimum. We also construct two interaction variables. Rank-GrEff is the interaction between Rank and GroupEff, and Rank-GrVar the interaction between Rank and GroupVar. Since the randomly allocated stage one earnings in the new experiments were selected to match those of the JRG experiment, the distributions of these variables are similar by design. Figure 16.1 shows the bid distributions for the alternative solution in the two experiments. The left panel shows data from JRG. This panel shows the bids of the subjects who expressed a preference for the alternative and its resulting redistribution (44 percent of the subjects). Their average WTP was $2.11, or 44.4 percent of stage one earnings. The data from the new experiment show very similar aggregate preferences for redistribution. A total of 48 percent of participants selected the alternative solution, with an average WTP of $2.33, or 44.6 percent of their randomly assigned allocation. Since the new experiment is designed to eliminate motivations for reciprocity, the fact that these aggregate numbers are so similar seems to indicate that there are no reciprocal preferences present.8
[Figure 16.1 Distributions of bids for the alternative solutions. Two histograms of willingness to pay (in dollars, 0 to 9), one for the two stage experiment and one for the one stage experiment, with the percentage of bids on the vertical axis. Data shown do not include $0.00 bids.]
Nevertheless, in the conditional regression analysis below we show that there are some significant and interesting differences in behavior across the two experiments. By combining the data from JRG with the new data and performing regression analysis we are able to say more about behavioral differences between the two experiments, and infer possible motivations behind the choice heterogeneity reported in JRG. Tables 16.3a and 16.3b report summary statistics of our explanatory variables and Table 16.4 reports our regression results. We employ a hurdle model specification where the first step is to regress the propensity to bid on a set of covariates using a probit specification, and the second step is to regress the actual bid, conditional on the choice of bidding, on the same set of covariates using a tobit specification.9 This specification is common in health economics, for example, where it is used to capture the idea that the factors that cause someone to seek medical care are distinct from the factors that cause the doctor and patient to decide how much to spend.10 The coefficients reported in Table 16.4 are marginal effects.11 We report coefficients on the interaction variables with the history treatment (the JRG data) as the marginal effect controlling for the same variable's effect in the no history treatment (the new data). The coefficient on the interaction variable is therefore to be interpreted as the difference in effect between the history and no history treatments.12 We estimate support for two measures of demand for redistribution: first, the propensity to select the alternative distribution, and second, the %WTP expressed for such choices.
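The two-step estimation can be sketched as follows; this is our reconstruction of the described procedure with illustrative variable names, not the authors' code. The probit step uses a standard library routine, while the censored (tobit-type) second step is written out directly:

```python
# Sketch of a hurdle model: probit for the propensity to bid, then a
# censored-normal regression for the percentage bid among bidders.
import numpy as np
import statsmodels.api as sm
from scipy.optimize import minimize
from scipy.stats import norm

def hurdle_model(X, bids):
    X = sm.add_constant(np.asarray(X, dtype=float))
    bids = np.asarray(bids, dtype=float)

    # Step 1: propensity to bid at all (marginal effects via .get_margeff()).
    probit = sm.Probit((bids > 0).astype(int), X).fit(disp=0)

    # Step 2: percentage WTP among bidders, censored at 1 and 100.
    y, Xb = bids[bids > 0], X[bids > 0]

    def negll(params):
        beta, sigma = params[:-1], np.exp(params[-1])  # sigma kept positive
        mu = Xb @ beta
        ll = np.where(y >= 100, norm.logcdf((mu - 100) / sigma),      # upper censoring
             np.where(y <= 1, norm.logcdf((1 - mu) / sigma),          # lower censoring
                      norm.logpdf((y - mu) / sigma) - np.log(sigma))) # interior
        return -ll.sum()

    tobit = minimize(negll, np.zeros(Xb.shape[1] + 1), method="BFGS")
    return probit, tobit
```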
Table 16.4 Regression results

Variable                         1 – Propensity       2 – %WTP
Rank                             –0.110** (0.016)     –9.06*** (0.001)
GroupEff                         –0.016 (0.552)       –4.63*** (0.007)
GroupVar                         –0.086** (0.019)     –1.33 (0.591)
Rank-GrEff                       0.0002 (0.913)       0.195* (0.090)
Rank-GrVar                       0.004** (0.025)      0.109 (0.451)
Rank*history treatment           –0.021 (0.385)       –1.01 (0.588)
GroupEff*history treatment       –0.044** (0.012)     –0.662 (0.548)
GroupVar*history treatment       0.126** (0.026)      2.89 (0.356)
Rank-GrEff*history treatment     0.0032*** (0.010)    8.11 (0.411)
Rank-GrVar*history treatment     –0.007** (0.020)     –0.051 (0.811)
Session2                         0.048 (0.661)        –7.15 (0.388)
Session3                         –0.071 (0.504)       –19.74** (0.026)
Session4                         0.096 (0.459)        –2.82 (0.784)
Session5                         –0.237* (0.076)      10.79 (0.479)
Inst                             –0.066 (0.525)       20.23*** (0.010)
Age                              –0.010 (0.134)       0.495 (0.443)
Male                             0.072 (0.312)        20.6*** (0.000)
White                            0.020 (0.825)        –5.65 (0.396)
African-American                 0.198 (0.122)        –30.8** (0.014)
Hours work per week              –0.002 (0.442)       0.390** (0.049)
Hourly pay                       0.004* (0.100)       0.292* (0.087)
Highest education                –0.113** (0.018)     –5.57 (0.144)
GPA                              –0.018 (0.453)       –4.74** (0.019)

Notes
p-values in parentheses. * 10% level, ** 5% level, *** 1% level. Hausman specification tests do not detect significant endogeneity due to stage one choice variables used as explanatory variables in the stage two regressions.
Social preferences and regulatory change 303 on reciprocal preferences. We also see from the positive coefficient on the interaction variable rank-GrEff*history treatment that this decrease in demand weakens as the earnings rank of the individual increases. Lower earning individuals are more strongly affected by the lack of cooperation in the group than are higher earning individuals. They are more likely to take from the rich and give to other poor when they have experienced a relatively uncooperative group. This behavior is consistent with a wish to reward individuals who acted cooperatively and to punish those that did not. These reciprocity effects due to the cooperativeness of the individual do not carry over to the % WTP measure, as shown by the insignificant effects on all variables interacted with history treatment. All the significant effects on the percentage WTP can be explained by distributional preferences alone. In regressions using the dollar WTP as the dependent variable, we similarly find no significant effects for any of these variables. The dispersion in the stage one earnings within a group, as measured by GroupVar, has an effect on the propensity for participants to redistribute, but not on their WTP. When motivated by reciprocity, demand for the trust increases as inequality increases (the coefficient on GroupVar*history treatment is +0.126), but in the absence of reciprocal motivations, such as in the no history treatment, demand for redistribution decreases with dispersion (the coefficient on GroupVar is −0.86). In alternative specifications we use the standard deviation in group income or the coefficient of variation and find the same result. Thus, we find no evidence that dispersion in group income per se is a motivation to redistribute.
Conclusions We find evidence consistent with both distributional and reciprocal preferences. The aggregate preferences expressed in the two experiments appear to be very similar, with almost the same average propensity to select the alternative solution and very similar average willingness to pay. From this it appears as if there are no reciprocal preferences. Nevertheless, our regression analysis shows that these aggregate numbers are misleading. Reciprocal preferences are indeed present and they are correlated with the cooperativeness of both individuals and groups. We find that distributional preferences depend on the position of the individual in the income ranking, such that lower ranked individuals are more inclined to demand redistribution than higher ranked individuals. This observation is consistent with empathetic preferences in favor of similarly positioned group members. We also find that reciprocal preferences depend on the relationship between the cooperativeness of the individual and the cooperativeness of the group. The willingness to reward and punish increases with the extent to which the group failed to cooperate, and this effect is strongest among cooperative group members. The two experiments discussed here were designed to identify social
preferences under conditions of changing regulations of social dilemmas. We find evidence of preferences in favor of redistribution that do not reflect self-interested earnings maximization. We caution that our results are based on a context-free experimental design, and that further experiments that test the external validity of our findings may be called for before policy implications are clear. Nevertheless, our findings should increase the interest in undertaking additional studies of distributional preferences in relation to regulatory change.
Acknowledgments
Johnson thanks the University of Denver Faculty Research Fund and the Political Economy Research Institute and the Development, Peacebuilding, and Environment Program at the University of Massachusetts. Rutström thanks the US National Science Foundation for research support under grants NSF/IIS 9817518 and NSF/POWRE 9973669. All supporting documentation can be found online, available at: exlab.bus.ucf.edu. We are grateful to Anabela Botelho, Ryan Brossette, Ligia Pinto, Bob Potter, and Mark Schneider for assistance in conducting the laboratory experiments.
Notes
1 The term "unconditional" refers to the fact that the behavior is unaffected by either evaluations of past interactions or expectations of future ones. Unconditional distributional preferences are therefore purely consequentialist.
2 US Sky Trust Inc. Online, available at: usskytrust.org/ (accessed 26 July 2007).
3 JRG restrict bids to be in percentages of stage one earnings in order to avoid both house money effects and incentive problems at the boundaries of the BDM value distributions. Bids are restricted to come out of stage one earnings so that they do not change the distribution of future earnings.
4 Subjects did not see distribution tables that looked exactly like those in Table 16.2a when they made their distribution decision. Instead, they were shown a table with the subject id number of each group member, the dollar earnings from stage one, and the dollar amounts of the two distribution choices, i.e. the last two columns of Table 16.2a. All subjects in the group saw the same distribution table. They could therefore infer the redistribution consequences within their groups for each of the other participants and relate this to the outcome from stage one. On the same screen we included text reminding them that if they were selected to make the choice, their own payoffs would be $22.50 no matter what they chose.
5 Observations in Engelmann and Strobel (2004) imply that preferences over efficiency can otherwise confound distributive preferences. Charness and Rabin (2002) also report many cases where subjects trade off efficiency and redistribution.
6 Column 3 is simply column 2 multiplied by $135.00.
7 The distribution of GroupVar is quite skewed. Of the 28 groups, 26 have a GroupVar of less than seven, with 21 of these 26 being less than two. The remaining two have values of 14 and 19. Because of the skewness in this variable we also run our regressions based on the standard deviation of the group income, as well as the coefficient of variation. The regression results are robust with respect to the use of these alternatives.
8 The propensity to select the alternative solution is not significantly different across the two experiments, using chi-square and Fisher exact tests; the p-values are 0.59 and 0.34, respectively. Wilcoxon Mann-Whitney non-parametric tests of differences in either the dollar amount WTP or the percentage WTP similarly do not show a significant difference, with p-values of 0.38 and 0.43, respectively.
9 For a discussion of hurdle models, see McDowell (2003).
10 See Coller et al. (2002) for an application.
11 Since the main explanatory variables are choice variables in stage one, JRG test for endogeneity but find none. We caution that the marginal effect for the interacted variables Rank-GrEff and Rank-GrVar cannot be interpreted as interaction effects in the sense of cross-partial derivatives since the model is nonlinear. See Norton et al. (2004) for a discussion of this issue. We separately estimated the interaction effect of Rank-GrEff and verify the qualitative interaction effects indicated by the coefficient estimate on the interacted variables.
12 We test alternative specifications with respect to measuring an individual's cooperativeness, measuring the dispersion in group income, and also the functional forms of the regression equations. In terms of the latter we included the use of censored normal regression for the dollar WTP instead of a tobit regression of the percentage WTP, and incorporating the square of the variables GroupEff and Rank, as well as the interaction variable between them. We also ran the main regression model without including the demographic controls as an additional test for endogeneity. The qualitative findings are robust with respect to all of these variations in specification.
13 We caution the reader not to interpret the coefficient on the interaction variables in the probit regression as marginal interaction effects since this is a nonlinear model (see Note 11). We verify separately that the marginal interaction effect is qualitatively the same, however. We are able to estimate the interaction effect in a more limited specification of the model using the inteff command in Stata.
References
Becker, G.M., DeGroot, M.H., and Marschak, J., 1964. Measuring Utility by a Single-Response Sequential Method. Behavioral Science, 9 (3), 226–232.
Bolton, G. and Ockenfels, A., 2000. ERC: A Theory of Equity, Reciprocity and Competition. American Economic Review, 90 (1), 166–193.
Charness, G. and Rabin, M., 2002. Understanding Social Preferences with Simple Tests. Quarterly Journal of Economics, 117 (3), 817–869.
Coller, M., Harrison, G.W., and McInnes, M.M., 2002. Evaluating the Tobacco Settlement: Are the Damage Awards Too Much or Not Enough? American Journal of Public Health, 92 (6), June, 984–989.
Ellerman, D.A., 2004. The US SO2 Cap-and-Trade Programme. In T. Tietenberg and N. Johnstone, eds. Tradable Permits: Policy Evaluation, Design and Reform. Paris: Organization for Economic Cooperation and Development. 77–106.
Engelmann, D. and Strobel, M., 2004. Inequality Aversion, Efficiency, and Maximin Preferences in Simple Distribution Experiments. American Economic Review, 94 (4), 857–869.
Fehr, E. and Schmidt, K.M., 1999. A Theory of Fairness, Competition, and Cooperation. Quarterly Journal of Economics, 114 (3), 817–868.
Harrison, D. Jr., 2004. Ex-Post Evaluation of the Reclaim Emissions Trading Programmes for the Los Angeles Air Basin. In T. Tietenberg and N. Johnstone, eds. Tradable Permits: Policy Evaluation, Design and Reform. Paris: Organization for Economic Cooperation and Development. 45–70.
Hoffman, E., McCabe, K., Shachat, K., and Smith, V.L., 1994. Preferences, Property Rights, and Anonymity in Bargaining Games. Games and Economic Behavior, 7 (3), 346–380.
Johnson, L.T., Rutström, E.E., and George, J.G., 2006. Income Distribution Preferences and Regulatory Change in Social Dilemmas. Journal of Economic Behavior and Organization, 61 (2), 181–198.
McDowell, A., 2003. From the Help Desk: Hurdle Models. Stata Journal, 3 (2), 178–184.
Norton, E.C., Wang, H., and Ai, C., 2004. Computing Interaction Effects and Standard Errors in Logit and Probit Models. Stata Journal, 4 (2), 103–166.
Rabin, M., 1993. Incorporating Fairness into Game Theory and Economics. American Economic Review, 83 (5), 1281–1302.
Schlager, E., 1994. Fishers' Institutional Responses to Common-Pool Resource Dilemmas. In E. Ostrom, R. Gardner, and J. Walker, eds. Rules, Games, and Common-Pool Resources. Ann Arbor, MI: University of Michigan Press. 247–265.
Tang, S.Y., 1994. Institutions and Performance in Irrigation Systems. In E. Ostrom, R. Gardner, and J. Walker, eds. Rules, Games, and Common-Pool Resources. Ann Arbor, MI: University of Michigan Press. 225–245.
Tietenberg, T. and Johnstone, N., 2004. Ex Post Evaluation of Tradable Permits: Methodological Issues and Literature Review. In T. Tietenberg and N. Johnstone, eds. Tradable Permits: Policy Evaluation, Design and Reform. Paris: Organization for Economic Cooperation and Development. 9–44.
US Congressional Budget Office, 2000. Who Gains and Who Pays Under Carbon-Allowance Trading: The Distributional Effects of Alternative Policy Designs. Unpublished Manuscript. Online, available at: cbo.gov/ftpdocs/21xx/doc2104/carbon.pdf (accessed 18 September 2007).
17 The effects of recommended play on compliance with ambient pollution instruments Robert J. Oxoby and John Spraggon
Introduction
Segerson (1988) has shown that ambient pollution instruments are theoretically able to induce individual non-point source polluters to reduce their emissions to the optimal level. However, recent evidence from the laboratory (e.g. Cochard et al., 2005; Alpízar et al., 2004; Poe et al., 2004; Spraggon, 2004b, 2002) casts doubt on the ability of these instruments to induce polluters to comply with a standard. In each of these studies, while the instruments are able to induce the group to the target outcome, the choices of participants are not individually optimal. As a result, there is significant inefficiency and inequality under these instruments. One potential explanation for the inability of these instruments to induce individuals to the socially optimal outcome is that subjects do not understand the decision making environment. For example, Oxoby and Spraggon (2005) have shown that, among participants who understand the concept of Nash equilibrium, these instruments induce individually and socially optimal decision making. With this in mind, we investigate making the decision environment and the corresponding incentives clearer to decision makers (i.e. recommended play): we explain the environment and the concept of "marginal decision making" carefully to participants. In explaining the environment, we hope participants will not only be able to identify their dominant strategies, but also realize the reward and punishment properties of the instrument. Other studies (e.g. Andreoni et al., 2003; Fehr and Gächter, 2000; Dickinson, 2001) have shown that providing subjects with the ability to punish or reward others results in more efficient outcomes. This ability to punish or reward other members of the group is implicit in the ambient pollution instrument. Individuals who choose to reduce their decision numbers below the individually optimal level are rewarding the behavior of others in their group, while those who choose numbers that exceed this level are punishing group members. To the extent that subjects recognize this feature of the instrument, we may observe more subjects choosing to punish the other members of their group. This would lead to less efficient and more inequitable outcomes. The environment investigated in this chapter is of particular interest because it differs from the standard public good environments so often investigated
experimentally (see Zelmer, 2003). In these standard public good environments, authors typically appeal to alternate preferences (e.g. preferences embodying payoff inequities, fairness, or reciprocity; see Charness and Rabin, 2002; Fehr and Schmidt, 1999) and decision error (Anderson et al., 1998; Ledyard, 1995) to explain non-Nash decision making. Indeed, ambient pollution instruments can be designed so that preferences incorporating altruism and reciprocity lead to Nash play. However, preferences embodying, say, relative payoff maximization may lead subjects to make choices that differ from Nash. As a result, the question posed here is "Does explaining the environment more carefully reduce errors in decision making and lead to more Nash play, or do these other preference explanations dominate when subjects better understand the decision environment?" While others have investigated the effects of recommended play on behavior, the work of Croson and Marks (2001) is most germane to our interests.1 In a threshold public good environment, Croson and Marks show that recommended play increases Nash play when agents are heterogeneous. Given our interest in non-point source pollution, our focus is somewhat different. It does not seem reasonable to expect that an environmental regulator would know the socially optimal level of emission for any given firm. However, what does seem reasonable is that the regulator could explain the instrument in detail to representatives from each polluting source. That is, the regulator could explain the form of the instrument: everyone is fined if the ambient level of pollution exceeds the target, and therefore everyone should reduce their emissions to the point where the marginal benefit of emitting one more unit is equal to the tax rate. The purpose of this chapter is to determine if this type of recommendation increases compliance with ambient pollution instruments. We proceed as follows: the next part of the chapter presents the experimental environment and relates it to the non-point source pollution problem. We then present our results. We delineate these results by analyzing the effects of enhanced instructions (i.e. recommended play) at the aggregate level, by participant type, and at the individual level. The results are mixed: enhanced instructions improve compliance under the tax instrument but not under the tax/subsidy instrument. Moreover, although there is more compliance, efficiency is not improved by much. The chapter concludes by discussing the policy implications of this result and suggests that these results may be due to seemingly minor weaknesses in the instruments from the point of view of standard theory.
Experimental design
The underlying structure of the experiment is based on the non-point source pollution problem (e.g. Segerson, 1988). Non-point sources emit pollution into the air or water in a diffuse manner, making it prohibitively costly to determine how much an individual source is emitting. In this environment, we assume that the regulator can measure the ambient level of pollution and knows the potential
sources of the pollution. Under an ambient pollution instrument, polluters are fined if the observed ambient level of emission exceeds a target level and are (potentially) subsidized if the ambient level of emissions is below the target. The tax and subsidy rates are chosen so that individual polluters choose the optimal level of emission, where the marginal benefit of one more unit of emission is equal to the marginal cost (the tax) they pay on that unit. Thus, the ambient pollution instrument implements the socially optimal outcome as a dominant strategy Nash equilibrium (Segerson, 1988). To investigate ambient pollution instruments in an experimental environment, subjects choose decision numbers that are analogous to emission levels.2 The higher a participant's decision number, the higher is her private payoff. There is also a group component to participants' payoffs, such that the higher the aggregate decision number (the sum of decision numbers within a group), the lower the group payoff. Subjects' private payoffs are presented in a table and the ambient pollution instrument is presented as a function. Subjects also have access to a calculator allowing them to determine their payoff from any feasible combination of their decision number and the aggregate decision number of those in their group. The private payoff function B_n is given by
Bn ( xn ) = 25 − 0.002( xnmax − xn ) 2
(17.1)
Where xn is subject n’s decision number and x max n is the subject n’s maximum decision number. Notice that private payoff is maximized when xn = x max n . The maximum decision number can be thought of as the unconstrained emission level in the non-point source pollution context. The quadratic payoff function was chosen for consistency with Nalbantian and Schotter (1997) and for mathematical simplicity. We investigate two ambient pollution instruments. Both instruments involve a tax if the aggregate decision number (analogous to the ambient level of pollution) exceeds the target. The instrument that we refer to as the tax/subsidy instrument also involves a subsidy when the aggregate decision number is below the target. The tax instrument is presented as ⎧0.3( X − 150) Tn ( X ) = ⎨ ⎩0
if X > 150, if X ≤ 150.
The tax/subsidy instrument is presented as ⎧0.3( X − 150) Tn ( X ) = ⎨ ⎩0.3( X − 150)
where N
X ≡ ∑ xn n =1
if X > 150, if X ≤ 150,
310
R.J. Oxoby and J. Spraggon
is the aggregate decision number (referred to as the group total).3 The tax/subsidy rate (0.3) was chosen as the emission damage rate for simplicity. This choice, coupled with the number of subjects per group (N = 4) and the form of the private payoff function (equation 17.1), determines the exogenous target (150). Thus, under the tax instrument, each individual maximizes ⎧0.3( X − 150) π n = 25.00 − 0.002( xnmax − xn ) 2 − ⎨ ⎩0
if X > 150, if X ≤ 150.
Under the tax/subsidy instrument, each individual maximizes

$$\pi_n = 25.00 - 0.002\,(x_n^{\max} - x_n)^2 - 0.3(X - 150).$$

Therefore, under the tax/subsidy instrument each individual's best response for any given $X$ is

$$x_n^* = x_n^{\max} - 75.$$

We utilize four-subject groups, each with two subjects having a maximum decision number of 100 and two subjects with a maximum decision number of 125. We refer to these different types as medium and large capacity subjects, respectively. Thus, under the tax/subsidy instrument medium capacity subjects should choose $x_n^* = 25$ while large capacity subjects should choose $x_n^* = 50$. Note that there is also a group optimal outcome under this instrument where all subjects choose $x_n = 0$ and the total payoff to everyone in the group is maximized.4

For the tax instrument, a subject's best response function is the same as for the tax/subsidy instrument if the sum of everyone else's decision numbers is greater than or equal to the target minus the subject's optimal decision ($x_n^{\max} - 75$). However, if the aggregate decision of the other group members is below this level, the subject should choose a decision number just large enough to ensure that the aggregate decision is equal to the target. Since all subjects face this same incentive, the Nash equilibrium (where everyone chooses $x_n^* = x_n^{\max} - 75$) is unique.

We refer to the outcome where subjects reduce their decision number from the maximum by 75 as socially optimal, as this is the solution to the social planner's problem (maximizing the individual benefits from emission minus the cost to society of the emission):

$$SP = \max_{(x_1, \ldots, x_4)} \left[ \sum_{n=1}^{4} B_n - 0.3 \sum_{n=1}^{4} x_n \right]. \qquad (17.2)$$
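The dominant strategy is easy to verify: setting the derivative of $\pi_n$ with respect to $x_n$ to zero gives $0.004(x_n^{\max} - x_n) = 0.3$, hence $x_n^* = x_n^{\max} - 75$ regardless of the others' choices. The following script (our own sketch, not part of the original study) confirms the best responses, the implied Nash group total of 150, and the 89 percent efficiency figure used as an example in the Results section below:

```python
# Sketch of the design's benchmarks, using equations (17.1)-(17.2).
X_MAX = [125, 125, 100, 100]   # two large and two medium capacity subjects
RATE, TARGET = 0.3, 150        # tax/subsidy rate and exogenous target

def private_payoff(x, x_max):
    """Private payoff B_n from equation (17.1)."""
    return 25.0 - 0.002 * (x_max - x) ** 2

def planner_value(xs):
    """Social planner's objective from equation (17.2)."""
    return sum(private_payoff(x, m) for x, m in zip(xs, X_MAX)) - RATE * sum(xs)

def best_response(x_max):
    """Under the tax/subsidy, only B_n(x_n) - 0.3*x_n depends on the own choice."""
    return max(range(x_max + 1), key=lambda x: private_payoff(x, x_max) - RATE * x)

nash = [best_response(m) for m in X_MAX]
print(nash, sum(nash))         # [50, 50, 25, 25]; group total 150 equals the target

# Efficiency example from the Results section: two large capacity subjects
# choose 75 and two medium capacity subjects choose 0 (group total still 150).
sp_opt = planner_value(nash)   # 10.0
# Each term of SP is concave, so the minimum lies at a corner of the choice box.
sp_min = min(planner_value([a, b, c, d])
             for a in (0, 125) for b in (0, 125)
             for c in (0, 100) for d in (0, 100))      # -35.0
efficiency = (planner_value([75, 75, 0, 0]) - sp_min) / (sp_opt - sp_min)
print(round(efficiency, 3))    # 0.889, i.e. roughly 89 percent
```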
Previous experiments in this environment (Spraggon, 2002, 2004b; Oxoby and Spraggon, 2005) suggest that subjects do not choose the dominant strategy Nash equilibrium. Spraggon (2004a) suggests that both decision error and alternate preferences are important explanations for this non-Nash behavior. In the same environment, Oxoby and Spraggon (2005) find much more Nash play among
subjects who have had a course in game theory, suggesting that decision error may be more important in explaining non-Nash behavior than alternate preferences. If this is the case, then providing subjects with a better explanation of the environment should result in more Nash decision making.

In our experiments we conduct two treatments: a standard instruction treatment and a treatment with enhanced instructions.5 Primarily, the differences between the instructions lie in the description of the payoff function, which was expanded to include the marginal benefit from increasing the subject's decision number by one for each decision number, and an explanation of "marginal decision making." The following is the relevant part of the enhanced instructions:

    The purpose of the Group Payoff is to insure that everyone chooses a certain Decision Number. Notice that by increasing your Decision Number by one you increase your Private Payoff by the number given in the third column of Table 1. However, by increasing your Decision Number by one you reduce the Group Payoff by 0.3. As a result you maximize your Total Payoff by increasing your decision number to the point where increasing your decision number by one more will increase your Private Payoff by less than 0.3.

Subjects were also provided with hypothetical numerical examples and a question to test their understanding.
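The stopping rule described in these instructions pins down the Nash decision exactly: moving from $x$ to $x + 1$ raises the private payoff by $0.002[(x_n^{\max} - x)^2 - (x_n^{\max} - x - 1)^2]$, while it always lowers the group payoff by 0.3. A minimal sketch of this marginal logic (our illustration, not material shown to subjects):

```python
def marginal_private_payoff(x, x_max):
    """Gain in private payoff from raising the decision number from x to x + 1."""
    b = lambda z: 25.0 - 0.002 * (x_max - z) ** 2   # equation (17.1)
    return b(x + 1) - b(x)

def stopping_point(x_max, group_cost=0.3):
    """Raise the decision number while the marginal private gain exceeds 0.3."""
    x = 0
    while x < x_max and marginal_private_payoff(x, x_max) > group_cost:
        x += 1
    return x

print(stopping_point(125))   # 50: the Nash decision for large capacity subjects
print(stopping_point(100))   # 25: the Nash decision for medium capacity subjects
```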
Results

The results from the experimental sessions are striking. When subjects are given instructions that include a description of the marginal analysis they are much more likely to choose the dominant strategy Nash decision. Moreover, they are also much more likely to choose numbers that are below their Nash decision, presumably in an attempt to achieve the group optimal outcome.

Data and method of analysis

The data was collected from eight sessions conducted at the University of Calgary in the winter and fall of 2003 and two sessions conducted in the winter of 2005. Participants were recruited from the general university population. Each experiment consisted of 25 periods. Sessions took approximately an hour and a half and average earnings varied between (Canadian) $10 and $25.

Efficiency is measured as the shortfall of the actual from the optimal value of the Social Planner's problem (equation 17.2), expressed as a percentage of the difference between the optimal and minimum possible values of the Social Planner's problem, and subtracted from 100 percent; that is, efficiency equals $(SP^{\text{actual}} - SP^{\min})/(SP^{\text{opt}} - SP^{\min})$. This definition of efficiency accounts not only for differences between the group total and the target, but also for reductions in total payoff due to subjects reducing their decision numbers by more or less than is individually optimal. For example, if two large capacity subjects each chose 75 and two
medium capacity subjects each chose zero, the group total would be 150 but the efficiency of this outcome would be only 89 percent.

We begin by discussing the results at the aggregate level. Means are calculated for each group of four subjects and the mean of these means is calculated for each treatment cell. These statistics are independent for the analysis at the aggregate level, and we have five observations in each cell. We also compare the data by participant type (medium and large capacity) across the different treatments. This data is also independent.6 Finally, we look at the distributions of individual decisions. While this data is not independent, we follow Anderson et al. (1998) in assuming that errors in decision making result in a distribution of decisions. As it turns out, these distributions are reasonably normal (subject to the constraints of the decision space).

Analysis at the aggregate level

Table 17.1 shows the aggregate decision numbers and efficiency calculated as the means of session means. Note that the mean aggregate decision number is closer to the target with the enhanced instructions for the tax instrument but further from it for the tax/subsidy. The group totals are not significantly different from each other using either analysis of variance (p = 0.3205 for the tax/subsidy and p = 0.1356 for the tax) or the Mann-Whitney U-test (p = 0.3472 for the tax/subsidy and p = 0.1172 for the tax). The efficiency results are consistent with the group total results (although the enhanced instructions improve efficiency for both treatments). Again the
Table 17.1 Mean aggregate decision numbers by treatment

Treatment                            Mean group total     Confidence interval      Mean group efficiency
                                                          Lower bound  Upper bound
Tax/subsidy enhanced instructions    96.94 (11.07) [5]    66.20        127.68      90.18 (0.017) [5]
Tax/subsidy standard instructions    133.50 (32.69) [5]   42.00        222.26      87.09 (0.036) [5]
Tax enhanced instructions            183.18 (10.19) [5]   154.89       211.48      91.77 (0.026) [5]
Tax standard instructions            205.30 (8.59) [5]    181.45       229.15      89.59 (0.013) [5]

Note: Standard errors are provided in parentheses and numbers of observations are provided in square brackets.
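With only five independent group means per treatment cell, the comparisons above rest on small-sample tests. A sketch of how they can be run with standard tools (the group means below are illustrative placeholders, since the chapter reports only cell means and standard errors):

```python
# Aggregate-level comparison of two treatment cells: five independent group
# means each, tested by analysis of variance and the Mann-Whitney U-test.
from scipy import stats

enhanced_totals = [85.0, 92.0, 97.5, 104.0, 106.2]   # placeholder cell data
standard_totals = [70.0, 110.5, 128.0, 160.0, 198.5]

_, p_anova = stats.f_oneway(enhanced_totals, standard_totals)
_, p_mwu = stats.mannwhitneyu(enhanced_totals, standard_totals,
                              alternative="two-sided")
print(f"ANOVA p = {p_anova:.4f}; Mann-Whitney p = {p_mwu:.4f}")
```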
Figure 17.1 Mean group totals by treatment and period, tax/subsidy instrument. [Line graph: tax/subsidy standard versus tax/subsidy enhanced; group totals 0–600 over periods 0–25.]
differences are not significant using standard parametric tests (p = 0.4577 for the tax/subsidy and p = 0.4700 for the tax) or the Mann-Whitney U-test (p = 0.3472 for the tax/subsidy and p = 0.3472 for the tax). Figures 17.1 and 17.3 show the differences in the aggregate decision number and efficiency through time for the tax/subsidy instrument. Notice that the mean
Figure 17.2 Mean group totals by treatment and period, tax instrument. [Line graph: tax standard versus tax enhanced; group totals 0–600 over periods 0–25.]
Figure 17.3 Mean efficiency by treatment and period, tax/subsidy instrument. [Line graph: tax/subsidy standard versus tax/subsidy enhanced; efficiency 0.8–1 over periods 0–25.]
Figure 17.4 Mean efficiency by treatment and period, tax instrument. [Line graph: tax standard versus tax enhanced; efficiency 0.8–1 over periods 0–25.]
Table 17.2 Mean decision numbers under the tax/subsidy, by treatment

Treatment                                 Mean decision      Confidence interval      Median decision
                                                             Lower bound  Upper bound
Enhanced instructions, large capacity     28.59 (4.88) [5]   15.05        42.12       25.5
Standard instructions, large capacity     35.03 (9.69) [5]   8.13         61.93       36.5
Enhanced instructions, medium capacity    19.88 (4.65) [5]   6.98         32.79       20.5
Standard instructions, medium capacity    31.72 (7.10) [5]   11.99        51.44       26

Note: Standard errors are provided in parentheses and numbers of observations are provided in square brackets. Median is the median of medians by group.
group total is typically closer to the target with the standard instructions, but efficiencies are higher during the early periods with the enhanced instructions. Figures 17.2 and 17.4 show that group total and efficiency under the tax instrument are better in the treatment with the enhanced instructions during the early periods but are very similar in the later periods. These results suggest that, at the aggregate level, providing more information to subjects does not significantly affect the efficiency of either instrument.

Analysis by participant type

Recall that $x_n^* = 50$ and $x_n^* = 25$ are dominant strategies for large and medium capacity subjects facing the tax/subsidy instrument. Table 17.2 shows that decisions are not as consistent with the Nash prediction as the results regarding aggregate decision making might suggest. For large capacity subjects under the tax/subsidy instrument (Table 17.2), decisions in the enhanced instruction treatment are significantly below the Nash prediction. Further, notice that the median decisions in both the enhanced and standard instruction treatments are below this prediction. Similar results are observed for medium capacity subjects, although decisions are somewhat above $x_n^*$ under the standard instructions. Under the tax instrument, decisions are much higher than the prediction for both the large and medium capacity subjects under the standard instructions and for medium capacity subjects under the enhanced instructions (Table 17.3). However, the results are more consistent with the Nash prediction under the enhanced instructions.
Table 17.3 Mean decision numbers under the tax, by treatment

Treatment                                 Mean decision      Confidence interval      Median decision
                                                             Lower bound  Upper bound
Enhanced instructions, large capacity     56.09 (1.75) [5]   51.24        60.93       50
Standard instructions, large capacity     59.12 (2.46) [5]   52.39        65.95       60
Enhanced instructions, medium capacity    35.50 (5.46) [5]   20.33        50.68       27.5
Standard instructions, medium capacity    43.53 (5.04) [5]   29.55        57.51       49.5

Note: Standard errors are provided in parentheses and numbers of observations are provided in square brackets. Median is the median of medians by group.
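The confidence intervals in Tables 17.1–17.3 appear consistent with t-based 95 percent intervals built on the five group observations per cell (small discrepancies presumably reflect rounding of the reported standard errors); for example, for the first row of Table 17.3:

```python
# t-based 95 percent confidence interval for a cell with five group means
# (df = 4), using Table 17.3's first row: mean 56.09, standard error 1.75.
from scipy import stats

mean, se, n = 56.09, 1.75, 5
t_crit = stats.t.ppf(0.975, df=n - 1)          # about 2.776
print(mean - t_crit * se, mean + t_crit * se)  # roughly 51.23 and 60.95
```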
Figure 17.5 depicts the time series of average decision number by treatment and subject type for both the tax/subsidy (T/S) and tax instruments. Note that for the tax/subsidy, under both the standard and enhanced instruction treatments, decisions are very similar between the large and medium capacity types. Under the tax instrument the decisions of large and medium capacity subjects are different, although they are above the Nash predictions. In summary, explaining the marginal nature of the decision making environment results in participants recognizing the collusive outcome under the tax/subsidy and choosing lower decision numbers as a result. Under the tax instrument subjects seem to choose slightly larger decision numbers, but this effect is somewhat mitigated by the enhanced instructions.

Analysis by participant

Figures 17.6 and 17.7 present the distributions of individual decisions by subject type and instrument. Figure 17.6 provides the distributions for the tax/subsidy instrument. Notice that subjects seem to focus on the group optimal decision (zero) in all cases except the large capacity subjects under the enhanced instructions. These distributions are significantly different using standard non-parametric tests (p < 0.01, < 0.01 for the Mann-Whitney test, p < 0.01, < 0.011 for the Median test and p < 0.01, < 0.01 for the Kolmogorov-Smirnov test for large and medium capacity subjects, respectively).7 Figure 17.7 shows that the story is different for the tax instrument. The enhanced instructions focus both the large and medium capacity subjects on the optimal decision. Again, the distributions are significantly
Figure 17.5 Mean decision by subject type and period. [Four line-graph panels: T/S standard instructions, T/S enhanced instructions, tax standard instructions, tax enhanced instructions; series: large and medium capacity; decisions 0–150 over periods 0–25.]
Figure 17.6 Distributions of individual decisions, by treatment, tax/subsidy. [Four histogram panels: standard and enhanced instructions for large and medium capacity subjects; fraction of decisions (0–0.6) by decision number (0–150).]
Figure 17.7 Distributions of individual decisions, by treatment, tax. [Four histogram panels: standard and enhanced instructions for large and medium capacity subjects; fraction of decisions (0–0.6) by decision number (0–150).]
different using standard non-parametric tests (p = 0.0909, < 0.01 for the Mann-Whitney test, p < 0.01, < 0.01 for the Median test and p < 0.01, < 0.01 for the Kolmogorov-Smirnov test for large and medium capacity subjects, respectively). These distributions are unchanged when only the data from periods 10 to 20 is considered. Standard non-parametric tests for these periods are also consistent with this contention, except for large capacity subjects under the tax instrument, for which the Mann-Whitney and Median tests do not suggest a significant difference.8
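All three distributional tests used here are available in standard libraries; a minimal sketch (the decision vectors are made-up stand-ins for the pooled individual decisions):

```python
# Comparing distributions of individual decisions across instruction
# treatments with the Mann-Whitney, Median, and Kolmogorov-Smirnov tests.
from scipy import stats

standard_decisions = [0, 0, 10, 25, 40, 50, 50, 75, 100, 125]  # placeholders
enhanced_decisions = [25, 40, 50, 50, 50, 50, 55, 60, 75, 75]

_, p_mwu = stats.mannwhitneyu(standard_decisions, enhanced_decisions,
                              alternative="two-sided")
_, p_median, _, _ = stats.median_test(standard_decisions, enhanced_decisions)
_, p_ks = stats.ks_2samp(standard_decisions, enhanced_decisions)
print(p_mwu, p_median, p_ks)
```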
Conclusion

The policy implications of these results are clear. Providing a better explanation of the instrument to the participants does not necessarily result in more compliance at the aggregate level. This suggests that the reductions in efficiencies observed in previous empirical studies of these instruments (Cochard et al., 2005; Poe et al., 2004; Spraggon, 2004b) are likely due to a combination of decision errors and strategic play in which participants attempt to take advantage of the group nature of these instruments.

In terms of implementation, two issues regarding these instruments should be considered. First, the instruments' information requirements (on the part of firms and policy makers) are extreme, as optimal tax/subsidy rates depend on both firms' costs and the environmental damages they impose. However, following Segerson (1988), one could consider firm-specific tax rates based on firms' differing costs or damage functions. This may be appropriate for non-point source pollution problems as a firm's damage (for a fixed level of emissions) depends on distance from the resource in question (e.g. the watershed).9 Second, if firms are competitors in product markets, an ambient pollution instrument may provide an additional tool for competition: increasing emissions reduces a firm's abatement costs and increases its fine, but also increases the fines paid by competitors without a corresponding reduction in their abatement costs.

Despite this, the experimental evidence (Cochard et al., 2005; Poe et al., 2004; Spraggon, 2002, 2004b) suggests that these instruments are remarkably efficient at reducing the aggregate emission level to the target.10 The reductions in efficiency that do occur are primarily due to deviations from optimal behavior at the individual (rather than aggregate) level. Combined with our results on the efficiency and compliance gains of recommended play in conjunction with these instruments, this provides reason to be optimistic about the ability of these instruments to mitigate not only non-point source pollution but also other group moral hazard problems (Spraggon, 2002). This may be critically important for pollution problems where there are a large number of polluters and it is financially infeasible to monitor them effectively.
Acknowledgments

We thank Jim Murphy, John Stranlund, and participants from the annual meeting of the Economic Science Association (Tucson, 2003) and the Experimental
Economics and Public Policy Workshop (Appalachian State University, 2005) for helpful comments. We thank Kendra N. McLeish for valuable research assistance. Financial support for this project was provided by the Social Sciences and Humanities Research Council of Canada and the W.E. Upjohn Institute.
Notes

1 Other experiments using recommended play include Brandts and Holt (1992), Brandts and MacLeod (1995), and Oxoby and McLeish (2004).
2 The instructions are presented in neutral language to abstract from the subjects' feelings about pollution.
3 The instrument is presented to subjects in this way to be clear about the instruments' dichotomous nature.
4 This outcome was achieved in the experiments of Cochard et al. (2005). These experiments involved a simpler environment where subjects choose between zero and 20. This outcome was only observed among one group of inexperienced subjects with the standard instructions.
5 All instruction sets are available upon request from the authors.
6 Means calculated for a subject type (or a subject) are independent from the means calculated for the same subject type (or different subjects) across treatments, but not independent for different subject types (or different subjects) within the same treatment.
7 A Tobit regression also suggests that the enhanced instruction treatment matters (p = 0.004 on the dummy variable for the standard instructions in the regression with dummy variables for tax versus tax/subsidy, capacity, instructions, and the four interactions), but we feel that the non-parametric tests are more appropriate due to the non-normality of the data.
8 p-values for the Mann-Whitney, Median and Kolmogorov-Smirnov tests for large and medium capacity subjects respectively are 0.0022, 0.0084; 0.000, 0.002; 0.000, 0.000 for the tax/subsidy and 0.4618, 0.0001; 0.138, 0.000; 0.056, 0.000 for the tax instrument.
9 Indeed, Weersink et al. (1998) argue (although not theoretically) that these instruments are only appropriate for homogeneous firms.
10 Poe et al. (2004) finds a much lower level of efficiency (56.2 percent) when a simple market is combined with the ambient pollution instrument.
References

Alpízar, F., T. Requate, and A. Schram, 2004. Collective versus random fining: an experimental study on controlling ambient pollution, Environmental and Resource Economics, 29 (2), 231–251.
Anderson, S. P., J. K. Goeree, and C. A. Holt, 1998. A theoretic analysis of altruism and decision error in public goods games, Journal of Public Economics, 70 (2), 297–323.
Andreoni, J., W. Harbaugh, and L. Vesterlund, 2003. The carrot or the stick: rewards, punishment and cooperation, American Economic Review, 93 (3), 893–902.
Brandts, J. and C. A. Holt, 1992. An experimental test of equilibrium dominance in signaling games, American Economic Review, 82 (5), 1350–1365.
Brandts, J. and W. B. MacLeod, 1995. Equilibrium selection in experimental games with recommended play, Games and Economic Behavior, 11 (1), 36–63.
Charness, G. and M. Rabin, 2002. Understanding social preferences with simple tests, Quarterly Journal of Economics, 117 (3), 817–869.
Cochard, F., M. Willinger, and A. Xepapadeas, 2005. Efficiency of nonpoint source pollution instruments: an experimental study, Environmental and Resource Economics, 30 (4), 393–422.
Croson, R. and M. Marks, 2001. The effect of recommended contributions in the voluntary provision of public goods, Economic Inquiry, 39 (2), 238–249.
Dickinson, D. L., 2001. The carrot vs. the stick in work team motivation, Experimental Economics, 4 (1), 107–124.
Fehr, E. and K. Schmidt, 1999. A theory of fairness, competition, and cooperation, Quarterly Journal of Economics, 114 (3), 817–868.
Fehr, E. and S. Gächter, 2000. Cooperation and punishment in public goods experiments, American Economic Review, 90 (4), 980–994.
Ledyard, J. O., 1995. Public goods: a survey of experimental research. In J. H. Kagel and A. E. Roth (eds.), Handbook of Experimental Economics. New Jersey: Princeton University Press, 111–194.
Nalbantian, H. and A. Schotter, 1997. Productivity under group incentives: an experimental study, American Economic Review, 87 (3), 314–341.
Oxoby, R. J. and K. N. McLeish, 2004. Specific decision and strategy vector methods in ultimatum bargaining: evidence on the strength of other regarding behavior, Economics Letters, 84 (3), 399–405.
Oxoby, R. J. and J. Spraggon, 2005. Game theory for playing games: bounding rationality in a negative externality experiment, Lakehead University, Manuscript.
Poe, G. L., K. Segerson, W. D. Schulze, J. F. Suter, and C. A. Vossler, 2004. Exploring the performance of ambient-based policy instruments when non-point source polluters can cooperate, American Journal of Agricultural Economics, 86 (5), 1203–1210.
Segerson, K., 1988. Uncertainty and incentives for nonpoint pollution control, Journal of Environmental Economics and Management, 15 (1), 87–98.
Spraggon, J., 2002. Exogenous targeting instruments as a solution to group moral hazard, Journal of Public Economics, 84 (2), 427–456.
Spraggon, J., 2004a. Individual decision making in a negative externality experiment, Experimental Economics, 7 (3), 249–269.
Spraggon, J., 2004b. Testing ambient pollution instruments with heterogeneous agents, Journal of Environmental Economics and Management, 48 (2), 837–856.
Weersink, A., J. Livernois, J. Shogren, and J. Shortle, 1998. Economic instruments and environmental policy in agriculture, Canadian Public Policy, 24 (3), 309–327.
Zelmer, J., 2003. Linear public goods experiments: a meta-analysis, Experimental Economics, 6 (3), 299–310.
18 Discussion
Regulation and compliance
Kathleen Segerson
In a recently published popular book entitled Freakonomics, authors Levitt and Dubner (2005) describe an experiment in several Israeli day care centers in which a fine was put in place to try to discourage parents from being late in picking up their children.1 Economic theory predicted that a penalty for late pick-ups would reduce the number of late pick-ups. However, the outcome of the experiment was exactly the opposite of what theory predicted, i.e. when the penalty was put in place, the number of late pick-ups increased instead of decreasing. Levitt and Dubner cite two reasons to explain this unexpected outcome: (1) the magnitude of the penalty that was imposed was too low (only $3), and (2) the day care had replaced a moral incentive or penalty for being late (i.e. the associated guilt) with an economic penalty, which parents were apparently much more willing to bear.

Although this was a field rather than a laboratory experiment, it highlights the important role that experiments can play in testing theory and the effectiveness of alternative policy interventions in a relatively low cost way. The chapters on regulation and compliance in this volume provide additional evidence in support of this view. Collectively, the chapters in this part of the book describe a series of laboratory experiments designed to enhance our understanding of behavior in general, but more specifically behavioral responses to environmental policy interventions (such as taxes on ambient pollution or congestion, or the provision/disclosure of information). Since environmental economics is at its core the study of externalities, policy interventions to control externalities are a fundamental part of this field. Without laboratory experiments, analysts have two tools for investigating the effects of alternative policy options: (1) economic theory, and (2) empirical analysis of actual real world policies that have been implemented. For policy instruments that are innovative and have never been tried before, economic theory is the only analytical tool available. Clearly, the predictions of theory can be "wrong," as in the example above, and experimenting with alternative policy approaches through real world implementation can be costly in terms of time, money, and in some cases political capital. Laboratory experiments of the type described in these chapters provide a very useful middle ground for investigating the likely impacts of proposed policies at lower cost. However, for reasons described below, in order to have confidence that the real world response to a
policy intervention will mirror the response observed in the laboratory, it seems imperative that the experiment be conducted (albeit on a small scale) in a field environment that mirrors the real world context in which the policy might be used. After all, a laboratory experiment of the penalty for late day care pickups, especially if conducted with university students (including economics majors) who have little or no experience with day care, might very well have shown a response to the imposition of the late fee consistent with the (incorrect) predictions of theory.

There are at least three interrelated issues that arise in assessing the usefulness of laboratory experiments in understanding environmental policies. The first is the purpose of the experiment, i.e. what we hope to learn from it. Some experiments are simply designed to teach us more about people's preferences and how they make decisions, while others seek insight into specific policy instruments or interventions. In this part, the chapter by George et al. is an example of the former, while the other four chapters all fall into the latter category. George et al. consider whether individuals hold "social preferences" and respond to motives other than self-interest when making decisions. While the results of such experiments can have important implications for the design of environmental policy, the experiments themselves focus on the nature of preferences rather than the incentives created by alternative policies.

Information about preferences is fundamental to all policy analysis. However, most economic analyses assume neoclassical preferences under which agents respond primarily to economic incentives (i.e. standard rewards and punishments). As Levitt and Dubner (2005) note, behavior can also be motivated by moral or social incentives, and in some cases (e.g. the day care example) these incentives can dominate. Similarly, Evans et al. appeal to the "gambler's fallacy" as an explanation of observed results that are inconsistent with (neoclassical) theory. Both illustrate why it is important to understand the extent to which preferences driven by non-economic considerations affect choice. Experiments such as those conducted by George et al. can play a key role in generating this type of information. They show that individuals have both reciprocal and distributional preferences and are motivated by empathy. However, the authors (appropriately) caution that their results are based on a context-free experimental design. It seems likely that non-economic motives are context specific, since, for example, one might expect perceived moral obligations to hinge on context.2 Thus, while they may illustrate the existence of motives beyond self-interest and the potential for these motives to be important in some contexts, these experiments do not show how other motives are likely to influence choices in a particular context. For this, the context-neutral setting in the laboratory must be replaced with a more realistic consideration of the specific attributes and types of decision makers that characterize the particular context of interest.

Even when experiments focus on evaluation of alternative policy approaches rather than underlying preferences, they can differ in the nature of the hypotheses that the experiments are designed to test. These differences
constitute a second issue of interest. Some experiments are designed to test simply whether a given policy is "effective" in the sense of moving the equilibrium in a particular direction, while others seek to test whether the mechanism achieves a given outcome predicted by theory, such as an efficient outcome or a predicted equilibrium outcome. For example, the chapter by Murphy and Stranlund simply seeks to test how a specified audit policy with voluntary disclosure affects (i.e. increases, decreases, or leaves unchanged) a firm's decisions regarding disclosure and care under different assumptions about the cost of self-audits, and the implications of this for enforcement costs, environmental quality, and, ultimately, policy design. They ask whether voluntary discovery and disclosure policies are efficiency enhancing and not whether they achieve a given target. In contrast, Evans et al. compare cheating levels observed in their experiments to those predicted by theory and show that observed rates are sometimes above and sometimes below predicted levels. Likewise, Oxoby and Spraggon ask whether various ambient pollution instruments achieve a first best, as predicted by theory. Anderson et al. also compare observed outcomes (levels of highway entry) under a congestion tax to the efficient outcome to determine whether this policy instrument is able to achieve a first best. Interestingly, they find that while on average observed entry levels achieve the target, there is considerable variability around the target, leading to inefficiency. This result is consistent with previous experimental work on the efficiency of ambient pollution instruments (see, for example, Spraggon, 2002).

Clearly, in evaluating whether experimental results are consistent with theory, it is easier to show that a given policy instrument moves the equilibrium in a desired direction (i.e. enhances efficiency) than to show that it reaches a specified target (e.g. the first best). Of course, there is then the question of whether these weaker tests are nonetheless useful for environmental policy design. Since most theoretical models abstract from many real world complications and in practice the first best will be extremely difficult to know, it is perhaps both too much and unnecessary to ask of any given policy instrument that it move the equilibrium to the first best. In practice, perhaps we should be reassured when a given instrument at least has the predicted effect rather than the opposite effect, as in the day care example described above. This would imply that the incentives we seek to create through the policy are, in fact, "working" in the sense of inducing the type (if not the exact magnitude) of behavioral response that is desired. As an environmental economist interested in policy design, I would want first and foremost to know if a given policy mechanism is at least effective. If it is shown to be effective in a laboratory experiment, then there is hope that it would also be effective if put into practice (especially if the experiments were context specific). In contrast, while it would be nice to know that in addition the mechanism induced an efficient outcome in the laboratory, I think it is perhaps overly optimistic to conclude from this that it would likely lead to a first best outcome in practice as well.
The final issue, alluded to above, is the role of context. Many experimental researchers go to great lengths to be sure that their experiments are described in "neutral" terms and hence devoid of specific context that might influence the outcomes. The chapters by Evans et al., George et al., and Oxoby and Spraggon are examples. Although the Evans et al. experiments seek to test cheating incentives, the authors specifically avoid framing decisions as cheating or malfeasance to avoid payoffs associated with the ethical costs of cheating. Likewise, in their experiments George et al. do not make any reference to pollution, despite the fact that their work is motivated by a proposal for the distribution of carbon emission permits. Oxoby and Spraggon also use neutral language in their experiments to "abstract from the subjects' feeling about pollution." In contrast, the experiments in Murphy and Stranlund are couched specifically in the contexts of enforcement and compliance. Previous work on instruments to control ambient pollution has also been context specific (e.g. Vossler et al., 2006).

When deciding how much context to provide, there appears to be a fundamental tradeoff. Stripping the experiment of context has the advantage of testing more fundamental principles, thereby allowing for more generalization. For example, Evans et al. claim that their experimental results could provide insights into financial disclosure and tax compliance. In addition, use of neutral language avoids framing effects that could be specific to the laboratory.3 The disadvantage, of course, is that the resulting experiment might also abstract from context effects that could be very important in practice. This is particularly true if moral or social incentives play a role in one context but not another. The provision of public goods such as environmental protection is a context in which these considerations might play an important role for some people. For example, it seems likely that at least part of the objection of some environmentalists to the use of emission taxes is the view that, by creating a "license to pollute," these taxes remove any moral stigma that might otherwise be attached to releasing pollution into the environment. Similarly, it is not clear whether a group of neighboring farmers would respond to alternative ambient pollution policies the same way that unrelated undergraduates in a laboratory would.

Given this, it seems that it would be risky to move directly from laboratory results using undergraduates placed in a context neutral setting to implementation of a given policy approach in practice. The step that seems to be essential but currently missing from most experimental work is the "testing" of results in a context that more closely mirrors the real world context in which the policy would be used. Short of field testing, this could simply mean laboratory experiments that (1) use subjects drawn from the population of decision makers who would actually be faced with the incentives that are being tested, and (2) are couched in the context of the real world setting in which their behavioral responses would occur (e.g. pollution control). Without this crucial step, real world implementation of policies that "worked" in the laboratory may turn out to be very costly indeed.
Notes

1 See Gneezy and Rustichini (2000) for the original study.
2 For example, some individuals might feel that society has a moral obligation to provide equal access to some goods (e.g. emergency room care) but no moral obligation to provide equal access to other goods (e.g. luxury cars).
3 Murphy and Stranlund attribute one of their results that is inconsistent with theory to a framing effect, which they claim is likely to be limited to the laboratory.
References

Gneezy, Uri, and Aldo Rustichini, 2000, "A Fine is a Price," Journal of Legal Studies, 29 (1) (January): 1–17.
Levitt, Steven D., and Stephen J. Dubner, 2005, Freakonomics: A Rogue Economist Explores the Hidden Side of Everything, New York: HarperCollins.
Spraggon, John, 2002, "Exogenous Targeting Instruments as a Solution to Group Moral Hazards," Journal of Public Economics, 84 (2): 427–456.
Vossler, Christian A., Gregory L. Poe, William D. Schulze, and Kathleen Segerson, 2006, "Communication and Incentive Mechanisms Based on Group Performance: An Experimental Study of Nonpoint Pollution Control," Economic Inquiry, 44 (4): 599–613.
Part IV
Valuation and preferences
19 Preference reversal asymmetries in a static choice setting
Timothy Haab and Brian Roe
The economic analysis of environmental policy requires a thorough understanding and appreciation of the ways in which people make decisions towards certain or risky events. Traditional environmental economic policy analyses have relied almost exclusively on the assumption that people are capable of and in fact do make rational decisions. In an unpublished draft manuscript, J. Shogren (2005) puts the problem succinctly: "Relying on rational theory to guide environmental valuation and policy makes more sense if people make, or act as if they make, consistent and systematic choices toward certain and risky events." Here, we focus on one particularly troubling individual decision making anomaly: preference reversals.

Preference reversals, in which subjects favor one object in a set when preference is elicited via one method (e.g. choice among alternatives) but favor another object when preference is elicited via another method (e.g. inferred from auction bids), are well documented. Most evidence of reversals comes from studies of preferences over lotteries with similar expected payoffs, though more evidence is emerging from studies with time differentiated payments or with payments involving issues of equity and fairness across several recipients (see Seidl (2002) for a thorough review of preference reversals).

In an attempt to broaden our understanding of preference stability, we look for experimental evidence of preference reversal behavior in a static choice setting. The experiment is formulated in a simple labor market context where subjects are asked their preferences for performing menial tasks and then later asked to formulate bids representing the minimum willingness to accept to perform the tasks, subject to an incentive compatible elicitation mechanism. We are able to induce preference reversals through the use of anchoring cues, thereby demonstrating the rank instability of preferences in a static choice setting. Perhaps of greater interest, we identify an asymmetry in these static reversals that cannot be explained by the traditional reversal explanations posited in the risky choice literature.
Background

Evidence from studies of choice over gambles, time-differentiated payments or interpersonal income distributions may simply expose the tenuousness of
extending a static preference theory to richer choice environments rather than problems with the axioms girding static choice theory. A smaller though intriguing trove of evidence is emerging that documents similar reversals in simple, static settings and that more directly contradicts the core assumptions of utility theory: individuals are endowed with stable, coherent preferences that allow for invariant orderings over consumption bundles regardless of the procedure used to elicit them.

For example, List (2002) reports that average bid prices for a smaller, nested set of sports cards are higher than for the full set when bids are elicited for each set separately, but average bid prices are higher for the larger set when both sets are evaluated jointly. The bids are elicited from sports card enthusiasts and dealers spending their own money in a real market, and the goods in question did not involve any of the complications outlined above. Because of its market setting, many of the critiques lodged by economists against preference reversal studies are clearly avoided (Grether and Plott, 1979).

Ariely et al. (2003) report several experiments in which they elicit subjects' willingness to accept compensation to experience unpleasant stimuli (noises, disagreeable drinks and mild pain) using incentive compatible bidding mechanisms. They find considerable arbitrariness in terms of cardinal preference structures but considerable robustness of preference orderings. Ariely et al. conclude that preferences are arbitrary but coherent, which leaves unchallenged the base assumption that preference orderings are stable though cardinal representations of these may not be unique. Other evidence of reversals in static situations has been documented in the psychology literature (see Hsee et al. (1999) and Seidl (2002) for summaries), though few of these studies feature market pressures and realities like the List or Ariely et al. studies.

Few studies performed on preference reversals in static situations (hereafter, SR for static reversals) utilize a within-subject design, while many studies performed in richer choice contexts (hereafter GR for gamble reversals) feature within-subject designs. Hence the GR literature reports inconsistency within the same person while SR studies report reversals on the aggregate.1

Extensive analysis of within-subject reversals from the GR literature uncovers an unusual pattern within the anomaly. Reversals tend to occur for only one subset of respondents: those respondents who prefer gambles with high odds and low stakes to gambles of equivalent expected payouts but low odds and high stakes. The typical setup involves allowing respondents to choose between two such lotteries, where the low odds, high stakes gamble is called the $-bet and the high odds, low stakes gamble is called the P-bet. Those who prefer the P-bet in a direct, joint comparison are much more likely to reverse themselves than those who prefer the $-bet when, later in the experiment, the minimum bids for selling each lottery are elicited.

A common explanation focuses on anchoring and adjustment. When faced with formulating dollar bids that may determine if the subject actually plays the gamble, subjects anchor on the amount of the potential payout because it is also demarcated in dollars. Because the $-bet involves a higher dollar figure than the
P-bet and payouts for losing the bet are similarly small, the subjects are likely to state a higher price for the $-bet than the P-bet; that is, subjects anchor their responses to the winning dollar payout for each bet. This increases the likelihood that those who chose the P-bet in the choice setting will be reversed and reduces the likelihood that those choosing the $-bet in the choice setting will be reversed. The anchoring implicit in many of the GR studies reinforces the preference ordering of those who initially choose the $-bet and offsets the preference ordering of those who initially choose the P-bet, and from this pattern emerges the asymmetry. Subsequent experiments that reverse the direction of anchoring (Schkade and Johnson, 1989) or state winning outcomes in units other than dollars (Slovic et al., 1990) show that this asymmetry in reversals is often reduced, though typically not eliminated, which lends credence to anchoring and adjustment mechanisms as part of the explanation.

In this chapter, we induce within-subject preference reversals in a static choice setting by using anchoring cues prior to eliciting selling prices. The experiment is formulated in a simple labor market context where subjects are asked their preferences for performing several menial tasks (among them, typing and adding) and then later asked to formulate bids representing the minimum willingness to accept to perform the tasks, subject to an incentive-compatible elicitation mechanism. In the process of reversing preferences, we identify an asymmetry in these static reversals where subjects who prefer typing words to adding numbers are likely to reverse themselves during the price elicitation stage, but subjects who prefer adding numbers rarely reverse themselves. Unlike the asymmetries in gambling reversals, however, reversing the direction of the anchors does not reduce this asymmetry. Furthermore, the plethora of other explanations forwarded for gambling reversals does not seem to explain the asymmetry in static reversals we identify.

The remainder of the chapter is organized as follows. We first introduce the two experiments and discuss the results. We then compare the results to those of other SR and GR studies and discuss similarities and differences. We finish with some hypotheses of what could be driving this static asymmetry result and a discussion of the implications of the results for microeconomic theory and behavioral economics.
Experiment one

To test the rank stability of preferences, we develop an experiment that asks subjects to rank order two goods in the presence of external stimuli. Previous experiments have found that value elicitation experiments are more effective if they elicit subjects' willingness to accept to give up goods rather than the willingness to pay for non-endowed goods (Kahneman et al., 1991). To avoid potential endowment effects, we elicit values for a good with which subjects are pre-endowed: their time. The experiment simulates a simple labor market in which we act as an employer and ask subjects to indicate their "wage demanded" for two simple but potentially time-consuming tasks: addition and typing.
Subjects were recruited to participate in the computer-based experiment via posted flyers and emails sent to undergraduate and graduate students of several departments on campus. Subjects were told they would have the opportunity to participate in a computer-based experiment that would guarantee a minimum cash payout of $5 and would offer the possibility of earning more money during the course of the experiment. The expected time commitment for the experiment was less than 1 hour. The resulting subjects consisted of a mix of undergraduate and graduate students. The experiment is fully automated, and once the subjects begin, they have no contact with anyone, apart from questions of clarification to the moderator (a very rare occurrence). A total of 71 subjects participated in the first experiment.

The introductory section gets subjects to think about and rank their preferences over five tasks: typing a list of words, adding pairs of numbers, mopping floors in an office building, working in a fast-food kitchen and telemarketing. On separate screens, with the order randomized, subjects are asked to imagine performing each task for one hour and then rank order the five tasks from most to least enjoyable if they were paid the same amount to perform each task.

We then provide a sample demonstration of two of the five tasks, typing and adding, and elicit a pre-treatment ranking of the two tasks. Subjects are told that later in the experiment they will have the opportunity to perform one of two tasks for pay. The exact tasks to be performed consist of typing a list of 500 words into the computer, or adding together 175 pairs of two-digit numbers and typing the sum into the computer. To familiarize subjects with the tasks and to allow them to form realistic expectations of the full task, each task is demonstrated, in random order, on a small scale. Subjects type a list of 30 words, and add together ten pairs of numbers. The time to complete each of the demonstrations is recorded, and subjects are given an estimate of the time it will take to complete the full adding or typing task. Subjects can modify this predicted time according to their own expectations. Following the demonstrations, subjects rank the five tasks assuming the tasks paid the same. The lowest ranked (most enjoyable) of the adding and typing tasks determines the pre-treatment preference.

Following the pre-treatment preference elicitation, a series of value elicitation questions establishes the payment demanded for each of the two tasks. The procedure for establishing the minimum payment demanded utilizes a Becker–DeGroot–Marschak (BDM) incentive compatible elicitation method, described to subjects as follows:

    We are going to ask you some questions to determine the smallest amount of money we would have to pay you to perform the tasks. We will refer to this as your "wage demanded." After we have established your wage demanded for each task, the computer will randomly choose one of the two tasks. After the task is chosen, the computer will randomly select a price and announce it to you. Please note that the price chosen by the computer is completely random and does not depend on your wage demanded for each
    task. Depending on your wage demanded for the chosen task, and the announced price, you will be asked to either a) perform the task for the announced price and receive payment upon completion of the task, or b) not perform the task and receive no payment.

No specifics are given to the subject regarding the distribution from which the random prices are drawn, as such information would dilute the effect of anchors we provide later in the bidding elicitation.2 A series of follow-up screens repeats the procedure, and a quiz ensures subjects understand the procedure. Subjects are told that they have no incentive to reveal an amount other than their true lowest willingness to accept.

For the value elicitation questions, subjects are randomly assigned to one of three anchor treatment groups: 25 percent receive a no anchor control treatment and 75 percent are randomly split between two anchor treatments. The no anchor treatment group receives an open-ended value elicitation question for each task of the form: "We would like to know the lowest amount that we would have to pay you to add 175 pairs of numbers together. What is the smallest amount we will have to pay you to add 175 pairs of numbers?" A similar question is asked for the 500 word typing task (and the order of the two elicitations is randomized). The two anchored treatment groups receive an anchoring question prior to the open-ended elicitation for each task. For the adding task, the anchor question is: "Would you be willing to add 175 pairs of numbers for $A?" For the typing task, the anchor question is: "Would you be willing to type 500 words for $T?" The anchor amounts ($A and $T) vary depending on the randomly assigned anchor treatment. Table 19.1 summarizes the two anchor treatments: add low/type high and type low/add high.

Following the value elicitation, the computer randomly determines the task to be offered (a 50/50 chance for each), and the price to be paid for the task. The price is randomly chosen from a uniform distribution over the range $2 to $12. Depending on the payment demanded for each task, the subject either performs the full task for the announced price or does not perform the task and receives no payment for the task. Following the successful completion of the task, the total payment due subjects is calculated and subjects are dismissed to the hallway for a simple game of chance to win an additional $20 (1/16 chance of success).

Table 19.1 Anchor treatments for experiment 1

Treatment             $A     $T
Add low/type high     $2     $12
Type low/add high     $12    $2
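The incentive property of this procedure is easiest to see in code: the announced random price, not the stated amount, determines the payment, so a subject can only lose by misstating her true reservation payment. A minimal sketch of the payment rule as described above (using the $2–$12 uniform price range of experiment 1):

```python
import random

def bdm_outcome(stated_payment_demanded, price_low=2.0, price_high=12.0):
    """Becker-DeGroot-Marschak rule: the task is performed at the announced
    price if that price is at least the stated payment demanded; otherwise
    no task is performed and no payment is made."""
    price = random.uniform(price_low, price_high)
    if price >= stated_payment_demanded:
        return "perform task", price   # paid the random price, not the bid
    return "no task", 0.0

# Overstating the true reservation payment only forgoes prices the subject
# would happily have accepted; understating only forces unprofitable work.
print(bdm_outcome(stated_payment_demanded=6.50))
```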
Discussion of experiment one

Of relevance are two preference rankings: the pre-treatment ranking and the ranking implied through the value elicitation procedure. Prior to the anchoring treatment subjects rank the adding and typing tasks in relation to each other and three other tasks. If preferences are rank stable, the anchoring treatments will have no effect on the subsequent relative ranking of the two tasks as implied by the value elicitation. If in the pre-treatment ranking the subject prefers the typing task, but in the post-treatment the subject indicates a willingness to accept less compensation to perform the adding task, then we label this a preference reversal. Similarly, a preference reversal will be observed if the subject prefers the adding task pre-treatment but reports a strictly lower willingness to accept payment for the typing task.

Given the random assignment of subjects to the three treatment groups, and the predetermined anchors for each group, the anchor treatments can be either reinforcing or counter-balancing to the pre-treatment rankings. If the subject has a pre-treatment preference for typing and receives the type low/add high anchoring treatment, the anchor reinforces the pre-treatment ranking. If the same subject receives the add low/type high anchor treatment then the anchors are counter to the pre-treatment ranking. If anchoring effects do not exist, we expect the post-treatment preferences for the two tasks to be independent of the anchoring treatment across subjects, and we expect that the anchoring effects will not induce reversals from pre- to post-treatment within subjects.

Experiment one results

We first examine the potential for across-subject reversals. If preferences are rank stable across subjects, we expect the proportion of participants preferring each task after the anchoring treatment to be independent of the anchoring treatment. Table 19.2 presents the number of participants preferring each task post-treatment, broken down by treatment. Rows represent the three experimental treatments. Columns represent the post-treatment preferences as revealed through the relative payments demanded for each task.3 Table entries are the number falling in each preference category over the number offered each treatment.
Table 19.2 Post-treatment preferences by treatment

Anchoring treatment              Prefer typing   Indifferent   Prefer adding
Low add ($2)/high type ($12)     15/28           8/28          5/28
No anchor                        8/18            6/18          4/18
High add ($12)/low type ($2)     3/25            7/25          15/25
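The pair-wise proportion tests reported below can be reproduced directly from these cell counts with a standard two-sample z-test of proportions; a sketch:

```python
# Two-sample z-test of proportions applied to Table 19.2: post-treatment
# preference for typing under the high-type anchor (15/28) versus the
# low-type anchor (3/25), and the analogous comparison for adding.
from math import sqrt, erf

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

print(two_prop_z(15, 28, 3, 25))   # typing comparison: z near 3.2, p near 0.002
print(two_prop_z(15, 25, 5, 28))   # adding comparison: z near 3.2, p near 0.002
```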
Figure 19.1 Between-subject comparison: experiment 1 post-treatment preferences for tasks (by treatment). [Bar chart of proportions preferring typing after treatment (26 of 71), indifferent after treatment (22 of 71), and preferring adding after treatment (24 of 71), with bars for the low add ($2)/high type ($12), no anchor, and high add ($12)/low type ($2) treatments.]
Figure 19.1 gives a graphical representation of these results. It is apparent that there are large differences in the relative proportion of those preferring each task by anchoring treatment. Pair-wise tests of differences in proportions support the assertion that anchoring significantly influences the ranking of the adding and typing tasks on average. The proportion of subjects indicating a post-treatment preference for the typing task is significantly higher for subjects receiving the high anchor for typing than for those receiving the low anchor for typing (p = 0.002). Similarly, the proportion of subjects preferring the adding task, post-treatment, is significantly higher for those that received the high adding anchor than for those receiving the low adding anchor (p = 0.002). The proportion of indifferent subjects does not vary significantly by treatment. Interestingly, the adding anchor appears to have no statistically significant effect relative to the no anchor treatment.4

Reinforcing the findings of Ariely et al., who find that on average a single anchor can affect the location of preferences for a single good (preferences are arbitrary), but not the relative ranking to other goods (i.e. preferences are coherent, even in the presence of multiple anchors), we find that multiple anchors can on average affect the relative ranking of goods.

Of potentially more significance would be a demonstration that participants could be induced to reverse rank orderings of two goods internally. To investigate this, we look at the incidence of preference reversals within individuals. Of the 71 participants in the first experiment, 21 (30 percent) reversed their preferences for the two tasks from pre- to post-treatment. Of those, 19 received a (randomly assigned) anchoring treatment that provided anchors counter to their pre-treatment preferences. Anchoring opposite to the pre-treatment preferences results in a 23 percent (and statistically significant) increase in the probability of
Table 19.3 Probit results on within-subject preference reversals, experiment 1: dependent variable = 1 if preferences reverse, 0 otherwise

Variable                                                   Coefficient (p-value)
Constant                                                   –0.765 (0.020)
Adding preferred pre-treatment/counter-balancing anchors   –0.303 (0.652)
Typing preferred pre-treatment/counter-balancing anchors    0.861 (0.036)
Reinforcing anchors                                        –0.517 (0.306)

Note: Log likelihood = –36.85, Chi-squared = 12.52, observations = 71.
a preference reversal relative to the no anchor treatment, while reinforcing anchors do not significantly affect the probability of reversal relative to the no anchor treatment.

Table 19.3 summarizes probit results on within-subject reversals. The dependent variable is a binary indicator for within-subject preference reversals. Independent variables are indicators for counter-balancing anchors by pre-treatment preference group, and an indicator for a preference reinforcing anchoring treatment.5 The omitted category is the no anchor treatment. For those with a pre-treatment preference for the typing task, counter-balancing anchors induce a statistically significant increase (32 percent) in the probability of preference reversal relative to the no anchor treatment. The effects of reinforcing anchors, or counter-balancing anchors for subjects with a pre-treatment preference for the adding task, produced no significant change in the probability of reversal relative to the no anchor treatment.

Similar to the preference over gambles literature, it appears possible that an asymmetry exists in the preference reversals over the typing and adding tasks. Those that prefer the adding task pre-treatment appear to have well-defined and invariant preferences and are therefore uninfluenced by anchors opposite their preferences. On the other hand, those that prefer typing pre-treatment seem to have more malleable preferences and are significantly more likely to reverse preferences in the presence of counter-balancing anchors.
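Probit specifications like the one in Table 19.3 are straightforward to estimate; a sketch, assuming one row per subject with mutually exclusive treatment indicators (the variable names and the randomly generated data are ours, purely for illustration):

```python
# Probit of a reversal indicator on treatment dummies, in the spirit of
# Table 19.3. All data below are random placeholders, not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 71
group = rng.choice(["no_anchor", "add_counter", "type_counter", "reinforce"], n)
X = sm.add_constant(np.column_stack([
    (group == "add_counter").astype(float),   # adding preferred, counter anchors
    (group == "type_counter").astype(float),  # typing preferred, counter anchors
    (group == "reinforce").astype(float),     # reinforcing anchors
]))
reversal = rng.integers(0, 2, n)              # placeholder outcome variable
fit = sm.Probit(reversal, X).fit(disp=0)
print(fit.params)                             # constant and three coefficients
print(fit.llf)                                # log likelihood, as in the note
```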
A second experiment

The asymmetry anomaly is surprising in a static setting where anchoring, a leading explanation for asymmetry in GR, is controlled for within the experimental design. To provide further verification of the asymmetry, we conduct a second experiment with slight modifications from the first. Because the pre-treatment ranking in experiment 1 requires the subject to produce a complete ranking of five tasks, it is possible that the ranking of the additional three tasks distracts the subject from ranking the two tasks of interest: typing and adding. Further, a possible source of confusion in experiment 1 is the use of the wording "wage demanded." It is possible that the responses to the value elicitation questions
were based on the expected hourly wage for performing the tasks rather than the piece-meal rate for the two tasks as we anticipated.6 To circumvent these potential problems in experiment 2, we phrase all questions in terms of the "payment demanded," and we repeatedly remind subjects that we are looking for lump-sum payments and not hourly wages. Further, the pre-treatment ranking from experiment 2 asks subjects to rank directly just the typing and adding tasks in terms of which task would demand a higher payment: "If we were to pay you to either add 175 pairs of numbers, or type 475 words, which task would we have to pay you more to perform?"7 Finally, we introduce reminders to subjects before the pre-treatment ranking, and before the value elicitation questions, that their payment demanded responses will indicate to us a preference for one task or the other: "We believe that if you prefer a task, you would be willing to accept a lower payment for that task. That is, we would have to pay you more to perform a less preferred task." Because we are interested in testing the asymmetry of preference reversals, we modify the anchoring treatments from experiment 1 to be conditional on the pre-treatment ranking. Following the pre-treatment ranking of the two tasks, 70 percent of the sample was assigned to one of two treatment groups while the remaining subjects received no anchoring treatment. Subjects assigned to an anchor treatment receive anchors designed to induce preference reversals, i.e. those who prefer typing (PT) receive a $12 anchor for typing and a $2 anchor for adding, while those who prefer adding (PA) receive a $12 anchor for adding and a $2 anchor for typing.

Experiment 2 results

A total of 58 subjects participated in experiment 2. Of those, 18 indicated a pre-treatment preference for the adding task (15 were assigned to the anchor group), and 40 indicated a pre-treatment preference for typing (26 assigned to the anchor group). After the anchoring treatment and value elicitations, 15 preference reversals (26 percent of subjects, compared to 30 percent in experiment 1) were observed. Thirteen reversals were in one of the two anchored groups: ten from the pre-treatment typing group that was anchored, and three from the pre-treatment adding group that was anchored. Of the two unanchored reversals, one preferred typing and one preferred adding pre-treatment. Table 19.4 reports the results of a probit model of experiment 2 reversals (1 = reversal). Independent variables include indicator variables for pre-treatment preference for adding, pre-treatment preference for typing, and the same two indicators crossed with anchoring treatment. The significant coefficient of the anchor treatment in the typing preferred group indicates that those who prefer typing in the pre-treatment can be induced to prefer adding in the value elicitation by anchoring opposite to their initial ordering. The same cannot be said for those who preferred adding in the pre-treatment. The insignificant anchoring effect indicates that those who prefer adding in the pre-treatment are not induced by the counter-balancing anchors to
indicate a preference for typing in the post-treatment rankings. Experiment 2 reinforces the findings of experiment 1: counter-balancing anchors can induce rank reversals for the typing task. However, the result is asymmetric. Those with a pre-treatment preference for the adding task appear to have less manipulable preferences over the two tasks.

Table 19.4 Probit results on within-subject preference reversals, experiment 2: dependent variable = 1 if preferences reverse, 0 otherwise

                                                              Coefficient (p-value)
Adding preferred pre-treatment (a)                            –0.431 (0.565)
Typing preferred pre-treatment (a)                            –1.465 (0.004)
Adding preferred pre-treatment and counter-balanced anchors   –0.411 (0.622)
Typing preferred pre-treatment and counter-balanced anchors    1.172 (0.037)

Notes: Log likelihood = –30.34, Chi-squared = 5.63, observations = 58.
(a) With no constant, the default categories are pre-treatment preference for each task with no anchors. No reinforced anchors were offered.
Discussion and conclusions

Complementary to, and perhaps more damning than, the findings of preference reversals in preferences over uncertain gambles, we find the existence of rank reversals in preferences over simple tasks with certain outcomes. Subjects who are anchored counter to their initial preference ordering during a subsequent bidding elicitation show a tendency to reverse those preferences in the direction of the anchors. An asymmetry arises in the static preference reversals, as one self-selected group – those who prefer adding to typing in the joint preference elicitation – is rarely manipulated to reverse its preference via an anchoring mechanism.

Several common explanations for such asymmetries exist in the GR literature (anchoring theory, prominence theory, compatibility theory). Subjects facing preference elicitations demarcated in dollars rely upon other dollar-demarcated information within the problem, such as the value of payouts in a lottery, but may rely upon other attributes (e.g. probability of winning) when facing other preference elicitation modes (choice). We control for such possibilities in our experiments in a way that no asymmetry should arise, and yet it does. We hypothesize that, instead of arising from heterogeneity in the method by which preferences are elicited, which has been the focus of much of the preference reversal literature, preference reversals may be a simple issue of sample selection that arises from heterogeneity across individuals. Even when studies feature extensive experimental control, no research design can assign a subject's first rank ordering of preferences. Rather, it is always self-selected, regardless of the mode of preference elicitation. Subsets of subjects immune to preference reversals may simply self-select into the $-bet preference in the GR literature or choose adding in our study.
Alternatively, heterogeneity with regard to intensity of preferences may lead to reversal asymmetries: those who prefer the $-bet and the adding task may simply have more intense, less malleable preferences for these particular goods and, hence, are less likely to reverse themselves. Our data lend some credence to this explanation. In both experiments, those who rank adding higher than typing pre-treatment exhibit an average absolute difference in willingness to accept that is approximately $4.00 larger than that of those who rank typing above adding, after controlling for anchoring treatments. Among the non-anchored subjects, the average difference between the payment demanded for adding and the payment demanded for typing is $3.91 higher for those preferring adding pre-treatment in experiment 1 and $4.09 for those preferring adding pre-treatment in experiment 2. Both differences are significant at the 90 percent level, indicating mild support for the hypothesis that, in the absence of anchoring, those preferring the adding task have more intense preferences for their preferred task than those preferring the typing task.

It is clear, however, that much work remains before we understand the causes and implications of rank instability of preferences. Our ability to use simple anchoring mechanisms to induce within-subject preference reversals in an economically meaningful, static, simple choice setting further heightens the need for economists to reconsider the wisdom of treating preferences as stable in either a cardinal or an ordinal sense, even in choice settings that are not encumbered with the intricacies of probabilistic outcomes, time discounting or issues of interpersonal utility. We restate with amplification many of the concerns raised by others who question the dogma of stable, coherent sets of preferences (Grether and Plott, 1979; Tversky and Thaler, 1990; Seidl, 2002). How does the core of microeconomic theory operate if preferences are, minimally, highly malleable and, maximally, constructed on the fly as people are bombarded with key economic and institutional stimuli inherent in the decisions that interest economists? A partial list of disturbing questions includes: which set of preferences is correct – those elicited via direct choice among alternatives or those inferred from transacted prices? Is a market mechanism that expends $X to match a marginal buyer to a marginal seller more or less efficient than one that expends $Y < $X to shape the preferences of a proximate buyer and seller such that they are both at the margin? Which preference elicitation mechanism should be used when gathering information that will shape public policy? Do we adopt a policy if those who gain under that policy can use their windfall to shape the preferences of those who lose such that the policy passes the potential compensation principle?

Further experimental work to investigate the robustness of our results is warranted. For example, unlike the preference reversal over gambles literature (e.g. Cox and Grether, 1996), our experimental work has not introduced repetition of the elicitation mechanisms into the design to see if increased familiarity with the choice and bidding procedures limits either the extent or asymmetry of reversals. We leave this and other design augmentations to future work and would not be surprised if the asymmetry we uncover in static preference reversals follows the
course of the asymmetries uncovered in preference reversals in gambles: they remain robust in many settings but, with enough market feedback and experience, they shrink to the boundary of significance.
Acknowledgments We want to thank Jason Eslick for extensive help in programming the experiments contained herein. We also thank John Kagel and Kerry Smith for helpful comments on earlier drafts.
Notes
1 The exception from the SR literature is Ariely et al., who collect multiple observations per subject but only report averages across treatment groups. Even if their data were analyzed for within-subject reversals, it is unlikely that such inconsistencies would arise, as this was not the primary focus of their research.
2 Also, Peter Bohm et al. (1997) show that failing to describe the distribution helps alleviate some of the critiques lodged against the BDM procedure and yields bids equivalent to those elicited during a second-price auction mechanism.
3 If the payment demanded for the adding task is strictly less than the payment demanded for the typing task, subjects are classified as preferring adding, and vice versa. If the two payments demanded are identical then they are classified as indifferent.
4 The results of all of the pair-wise tests are available on request from the authors.
5 We assume that the effect of reinforcing anchors on the probability of preference reversal is the same for those preferring typing and those preferring adding. Because of the small number of subjects preferring adding pre-treatment, it was not possible to identify separately the effects of reinforcing anchors by pre-treatment preference.
6 Although, when responses to experiment 1 are converted to an hourly wage rather than a piece-meal wage, the reversal results are qualitatively identical.
7 We reduced the number of words to be typed from 500 in experiment 1 to 475 in experiment 2 based upon average times recorded in experiment 1, such that both tasks would take about 15 minutes to complete.
References
Ariely, D., Loewenstein, G. and Prelec, D., 2003. "Coherent Arbitrariness": Stable Demand Curves without Stable Preferences. Quarterly Journal of Economics, 118 (2), 73–105.
Bohm, P., Lindén, J. and Sonnegård, J., 1997. Eliciting Reservation Prices: Becker–DeGroot–Marschak Mechanisms vs. Markets. Economic Journal, 107 (443), 1079–89.
Cox, J. and Grether, D., 1996. The Preference Reversal Phenomenon: Response Mode, Markets and Incentives. Economic Theory, 7 (3), 381–405.
Grether, D. and Plott, C., 1979. Economic Theory of Choice and the Preference Reversal Phenomenon. American Economic Review, 69 (4), 623–38.
Hsee, C., Loewenstein, G., Blount, S. and Bazerman, M., 1999. Preference Reversals Between Joint and Separate Evaluations of Options: A Review and Theoretical Analysis. Psychological Bulletin, 125 (5), 576–90.
Kahneman, D., Knetsch, J. and Thaler, R., 1991. The Endowment Effect, Loss Aversion, and Status Quo Bias: Anomalies. Journal of Economic Perspectives, 5 (1), 193–206.
List, J., 2002. Preference Reversals of a Different Kind: The "More is Less" Phenomenon. American Economic Review, 92 (5), 1636–43.
Schkade, D. and Johnson, E., 1989. Cognitive Processes in Preference Reversals. Organizational Behavior and Human Decision Processes, 44 (2), 203–31.
Seidl, C., 2002. Preference Reversal. Journal of Economic Surveys, 16 (5), 621–55.
Shogren, J., 2005. Valuation in the Lab. Unpublished draft manuscript, University of Wyoming Department of Economics.
Slovic, P., Griffin, D. and Tversky, A., 1990. Compatibility Effects in Judgment and Choice. In: Hogarth, Robin M., ed., Insights in Decision Making: Theory and Applications. Chicago: University of Chicago Press, pp. 5–27.
Tversky, A. and Thaler, R., 1990. Anomalies: Preference Reversals. Journal of Economic Perspectives, 4 (2), 201–11.
20 Measuring preferences for genetically modified food products

Charles Noussair, Stephane Robin, and Bernard Ruffieux
Introduction

The introduction of genetically modified organisms (GMOs) into food products has been a major political issue for over a decade in many parts of the world. Regulatory authorities such as the FSA in the United Kingdom, the FDA in the United States, and the DGAL in France, on the basis of recommendations from the scientific community, have recognized that the GMO products currently available are safe for the consumer and the environment. Moreover, there is a consensus among scientists that biotechnology has the potential to create products that will enhance nutrition, increase crop yields, and reduce the use of toxic pesticides and herbicides. However, polling of consumers consistently indicates a high degree of hostility to the presence of GMOs in the food supply. For example, Noussair et al. (2001) report that 79 percent of French respondents either agreed or mostly agreed with the statement "GMOs should simply be banned". A total of 89 percent were opposed to the presence of GMOs in food products, 89 percent in livestock feed, 86 percent in medicine, 46 percent in food packaging, and 46 percent in fuels. In the UK, surveys show a similar pattern. Moon and Balasubramanian (2001) report the results of a survey of 2,600 consumers in the UK. Of these, 38 percent indicated that they were in support of agrobiotechnology and 46 percent were opposed. A poll of Americans conducted by ABC News in June 2001 found that 35 percent believed that GM foods were safe to eat, while 52 percent believed that they were not. The results of the fifth Eurobarometer survey on biotechnology and the life sciences (Gaskell et al., 2003) indicate that a majority of Europeans would not buy or eat GM foods. Between 30 percent and 65 percent of the respondents in every EU country reject every reason for buying GM foods listed in the survey. Greece, Ireland, and France are the countries in which the highest percentage of respondents rejects GM foods. Survey responses indicate that aversion to GMOs is based on both private considerations, such as potential health risk and a preference for natural foods, and social dimensions, such as environmental effects and ethical concerns. The unfavorable view has been exacerbated by the spread of
the "mad cow" epidemic, the lack of benefit that the first generation of GMOs provides to the consumer, and the initial introduction of GMOs without the public's knowledge.

The dichotomy between scientific recommendations and public opinion has complicated the formulation of government policy with respect to GMOs, since in a democratic system public opinion must be taken into account in addition to the scientific merits of the policy and the market pressures in the economy. However, there is reason to question whether the anti-GMO sentiment expressed in surveys would be reflected in actual purchase behavior. It is known that individuals' decisions can differ drastically depending on whether they are hypothetical, as in a contingent valuation study or other survey, or involve a real commitment to purchase (see for example Neill et al. 1994; Cummings et al. 1995; Brookshire and Coursey, 1987; List and Shogren, 1998; or List and Gallet, 2001). Furthermore, most surveys do not inquire about actual purchase decisions at specific prices, while contextual cues or small changes in information provided to survey respondents may change results dramatically (Ajzen et al., 1996). Surveys about preferences over public goods, such as the preservation of GMO-free crops, may be particularly suspect. Sagoff (1988), Blamey et al. (1995), and Nyborg (2000) argue that survey and hypothetical contingent valuation measurement techniques for public goods do not accurately reveal participants' willingness to pay. Surveys place respondents in the role of citizens, who make judgments from society's point of view, rather than consumers, who make actual purchase decisions. Thus the two instruments, surveys and purchase decisions, measure different variables. In addition, even if provision or preservation of a public good is valuable to an individual, it may not be reflected in his willingness to pay because of the free rider problem (Stevens et al., 1991; Krutilla, 1967).

A well-documented example of a dichotomy between surveys and consumer behavior was observed during the introduction of recombinant bovine somatotropin (rbST), a bovine growth hormone, into milk production in the United States in 1993. Surveys indicated that a majority of consumers had a negative opinion of the technique, primarily on ethical grounds. On the basis of the survey data, analysts predicted a 20 percent decline in total milk consumption. However, there was no decrease in actual milk consumption after the introduction of the technique (Aldrich and Blisard, 1998).

The focus of the first study described in this chapter (Noussair et al., 2004b) is to consider, using experimental methods, the extent to which actual decisions to purchase food products are affected by the presence of GMOs. We study the purchasing behavior of consumers using a laboratory experiment designed to elicit and compare the willingness to pay for products that are traditional in content and labeling, that are explicitly guaranteed to be GMO free, and that contain GMOs. We also consider buyer behavior with respect to different thresholds of maximum GMO content. The second study surveyed here (Noussair et al., 2002) is motivated by the fact that, despite the hostility toward GMOs that is ubiquitous in survey data,
sales do not decrease when the label reveals that the product contains GMOs, for those few GM products that have been put on the European market, where GM content must be indicated on the product label. We use an experiment to consider whether the absence of a reaction in demand to the current labeling of products is due to the fact that most customers do not notice the labeling, and thus do not realize that the product they are purchasing contains GMOs. The experimental approach is particularly appealing here because of the absence of field data. The current policy of most major European retailers not to carry GM foods, which has resulted from pressure from activists and the media, means that it is very difficult to estimate product demand for foods containing GMOs using field data from European countries. For the few GM products that are available, there is reason to believe that consumers are unaware of the labeling of GM content. Furthermore, in the US, where the vast majority of GM food is sold, demand for GMOs cannot be inferred from market data since GM content is not indicated on the labeling. We are unaware of any previous estimates of consumer demand for the GMO-free characteristic in food products other than those obtained from experimental studies (see Lusk et al., 2001; Huffman et al., 2001; Lusk et al., 2005). However, previous work (see for example Shogren et al., 1999) suggests that experiments provide a good alternative method to study product demand in general, and that the artificial setting of the laboratory does not drastically alter consumer behavior. Moreover, the experimental methodology provides an environment in which to measure individual preferences while controlling for noise and other confounding factors. In particular, researchers in the laboratory are able to control precisely the information communicated about product characteristics, which is not possible in the field.
Background

Policy issues: segregation and thresholds

In response to the tension between scientific and public opinion on the issue of GM foods, the policy adopted by most European governments has been to declare a moratorium on approval of new GM products for cultivation and sale. For the few products that have already been approved, their policy has been to segregate GM and GMO-free products at all stages of production, to require labeling of products containing GMOs, and to allow the market to determine how much of each type of product is sold. Any food product sold in the European Union for human consumption that contains an ingredient that consists of more than 0.9 percent GMOs must be labeled "contains GMOs". There is no GM produce currently sold in Europe and the only GM products for sale appear as ingredients in processed foods. Currently in France, three types of corn are authorized for cultivation. One type of corn and one type of soybean are authorized for importation. In the UK, in addition to corn and soybeans, one type of GM tomato is authorized for importation and used in tomato puree. No GM crops are grown commercially in the UK. In contrast, in the United States, as of early
2002, about two dozen different GM fruits, vegetables, and grains were being cultivated. In the US, there are no specific regulations for biotech products, which are subject to the same regulations as other products. See Caswell (1998, 2000) for a discussion of policy issues relating to the labeling of GM products.

In Europe, when a product is classified under current law as containing GMOs, it must carry in its list of ingredients the statement "produced from genetically modified...". A note at the end of the list of ingredients, specifying the genetically modified origin, is also considered sufficient, as long as it is easily legible. The size of the letters must be at least as large as those in the list of ingredients. To counter the incentive of producers to make the labels indicating their products' positive characteristics as prominent as possible and those revealing unfavorable characteristics as discreet as possible, regulators have imposed strict conditions on the size, color, and positioning of information on packaging.

Although the current policy of segregation and mandatory labeling is free-market oriented in that it offers consumers a choice, some economists might view it as an inefficient policy. Segregating the entire process of production is costly to farmers and firms throughout the production chain, especially in the upstream part of the chain, which consists of the seed producers, farmers, and primary processors. In the United States, according to the US Department of Agriculture, segregation costs have been estimated at 12 percent of the price of corn and 11 percent of the price of soybeans in the year 2000. Buckwell et al. (1999) find that, in general, identity preservation for specialty crops increases final costs by between 5 percent and 15 percent. Since there is no hard evidence that the GMOs that regulatory authorities have approved are harmful either to health or to the environment, it can be argued that the expenditure represents a deadweight loss.

The two main alternatives to this segregation of the market are (a) to ban GM varieties entirely, or (b) to ban labeling, effectively making GM and non-GM varieties indistinguishable from each other from the viewpoint of a consumer. Both of these policies have potential downsides. Banning new GMOs may be inefficient if there are welfare gains from the adoption of biotechnology that are foregone. Indeed, the studies that have estimated the gains from the adoption of biotechnology in farming in the United States have found them to be considerable. Anderson et al. (2000) estimate the gains from the introduction of biotechnology at $1.7 billion per year for cotton, $6.2 billion per year for rice, and $9.9 billion per year for coarse grains. For soybeans grown in the American Midwest, savings to farmers from the adoption of herbicide tolerant soybeans have been estimated at $60 million annually (Lin et al., 2000, 2001). Lin et al. also estimate that the $60 million in savings constituted 20 percent of the overall welfare gain to all parties (US farmers, rest of world farmers and consumers, the gene producers, and the seed companies). Falck-Zepeda et al. (2000) estimate that the welfare gains from the adoption of Bt corn in the US for the year 1996 equaled $240.3 million. Of this total, 59 percent went to US farmers. The gene developer received 21 percent, US consumers 9 percent, producers and consumers in other
countries 6 percent, and the seed producer 5 percent. Traxler et al. (2000) find that the surplus from the use of Roundup Ready soybeans in the US in 1997 was distributed in the following manner: 50 percent to US farmers, 8 percent to US consumers, 22 percent to the gene developer, 9 percent to the seed companies, and 12 percent to foreign consumers and producers. Lence and Hayes (2001), using simulation techniques, provide estimates of potential welfare gains and anticipated costs for the United States from the cultivation of GM crops. Under many of their parameterizations, both overall consumer and producer welfare is greater after the introduction of GM technology.

On the other hand, if the production tracks are not segregated or labeling of GMO content is prohibited, as it is in the United States, a "lemons" scenario may result (Akerlof, 1970). The GMOs currently on the market were introduced for agronomic reasons and the foods containing them are indistinguishable from conventional foods to the consumer in the absence of labeling information. Since GMOs have lower production costs, producers have an incentive to insert them into the food supply. If consumers value foods containing GMOs less than foods that do not contain GMOs, they will be unwilling to pay more for an unlabeled product than an amount that reflects the presence of GMOs. This would cause the market for non-GMO varieties to disappear, reducing social welfare by eliminating potential gains from trade. Furthermore, it could potentially cause a market collapse for entire products. If a firm cannot disclose that its product uses no ingredients that contain GMOs, it might replace ingredients that consumers believe may contain GMOs with those that cannot contain GMOs. This could eliminate the entire market for many products, such as soy lecithin, corn syrup, and cornstarch.

From an economist's point of view, the appropriate policy depends in part on the relative sizes of consumer and producer surpluses and the costs of implementing different policies. The surplus calculation hinges on whether the actual purchase behavior of consumers corresponds to the polling data. If, as suggested by the polls, a large majority of consumers is unwilling to purchase products containing GMOs, banning GMOs is probably the best option, as the expense of creating two tracks of production would not be justified. On the other hand, if the large majority of consumers behave as if they are indifferent to GMOs, or would purchase products made with GMOs if they sold at lower prices, the production tracks could be safely integrated with little social cost. However, if a considerable segment of the market refuses to purchase products containing GMOs at any price, but another large segment would purchase GM products if they were cheaper, separation of the production tracks and the enforcement of mandatory labeling of products containing GMOs would be worth the expense.

Under a policy of segregation, the threshold level of GMO content, above which a product is considered to be bioengineered, must be specified. Because of the ease of contamination throughout the production chain, it is impossible to intentionally make any product, in whose manufacture GMOs are already authorized, without any trace of GMOs. This technological constraint requires the specification of a threshold above zero, below which a product is to be considered as
GMO free, and above which the product must be labeled as containing GMOs. The lower the threshold, the greater is the cost of production of GMO-free products. The increase involves the cost of producing very pure seeds, isolating parcels of land, and cleaning storage and transportation containers. The marginal cost of lowering the threshold may be justified if consumers have a strong preference for a low threshold.

A methodological issue: how to measure willingness to pay?

Willingness to pay information is typically elicited with a demand revealing bidding mechanism, such as the Vickrey auction or the Becker–DeGroot–Marschak mechanism, both described later in this chapter. The dominant strategy of truthful bidding and the commitment of real money create an incentive to truthfully reveal limit prices, regardless of the risk attitude of the bidder and the strategies other participants use. A demand revealing auction has the advantage over the study of purchase decisions with field data that it allows an individual's limit price to be measured directly. Observing only whether or not an individual purchases a product at the current market price in a store merely establishes whether or not his limit price exceeds the current market price. Accurate willingness to pay information is particularly useful for new products because other sources of demand estimates on which to base profit or cost-benefit calculations are not readily available. Experimental economists have employed demand revealing auctions to study limit prices for goods as varied as consumer products (see for example Hoffman et al., 1993; Bohm et al., 1997; List and Shogren, 1998; and List and Lucking-Reiley, 2000), food safety (Hayes et al., 1995; Fox et al., 1998; Buzby et al., 1998; Huffman et al., 2001; and Lusk et al., 2001), and lotteries (Grether and Plott, 1979; Cox and Grether, 1996).

In this research, we use the second-price sealed-bid auction, also called the Vickrey auction (Vickrey, 1961), and the Becker–DeGroot–Marschak (BDM) mechanism (Becker et al., 1964). In a second-price sealed-bid auction, each member of a group of potential buyers simultaneously submits a bid to purchase a good. The agent who submits the highest bid wins the auction and receives the item, but pays an amount equal to the second highest bid among the bidders in the auction. In a BDM, each subject simultaneously submits an offer price to purchase a good. Afterwards, a sale price is randomly drawn from a distribution of prices with support on an interval from zero to a price greater than the anticipated maximum possible willingness to pay among bidders. Any bidder who submits a bid greater than the sale price receives a unit of the good and pays an amount equal to the sale price.

There is a substantial literature studying the behavior of the two mechanisms in the laboratory when university student subjects are bidding for goods. Some of this research has used the technique of induced values (see Smith, 1982, for an exposition of induced value theory) to create limit prices for fictitious goods. The experimenter offers a guarantee that bidders can resell goods
at prices that are specified in advance, should they purchase the items in the auction. Several authors, including Coppinger et al. (1980), Cox et al. (1982), Kagel et al. (1987) and Kagel and Levin (1993), have studied the behavior of the Vickrey auction, and other authors, including Irwin et al. (1998) and Keller et al. (1993), have studied the BDM process using goods with induced values. These studies reach a variety of conclusions about bids relative to valuations, and some suggest that average bids are biased away from valuations. For example, Kagel et al. (1987) and Kagel and Levin (1993) find that most winning bids in the Vickrey auction are higher than valuations. Irwin et al. (1998) find that the BDM process is more successful at eliciting true valuations for certain distributions of sale prices than others. Furthermore, all of the studies show that there is heterogeneity in bidding behavior that leads to a dispersion of bids relative to valuations. In the case of auctions for goods with home-grown (and therefore unobservable) valuations, such as consumer products, the evidence that bids tend to differ from valuations is indirect. Bohm et al. (1997) find that bids in the BDM are sensitive to the choice of end points of the distribution of possible transaction prices. List and Shogren (1999) find that bids in the Vickrey auction tend to increase as the auction is repeated, which suggests a bias in bidding either in the early or in the late periods. Rutström (1998) finds that the two mechanisms generate different mean bids for the same objects, indicating that at least one of the two must be biased. The research we have conducted on preferences for GM products allows us to compare these auctions within a similar environment. We evaluate and compare the auctions according to the following criteria. (1) Do either or both of the systems contain a bias toward under- or over-revelation of true valuations? (2) Under which system do individuals on average bid closer to their true valuations? (3) Under which system is convergence by repetition toward demand revelation, if it occurs, more rapid? We pose these questions under specific conditions, when the bidders are a diverse sample of the population, when the goods considered have induced valuations, and when specific training procedures are in effect that our experience and intuition suggest would enhance the demand revelation performance of the mechanisms (see Noussair et al., 2004c, for more detail).
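Before turning to the methodology, the two payoff rules just described can be made concrete in a short sketch. This is a minimal illustration in Python; the example bids, the tie-breaking rule, and the uniform sale-price distribution are assumptions for the sketch, not details from the chapter.

```python
import random

def vickrey(bids):
    """Second-price sealed-bid auction: the highest bidder wins and
    pays the second-highest bid. Ties go to the earlier bidder (an
    assumption). Returns (winner index, price paid)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

def bdm(bids, price_cap):
    """Becker-DeGroot-Marschak: a sale price is drawn uniformly on
    [0, price_cap], with the cap set above any plausible valuation.
    Every bidder whose bid exceeds the drawn price buys one unit at
    that price. Returns (list of buyer indices, sale price)."""
    price = random.uniform(0.0, price_cap)
    return [i for i, b in enumerate(bids) if b > price], price

bids = [4.0, 7.5, 6.2]
print(vickrey(bids))    # -> (1, 6.2): bidder 1 wins and pays 6.2
print(bdm(bids, 10.0))  # e.g. -> ([1, 2], 5.31), depending on the draw

# In both mechanisms the price a buyer pays never depends on her own
# bid, which is why truthfully bidding one's limit price is a
# dominant strategy.
```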
Methodology

The participants

The participants in the experiments were a demographically representative sample of 209 consumers in the Grenoble, France area. A total of 97 subjects participated in experiment 1, and 112 participated in experiment 2. The two experiments comprised 26 sessions, and each session took approximately 2 hours. The ages of the subjects ranged between 18 and 75 years, and averaged 33 years. Of these, 53 percent were female. The socio-economic level of the sample was representative of the French urban population. At the time of
recruitment, subjects were invited to come to the laboratory to sample food products for a government research project. Only individuals who made the food purchasing decisions in the household were permitted to participate. We recruited only individuals who were regular consumers of the products we used in the experiment. At the time of recruitment, subjects received no indication that the experiment concerned GMOs or potential risks to the food supply.

The BDM mechanism and the Vickrey auction

In experiment 1, we used the BDM mechanism to elicit willingness to pay information. As mentioned earlier, in the BDM there is an optimal strategy for a bidder to bid his valuation, regardless of his risk attitude. Therefore in principle, the mechanism has the ability to reveal bidders' valuations. The rules of the BDM mechanism are simple. Each subject simultaneously submits a bid to the experimenter in a closed envelope, indicating a price at which he offers to purchase one unit of the good offered for sale. The experimenter then randomly draws a sale price from a pre-specified interval, from zero to a price greater than the maximum possible willingness to pay among bidders. Any subject who submits a bid greater than the sale price receives an item and pays an amount equal to the sale price. The others do not receive units and make no payment.

In experiment 2, we used Vickrey auctions to elicit willingness to pay information. In a Vickrey auction, each subject simultaneously submits a bid to purchase a good. No communication between subjects is allowed during the bidding process. The agent who submits the highest bid wins the auction, and pays an amount equal to the second highest bid among the bidders in the auction. The other bidders do not receive items and pay zero. Each bidder has a dominant strategy to bid truthfully an amount equal to his willingness to pay (Vickrey, 1961).

The training phase

Both experiments 1 and 2 began with a training phase to help subjects learn to use the dominant strategy for the mechanism employed. This training was similar for each mechanism and proceeded in the following manner. At the beginning of a session, each subject received 100 francs (roughly US$14) in cash. Subjects then participated in several BDM or Vickrey auctions, depending on which mechanism was in effect for the session, in which they bid for fictitious items. The fictitious items had induced values. Before the auction took place, each subject received a sheet of paper that indicated an amount of money, for which he could redeem a unit of the fictitious item from the experimenter, should he purchase it in the auction. The induced value differed from subject to subject and was private information. The ability to redeem an item from the experimenter induced a limit price in the auction, since a subject's payoff if he won the auction equaled the induced value minus the price he paid. The
inclusion of the auctions with induced values had three objectives: (a) to teach the subjects, and verify their comprehension of, the rules of the auction, (b) to reduce the biases and noise that tend to arise in bidding behavior, and (c) to show subjects that the auction involved transactions where real money was at stake.

The dominant strategy of bidding one's valuation in the auctions is not at first obvious to most subjects. We chose not to inform the subjects directly of the dominant strategy. Instead, we used a technique intended to encourage subjects to come to understand on their own the strategies that constitute optimal behavior. After subjects submitted their bids (and the experimenter drew a selling price in the case of the BDM), the experimenter wrote all of the valuations on the blackboard, and asked subjects if they could identify their own valuations and predict which subjects would be receiving units of the good based on the valuations displayed. Then the experimenter recorded the submitted bids on the blackboard next to the corresponding valuations. He posed the following questions to the group of subjects, who were free to engage in open discussion on the topics: (a) which subjects received units in the auction? (b) how much did the winners pay? (c) did anyone regret the bid he submitted? After the discussion, each of the winners received, in full view of all participants, an amount of money equal to his induced value minus the price he was required to pay. The cash was physically placed on the desk in front of the subject after the auction. A series of identical auctions was conducted using the same procedure. The valuations in each period were randomly drawn from a uniform distribution whose end points differed in each period. The auctions continued until at least 80 percent of the bids were within 5 percent of valuations.

We ended the training phase of each session with an auction of an actual consumer product, a bottle of wine, the label of which was visible. After bidding, all of the bids were posted, but there was no discussion as in the earlier induced value auctions. However, as in the induced value auctions, the sale price was drawn, the winners were announced, and the transactions were implemented immediately. There are two reasons that we added this auction to the training phase. The first reason is that it made subjects aware that others' valuations for goods could differ from their own. The second reason was to provide an easier transition into the next phase of the experiment, where subjects would be placed in a situation that differs in three ways from typical market purchases: they buy products with labels and packaging already removed, they taste products without knowledge of the information displayed on the label, and they buy products without knowing the sale price beforehand. The auction for the actual consumer product also serves to illustrate to subjects that they are spending real money for real products that they can keep after the experiment, and that they are not in a hypothetical simulation. To render this transparent, a bottle of wine is given to each winner, who is required to pay immediately the price determined in the auction from his current cash total.
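A minimal sketch of the training-phase loop described above: induced-value rounds repeat until at least 80 percent of bids fall within 5 percent of valuations. The stopping criterion follows the text; the bid-generating process, in which bids tighten around valuations with experience, is a placeholder assumption.

```python
import random

def share_within_tolerance(bids, values, tol=0.05):
    """Share of bids within tol (here 5 percent) of the induced values."""
    hits = [abs(b - v) <= tol * v for b, v in zip(bids, values)]
    return sum(hits) / len(hits)

def training_round(n_subjects, low, high, round_no):
    """One induced-value round: valuations drawn uniformly on [low, high].
    A winner's payoff is the induced value minus the price paid; here we
    only model bids, which (by assumption) converge toward valuations."""
    values = [random.uniform(low, high) for _ in range(n_subjects)]
    spread = 0.30 / round_no  # placeholder learning process
    bids = [v * random.uniform(1 - spread, 1 + spread) for v in values]
    return bids, values

round_no, share = 0, 0.0
while share < 0.80:  # stopping rule from the text
    round_no += 1
    bids, values = training_round(8, low=20, high=80, round_no=round_no)
    share = share_within_tolerance(bids, values)
    print(f"round {round_no}: {share:.0%} of bids within 5% of valuations")
```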
Experiment 1: GMO phase

After the training phase of each session described above, the GMO phase of the session, the phase of primary interest, was conducted. In the GMO phase, we simultaneously auctioned four products, which we referred to as S, L, C, and N during the sessions. All four products were biscuits that are typically available in grocery stores and supermarkets throughout France, and we informed subjects of that fact before bidding began. The products were different from each other, but were close substitutes. The GMO phase of the experiment consisted of five periods, as outlined in Table 20.1. At the beginning of this phase, subjects received a sample of each of the four products to taste, without its packaging or labeling. Before bidding in the first period, subjects were required to taste each product. They then indicated how much they liked the product on a scale where "I like it very much" and "I don't like it at all" were at the extremes of the rating scale (see Combris et al., 1997, or Noussair et al., 2004a). Then the auction for period 1 took place. The four products were auctioned simultaneously. Each of the following periods consisted of the revelation of some information about some or all of the products, followed by four simultaneous auctions, one for each product. The sale price was not drawn for any period until the end of period 5, and no information was given to participants about other players' bids at any time. Table 20.1 shows the information made available to subjects at the beginning of each period.

Table 20.1 Sequence of events in GMO phase of an experimental session, experiment 1

Period        Events
Period 1      Information: blind tasting of the four products S, L, C, and N. Recording of hedonic rating of the four products. Auction.
Period 2      Additional information: "S contains GMOs" and "N is GMO free". Auction.
Period 3      Additional information: "no ingredient in L contains more than 1 percent GMOs", "no ingredient in C contains more than 1/10 of 1 percent GMOs", "one ingredient in S (soy) is derived from an authorized genetically modified product", and "no ingredient in N contains any detectable trace of GMOs". Auction.
Period 4      Additional information: general information about GMOs. Auction.
Period 5      Additional information: the brand names of the four products, and the designation "organically grown" for product N. Auction.
Transactions  Random draw of the auction that counts toward final allocations. Implementation of transactions for the period that counts.

At the beginning of period 2, we informed the subjects that product S contained GMOs and that product N was GMO free. No information was given about products L and C in period 2. At the beginning of
period 3, we informed the subjects that no ingredient in L contained more than 1 percent GMOs and that no ingredient in C contained more than 1/10 of one percent GMOs. We also indicated to subjects that no ingredient in N had any detectable trace of GM content, and that S did contain a GM ingredient, soy, which was authorized in France. At the beginning of period 4, subjects received a four-page handout containing background information about GMOs. The information consisted of (a) the definition of a GMO, (b) the criteria for classifying a product as containing GMOs, (c) the list of GM plants authorized in France, (d) the food products sold in France that contain GMOs, and (e) the current French law regarding GMOs. We took care to provide an unbiased characterization and provided only facts without comment. Before the last period, we revealed the brands of the four products and the label indicating that product N was organic.

Experiment 2: GMO phase

In experiment 2, the GMO phase, the phase of interest, consisted of three periods, in which we revealed information about the products and then conducted an auction for the products. Four chocolate bars were auctioned each period, including two identical bars, called S and U. Translated from the original French, the list of ingredients for product S includes "corn". In the list of ingredients for product U, corn is replaced with "genetically modified corn". The products are made by a world leader in the food industry and are widely available in grocery stores and supermarkets in Europe.

At the beginning of period 1 of the GMO phase in experiment 2, subjects each received a sample of each of the four products to taste, without its packaging or labeling. Then a simultaneous Vickrey auction for each of the four goods took place. At the beginning of period 2, we distributed one unit of each of the products to each subject in its original packaging (with the price removed, but with the list of ingredients visible). Subjects then had 3 minutes to study the products. A second auction was then conducted for each of the goods. At the beginning of period 3, we magnified and projected the list of ingredients of each product, exactly as it appeared on the packaging, and invited subjects to read the list of ingredients. Subjects then bid in the final round of auctions.
Results

Experiment 1: the impact of GMO information

Figure 20.1 graphs the evolution of the average normalized bid over all subjects over the five periods of the GMO phase for the four products. The data in the figure are normalized by taking each individual's actual bid in period 1 as the base equal to 100, tracking that individual's bids over time relative to his bid in period 1, and averaging across all individuals in each period. Only the data from those who bid greater than zero for the product in period 1 are included in the
figure (no subject who bid zero for a product in period 1 ever submitted a positive bid for that product in later periods).

Figure 20.1 Average bids for the four biscuits in each period of GMO phase, experiment 1 (series shown: contains GMOs, product S; threshold of 1 percent, product L; threshold of 0.1 percent, product C; GMO free, product N).

We observe that consumers, on average, value the absence of GMOs. In period 2, we revealed that product N did not contain GMOs and product S did contain GMOs. The GMO-free guarantee raised the limit price for product N of the average consumer in our sample by 8 percent. Of the 83 subjects who bid more than zero for product N in period 1, 41 raised their bid in period 2, and only seven lowered it. A sign test (eliminating the ties in which bids were the same in both periods) rejects the hypothesis that a bidder is equally likely to lower as to raise his bid at the p < 0.001 level. A pooled variance t-test also rejects the hypothesis that the mean bid for product N is equal in periods 1 and 2 at p < 0.01, indicating that, on average, consumers increased their bids for product N in period 2. In contrast, revealing that product S contains GMOs lowered its average limit price by 39 percent. Only four participants increased their bid for S after learning that it contained GMOs, while 64 lowered their bid. Both a sign test and a pooled variance t-test reject the hypothesis of equality at the p < 0.001 level. The relatively small increase for the GMO-free product suggests that, in the absence of any information about GM content, consumers typically act as if there is a low probability that products contain GMOs. The average premium for the GMO-free product over the product containing GMOs was 46.7 percent.

Period 3 is designed to measure the impact of GMO content thresholds. Our subjects appear to view a guarantee that no ingredient contains more than 0.1 percent GMOs as consistent with the typical GMO content of conventional
products (the unlabeled product historically available). They value a 0.1 percent guarantee more highly than a 1 percent guarantee, and the 1 percent threshold appears to be seen as a higher level of GMO content than that of a conventional product. Furthermore, the 1 percent guarantee is viewed differently from the label "contains GMOs" and the 0.1 percent guarantee is interpreted differently from "GMO free". In period 3, we revealed that no ingredient in product L contained more than 1 percent GMOs and no ingredient in product C contained more than 0.1 percent GMOs. We observed no significant change in the median willingness to pay for product C between periods 2 and 3 (p = 0.38 for the sign test), but the average bid for product L declined by 10 percent, and the decline was statistically significant (p < 0.05 for the sign test). A pooled variance t-test of the hypothesis that the mean normalized bids for products L and C are equal rejects the hypothesis at a significance level of p < 0.01. There was no consensus among the participants about whether a product meeting the 0.1 percent threshold was valued more or less highly than a conventional product. A total of 33 percent increased their bid (by an average of 28 percent) after learning the maximum possible GMO content was 0.1 percent of any ingredient, while 27.9 percent reduced their bid; 4.4 percent reduced their bid to zero. The bidding behavior for product L reveals that a product meeting a 1 percent threshold is viewed very differently from a product labeled as containing GMOs. A total of 17.9 percent of subjects increased their bid when informed of the 1 percent threshold, and 40.5 percent left their bid unchanged. Thus over half of our participants considered a product satisfying the 1 percent threshold as no worse than the conventional product. The 1 percent guarantee was viewed as different from the 0.1 percent guarantee. The mean normalized bids in period 3 for products N and C, as well as for products L and S, were significantly different from each other at p < 0.01.

The distribution of background information about biotechnology in period 4 led to a slight increase in average limit prices, which was significant at p < 0.05 for three of the four products. The increase was greatest for the GMO-free product N. The information did not bring the prices of L, with a 1 percent threshold, or S, which contained GMOs, to their levels before any information was revealed. For all four products, at least 57 percent of the bids were unchanged between periods 3 and 4. For product N, the GMO-free product, 20 bidders increased their bid while nine lowered it, and we can reject the hypothesis that an individual was equally likely to raise and to lower his bid at the p < 0.05 level. However, we cannot reject the analogous hypotheses for the other three products. Thus, for each of the products, though the pooled variance t-test indicates that the information increased the average bid, the more conservative sign test is not significant, and the majority of participants did not change their bids.

Revealing the brand names of the products in period 5 raised the average prices for three of the four products. The effect was significant at p < 0.01 for L and S. The average bid for product C was significantly lower in period 5 than in period 4 at p < 0.01. However, for all four products, we fail to reject the hypothesis that an equal number of bidders raised and lowered their bids in period 5
relative to period 4. There was no increase in price for product N from revealing that it was organically produced, perhaps because revealing its label exerted an offsetting negative effect.

Our consumers can be classified into four categories. Unwilling consumers bid zero for product S after learning that it contained GMOs. They comprised 34.9 percent of our subjects. Specifying a threshold did result in a lower incidence of zero bidding than the announcement "contains GMOs". A total of 10.7 percent of the subjects bid zero for the product with a maximum of 1 percent GMO content in any ingredient, and only 4.4 percent bid zero at 0.1 percent. That means that over 95 percent of our participants were willing to accept a level of GMO content that typically results from inadvertent co-mingling if the product is sufficiently inexpensive. Of our consumers, 18.1 percent did not change their bid for product S upon finding out that it contained GMOs. We classify them as indifferent consumers. Another 4.9 percent of participants were favorable, demonstrating behavior consistent with having a preference for GM foods. Thus a full 23 percent of bidders were willing to accept GMOs in their food at the same price as the conventional product. Despite the current unpopularity of GMOs in food, there is still a large group of consumers willing to buy them at the same price as conventional products and to allow them to establish a foothold in the marketplace. A total of 42.2 percent lowered their bid for product S when they found out that it contained GMOs, but did not go so far as to bid zero. The average percentage decrease was 28.3 percent. We call this group the reluctant consumers. This group either places negative value on GMO content or stakes a claim to an equitable share of the surplus created by GMO adoption. They will lower (raise) their bid prices when faced with products with higher (lower) GMO content. They are willing to trade off GMO content and the price they pay. Indeed, 36.1 percent of the reluctant consumers exhibited a willingness to pay that was monotonic in the strength of the guarantee of the maximum GMO content.

Results from the second experiment: do individuals read labels?

Figure 20.2 shows the normalized average bid over all subjects for each of the three periods of the GMO phase of experiment 2. Before bidding in period 2, subjects observe the products as they are seen in the supermarket. Presumably, we have created more favorable conditions for the subjects to read and study the labels than exist in the supermarket. Subjects are seated and have no alternative activities for 3 minutes other than to study the labels. Nevertheless, we observe that average bids do not change between periods 1 and 2. A pooled variance t-test fails to reject the hypothesis that the normalized average bids are equal across periods 1 and 2 (t = 0.071 for product S, which does not contain GMOs, and t = 0.070 for product U, which does contain GM corn). We also cannot reject the hypothesis that the bids for the two products are equal to each other (t = 1.53). We thus obtain the result that the labeling of products as containing GMOs does not affect the willingness to pay of consumers.
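The two tests used throughout these results, a sign test on the direction of within-subject bid changes and a pooled-variance t-test on mean normalized bids (period 1 = 100), can be sketched as follows. The bid data here are illustrative, not the chapter's.

```python
import numpy as np
from scipy import stats

# Illustrative bids: rows are subjects, columns are periods 1 and 2.
bids = np.array([[5.0, 5.6], [4.0, 4.4], [6.0, 5.9], [3.0, 3.5], [5.5, 6.1]])

# Normalize each subject's bids to a period-1 base of 100 (only
# subjects with positive period-1 bids are kept, as in the chapter).
norm_bids = 100 * bids / bids[:, [0]]

# Sign test: is a subject equally likely to raise as to lower the bid?
# Ties (no change) are eliminated before testing, as in the text.
diffs = norm_bids[:, 1] - norm_bids[:, 0]
changes = diffs[diffs != 0]
raised = int((changes > 0).sum())
p_sign = stats.binomtest(raised, n=len(changes), p=0.5).pvalue

# Pooled-variance t-test on the mean normalized bids across periods.
t_stat, p_t = stats.ttest_ind(norm_bids[:, 1], norm_bids[:, 0], equal_var=True)
print(f"sign test p = {p_sign:.3f}; pooled t = {t_stat:.2f} (p = {p_t:.3f})")
```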
[Figure 20.2 appears here: bar chart of normalized average bids (vertical scale 60–100) for Product S (GM free) and Product U (containing GM corn) across Period 1 (taste product), Period 2 (observe package and labeling), and Period 3 (ingredients displayed); plotted values include 100.0, 95.9, 97.0, 97.0, and 72.7.]

Figure 20.2 Average bids for the two identical chocolate bars in periods 1–3, experiment 2 (GM-free corn corresponds to Product S, GM corn to product U).
However, the data change radically in period 3, in which subjects bid while able to view the list of ingredients on large overheads. The average willingness to pay for the product labeled as "containing GMOs" decreases by 27.3 percent compared to the previous period. The decrease is statistically significant (a pooled variance t-test for a difference in sample means between periods 2 and 3 for product U yields t = 2.40). In contrast, an identical product without any indication of GMO content (product S) experiences an insignificant average decrease of 3 percent from the previous period (t = 0.271). The bids for the two products, S and U, are significantly different from each other in period 3 (t = 10.37). Upon learning that product U contains GM corn, 22 percent of our subjects boycott the product entirely by bidding zero, and 60 percent lower their bid by at least 5 percent. Thus, the labeling "contains GMOs", when it is actually noticed, induces a substantial decrease in willingness to pay that is specific to that product.

Comparison of the BDM and Vickrey processes

The data from the training phase of the two experiments provide an opportunity to compare the BDM and Vickrey processes with regard to their demand revelation properties.
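As background, the two payoff rules under comparison can be sketched compactly. This is a hedged illustration of the standard BDM and second-price rules, not the authors' laboratory software:

```python
import random

def bdm_outcome(bid, price_draw):
    """Becker-DeGroot-Marschak: a sale price is drawn at random; the bidder
    buys at the drawn price if and only if her bid is at least that price."""
    return {"buys": bid >= price_draw,
            "price_paid": price_draw if bid >= price_draw else None}

def vickrey_outcome(bids):
    """Vickrey (second-price sealed-bid) auction: the highest bidder wins
    and pays the second-highest bid."""
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return {"winner": ranked[0], "price_paid": bids[ranked[1]]}

# Under both rules, bidding one's true valuation is a (weakly) dominant
# strategy, which is why they are used to elicit willingness to pay.
print(bdm_outcome(bid=4.0, price_draw=random.uniform(0, 10)))
print(vickrey_outcome([3.0, 4.5, 2.0, 4.0]))  # bidder 1 wins, pays 4.0
```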
We use two measures for our comparison. The first measure is the overall average bias of the mechanisms in period t, normalized by the valuation. It indicates the extent to which average bids are higher or lower than valuations. The bias for period t is calculated as

\[ B_t = \frac{1}{n_t} \sum_j \frac{b_{jt} - v_{jt}}{v_{jt}}, \]

where b_{jt} denotes player j's bid in period t, v_{jt} is her valuation in period t, and n_t is the total number of bidders in period t. The second measure is the average dispersion, defined for period t as
\[ D_t = \frac{1}{n_t} \sum_j \frac{|b_{jt} - v_{jt}|}{v_{jt}}. \]

The average dispersion is equal to the average absolute value of the difference between bids and valuations, normalized by the valuation. For an individual bid, the dispersion is the absolute value of the bias.

Table 20.2 illustrates the average value of each measure over the course of the first four periods, as well as the last period of a session (which never exceeded period 6), under both the Vickrey and the BDM processes. The standard deviations are indicated in parentheses.

Table 20.2 Deviations of bids from valuation, induced value phase of both experiments (%; standard deviations in parentheses)

| Measure | Period 1 | Period 2 | Period 3 | Period 4 | Last period |
|---|---|---|---|---|---|
| BDM: average bias (B_t) | –39.87 (28.89) | –28.06 (22.32) | –12.76 (23.38) | –8.19 (23.60) | –6.33 (20.95) |
| BDM: average dispersion (D_t) | 41.65 (26.24) | 28.59 (21.62) | 16.86 (20.58) | 13.94 (20.69) | 11.75 (18.43) |
| Vickrey: average bias (B_t) | –30.16 (32.53) | –11.50 (27.76) | –5.57 (11.94) | +1.33 (26.49) | –0.06 (11.13) |
| Vickrey: average dispersion (D_t) | 32.57 (30.10) | 16.79 (24.89) | 6.25 (11.59) | 9.27 (24.96) | 3.89 (10.42) |

The table reveals the following patterns. Period 1 was a practice period that did not count toward participants' earnings. The table shows that both auctions are biased in period 1, with bids tending to be below valuations. This bias is larger and the dispersion is greater under the BDM mechanism. Overall, 90 percent of subjects bid less than their valuations and only a very small percentage bid more than their valuations. Only 2.4 percent of participants bid more than their valuations under the BDM process and 6 percent did so under the Vickrey auction. The percentage bidding an amount equal to their valuations is also small in both auctions, between 6 and 7 percent of subjects under both systems, though 17 percent bid within 2 percent of their valuations in the Vickrey auction. On average, under the BDM mechanism, bids are 39.9 percent lower than valuations, with a standard deviation (of the percentage difference between bid and valuation) of 28.9 percent. In the Vickrey auction, the period 1 average bid is
30.2 percent less than the corresponding valuation, with a standard deviation of 32.5 percent. Pooled variance t-tests indicate that the bias is significantly different from zero at the p < 0.01 level for both mechanisms. The proportion of participants bidding less than their values is greater in the BDM than in the Vickrey auction, and the magnitude of the average underbid is less severe in the Vickrey auction: the average underbid is 44.6 percent of the valuation for the BDM and 36 percent for the Vickrey auction. The average absolute difference between bids and valuations, our measure of dispersion, is 41.7 percent in the BDM compared to 32.6 percent in the Vickrey auction. Both the average bias and the average dispersion are significantly greater in the BDM than in the Vickrey auction at the p < 0.05 level. Thus, in the practice period, the Vickrey auction is less biased, exhibits less dispersion, and has a greater percentage of agents bidding within 2 percent of values than the BDM process.

In period 2, the first auction that counted toward subjects' earnings, both auctions remain biased, but less so than in period 1. The introduction of monetary payments, as well as repetition, appears to improve decisions. Nonetheless, 87.8 percent of bids in the BDM and 76.1 percent of those in the Vickrey auction are less than valuations. The bias is −28.1 percent of valuation for the BDM and −11.5 percent for the Vickrey auction. The decline in the bias between periods 1 and 2 is steeper in the Vickrey auction than in the BDM: the bias in the BDM decreases by 29.6 percent, whereas in the Vickrey auction the decrease is 63.3 percent. The decline is mainly due to a reduction in the amount that agents underbid, and not to a decrease in the percentage of agents underbidding. The percentage bidding equal to valuations increases to over 10 percent overall and is slightly higher in the Vickrey auction than in the BDM. The overall dispersion shrinks in both systems, but the decrease is steeper in the Vickrey auction (51.5 percent versus 31.4 percent). Thus, the overall data from periods 1 and 2 suggest that the Vickrey auction is less biased, exhibits lower dispersion, induces a greater percentage to reveal their exact valuations, and improves its performance more quickly over time.

These trends continue in subsequent periods. The overall bias decreases in each subsequent period for both processes, reaching zero in the Vickrey auction and 6 percent in the BDM mechanism in the final period. In each period, the bias in the BDM is significantly greater in magnitude than in the Vickrey auction at p < 0.05 (according to a pooled variance t-test). Beginning in period 4, the bias is no longer different from zero at conventional significance levels in the Vickrey auction. However, in all of the periods, the bias is significant at the 5 percent level in the BDM. The percentage of agents bidding an amount equal to their valuations increases from period to period under both processes, reaching 41.5 percent for the BDM and 68.4 percent for the Vickrey auction in the last period. In the Vickrey auction, 77 percent of bids are within 2 percent of valuations and 90 percent are within 10 percent of valuations in the last period. The dispersion between bids and valuations decreases in each period of the BDM. Though the measure does increase between periods 3 and 4 in the Vickrey auction, the overall trend is clearly downward. The average absolute
difference in the last period is 3.9 percent in the Vickrey auction compared to 11.8 percent in the BDM. In the Vickrey auction, the dispersion is significantly less than in the BDM at p < 0.01 in all periods except for period 4. Therefore, over the entire time horizon, the Vickrey auction generated data much closer to truthful bidding than did the BDM.
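For concreteness, the two measures used in this comparison can be computed directly. The sketch below is our illustration, with made-up bids and induced valuations rather than the study's data:

```python
# Average bias and average dispersion for one period, as defined above.
def average_bias(bids, valuations):
    # Mean of the signed, valuation-normalized deviations of bids.
    return sum((b - v) / v for b, v in zip(bids, valuations)) / len(bids)

def average_dispersion(bids, valuations):
    # Mean of the absolute, valuation-normalized deviations of bids.
    return sum(abs(b - v) / v for b, v in zip(bids, valuations)) / len(bids)

bids = [6.0, 9.0, 4.0, 10.0]    # hypothetical bids
vals = [10.0, 10.0, 5.0, 10.0]  # induced valuations
print(f"bias = {average_bias(bids, vals):+.1%}")             # -17.5% (underbidding)
print(f"dispersion = {average_dispersion(bids, vals):.1%}")  # 17.5%
```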
Discussion

This chapter surveyed three studies. The first two consider empirical questions related to the willingness to pay for food products with genetically modified content, and the third compares two different techniques to elicit willingness to pay.

Our results for the first experiment show a sharp contrast to the predominantly negative views of French survey respondents toward genetically modified organisms in food products. In our experiments, we observe a wide range of revealed preferences. Whereas 35 percent of our subjects absolutely refused to purchase a product containing GMOs, the remaining 65 percent of our subjects were willing to purchase a GM product if it was sufficiently inexpensive. Nearly one-quarter of participants showed no decrease in their willingness to pay in response to learning that a product contained GMOs. The two different thresholds, 0.1 percent and 1 percent, generated significantly different bids and were thus clearly perceived as meaningfully different. Furthermore, the 0.1 percent threshold was not considered equivalent to GMO free, and the 1 percent threshold generated higher bids than the classification "contains GMOs". This indicates that market demand is decreasing in GMO content. A total of 89 percent of our participants were willing to purchase a product satisfying the 1 percent threshold, the maximum content that the European Union exempts from labeling. Lowering the threshold to 0.1 percent would make another 7 percent of participants willing to purchase products satisfying the threshold, as 96 percent of our participants were willing to purchase a product in which no ingredient contained more than 0.1 percent GMOs, if it were sufficiently inexpensive.

The policy options available to address the arrival of biotechnology in food production can be grouped into three types. The first option is to ban the use of GMOs in food products. The second is to integrate conventional and biotech varieties into one production chain. The third is to create two production tracks and introduce a labeling system (which could be voluntary or mandatory) to allow the consumer to identify the two varieties. Based merely on polls, we would have concluded that the only policy action feasible in France given current public opinion would be the complete prohibition of GMOs in food, at least for the time being. However, our experimental results indicate that only slightly more than a third of the population would be unwilling to purchase GM foods at any price. The remainder is willing to purchase GMOs even when no threshold is specified, and could receive a welfare gain if GMOs make products cheaper. The data thus argue against the banning of GMOs, which would cause gains from trade to be foregone.
The data also reveal potential welfare costs to consumers from integrating the two production streams. The consumers who are willing to purchase GMOs if they are sold at a discount might be made better off. However, the segment that refuses to purchase GM products at any price (35 percent of participants in our sample) would experience a decrease in their welfare, and would have to switch to products with ingredients that have no GM varieties. Therefore, in our opinion, our results weigh in favor of segmenting the market between products containing GMOs and products that are GMO free. In this way, the unwilling consumers could be assured of GMO-free varieties, while price-sensitive reluctant, as well as indifferent and favorable, consumers could benefit from the cost reductions that the first generation of GMOs provides. As long as the segregation costs are not greater than the welfare gains from market segmentation, the sizes of each of the markets appear to justify the establishment of two separate production tracks. The separation and labeling policy gives the market the role of transmitting information about the safety of GM products, by providing an opportunity and an incentive for consumers to voluntarily sample the lower-cost products made with GMOs. Our data suggest that a large fraction of consumers would do so.

Our comparison of the valuation elicitation systems indicates that, given the training methods and the procedures we have used in our study, the Vickrey auction is preferable to the BDM mechanism as an instrument for the elicitation of the willingness to pay for private goods. We observe that the BDM is subject to more severe bias, greater dispersion of bids, and slower convergence to truthful revelation than the Vickrey auction. With our techniques, neither auction could be made into a perfect tool to reveal valuations with our subjects, at least not during the time horizons that were available to us. However, the Vickrey auction performs better than the BDM by the three criteria we have set for it. Our experimental protocol was effective in de-biasing the Vickrey auction over several periods, but less effective on the BDM. Of course, it remains unknown whether unbiased bidding for goods with induced values carries over to subsequent bidding for goods with home-grown values. Our research supports the proposition that the Vickrey auction can be an effective tool for demand revelation with non-student subject pools, but also cautions that sufficient practice and appropriate training in the rules of the auction are important.
Acknowledgments

This chapter is a summary of three articles (Noussair et al. 2002, 2004b, 2004c). The program "Pertinence économique et faisabilité d'une filière sans utilisation d'OGM", as well as the French National Institute of Agronomic Research (Program on Consumer Behaviour), provided research support for this project. We would like to thank Isabelle Avelange, Yves Bertheau, Pierre Combris, Sylvie Issanchou, Egizio Valceschini, and Steve Tucker for valuable comments and assistance.
References

Ajzen, I., Brown, T. C., and Rosenthal, L. H., 1996. Information Bias in Contingent Valuation: Effects of Personal Relevance, Quality of Information, and Motivational Orientation. Journal of Environmental Economics and Management, 30 (1), 43–57.
Akerlof, G. A., 1970. The Market for "Lemons": Quality Uncertainty and the Market Mechanism. The Quarterly Journal of Economics, 84 (3), 488–500.
Aldrich, L. and Blisard, N., 1998. Consumer Acceptance of Biotechnology: Lessons from the rbST Experience. Current Issues in Economics of Food Markets, December (74701), 1–6.
Anderson, K., Nielsen, C. M., and Sherman, R., 2000. Estimating the Economic Effects of GMOs: The Importance of Policy Choices and Preferences. Adelaide University Policy Discussion Paper 0035.
Becker, G. M., DeGroot, M. H., and Marschak, J., 1964. Measuring Utility by a Single-Response Sequential Method. Behavioral Science, 9 (3), 226–232.
Blamey, R., Common, M., and Quiggin, J., 1995. Respondents to Contingent Valuation Surveys: Consumers or Citizens? Australian Journal of Agricultural Economics, 39 (3), 263–288.
Bohm, P., Linden, J., and Sonnegard, J., 1997. Eliciting Reservation Prices: Becker–DeGroot–Marschak Mechanism vs. Markets. Economic Journal, 107 (443), 1079–1089.
Brookshire, D. S. and Coursey, D. L., 1987. Measuring the Value of a Public Good: An Empirical Comparison of Elicitation Procedures. American Economic Review, 77 (4), 554–566.
Buckwell, A., Brookes, G., and Barfoot, P., 1999. Economics of Identity Preservation for Genetically Modified Crops. Food Biotechnology Communication Initiative, Brussels, Belgium.
Buzby, J. C., Fox, J. A., Ready, R. C., and Crutchfield, S. R., 1998. Measuring Consumer Benefits of Food Safety Risk Reductions. Journal of Agricultural and Applied Economics, 30 (1), 69–82.
Caswell, J. A., 1998. Should Use of Genetically Modified Organisms Be Labeled? AgBioForum, 1 (1), 22–24.
Caswell, J. A., 2000. Labeling Policy for GMOs: To Each His Own? AgBioForum, 3 (1), 305–309.
Combris, P., Lecocq, S., and Visser, M., 1997. Estimation of a Hedonic Price Equation for Bordeaux Wine: Does Quality Matter? Economic Journal, 107 (441), 390–402.
Coppinger, V. M., Smith, V. L., and Titus, J. A., 1980. Incentives and Behavior in English, Dutch and Sealed-Bid Auctions. Economic Inquiry, 18 (1), 1–22.
Cox, J. C. and Grether, D. M., 1996. The Preference Reversal Phenomenon: Response Mode, Markets and Incentives. Economic Theory, 7 (3), 381–405.
Cox, J. C., Roberson, B., and Smith, V. L., 1982. Theory and Behavior of Single Object Auctions. In V. L. Smith ed. Research in Experimental Economics, Vol. 2. Greenwich, CT, JAI Press, 1–43.
Cummings, R. G., Harrison, G. W., and Rutström, E. E., 1995. Homegrown Values and Hypothetical Surveys: Is the Dichotomous Choice Approach Incentive-Compatible? American Economic Review, 85 (1), 260–266.
Falck-Zepeda, J. B., Traxler, G., and Nelson, R. G., 2000. Surplus Distribution from the Introduction of a Biotechnology Innovation. American Journal of Agricultural Economics, 82 (2), 360–369.
Fox, J. A., Shogren, J. F., Hayes, D. J., and Kliebenstein, J., 1998. CVM-X: Calibrating Contingent Values with Experimental Auction Markets. American Journal of Agricultural Economics, 80 (3), 455–465.
Gaskell, G., Allum, N. C., and Stares, S. R., 2003. Europeans and Biotechnology in 2002. Eurobarometer 58.0, a report to the EC Directorate General for Research from the project "Life Science in European Society" QLG-7-CT-1999-00286. Brussels, European Commission.
Grether, D. M. and Plott, C. R., 1979. Economic Theory of Choice and the Preference Reversal Phenomenon. American Economic Review, 69 (4), 623–638.
Hayes, D. J., Shogren, J. F., Shin, S., and Kliebenstein, J. B., 1995. Valuing Food Safety in Experimental Auction Markets. American Journal of Agricultural Economics, 77 (1), 40–53.
Hoffman, E., Menkhaus, D. J., Chakravarti, D., Field, R. A., and Whipple, G. D., 1993. Using Laboratory Experimental Auctions in Marketing Research: A Case Study of New Packaging for Fresh Beef. Marketing Science, 12 (3), 318–338.
Huffman, W., Shogren, J., Rousu, M., and Tegene, A., 2001. The Value to Consumers of GM Foods in a Market with Asymmetric Information: Evidence from Experimental Auctions. Iowa State University, mimeo.
Irwin, J. R., McClelland, G. H., McKee, M., Schulze, W. D., and Norden, N. E., 1998. Payoff Dominance vs. Cognitive Transparency in Decision Making. Economic Inquiry, 36 (2), 272–285.
Kagel, J. H. and Levin, D., 1993. Independent Private Value Auctions: Bidder Behavior in First-, Second-, and Third-Price Auctions with Varying Numbers of Bidders. Economic Journal, 103 (419), 868–879.
Kagel, J. H., Harstad, R. M., and Levin, D., 1987. Information Impact and Allocation Rules in Auctions with Affiliated Private Values: A Laboratory Study. Econometrica, 55 (6), 1275–1304.
Keller, L. R., Segal, U., and Wang, T., 1993. The Becker–DeGroot–Marschak Mechanism and Generalized Utility Theories: Theoretical Predictions and Empirical Observations. Theory and Decision, 34 (2), 83–97.
Krutilla, J. V., 1967. Conservation Reconsidered. American Economic Review, 57 (4), 777–786.
Lence, S. H. and Hayes, D. J., 2001. Response to an Asymmetric Demand for Attributes: An Application to the Market for Genetically Modified Crops. Iowa State University, Department of Economics, mimeo 2021.
Lin, W., Chambers, W., and Harwood, J., 2000. Biotechnology: US Grain Handlers Look Ahead. Agricultural Outlook, (April), 29–34. Washington, DC: US Department of Agriculture, Economic Research Service.
Lin, W., Price, G., and Fernandez-Cornejo, J., 2001. Estimating Farm Level Effects of Adopting Herbicide-Tolerant Soybeans. Oil Crops Situation and Outlook, Economic Research Service USDA, (October), 25–34.
List, J. A. and Gallet, C. A., 2001. What Experimental Protocol Influence Disparities Between Actual and Hypothetical Stated Values? Environmental and Resource Economics, 20 (3), 241–254.
List, J. A. and Lucking-Reiley, D., 2000. Demand Reduction in Multiunit Auctions: Evidence from a Sportscard Field Experiment. American Economic Review, 90 (4), 961–972.
List, J. A. and Shogren, J. F., 1998. Calibration of the Difference Between Actual and Hypothetical Valuations in a Field Experiment. Journal of Economic Behavior and Organization, 37 (2), 193–205.
List, J. A. and Shogren, J. F., 1999. Price Information and Bidding Behavior in Repeated Second-Price Auctions. American Journal of Agricultural Economics, 81 (4), 942–949.
Lusk, J. L., Daniel, S. M., Mark, D. R., and Lusk, C. L., 2001. Alternative Calibration and Auction Institutions for Predicting Consumer Willingness-to-Pay for Non Genetically Modified Corn Chips. Journal of Agricultural and Resource Economics, 26 (1), 40–57.
Lusk, J. L., Jamal, M., Kurlander, L., Roucan, M., and Taulman, L., 2005. A Meta-Analysis of Genetically Modified Food Valuation Studies. Journal of Agricultural and Resource Economics, 30 (1), 28–44.
Moon, W. and Balasubramanian, S., 2001. A Multi-Attribute Model of Public Acceptance of Genetically Modified Organisms. Southern Illinois University, mimeo.
Neill, H. R., Cummings, R. G., Ganderton, P. T., Harrison, G. W., and McGuckin, T., 1994. Hypothetical Surveys and Real Economic Commitments. Land Economics, 70 (2), 145–154.
Noussair, C., Robin, S., and Ruffieux, B., 2001. Genetically Modified Organisms in the Food Supply: Public Opinion vs. Consumer Behavior. Purdue, Krannert Graduate School of Management Working Paper 1139.
Noussair, C., Robin, S., and Ruffieux, B., 2002. Do Consumers not Care about Biotech Foods or Do They Just not Read the Labels? Economics Letters, 75 (1), 47–53.
Noussair, C., Robin, S., and Ruffieux, B., 2004a. A Comparison of Hedonic Rating and Demand-Revealing Auctions. Food Quality and Preference, 15 (4), 393–402.
Noussair, C., Robin, S., and Ruffieux, B., 2004b. Do Consumers Really Refuse to Buy Genetically Modified Food? Economic Journal, 114 (492), 102–120.
Noussair, C., Robin, S., and Ruffieux, B., 2004c. Revealing Consumers' Willingness-to-Pay: A Comparison of the BDM Mechanism and the Vickrey Auction. Journal of Economic Psychology, 25 (6), 725–741.
Nyborg, K., 2000. Homo Economicus and Homo Politicus: Interpretation and Aggregation of Environmental Values. Journal of Economic Behavior and Organization, 42 (3), 305–322.
Rutström, E. E., 1998. Home-Grown Values and Incentive Compatible Auction Design. International Journal of Game Theory, 27 (3), 427–441.
Sagoff, M., 1998. The Economy of the Earth. Cambridge: Cambridge University Press.
Shogren, J. F., Fox, J. A., Hayes, D. J., and Roosen, J., 1999. Observed Choices for Food Safety in Retail, Survey, and Auction Markets. American Journal of Agricultural Economics, 81 (5), 1192–1199.
Smith, V. L., 1982. Microeconomic Systems as Experimental Science. American Economic Review, 72 (5), 923–955.
Stevens, T. H., Echeverria, J., Glass, R. J., Hager, T., and More, T. A., 1991. Measuring the Existence Value of Wildlife: What Do CVM Estimates Really Show? Land Economics, 67 (4), 390–400.
Traxler, G., Falck-Zepeda, J. B., and Nelson, R. G., 2000. Rent Creation and Distribution From Biotechnology Innovations: The Case of Bt Cotton and Herbicide-Tolerant Soybeans in 1997. Agribusiness, 16 (1), 21–32.
Vickrey, W., 1961. Counterspeculation, Auctions, and Competitive Sealed Tenders. Journal of Finance, 16 (1), 8–37.
21 An experimental investigation of choice under "hard" uncertainty

Calvin Blackwell, Therese Grijalva, and Robert P. Berrens
Introduction

Economists and other researchers have examined decision making under uncertainty in thousands of papers. The vast majority of this work has focused on situations where uncertainty can be described with some known probability. There is no consensus normative or positive theory of behavior regarding decision making under "hard" uncertainty, where the probabilities of a set of events occurring are unknown. This chapter uses induced-value experiments to explore several theories of decision making under hard uncertainty. Improving our understanding of decision making under hard uncertainty may have important implications for complex environmental policy (e.g. protection of endangered species and biodiversity, and global climate change).

One specific motivation for this research into situations of hard uncertainty is to increase our understanding of the Safe Minimum Standard (SMS) approach to protecting renewable resources (Ciriacy-Wantrup, 1952; Bishop, 1978; Randall and Farmer, 1995). The SMS approach is commonly defined as a collective choice process that prescribes protecting some minimum level (safe standard) of a renewable resource unless the social costs of doing so are somehow unacceptable or intolerable (see reviews in Farmer and Randall, 1998, and Berrens, 2001). With acknowledged difficulty in defining minimum safety, and the determination of intolerable social costs left to the political or administrative process in any particular case (Batie, 1989; Castle, 1996), the SMS approach is typically viewed as a fuzzy concept (e.g. see Hohl and Tisdell, 1993; Van Kooten and Bulte, 2002), and as existing somewhat on the periphery of the field of environmental and resource economics (Vaughn, 1997). Nevertheless, a variety of authors (e.g. Ciriacy-Wantrup, 1952; Bishop, 1978; Randall, 1991; Castle and Berrens, 1993; Toman, 1994; Castle, 1996; Farmer and Randall, 1998; Woodward and Bishop, 1997; Bulte and Van Kooten, 2000) have proposed some variant of the SMS as either a pragmatic or a preferred decision-making rule for complex issues involving both hard uncertainty and irreversibility.1

Take, for example, the question of biodiversity and endangered species protection. A particular species may or may not be essential to the continued survival of its ecosystem. Should this species be saved from extinction? Allowing this species'
extinction has an unknown probability of destroying the underlying ecosystem. What costs should be borne by the current generation to try to protect the species (Bishop, 1980)? A traditional cost–benefit analysis is severely hampered because science is unable to offer policy makers a defined set of probabilities over the possible outcomes (WSTB, 2004, p. 227). Because it is impossible to value the tradeoff between the increased probability of ecosystem destruction and the costs of saving the affected species, and because such decisions may be irreversible, many authors have proposed the SMS as the preferred decision-making rule, especially in pluralistic collective choice processes (e.g. Randall and Farmer, 1995, and see Arrow et al., 2000).

But aside from positing the SMS as a prescriptive rule, how do SMS-type approaches hold up in describing actual behavior? Randall and Farmer (1995) argue that SMS-type approaches will often emerge in pluralistic social solutions to complex problems involving potentially irreversible losses and a high degree of uncertainty. For example, it is commonly argued that the Endangered Species Act (ESA) of 1973, as amended, broadly mimics an SMS-type approach (Bishop, 1980; Castle and Berrens, 1993; Randall, 1991; Berrens et al., 1998; Berrens, 2001; WSTB, 2004). However, outside of the political economy of any particular piece of legislation, the open empirical question is how prevalent SMS approaches are in actual individual decision making under hard uncertainty. For example, devoid of any endangered species context, would individuals commonly choose SMS-type approaches under hard uncertainty scenarios?

To understand such individual decision making, several authors have modeled SMS-type decision processes as minimax decision rules (i.e. maximizing the minimum possible gains) in a game versus nature in the presence of hard uncertainty (Bishop, 1978; Tisdell, 1990). However, Ready and Bishop (1991) demonstrated that such a rule might yield inconsistent outcomes (e.g. for preservation or for economic development) depending upon the structure of the game against nature. In particular, a minimax rule appears to give results that are inconsistent with Milnor's (1964) bonus invariance axiom for game-theoretic decision rules, and it ignores the costs associated with wrong choices, resulting in choices that seem inconsistent with the philosophical approach of the SMS (Palmini, 1999; Milnor, 1964). Palmini (1999) has attempted to reconcile some of this inconsistency by showing that although a simple minimax decision rule can lead to inconsistency, a minimax regret rule, which is perhaps more in line with the basic philosophy of the SMS, can eliminate some of the inconsistencies. If we take as a starting point Palmini's argument that the minimax regret rule provides a general underpinning for SMS-type approaches (and see WSTB, 2004, p. 233), then this presents a competing hypothesis that can be empirically tested.

In this chapter, we use induced-value experiments to examine three questions. First, do participants use minimax, minimax regret, or some other type of decision rule (e.g. simple maximum expected value) to choose under circumstances of "hard" uncertainty? Second, do participants consistently use the same rule, or do they deviate depending upon the relative payoffs of the choices
involved? Third, are there outside factors that predict participants' choice of decision rule and, if so, what are they?
Literature review

Normative theories

Several ways of describing risk and uncertainty have been proposed. Knight (1921) drew a distinction between "risk" and "uncertainty." His distinction has generally been interpreted in the following way: "risk" is defined as a situation with more than one potential outcome, where the probability of each of the possible outcomes is known in advance; "uncertainty" is defined as a situation with more than one potential outcome, where the probability of any of the possible outcomes is unknown. Other attempts to differentiate between types of risk include "soft" vs. "hard" uncertainty (Vercelli, 1999), and use of the term "ambiguity" (Camerer, 1999). In a review of models of decision making under hard uncertainty, Camerer defined ambiguity as "known-to-be-missing information" (Camerer, 1999, p. 56). This definition precisely fits the information environment where the SMS approach is typically endorsed. What is important about all these definitions of uncertainty is that in all cases the probabilities of an event occurring are unknown. Such cases will be designated "hard" uncertainty.

Research on optimal decision making under hard uncertainty has not converged to produce a single, optimal decision rule for the decision maker to follow. Milnor's (1964) axiomatic approach typifies much of the research in this area. He examines various strategies that might be used in a game against nature when no information is known about the probability that event i might occur. The player chooses a pure strategy, and then nature chooses a strategy by some unknown process, resulting in some known payoff. In particular, Milnor examined four decision criteria, which he named as follows: Laplace, Wald, Hurwicz, and Savage. The Laplace criterion essentially assumes that the probability of each of n events occurring is 1/n; the player then should choose the strategy that maximizes the "expected" payoff. The Wald criterion is a minimax criterion, where the player chooses the strategy that maximizes the minimum payoff. Under the Hurwicz criterion, the player chooses an "optimism" parameter α, 0 ≤ α ≤ 1, and then chooses the strategy such that αA + (1 − α)a is maximized, where A is the best payoff and a the worst payoff possible for a given strategy. The Savage criterion is the minimax regret criterion, where the player chooses the strategy that minimizes the maximum regret possible. Regret is defined as the difference between the best possible outcome given nature's strategy, and the payoff the player actually receives.
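The four criteria just described are easy to state in code. The sketch below is our illustration, not from the original sources, with each prospect represented as a list of payoffs, one per state of nature:

```python
def laplace(prospects):
    # Weight each of the n states by 1/n and maximize the "expected" payoff.
    return max(prospects, key=lambda p: sum(p) / len(p))

def wald(prospects):
    # Minimax: maximize the minimum payoff.
    return max(prospects, key=min)

def hurwicz(prospects, alpha):
    # Maximize alpha*A + (1 - alpha)*a, where A (a) is the best (worst) payoff.
    return max(prospects, key=lambda p: alpha * max(p) + (1 - alpha) * min(p))

def savage(prospects):
    # Minimax regret: regret in a state is that state's best payoff minus one's own.
    best_by_state = [max(column) for column in zip(*prospects)]
    max_regret = lambda p: max(b - x for b, x in zip(best_by_state, p))
    return min(prospects, key=max_regret)

prospects = [[4, 1], [6, 0], [3, 3]]  # three hypothetical prospects, two states
print(wald(prospects), savage(prospects), laplace(prospects))
# [3, 3] [4, 1] [6, 0] -- the criteria can disagree
```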
Arrow and Hurwicz (1972) made a second important addition to the literature. In this paper the authors describe four axioms of rationality regarding choice under hard uncertainty: independence of irrelevant alternatives, relabeling (choices should not change because states of nature are renamed), dominance (higher payoffs are preferred to lower payoffs), and irrelevance of repetitive states. Arrow and Hurwicz then show that only choice criteria that are functions exclusively of the maximum and minimum possible values are consistent with these four axioms; the implication is that when probability distributions are unknown, a rational decision maker will focus on the possible end points. More recent research has refined the insights of Milnor and of Arrow and Hurwicz. For example, Maskin (1979) shows how a different set of "reasonable" initial axioms can lead to the maximin criterion for making decisions under hard uncertainty, while Barbera and Jackson (1988) introduce refinements of the maximin criterion to deal with situations in which the maximin criterion cannot distinguish between choices.

A second strand of research takes as its starting point not hard uncertainty but soft, i.e. a situation in which the decision maker has some limited information about the probabilities associated with various outcomes. These theories allow for non-additive probabilities; examples include Schmeidler (1989) and Gilboa (1987). A Choquet integral is used to perform an expected utility maximization using the non-additive probabilities. These theories share similarities with rank-dependent expected utility theory (Quiggin, 1982). In a recent article, Nehring (2000) proposes another choice criterion, "Simultaneous Expected Utility Maximization" (SIMEU). This choice criterion shares with minimax regret the feature that it depends not just on the minimum and maximum possible payouts, but on payouts in between.

Positive theories

Although there has been considerable research on choice behavior in environments containing both hard uncertainty and risk, few strong conclusions have been reached about uncertainty. Since Ellsberg's paper (1961) on ambiguity, much research (e.g. Camerer and Weber, 1992) has shown that experimental participants have "ambiguity aversion": when given the choice between gambles with known probabilities and gambles with unknown probabilities, subjects will pay a premium to avoid ambiguity. Hogarth and Kunreuther (1995) show that experimental participants behave differently when facing decisions under risk as opposed to "ignorance" (lacking information on both probabilities and payoffs).

Several researchers have reviewed the empirical literature on decision making under hard uncertainty, but are unable to recommend one model over another. In a 1992 review, Kelsey and Quiggin do not advocate a particular model. Camerer (1999) discusses numerous theories of decision making under hard uncertainty, but arrives at no conclusions regarding which are best. None of the models he discusses has been tested in an environment in which all the choices are characterized by hard uncertainty. Vercelli (1999) is also unable to recommend clearly one theory over any others, although he does emphasize the success of Choquet Expected Utility theory.2 As this brief review makes clear, there is still much we do not know about decision making under hard uncertainty. To wit, while there are a number of
competing conjectures, there is no consensus position. Given this backdrop, we attempt to implement some exploratory induced-value experiments.
Experimental design

To ensure that our exploratory investigation is not tied to any particular description of uncertainty, we constructed five alternative scenarios that were posed to each participant. The basic experimental protocol was as follows.

Subject pool. A total of 57 undergraduate students from Weber State University (Ogden, Utah) participated during the fall of 2003, and 24 undergraduates from the College of Charleston (Charleston, SC) participated during the summer of 2004. All subjects were recruited from the general campus population.

Incentives. Participation took approximately 45 minutes. Participants received a $5 show-up fee, plus performance-based payoffs ranging from $10 to $30. We paid the participants one-fifth of their experimental earnings in cash.

Setting. Participants received instructions and made decisions at a computer in a campus computer laboratory. In addition to the computer, participants were given a paper version of all materials presented on the computer (surveys and scenarios).3

Initially, participants were asked to fill out a short demographic survey. Then, participants were introduced to a sample scenario, included as Figure 21.1. Participants were asked to take a brief quiz to ensure comprehension of the decision task. The participants then received feedback on their quiz answers and were allowed to ask questions. Once all participants' questions had been answered, the incentivized portion of the experiment began.
[Figure 21.1 appears here: a decision tree in which the decision maker chooses among Prospect 1 (outcome A1 = $20 or B1 = $10), Prospect 2 (A2 = $25 or B2 = $8), and Prospect 3 (A3 = $32 or B3 = $2).]

Figure 21.1 Basic decision tree.
Note: During the experiment you may be asked to decide which of three prospects to take. All three prospects have two potential outcomes, A or B. The chance of either of these outcomes occurring is unknown. If you choose Prospect 1, then you will either earn outcome A1 ($20) or B1 ($10). If you choose Prospect 2, then you will either earn outcome A2 ($25) or B2 ($8). If you choose Prospect 3, then you will either earn outcome A3 ($32) or B3 ($2). The chance that A1 occurs if you choose Prospect 1 is the same as the chance that A2 occurs if you choose Prospect 2 or A3 occurs if you choose Prospect 3. The figure below describes the potential outcomes of your choice.
As noted, the decision task consisted of five different scenarios.4 Although each of the scenarios is framed differently, they all have some basic commonalities. In all the scenarios each participant plays a sequential game against nature: the participant plays first, then nature. The participant must choose among two or three alternatives. Each prospect is risky, i.e. each prospect has two possible outcomes, one of which will result after nature has played. In Figure 21.1, nature chooses either event A or event B. As the probability of either event occurring is not given to the participant, from his/her point of view the situation is one of hard uncertainty. A great deal of care was taken to ensure that the probabilities of any particular event occurring were unknown to the subjects.5

The process by which nature selected one event or the other was as follows: each choice by nature was made by a draw from an opaque bag. Participants were not allowed to examine the bags in any way. Participants were told that the bag held an unknown number of pink and white candies. The correspondence between the color of candy drawn and the actual outcome was determined by the participants via majority rule. For example, in the practice scenario participants would be asked to vote for one of two possibilities: (1) if a pink candy is drawn, then event A occurs, or (2) if a pink candy is drawn, then event B occurs. Allowing participants to select the association helped to ensure transparency from the participants' point of view, i.e. that the actual probabilities were not skewed heavily toward the "bad" outcomes.

Each participant made choices for five different scenarios. All participants in a given treatment received the five scenarios in the same order. After making their selections for all five scenarios, each scenario's risk was resolved, i.e. the correspondence between event A and the color of candy was resolved by vote, and a candy was drawn to determine the actual outcome for each scenario. Participants were then asked to fill out a follow-up survey while the experimenters calculated each participant's earnings. After completing this final survey, the participants were paid their earnings in cash and the experiment ended.
Theoretical predictions

The experiments were designed to distinguish between three choice criteria: minimax (MM), minimax regret (MMR), and "expected value" max (EV). To illustrate these three choice criteria, examine Figure 21.1. The MM criterion will choose Prospect 1, as it has the maximum minimum value (min[Prospect 1] = $10, min[Prospect 2] = $8, min[Prospect 3] = $2; max[min[Prospect 1], min[Prospect 2], min[Prospect 3]] = $10 for Prospect 1). The EV criterion assigns equal probability to each outcome occurring and then chooses the option with the highest "expected value." For Scenario 1, this choice criterion selects Prospect 3 (EV(Prospect 1) = 1/2($20) + 1/2($10) = $15, EV(Prospect 2) = $16.50, EV(Prospect 3) = $17). An individual who uses the MMR strategy (minimizing the maximum possible regret) will choose Prospect 2. We define regret as the difference between the best possible outcome given nature's resolution of the uncertainty and the outcome chosen by the individual. Table 21.1 presents the regret analysis.
Table 21.1 Regret analysis

| Option | Outcome | Best | Actual | Regret | Max regret |
|---|---|---|---|---|---|
| 1 | A | $32 | $20 | $12 | $12 |
| 1 | B | $10 | $10 | $0 | |
| 2 | A | $32 | $25 | $7 | $7* |
| 2 | B | $10 | $8 | $2 | |
| 3 | A | $32 | $32 | $0 | $8 |
| 3 | B | $10 | $2 | $8 | |

Note: * indicates the minimum of the maximum regrets.
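The following minimal sketch (ours, not the authors') reproduces the regret analysis of Table 21.1 and the three predicted choices for the Figure 21.1 prospects:

```python
prospects = {1: (20, 10), 2: (25, 8), 3: (32, 2)}    # (outcome A, outcome B)

best_a = max(a for a, b in prospects.values())        # $32 if nature plays A
best_b = max(b for a, b in prospects.values())        # $10 if nature plays B
max_regret = {k: max(best_a - a, best_b - b) for k, (a, b) in prospects.items()}
print(max_regret)                                     # {1: 12, 2: 7, 3: 8}

mm  = max(prospects, key=lambda k: min(prospects[k]))      # maximize worst case
mmr = min(prospects, key=max_regret.get)                   # minimize max regret
ev  = max(prospects, key=lambda k: sum(prospects[k]) / 2)  # maximize 1/n-weighted EV
print(mm, mmr, ev)                                    # 1 2 3
```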
As Table 21.1 shows, the prospect with the smallest maximum regret is Prospect 2. Using the analysis presented above, for each scenario the first choice corresponds to the MM criterion, the second choice to the MMR criterion, and the third choice to the EV criterion.

Treatments

Tables 21.2 and 21.3 show the payoffs for each scenario under Treatments 1 and 2. These treatments differ only by the relative payoffs associated with the different prospects. Prospect 1's payoffs remained unchanged from Treatment 1 to Treatment 2. Under Treatment 2, Prospect 2's "good" payoff (the higher of the two payoffs) is 90 percent of its Treatment 1 level, while the "bad" payoff is 93.75 percent of its Treatment 1 level. For Prospect 3, Treatment 2's payoffs were 87.5 percent of the "good" payoff and 100 percent of the "bad" payoff. These changes to relative payoffs alter the predictions made by the EV criterion: under Treatment 2, EV makes no distinction between any of the three prospects (i.e. the expected value of all three prospects is equal). The difference between Treatments 1 and 2 was made to induce participants to use more conservative choice criteria.
Table 21.2 Outcomes for treatment 1

| Scenario | Framing | Choice 1 (outcomes) | Choice 2 (outcomes) | Choice 3 (outcomes) |
|---|---|---|---|---|
| 1 | Travel | Train: $20 / $10 | Auto: $25 / $8 | Plane: $32 / $2 |
| 2 | No context | 1: $10 / $5 | 2: $12.50 / $4 | 3: $16 / $1 |
| 3 | Fishing | Lemon Bay: $50 / $25 | Mango Bay: $62.50 / $20 | Persimmon Bay: $80 / $5 |
Table 21.3 Outcomes for treatment 2

| Scenario | Framing | Choice 1 (outcomes) | Choice 2 (outcomes) | Choice 3 (outcomes) |
|---|---|---|---|---|
| 1 | Travel | Train: $20 / $10 | Auto: $22.50 / $7.50 | Plane: $28 / $2 |
| 2 | No context | 1: $10 / $5 | 2: $11.25 / $3.75 | 3: $14 / $1 |
| 3 | Fishing | Lemon Bay: $50 / $25 | Mango Bay: $56.25 / $18.75 | Persimmon Bay: $70 / $5 |
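A quick check (our sketch) confirms the claim above that, under the Treatment 2 payoffs of Table 21.3, the 1/n-weighted "expected value" is identical across the three prospects in every scenario:

```python
treatment2 = {
    "Travel":     [(20, 10), (22.50, 7.50), (28, 2)],
    "No context": [(10, 5), (11.25, 3.75), (14, 1)],
    "Fishing":    [(50, 25), (56.25, 18.75), (70, 5)],
}
for scenario, prospects in treatment2.items():
    evs = [(good + bad) / 2 for good, bad in prospects]
    print(scenario, evs)   # e.g. Travel [15.0, 15.0, 15.0]
```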
It seems reasonable to classify MM as the most conservative choice criterion, as it evaluates prospects purely on the basis of the worst that could happen (WSTB, 2004, p. 233). MMR is less conservative because the decision maker is concerned not merely with bad outcomes, but also with good ones. EV is the least conservative because it applies no factor to adjust for risk (e.g. if the real probability of the “bad” outcome is 90 percent, on average, EV will do much worse than the other two criteria). In Treatment 2, participants should be more likely to choose strategies consistent with MM and MMR than under Treatment 1. This behavior should occur because the risk–reward aspect has changed. Unlike in Treatment 1, there is no increase in the “expected value” in return for an increase in the variance of payoffs. This fact should make both MM and MMR more attractive, and allow a finer test of which of the two criteria is preferred.
Experimental results

Table 21.4 shows the overall results. As the most conservative strategy available, the minimax (MM) criterion was not heavily chosen. In only one out of six cases did more than 50 percent of respondents choose this criterion, and in all other cases it was below 25 percent (Table 21.4). The minimax regret (MMR) criterion was chosen by 50 percent or more of respondents in five out of six cases. Overall, the MMR was chosen slightly more than 50 percent of the time. Dropping the most anomalous case (Scenario 2, Treatment 2) raises the MMR selection to over 60 percent of the time.

Treatment 1

Participants clearly did not choose randomly, and a χ2 test of proportions confirms this conclusion (H0: participants choose any strategy with probability 1/3; p-value less than 0.0001 for all three scenarios). Scenarios 1–3 were, in part, designed to test for consistency; note that the values of the outcomes in Scenario 2 are one-half of the values of the outcomes for Scenario 1, and that the values of the outcomes in Scenario 3 are 2.5 times the values of the outcomes for Scenario 1.
Table 21.4 Participants' selections for scenarios 1–3

| Scenario | Choice | Consistent with . . . | Treatment 1: participants selecting (%) | Treatment 2: participants selecting (%) |
|---|---|---|---|---|
| 1 | Train | Minimax | 23.6 | 23.1 |
| 1 | Auto | Minimax regret | 60.0 | 50.0 |
| 1 | Plane | EV max | 16.4 | 26.9 |
| 2 | 1 | Minimax | 16.4 | 73.1 |
| 2 | 2 | Minimax regret | 65.5 | 11.5 |
| 2 | 3 | EV max | 18.2 | 15.4 |
| 3 | Lemon Bay | Minimax | 12.7 | 23.1 |
| 3 | Mango Bay | Minimax regret | 70.9 | 61.5 |
| 3 | Persimmon Bay | EV max | 16.4 | 15.4 |

Note: For Treatment 2, any choice is consistent with the EV max criterion, as all three prospects have the same "expected value" using the 1/n weight.
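The χ2 test of proportions reported above is straightforward to reproduce. The sketch below uses choice counts back-calculated from the Scenario 1, Treatment 1 percentages under the assumption of 55 respondents (our reconstruction, not the raw data):

```python
from scipy.stats import chisquare

observed = [13, 33, 9]              # MM, MMR, EV: 23.6%, 60.0%, 16.4% of 55
expected = [sum(observed) / 3] * 3  # H0: each criterion chosen with prob. 1/3
stat, pvalue = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {pvalue:.5f}")
```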
The proportion of participants making choices consistent with MM, MMR, and EV is not different across the three scenarios (a Friedman test of proportions of H0: proportion choosing MM in Scenario 1 = proportion choosing MM in Scenario 2 = proportion choosing MM in Scenario 3, yielded a p-value of 0.525). The aggregate data across Scenarios 1–3 thus seem to indicate stable criteria use by participants.

To examine this stability further, we looked at individual consistency across the first three scenarios. To do this, we grouped participants by their "uncertainty tolerance," ordering their choices from least uncertainty tolerant (choosing MM) to most uncertainty tolerant (choosing EV). The scale we created assigns a score of 0 for every choice consistent with MM, 1 for every choice consistent with MMR, and 2 for every choice consistent with EV, resulting in a scale varying from 0 to 6. For example, a participant who made choices consistent with MMR on all three scenarios would receive a score of 1 for each scenario, making her score on our uncertainty scale a 3; a participant who made two choices consistent with EV and one consistent with MMR would receive a score of 2(2) + 1 = 5. We removed participants whose choices did not seem consistent, which we defined as making choices that correspond to each of the three criteria. By removing these inconsistent data points, we removed the possibility of a single score referring to different sets of choices (for example, a participant who chose consistent with MM, MMR, and EV would receive a score of 0 + 1 + 2 = 3, similar to the first example above; thus, by removing these data points the redundancy in scores is removed as well). Figure 21.2 shows a histogram of the distribution of scores on this scale. Approximately 7.3 percent of participants fell into this "inconsistent" category. We conclude that most participants choose relatively consistently, i.e. their preferences regarding uncertainty are relatively stable. For example, approximately one-quarter of the sample chooses MMR in all three scenarios, and approximately one-half chooses MMR in at least two out of three scenarios. Further, most participants favor MMR (ranging from 60 percent to 71 percent) in each of the three scenarios.6
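The scoring rule described above is simple to make precise. This sketch (ours) computes the scale and flags the "inconsistent" participants who were removed:

```python
SCORES = {"MM": 0, "MMR": 1, "EV": 2}

def uncertainty_score(choices):
    """choices: the criterion label consistent with each of the three scenarios.
    Returns None for 'inconsistent' participants who used all three criteria."""
    if set(choices) == set(SCORES):
        return None
    return sum(SCORES[c] for c in choices)

print(uncertainty_score(["MMR", "MMR", "MMR"]))  # 3
print(uncertainty_score(["EV", "EV", "MMR"]))    # 5
print(uncertainty_score(["MM", "MMR", "EV"]))    # None (removed from the scale)
```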
[Figure 21.2 appears here: histogram of the frequency (percent of participants) of each uncertainty scale score, 0–6, shown separately for Treatments 1 and 2.]

Figure 21.2 Histogram of participant criteria selection for scenarios 1–3.
Treatment 2

The data from Treatment 2 are less clear than those from Treatment 1. We cannot reject the null hypothesis of random choice among all three prospects for Scenario 1 (for the χ2 test of proportions, p-value = 0.191), although this hypothesis can be rejected for Scenarios 2 and 3 (both p-values 0.01 or less). The proportion of participants making choices consistent with MM, MMR, and EV is different across scenarios (a Friedman test of proportions of H0: proportion choosing MM in Scenario 1 = proportion choosing MM in Scenario 2 = proportion choosing MM in Scenario 3, yielded a p-value of 0.009). Under Treatment 2, behavior generally seems to be more random. Examining the "uncertainty tolerance" scale data for Treatment 2 (see Figure 21.2), behavior is more "conservative," but also less smoothly distributed than under Treatment 1. Under Treatment 2, 12.5 percent of participants were classified as "inconsistent" (i.e. they made choices consistent with all three choice criteria).

Comparison of Treatments 1 and 2

It appears there is some difference between behavior in Treatments 1 and 2, and that this behavior changed in a predictable way; i.e. as Treatment 2 payoffs involve the same risk and less reward than Treatment 1, we see more conservative choices made. To draw statistical conclusions regarding the observed behavior, we
use a multinomial logit regression model.7 Variable definitions and descriptive statistics are presented in Table 21.5. Results of the regression analysis from both a trimmed and an extended model (including demographic variables) are presented in Tables 21.6 and 21.7, respectively.

Generally, the regression analysis confirms the observations made earlier, although some non-intuitive results also appear. In Table 21.6, which presents results from a simple model including only treatment parameters (Model 1), we see that participants move towards MM under Treatment 2, but, curiously, also towards EV. Participants choosing among the prospects of Scenario 2 are also more likely to choose consistent with MM under Treatment 2. Scenario 2 pushes choices towards MM, perhaps because MM has a high guaranteed payout, inducing participants to choose more conservatively. Two other results from Table 21.6 also fit well with intuition. First, risk attitude is correlated with behavior in a sensible way; i.e. a lower level of risk aversion is correlated with increased choice of EV. Second, when participants assessed a subjective probability of 50 percent of each outcome occurring, they were more likely to choose EV. Puzzlingly, under the same circumstances participants were also more likely to choose MM.

Table 21.5 Variable definitions and descriptive statistics

| Variable name | Definition | Mean (St. Dev.) |
|---|---|---|
| 50–50 | Dummy variable; = 1 if participant indicated s/he thought the actual probability of each outcome occurring was 50% | 0.364 (0.482) |
| RiskAtt | 0 to 10 scale of risk aversion from Holt and Laury (2002); a higher score indicates more risk aversion | 5.792 (2.083) |
| Scenario 2 | Dummy variable; = 1 if data taken from Scenario 2 | 0.333 (0.472) |
| Scenario 3 | Dummy variable; = 1 if data taken from Scenario 3 | 0.333 (0.472) |
| Treat 2 | Dummy variable; = 1 if data taken from Treatment 2 | 0.325 (0.469) |
| SocSci | Dummy variable; = 1 if participant majors in a social science | 0.247 (0.432) |
| PsychExp | Dummy variable; = 1 if participant has previously participated in a psychology experiment | 0.234 (0.424) |
| Age | Participant's age | 23.286 (4.747) |
| Married | Dummy variable; = 1 if married | 0.247 (0.432) |
| Male | Dummy variable; = 1 if male | 0.688 (0.464) |
| Kids | Dummy variable; = 1 if participant has one or more children | 0.234 (0.801) |
| White | Dummy variable; = 1 if white | 0.779 (0.416) |
| Republican | Dummy variable; = 1 if participant is a member of the Republican Party | 0.377 (0.486) |
| HrWage | Participant's declared hourly wage | 7.515 (4.938) |
| WSU | Dummy variable; = 1 if participant was recruited from the Weber State University campus | 0.714 (0.453) |
Table 21.6 Multinomial logit model 1

| Variable | Choice = minimax | Choice = EV max |
|---|---|---|
| Constant | –2.583** (0.6589) | 0.594 (0.6474) |
| 50–50 | 0.730* (0.3611) | 1.027* (0.4299) |
| RiskAtt | 0.125 (0.0893) | –0.508** (0.1140) |
| Scenario 2 | 0.840* (0.4047) | 0.177 (0.4866) |
| Scenario 3 | –0.455 (0.4432) | –0.450 (0.4924) |
| Treat 2 | 1.431** (0.3720) | 1.320** (0.4509) |
| No. observations | 231 | |
| Log likelihood | –187.5321** | |
| Restricted log likelihood | –221.2524 | |

Notes: Numbers in parentheses are standard errors; * and ** indicate significance at the 5 percent and 1 percent levels, respectively. In Treatment 2, the choice of EV max corresponds to choosing airplane in Scenario 1, Prospect 3 in Scenario 2, and Persimmon Bay in Scenario 3.
Table 21.7 Multinomial logit model 2

| Variable | Choice = minimax | Choice = EV max |
|---|---|---|
| Constant | –5.181 (1.8701) | –3.267 (2.2822) |
| 50–50 | 0.879* (0.4119) | 0.614 (0.5292) |
| RiskAtt | 0.108 (0.095) | –0.537** (0.1337) |
| Scenario 2 | 0.881* (0.4149) | 0.161 (0.5215) |
| Scenario 3 | –0.469 (0.4514) | –0.508 (0.5327) |
| Treat 2 | 1.293* (0.5042) | 0.361 (0.6167) |
| SocSci | 0.143 (0.4531) | –0.231 (0.5828) |
| PsychExp | 1.076* (0.4639) | 0.257 (0.7065) |
| Age | 0.111 (0.0785) | 0.210* (0.0852) |
| Married | –0.256 (0.5514) | –0.585 (0.7186) |
| Male | –0.413 (0.4488) | 0.325 (0.6323) |
| Kids | –0.623 (0.4364) | –0.191 (0.4818) |
| White | 0.135 (0.5058) | –0.230 (0.5546) |
| Republican | 0.559 (0.4553) | 1.312* (0.5638) |
| HrWage | –0.026 (0.0448) | –0.111 (0.0593) |
| WSU | 0.196 (0.5769) | –0.093 (0.6749) |
| No. observations | 231 | |
| Log likelihood | –172.5187** | |
| Restricted log likelihood | –221.2524 | |

Note: Numbers in parentheses are standard errors; * and ** indicate significance at the 5 percent and 1 percent levels, respectively.
The results of a more extended model (Model 2), including demographic variables, are presented in Table 21.7. Here we see again that participants are more likely to choose MM under Treatment 2 than under Treatment 1. We also see that lower levels of risk aversion lead to a higher likelihood of choosing EV, supporting the notion that MM and MMR are the more conservative decision criteria. Most of the socio-economic and demographic information we collected does not help predict behavior, with one notable exception: previous participation in a psychology experiment. Students who were previous participants are more likely to choose MM, the most conservative strategy. We suspect these participants have learned not to trust experimenters (many psychology experiments involve deception), and so try to guarantee themselves the highest payoff possible assuming the uncertainty will always be resolved to the experimenters' favor.
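To indicate how estimates like those in Table 21.6 are produced, here is a minimal sketch using synthetic data and hypothetical column names; this is not the authors' code or data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 231  # participant-scenario observations, as in Table 21.6
df = pd.DataFrame({
    "choice":      rng.integers(0, 3, n),   # 0 = MMR (base), 1 = MM, 2 = EV max
    "fifty_fifty": rng.integers(0, 2, n),
    "risk_att":    rng.integers(0, 11, n),  # Holt-Laury style 0-10 scale
    "scenario2":   rng.integers(0, 2, n),
    "scenario3":   rng.integers(0, 2, n),
    "treat2":      rng.integers(0, 2, n),
})
X = sm.add_constant(df[["fifty_fifty", "risk_att", "scenario2",
                        "scenario3", "treat2"]])
result = sm.MNLogit(df["choice"], X).fit(disp=0)
print(result.summary())  # one coefficient column per non-base outcome
```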
Discussion and conclusions

Before discussing the conclusions and implications of this research, we should mention some of its limitations. First, as with any experiment, the decisions participants make are artificial, and generalizing from our results should be done with care. Second, because the five scenarios are framed differently, there is the possibility that framing effects may be tainting our results. For example, Scenario 1 involves a decision about travel with weather as the source of uncertainty, while Scenario 3 asks the decision maker to make a different sort of travel decision, and the uncertainty comes from the decisions of other fishermen. The scenarios are analytically similar, and the decision to frame them differently was made in order to maintain participant interest, but the tradeoff is some ambiguity in the interpretation of our results. In addition, we did not randomize the order of the scenarios or the physical position of the choices on the screen; these two framing effects may also have had an impact on participants' decisions.

Across this set of exploratory experiments, a strong majority of participants (approximately 90 percent) showed some degree of consistency in their choice of decision strategy, but the criterion used was mixed within the pool and was affected by the risk–reward structure that varied by treatments. Interestingly, participants in our experiments did not seem to use the minimax choice criterion particularly often. This result seems to imply that at least one of the axioms of rationality postulated by Arrow and Hurwicz (1972) is being violated by a fairly large number of participants. Because minimax regret violates the independence of irrelevant alternatives assumption, and the results appear more consistent with MMR than minimax, this axiom seems a likely starting point for more research. A fruitful analysis would seem to use the minimax regret criterion as a starting point, but look for some other parameters to model to help improve predictive power. For example, Milnor's Hurwicz criterion has the decision maker choose a parameter α to weight the minimum and maximum values in an expected value-type maximization. In addition, the more recent work of Nehring (2000)
presents the possibility that decision makers may use a sort of mixed strategy (for example, a decision maker might pick auto with 40 percent probability and airplane with 60 percent probability in Scenario 1) that is a function of all the possible payoffs in the decision-making process. This approach differs from Arrow and Hurwicz (1972) in emphasizing that the decision maker should weight all the payoffs, not just the minima and maxima. Given our mixed results, it remains an open question whether participants are using some choice criterion other than the three we have presented; a variety of criteria may well be present in any given population.

While there is no predominant (clearly superior) decision strategy in our results, a majority of decisions under hard uncertainty in this stylized experimental setting were consistent with the MMR criterion. This provides at least some modicum of support for Palmini’s (1999) argument for a minimax regret justification for policy approaches such as the SMS. More specifically, as one of the original motivations for this research, the literature on SMS-type approaches emphasizes that they are collective choice processes that are particularly appropriate in complex settings with pluralistic perspectives (Randall and Farmer, 1995; Castle, 1996; Berrens, 2001). Future experimental designs may need to seek ways to add or emphasize irreversibility, collective (group) choice, and pluralistic moral perspectives. But experimental methods are being applied to an increasingly complex set of tasks and context-rich settings (Harrison and List, 2004).

A particularly important set of future treatments would vary the impact of the decision from one individual to groups of individuals. For example, to simulate better the decision policy makers or legislators face (say, under the Endangered Species Act, or in potential revisions), one treatment could ask one participant to choose a prospect for all the participants in the group. We expect this treatment would induce more “conservative” decisions, but it is unclear how to operationalize “conservative” – as minimax or as minimax regret. It is quite possible that an individual may use a different strategy if their choice affects the entire group. A variety of authors (e.g. Sagoff, 1986) have argued that individuals may distinguish between their consumer preferences and their citizen preferences. For example, when choosing only for himself, an individual may simply use the minimax strategy, but when making a decision that will affect the group or a greater collective, that same individual may use minimax regret instead. The size of the benefits (and hence the regret) could also interact with the group versus individual treatment effect.

More generally, future research into decision making under hard uncertainty has some obvious areas that merit investigation. One is differentiation among other competing approaches, such as those suggested by Milnor, like the Hurwicz criterion. The impact of increasing the number of prospects beyond three should be investigated. Allowing more than two possible events (say A, B, and C) would create a much richer decision environment and permit more complex criteria. Future experiments should also investigate the impact of partial information on decision making (for example, the subject is told events
A, B, or C will occur, and the probability of event A occurring is 10 percent, but no information about B or C is given).

In closing, there is considerable room for improving our understanding of how people make choices under situations of hard uncertainty. One important example of the relevance of such understanding is in debates over the “rationality” of Safe Minimum Standard (SMS) approaches to the protection of endangered species and biodiversity (Farmer and Randall, 1998). But there are certainly other important arenas, such as the expected far-future impacts of global climate change (Woodward and Bishop, 1997), or the gap between the rapid introduction of nanotechnologies and their expected social and environmental impacts (Mnyusiwalla et al., 2003). Given such dilemmas, experimental economics offers a low-cost tool for learning about behavior and for incrementally improving our understanding. We hope that this exploratory research stimulates additional research into decision making under hard uncertainty.
Notes
1 Case study applications of the SMS remain limited (e.g. Bishop, 1980; Rogers and Sinden, 1994; Berrens et al., 1998 and 1999; Farmer, 2001; Solomon et al., 2005; Drucker, 2006). Global climate change is another problem that is characterized by uncertainty and irreversibility. See Woodward and Bishop (1997) for a discussion.
2 See Schmeidler (1989) for a description of Choquet Expected Utility. If there is a trend in the literature, it is towards a greater acceptance of non-additive utility theories, of which Choquet is an example. These theories allow the sum of the decision maker’s subjective probabilities over a distribution of outcomes to be other than one.
3 The experiment is online, available at: faculty.weber.edu/tgrijalva/SMS/SMSpage.htm (accessed November 2003).
4 Although participants made decisions for all five scenarios, for the purposes of this chapter we will focus only on the first three scenarios.
5 The probabilities were unknown to the experimenters as well. Random amounts of pink and white candies were placed in each bag, and no attempt was made to count the candies before, during or after the experiments.
6 We have also tested for differences between the responses of Weber State University and College of Charleston undergraduates and found little evidence of a difference.
7 Four participants had missing values for either RiskAtt or 50–50, decreasing the data set by 12 observations (three data points for each participant).
References
Arrow, K.J. and Hurwicz, L., 1972. An optimality criterion for decision-making under ignorance. In: C.F. Carter and J.L. Ford, eds. Uncertainty and Expectations in Economics: Essays in Honour of G.L.S. Shackle. Oxford: Basil Blackwell, 1–11.
Arrow, K.J., Daily, G., Dasgupta, P., Levin, S., Maler, K-G., Maskin, E., Starrett, D., Sterner, T., and Tietenberg, T., 2000. Managing ecosystem services. Environmental Science and Technology, 34, 1401–1406.
Barbera, S. and Jackson, M., 1988. Maximin, leximin and the protective criterion: characterization and comparisons. Journal of Economic Theory, 46 (1), 34–44.
Batie, S., 1989. Sustainable development: challenges to the profession of agricultural economics. American Journal of Agricultural Economics, 71 (5), 1083–1101.
Berrens, R., 2001. The safe minimum standard of conservation and endangered species: a review. Environmental Conservation, 28 (2), 104–116.
Berrens, R., McKee, M., and Farmer, M., 1999. Incorporating distributional considerations in the safe minimum standard approach: endangered species and local impacts. Ecological Economics, 30 (3), 461–474.
Berrens, R., Brookshire, D., McKee, M., and Schmidt, C., 1998. Implementing the safe minimum standard approach: two case studies from the US endangered species act. Land Economics, 74 (2), 147–161.
Bishop, R.C., 1978. Endangered species and uncertainty: the economics of a safe minimum standard. American Journal of Agricultural Economics, 60 (1), 10–18.
Bishop, R.C., 1980. Endangered species: an economic perspective. Transactions of the North American Wildlife Conference, 45, 208–218.
Bulte, E. and Van Kooten, G.C., 2000. Economic science, endangered species and biodiversity loss. Conservation Biology, 14 (1), 113–120.
Camerer, C., 1999. Ambiguity-aversion and non-additive probability: experimental evidence, models and applications. In: L. Luini, ed. Uncertain Decisions: Bridging Theory and Experiments. Boston, MA: Kluwer Academic Publishers, 53–79.
Camerer, C. and Weber, M., 1992. Recent developments in modeling preferences: uncertainty and ambiguity. Journal of Risk and Uncertainty, 5 (4), 325–370.
Castle, E., 1996. Pluralism and pragmatism in the pursuit of sustainable development. In: W. Adamowicz, P. Boxall, M. Luckert, W. Phillips, and W. White, eds. Forestry, Economics and Environment. Wallingford, UK: CAB International, 1–9.
Castle, E. and Berrens, R., 1993. Economic analysis, endangered species and the safe minimum standard. Northwest Environmental Journal, 9, 108–130.
Ciriacy-Wantrup, S.V., 1952. Resource Conservation: Economics and Policy. Berkeley, CA: University of California Press.
Drucker, A., 2006. An application of the use of safe minimum standards in the conservation of livestock biodiversity. Environment and Development Economics, 11 (1), 77–94.
Ellsberg, D., 1961. Risk, ambiguity and the Savage axioms. Quarterly Journal of Economics, 75 (4), 643–669.
Farmer, M., 2001. Getting the safe minimum standard to work in the real world: a case study in moral pragmatism. Ecological Economics, 38 (2), 209–226.
Farmer, M. and Randall, A., 1998. The rationality of a safe minimum standard. Land Economics, 74 (3), 287–302.
Gilboa, I., 1987. Expected utility with purely subjective non-additive probabilities. Journal of Mathematical Economics, 16 (1), 65–88.
Harrison, G. and List, J., 2004. Field experiments. Journal of Economic Literature, 42 (4), 1009–1055.
Hogarth, R.M. and Kunreuther, H., 1995. Decision-making under ignorance: arguing with yourself. Journal of Risk and Uncertainty, 10 (1), 15–36.
Hohl, A. and Tisdell, C.A., 1993. How useful are environmental safety standards in economics? The example of safe minimum standard for protection of species. Biodiversity and Conservation, 2 (2), 168–181.
Holt, C.A. and Laury, S.K., 2002. Risk aversion and incentive effects. American Economic Review, 92 (5), 1644–1655.
Kelsey, D. and Quiggin, J., 1992. Theories of choice under ignorance and uncertainty. Journal of Economic Surveys, 6 (2), 133–153.
Knight, F., 1921. Risk, Uncertainty and Profit. Boston, MA: Houghton Mifflin.
Maskin, E., 1979. Decision making under ignorance with implications for social choice. Theory and Decision, 11 (3), 319–337.
Milnor, J., 1964. Games against nature. In: M. Shubik, ed. Game Theory and Related Approaches to Social Behavior. New York: Wiley, 20–31.
Mnyusiwalla, A., Daar, A., and Singer, P., 2003. Mind the gap: science and ethics in nanotechnology. Nanotechnology, 14, R9–R13.
Nehring, K., 2000. A theory of rational choice under ignorance. Theory and Decision, 48 (3), 205–240.
Palmini, D., 1999. Uncertainty, risk aversion, and the game theoretic foundations of the safe minimum standard: a reassessment. Ecological Economics, 29 (3), 463–472.
Quiggin, J.C., 1982. A theory of anticipated utility. Journal of Economic Behavior and Organization, 3 (4), 323–343.
Randall, A., 1991. The value of biodiversity. Ambio, 20 (2), 64–68.
Randall, A. and Farmer, M., 1995. Benefits, costs and the safe minimum standard of conservation. In: D. Bromley, ed. The Handbook of Environmental Economics. Cambridge, MA: Blackwell, 26–44.
Ready, R.C. and Bishop, R.C., 1991. Endangered species and the safe minimum standard. American Journal of Agricultural Economics, 73 (2), 309–312.
Rogers, M. and Sinden, J., 1994. Safe minimum standards for environmental choices: old-growth forest in New South Wales. Journal of Environmental Management, 41 (2), 89–99.
Sagoff, M., 1986. Values and preferences. Ethics, 96 (2), 301–316.
Schmeidler, D., 1989. Subjective probability and expected utility without additivity. Econometrica, 57 (3), 571–587.
Solomon, B., Corey-Luse, C., and Halvorsen, K., 2005. The Florida Manatee and ecotourism: toward a safe minimum standard. Ecological Economics, 50 (1–2), 101–115.
Tisdell, C., 1990. Economics and the debate about preservation of species, crop variety and genetic diversity. Ecological Economics, 2 (1), 77–90.
Toman, M., 1994. Economics and sustainability: balancing trade-offs and imperatives. Land Economics, 70 (4), 399–413.
Van Kooten, G.C. and Bulte, E., 2002. The Economics of Nature: Managing Biological Assets. Malden, MA: Blackwell Publishers.
Vaughn, G., 1997. Siegfried von Ciriacy-Wantrup and his safe minimum standard of conservation. Choices, 12, 30–33.
Vercelli, A., 1999. The recent advances in decision theory under uncertainty. In: L. Luini, ed. Uncertain Decisions: Bridging Theory and Experiments. Boston, MA: Kluwer Academic Publishers, 237–260.
Woodward, R.T. and Bishop, R.C., 1997. How to decide when experts disagree: uncertainty-based choice rules in environmental policy. Land Economics, 73 (4), 492–507.
WSTB (Water Science and Technology Board), 2004. Valuing Ecosystem Services: Towards Better Environmental Decision-Making. National Research Council of the National Academies. Washington, DC: National Academies Press.
22 Rationality spillovers in Yellowstone
Chad Settle, Todd L. Cherry, and Jason F. Shogren
Introduction

Rationality on the part of economic agents is presumed in economic models. A rational consumer has experience in markets, has experience with the available bundles of goods, and has clearly defined preferences over those bundles. This rationality has come into question in the literature for a variety of reasons, one being the inconsistent choices consumers make when stating a preference between two bundles of goods and then being asked their willingness to pay for each – preference reversals (Grether and Plott, 1979).1 The phenomenon of preference reversals shows a potential failure in economic theory.2

Constraints on individuals’ cognition and humankind’s physiological limits on cognition play a role in the ultimate ability to process information (see Simon, 1955, 1990; Heiner, 1983). But the limits cognition places on the types of rationality assumed in economics may or may not bind. It is possible that economic agents can learn from market experience, so that inexperience, rather than cognition, is the limiting factor. Do these reversals lend credence to the theory that consumers are not rational? Can we achieve rationality in this context, or are preference reversals persistent?

The observed differences between the optimizing behavior assumed in economic theory and the actual behavior of people can narrow through repeated market transactions with large enough sums of money at stake for the individual (see Smith, 1989; Smith and Walker, 1993; Shogren, 2006). Recent research has shown rationality can be increased in situations in which preference reversals are prevalent (Cherry et al., 2003). If preference reversals are a sign of irrationality, if rationality can be learned through market experience, and if rationality can spill over from one market to another, then it is the experience in the market that makes consumers rational. Market experience can then be used as a tool to help consumers make more rational decisions.

While the work of Cherry et al. (2003) has shown rationality spillovers exist in laboratory experiments, these laboratory findings still need to be applied to specific problems. The laboratory experiments have shown that rational behavior can spill over from one market to another – from a market context to a hypothetical context with a low probability, high severity event, such as an environmental good. Applying this method to a
specific problem requires that we take the theory from the laboratory to the field and target people who are interested in these non-marketed goods. If an interactive survey can be used in the field to elicit preferences for and values of bundles of environmental goods, we can not only elicit values, but those values will come from a more rational consumer – the values may well be closer to the consumer’s true willingness to pay.3

This research is an attempt to take the rationality spillover design from the laboratory into the field. We wish to determine preferences for and values of seeing species in and around Yellowstone Lake that might be affected by the introduction of an exotic species, lake trout. Lake trout are exotic to Yellowstone Lake and prey on the native and popular cutthroat trout. Cutthroat trout are not only important to fishermen who come to Yellowstone Lake to fish for them, but are also an important food source for grizzly bears, osprey, white pelicans, river otters, and many other species in Yellowstone National Park.4 Not only are cutthroat trout expected to decline in number, but the other species relying on cutthroat trout for food may decline as well.

The chapter proceeds as follows. We next discuss the issues with the implementation of the experiment and provide an overview of participants’ views on the lake trout issue in Yellowstone Lake. We then describe the specific challenges of gathering data in this setting, since this experiment was designed to gather information for a larger project. A detailed description of the experiment as presented to each participant is followed by the results of the experiment and the conclusion.
Implementation

As with any experiment, whether conducted in a laboratory or over the internet, issues of subject control are important. Some of the issues listed below were due to the particular experimental design used, and can easily be changed in future experiments; others were due to the experiment being conducted over the internet and were beyond our control, given current technology and given that we could not physically be in the same room as the participants. In the internet experiment we did not control for:

• whether people started the experiment, took a break, and then restarted it at a later date before finishing the experiment;
• the number of practice rounds the participants played;
• the number of people who could be looking at the screen at the same time, helping the participant think about how to make his or her choices;
• a consistent physical environment for all participants.
We implement the experiment over the internet by recruiting participants from two sources – newsgroups and the New York Times. For the first source, we posted announcements to environmental newsgroups on the internet. The
number of participants from the newsgroups was too low and more data needed to be collected, which led to the second source, an advertisement. We ran a paid ad on the New York Times web page to gather participants. Approximately 250 people were recruited from the New York Times web page, compared with fewer than 20 from the newsgroups. The hit rate on the banner advertisement, 0.9 percent, was nearly double the industry standard of 0.5 percent. Possible explanations for the high hit rate were the average earnings of $20 for participation and the cleverness of the banner ad. The campaign ran from 7 June to 6 July 2000 at various locations on the New York Times web page.

Participant information

A total of 269 people completed the internet experiment. Of these 269, 82 had actually visited Yellowstone National Park. The first important distinction to make is differentiating fishermen from all other visitors. While about 1 percent of the people who visit Yellowstone National Park go there to fish (about 25,000 out of approximately 3,000,000 total annual visitors), 5.6 percent of all participants in the experiment classified themselves as fishermen. If we consider only participants who had actually visited Yellowstone National Park, the percentage who classified themselves as fishermen jumps to 17.1 percent.

Another important distinction to make is how familiar people are with the problem of the lake trout introduction into Yellowstone Lake. Participants were asked how familiar they were with the problem and could respond with “well informed”, “moderately informed”, “barely informed”, or “not informed at all”. Table 22.1 provides summary statistics on familiarity for both the entire participant group and only those who visited the park.5 Not surprisingly, people who had actually visited Yellowstone National Park were more familiar with the problem of the introduction of lake trout into Yellowstone Lake than were people who had never been to the park. Limiting the participants to just those who had visited the park increased the percentage of people who were “well informed”, “moderately informed”, and “barely informed”, while decreasing only “not informed at all”.

Table 22.1 Familiarity of participants to the lake trout introduction (%)
                    Well informed   Moderately informed   Barely informed   Not informed at all
All participants    5.6             12.6                  25.3              56.5
Visitors            14.6            17.1                  29.3              39.0
Next, we look at the perceived seriousness of the introduction of lake trout into Yellowstone Lake. Participants were asked how serious they thought the problem was and could respond with “very serious”, “moderately serious”, “barely serious”, or “not serious at all”. We once again divide the participants into two groups, all participants and those who have been to the park. Table 22.2 summarizes these results.

Table 22.2 Participants’ perceptions of the seriousness of the lake trout introduction (%)

                    Very serious   Moderately serious   Barely serious   Not serious at all
All participants    25.6           53.9                 13.8             6.7
Visitors            31.7           54.9                 9.8              3.6
Once again, participants who had visited the park had stronger views on the seriousness of the problem. Visitors viewed the problem as more serious, with a higher frequency of “very serious” and “moderately serious” responses and a lower rate of “barely serious” and “not serious at all”, although the contrast between visitors and non-visitors is not as large here as it was for familiarity in Table 22.1.

An important indicator of preferences for this particular problem is fish preference. Participants were asked which fish they would prefer to catch at Yellowstone Lake and had to respond with one of the following: “cutthroat trout”, “lake trout”, “doesn’t matter”, or “I don’t fish” (see Table 22.3). While both visitors and non-visitors show a preference for cutthroat trout over lake trout, visitors show stronger preferences: 46.4 percent of visitors had a fish preference, while only 34.2 percent of all participants did. This could be a direct result of the composition of the visitor group – people who had visited the park were much more likely to have fished Yellowstone Lake than participants as a whole. Beyond the percentage of people who had a preference, the direction of preference is consistent: people prefer to catch cutthroat trout rather than lake trout at Yellowstone Lake.

Finally, we consider how changes to species populations might affect visitors’ decisions to come to the park. We asked participants whether a decreased chance of seeing a cutthroat trout, grizzly bear, bird of prey, or the core attractions of the park would decrease their chance of coming to the park. Responses were not limited to one species – a particular person could say they would be less likely to come to the park if they had a decreased chance of seeing any or all of the species in the park.
Table 22.3 Participants’ preference for fish (%)
                    Cutthroat trout   Lake trout   Doesn’t matter   I don’t fish
All participants    28.3              5.9          20.1             45.7
Visitors            36.6              9.8          18.3             35.4
Table 22.4 Percentage of participants affected by seeing attractions of the park (%)
                    Cutthroat trout   Bird of prey   Grizzly bear   Core attractions
All participants    13.4              43.1           61.7           77.7
Visitors            31.7              54.9           72.0           78.0
Table 22.4 summarizes the results. Visitors to the park are once again more affected by the presence of the different species around Yellowstone Lake, being more likely to visit the park less often if their chance of seeing any of the four attractions were reduced.

The intention of the experiment was to: (1) gather preference data as well as valuation estimates of the park species goods, and (2) induce more rational behavior by participants. The specifics of the commodity bundles in the experiment are presented next.
Yellowstone application

In designing the experiment for this case, we had specific uses for the data, which had to be incorporated into our integrated economic system–ecosystem model (Settle and Shogren, 2006). The design needed to elicit information on visitors’ preferences and values for seeing each of the species in and around Yellowstone Lake and for catching both lake trout and cutthroat trout. These values, however, are probabilistic in nature. When going to Yellowstone National Park, visitors do not purchase a sighting of a grizzly bear, views of all of the core attractions of the park, and a guarantee that they will catch two cutthroat trout and a lake trout. Instead, visitors purchase a probability of seeing the species and attractions in the park, and a probability of catching fish if they spend time fishing.

This reality of what visitors are purchasing with a visit to Yellowstone National Park matches up well with the lotteries used in the laboratory experiments of Cherry et al. (2003). In the laboratory, participants do not purchase a guaranteed good, but rather a lottery with a distribution of outcomes. In the park, visitors likewise purchase a “lottery” of hypothetical environmental goods with a distribution of outcomes. The environmental goods in the environmental lotteries were the species and goods we were analyzing in our integrated model: lake trout, cutthroat trout, grizzly bears, birds of prey (including osprey and white pelicans), and the core attractions of the park. Each of these goods had a probability attached to seeing or catching it in the lottery.
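As a concrete (and purely hypothetical) illustration, such a commodity bundle can be represented as a set of goods with attached probabilities; none of the probability numbers below come from the actual design, and the independence across goods is our simplifying assumption.

```python
import random

# Hypothetical wildlife lottery: each environmental good carries a probability
# of being seen or caught on a visit. All numbers are illustrative placeholders.
wildlife_lottery = {
    "catch cutthroat trout": 0.30,
    "catch lake trout":      0.02,
    "see grizzly bear":      0.05,
    "see bird of prey":      0.40,
    "see core attractions":  0.90,
}

def resolve_visit(lottery, rng):
    """Realize one visit: each good occurs independently with its probability."""
    return {good: rng.random() < p for good, p in lottery.items()}

rng = random.Random(0)
print(resolve_visit(wildlife_lottery, rng))
```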
The next step in designing the experiment was to include a range of possible probability distributions for each of the species. In a perfect world, we would have estimated the value of all possible probabilities, from 0 percent to 100 percent, for all species. In implementation, that would have greatly increased the number of lottery pairs and pushed the required number of participants beyond what was feasible. To reduce the number of lottery pairs, and to increase the number of observations per pair in the process, we had to limit the probability distributions to those of utmost importance. If an average visitor has only a 5 percent chance of seeing a grizzly bear, gathering information on how much someone would pay for an 80 percent chance of seeing a grizzly bear is not a high priority for our model. Likewise, if an average visitor has only a 2 percent chance of catching a lake trout, and only a 10 percent chance even if he spent all of his time fishing, gathering information on how much someone would pay for a 90 percent chance of catching a lake trout is equally imprudent. We only need to gather information around a likely outcome function.

We therefore culled the list of possible lottery pairs to those most useful. The first step was to determine the approximate probability of the average visitor seeing each of the species in the park, and then to determine how these probabilities would likely change with the amount of time the visitor spent at each activity. Aggregate data were available both for fishing (for lake trout and cutthroat trout) and for seeing birds of prey and grizzly bears. These data were used to estimate the probability of a visitor catching or seeing each species as a function of time spent fishing versus driving around the park to see the core attractions, and gave us baseline estimates of how likely a person would be to see or catch each species. We then used these estimates to guide which lotteries to include among our wildlife lotteries.6

Unfortunately for us, but fortunately for visitors to the park, the probability of catching a fish or seeing the core attractions of the park is strongly correlated with the amount of time spent fishing or driving around the park. A visitor who wants to increase the probability of catching a fish can simply spend more time fishing. We therefore had to include some high probability items for individual species. Similar to Cherry et al. (2003), we had some high probability, low severity events and some low probability, high severity events. The high probability events came from someone targeting a particular species – catching a trout. The low probability events came from someone trying to enjoy everything in the park – trying to see every species, catch both trout species, and see the core attractions – and thus having a low probability of achieving all of their goals. A total of 90 lotteries were included.

The data required from the experiment were valuations of visits. The information from the experiment can then be used to estimate the value of each environmental good; the estimation of these values and their use in determining optimal action are included in Settle and Shogren (2006).
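One simple way to picture the time–probability link described above is a repeated-trials model in which each hour of fishing gives an independent chance of a catch. Both the functional form and the hourly rate below are our illustrative assumptions, not the sighting and viewing functions actually estimated (those appear in Settle and Shogren, 2006).

```python
def catch_probability(hours, hourly_chance):
    """P(at least one catch) = 1 - (1 - q)^t under independent hourly trials.
    The functional form and rate are illustrative assumptions only."""
    return 1 - (1 - hourly_chance) ** hours

for t in (1, 4, 8):
    print(t, round(catch_probability(t, 0.05), 3))
# more time fishing -> higher catch probability: 0.05, 0.185, 0.337
```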
Experimental design

Before discussing the results of the internet experiment, we describe the specifics of the design and implementation of the experiment. The experimental instructions included seven steps.

The first step described the options and situations to the participants. Descriptions of both the money options and the wildlife options were given: the money options had particular probabilities associated with payoffs to the participant, with each situation having two options to choose from, while the wildlife options had specific probabilities of seeing and/or catching species in Yellowstone National Park and were hypothetical (they would not be realized). Examples of the two options in each of the two situations were given to the participants, so they could see actual screen shots similar to what they would be presented with in the experiment. The beginning money balance, held prior to choosing a preferred option for each situation, was explained.

Step 2 provided a brief explanation that the participant had to choose which of the two options given under each situation they preferred. Participants were told their stated preferences were binding contracts – if they preferred option A to option B, they might have to trade the option they held (option B) for the option they preferred (option A).

Step 3 included an overview of how the participant chose the money value associated with each option they faced. Participants were given time to place a monetary value on each option, and once again these were binding contracts – if they were willing to pay $X.XX for option A, they might have to pay $X.XX for option A in the computer market.

Step 4 was a detailed description of how the computer market worked. The computer market could buy or sell options, but only at the person’s stated value given in step 3. It could trade options for one another, but only in accordance with the person’s stated preference (if they preferred option A to option B, they could be asked to give up option B for option A, but not option A for option B). The computer market would buy, sell, and trade only if it was beneficial for it to do so. No buying, selling, or trading took place in situation 2 (the wildlife situation). A person’s best strategy, laid out at the end of step 4, was to be honest about their values for the options, since inaccurate values could lead to forced trades or sales at values other than their true ones.

Step 5 was a short paragraph explaining how an option is played out after the transactions by the computer market. A random draw determined which of the two potential outcomes was realized from the option the participant held after the computer market had bought, sold, and traded options.

Step 6 provided the participant with an assessment of how the outcome realized in step 5 would change their real money balance at the end of that round. After each round, the participants played another round, until the experiment ended after round 10.

Step 7 concluded the description provided to each participant. Total earnings from the experiment were the sum of the ending balances over all ten rounds.
After step 7, each of the seven steps was summarized in one sentence to review the entire experimental process prior to allowing each participant to complete the experiment.
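The arbitrage logic behind the computer market in steps 2–4 can be sketched as a simple money pump. The cycle below is our reading of those rules, with invented prices: when a participant’s stated choice and stated values disagree, the market can trade, buy, and sell at the participant’s own terms and pocket the difference.

```python
def money_pump_profit(preferred, other, stated_value):
    """Market profit from one trade-buy-sell cycle when the participant says
    `preferred` beats `other` yet prices `other` higher (a reversal).
    Assumes the participant starts out holding the less-preferred option."""
    if stated_value[other] <= stated_value[preferred]:
        return 0.0  # stated choice and values agree: no profitable cycle
    # 1. trade `preferred` for the held `other` (accepted, by stated preference)
    # 2. buy `preferred` back at its stated value
    # 3. sell `other` back at its higher stated value
    return stated_value[other] - stated_value[preferred]

# Reversal: prefers A to B but would pay more for B; the market nets $1.50.
print(money_pump_profit("A", "B", {"A": 2.00, "B": 3.50}))
```

Because any inconsistency costs real money in situation 1, consistent and truthful reporting is the participant’s best strategy – which is the sense in which the market “arbitrages” irrational choices.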
Results

As in the original rationality spillover paper (Cherry et al., 2003), we start our analysis by investigating subjects’ preference reversal rates across periods of the experiment. A preference reversal is a situation in which a subject makes inconsistent preference orderings (i.e. a participant states a preference for lottery A over lottery B, yet is willing to pay more for lottery B than for lottery A). As in the initial rationality spillover paper, we constructed the lotteries so that preference reversals were most likely to occur – pairing a low-risk lottery (a high probability of paying off but with a low payout) with a high-risk lottery (a low probability of paying off but with a high payout). By using these lottery pairs, we hoped to observe preference reversals and then use arbitrage to induce rationality, arbitraging participants for irrational choices. Our experimental design in this case is limited to treatment 4 of the original rationality spillover paper (see Cherry et al., 2003), with one real money, arbitrage market and one hypothetical choice, non-arbitrage market over wildlife lotteries. Figure 22.1 shows the preference reversal rates for both the real money lotteries and the wildlife lotteries over the full ten rounds of the experiment.
Figure 22.1 Preference reversal rates.
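The reversal rates plotted in Figure 22.1 are straightforward to compute from the raw choice and valuation data. The records below are invented stand-ins for the real data, shown only to make the definition operational.

```python
# (round, stated choice, WTP for lottery A, WTP for lottery B) -- made-up rows
records = [
    (1, "A", 1.50, 2.25),   # says A, pays more for B -> reversal
    (1, "B", 0.75, 1.10),   # consistent
    (2, "A", 2.00, 1.20),   # consistent
]

def is_reversal(choice, wtp_a, wtp_b):
    """Stated choice and willingness to pay point in opposite directions."""
    return (choice == "A" and wtp_b > wtp_a) or (choice == "B" and wtp_a > wtp_b)

rates = {}
for rnd, choice, wa, wb in records:
    rates.setdefault(rnd, []).append(is_reversal(choice, wa, wb))
for rnd, flags in sorted(rates.items()):
    print(f"round {rnd}: reversal rate {sum(flags) / len(flags):.2f}")
```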
In the original rationality spillover paper, the preference reversal rates for each of the four treatments started between 30 and 40 percent in round 1 and, after arbitrage was introduced, began to fall. The preference reversal rate without arbitrage hovered in the 30–40 percent range for the entire experiment, while the preference reversal rate with arbitrage fell to at most 10 percent by round 15. In this experiment, preference reversal rates for both the real money and wildlife lotteries are initially between 13 and 14 percent. The reversal rates do not continue to fall over the experimental rounds; instead they track up and down, ending in round 10 between 11 and 14 percent. Reversal rates for both the real money lotteries and wildlife lotteries were not significantly different across time, with p-values between 0.4 and 0.6. This leads to our first result.

Result 1: Preference reversal rates in both the real market and wildlife settings do not significantly fall over time as the participants are arbitraged.

This result is surprising given the significant drop in preference reversal rates over time seen in the laboratory experiments (Cherry et al., 2003). An important difference exists between the laboratory experiments and the internet experiments, though. In the laboratory setting, no arbitrage was introduced until round 6 of the experiment, whereas the internet experiment allowed participants to play hypothetical practice rounds before engaging in the experiment. Given result 1, it appears likely that participants with high preference reversal rates engaged in more practice rounds to gain familiarity with the experimental method. The practice rounds in this experiment may have played the same role that rounds 6–15 played in the laboratory setting. If the beginning rounds of this experiment mimicked the final rounds of the laboratory setting, this experiment allows us to test for differences that may occur with repeated play after rationality has already set in for most participants.

We have already seen that preference reversal rates did not significantly decline in this experimental setting. To test whether participants were converging to one player type, we next test for differences between participants across the socio-economic data gathered in our experiment: whether age, gender, or income make a difference in preference reversal rates. We estimate both real money and wildlife preference reversals using Chamberlain’s logit model,7 given below (Chamberlain, 1980):

Pr(Yit = 1) = exp(Zit)/[1 + exp(Zit)]

where Yit = 1 if preferences are reversed and 0 if preferences are consistent,8 and Zit is a vector of attributes containing socio-economic information for the ith participant – the participant’s age, gender, and income category – as well as a time variable giving the round of the experiment for the individual lottery pair. Table 22.5 summarizes the results from each regression. In each regression the coefficient on time is statistically insignificant, mirroring the comparison of mean reversal rates across experimental rounds. In both regressions, age, gender, and income are all significant contributing factors in determining the preference reversal rate, yielding result 2.
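As a rough, self-contained illustration of this specification, the sketch below fits a plain pooled logit of the Pr(Yit = 1) = exp(Zit)/[1 + exp(Zit)] form on synthetic data. Chamberlain’s (1980) estimator additionally conditions out individual effects, which we do not reproduce here; every variable and coefficient in the snippet is invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2690                             # matches the sample size reported in Table 22.5
time = rng.integers(1, 11, n)        # round of the experiment
age = rng.integers(18, 70, n)
male = rng.integers(0, 2, n)

# Made-up data generating process for the reversal indicator.
z = -2.0 + 0.01 * age - 0.2 * male
reversal = (rng.random(n) < 1 / (1 + np.exp(-z))).astype(int)

X = sm.add_constant(np.column_stack([time, age, male]))
fit = sm.Logit(reversal, X).fit(disp=False)
print(fit.params)  # const, time, age, male
```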
Table 22.5 Logit regression results for preference reversals

Variable             Real money reversal   Wildlife reversal
Time                 –0.004 (0.019)        –0.029 (0.022)
Age                  0.010* (0.006)        0.013* (0.006)
Male                 –0.194* (0.116)       –0.254** (0.129)
Income category 2    0.019 (0.178)         –0.080 (0.190)
Income category 3    0.088 (0.194)         –0.566** (0.244)
Income category 4    0.046 (0.211)         –0.639** (0.266)
Income category 5    –0.385** (0.178)      –0.734*** (0.206)

Notes
Coefficient estimates presented with standard errors in parentheses. In both regressions, n = 2690. In the real money reversal regression, chi-squared = 23.92 (p-value = 0.0012). In the wildlife reversal regression, chi-squared = 12.66 (p-value = 0.081). *, **, and *** indicate significance at the 10 percent, 5 percent, and 1 percent levels, respectively.
Result 2: Preference reversal rates are dependent upon subjects’ socio-economic attributes.

This result sheds light on preference reversal rates across participants, in addition to the previous results on reversal rates across time. While participants’ preference reversal rates do not change significantly across time, they differ significantly with the personal characteristics of the individual. Age, gender, and income level all appear to determine how far preference reversal rates can fall within one experimental session: preference reversals were less likely for younger people, men, and higher income groups.
Concluding remarks

The assumption of a rational consumer in economic modeling comes into question in many instances, including when consumers reverse preferences. These preference reversals can decline in frequency when a consumer is placed in an arbitrage setting and forced either to hold consistent preferences or to lose money through arbitrage. Cherry et al. (2003) showed in a laboratory setting that preference reversals will not fall without the presence of arbitrage,9 but can significantly decline in frequency as consumers are introduced to arbitrage. This research extends that laboratory setting to the field by conducting the experiments both in Yellowstone National Park and over the internet. An important difference between the two settings – allowing participants to engage in as many practice rounds as they chose – leads to consistent preference reversal rates over time. However, this may be due to learning already taking place in the practice rounds. This research therefore also extends the previous work on preference reversals by extending the arbitrage setting beyond the point at which learning initially takes place. What we observe is
consistency in preference reversal rates over time, but significantly different preference reversal rates across subject types. Age, gender, and income level all play an important and statistically significant role in determining a participant’s preference reversal rate.

Further research into preference reversals could combine the insights from both types of settings within one experiment. Extending the number of rounds participants play, disallowing arbitrage in the practice rounds, controlling for socio-economic characteristics, and conducting the experiment both in the laboratory and in the field could shed important light on any further differences between laboratory and field testing of subjects.
Notes
1 See Conlisk’s (1996) summary article for a discussion on potential reasons and experimental evidence.
2 See Smith (1991) for a comparison of economics and psychology. For a discussion of how rationality affects law and economics, see Korobkin and Ulen (2000).
3 Evidence suggests arbitrage leads to a change in stated valuations instead of changes in preferences (Gunnarsson et al., 2003).
4 See Varley and Schullery (1995) for a detailed discussion of the lake trout introduction into Yellowstone Lake.
5 For a more complete analysis of visitor preferences regarding lake trout in Yellowstone Lake, see Cherry and Shogren (2001).
6 The specific functions determining probabilities of seeing and viewing species (sighting and viewing functions) and the underlying data are included in Settle and Shogren (2006).
7 For consistency, this is the same logit regression technique used in Cherry et al. (2003).
8 The regression is run twice, once with Yit as a preference reversal for the real money lottery and once with Yit as a preference reversal for the wildlife lottery.
9 Exposure to arbitrage must occur either in the market itself or in the paired market in the side-by-side experiments, in both Cherry et al. (2003) and this research.
References
Chamberlain, G., 1980, Analysis of Covariance with Qualitative Data, Review of Economic Studies, 47(1): 225–238.
Cherry, T. L. and J. Shogren, 2001, Invasive Species Management for the Yellowstone Lake Ecosystem: What do Visitors Think? Yellowstone Science, 9(2): 10–15.
Cherry, T. L., T. D. Crocker, and J. F. Shogren, 2003, Rationality Spillovers, Journal of Environmental Economics and Management, 45(1): 63–84.
Conlisk, J., 1996, Why Bounded Rationality? Journal of Economic Literature, 34(2): 669–700.
Grether, D. M. and C. R. Plott, 1979, Economic Theory of Choice and the Preference Reversal Phenomenon, American Economic Review, 69(4): 623–638.
Gunnarsson, S., J. Shogren, and T. Cherry, 2003, Are Preferences for Skewness Fixed or Fungible? Economics Letters, 80(1): 113–121.
Heiner, R., 1983, The Origins of Predictable Behavior, American Economic Review, 73(4): 560–595.
Korobkin, R. B. and T. S. Ulen, 2000, Law and Behavioral Science: Removing the Rationality Assumption from Law and Economics, California Law Review, 88(4): 1051–1144.
Settle, C. and J. F. Shogren, 2006, Does Integrating Economic and Biological Systems Matter for Public Policy? The Case of Yellowstone Lake, B. E. Journals in Economic Analysis and Policy, 6(1). Online, available at: bepress.com/bejeap/topics/vol6/iss1/art9/ (accessed January 2006).
Shogren, J., 2006, A Rule of One, American Journal of Agricultural Economics, 88(5): 1147–1159.
Simon, H., 1955, A Behavioral Model of Rational Choice, Quarterly Journal of Economics, 69(1): 99–118.
Simon, H., 1990, Invariants of Human Behavior, Annual Review of Psychology, 41(1): 1–19.
Smith, V. L., 1989, Theory, Experiment and Economics, Journal of Economic Perspectives, 3(1): 151–169.
Smith, V. L., 1991, Rational Choice: The Contrast between Economics and Psychology, Journal of Political Economy, 99(4): 877–897.
Smith, V. L. and J. M. Walker, 1993, Monetary Rewards and Decision Cost in Experimental Economics, Economic Inquiry, 31(2): 245–261.
Varley, J. D. and P. Schullery, 1995, The Yellowstone Lake Crisis: Confronting a Lake Trout Invasion. A Report to the Director of the National Park Service, Yellowstone Center for Resources, National Park Service, Yellowstone National Park.
23 Wind hazard risk perception
An experimental test
Bradley T. Ewing, Jamie B. Kruse, and Mark A. Thompson
Introduction

This study reports the results of an economic experiment designed to discover individual perception of risk and the tendency to insure against errors in calibrating the risk. We have chosen the context of straight-line wind hazard and the possibility of damage to manufactured housing at a series of induced high wind speeds. The choice of context has special importance in this case because manufactured housing (mobile homes) appears to be less resistant to wind damage in severe windstorms. For example, Hurricane Andrew, the destructive 1992 windstorm that cut across Florida south of Miami, destroyed 11 percent of conventionally built homes and 97 percent of the mobile homes in its path. In Homestead, FL, more than 99 percent (1167 of 1176) of mobile homes were completely destroyed (Rappaport, 1993).

The National Hurricane Center provides an explanation of the Saffir–Simpson scale in terms of wind speeds and potential damage (online, available at: nhc.noaa.gov/aboutsshs.shtml, accessed September 2007). For all hurricane categories, damage to mobile homes is distinguished from likely damage to other structures. For example, a Category Two hurricane, with estimated wind speeds from 96 to 110 miles per hour (mph), will produce “Some roofing material, door and window damage of buildings” whereas it will produce “considerable damage to mobile homes.” The repeated destruction of mobile homes in minor (lower than Category Three) windstorms led to requirements that manufactured housing adhere to Housing and Urban Development (HUD) Code requirements after 1994. In spite of the increased code requirements, damage assessment teams for the 2004 storm, Hurricane Charley, noted that mobile homes “sustained by far the highest degree of damage” (Adams et al., 2004). Taking this vulnerability into account, in conjunction with the fact that manufactured homes represent the fastest growing component of the residential building stock, a better understanding of the perception of wind damage risk is important in and of itself. On the one hand, this study presents basic research that examines individuals’ subjective assessments of risk and their propensity to transfer the risk. On the other hand, we seek to measure perception of wind damage risk for two types of factory-built housing: modular and manufactured (mobile) construction.
The decision to choose one type of housing over another is admittedly complex. Among the many attributes that enter into the decision is the belief that the structure will provide shelter under adverse conditions. This amounts to the individual perception of risk due to building failure during an extreme event. Although the home purchase decision reveals information about risk perception, the number of observable and unobservable sources of variation makes precise identification and measurement impossible. A laboratory economic experiment offers the control necessary to measure response to risk of wind damage. Further, the controlled wind engineering experiment provides a unique opportunity to describe the risk in a rich, replicable decision environment. We embed the wind engineering experiment in the economic experiment to elicit individual risk perceptions.

The wind engineering experiment used the propeller wash from a C-130 aircraft to generate full-scale wind speeds up to 100 mph that flowed over a modular home in one position and a mobile home in a second position. We asked participants in the economic experiment to predict the wind speed that would produce damage. Second, we offered experimental subjects the opportunity to transfer risk by selling their answers to the experimenter. This setup allows for the measurement of both risk perception and the confidence people place in their assessment of the risk. Both are important determinants of whether individuals will mitigate or seek insurance coverage. We will now outline the engineering and the economic experiments, and describe how they fit together in this study of human behavior.
The experiment(s)

C-130 propeller wash

The Wind Science and Engineering Research Center (WISE) at Texas Tech University (TTU), in collaboration with the National Institute of Standards and Technology and with cooperation from the 136th Airlift Wing of the Texas Air National Guard, conducted a scientific experiment to monitor the structural response of a modular and a manufactured (mobile) home in natural and artificial winds. A C-130 Hercules cargo aircraft was used to produce turbulent gale force winds exceeding 100 mph. The general goals of this research were to establish: (1) the structural response of buildings and their components during high winds; (2) the change in building permeability as a result of exposure to high winds; and (3) possible strategies that will mitigate both building damage and loss of energy efficiency resulting from high-wind events. The full-scale tests were conducted in 2001 and 2004 at TTU’s Wind Science and Engineering Research Facility, located at the former Reese Air Force Base in Lubbock, Texas. Testing focused on low-rise residential construction, in particular a modular home and a manufactured (mobile) home. Detailed information on the experiment, experimental procedures, and the results of the wind engineering experiment is contained in Smith et al. (2004a, 2004b). We used the description and results from the 2004 test in the economic experiment.
The modular home used in the engineering experiment was 30 feet wide and 30 feet long. The home was designed for HUD Wind Zone I; under the Wind Zone I criteria, designs are for 70 mph fastest-mile wind loads. Siding on exterior walls was fastened with staples over OSB (oriented strand board) sheathing. The roof covering comprised asphalt shingles over OSB, and the structure had a hip roof. The foundation system was designed for Wind Zone II to provide adequate anchorage of the test specimen.

When the test area was secured, the pilot of the C-130 started each of the four engines and increased the propeller revolution to ground idle. After a short warm-up cycle, the induced wind field was created by increasing the propeller pitch. Each desired speed was held for 5 to 10 minutes, with a brief reduction in flow speed to differentiate test run segments. Estimated mean wind speeds in the test runs were 20 mph, 45 mph, 60 mph, and 90 mph, respectively. A system simultaneously recorded wind measurements and negative roof pressures (uplift) at a sampling rate of 30 Hz.

The experimental protocol for the manufactured home was similar to that for the modular home. The manufactured (mobile) home test specimen had a 15-foot 5-inch by 60-foot 2-inch footprint and was designed for HUD Wind Zone I. The exterior wall cladding was 5/8-inch masonite siding attached to 2 × 4 inch wood studs. Asphalt shingles over OSB deck were used for the roof cladding. The foundation system used for the experiment was designed for Wind Zone II; this lessened the likelihood that the test specimen would fail by one of the primary modes observed for manufactured homes subjected to high winds, namely overturning and structure–chassis separation. Estimated wind speeds in the test runs were 20 mph, 45 mph, 60 mph, 20 mph, 45 mph, 60 mph, and 80 mph, respectively. Data collected during the experiment included meteorological, pressure, and displacement measurements, and failure modes.

The economic experiment

The economic experiment was conducted in midweek, just prior to the 2004 C-130 propeller wash experiment. The results were revealed 1 week later and cash payments distributed to all participants. The sample comprised 28 experimental subjects, all of whom were graduate students at Texas Tech University. Of the 28 subjects, 12 were students from the interdisciplinary Wind Science and Engineering (WISE) graduate program, with backgrounds in atmospheric sciences and civil engineering. Graduate students from TTU’s Health Organization Management (HOM) and MD/MBA programs represented the other 16 members of the subject pool. The HOM program is a specialized MBA program housed in the College of Business; the MD/MBA program is a joint program in which medical students may earn the MBA degree while simultaneously obtaining the MD.

Participants were provided with the news release describing the C-130 engineering experiment and the plan for predicted wind flow speeds to be produced by the aircraft. The modular and manufactured homes were described, and the layout showing the relationship of the aircraft to each test specimen was provided to each participant. The two failure modes
(1) loss of shingles and (2) breach of the building envelope were also defined. Subjects then answered a set of four questions that asked them to choose the lowest wind flow speed that would result in each type of failure for the modular and manufactured home.

A certainty equivalent was elicited for each answer using a Becker–DeGroot–Marschak (BDM) procedure (Becker et al., 1964). The BDM method proceeds as follows: the subject reports a selling price for the lottery – in our case, the risky bet on whether the answer chosen is in fact the correct answer. After the selling price is chosen, a random offer price is drawn from a uniform distribution. If the offer price is greater than or equal to the subject’s selling price, then the subject surrenders the lottery and receives the offer price for sure. The BDM is an incentive compatible (demand revealing) procedure in that it is in the subject’s best interest to reveal her valuation truthfully. After participants answered all questions and stated a certainty equivalent for each, one of the four questions was drawn at random to determine the subject’s monetary reward.

To summarize, the instructions included a description of the engineering experiment, an explanation of failure modes, a synopsis of the decision task, and, finally, an explanation and demonstration of the BDM mechanism. The experimental instructions are available upon request. The first meeting and experiment took approximately 45 minutes. The following week, all subjects were told the outcome of the C-130 engineering experiment, paid their cash earnings, and thanked for their participation. The second meeting took approximately 20 minutes. The average participant payment, including the show-up fee, was $13.50. We will next discuss some of the relevant literature and the testable hypotheses, followed by a description and discussion of our experimental results.
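The BDM round described above reduces to a few lines of logic. The sketch below follows the description in the text; the payoff amount and the bounds of the uniform offer distribution are our illustrative assumptions.

```python
import random

def bdm_round(selling_price, payoff_if_correct, answer_correct,
              rng, offer_low=0.0, offer_high=5.0):
    """One BDM round: a random offer is drawn uniformly; at or above the
    stated selling price the bet is sold for the offer, otherwise the bet
    is kept and pays off only if the answer turns out to be correct."""
    offer = rng.uniform(offer_low, offer_high)
    if offer >= selling_price:
        return offer                                   # bet sold for sure money
    return payoff_if_correct if answer_correct else 0  # bet kept and played out

rng = random.Random(1)
print(bdm_round(selling_price=3.0, payoff_if_correct=5.0,
                answer_correct=True, rng=rng))
```

The incentive compatibility noted in the text falls out of this structure: naming a price below one’s true value risks selling the bet too cheaply, while naming one above it risks keeping a bet one would rather have sold, so truthful reporting is optimal.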
Background and related literature

Wind resistance and quality perception

Evidence from model-scale wind tunnel tests indicates that, compared to gable roofs, hip roofs are subject to lower negative pressures for a given wind speed, making them less likely to fail (Meecham et al., 1991; Xu and Reardon, 1998). Gable is the most common roof design: a gable roof consists of two planes that meet at the central peak and slope down to the building’s long walls. In contrast, a hip roof rises by inclining planes from all four sides of a building. Therefore, ex ante we expect the modular home with its hip roof to withstand higher measured wind flow speeds than the manufactured home with its gable roof. The mathematical relationship between wind loads and roof geometry is part of the course of study for WISE students, whereas HOM and MD/MBA students are expected to be no more familiar with it than the typical graduate student. Whether quality differences are perceived by both WISE students and HOM and MD/MBA students is testable, leading to our first hypothesis.
Hypothesis 1: Experimental subjects will predict failure at lower wind speed for the manufactured home than the modular. Further, WISE students will differ from the HOM and MD/MBA students in their predictions.
A fundamental problem in the analysis of choices under uncertainty is the distinction between risk and ambiguity. For the case of pure risk, it is accepted that the underlying probability distribution and payoffs are known. In contrast, most real world decision environments require beliefs about uncertain events that are ill defined. There may be uncertainty about the underlying data generating process itself. This uncertainty about uncertainty has gone by many labels (e.g. second order uncertainty, ambiguous probabilities, and Knightian uncertainty (Knight, 1921)). Emphasis on ambiguity obviously stems from its relevance for the evaluation of real life decisions. The decision to insure or mitigate natural hazard risk is such a real life decision. By embedding an imminent engineering experiment in the economics experiment, we created an environment with two-sided ambiguity, meaning that no one (engineer, experimenter, or subject) knew the outcome of the C-130 experiment at the time subjects were asked to make a decision. We presented the economic experiment in a way that clearly implied that no one knew the outcome or the precise data generating process in order to control for the biases that Fox and Tversky (1995) describe as comparative ignorance.
Psychometric measures
Psychological measures that affect individual propensity to insure a risky decision include overconfidence and competence. Overconfidence is a tendency to overestimate one's own skills, prospects for success, or the probability of a positive outcome. In the context of our experiment, overconfident individuals will attach a higher probability to the state that their chosen answer is correct and assign it a higher certainty equivalent. This behavior is akin to failing to insure or underinsuring property. The result that individuals are overconfident, i.e. overestimate their self-assessed knowledge, is a consistent finding in the psychology of judgment literature (De Bondt and Thaler, 1995). Recently, the concept has received more attention in the economics literature. Miscalibration or overconfidence in financial decision making has been explored in analytical studies (Odean, 1999; Gervais and Odean, 2001; Daniel et al., 1998) as well as empirical studies (Kirchler and Maciejovsky, 2002; Camerer and Lovallo, 1999).1 These studies use interpretations of overconfidence derived from the psychological literature (Laschke and Weber, 1999). Elicited certainty equivalents that exceed expected earnings based on revealed success (accuracy) rates would be evidence of overconfidence. The certainty equivalent elicited using the BDM procedure can be used to infer an individual's subjective probability that her answer is correct. The precision of this estimate relies on the "approximate risk neutrality" of expected utility maximizers over small stakes (Rabin, 2000). This leads to our next testable hypothesis.
Hypothesis 2: Decision makers exhibit overconfidence in the decision task. Overconfident individuals will state certainty equivalents for the bet on the correctness of their answer that exceed the payoff expectation based on their accuracy.
Ever since Ellsberg (1961) presented his famous paradox, researchers have been interested in modeling and understanding the distinction between risk and ambiguity. Camerer and Weber (1992) provided a review of empirical work and theoretical models of decisions under ambiguity.2 Tversky and Kahneman (1992) described ambiguity effects as preferences that differ with the source of uncertainty. Following the Tversky and Kahneman line of reasoning, in order to understand individual attitudes towards ambiguity one has to understand source preferences (what makes the individual prefer one source of uncertainty over another). Heath and Tversky (1991) offered perceived competence as one explanation for source preferences. According to their competence hypothesis, the willingness to accept ambiguity depends on more than the judged probability of future outcomes and the information available. It also depends on the individual's assessment of his or her own ability to evaluate and process the information accurately in the relevant decision context. Heath and Tversky (1991) concluded that subjects were ambiguity seeking (averse) only in contexts in which they felt relatively more (less) knowledgeable, and they interpreted this positive relationship as a competence effect. As stated above, almost half of our participants came from the WISE graduate program and the other half from the HOM and MD/MBA programs. A manifestation of source preferences for risk and ambiguity consistent with the competence hypothesis would show the WISE students to be more ambiguity seeking than the HOM and MD/MBA students after controlling for accuracy in judgment. This leads to the following testable hypothesis.
Hypothesis 3: Participants with graduate training relevant to the context of the engineering experiment will reveal subjective probabilities that exceed the observed probability correct by a larger proportion than the HOM and MD/MBA students (which is consistent with the competence hypothesis).
Next, we will discuss our results in light of the three aforementioned hypotheses.
Results
First, we reveal the outcome of the engineering experiment. The original plan for the engineering experiment was to go through three flow speeds intended to produce mean wind speeds of 20, 45, and 60 mph. In fact, a second series of wind flow speeds included an 80 mph segment. Since this change in the engineering experimental design took place after the economic experiment, we could not take advantage of the more balanced approach to the two types of structures. The manufactured home suffered a breach in the building envelope at 80 mph: a windward window broke and there was some separation between the side walls and the roof. Both structures lost shingles at 45 mph
(flow speed 2). Neither building suffered a breach of the building envelope at the flow speeds that we asked subjects to consider. Figure 23.1 shows the predictions by all participants, with a star on the outcome subsequently observed for the C-130 experiment. The modal response for the modular home was 60 mph for shingle failure and 80+ and "no failure" for breach of the building envelope. For the case of the manufactured home, the modal response was 45 mph and 60 mph for shingle failure and 60 mph for breach of the building envelope.
Figure 23.1 Subject predictions on failure by shingle loss and building breach (modular and manufactured test specimens). (Four panels show the distributions of predictions for shingle failure and breach of each home type across the response categories 20 mph, 45 mph, 60 mph, 80+ mph (modular only), and no failure.)
Hypothesis 1: Experimental subjects will predict failure at lower wind speed for the manufactured home than the modular. Further, WISE students will differ from the HOM and MD/MBA students in their predictions.
When we examine the predictions on loss of shingles, we find that 29 percent of the subjects predicted the modular home to lose shingles at wind speeds of 45 mph or less, whereas 50 percent of subjects predicted this type of failure at
wind speeds of 45 mph or less for the manufactured home. For the failure mode "breach of the building envelope" at wind speeds of 60 mph or less, the predictions were 14 percent and 64 percent for the modular and manufactured structures, respectively. A χ2 test for differences in proportions was used to evaluate whether the group viewed the modular and manufactured homes differently. For building failure by loss of shingles, we cannot reject the hypothesis of equal proportions predicting failure at wind speeds of 45 mph or less. However, for building failure by breach of the building envelope, we find a significant difference in the proportion that predict a failure at 60 mph or less (α = 0.05). As for differences between the WISE and the HOM and MD/MBA students, we could not reject the hypothesis of equal proportions except in the case of breach of the modular building; in particular, the WISE students were more pessimistic than the HOM and MD/MBA students.
Hypothesis 2: Decision makers exhibit overconfidence in the decision task. Overconfident individuals will state certainty equivalents for the bet on the correctness of their answer that exceed the payoff expectation based on their accuracy.
In order to evaluate our results pertaining to hypothesis 2, we use the certainty equivalent or selling price for each answer and compare it to the payoff that would have occurred ($0 or $10) if the answer had been played out. If the mean certainty equivalent equals the mean answer-contingent payoff, then, at least as a group, the participants were accurate in their self-assessment. Using a t-test (assuming unequal variances), we reject the null hypothesis that the means are equal in favor of the alternative hypothesis that the mean certainty equivalent exceeds the mean answer-contingent payoff (t-statistic = 4.512, p-value < 0.01). As a group, our subjects were overconfident. In addition, we calculated the expected payoff for each subject based on her choices and compared it to the certainty equivalents across the four cases. The average certainty equivalent was significantly higher than the answer-contingent payoff (t-statistic = 4.11, p-value < 0.01). Descriptive statistics on the certainty equivalents for the modular-shingle, modular-breach, manufactured-shingle, and manufactured-breach failure cases are reported in Table 23.1. The mean and median values across the four cases are similar. Two-sample t-tests were performed to test whether there is any difference in the certainty equivalents between the homes for shingles and for breach. We fail to reject the null hypothesis of equal mean selling prices in either comparison. Figure 23.2 shows the certainty equivalents for incorrect and correct answers to each question. Interestingly, the variances in certainty equivalents are larger for the manufactured home. We compare the sample variances for shingles by home type and breach by home type using an F-test on sample variances where the null hypothesis is equal population variances. We find significantly higher variances in the certainty equivalents elicited for the manufactured home (shingle F-statistic = 2.97, p-value < 0.01; breach F-statistic = 2.20, p-value = 0.02), indicating a wider range in the confidence that subjects placed in their answers relative to the modular home.
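The group-level overconfidence test just reported can be sketched in a few lines. The arrays below are hypothetical stand-ins rather than the study's data, and the one-sided Welch t-test is our reading of the test described above, so the sketch illustrates the comparison rather than reproduces it.

from scipy import stats
import numpy as np

# Hypothetical stand-ins: one certainty equivalent and one answer-contingent
# payoff ($10 if the chosen answer was correct, $0 otherwise) per decision.
certainty_equivalents = np.array([5.8, 6.2, 4.4, 7.1, 5.0, 6.5, 5.6, 4.9])
contingent_payoffs = np.array([10.0, 0.0, 0.0, 10.0, 0.0, 0.0, 10.0, 0.0])

# Welch's t-test (unequal variances); overconfidence predicts that the mean
# certainty equivalent exceeds the mean answer-contingent payoff.
t_stat, p_two_sided = stats.ttest_ind(certainty_equivalents,
                                      contingent_payoffs, equal_var=False)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
print(f"t = {t_stat:.3f}, one-sided p = {p_one_sided:.3f}")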
Hypothesis 3: Participants with graduate training relevant to the context of the engineering experiment will reveal subjective probabilities that exceed the observed probability correct by a larger proportion than the HOM and MD/MBA students (which would be consistent with the competence hypothesis).
This hypothesis relates to the notion of comparative ignorance and source preferences. Whereas the HOM and MD/MBA students would presumably be more comfortable with decisions associated with health risks, the WISE students are more familiar with wind hazards. To detect differences in source preferences, we again rely on the relationship between certainty equivalents and accuracy. A higher certainty equivalent indicates that an individual assigns a higher probability to her answer being correct. Table 23.1 shows the mean certainty equivalent and accuracy (percentage of correct answers) for the HOM and MD/MBA students and the WISE students. We find no statistical differences in the performance of the two groups. Further, the certainty equivalents are not statistically different either. At least for this small sample, we do not find evidence to support the competence hypothesis.
Table 23.1 Descriptive statistics certainty equivalents and accuracy

                          Modular               Manufactured
                          Shingle    Breach     Shingle    Breach
All – mean                5.811      5.698      5.925      5.400
All – median              5.325      5.250      5.745      5.375
All – maximum             8.000      8.000      10.000     8.000
All – minimum             4.380      3.750      2.500      2.160
All – variance            1.116      1.088      3.316      2.396
HOM, MD/MBA – mean        5.804      5.487      5.773      5.210
WISE – mean               5.821      5.979      6.129      5.654
HOM, MD/MBA – accuracy    25.0%      25.0%      50.0%      37.5%
WISE – accuracy           33.3%      66.7%      16.7%      33.3%

Figure 23.2 Certainty equivalents for incorrect and correct answers. (Four panels plot the elicited certainty equivalents, in dollars, against whether the answer was correct, for the shingle and breach failure modes of each home type.)

Concluding remarks
This study has explored judgment under uncertainty in the context of wind damage to modular and manufactured homes. Further, we measure willingness to insure against losses stemming from errors in judgment by eliciting a certainty equivalent for the gamble based on the accuracy of participants' predictions. By embedding an engineering experiment to provide the context of the decision environment, we can elicit judgment concerning the effect of measurable wind speeds on two test specimens within a manageable time frame and within a controlled environment. We find that individuals exhibit overconfidence in that they state certainty equivalents for gambles that significantly exceed their expected payoffs based on the accuracy of their predictions. This would be equivalent to underinsuring property for losses against wind damage.
As of 2000, there were ten million manufactured homes housing 22 million Americans. This comprised 7.5 percent of all housing units in the United States. Further, manufactured homes represent the fastest-growing segment of the housing market, with 13 percent of new starts of single-family units in 2001. It is important that we understand what people believe about the wind resistance of manufactured and modular homes in order to craft effective policy that supports well-informed purchase and mitigation decisions.
Acknowledgments
This work was performed under the Department of Commerce NIST/TTU Cooperative Agreement Award 70NANB8H0059. We thank Robert E. Chapman, Economist, Office of Applied Economics, BFRL, NIST for his valuable input.
Notes
1 For a survey of research on overconfidence see Laschke and Weber (1999) (in German).
2 Recent literature in the modeling of ambiguity focuses on rank-dependent models. A good literature review on that issue can be found in Diecidue and Wakker (2001). For a list of annotated references on decisions and uncertainty by P. Wakker, see Online, available at: fee.uva.nl/creed/wakker/refs/rfrncs.htm (accessed March 2006).
References
Adams, B. J., J. A. Womble, M. Z. Mio, J. B. Turner, K. C. Mehta, and S. Ghosh (2004) "Field Report: Collection of Satellite-referenced Building Damage Information in the Aftermath of Hurricane Charley." MCEER/NHRAIC Quick Response Report.
Angbazo, L. A. and R. Narayanan (1996) "Catastrophic Shocks in the Property-Liability Insurance Industry: Evidence on Regulatory and Contagion Effects," Journal of Risk and Insurance, 63:619–637.
Becker, G. M., M. H. DeGroot, and J. Marschak (1964) "Measuring Utility by a Single-Response Sequential Method," Behavioral Science, 9:226–232.
Camerer, C. and D. Lovallo (1999) "Overconfidence and Excess Entry: An Experimental Approach," American Economic Review, 89:306–318.
Camerer, C. and M. Weber (1992) "Recent Developments in Modeling Preferences: Uncertainty and Ambiguity," Journal of Risk and Uncertainty, 5:325–370.
Daniel, K., D. Hirshleifer, and A. Subrahmanyam (1998) "Investor Psychology and Security Market Under- and Overreactions," Journal of Finance, 53:1839–1885.
De Bondt, W. F. M. and R. H. Thaler (1995) "Financial Decision-Making in Markets and Firms: A Behavioral Perspective," in Jarrow, R., V. Maksimovic and W. Ziemba (eds.), Handbooks in Operations Research and Management Science: Finance, Amsterdam: Elsevier, pp. 385–410.
Diecidue, E. and P. Wakker (2001) "On the Intuition of Rank-Dependent Utility," Journal of Risk and Uncertainty, 23:281–298.
Ellsberg, D. (1961) "Risk, Ambiguity and the Savage Axioms," Quarterly Journal of Economics, 75:643–669.
Fox, C. and A. Tversky (1995) "Ambiguity Aversion and Comparative Ignorance," Quarterly Journal of Economics, 110:585–603.
Gervais, S. and T. Odean (2001) "Learning to be Overconfident," Review of Financial Studies, 14:1–27.
Heath, C. and A. Tversky (1991) "Preference and Belief: Ambiguity and Competence in Choice under Uncertainty," Journal of Risk and Uncertainty, 4:5–28.
Kirchler, E. and B. Maciejovsky (2002) "Simultaneous Over- and Underconfidence: Evidence from Experimental Asset Markets," Journal of Risk and Uncertainty, 25:65–85.
Knight, F. H. (1921) Risk, Uncertainty, and Profit. Boston, MA: Houghton Mifflin.
Laschke, A. and M. Weber (1999) "Der Overconfidence Bias und seine Konsequenzen in Finanzmärkten," Sonderforschungsbereich 504, Working Paper Series No. 99–63.
Meecham, D., D. Surry, and A. G. Davenport (1991) "Magnitude and Distribution of Wind-induced Pressures on Hip and Gable Roofs," Journal of Wind Engineering and Industrial Aerodynamics, 38:257–272.
Merrell, D., K. Simmons, and D. Sutter (2005) "The Determinants of Tornado Casualties and the Benefits of Tornado Shelters," Land Economics, 81:87–99.
National Hurricane Center, "The Saffir-Simpson Hurricane Scale." Online, available at: nhc.noaa.gov/aboutsshs.shtml (accessed February 2 2006).
Odean, T. (1999) "Do Investors Trade Too Much?" American Economic Review, 89:1279–1298.
Rabin, M. (2000) "Risk Aversion and Expected-Utility Theory: A Calibration Theorem," Econometrica, 68:1281–1292.
Rappaport, E. (1993, updated 2005) "Preliminary Report: Hurricane Andrew 16–28 August 1992," National Hurricane Center. Online, available at: nhc.noaa.gov/1992andrew.html (accessed February 2006).
Simmons, K. and D. Sutter (2005) "WSR-88D Radar, Tornado Warnings and Tornado Casualties," Weather and Forecasting, 20:301–310.
Smith, D. A., C. Letchford, K. Mehta, and H. Zhu (2004a) "Full-scale Testing of a Modular Home Using the Flow from a C-130 Aircraft," Report to the Department of Commerce, National Institute of Standards and Technology, Windstorm Mitigation Initiative, Gaithersburg, MD, USA.
Smith, D. A., C. Letchford, K. Mehta, and H. Zhu (2004b) "Full-scale Testing of a Manufactured Home Using the Flow from a C-130 Aircraft," Report to the Department of Commerce, National Institute of Standards and Technology, Windstorm Mitigation Initiative, Gaithersburg, MD, USA.
Tversky, A. and D. Kahneman (1992) "Advances in Prospect Theory: Cumulative Representation of Uncertainty," Journal of Risk and Uncertainty, 5:297–323.
Xu, Y. L. and G. F. Reardon (1998) "Variation of Wind Pressure on Hip Roofs with Roof Pitch," Journal of Wind Engineering and Industrial Aerodynamics, 73:267–284.
24 Consequentiality and demand revelation in double referenda
Katherine S. Carson, Susan M. Chilton, and W. George Hutchinson
Introduction
Estimating the potential benefits from proposed changes to environmental policies may require the use of survey techniques, particularly when the public's preferences for these goods are significantly driven by non-use values. The contingent valuation (CV) method is a survey technique in which respondents supply information about their willingness to pay (WTP) for proposed policy options after receiving a description of how the policy changes are likely to affect such environmental goods as air quality, water quality, or the amount of open space available. Often, questions about willingness to pay are phrased in the form of hypothetical referendum questions, such as, "If your cost of program X were $D, would you vote for program X?" Referendum questions have the advantage of cognitive simplicity and a market-like "take-it-or-leave-it" setting that is familiar to the respondent. The use of referendum questions in CV surveys was also recommended by the National Oceanic and Atmospheric Administration's (NOAA) Panel on Contingent Valuation. In addition to the cognitive reasons cited above, the panel cited the presumed lack of incentive for subjects to strategically misrepresent their preferences as a reason for recommending their use (Arrow et al., 1993). Although referendum questions are cognitively simple for respondents to answer, surveys employing a single referendum question require extremely large sample sizes to generate reliable results. For this reason, survey practitioners employ surveys in which subjects respond to multiple referendum questions. One variant of the simple referendum mechanism is the double-bounded dichotomous choice (DBDC or double referendum) mechanism. In this mechanism, respondents answer two referendum questions of the form described above. Respondents' costs in the second referendum question are contingent upon their responses to the first question. If a respondent answers no to the first question, s/he receives a lower cost offer in the second question. The reverse is true for a respondent who initially replies yes. This format allows respondents' WTP to be bounded into four narrower intervals, rather than the two large intervals that result from a single referendum question. Hanemann et al. (1991) describe the statistical efficiency advantages of the double referendum mechanism.
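The interval logic that gives the double referendum its efficiency advantage is easy to state in code. The sketch below is a generic illustration of the mechanism with arbitrary bid amounts, not code from any of the surveys discussed in this chapter.

def wtp_interval(first_bid, lower_bid, upper_bid, vote1, vote2):
    """Bound a respondent's willingness to pay from two referendum votes.
    A no to the first bid leads to a lower second bid; a yes leads to a
    higher one.  The two answers place WTP in one of four intervals."""
    if vote1 == "no":
        # the follow-up question is asked at lower_bid < first_bid
        return (0.0, lower_bid) if vote2 == "no" else (lower_bid, first_bid)
    # the follow-up question is asked at upper_bid > first_bid
    return (first_bid, upper_bid) if vote2 == "no" else (upper_bid, float("inf"))

# Example with bids of $20 (first), $10 (lower follow-up), $40 (upper follow-up):
print(wtp_interval(20, 10, 40, "yes", "no"))  # WTP bounded in ($20, $40)
print(wtp_interval(20, 10, 40, "no", "yes"))  # WTP bounded in ($10, $20)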
Two recent examples of surveys employing the DBDC mechanism are Bergstrom et al. (2000), which evaluates the benefits of alternative programs to prevent streambed erosion due to grazing along the Little Tennessee River watershed; and Banzhaf et al. (2004), which estimates the benefits of ecosystem improvements in Adirondack Park resulting from reduced acid rain under the new SO2 and NOx emissions standards proposed in the Bush administration's Clear Skies initiative. Although implementing the DBDC method can result in more efficient estimates of willingness to pay, these estimates are often biased in ways that call into question whether respondents' preferences are consistent with the assumptions of economic theory. Mean or median willingness to pay estimated from responses to the first referendum question often differs from that estimated using responses to both questions. In addition, the error terms of the two estimated valuations are not perfectly correlated, a result inconsistent with the existence of a single underlying distribution of respondents' WTP. Frequently, the number of no responses to the second valuation question is higher than would be predicted from the distribution of WTP based on responses to the first valuation question alone (Hanemann et al., 1991; McFadden and Leonard, 1993). A higher than expected frequency of yes–yes and no–no responses has also been observed (Cameron and Quiggin, 1994). Both the studies by Bergstrom et al. (2000) and Banzhaf et al. (2004) exhibit some of these inconsistencies. Such inconsistencies in the WTP data can call into question the reliability of the survey mechanism and complicate the use of the resulting estimates in policy analysis. Researchers have proposed numerous solutions to the problems found in double referenda, including econometric approaches (Cameron and Quiggin, 1994; Alberini, 1995), redesigned survey mechanisms (Cooper et al., 2002), and theoretical explanations for respondents' inconsistent responses (Carson et al., 1999; Bateman et al., 2001; DeShazo, 2002). This last approach depends on the assumption that the respondents perceive the hypothetical surveys to be consequential. That is, although the scenario presented to the respondent is hypothetical in nature, the respondent perceives that his or her response will have an influence on the policy decision, and ultimately on the respondent's utility from the policy decision. This assumption diverges from the literature on hypothetical bias (e.g. Cummings et al., 1997), which typically attributes bias in hypothetical referenda to the hypothetical nature of the decision. However, if the respondents view the survey as purely hypothetical, or inconsequential (Carson et al., 1999), then they have no incentive to respond either truthfully or strategically, since neither action has an effect on their expected utility. Therefore, even though the survey may be presented to respondents as hypothetical in nature, they may not perceive it as such. It is an open question whether or not respondents view hypothetical surveys as consequential, but given the effort made to add as much realism as possible to both the description of the possible policy alternatives and the effects of these policies on environmental conditions, it is plausible that respondents believe their answers will influence policy outcomes. It is important to note that respondents voting in order to influence provision of the public good without incurring any costs in a survey they perceive to be consequential would
generate results consistent with those in the hypothetical bias literature. The purpose of this research is to investigate whether introducing consequentiality into a double referendum removes or increases divergences from demand revealing behavior in voting. Removing such divergences from demand revelation also removes observed inconsistencies in responses to the first and second votes. We report a laboratory experiment designed to explore subjects' responses in a consequential double referendum and the behavioral heuristics invoked when voting. The quantitative results indicate that introducing consequentiality into the double referendum mechanism reduces the rate of non-demand revelation in the first vote and increases the rate of non-demand revelation in the second vote. Some of these responses are due to subjects strategically responding to the second referendum question in a non-demand revealing manner so as to maximize their earnings. The results indicate that developing techniques for field data analysis based on alternative theories of subject behavior in these mechanisms may be an effective way to eliminate the biased results without sacrificing statistical efficiency. The remainder of the chapter proceeds as follows. We summarize the design and results of an experiment on behavior in an induced value inconsequential double referendum (Burton et al., 2002) that motivated the current study. We follow with a description of the experimental design of the consequential double referendum experiment, the results of the experiment, and discussion and concluding remarks.
Inconsequential double referendum experiment
A total of 144 students from the US Air Force Academy participated in the inconsequential double referendum experiment. In each experimental session, subjects in groups of nine voted on the provision of a group good, which for the purposes of the experiment was described as an investment. Subjects were initially endowed with 100 experimental tokens. In the first vote, if the group voted to make the investment, every subject received a return of Ri (40 or 80 tokens) and paid a cost of Ci (50 or 70 tokens). The vote required a two-thirds majority to pass. If the vote passed, all subjects received the return and paid the cost of the investment, regardless of whether they had voted for the investment or not. Because the vote was inconsequential, subjects received $10, the dollar equivalent of 100 tokens, regardless of the outcome of the vote. Following the first vote, subjects received a second envelope and were asked to vote again. In order to be consistent with how field double referendum surveys are administered, subjects did not know that they would be voting a second time until they received the instructions for the second vote. The rules of the second vote were identical to the first, except that subjects' costs were equal to Ci plus or minus 20, depending on whether they voted yes or no in the first vote. As with the first round vote, subjects received $10 in earnings regardless of the outcome of the second round vote. Figure 24.1 presents the double referendum experimental design. The demand revealing voting paths are in bold.
Figure 24.1 Schematic of the design of the double referendum experiments. (The schematic traces the yes/no branches of the two votes for each first round cost, C = 50 and C = 70, under each return, R = 40 and R = 80; second round costs are C + 20 after a yes vote and C − 20 after a no vote. Each R = 40 cell contained 48 inconsequential and 48 consequential subjects, and each R = 80 cell contained 24 inconsequential and 96 consequential subjects.)
The distribution of Ri was one-third Ri = 80 tokens and two-thirds Ri = 40 tokens. The distribution of Ci was one-half Ci = 50 tokens and one-half Ci = 70 tokens. All experimental packets were assembled prior to the first experimental session, using a random number generator to assign the distribution of (Ri, Ci) for each session. In this way, the distribution of (Ri, Ci) was different in each session, but the overall distribution of (Ri, Ci) was preserved. This distribution of returns and costs, combined with the two-thirds voting rule, makes it likely that the first round vote will fail. Subjects knew that different subjects had different values of Ri and Ci, and that these values were randomly assigned from an underlying distribution, but they did not know the other possible values of Ri and Ci or the distribution from which these values were drawn. Subjects also did not know the link between their first round vote and second round costs. Subjects knew that the experiment had multiple parts, but did not know what each part entailed until they received the instructions for the next part of the experiment.
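To fix ideas, the induced-value arithmetic of the design can be written out directly. The sketch below restates the payoff rules from the text; the function names are our own, the experiment itself was run with paper envelopes rather than software, and in the inconsequential treatment subjects were of course paid $10 regardless of these token balances.

def token_balance(R, C, vote_passed, endowment=100):
    """Token balance after a vote on the group investment.  If at least
    six of the nine group members vote yes, every member pays C and
    receives R regardless of her own vote; otherwise balances are
    unchanged."""
    return endowment - C + R if vote_passed else endowment

def second_round_cost(C, first_vote):
    """Second round cost: 20 tokens higher after a yes, 20 lower after a no."""
    return C + 20 if first_vote == "yes" else C - 20

# A subject with R = 40, C = 50 whose demand revealing first vote is no
# faces a second cost of 30, so yes is demand revealing in round two:
print(second_round_cost(50, "no"))   # 30
print(token_balance(40, 30, True))   # 110 tokens if the second vote passes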
After the second round vote, subjects filled out a qualitative questionnaire asking them why they voted the way they did. Their responses were content analyzed (Krippendorff, 1980) in order to gain insight into how subjects voted.1

Table 24.1 Predicted and observed vote distributions – inconsequential double referendum

                    Yes–Yes       Yes–No        No–Yes        No–No         Total
                    n     %       n     %       n     %       n     %       n     %
R = 40, C = 50
predicted           0     0.0     0     0.0     48    100.0   0     0.0     48    100.0
observed            6     12.5    5     10.4    35    72.9    2     4.2     48    100.0
R = 40, C = 70
predicted           0     0.0     0     0.0     0     0.0     48    100.0   48    100.0
observed            5     10.4    1     2.1     1     2.1     41    85.4    48    100.0
R = 80, C = 50
predicted           24    100.0   0     0.0     0     0.0     0     0.0     24    100.0
observed            23    95.8    0     0.0     0     0.0     1     4.2     24    100.0
R = 80, C = 70
predicted           0     0.0     24    100.0   0     0.0     0     0.0     24    100.0
observed            1     4.2     21    87.5    0     0.0     2     8.3     24    100.0

Table 24.1 reports the predicted and observed vote distributions in the inconsequential double referendum experiment. The predicted votes are those consistent with demand revealing votes in both the first and second voting rounds. Note that since the experiment is inconsequential and has no economic incentives towards voting one way or the other, it is not correct to say that these votes are incentive compatible from a theoretical perspective. Despite the inconsequential nature of the task, an overwhelming majority of subjects (83 percent) cast demand revealing votes in both voting rounds. A chi-squared test (Agresti, 1996) comparing the distribution of non-demand revealing votes to the underlying distribution of Ri (one-third Ri = 80, two-thirds Ri = 40) shows that subjects with Ri = 40 are more likely to cast a non-demand revealing first round vote at the 10 percent level of significance (p = 0.082). The rates of non-demand revelation in the second vote and in both votes are not significantly different across subject types (p = 0.317 and p = 0.344). Although non-demand revealing WTP boundings were relatively rare, when they occurred they originated in the first vote, rather than in the second as postulated in both the theoretical and empirical literature (e.g. Carson et al., 1999; DeShazo, 2002; Cameron and Quiggin, 1994). When the votes are aggregated across subject types, the overall vote distribution in Table 24.1 is significantly different from the demand revealing prediction at the 5 percent level of significance. The qualitative analysis revealed that subjects who cast demand revealing votes generally treated the hypothetical task as though it were real. These subjects cited self-interested, financially motivated heuristics such as potential profits, losses, or comparisons of their returns and costs when explaining how they made their voting decision, even though these potential profits or losses did
not translate into experimental earnings. Demand revealing subjects rarely mentioned altruistically driven considerations about others in the group or the hypothetical nature of the task, in contrast to subjects whose voting patterns were not demand revealing in either or both votes. The quantitative and qualitative results from the inconsequential double referendum raise two questions about field double referendum surveys. First, do the anomalies in field responses arise from behavior in the first vote, rather than in the second? Second, do respondents to hypothetical field surveys treat these surveys as consequential in the same way that laboratory respondents to the inconsequential referendum did? The theoretical explanations for anomalies in responses to field double referenda are relevant only if the answer to this second question is yes. It is beyond the scope of this study to determine whether CV survey respondents treat the decision as consequential or inconsequential, but we can address the question of how they respond to consequential and inconsequential versions of the double referendum mechanism. Consequentiality is a key component of the incentive properties of the survey as a whole. Having examined the inconsequential version of the double referendum mechanism above, we now turn to an investigation of a consequential double referendum to determine if the introduction of consequentiality removes or increases the anomalies in voting and explore the implications for CV surveys.
Consequential double referendum experimental design
A total of 288 students at the US Air Force Academy participated in the experiment. These subjects were randomly assigned to one of four treatments (72 subjects per treatment). Each treatment consisted of eight groups of nine subjects making a decision about a group investment using a consequential double referendum mechanism. The basic design of the double referendum experiment is identical to that of the inconsequential double referendum experiment. Each subject has an endowment of 100 tokens, an Ri of either 40 or 80 tokens, and an initial Ci of 50 or 70 tokens. If six of the nine subjects in a group vote yes, then the group makes the investment. If the first vote passes, all subjects' token balance changes to 100 − Ci + Ri tokens, regardless of whether a subject individually voted yes or no. Because the referendum is consequential, subjects' token balance is convertible to dollars at the rate of 10 cents per token. In the second vote, subjects' costs either increase or decrease by 20, depending on whether a subject voted yes or no in the first round vote. After the second vote, subjects complete a qualitative questionnaire about why they voted the way they did. Because the referendum is consequential, there are some additional experimental design issues that must be addressed. The first is how to avoid strategic behavior in the first vote. This is accomplished by telling subjects that the experiment has multiple parts, and that they will learn the rules in the latter parts of the experiment when they get to them. At the conclusion of the first round vote,
subjects know the outcome of the vote and compute their part one token balance. Subjects know that their balance may change as a result of the decisions they make in the latter parts of the experiment. The second question is what determines subjects' earnings in the second vote. In field double referendum surveys, it is unclear whether subjects are voting to have the good at the second cost vs. no good at all, or to have the good at the second cost vs. having it at the first cost. Therefore, we include both possibilities as treatments in the experimental design. One important hypothesis about non-demand revelation in field surveys that has been postulated but not tested is that subjects' behavior stems from the fact that they perceive the link between their first vote and their second cost and, therefore, use their second vote in order to influence possible future costs. To explore this issue in a controlled setting, we include information on the relationship between the first vote and second cost as a treatment variable as well. If the information has a significant effect on behavior, the result will indicate to CV survey designers whether it is appropriate to provide such information explicitly, or to take steps to ensure that the link between the response to the first question and the second cost is not perceived. The resulting 2 × 2 experimental design contains the following four treatments:
Second Vote vs. No Investment (2 vs. None), No Information. The second vote is making the investment at the second cost versus no investment at all, regardless of the outcome of the first round vote. Subjects do not know the link between the first vote and second cost.
Second Vote vs. First Vote (2 vs. 1), No Information. The second vote is making the investment at the second cost versus the outcome of the first round vote. Subjects do not know the link between their first vote and second cost.
Second Vote vs. No Investment (2 vs. None), Information. The second vote is making the investment at the second cost versus no investment at all, regardless of the outcome of the first round vote. Subjects know the link between their first vote and second cost.
Second Vote vs. First Vote (2 vs. 1), Information. The second vote is making the investment at the second cost versus the outcome of the first round vote. Subjects know the link between their first vote and second cost.
The Second Vote vs. No Investment, No Information treatment most closely mirrors the inconsequential double referendum experimental design. However, the introduction of consequentiality does not permit a perfect one-to-one mapping between the inconsequential double referendum and one of the consequential double referendum treatments. Figure 24.1 presents a schematic of the double referendum experimental design. The bold pathways are the demand revealing vote paths for all treatments. For all treatments, the incentive structure is such that subjects maximize their earnings by casting demand revealing votes in the first round. The Second Vote vs. No Investment treatments are also incentive compatible in the second voting round. However, the Second Vote vs. First Vote treatments are not
incentive compatible in the second round if the first round vote passes. In these treatments, demand revealing subjects who have (Ri, Ci) combinations of (40, 70) or (80, 50) in the first vote will have second costs of 50 and 70, respectively, in the second vote. Subjects with an Ri = 40 and second cost of 50 will prefer the second vote outcome to the first. Therefore, their profit-maximizing strategy is to vote yes, even though their cost exceeds their return in the second vote. Similarly, demand revealing subjects with an initial (Ri, Ci) of (80, 50) will have a second cost of 70. These subjects will earn more money if they stay with the first vote outcome, resulting in a non-demand revealing vote of no as their profit-maximizing choice in the second vote. The grey shaded boxes in Figure 24.1 report the profit maximizing second votes for subjects in the Second Vote vs. First Vote treatments if the first round vote passes. These votes are reported for both demand revealing and non-demand revealing first round votes. Therefore, the profit maximizing and demand revealing votes differ in the Second Vote vs. First Vote treatments if the first round vote passes.
Although the induced values are the same as in the inconsequential double referendum experiment, the distribution in the consequential referendum experiment differs. There are two-thirds Ri = 80 and one-third Ri = 40 subjects in this experiment to increase the chances that the first round vote will pass, thus creating more opportunities for strategic misrepresentation of preferences in the second vote of the Second Vote vs. First Vote treatments. As in the inconsequential double referendum experiment, the first round costs are evenly split between the two return types. Figure 24.1 reports the numbers of subjects of each (Ri, Ci) type in the inconsequential and consequential experiments below each first round cost. Since the consequential experiment contains four treatments, subjects of each type are evenly split among the four treatments. The distribution of induced values for each session of each treatment was determined a priori using a random number generator. All experimental packets were assembled in advance such that the appropriate materials would be available to the moderator whether a subject voted yes or no in the first round vote. Subjects knew that different subjects had different values of Ri and Ci and that these values were drawn from an overall distribution of values. Subjects did not know the other possible values of Ri and Ci besides their own or the distribution from which their assigned values of Ri and Ci were drawn.
All subject participation was completely voluntary. Subjects were informed of the amount of time the experiment would take (approximately 40 minutes) and of average earnings ($10.00), and they were well aware of the alternative uses of their time.2 Subjects were free to withdraw from the experiment at any time.
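Before turning to the results, the wedge between demand revealing and profit maximizing second votes in the Second Vote vs. First Vote treatments can be made concrete. This sketch simply restates the arithmetic above in code of our own; it is illustrative rather than the authors' implementation.

def demand_revealing_vote(R, cost):
    """Vote yes whenever the return covers the cost."""
    return "yes" if R >= cost else "no"

def profit_max_second_vote(R, first_cost, second_cost, first_vote_passed,
                           endowment=100):
    """Profit maximizing second vote in the 2 vs. 1 treatments, where the
    choice is between the investment at the second cost and the outcome
    of the first vote (not no investment at all)."""
    stay = endowment - first_cost + R if first_vote_passed else endowment
    switch = endowment - second_cost + R
    return "yes" if switch >= stay else "no"

# (R, C) = (80, 50): demand revealing votes are yes (80 > 50) then yes
# (80 > 70), but if the first vote passed, staying at cost 50 (130 tokens)
# beats switching to cost 70 (110 tokens):
print(demand_revealing_vote(80, 70))             # yes
print(profit_max_second_vote(80, 50, 70, True))  # no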
Results
The experimental design results in the following testable hypotheses.
Hypothesis 1: The rate of demand revelation is not different in an inconsequential double referendum and a consequential double referendum.
Hypothesis 2: If the first vote fails, there is no difference in the vote distributions in the 2 vs. None and 2 vs. 1 treatments.
Hypothesis 3: If the first vote passes, there is no difference in the vote distributions in the 2 vs. None and 2 vs. 1 treatments.
Hypothesis 4: The inclusion of information about the relationship between first round votes and second round costs has no effect on the vote distributions.
Investigation of hypothesis 1 provides insight into the effect of consequentiality in a double referendum. We have no a priori expectations about whether or not this hypothesis will be rejected. Given that the opportunity for strategic behavior in the Second Vote vs. First Vote treatments exists only if the first round vote passes, we expect to fail to reject hypothesis 2 and to reject hypothesis 3. If subjects use their second votes in order to bargain down their future costs, we expect to reject hypothesis 4. We employ both the quantitative and qualitative data from the experiments to examine these four hypotheses in the discussion below.
Consequentiality and demand revelation
Table 24.2 reports the demand revealing predictions and observed votes for all four treatments by subject type. Table 24.3 reports the results of chi-squared tests (Agresti, 1996) comparing the vote distribution for each subject type for each treatment of the consequential double referendum to the vote distribution in the inconsequential double referendum. There are no differences for subjects with (Ri, Ci) combinations of (40, 50) and (80, 70). In addition, there is no difference between behavior in the Second Vote vs. No Investment, No Information treatment and behavior in the inconsequential double referendum for any subject type. Therefore, it appears that behavior in the inconsequential double referendum mirrors that in a consequential double referendum when the two voting rounds are treated independently and no information on the link between the first vote and second cost is provided. There are significant differences in behavior in the inconsequential double referendum and the Second Vote vs. First Vote treatments for subjects with (Ri, Ci) combinations of (40, 70) and (80, 50). These subject types have incentives to misrepresent strategically their preferences in the second vote if the first vote passes. Therefore, it appears that if subjects believe that a hypothetical referendum is consequential, and believe that in the second vote they are voting to have the good at the second cost versus having it at the first cost, the referendum may show anomalous response patterns in the second vote. In addition, there is a significant difference in the vote distribution for Ri = 80, Ci = 50 subjects in the Second Vote vs. No Investment, Information treatment of the consequential double referendum and the inconsequential double referendum. This result indicates a possible effect of information about the relationship between a subject's first vote and second cost. We will investigate this result further later in this chapter by examining the heuristics subjects used in determining their vote in the second round of the referendum.
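Comparisons of this kind can be run directly on the raw vote counts with a standard contingency-table statistic. The sketch below is a plausible illustration rather than the authors' code: it uses the (R, C) = (80, 50) counts from Tables 24.1 and 24.2, and because the published tests (Agresti, 1996) may treat sparse cells differently, it need not reproduce the reported p-values exactly.

import numpy as np
from scipy.stats import chi2_contingency

# Vote counts (Yes–Yes, Yes–No, No–Yes, No–No) for (R, C) = (80, 50):
# inconsequential referendum vs. the 2 vs. 1, No Information treatment.
counts = np.array([[23, 0, 0, 1],
                   [12, 11, 1, 0]])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# Several expected counts are below 5, so the asymptotic approximation is
# rough; an exact or Monte Carlo test would be preferable in practice.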
Table 24.2 Demand revealing predictions and observed vote distributions – consequential double referendum treatments

                        Yes–Yes       Yes–No        No–Yes        No–No         Total
                        n     %       n     %       n     %       n     %       n     %
R = 40, C = 50
predicted               0     0.0     0     0.0     12    100.0   0     0.0     12    100.0
2 vs. None, No Info     0     0.0     0     0.0     12    100.0   0     0.0     12    100.0
2 vs. 1, No Info        2     16.7    0     0.0     10    83.3    0     0.0     12    100.0
2 vs. None, Info        0     0.0     1     8.3     9     75.0    2     16.7    12    100.0
2 vs. 1, Info           0     0.0     0     0.0     12    100.0   0     0.0     12    100.0
R = 40, C = 70
predicted               0     0.0     0     0.0     0     0.0     12    100.0   12    100.0
2 vs. None, No Info     1     8.3     0     0.0     1     8.3     10    83.3    12    100.0
2 vs. 1, No Info        0     0.0     2     16.7    2     16.7    8     66.6    12    100.0
2 vs. None, Info        0     0.0     0     0.0     1     8.3     11    91.7    12    100.0
2 vs. 1, Info           0     0.0     0     0.0     5     41.7    7     58.3    12    100.0
R = 80, C = 50
predicted               24    100.0   0     0.0     0     0.0     0     0.0     24    100.0
2 vs. None, No Info     22    91.7    2     8.3     0     0.0     0     0.0     24    100.0
2 vs. 1, No Info        12    50.0    11    45.8    1     4.2     0     0.0     24    100.0
2 vs. None, Info        19    79.2    5     20.8    0     0.0     0     0.0     24    100.0
2 vs. 1, Info           10    41.7    14    58.3    0     0.0     0     0.0     24    100.0
R = 80, C = 70
predicted               0     0.0     24    100.0   0     0.0     0     0.0     24    100.0
2 vs. None, No Info     2     8.3     18    75.0    1     4.2     3     12.5    24    100.0
2 vs. 1, No Info        1     4.2     22    91.6    0     0.0     1     4.2     24    100.0
2 vs. None, Info        5     20.8    18    75.0    0     0.0     1     4.2     24    100.0
2 vs. 1, Info           4     16.7    20    83.3    0     0.0     0     0.0     24    100.0
Table 24.3 Chi-squared p-values for differences in vote distributions in inconsequential double referendum and consequential double referendum treatments, by subject type

                    2 vs. None, No Info   2 vs. 1, No Info   2 vs. None, Info   2 vs. 1, Info
Ri = 40, Ci = 50    p = 0.246             p = 0.570          p = 0.284          p = 0.246
Ri = 40, Ci = 70    p = 0.850             p = 0.020          p = 0.445          p = 0.001
Ri = 80, Ci = 50    p = 0.221             p = 0.001          p = 0.041          p = 0.000
Ri = 80, Ci = 70    p = 0.624             p = 0.837          p = 0.199          p = 0.148

Table 24.4 Levels and rates of non-demand revealing voting

                                            First vote only    Second vote only    First and second votes
Treatment                                   n      %           n      %            n      %
Inconsequential (N = 144)                   6      4.2         4      2.8          14     9.7
Consequential – all treatments (N = 288)    5      1.7         55     19.1         8      2.8
2 vs. None, No Info (N = 72)                1      1.4         5      6.9          4      5.6
2 vs. 1, No Info (N = 72)                   3      4.2         14     19.4         3      4.2
2 vs. None, Info (N = 72)                   1      1.4         13     18.1         1      1.4
2 vs. 1, Info (N = 72)                      0      0.0         23     31.9         0      0.0

Table 24.5 Chi-squared p-values for tests of differences between rates of non-demand revelation in the inconsequential double referendum and consequential double referendum treatments

Treatment              First vote only   Second vote only   First and second votes
2 vs. None, No Info    p = 0.278         p = 0.149          p = 0.296
2 vs. 1, No Info       p = 1.000         p = 0.000          p = 0.152
2 vs. None, Info       p = 0.278         p = 0.000          p = 0.023
2 vs. 1, Info          p = 0.079         p = 0.000          p = 0.006

Table 24.4 reports the levels and rates of non-demand revelation for the first vote only, the second vote only, and both votes for the inconsequential double referendum experiment and the four treatments from the consequential double referendum experiment. Table 24.5 reports the results of tests for differences in the rates of non-demand revelation in the inconsequential double referendum and the consequential double referendum treatments. In general, the rates of non-demand revealing voting in the first vote only are not significantly different in the inconsequential double referendum and any of the consequential double referendum treatments. The rate of non-demand revelation in the second vote only is not different between the inconsequential double referendum and the Second Vote vs. No Investment, No Information treatment. However, there are significantly more non-demand revealing votes in the other three consequential double referendum treatments than in the inconsequential double referendum. In addition, there are significantly fewer non-demand revealing double votes in both of the Information treatments than in the inconsequential double referendum. There is no significant difference in the rate of non-demand revealing voting in both voting rounds of the No Information treatments relative to the inconsequential double referendum. Therefore, although it
appears that the introduction of consequentiality does not significantly affect the rate of non-demand revelation in the first round of a double referendum, it can significantly increase the rate of non-demand revealing voting in the second round of a double referendum if subjects believe that the second vote is for the good at the second cost vs. the outcome of the first vote. Information appears to reduce the rate of double-vote non-demand revelation. These results indicate that the introduction of consequentiality has little effect on first vote non-demand revelation, increases second vote non-demand revelation, and reduces double vote non-demand revelation.
Qualitatively, demand revealing voters make their decisions using self-interested, financially motivated heuristics. Demand revealing subjects in the consequential referendum frequently stated that they compared returns and costs, chose to vote so as to make a profit, or to avoid a loss. These comments are very similar to the heuristics employed by demand revealing voters in the inconsequential double referendum. The qualitative results indicate that demand revealing subjects employ similar methods to make their voting decisions, regardless of whether or not the referendum is consequential.
Incentives in the second vote
As depicted in Figure 24.1, some subjects in the Second Vote vs. First Vote treatments have an incentive to cast non-demand revealing second votes if the first vote passes in order to increase their experimental earnings. If the first round vote fails, then all four treatments have the same incentive properties. Because the distribution of induced values is randomly assigned to each experimental session of each treatment, and because not all subjects will cast demand revealing first votes, some first round votes will fail. Table 24.6 reports the results of tests comparing the vote distributions in the Second Vote vs. No Investment and Second Vote vs. First Vote treatments, conditioned on the outcome of the first vote in the Second Vote vs. First Vote treatments.

Table 24.6 Chi-squared p-values for tests of differences between vote distributions, conditioned on first vote outcome

                       First vote failed                     First vote passed
                       2 vs. 1, No Info   2 vs. 1, Info      2 vs. 1, No Info   2 vs. 1, Info
2 vs. None, No Info    p = 0.567          p = 0.066          p = 0.016          p = 0.000
2 vs. None, Info       p = 0.290          p = 0.027          p = 0.054          p = 0.000

There are no significant differences between either of the Second Vote vs. No Investment treatments and the Second Vote vs. First Vote, No Information treatment when the first vote of the Second Vote vs. First Vote, No Information treatment failed. However, there are differences in the vote distributions between the Second Vote vs. No Investment treatments and the
Second Vote vs. First Vote, Information treatment when the first vote of the Second Vote vs. First Vote, Information treatment failed. This result may point to a role for information in influencing subjects' second votes. If the first vote passed, there is a significant difference in behavior between both of the Second Vote vs. First Vote treatments and the Second Vote vs. No Investment treatments. Therefore, we fail to reject hypothesis 2 when there is no information in the Second Vote vs. First Vote treatment, but reject it when there is information added to this treatment. There is strong evidence to reject hypothesis 3, consistent with the predictions of economic theory. If the first vote passes and subjects cast strategically non-demand revealing second votes, the Second Vote vs. First Vote treatments should have more Yes–No and No–Yes votes than the demand revealing prediction. An examination of Table 24.2 indicates that the vote distributions in these treatments deviate from the demand revealing prediction in the expected direction. These deviations are significant at the 5 percent level of significance when the vote distributions are aggregated across subject types. In the Second Vote vs. First Vote, No Information treatment, 13 of the 31 subjects with opportunities to cast strategically non-demand revealing votes did so. In the Second Vote vs. First Vote, Information treatment, 18 of 25 subjects voted in a non-demand revealing manner in order to increase their earnings. The qualitative data from these subjects reinforce the quantitative results. Of the 13 subjects in the Second Vote vs. First Vote, No Information treatment, 11 made explicit comparisons between their part one and part two earnings when describing how they chose to cast their second votes. Fifteen of the 18 non-demand revealing subjects in the corresponding treatment with information made similar comparisons. These qualitative results reinforce the quantitative results that if subjects believe that the second round of the double referendum is to have the good at the second cost versus having the good at the first cost, non-demand revealing second votes may result.
The role of information
Two results from the analysis of hypotheses 1 through 3 point to a possible role for information. The first is the result that there are significantly fewer subjects who cast non-demand revealing votes in both rounds in the consequential treatments with information than in the inconsequential double referendum. This result may indicate that the information helps subjects to resolve confusion about their second costs. The second result that points to a role for information is the difference in the vote distributions in the Second Vote vs. No Investment, No Information treatment and the Second Vote vs. First Vote, Information treatment when the first vote of the latter failed. The only difference between these two treatments is the presence of information in the latter treatment. This result may indicate that subjects used the information about the first vote and second cost in choosing how to vote in the second voting round. The qualitative results provide more insight into this result.
A comparison of the overall vote distributions from all sessions of the No Information and Information treatments allows for an additional test on the role of information. The vote distributions in the Second Vote vs. No Investment, No Information and Second Vote vs. No Investment, Information treatments are not significantly different from each other (p = 0.780). This result also holds for the Second Vote vs. First Vote, No Information and Second Vote vs. First Vote, Information treatments (p = 0.842). This result would seem to contradict the results above, as information appears to have no effect on the aggregate vote distributions. The qualitative data help to reconcile these apparently contradictory results.
Insight into the effect of information can be gained by examining the qualitative responses of non-demand revealing subjects. Although subjects whose second votes are demand revealing seem to behave consistently across treatments, subjects whose second votes are not demand revealing do not. In the Second Vote vs. No Investment, No Information treatment, there were five subjects whose first votes were demand revealing and whose second votes were not. Four of these five subjects appeared to be confused by the introduction of the second vote, and not to understand what the point of part one of the experiment was. Below are their comments:
I was at a loss! I don't know why I voted Yes since if the investment went through I would've lost. I just wanted to see the outcome of Part II.
At first I was shocked @ the seemingly large loss (NewC) that I would experience but then I focused on the R and realized it was OK. I did not change my vote because I was going to vote YES as long as I earned 5 or more dollars. What is the point of part 1?
NewC was better but still not worth it. I was confused. I thought we routed back to our part 1 balance of and then I would have had 130 tokens.
I messed up. I should have voted Yes. Then I was confused again. Part 1 didn't matter. I should have voted "yes."
In all four of these cases, the first vote passed. As a result, part two of the experiment essentially nullifies the part one vote. The contrast between the comments from the non-demand revealing voters in the Second Vote vs. No Investment, No Information treatment and the non-demand revealing voters in the Second Vote vs. No Investment, Information treatment, which has the same incentive properties, reveals the effect of information on subjects' decisions. In the treatment with information, 13 subjects cast non-demand revealing second votes. Eleven of these subjects stated that they voted so as to influence their earnings or cost in a presumed part three of the experiment. Of the 11 who voted to influence their cost, six thought that a no vote would lower their cost in part three, and five thought that a yes vote would pay more benefits in the long run. Three of these five subjects noted that thinking that a yes vote would result in increased earnings (presumably through a lowered cost) was contrary to
logic or intuition. In the Second Vote vs. First Vote, No Information treatment, most non-demand revealing voters cast such votes so as to strategically increase their earnings. Six voters mentioned the possibility that their vote might influence a future cost. One subject stated that s/he voted so as to keep the cost from going up further, and three stated that they thought a yes vote would lower future costs. The effect of information appears to be dominated by the incentive properties of this treatment.

These results explain the lack of quantitative difference between the aggregate results in the Information and No Information treatments. Because not all subjects formed the same priors about how their second vote could influence future costs (or benefits), their responses canceled each other out in the aggregate vote distributions. Two sample comments are:

I thought there would be another vote and if I voted NO then my C would go down by 20.

I thought there would be a part 3 that would flip the new C for example if I voted Yes, against logic my value for C would add for the presumed next part.
Discussion and conclusions

These results provide insight into what may be occurring in field double referendum mechanisms. The empirical and theoretical literature starts with the assumption that anomalous behavior in hypothetical double referenda arises from demand revealing first round votes and non-demand revealing second round votes. The results of the inconsequential double referendum are inconsistent with this assumption, as most non-demand revealing behavior resulted from doubly non-demand revealing votes. This result points to two possibilities: either the empirical and theoretical literature is in error and anomalous behavior in hypothetical double referendum surveys arises in responses to the first vote, or subjects do not treat hypothetical double referenda as purely inconsequential. We test the second proposition here through an investigation of consequential double referenda.

The results of the consequential double referendum mechanism appear to be more consistent with the literature on behavior in field hypothetical double referendum mechanisms. Adding consequentiality to the double referendum mechanism does not affect the rate of non-demand revelation in the first round vote relative to the inconsequential double referendum; in both referenda, the rate of non-demand revelation in the first vote is low. However, adding consequentiality can significantly increase non-demand revelation in the second vote if subjects believe that the second vote is a choice between having the good at the second cost and having the good at the first cost. Last, adding consequentiality significantly reduces the rate of non-demand revealing voting in both rounds of the mechanism. These results seem to indicate that if anomalies arise in the responses to field double referendum surveys, they may result because subjects perceive these referenda to be consequential.
If this is the case, what can survey designers do to minimize the potential for bias in responses to field surveys? First and foremost, the survey must make clear that each round of the referendum is an independent take-it-or-leave-it vote to have the good at the specified cost versus no good at all. Most double referendum surveys implemented in the field do not explicitly state this and leave it to the respondent to infer what they are voting for in the second round of the survey. Second, if the survey is perceived as consequential, not only must respondents perceive the possibility of public good provision to be increasing with the percentage of yes votes in the survey, they must also perceive the possibility of incurring a cost to be increasing with the number of yes responses. Survey designers usually endeavor to meet this second criterion by making all aspects of the survey as realistic as possible.

It is less clear what information surveyors should provide respondents about the relationship between their first votes and second costs. Given that the responses to information were inconsistent and contradictory, even when the information was explicitly provided to the subjects, it is unlikely that respondents to field surveys perceive the link between their first vote and second cost when the link is not explicitly stated, and act on it in some systematic fashion. Given that not all subjects form the same expectations about how a vote will influence future costs, even in a simple laboratory setting, it is likely that incorporating such information into a field survey will do little more than add noise to the willingness to pay distribution. Therefore, field researchers should avoid incorporating information that will not have a predictable effect on subjects' responses.

The double referendum mechanism that performs best in terms of both individual and aggregate demand revelation is the consequential mechanism in which the second vote is a choice between having the good at the second cost and no good at all, and in which no information is provided to the subjects about the relationship between their first round votes and second round costs. Field double referendum survey designers should avoid creating any impression of interdependency between the two votes to reduce the possibility of inconsistent responses resulting from confusion about the second round vote.
Acknowledgments

The researchers gratefully acknowledge the support for this research by the National Science Foundation, Decision, Risk and Management Sciences and Measurement, Methodology, and Statistics Divisions, grant number SES-0351946. The opinions expressed herein are solely those of the authors and not those of the National Science Foundation or of the authors' institutions.
Notes

1 All experimental instructions are available from the authors upon request.
2 Weekly cadet take-home pay ranges from $65 to $100. Therefore, an additional $10 for less than an hour's time represents a significant addition to a cadet's weekly income. Given this, there is no doubt that cadets took the experiment seriously.
25 Investigating the characteristics of stated preferences for reducing the impacts of air pollution
A contingent valuation experiment

Ian J. Bateman, Michael P. Cameron, and Antreas Tsoumas
Introduction

Airborne pollutants impact upon a variety of receptors including humans, animals, plants, buildings and materials. Individuals who are aware of and concerned by such impacts may value their reduction. This chapter presents the findings of an experiment designed to investigate the nature of stated preferences for reducing air pollution impacts obtained using the contingent valuation (CV) method. The CV method is a technique for assigning monetary values to individual preferences for changes in the provision of a good or set of goods (for a review of the CV method see Mitchell and Carson, 1989, and for more recent debate see Bateman and Willis, 1999). The method typically operates through surveys of individuals in which respondents are presented with a hypothetical or contingent market for a good and asked to state either their willingness to pay (WTP) or willingness to accept (WTA) compensation for either a gain or loss of that good. CV has been extensively used to assess preferences for non-market goods such as those provided by the environment.

A key objective of our research was to examine the extent to which values derived by CV were consistent with economic theory or exhibited certain anomalies reported in the literature. By anomalies we mean results that appear to be inconsistent with the expectations of economic theory as set out in many standard texts (for example Varian, 1992). We apply the CV method through a split-sample experimental design that allows investigation of the presence or absence of these anomalous results within the context of the same valuation exercise. The financial confines of the present research precluded investigation of the origin of any observed anomalies. To do so would have required a switch away from the hypothetical contingent market that underpins the CV method to the
use of real-payment approaches such as those used in Bateman et al. (1997a, 1997b).

Emissions and impacts

In this study we focus upon the impacts of air pollutants rather than the emissions themselves.1 In order to motivate the empirical study two hypothetical schemes for reducing air pollution impacts were derived as follows:

• Scheme H: Reduction of the impacts of toxic vehicle emissions upon human health.
• Scheme P: Reduction of the impacts of acidic power station emissions upon plant life.
In order to implement our research design these were supplemented by a further combined scheme as follows: Scheme A = Scheme H + Scheme P. The goods described by these three schemes provided the basic building blocks for constructing valuation scenarios. In the next part of the chapter we briefly review theoretical expectations regarding CV values for public goods such as these. Specifically we consider four inter-related issues that have been the focus of recent research concerning arguably anomalous results derived from CV studies. These issues are:

i scope sensitivity;
ii part–whole/substitution effects;
iii ordering effects; and
iv visible choice set effects.
We then describe our novel experimental design for testing for the presence of such effects in values for the three impact reduction schemes mentioned above, before presenting our experimental results, opening with sampling details and sample socio-economic and demographic characteristics. Valuation results are then presented and a set of hypotheses regarding theoretical expectations (and hence anomalies) are formulated and tested. Finally, we summarize our findings and present conclusions.
Theoretical expectations and anomalies

The basic tenet of welfare economics is that individuals maximize their utility by choosing what they prefer, and preferring what they perceive as yielding maximum utility (Varian, 1992).2 The preferences underpinning these choices can be expressed as values that in turn may be assessed through measures such
as WTP for a particular good. Economic theory says very little about the psychological processes that form preferences,3 but does assume a form of rationality and consistency of preferences from which certain testable hypotheses may be derived. Here we review the four issues identified previously, describing theoretical predictions and how anomalous responses may cause deviations between predicted and observed value relationships.

Scope sensitivity

Scope sensitivity describes the extent to which stated values are sensitive to changes in various dimensions of the good under investigation (Carson et al., 2001). For example, it may be that values rise with increases in the physical scale of an impact reduction scheme. However, while standard economic theory suggests that values should not fall as scope increases, it does not require that values rise with scope. For instance, an individual may have a positive WTP for setting up a recreational woodland but, once that is provided, be unwilling to pay for a second such woodland. The issue of whether scope sensitivity should be and is observed in a given application is essentially an open empirical question dependent upon the nature of the good and change in provision concerned. Nevertheless, since publication of the US NOAA Panel report on the validity of CV (Arrow et al., 1993), scope sensitivity has been viewed (arguably with dubious justification) as a key indicator of study quality and has generated a substantial empirical and theoretical literature (Goodstein, 1995). Bateman et al. (2004) describe a number of tests including examinations of the consistency of scope sensitivity across valuations of nested goods, i.e. where the scope of one "inclusive" good entirely comprises and exceeds that of another subset good. In this study we adopt a straightforward approach to testing for scope sensitivity; specifically that values for an inclusive good should not be less than values for a subset good. Considering the three air pollution impact reduction schemes, this equates to the theoretical expectations given in equations (25.1) and (25.2) that:

WTP(Scheme H) ≤ WTP(Scheme A)
(25.1)

and

WTP(Scheme P) ≤ WTP(Scheme A). (25.2)
Satisfaction of these tests is insufficient to prove the theoretical consistency of our contingent values. As Svedsater (2000) points out, scope sensitivity might be observed when respondents are asked to value nested schemes simply because the respondent is influenced by their previously stated values and attempts to act in an internally consistent way. Failure of these tests, however, would be a strong indication of anomalous stated preferences.
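In practice, tests of (25.1) and (25.2) reduce to comparisons of WTP distributions. The sketch below, in Python with scipy, illustrates a within-subject version on invented placeholder responses; it is not the chapter's own analysis (the actual tests are reported in the results section), and all names and numbers are ours.

```python
# Minimal sketch of the scope test in equation (25.1), assuming paired
# open-ended WTP responses for Scheme H and the inclusive Scheme A from
# the same respondents. All numbers are invented placeholders.
import numpy as np
from scipy.stats import wilcoxon

wtp_h = np.array([5.0, 20, 50, 52, 100, 105, 300, 40, 60, 75])
wtp_a = np.array([10.0, 30, 70, 70, 150, 150, 500, 45, 90, 120])

# Theory requires WTP(H) <= WTP(A); a significantly positive median of
# (A - H) is evidence of scope sensitivity, while significantly negative
# differences would signal anomalous responses.
stat, p = wilcoxon(wtp_a - wtp_h, alternative="greater")
print(f"Wilcoxon signed-rank statistic = {stat:.1f}, one-sided p = {p:.4f}")
```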
Part–whole/substitution effects

The "part–whole" phenomenon4 occurs in the context of CV studies when it appears that the sum of the valuations placed by an individual on the parts of a good is larger than the valuation placed on the good as a whole. In the wake of the Exxon Valdez oil spill, part–whole effects emerged as a principal focus of debate regarding the validity of the CV method.5 The occurrence of part–whole effects within CV studies was (and still is) seen by critics as a major challenge to the validity of the CV method. However, Bateman et al. (1997a) demonstrate that part–whole effects can be observed in consumers' real-money purchases of private goods. This suggests that such effects may constitute a true anomaly and shortcoming of standard theory. However, substitution effects mean that the presence of part–whole phenomena for certain goods need not necessarily constitute a theoretical anomaly (Carson et al., 1998). For example, two "part" goods might be regarded as substitutes for each other, in which case the value of the "whole" bundle consisting of both goods might be less than the sum of the constituent parts.6

In our application we have chosen goods that individuals may or may not consider as substitutes for each other. It may or may not be that the reduction of air pollution impacts upon plants (Scheme P) is a substitute, or partial substitute, for relieving air pollution impacts upon human health (Scheme H). Therefore we cannot distinguish between the part–whole phenomenon (a theoretical anomaly) and a substitution effect (a finding that is entirely consistent with theory). However, this chapter is constrained to an empirical investigation of whether part–whole/substitution effects are observed rather than to disentangling the precise cause of such an effect. Considering our elicited values, and remembering that Scheme A involves the joint implementation of Schemes H and P, part–whole/substitution effects would be observed if equation (25.3) holds:

[WTP(Scheme H) + WTP(Scheme P)] > WTP(Scheme A).
(25.3)
For convenience we will refer to the sum [WTP(Scheme H) + WTP(Scheme P)] as the "calculated" value of Scheme A and contrast this with the amount WTP(Scheme A), which we refer to as the "stated" value of Scheme A.
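Equation (25.3) can likewise be checked by comparing each respondent's calculated value with their stated value for the whole. A minimal sketch, again on invented numbers:

```python
# Sketch of the part-whole comparison in equation (25.3), assuming one
# treatment in which each subject stated WTP for Schemes H, P and A.
# Numbers are invented for illustration only.
import numpy as np
from scipy.stats import wilcoxon

wtp_h = np.array([30.0, 55, 80, 20, 100, 45, 70, 25])
wtp_p = np.array([20.0, 30, 50, 10, 60, 25, 40, 15])
wtp_a = np.array([40.0, 70, 100, 25, 130, 60, 90, 35])

calculated_a = wtp_h + wtp_p      # "calculated" value of Scheme A
excess = calculated_a - wtp_a     # positive excess: part-whole/substitution
stat, p = wilcoxon(excess, alternative="greater")
print(f"mean excess = {excess.mean():.2f}, one-sided p = {p:.4f}")
```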
Ordering effects, list direction and list length

One of the earliest findings of empirical CV research is that when respondents are presented with a list of goods and asked to provide values for each of those goods, the stated value for any given good is dependent upon its position, such that the nearer to the start of the list the good is positioned, the higher the stated value it is accorded (Randall et al., 1981; Hoehn and Randall, 1982; Hoehn, 1983; Tolley et al., 1983). In a recent reassessment of this issue,
Bateman et al. (2004) show that whether or not such results are anomalous depends in part upon the type of list in which goods are presented. In an exclusive list, which is the kind of list that choice theory typically addresses, goods are presented as alternatives to any other goods given in that list, with the level of other goods held constant across valuation tasks.7 Here the stated value for a good valued at any position in such a list always refers to the same unit of that good irrespective of its position in that list. Provided that the CV respondent adjusts their perceived holdings of goods back to the initial status quo between valuation tasks, any residual variation associated with presentation is therefore an anomaly and can be termed an ordering effect. Empirical evidence of the presence of such effects in CV studies is mixed (Boyle et al., 1993).

We can further characterize lists in terms of their "direction", i.e. whether they progress from "smaller" to "larger" goods, which we term a "bottom-up" list, or from "larger" to "smaller" to yield a "top-down" list. Typically, for nested goods, list direction can be determined through inspection of how goods are nested. In our experiment we have clear nesting of Schemes H and P within Scheme A. However, without strong priors regarding expected values, list direction is only obvious ex post for non-nested goods, e.g. the relationship of Schemes H and P to each other is not, a priori, obvious (although an anthropocentric world view might suggest that relieving impacts upon humans is more valuable than relieving impacts upon plants). Nevertheless we shall make use of this list direction terminology in discussing our results.

A final permutation concerning list definition concerns the length of lists. Evidence exists that raising awareness of all the constituent parts of a good may increase stated values for that good; a phenomenon known as event splitting (Starmer and Sugden, 1993; Humphrey, 1995, 1996). In our experiment we vary list length between two or three goods, always including Schemes H and A and either including or excluding Scheme P. By always presenting Scheme A (which embraces Schemes H and P) as the final valuation object we attempt to see whether prior inclusion of Scheme P results in an event-splitting effect, raising the value of Scheme A. As conjectured in Bateman et al. (2004), list length may also have an effect on stated values if warm-glow (individual value associated with the act of giving rather than the value of the good (Andreoni, 1990)) or other-regarding behaviour (Ferraro et al., 2003) is somehow partitioned across all the valuation tasks that an individual understands that they will be asked to complete.

Visible choice set effects

Bateman et al. (2004) define a further dimension through which CV study design may influence scope sensitivity: the visible choice set. Reflecting recent theoretical developments by Cubitt and Sugden (2001), they define the visible choice set as that set of goods which, at any given point in a valuation exercise, the respondent perceives as being the full extent of purchase options that will be
made available in the course of that exercise. For example, prior to any values being elicited respondents might be told that they are going to be presented with three goods, C, B and A, and asked to value each in turn; an approach which Bateman et al. (2004) term an advance disclosure visible choice set. Conversely, respondents may be presented initially with only good C and a value elicited on the basis of that visible choice set alone; then they are told about good B (i.e. the visible choice set changes relative to that held at the initial valuation) and a further valuation elicited; finally they are presented with good A and a value elicited. Bateman et al. (2004) characterize such approaches as exhibiting a stepwise disclosure visible choice set. Note that in the stepwise approach each valuation task is undertaken in ignorance of the subsequent expansion of the choice set.

Evidence for the occurrence of such effects is presented in Bateman et al. (2004), who analyzed visible choice set and list direction effects within a nested set of improvements to an open-access lake in Norfolk, UK. They found that within each treatment increases in the scope of goods are synonymous with rises in WTP, and that a treatment presenting respondents with the lowest value good first, and where they are at that time unaware of a wider choice set, yields higher values both for that initial good and for those presented subsequently. This interaction of visible choice set and ordering effects is explicitly tested for in the experimental design used in the present analysis, with visible choice set effects appearing if values for the same good differ according to whether they were obtained from stepwise or advance disclosure treatments.

Whether or not such effects constitute theoretical anomalies is a debatable point. For private goods, choice theory states that preferences are independent of the choice set and therefore we should expect no difference in stated values elicited from either a stepwise or advance disclosure choice set. Yet, for public goods, choice theory predicts that strategic incentives may affect stated values where the visible choice set contains more than one such good. Because such strategies could be complex and vary across individuals, we will proceed with the assumption that respondents treat the choices offered as independent. This allows us to test the hypothesis that WTP responses will be invariant to visible choice set type.
Study design

Scenarios: air pollution impact reduction schemes

A study design was defined to examine whether the various anomalies and effects under investigation were present within a CV study focusing upon values for the reduction of air pollution impacts. The various anomalies were assessed through a split sample design, with each subsample being presented with a somewhat different questionnaire (full questionnaires for all design permutations may be obtained from the first author listed).
The objective of this research was purely to investigate the relative nature of values for reducing air pollution impacts. Resources were insufficient to investigate the absolute level of those values within an incentive compatible structure. Given these constraints we adopted a simple open-ended response format for eliciting WTP answers. It is recognized that the open-ended format is liable to strategic behavior by respondents (Carson et al., 2001), with under-representation of true WTP being a frequently cited strategy.8 However, in a split sample context, such as adopted in this study, the open-ended approach is acceptable for detecting differences in WTP responses between treatments (see, for example, Bateman and Langford, 1997). The open-ended method is also highly statistically efficient in that each respondent is asked to state their maximum WTP, which in turn dramatically reduces sample size requirements relative to the more incentive compatible dichotomous choice approach (Hanemann and Kanninen, 1999), thus facilitating a sufficient sample size within the confines of the available research budget.

Given our focus upon differences in WTP between treatments, rather than a concern for the validity or defensibility of absolute WTP values, efforts were made to simplify the cognitive task faced by respondents. Provided the level of information is kept constant across treatments, any significant difference between subsamples (other than those due to sample characteristics) may indicate the presence of anomalies. Given this we were able to justify reliance upon respondents' prior levels of information, assuming that this is randomly distributed across subsamples. This was clarified to respondents in the opening statement of all questionnaires, which also introduced the subject of air pollution impacts. Respondents were then apprised of the valuation tasks before them: they were informed that they would be presented with details regarding one or more air quality improvement schemes and that they would be asked to value the implementation of these schemes. Respondents facing advance disclosure visible choice sets were told from the outset the number of air quality improvement schemes (two or three) that they would be presented with during the entire course of the experiment. However, respondents facing stepwise disclosure treatments were only told of the first scheme that they would face. Respondents were then presented with various combinations of scheme details and valuation tasks in exclusive list formats. The various combinations employed over the split sample design are detailed subsequently.

Split sample design and corresponding tests

Investigation of the various anomalies discussed previously dictated the various treatments that together define the study design. Combining the scope sensitivity and part–whole substitution tests with examinations of ordering and visible choice set effects led us to devise a study design consisting of five subsamples of respondents, described in points (i) to (v) below.

i Here a stepwise disclosure approach was adopted. Respondents were presented with Scheme H and asked to value it. Respondents were then
presented with Scheme A and asked to value that. Comparison of these values provides a simple scope test. We label this subsample SHA.
ii Here a stepwise disclosure approach was again adopted. Respondents were presented with Scheme H and asked to value it. This process was then repeated for Scheme P and finally for Scheme A. Comparison of these values provides a further simple scope test. Furthermore, the derived values for Schemes H, P and A allow us to conduct a part–whole test for a stepwise treatment. Comparison with subsample SHA allows us to see if there is an ordering effect with regard to the value of Scheme A. We label this subsample SHPA.
iii Here an advance disclosure approach was adopted. Respondents were presented with Scheme H and Scheme A before being asked to value both in turn. Comparison of these values provides a further simple scope test. Comparison with subsample SHA allows us to see if there is a visible choice set effect with regard to the values of Schemes H and A. We label this subsample AHA.
iv Here an advance disclosure approach was again adopted. Respondents were presented with Schemes H, P and A before being asked to value each in turn. Comparison of these values provides a further simple scope test. Furthermore, the derived values for Schemes H, P and A allow us to conduct a part–whole test for an advance disclosure treatment. Comparison with subsample SHPA allows us to see if there is a visible choice set effect with regard to the values of Schemes H, P and A. We label this subsample AHPA.
v Here a stepwise disclosure approach was adopted. Respondents were presented with Scheme P and asked to value it. Respondents were then presented with Scheme H and asked to value that. Comparison of these values with those for the same schemes elicited from subsamples SHPA and SHA provides tests of ordering effects for these values. We label this subsample SPH.
Table 25.1 summarizes the split sample design discussed above. In the design column, entries appear in the order experienced by participants: the information shown before the initial valuation task defines the choice set visible at that point, and any information shown later marks the subsequent expansion of the visible choice set just prior to later valuation tasks (as experienced by participants in stepwise treatments). The fourth column provides labels for the various values directly stated by respondents in each treatment, indicating both the subsample from which that value was obtained and, after the underscore, the scheme valued. For example, the value stated by respondents in subsample SHA for Scheme H is denoted SHA_H. Calculated values are labeled in a similar manner but include a c before the scheme: while SHPA_A indicates the stated value for Scheme A derived from the SHPA subsample, the inferred value of Scheme A (calculated by summing the stated values for Schemes H and P from the same subsample) is denoted SHPA_cA.
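A compact way to see the convention is to treat stated values as a lookup keyed by subsample and scheme, with inferred values backed out arithmetically. The sketch below uses the mean stated values that appear later in Table 25.3 purely for illustration; in the study the same arithmetic is applied respondent by respondent.

```python
# Sketch of the value-labelling convention of Table 25.1: stated values are
# indexed by subsample and scheme, and inferred values are backed out by
# adding or subtracting stated ones. Illustrated with mean stated values
# from Table 25.3 (the study computes these per respondent).
stated = {
    ("SHA", "H"): 79.50,  ("SHA", "A"): 107.13,
    ("SHPA", "H"): 84.61, ("SHPA", "P"): 54.16,
}

# SHA_cP = SHA_A - SHA_H: Scheme P value inferred from the SHA treatment.
sha_cp = stated[("SHA", "A")] - stated[("SHA", "H")]
# SHPA_cA = SHPA_H + SHPA_P: Scheme A value inferred from SHPA.
shpa_ca = stated[("SHPA", "H")] + stated[("SHPA", "P")]
# Matches the SHA_cP and SHPA_cA means in Table 25.3 up to rounding.
print(f"SHA_cP = {sha_cp:.2f}, SHPA_cA = {shpa_ca:.2f}")
```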
Table 25.1 Experimental design and sub-sample structure

Group | Disclosure type | Design (ordering of information provision and valuation questions) | Stated values | Inferred values
SHA (n = 40) | Stepwise | Information: Scheme H; WTP Scheme H; Information: Scheme A; WTP Scheme A | SHA_H, SHA_A | SHA_cP = (SHA_A – SHA_H)
SHPA (n = 40) | Stepwise | Information: Scheme H; WTP Scheme H; Information: Scheme P; WTP Scheme P; Information: Scheme A; WTP Scheme A | SHPA_H, SHPA_P, SHPA_A | SHPA_cA = (SHPA_H + SHPA_P)
AHA (n = 40) | Advance | Information: Scheme H; Information: Scheme A; WTP Scheme H; WTP Scheme A | AHA_H, AHA_A | AHA_cP = (AHA_A – AHA_H)
AHPA (n = 28) | Advance | Information: Scheme H; Information: Scheme P; Information: Scheme A; WTP Scheme H; WTP Scheme P; WTP Scheme A | AHPA_H, AHPA_P, AHPA_A | AHPA_cA = (AHPA_H + AHPA_P)
SPH (n = 40) | Stepwise | Information: Scheme P; WTP Scheme P; Information: Scheme H; WTP Scheme H | SPH_P, SPH_H | SPH_cA = (SPH_P + SPH_H)
Results

Data were collected through one-to-one, in-person surveys of students at their residential addresses at the University of East Anglia, addresses being selected at random. A total sample of 238 respondents was collected, of which 50 were used in a pilot survey refining the wording of questionnaires. As wording was substantially simplified following the pilot survey, those presented with the pilot questionnaire are excluded from our analysis.

Subsample demographic characteristics

All respondents were asked a number of socio-economic and demographic questions. These were used to examine possible differences between subsamples that may complicate our subsequent analyses. Summary statistics for key variables within and across subsamples are presented in Table 25.2. Considering respondents' expected income over the next 12 months, while subsample AHPA appears to have a somewhat higher income than other subsamples, these differences proved to be barely insignificant.
Table 25.2 Socio-economic and demographic profile of subsamples

Group | Gross expected income in next 12 months (£): mean (s.e.), median | Non-UK respondents | Gender: male/female | Age last birthday: mean (s.e.), median | Previously studied economics? yes/no | Total subsample size
SHA | 6138 (676), 5000 | 3 | 16/24 | 20.1 (0.40), 19.0 | 30/10 | 40
SHPA | 6411 (732), 4500 | 2 | 16/24 | 20.3 (0.24), 20.0 | 33/7 | 40
AHA | 7171 (956), 5000 | 2 | 20/20 | 21.7 (0.33), 21.5 | 31/9 | 40
AHPA | 9059 (1007), 6650 | 0 | 14/14 | 23.3 (0.73), 23.0 | 21/6 | 28
SPH | 6581 (895), 4000 | 1 | 19/21 | 20.8 (0.49), 20.0 | 32/8 | 40
All subsamples | 6977 (393), 5000 | 8 | 85/103 | 21.1 (0.20), 20.0 | 147*/40* | 188
S.o.D.** | 0.054 | 0.446 | 0.189 | 0.000 | 0.945 |

Notes
* One missing value.
** Significance of differences.
Similarly, no significant differences were found either in gender or in the number of non-UK respondents in each subsample (who arguably would be less likely to receive the long term benefits of any air pollution impact reduction scheme). Considering respondent age, while the descriptive statistics shown in Table 25.2 show that mean age for all subsamples was within the range 20 to 24 years, significant differences were nevertheless found, with subsample AHPA again appearing to be the most different from other subsamples. Taking into account that this is also the subsample with the smallest number of respondents, it seems likely that there are a few older (and probably higher income) respondents within this subsample. Although these are not substantial differences, they are worth keeping in mind when we consider our subsequent valuation results.

WTP for air pollution impact reduction schemes by subsample

Descriptive statistics for the various stated and calculated WTP measures obtained from each subsample are detailed in Table 25.3. Examining these we can see that stated WTP values for Scheme H are relatively stable between subsamples, with mean measures ranging from about £72–£85 and median values between £50–£70 (notice that the highest median values are obtained from subsample SPH, which is the only one where Scheme H is not presented first). Stated values for Scheme P are also relatively stable, with means ranging from £44–£54 and medians varying from £30–£47. However, these values differ substantially from the calculated values for Scheme P (found by subtracting stated values for Scheme H from those for Scheme A), with mean values ranging from £18–£28 and medians of £5–£10.9 This large excess of stated over calculated values suggests either strong part–whole effects or that Schemes H and P are at least partial substitutes for each other. Stated values for Scheme A are also relatively similar across treatments, with means ranging from £100–£113 and medians varying from £70–£100. Calculated values for Scheme A are consistently above their stated equivalents, with means from £117–£131 and medians from £90–£120. Again this would be expected if we were witnessing either part–whole or substitution effects.

These results have some important messages for regulatory policy assessment. First, given the lower bound nature of open-ended responses, these results tentatively suggest that values for air pollution impact reduction schemes may be significant. Second, the findings suggest that these values may be reasonably robust (although we investigate this issue further below). Third, and perhaps most importantly, these findings suggest the presence of significant part–whole or substitution effects. This suggests that simply adding across schemes to obtain estimates of the value of wider schemes ignores the substitution effects that may exist between schemes and therefore risks overestimating the value of the wider schemes. Finally, inspection of the distributional information contained in Table 25.3 suggests that, as often observed in CV studies, distributions of WTP responses are positively skewed.
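The per-measure summaries reported in Table 25.3, including the Shapiro-Wilk normality check in its final column, can be reproduced for any response vector along the following lines. The lognormal draw is an invented stand-in with the positive skew typical of open-ended WTP data.

```python
# Sketch of the per-measure summaries in Table 25.3: count, mean, s.e.,
# s.d., percentiles and a Shapiro-Wilk test of normality. Data are
# simulated placeholders, not the study's responses.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
wtp = rng.lognormal(mean=4.0, sigma=0.9, size=40)   # skewed placeholder

p05, p25, p50, p75 = np.percentile(wtp, [5, 25, 50, 75])
print(f"count={wtp.size}, mean={wtp.mean():.2f}, "
      f"se={wtp.std(ddof=1) / np.sqrt(wtp.size):.2f}, sd={wtp.std(ddof=1):.2f}")
print(f"min={wtp.min():.2f}, p05={p05:.2f}, p25={p25:.2f}, "
      f"median={p50:.2f}, p75={p75:.2f}, max={wtp.max():.2f}")
w_stat, p_norm = shapiro(wtp)           # small p: reject normality
print(f"Shapiro-Wilk p = {p_norm:.3f}")
```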
Table 25.3 Descriptive WTP statistics by subsample and scheme

Measure | Count | Mean | s.e. mean | Std dev. | Min. | P05 | P25 | Median | P75 | Max. | p non-normal*
SHA_H | 40 | 79.50 | 11.81 | 72.79 | 5.00 | 9.75 | 20.00 | 52.00 | 105.00 | 300.00 | 0.015
SHPA_H | 40 | 84.61 | 17.39 | 107.20 | 5.00 | 5.00 | 30.00 | 55.00 | 100.00 | 500.00 | 0.010
AHA_H | 40 | 81.41 | 11.05 | 68.98 | 10.00 | 10.00 | 30.00 | 50.00 | 100.00 | 300.00 | 0.010
AHPA_H | 28 | 72.18 | 11.96 | 63.31 | 0.00 | 0.00 | 16.25 | 50.00 | 127.50 | 200.00 | 0.020
SPH_H | 40 | 81.53 | 7.70 | 48.72 | 5.00 | 15.25 | 50.00 | 70.00 | 100.00 | 240.00 | 0.073
SHPA_P | 40 | 54.16 | 15.89 | 97.95 | 4.00 | 4.95 | 10.00 | 30.00 | 50.00 | 500.00 | 0.010
AHPA_P | 28 | 44.50 | 6.81 | 36.02 | 0.00 | 0.00 | 12.50 | 45.00 | 73.75 | 120.00 | 0.035
SPH_P | 40 | 49.85 | 4.64 | 29.32 | 5.00 | 10.25 | 30.00 | 47.50 | 58.75 | 120.00 | 0.010
SHA_cP | 40 | 27.63 | 11.15 | 68.71 | –30.00 | –11.00 | 0.00 | 5.00 | 25.75 | 400.00 | 0.010
AHA_cP | 40 | 18.31 | 3.66 | 22.89 | –10.00 | 0.00 | 0.00 | 10.00 | 25.00 | 100.00 | 0.010
SHA_A | 40 | 107.13 | 17.72 | 109.21 | 5.00 | 9.75 | 32.50 | 70.00 | 150.00 | 500.00 | 0.015
SHPA_A | 40 | 113.29 | 25.42 | 156.68 | 5.00 | 9.75 | 40.00 | 77.50 | 112.50 | 900.00 | 0.010
AHA_A | 40 | 99.72 | 11.48 | 71.71 | 10.00 | 10.00 | 50.00 | 100.00 | 110.00 | 300.00 | 0.081
AHPA_A | 28 | 104.18 | 15.95 | 84.42 | 0.00 | 0.00 | 21.25 | 90.00 | 195.00 | 280.00 | 0.010
SHPA_cA | 40 | 138.76 | 32.96 | 203.17 | 9.00 | 9.95 | 55.00 | 90.00 | 150.00 | 1000.00 | 0.010
AHPA_cA | 28 | 116.68 | 17.37 | 91.89 | 0.00 | 0.00 | 32.50 | 100.00 | 200.00 | 290.00 | 0.054
SPH_cA | 40 | 131.38 | 11.77 | 74.46 | 10.00 | 25.75 | 80.00 | 120.00 | 161.25 | 360.00 | 0.348

Note
* Shapiro-Wilk test of normality. Here p denotes the probability that the difference between a normal distribution and that of the observed WTP values is due to random chance.
The final column of the table reports a formal test of normality, indicating that, in every case bar one of the calculated measures, normality is rejected at p < 0.1. This indicates that parametric tests relying upon such normality assumptions may be unreliable. Given this, in our analysis we employ non-parametric techniques for testing relationships between the measures collected.

Tests of scope sensitivity and value consistency

In order to examine differences between WTP measures for schemes both within and across treatments, a series of non-parametric tests were conducted.10 A summary of findings is presented in Table 25.4. The counts given in the cells of this table indicate the number of tests that show either a significant (sig) or non-significant (ns) difference between the WTP values concerned. The diagonal cells report tests between comparable WTP sums for identical schemes elicited from different treatments. Here theoretical expectations are that all treatments should yield similar values.11 All of the 23 such tests reported show no significant difference in values (i.e. p > 0.05), suggesting strong valuation consistency across treatments.

Considering comparisons between different value measures (the off-diagonal cells), numbers in parentheses are within-sample (internal) tests, while numbers outside parentheses are between-sample (external) tests. All of the former internal tests hold treatment constant and show consistently significant differences between measures. This confirms that scope sensitivity is indeed statistically significant and that, within any given treatment, values for Scheme H are significantly larger than those for Scheme P and significantly smaller than those for Scheme A. This supports the anthropocentric prior that individuals value reduction of air pollution impacts upon human health more than the reduction of impacts upon plants. The external tests are considerably less consistent and indicate that treatment differences across subsamples do have significant impacts upon WTP values. It is to an analysis of these treatment differences that we now turn.

Table 25.4 Significance of differences in WTP values for schemes*
WTP for scheme | H: sig / ns | P: sig / ns | cP: sig / ns | A: sig / ns | cA: sig / ns
H | 0 / 10 | | | |
P | 7(3) / 6 | 0 / 3 | | |
cP | 9(2) / 0 | 7 / 0 | 0 / 1 | |
A | 1(4) / 15 | 10(2) / 0 | 6(2) / 0 | 0 / 6 |
cA | 3(3) / 8 | 6(3) / 0 | 6 / 0 | 2(2) / 8 | 0 / 3

Note
* Numbers in cells indicate the number of tests that show either a significant (sig, p ≤ 0.05) or non-significant (ns) difference between the WTP values concerned.
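As note 10 records, related samples were compared with Wilcoxon tests and independent samples with Mann-Whitney U-tests. The bookkeeping that yields cell counts of the kind shown in Table 25.4 can be sketched as follows; the data are invented placeholders rather than the study's measures.

```python
# Sketch of the tallying behind Table 25.4: Mann-Whitney U for independent
# (between-sample) pairs, Wilcoxon for related (within-sample) pairs, then
# count significant vs non-significant results. Placeholder data only.
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(1)
between = [(rng.lognormal(4.0, 0.8, 40), rng.lognormal(4.3, 0.8, 40))
           for _ in range(5)]
within = [(x, x * rng.uniform(1.0, 1.7, x.size)) for x, _ in between[:3]]

pvals = [mannwhitneyu(x, y, alternative="two-sided").pvalue for x, y in between]
pvals += [wilcoxon(x, y).pvalue for x, y in within]
sig = sum(p <= 0.05 for p in pvals)
print(f"significant: {sig}, not significant: {len(pvals) - sig}")
```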
Tests of treatment effects

We have five distinct treatments, each of which yields stated and/or calculated values for each of the three schemes under consideration. This experimental design permits inspection of the impact of the dimensions discussed in our theoretical expectations. First, in order to test whether the values for a given good varied across treatments,12 each response was categorized by treatment type and scheme as follows:

Type 1 = stated values from stepwise disclosure treatments with Scheme H valued first (SHA and SHPA);13
Type 2 = calculated values from stepwise disclosure treatments with Scheme H valued first (SHPA);
Type 3 = stated values from advance disclosure treatments with Scheme H valued first (AHA and AHPA);14
Type 4 = calculated values from advance disclosure treatments with Scheme H valued first (AHPA);
Type 5 = stated values (Schemes P and H) or calculated value (Scheme A) from stepwise disclosure treatments with Scheme P valued first (SPH).

Mean and median WTP values for all three schemes across all five treatments are given in Table 25.5, together with non-parametric tests of the null hypothesis that values for a given scheme do not vary across the levels of the Type variable.15 In the table, the Scheme H and Scheme P entries for Types 2 and 4 are calculated values, as are the Scheme A entries for Types 2, 4 and 5; the remaining entries were directly stated by respondents. Inspecting Table 25.5, a number of clear messages can be seen. First, values for Scheme H are consistently higher than within-treatment values for Scheme P and, as noted previously, there is clear evidence of scope sensitivity, with Scheme A values consistently higher than those for other schemes. Second, calculated values for both Scheme H and Scheme P are consistently and substantially below stated values, and this result is generally reversed for Scheme A. Remembering that, for Scheme A, calculated values are obtained by adding together stated values for Scheme H and Scheme P, overall this pattern provides some evidence for a part–whole/substitution effect, with the sum of parts exceeding the stated value of the whole.

The last row of Table 25.5 gives a formal test of the null hypothesis of the equality of values for given schemes across the levels of the Type variable. Equality is clearly rejected for both Scheme H and Scheme P. For Scheme A the test statistic falls just outside the conventional 5 percent significance level, although it is clearly significant at the 10 percent level.16
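The Diff row of Table 25.5 below is a Kruskal-Wallis test of the null hypothesis that a scheme's values are equal across the five Type levels. A minimal sketch, with invented group draws whose sizes echo the Scheme H column:

```python
# Sketch of the Kruskal-Wallis test reported in the Diff row of Table 25.5:
# are WTP values for a given scheme equal across the five Type levels?
# Group draws are invented placeholders.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)
type_sizes = [80, 40, 68, 28, 40]            # n per Type, as for Scheme H
groups = [rng.lognormal(mu, 0.7, n)
          for mu, n in zip([4.4, 4.0, 4.3, 4.0, 4.4], type_sizes)]

stat, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")
```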
Table 25.5 Mean and median WTP (£) for three air pollution impact reduction schemes, by five treatments

Type | Scheme H: n**; mean WTP (s.e.); median | Scheme P: n**; mean WTP (s.e.); median | Scheme A: n**; mean WTP (s.e.); median
1 | 80; 82.05 (10.44); 52.00 | 38; 54.16 (15.89); 30.00 | 76; 110.21 (15.39); 72.50
2 | 40; 59.13 (13.91); 40.00 | 38; 27.63 (11.15); 5.00 | 38; 138.76 (32.96); 90.00
3 | 68; 77.55 (8.10); 50.00 | 28; 44.50 (6.81); 45.00 | 67; 101.58 (9.37); 100.00
4 | 28; 59.68 (10.67); 35.00 | 39; 18.31 (3.66); 10.00 | 28; 116.68 (17.37); 100.00
5 | 40; 81.53 (7.70); 70.00 | 40; 49.85 (4.64); 47.50 | 40; 131.38 (11.77); 120.00
Total | 256; 74.74 (4.74); 50.00 | 183; 38.59 (4.43); 25.00 | 249; 116.37 (7.79); 95.00
Diff* | p = 0.026 | p < 0.001 | p = 0.057

Notes
* Diff = Kruskal–Wallis test of the null hypothesis that values for a given scheme do not vary across the various levels of the Type variable.
** Here n refers to the number of estimates, not to the number of respondents.
Table 25.6 presents details of the influence of various experimental design variables upon WTP values for the three schemes. First, results show that when values are stated rather than calculated, mean WTP for Scheme H is £80.29, compared to £59.36 for calculated values. Corresponding median values are £60 and £40 respectively, and the non-parametric test statistic shows that this difference is clearly statistically significant (p = 0.005). Results for Scheme P conform to the same pattern. However, as discussed previously, this implies the opposite change in values for Scheme A, with mean WTP being £106.17 for stated values compared to £130.14 for calculated values (with an increase in medians from £80 to £100). Our test statistic shows that the part–whole/substitution effect suggested by these findings is clearly significant. Second, mean WTP values are higher from stepwise than advance disclosure treatments. However, examination of medians shows that this effect is not clear cut, and tests only indicate significance at the 10 percent level for Scheme P and no significance for either of the other schemes.

Table 25.6 shows that the impact of adopting the SPH design is to significantly raise the WTP statements for the initially valued good (Scheme P) relative to the comparatively low values accorded to this good under other treatments. This effect is carried over into the values for Scheme H elicited from treatment SPH, which are again significantly higher than those under other treatments. Unsurprisingly this means that the calculated values for Scheme A from treatment SPH are also significantly higher than those from other subsamples. Given that we now have clear evidence from previous tables that Scheme P is considered to be the lowest value of the goods presented to participants, this finding is consistent with that of Bateman et al. (2004).

Finally, Table 25.6 also reports results from our test of event-splitting effects in values for Scheme A. Here, respondents asked to value both the constituent parts of Scheme A, rather than only valuing Scheme H (and not Scheme P), had higher WTP values for Scheme A, with a mean of £122.19 compared to £103.38 (and corresponding median values of £100 and £90 respectively). However, while this difference is in accordance with event-splitting expectations, tests show that this effect is not statistically significant in this instance.17

Summarizing Table 25.6, we can see that the within-scheme variation in values is driven by whether a value is calculated or stated, and whether it was derived from the SPH treatment. However, these variables overlap significantly in that all of the values for Scheme A derived from treatment SPH are calculated rather than stated. To control for this, a final test was performed that permits examination of the crucial question of whether values derived from single "part" good valuation studies can be added to those for other "parts" to estimate correctly values for embracing "whole" goods. To test this we examine whether the sum of values for Scheme H and Scheme P, presented as the first good encountered by respondents in designs where they are unaware of any subsequent valuation possibilities, yields a calculated value for A that is similar to that obtained from stated values for Scheme A. Here we have two values for Scheme H that are both the first good valued by respondents and where those respondents faced stepwise designs and were unaware of subsequent opportunities to value other goods (values SHA_H and SHPA_H). In contrast we have just one such value for Scheme P (value SPH_P). By adding the latter value for Scheme P to each in turn of the values for Scheme H, we obtain estimates of the calculated value for Scheme A based exclusively upon first response values from stepwise designs.
Table 25.6 Treatment effects

Design variable | Scheme H: n; mean WTP (s.e.); median; sig. diff. (p)* | Scheme P: n; mean WTP (s.e.); median; sig. diff. (p)* | Scheme A: n; mean WTP (s.e.); median; sig. diff. (p)*
Stated values | 188; 80.29 (5.49); 60.00 | 106; 49.98 (6.18); 40.00 | 143; 106.17 (9.26); 80.00
Calculated values | 68; 59.36 (9.14); 40.00; p = 0.005 | 77; 22.91 (5.79); 10.00; p = 0.000 | 106; 130.14 (13.34); 100.00; p = 0.025
Advance disclosure | 96; 72.28 (6.55); 50.00 | 67; 29.25 (3.87); 20.00 | 95; 106.03 (8.34); 100.00
Stepwise disclosure | 160; 76.26 (6.52); 51.00; p = 0.933 | 116; 43.98 (6.59); 30.00; p = 0.097 | 154; 122.75 (11.50); 92.50; p = 0.736
Not SPH design | 216; 73.44 (5.45); 50.00 | 143; 35.44 (5.50); 20.00 | 209; 113.50 (9.01); 85.00
SPH design | 40; 81.53 (7.70); 70.00; p = 0.039 | 40; 49.85 (4.64); 47.50; p = 0.000 | 40; 131.38 (11.77); 120.00; p = 0.006
Only Scheme H | – | – | 77; 103.38 (10.44); 90.00
Both Schemes H/P** | – | – | 172; 122.19 (10.26); 100.00; p = 0.243

Notes
* Mann-Whitney U-test of the null hypothesis of no significant difference in values across the two levels of the variable in question (the p value is reported against the second level of each pair).
** To test for the presence of the event splitting effect.
Table 25.7 Comparing stated WTP for Scheme A with values calculated from stepwise first responses for Scheme H and Scheme P

Scheme A measure | Count | Mean (s.e.) | Median
Stated values for A | 148 | 98.83 (8.64) | 75.00
SHA_H + SPH_P | 40 | 128.68 (11.45) | 120.00
SHPA_H + SPH_P | 40 | 133.39 (17.85) | 100.00
This mimics the estimated value of Scheme A that would typically be obtained by combining values from most conventional CV studies of the constituent parts of this good. These values can be contrasted with the stated values for Scheme A derived from our design (values SHA_A, SHPA_A, AHA_A and AHPA_A). Table 25.7 compares measures of the stated values for Scheme A with those obtained by summing first response stepwise values. Inspecting this table we can see that calculated values obtained by summing first response values for the constituent parts of Scheme A substantially overestimate the stated values of the Scheme. Overestimates of mean WTP values range from 30–35 percent, while overestimates of median values range from 33–60 percent. Non-parametric tests confirm that in both cases these differences are highly significant (p < 0.01). Therefore, in this case, single good (part) valuations, added together, result in very substantial and significant overestimates of combined good (whole) values.
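The overestimation percentages just quoted follow directly from the Table 25.7 figures; a quick arithmetic check:

```python
# Arithmetic check on the overestimates implied by Table 25.7, relative to
# the stated Scheme A measures (mean 98.83, median 75.00).
stated_mean, stated_median = 98.83, 75.00
for label, mean, median in [("SHA_H + SPH_P", 128.68, 120.00),
                            ("SHPA_H + SPH_P", 133.39, 100.00)]:
    print(f"{label}: mean +{100 * (mean - stated_mean) / stated_mean:.0f}%, "
          f"median +{100 * (median - stated_median) / stated_median:.0f}%")
# -> mean overestimates of about 30% and 35%; median overestimates of
#    about 60% and 33%.
```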
Summary and conclusions

This study reports an analysis of certain characteristics of values for the reduction of air pollution impacts as estimated using the CV method. We investigated a number of issues and potential anomalies that have been highlighted in the CV and experimental economics literature, including (i) scope sensitivity; (ii) part–whole/substitution effects; (iii) ordering effects; and (iv) visible choice set effects. A novel split sample experimental design allowed investigation of all these anomalies within the context of the same valuation exercise. Values were elicited for three schemes to reduce the impacts of air pollution upon (i) human health (Scheme H); (ii) plants (Scheme P); and (iii) human health and plants (Scheme A, which combined the effects of Scheme H and Scheme P). Stated values were obtained for each of these schemes across the various treatments that define our study design. In addition to these, calculated values were obtained, implicitly assuming the absence of part–whole/substitution effects between the values of Schemes H and P. By comparing stated and calculated values for different treatments and schemes we can test these assumptions and test for the presence of the other anomalies.

Our experiments yielded a number of findings. There was considerable value consistency within stated values for each scheme, suggesting that respondents were
referring to some underlying (although not necessarily theoretically consistent) preferences or valuation process. Furthermore, no anomalies were found regarding sensitivity to the scope of schemes; instead, general evidence of significant scope sensitivity was observed. However, the use of stepwise designs that present participants with low value goods first (i.e. our SPH treatment) appears to generate significantly different values from other approaches. Specifically, when a good that is valued at a relatively low level in other treatments is presented at the beginning of a stepwise list, its value is elevated. This finding could be interpreted either as a theoretically consistent substitution effect (Carson et al., 1998) or as the impact of a theoretically inconsistent "moral satisfaction of giving" to a good cause being attached to first responses (Kahneman and Knetsch, 1992). Disentangling the different potential drivers of an identical effect is problematic and would require a considerable "verbal protocol" extension to our design (Schkade and Payne, 1994). However, the consequent effect upon Scheme H values in the SPH treatment cannot be explained by economic theory, which would expect that the movement from first position in all other treatments to second position in the SPH list would result in a reduction in stated values arising from substitution effects. Instead, as shown in Table 25.6, values for Scheme H from the SPH treatment are significantly higher than those in other treatments. While this is inconsistent with economic theory, it does conform to psychological expectations based on an "anchoring and adjustment" heuristic (Tversky and Kahneman, 1974), wherein the high values stated previously for Scheme P feed through into elevated values for Scheme H stated subsequently.

While the latter finding is of most concern from a theoretical and methodological perspective, perhaps the most important practical finding concerned the clear evidence of significant part–whole/substitution effects. In particular, we found that summing the values obtained from several single good valuation exercises (i.e. corresponding to first responses in our stepwise disclosure designs) to calculate estimates for wider goods risks significantly overestimating the value of the latter wider goods. Policy makers need to be aware of the potential for such relationships when assessing valuation evidence as part of efforts to design appropriate economic instruments for regulatory purposes.

In summary, our findings raise a number of theoretical, methodological and applied issues that need to be borne in mind when undertaking valuation work regarding air pollution externalities. Indeed we might expect that a number of these concerns may well apply to many public good valuation exercises. However, in conclusion we should remember that this was a relatively simple exercise dictated by resource constraints that precluded the use of incentive compatible designs. Therefore its findings should be treated with caution. Nevertheless the fundamental nature of the concerns raised suggests that these issues are worthy of further investigation within a more controlled and incentive compatible framework.
Acknowledgments

The support of the Economic and Social Research Council (ESRC) and the Economics for the Environment Consultancy (EFTEC) is gratefully acknowledged. The authors are grateful to Philip Cooper for detailed comments on an earlier version of this chapter. Remaining errors are the responsibility of the authors alone. Ian Bateman is Professor of Environmental Economics and Senior Research Associate at the Centre for Social and Economic Research on the Global Environment (CSERGE), University of East Anglia, Norwich, NR4 7TJ, United Kingdom and Adjunct Professor at the Department of Agricultural and Resource Economics, University of Waikato Management School, Hamilton, New Zealand.
Notes

1 Arguably individuals may hold values for reducing emissions that have no discernible impact (e.g. colorless, odorless gases that have no effect upon any receptor) if they object to the fact that these are non-natural. For simplicity we assume that an individual's values will be driven by impacts rather than emissions.
2 This is a positive rather than normative theory in which the individual is the sole arbiter of what they feel maximizes their own utility. So, for example, despite the associated health risks, smoking cigarettes can contribute to maximizing a particular individual's utility.
3 Indeed as Varian (1992) notes, "A utility function is often a very convenient way to describe preferences, but it should not be given any psychological interpretation" (p. 95).
4 The terms "part–whole" and "embedding" are employed in the cognitive psychology literature dealing with the perception of visual parts and wholes, where evidence suggests that one hemisphere of the brain is responsible for perception of wholes, while another deals with the parts of an object (Robertson and Lamb, 1991; Tversky and Hemenway, 1984).
5 For example see Kahneman and Knetsch (1992), Smith (1992), Harrison (1992), Carson and Mitchell (1993), Boyle et al. (1994), as well as the interchanges in Hausman (1993) and between Hanemann (1994) and Diamond and Hausman (1994).
6 It is also theoretically possible that goods are viewed as complements. In such an instance, the sum of the parts would add up to less than the value of the whole.
7 By comparison, in an inclusive list goods are presented as additions to (or subtractions from) any good(s) presented previously in that list. Carson and Mitchell (1995) show that in such lists, since the value stated by a respondent for any given good is dependent upon their current endowment of private and public goods, the value for a good as the first good presented to an individual will be different from the value stated when the same good appears later in the list. Such sequencing effects are an expected prediction of economic theory (Carson and Mitchell, 1995; Randall and Hoehn, 1996), and can apply to both nested and non-nested goods (for example see Carson et al., 1998).
8 Although over-statement is an equally plausible strategy (see Bateman et al., 2004).
9 Note that the lower end of the distribution of calculated values for Scheme P includes a number of negative values, derived from cases where WTP for Scheme H exceeds that for Scheme A. This may be a cause for some concern and possible explanations for such responses are considered in Bateman et al. (2002).
10 Where the two samples were related a Wilcoxon test was employed; for independent samples a Mann-Whitney U-test was employed.
11 Assuming that schemes are seen as independent.
12 Only if significant differences exist between treatments can our analysis then examine whether there are significant part–whole effects, visible choice set effects, ordering effects, or event-splitting effects.
13 Non-parametric tests clearly fail to reject the hypothesis of no significant difference between measures included within this category (for SHAA vs. SHPAA p = 0.896; for SHAH vs. SHPAH p = 0.888).
14 Non-parametric tests clearly fail to reject the hypothesis of no significant difference between measures included within this category (for AHAA vs. AHPAA p = 0.888; for AHAH vs. AHPAH p = 0.473).
15 Outlier sensitivity analysis confirmed that parametric tests are sensitive to outliers while non-parametric tests are stable. Details are available from the first listed author.
16 This result becomes significant if the three highest values stated by respondents are omitted (p = 0.033).
17 We also examined the possibility that the list length seen by respondents in the initial visible choice set may impact upon values for other schemes even after controlling for position within a list. Some, albeit weak, evidence for a list length effect can be gleaned by examining the stated values for Scheme H obtained from advance disclosure treatments AHA and AHPA. Here Scheme H is always valued first and the only difference between the treatments is in terms of list length. From Table 25.3 we can see that mean stated WTP for Scheme H from treatment AHA (i.e. AHAH) is £81.41 while the comparable value for Scheme H from treatment AHPA is £72.18. This suggests that values might decline as list length increases. However, these values are not significantly different and yield equal medians. Nevertheless, we believe that this might be a fertile area for future research.
References
Andreoni, J. (1990) Impure altruism and donations to public goods, Economic Journal, 100(401): 464–477.
Arrow, K., Solow, R., Portney, P.R., Leamer, E.E., Radner, R. and Schuman, H. (1993) Report of the NOAA Panel on Contingent Valuation, Resources for the Future, Washington, DC.
Bateman, I.J. and Langford, I.H. (1997) Budget constraint, temporal and ordering effects in contingent valuation studies, Environment and Planning A, 29(7): 1215–1228.
Bateman, I.J. and Willis, K.G. (eds) (1999) Valuing Environmental Preferences: Theory and Practice of the Contingent Valuation Method in the US, EU, and Developing Countries, Oxford University Press, Oxford, p. 645.
Bateman, I.J., Munro, A., Rhodes, B., Starmer, C. and Sugden, R. (1997a) Does part–whole bias exist? An experimental investigation, Economic Journal, 107(441): 322–332.
Bateman, I.J., Munro, A., Rhodes, B., Starmer, C. and Sugden, R. (1997b) A test of the theory of reference-dependent preferences, Quarterly Journal of Economics, 112(2): 479–505.
Bateman, I.J., Cole, M., Cooper, P., Georgiou, S., Hadley, D. and Poe, G.L. (2004) On visible choice sets and scope sensitivity, Journal of Environmental Economics and Management, 47: 71–93.
Bateman, I.J., Carson, R.T., Day, B., Hanemann, W.M., Hanley, N., Hett, T., Jones-Lee, M., Loomes, G., Mourato, S., Özdemiroğlu, E., Pearce, D.W., Sugden, R. and
Swanson, J. (2002) Economic Valuation with Stated Preference Techniques: A Manual, Edward Elgar Publishing, Cheltenham.
Boyle, K., Desvousges, W.H., Johnson, F.R., Dunford, R.W. and Hudson, S.P. (1994) An investigation of part–whole biases in contingent valuation studies, Journal of Environmental Economics and Management, 27(1): 64–83.
Boyle, K.J., Welsh, M.P. and Bishop, R.C. (1993) The role of question order and respondent experience in contingent-valuation studies, Journal of Environmental Economics and Management, 25(1): S-80–S-99.
Carson, R.T. (1997) Contingent valuation surveys and tests of insensitivity to scope, in Kopp, R.J., Pommerehne, W.W. and Schwarz, N. (eds) Determining the Value of Non-Marketed Goods: Economic, Psychological, and Policy Relevant Aspects of Contingent Valuation Methods, Kluwer Academic Publishers, Boston.
Carson, R.T. and Mitchell, R.C. (1993) The issue of scope in contingent valuation studies, American Journal of Agricultural Economics, 75(5): 1265–1267.
Carson, R.T. and Mitchell, R.C. (1995) Sequencing and nesting in contingent valuation surveys, Journal of Environmental Economics and Management, 28(2): 155–173.
Carson, R.T., Flores, N.E. and Hanemann, W.M. (1998) Sequencing and valuing public goods, Journal of Environmental Economics and Management, 36(3): 314–323.
Carson, R.T., Flores, N.E. and Meade, N.F. (2001) Contingent valuation: controversies and evidence, Environmental and Resource Economics, 19(2): 173–210.
Cubitt, R.P. and Sugden, R. (2001) On money pumps, Games and Economic Behavior, 37(1): 121–160.
Diamond, P.A. and Hausman, J. (1994) Contingent valuation: is some number better than no number? Journal of Economic Perspectives, 8(4): 43–64.
Ferraro, P.J., Rondeau, D. and Poe, G.L. (2003) Detecting other-regarding behavior with virtual players, Journal of Economic Behavior and Organization, 51(1): 99–109.
Goodstein, E.S. (1995) Economics and the Environment, Prentice Hall, Englewood Cliffs, NJ.
Hanemann, W.M. (1994) Valuing the environment through contingent valuation, Journal of Economic Perspectives, 8(4): 19–43.
Hanemann, W.M. and Kanninen, B. (1999) The statistical analysis of discrete-response CV data, in Bateman, I.J. and Willis, K.G. (eds) Valuing Environmental Preferences: Theory and Practice of the Contingent Valuation Method in the US, EU, and Developing Countries, Oxford University Press, Oxford, pp. 302–442.
Harrison, G.W. (1992) Valuing public goods with the contingent valuation method: a critique of Kahneman and Knetsch, Journal of Environmental Economics and Management, 23(3): 248–257.
Hausman, J.A. (ed.) (1993) Contingent Valuation: A Critical Assessment, North-Holland, Amsterdam.
Hoehn, J.P. (1983) The benefits-costs evaluation of multi-part public policy: a theoretical framework and critique of estimation methods, PhD dissertation, University of Kentucky.
Hoehn, J.P. and Randall, A. (1982) Aggregation and disaggregation of program benefits in a complex policy environment: a theoretical framework and critique of estimation methods, paper presented at the annual meetings of the American Agricultural Economics Association, Logan, Utah.
Humphrey, S.J. (1995) Regret-aversion or event-splitting effects: more evidence under risk and uncertainty, Journal of Risk and Uncertainty, 11(3): 263–274.
Humphrey, S.J. (1996) Do anchoring effects underlie event-splitting effects? An experimental test, Economics Letters, 51(3): 303–308.
Kahneman, D. and Knetsch, J.L. (1992) Valuing public goods: the purchase of moral satisfaction, Journal of Environmental Economics and Management, 22(1): 57–70.
Mitchell, R.C. and Carson, R.T. (1989) Using Surveys to Value Public Goods: The Contingent Valuation Method, Resources for the Future, Washington, DC.
Randall, A. and Hoehn, J.P. (1996) Embedding in market demand systems, Journal of Environmental Economics and Management, 30(3): 369–380.
Randall, A., Hoehn, J.P. and Tolley, G.S. (1981) The structure of contingent markets: some empirical results, paper presented at the Annual Meeting of the American Economic Association, Washington, DC.
Robertson, L.C. and Lamb, M.R. (1991) Neuropsychological contributions to theories of part/whole organization, Cognitive Psychology, 23(2): 299–330.
Schkade, D.A. and Payne, J.W. (1994) How people respond to contingent valuation questions: a verbal protocol analysis of willingness to pay for an environmental regulation, Journal of Environmental Economics and Management, 26(1): 88–109.
Smith, V.K. (1992) Comment: arbitrary values, good causes, and premature verdicts, Journal of Environmental Economics and Management, 22(1): 71–79.
Starmer, C. and Sugden, R. (1993) Testing for juxtaposition and event-splitting effects, Journal of Risk and Uncertainty, 6(3): 235–254.
Svedsater, H. (2000) Contingent valuation of global environmental resources: test of perfect and regular embedding, Journal of Economic Psychology, 21(6): 605–623.
Thaler, R. (1985) Mental accounting and consumer choice, Marketing Science, 4(3): 199–214.
Tolley, G.S., Randall, A., Blomquist, G., Fabian, R., Fishelson, G., Frankel, A., Hoehn, J., Krumm, R. and Mensah, E. (1983) Establishing and valuing the effects of improved visibility in the Eastern United States, Interim Report to the US Environmental Protection Agency.
Tversky, A. and Kahneman, D. (1974) Judgment under uncertainty: heuristics and biases, Science, 185(4157): 1124–1130.
Tversky, B. and Hemenway, K. (1984) Objects, parts, and categories, Journal of Experimental Psychology: General, 113(2): 169–197.
Varian, H.R. (1992) Microeconomic Analysis, 3rd edn, W.W. Norton & Company, New York.
26 Forecasting hypothetical bias
A tale of two calibrations

F. Bailey Norwood, Jayson L. Lusk, and Tracy Boyer
Stated preference methods such as contingent valuation and conjoint analysis have become standard tools for economic and public policy analysis. In cases where policy makers are interested in estimating the value of non-market goods or those with passive use values, stated preference methods are sometimes the only tools available. The validity of stated preference methods has been debated for several decades in the contingent valuation literature. While not all researchers agree, the general consensus is that most of the arguments against stated preference methods can be avoided by careful design and implementation (see Carson et al., 2001). However, even the most ardent supporters of stated preference methods would attest to their disadvantages. Perhaps the greatest drawback of stated preference techniques is their hypothetical nature. People can easily say they will pay a certain amount for a good, but often find giving up actual money to be more difficult. The tendency to overstate one's willingness to pay in a hypothetical setting is referred to as "hypothetical bias" and has been found in close to 90 percent of studies comparing hypothetical to non-hypothetical values (Harrison and Rutstrom, 2002). Because hypothetical bias can lead to an overestimation of a good's value, the NOAA panel on contingent valuation recommended that values estimated from hypothetical questions simply be divided by two. The level of hypothetical bias can be profound, and has been measured as high as 300 percent of a good's true value (List and Gallet, 2001).1
For years economists have sought to explain hypothetical bias and to discover methods of removing it from stated preference surveys. Increasingly, researchers are using calibration techniques to remove hypothetical bias from stated values. Various methods for calibrating hypothetical values to real values have been proposed by Champ and Bishop (2001), Fox et al. (1998), Hofler and List (2004), and List et al. (1998), to name just a few. However, virtually all calibration studies suffer from a failure to use theory to substantiate why individuals might overstate their values in a hypothetical setting. That is, published studies tend to report successful attempts to develop calibration methods without theoretical or behavioral evidence that the methods should be successful. Since calibration tests that succeed in removing hypothetical bias are more publishable than tests that fail,
readers are left wondering whether the proposed calibration methods would succeed in repeated experiments. Practitioners and policy makers would be more confident in calibration techniques if theoretical or behavioral reasons were provided as to why a particular calibration technique should perform well.
This study argues that hypothetical bias is partially due to the fact that people are uncertain about the utility they will derive from purchasing a good. We refer to this as self-uncertainty. Two economic models are presented that suggest self-uncertainty translates into hypothetical bias through risk aversion and commitment costs. Therefore, if self-uncertainty can be measured, it can be used to predict and remove hypothetical bias.
It seems plausible that individuals possessing self-uncertainty are able to express this uncertainty when asked. Indeed, in stated preference experiments, many calibration techniques ask subjects how certain they are that they would pay their stated amount if a real purchasing opportunity arose. The self-reported uncertainty is then used to adjust hypothetical values downward, and is referred to in this chapter as certainty-calibration. Certainty-calibration has been shown to improve inferences empirically, but to our knowledge this is the first study to offer a theoretical explanation for why higher levels of uncertainty lead to hypothetical bias, as opposed to just greater statistical noise. Evidence supporting this hypothesis is provided from a classroom experiment.
Another calibration technique is the frontier-calibration developed by Hofler and List (2004). This calibration operates by estimating bids from hypothetical auctions using a stochastic frontier function, and assuming deviations from this stochastic frontier are identically equal to hypothetical bias. Like certainty-calibration, frontier-calibration has performed well empirically, but no theoretical motivation has been offered. We show that, under plausible assumptions, frontier-calibration, like certainty-calibration, may be an indirect measure of self-uncertainty – both calibrations have a similar theoretical foundation that has so far been ignored. Since the calibrations share a theoretical foundation, one may wonder if they could be combined, producing a hybrid-calibration. Indeed, a hybrid-calibration is constructed for auction bids, and non-parametric bootstraps are used to show that combining calibrations improves predictions of true values. However, all calibrations tested under-predict true values. This suggests researchers can combine calibrated and uncalibrated values to construct a lower and upper bound to true values.
This chapter is organized as follows. We next describe the certainty- and frontier-calibrations, and what we feel are plausible theoretical justifications for their empirical success. An experiment used to collect hypothetical and real auction bids and to elicit information on the source of self-uncertainty is then described. Then a method combining the two calibrations to predict real bids from hypothetical bids is illustrated. The chapter ends with a summary and concluding comments.
Certainty-calibration

The certainty-calibration has been used exclusively for dichotomous choice questions and has taken three different forms in the literature. Champ and Bishop (2001) used the method to calibrate stated values for wind energy with actual payments. The authors asked respondents if they would like to purchase a particular amount of wind energy at a particular price. For half of the respondents this was a hypothetical question, while for the other half the offer was real. In the hypothetical setting, if the respondent indicated "yes" to the hypothetical purchase opportunity, the following certainty question was posed:

On a scale of 1 to 10 where 1 means "very uncertain" and 10 means "very certain," how certain are you that you would purchase the wind power offered in Question 1 if you had the opportunity to actually purchase it?

Champ and Bishop then used answers to the certainty question to calibrate the "yes/no" responses to the hypothetical question. Not surprisingly, the percentage of "yes" responses to the hypothetical question was larger than the percentage of "yes" responses to the real offers, indicating a hypothetical bias. However, by assuming that only those who checked eight, nine, or ten on the certainty question would actually pay the amount asked (i.e. after changing the "yes" responses to "no" if the answer to the certainty question was less than the threshold of eight), the distribution of stated values was indistinguishable from the distribution of actual values.
Champ and Bishop's recoding scheme ensured that calibrated values would be lower than stated values, and logically some threshold on the certainty question scale had to exist that would make hypothetical and true values statistically indistinguishable. Even if self-uncertainty and hypothetical bias were unrelated, there would still be some threshold for which calibrated values would equal true values. More confidence could be given to the results if the threshold of eight had been chosen a priori, and then used to calibrate and predict true values. However, it should be noted that some support for the eight threshold is given by the fact that there are demographic differences between respondents on either side of the threshold.
Johannesson et al. (1999) provide another example where the certainty-calibration appears to provide unbiased estimates of true values, but the reliability of the certainty-calibration remains suspect because they used in-sample as opposed to out-of-sample predictions. Subjects were asked if they would purchase a particular good at a particular price, and then, if they answered "yes," they were presented with a certainty question like that in Champ and Bishop. A follow-up question then allowed the subjects actually to purchase the good at that price. Not surprisingly, the percentage of "yes" responses to the hypothetical purchase opportunity was larger than for the real purchase opportunity, indicating a hypothetical bias. A probit regression was then used to predict the probability of a "yes–yes" response (yes to both the hypothetical and the real
purchase opportunity) as opposed to a "yes–no" response, based on a subject's answer to the certainty question. The certainty-calibration by Johannesson et al. was then performed as follows. If a subject answered "yes" to the hypothetical question, but the predicted probability of a "yes–yes" response from the probit model was less than 50 percent, the "yes" answer to the hypothetical question was changed to "no." After recoding the data, the percentages of "yes" responses in the hypothetical and real samples were statistically indistinguishable. Thus, they conclude based on within-sample calibration that this certainty-calibration provides unbiased predictions of real values.
The parameter associated with the certainty question in the probit regression by Johannesson et al. was significantly positive, indicating that self-uncertainty indeed influenced hypothetical bias. However, it is not surprising that the probit regression provided an unbiased prediction of the true percentage of "yes–yes" responses, because it was an in-sample prediction: in the probit estimation, the parameters were chosen to provide (asymptotically) unbiased predictions. Though their results help validate the certainty-calibration, their arguments would be more compelling if the probit regression had been used to predict "yes–yes" responses from a different pool of subjects, i.e. out-of-sample predictions would provide a much better test of calibration performance than in-sample predictions.2
There are some studies that test calibration performance using out-of-sample predictions. Blumenschein et al. (1998) used a certainty question where a subject who stated "yes" to the hypothetical purchase could only select "definitely sure" or "probably sure" rather than a value on the one-to-ten scale. After recoding "yes" answers accompanied by "probably sure" to "no," answers to the hypothetical and real dichotomous choice questions were statistically indistinguishable. The salient feature of this study is that the good calibration performance was not a tautology; the test was not designed so that it must predict well. This was because answers to the real purchase opportunity did not enter the calibration design, and therefore the results were out-of-sample predictions whose performance, a priori, had just as much chance to fail as to succeed.
Only when we turn to out-of-sample predictions does the certainty-calibration appear imperfect. The Johannesson et al. (1998) experiment used the same methods as Blumenschein et al., but found the certainty-calibration under-predicted the number of true "yes–yes" responses. Norwood's application of the one to ten certainty scale to choice experiments also found the certainty-calibration to under-predict true values (Norwood, 2005). While there is some variation in the performance of the certainty-calibration, across all studies a common feature is that calibrated values will either be an unbiased predictor of true values or will underestimate true values. This is useful information, because it allows researchers to use calibrated and uncalibrated values as lower and upper bounds for true values. For this reason, we feel the certainty-calibration has thus far been an empirical success. This is backed by the meta-analysis of Little and Berrens (2004), who find hypothetical bias is lower using variants of the certainty-calibration. But the question of why it should work well empirically still remains unanswered.
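The recoding rules above are mechanically simple. The following is a minimal sketch of a Champ-and-Bishop-style certainty-calibration in Python; the data, variable names, and the default threshold of eight are illustrative assumptions rather than code from any of the studies cited.

```python
# Minimal sketch of a Champ-and-Bishop-style recoding rule.
# In their design, only respondents answering "yes" to the hypothetical
# question were asked the 1-10 certainty question.
def certainty_calibrate(said_yes, certainty, threshold=8):
    """Recode a hypothetical 'yes' to 'no' unless the respondent's
    certainty rating meets the threshold (8, 9, or 10 here)."""
    return [yes and (c >= threshold) for yes, c in zip(said_yes, certainty)]

said_yes  = [True, True, True, False, True]  # answers to the hypothetical question
certainty = [9, 4, 8, 0, 10]                 # 1-10 scale (0 = question not asked)
print(certainty_calibrate(said_yes, certainty))
# [True, False, True, False, True] -- calibrated "yes" responses can only shrink
```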
At this point, it would be prudent to ask why greater self-uncertainty is positively correlated with greater hypothetical bias. The essential question that warrants explanation is why self-uncertainty causes real bids to be lower than hypothetical bids, instead of just increasing the volatility of bids. Two explanations, risk aversion and commitment costs, are offered below. When risk aversion and/or commitment costs are present and individuals are uncertain over the utility they will derive from a good, hypothetical bias results. While these two explanations do not exhaust the set of possible theories, and while no conclusive test of the theories is conducted, they are a needed first step in capturing hypothetical bias within a theoretical framework.

Self-uncertainty and risk aversion

It can be argued that self-uncertainty is positively related to hypothetical bias when subjects are risk averse and make decisions that maximize expected utility. The pivotal assumption is that subjects are less risk averse in hypothetical situations than in real situations, an intuitive notion backed by the experimental results of Holt and Laury (2002). First consider an intuitive argument. In situations where subjects are uncertain about the utility they will derive from purchasing a good, this uncertainty translates into a lower value. If risk aversion is indeed higher in real settings than in hypothetical settings, then this uncertainty has a larger impact in depressing values in real settings, and subjects will state a higher value in hypothetical than in real settings.
To illustrate mathematically, suppose that X is a good whose utility is given by the function U(X, Y), where Y is income. Utility is assumed increasing and concave in all arguments. The level of happiness derived from the good is uncertain, where this uncertainty is projected through the level of X consumed. The level of X will equal X − K with probability P and X + K with probability 1 − P, where K is an arbitrary constant. Approximating U(X + K, Y) with a second-order Taylor series expansion around X − K, expected utility is given by
EU = P·U(X − K, Y) + (1 − P)[U(X − K, Y) + 2K·U_X + 2K²·U_XX].   (26.1)
The derivatives U_X and U_XX are evaluated at (X − K, Y) throughout this section. Assuming individuals maximize expected utility, willingness to pay for the good X is the value WTP that satisfies
U(0, Y − WTP) = P·U(X − K, Y) + (1 − P)[U(X − K, Y) + 2K·U_X + 2K²·U_XX].   (26.2)

Note that WTP increases as the expected utility of X rises. If we measure risk aversion by the coefficient of absolute risk aversion r = −U_XX/U_X (Laffont, 1995), equation (26.1) can be written as
EU = U(X − K, Y) + (1 − P)·2K·U_X·(1 − rK).   (26.3)
As seen from equation (26.3), the greater the risk aversion coefficient r, the lower the expected utility and the lower the submitted bid. As mentioned earlier, experimental evidence exists that individuals are more risk averse in real than in hypothetical situations (Holt and Laury, 2002). To reflect this, let r_H and r_R denote the risk aversion coefficients in hypothetical and real situations, respectively, where r_H < r_R. It is assumed that U_X is identical for real and hypothetical situations, but the greater curvature in utility due to greater risk aversion is revealed through a larger absolute value of U_XX in real versus hypothetical settings. The expected utility difference for a good in hypothetical minus real situations is then
EU^H − EU^R = (1 − P)·2K²·U_X·(r_R − r_H) > 0.   (26.4)
Since individuals calculate a higher expected utility in hypothetical situations, their hypothetical bids will tend to be larger. The next step is to notice that the difference in expected utility levels can be partly explained by the degree of uncertainty over the happiness from the good. Greater uncertainty over the enjoyment from the good can be seen as an increase in K, which influences hypothetical and real expected utility as

∂(EU^H − EU^R)/∂K = (1 − P)·4K·U_X·(r_R − r_H) > 0.

Thus, greater uncertainty about the utility derived from the good leads to greater differences in hypothetical and real utility, and should lead to greater differences in hypothetical and real bids. The end result is greater hypothetical bias. It is plausible that individuals can express this uncertainty by indicating a lower value on the one to ten certainty scale. If the above model is correct, a low certainty level should then be associated with greater hypothetical bias, allowing researchers to calibrate stated values accordingly.
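A short numeric sketch makes this comparative static concrete. The snippet below simply evaluates equations (26.3) and (26.4) in Python; every parameter value (P, K, U_X, and the two risk aversion coefficients) is a hypothetical choice for illustration, not a number taken from the chapter.

```python
# Numeric sketch of equations (26.3) and (26.4); all parameter values are
# hypothetical choices for illustration only.
def expected_utility(K, P, U_X, r, base=10.0):
    """Equation (26.3): EU = U(X-K,Y) + (1-P)*2K*U_X*(1 - r*K),
    with U(X-K,Y) replaced by an arbitrary constant 'base'."""
    return base + (1 - P) * 2 * K * U_X * (1 - r * K)

P, U_X = 0.5, 1.0
r_hyp, r_real = 0.2, 0.5  # less risk averse hypothetically (Holt and Laury, 2002)
for K in (0.5, 1.0, 1.5):
    gap = expected_utility(K, P, U_X, r_hyp) - expected_utility(K, P, U_X, r_real)
    # gap equals (1-P)*2*K^2*U_X*(r_R - r_H), equation (26.4), and grows with K
    print(f"K = {K}: EU_H - EU_R = {gap:.3f}")
```

As K (the spread in utility outcomes) rises, the printed gap widens, mirroring the derivative above.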
Commitment costs and option value

A wealth of literature has begun to appear using an option value paradigm to guide investment decisions (e.g. Calcagnini, 1997; Dixit and Pindyck, 1994; Kandel and Pearson, 2002; Majd and Pindyck, 1987). The key insight is that optimal investment decisions depend on the degree of uncertainty about the value of an investment, the degree to which more can be learned about the investment's value in the future, and the reversibility of the decision. Recently Zhao and Kling (2001, 2004) expanded this approach to consumer decision making. In particular, assume an individual does not know their value for a good with certainty, but knows the distribution of possible values. Also, assume the individual expects to learn more about their value for the good in the future, perhaps finding out exactly what their value is. If the individual is forced to state a willingness to pay
today and give up their opportunity to wait and learn more about their value, they will state a willingness to pay of WTP_1. By contrast, consider an individual who has perfect certainty about the value of a good. In this case, there is no value to waiting to gain more information; such an individual would state a willingness to pay today of WTP_2. Zhao and Kling show that WTP_2 > WTP_1. They denote the difference between WTP_2 and WTP_1 as a commitment cost, i.e. CC = WTP_2 − WTP_1. The more uncertain an individual is about their value for a good, the larger the cost of commitment.
Now we describe how the commitment cost issue is related to self-uncertainty. It seems quite plausible that when individuals make hypothetical value statements, they ignore the cost, CC, of foregoing future learning opportunities. Evidence for this claim is found later in the chapter. However, if a decision task is non-hypothetical, it is likely individuals will take CC into consideration, as this is a cost that must actually be incurred. If these assumptions are valid, uncertainty can be directly linked to hypothetical bias. For individuals who are certain about their value for a good, WTP_2 = WTP_1 and they state identical values in hypothetical and non-hypothetical settings. However, as self-uncertainty increases, CC grows, and as a result, hypothetical bias becomes more pronounced.
Next, we turn to a more recent calibration method, so recent that it has only been empirically tested once. At first glance, there seems no obvious reason why it should work well; the assumptions behind the calibration appear ad hoc, making its one empirical success seem fortuitous. Yet we contend it has a theoretical justification, one identical to that of the certainty-calibration.
Frontier-calibration

The frontier-calibration, designed by Hofler and List (2004) for use in hypothetical Vickrey Auctions, assumes a particular statistical structure governing hypothetical and real bids. The process generating actual bids in a Vickrey Auction for a private good for person i is assumed to follow
Y_i^A = X_iβ + v_i

where Y_i^A is the actual (non-hypothetical) bid or true value, v_i is normally distributed with zero mean, X_i is a vector of demographics, and β is a conformable parameter vector. A hypothetical bias exists when people overstate their true bid in hypothetical questions. Assume this bias can be depicted by the non-negative random error µ_i = Y_i^H − Y_i^A, where Y_i^H is a hypothetical bid. The process driving hypothetical bids can then be modeled as
Y_i^H = X_iβ + v_i + µ_i = X_iβ + ε_i.
In stated preference studies researchers can only observe the error term ε_i = v_i + µ_i. However, by assuming particular distributions for v_i and µ_i, one can obtain an estimate of µ_i based on the estimate of ε_i. It is common to assume that v_i ~ N(0, σ²) and that µ_i is half-normal (Beckers and Hammond, 1987; Hofler and List, 2004; Reifschneider and Stevenson, 1991; Kumbhakar and Knox-Lovell, 2000; Jondrow et al., 1982), making the expected value of µ_i increasing in ε_i (see Jondrow et al., 1982). Therefore, a larger bid residual, ε_i, implies a larger predicted hypothetical bias. Thus, the frontier-calibration works by assuming hypothetical bias is positively correlated with conditional bid residuals (residuals conditional on the value of X_i). The steps to obtaining the calibrated bids are as follows.

Step (A). Estimate a hypothetical bid function using a stochastic frontier function Y_i^H = X_iβ + v_i + µ_i, where v_i ~ N(0, σ²) and µ_i is a non-negative random variable.

Step (B). For each individual, calculate the expected value of µ_i conditional on the observed residual ε_i = v_i + µ_i, denoted E(µ_i | ε_i).

Step (C). Obtain a frontier-calibrated bid Ŷ_i by calculating

Ŷ_i = [X_iβ̂ / (X_iβ̂ + E(µ_i | ε_i))]·Y_i^H
where β̂ is the maximum likelihood estimate of β.3 The above model is constructed such that hypothetical bias µ_i must be a component of the observed error ε_i. However, it is just as easy to construct a model where this is not the case. If instead the frontier is defined by hypothetical bids, with real bids falling below it, then hypothetical and real bids could be stated as given below:
Y_i^H = X_iβ + v_i = X_iβ + ε_i
Y_i^A = X_iβ + v_i − µ_i.   (26.5)
In this case, ε_i = v_i and conditional bid residuals are uncorrelated with hypothetical bias.
Are bid residuals correlated with hypothetical bias? A scientific answer requires first a theoretical explanation of how they could be correlated, followed by an empirical test. The theoretical explanation follows naturally from earlier in the chapter, where it was shown that individuals who are less certain about the utility they will obtain from the good will display a greater hypothetical bias due to risk aversion and commitment costs. So long as this self-uncertainty is not measured, it is denoted by a random variable µ_i and creates a wedge between hypothetical and real
bids. Then, if we assume that neither X_iβ nor v_i is correlated with self-uncertainty, conditional bid residuals must be correlated with both self-uncertainty and hypothetical bias. Variables in X_i are usually demographic in nature, and to our knowledge there is no theoretical or empirical evidence that hypothetical bias is demographic-specific. Empirical evidence confirming a positive relationship between conditional bid residuals and hypothetical bias is provided later in the chapter. Individuals exhibiting a larger hypothetical bias will tend to submit larger bids compared to those of similar demographics. This information can then be exploited as an indirect measure of hypothetical bias for calibration.
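To make Steps (A)–(C) concrete, the sketch below computes frontier-calibrated bids in Python, assuming the frontier parameters have already been estimated. It uses the normal/half-normal conditional expectation of Jondrow et al. (1982) presented later in the chapter; the data and parameter values are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Steps (B) and (C) of the frontier-calibration, assuming beta, sigma_v (noise)
# and sigma_u (bias) were already estimated in Step (A). E(mu_i | eps_i) is the
# Jondrow et al. (1982) formula for a cost-type frontier (eps = v + mu).
def frontier_calibrate(Y_hyp, X, beta, sigma_v, sigma_u):
    eps = Y_hyp - X @ beta                     # Step (B): bid residuals
    sig = np.hypot(sigma_v, sigma_u)           # sqrt(sigma_v^2 + sigma_u^2)
    lam = sigma_u / sigma_v
    sig_star = sigma_u * sigma_v / sig
    z = eps * lam / sig
    E_mu = sig_star * (norm.pdf(z) / norm.cdf(z) + z)  # E(mu_i | eps_i) >= 0
    Xb = X @ beta
    return (Xb / (Xb + E_mu)) * Y_hyp          # Step (C): scale bids downward

# Illustrative data: three bidders, an intercept, and one dummy variable.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 0.0]])
Y_hyp = np.array([30.0, 18.0, 12.0])
print(frontier_calibrate(Y_hyp, X, np.array([15.0, -4.0]), sigma_v=4.0, sigma_u=10.0))
```

Bidders with large positive residuals receive the largest predicted bias and hence the largest downward adjustment, which is exactly the logic described above.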
Experiment description

The data used in this study are from an auction designed to mimic the Hofler and List field experiment. The subjects were students from two undergraduate agricultural economics classes, both agricultural marketing and prices classes taught in the spring and fall of 2004. The experiment format was identical for both classes. Students were shown a portable lawn chair with the university colors and logo and were asked to submit sealed hypothetical bids, bidding as if the auction were real. The hypothetical auction was described as a Vickrey Auction, and students were given examples of the Vickrey Auction process prior to the experiment. Students were encouraged to ask questions about the auction to ensure that the rules were fully understood, and were verbally instructed to participate as if the bidding process were real. After writing their hypothetical bids, they were asked to complete a certainty question asking how certain they were, on a scale of one to ten, that they would submit a bid equal to or greater than their hypothetical bid if the auction were real. The certainty question was similar to the one used by Champ and Bishop (2001).
Comparing hypothetical to non-hypothetical bids allows a direct measurement of hypothetical bias, so an actual auction followed the first experiment. After the hypothetical bids were collected, the students were asked to submit real bids for a Vickrey Auction. The students were told that the auction winner would have to pay the second highest bid amount by cash or check within one week, and were asked to sign a form indicating that they understood this second auction was real. The two auctions were held at the beginning of class, so there was no reason for the students to hurry through the experiment. Table 26.1 provides the descriptive statistics of the experiment. Hypothetical bias was pervasive; the ratio of hypothetical to real bids was greater than two for 25 percent of subjects.

Table 26.1 Descriptive statistics of experiment

Variable                                           Mean (standard deviation)
Hypothetical bid                                   $24.65 (17.38)
Actual bid                                         $15.07 (9.19)
Answer to certainty question (scale of 1 to 10)    8.00 (1.96)
Males                                              58.00%
Number of participants in spring 2004              35
Number of participants in fall 2004                48
Number of participants                             83

The experiment differed between the two classes in only one way. For students in the fall 2004 class, we also asked them to describe verbally how they thought we should interpret their answer to the certainty question. The exact question was: If you circled a number less than 10, indicating some uncertainty over how you would bid in a real auction, in your own words, could you please
explain why you were uncertain over how you would bid in a real auction? The responses are shown in the Appendix to this chapter.
Numerous times students reported they were unsure about whether they really wanted the chair and what it was really worth to them. This is the same as being uncertain over the utility they will derive from the good, which, as argued earlier, should lead to greater hypothetical bias if agents are risk averse or if commitment costs exist. Many students also referenced the future. Some said that in a real auction, which might take place in a different time period, they might have more or less money. Some indicated they were interested in shopping around to see what the chair sold for elsewhere. Since chairs sold in surrounding stores are substitutes for chairs in the auction, there is an option value to waiting to gain more information, which lends credence to the commitment cost explanation.
Not all responses reference uncertain utility as the reason for submitting a certainty rating less than ten. It is evident that some students paid more attention to how other students bid than to their true value for the chair. However, after perusing the responses, we feel there is some evidence that low certainty ratings reflect uncertainty over the utility buyers will obtain from the good. We now look at the empirical evidence confirming that lower certainty ratings do lead to greater hypothetical bias and greater conditional bid residuals, providing a theoretical foundation for the certainty- and frontier-calibrations.
Self-uncertainty, hypothetical bias and bid residuals

Hypothetical bias may arise from confusion, a desire to answer the question quickly, a strategic maneuver to free ride, or a desire to manipulate the outcome. Bias from confusion and anxious participants can be avoided by a well-designed survey or experiment. This chapter focuses on hypothetical bias in stated preference studies dealing with private goods, so free riding is irrelevant, though it is certainly a major factor in other settings such as the valuation of public goods. Here, we test the hypothesis that hypothetical bias is partially the result of self-uncertainty as indicated in response to the certainty question described above.
A test is also conducted for a positive correlation between conditional bid residuals and self-uncertainty. Hypothetical bias is calculated by subtracting actual bids from hypothetical bids. The bias is then regressed against answers to the certainty scale question, a male dummy variable, and a dummy variable for the fall 2004 class. The results shown in Table 26.2 demonstrate a positive relationship between self-uncertainty and hypothetical bias: a lower value on the certainty question is associated with greater hypothetical bias. Although this finding does not imply that self-uncertainty is the only cause of hypothetical bias, it is a significant component. This corroborates the Johannesson et al. (1999) finding that self-uncertainty provides information on hypothetical bias, and that certainty-calibration is not simply an arbitrary method of reducing hypothetical values.
Next, we test the underlying assumption of the frontier-calibration that bid residuals are positively correlated with hypothetical bias. First, hypothetical bids are estimated as Y_i^H = β_0 + β_1(male dummy variable) + β_2(fall semester dummy variable) + η_i using ordinary least squares, where η_i is an error term. While η_i is not a non-negative error term like those employed in frontier models, a higher value still implies a greater conditional bid residual; thus, a larger value of η_i implies a larger value of ε_i in a frontier model. Table 26.2 shows the result of a regression of hypothetical bias on bid residuals, and the correlation is statistically significant and positive. While hypothetical bids enter both the dependent and independent variables, this correlation is not a tautology.
Table 26.2 Relationship between certainty question, hypothetical bid residuals and hypothetical bias

                                         Dependent variable:            Dependent variable:     Dependent variable:
Independent variables                    hypothetical bias              hypothetical bias       hypothetical bid residual^a
                                         (hypothetical bid – real bid)
Parameter estimates (t-statistics in parentheses)
Constant                                 34.9756 (7.17)***              9.2664 (9.18)***        15.4088 (2.00)**
Value of hypothetical bid residual       –                              0.7532 (12.66)***       –
Answer to certainty question             –2.6073 (–3.18)***             –                       –1.9261 (–2.06)***
Male dummy variable                      –1.9609 (–0.60)                –                       –
Dummy variable for fall semester class   –8.6491 (–2.65)***             –                       –
Coefficient of determination             0.17                           0.66                    0.05

Notes
*** and ** indicate significance at the 1 percent and 5 percent levels, respectively.
a Hypothetical bid residuals are calculated as the residuals from the regression: predicted hypothetical bids = 29.67 – 4.42 (male dummy variable) – 5.69 (fall semester dummy variable). Only the intercept was significant at the 10 percent level.
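The two regressions behind Table 26.2 are straightforward to replicate. The following Python sketch runs them on simulated stand-in data (the classroom bids themselves are not reproduced here) using ordinary least squares via numpy; all variable names and the data-generating process are our own illustrative assumptions.

```python
import numpy as np

# Simulated stand-ins for the classroom data: bias rises as certainty falls.
rng = np.random.default_rng(0)
n = 83
certainty = rng.integers(1, 11, n).astype(float)   # 1-10 certainty scale
male = rng.integers(0, 2, n).astype(float)
fall = rng.integers(0, 2, n).astype(float)
real = rng.uniform(5, 30, n)
hyp = real + (11 - certainty) * rng.uniform(0, 3, n)
bias = hyp - real

# (1) Hypothetical bias regressed on certainty and demographics.
X1 = np.column_stack([np.ones(n), certainty, male, fall])
b1, *_ = np.linalg.lstsq(X1, bias, rcond=None)
print("bias on certainty/demographics:", b1.round(3))

# (2) Conditional bid residuals: hypothetical bids purged of demographics,
#     then bias regressed on those residuals.
X2 = np.column_stack([np.ones(n), male, fall])
b2, *_ = np.linalg.lstsq(X2, hyp, rcond=None)
resid = hyp - X2 @ b2
b3, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), resid]), bias, rcond=None)
print("bias on bid residuals:", b3.round(3))
```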
Individuals submitting higher hypothetical bids are not necessarily the same individuals with greater hypothetical bias; if they are not, the correlation will be zero. The salient assumption behind the frontier-calibration is that individuals submitting higher hypothetical bids are also the individuals displaying greater hypothetical bias, and the regression in Table 26.2 confirms this assumption is correct. Also, the high coefficient of determination compared to the certainty question regression suggests that conditional bid residuals explain more hypothetical bias than answers to certainty questions.
Table 26.2 also shows a regression of hypothetical bid residuals against answers to the certainty question, revealing a positive and significant correlation between self-uncertainty and bid residuals. From this, one can conclude that self-uncertainty causes people not only to overestimate how much they are willing to pay, but also to submit higher hypothetical bids than those of similar demographics. Self-uncertainty results in greater bid residuals and greater hypothetical bias, thus providing theoretical justification for the certainty- and frontier-calibrations.
The auction results suggest that the frontier-calibration and the certainty-calibration are intimately related. This naturally leads one to wonder whether they can be profitably combined. The certainty-calibration was designed for dichotomous choice questions, but can easily be modified for use in auctions. It was previously shown that the frontier-calibration works because the error term µ_i in the frontier model is an indirect measure of self-uncertainty. Since the certainty question is a direct measure of self-uncertainty, one can make the distribution of µ_i conditional on answers to the certainty question. We next test whether this modification improves inferences obtained from hypothetical bids.
Combining calibrations

Here, we utilize the auction data described previously to determine whether embedding answers to the certainty question within the frontier-calibration approach provides better forecasts of observed bids. This is referred to as a hybrid-calibration. Hypothetical bids are modeled as
Y_i^H = X_iβ + v_i + µ_i = X_iβ + ε_i

where X_iβ + v_i is the stochastic true bid and µ_i is the hypothetical bias. The distribution of v_i is assumed N(0, σ²), while µ_i is allowed to follow a half-normal distribution, µ_i ~ |N(0, α_i²)|. Since µ_i is the indirect measure of hypothetical bias in the frontier-calibration approach, this measure might be enhanced by including information about individuals' level of uncertainty. Individuals' answers to the certainty question (C_i) are incorporated by specifying α_i to follow
α_i = α_0 + α_1(C_i/10)

where C_i is divided by ten to facilitate convergence in non-linear estimation. To test whether the certainty question improves predictions of actual bids, the
model is also estimated with α_1 constrained to equal zero. This constraint produces the frontier-calibration model proposed by Hofler and List (2004). With some modification, the log-likelihood function for ε_i = v_i + µ_i is given by Aigner et al. (1977) as

LLF = Σ_i [ln(α_i) + 0.5α_i²σ² − α_iε_i + ln Φ(ε_i/σ − σα_i)]

where Φ is the standard normal cumulative distribution function (φ below denotes the standard normal density). Predicted hypothetical bias for each individual is calculated by determining the expected value of µ_i given the residual ε_i. This expectation is (Aigner et al., 1977)

E(µ_i | ε_i) = σ[φ(A_i)/(1 − Φ(A_i)) − A_i]

where A_i = −ε_i/σ + σα_i. As the estimated residual ε̂_i = Y_i^H − X_iβ̂ increases, the level of predicted hypothetical bias increases as well. The log-likelihood function for this model is (Kumbhakar and Knox-Lovell, 2000)

LLF = Σ_i {−ln σ̃_i + ln[1 − Φ(−(Y_i^H − X_iβ)λ_i/σ̃_i)] − 0.5σ̃_i⁻²(Y_i^H − X_iβ)²}

and the estimated hypothetical bias is

E(µ_i | ε_i) = σ_i*[φ(ε_iλ_i/σ̃_i)/(1 − Φ(−ε_iλ_i/σ̃_i)) + ε_iλ_i/σ̃_i]

where σ̃_i = (σ² + α_i²)^(1/2), σ_i* = α_iσ/σ̃_i and λ_i = α_i/σ.
Estimates were obtained by maximizing the above log-likelihood function using the fminunc algorithm in MATLAB. The parameter estimates with and without the certainty question are shown in Table 26.3. As expected, the coefficient on the certainty question, α_1, is negative, implying that individuals who express greater certainty are projected to exhibit less hypothetical bias. This coefficient is significant according to t-tests and likelihood ratio tests.4 This further confirms the positive relationship between self-uncertainty and bid residuals.
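As a rough illustration of this estimation step, the sketch below codes the normal/half-normal log-likelihood with α_i = α_0 + α_1(C_i/10) and maximizes it with scipy's minimize routine, a Python analogue of MATLAB's fminunc. The simulated data, starting values, and the log-transformation used to keep σ positive are our own assumptions, not details from the chapter.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Negative log-likelihood of the hybrid normal/half-normal frontier, with the
# half-normal scale alpha_i = alpha_0 + alpha_1*(C_i/10) depending on the
# certainty answer C_i. Constraining alpha_1 = 0 gives the Hofler-List model.
def neg_llf(theta, Y, X, C):
    k = X.shape[1]
    beta, sigma = theta[:k], np.exp(theta[k])          # sigma > 0 via exp
    alpha = theta[k + 1] + theta[k + 2] * (C / 10.0)   # alpha_0 + alpha_1*(C/10)
    alpha = np.maximum(alpha, 1e-6)                    # keep the scale positive
    eps = Y - X @ beta
    sig_t = np.sqrt(sigma**2 + alpha**2)               # sigma_tilde_i
    lam = alpha / sigma                                # lambda_i
    ll = (np.log(2) - np.log(sig_t) + norm.logpdf(eps / sig_t)
          + norm.logcdf(eps * lam / sig_t))
    return -ll.sum()

# Simulated stand-in data: hypothetical bias shrinks as certainty C rises.
rng = np.random.default_rng(1)
n = 83
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])
C = rng.integers(1, 11, n).astype(float)
mu = np.abs(rng.normal(0.0, 20.0 - 1.5 * C))
Y = X @ np.array([15.0, -4.0]) + rng.normal(0.0, 4.0, n) + mu

start = np.array([15.0, -4.0, np.log(4.0), 10.0, -1.0])
fit = minimize(neg_llf, start, args=(Y, X, C), method="BFGS")
print(fit.x)  # beta (2), log(sigma), alpha_0, alpha_1
```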
An interesting question is whether incorporating answers to the certainty question in the frontier model of hypothetical bids improves forecasts of actual bids. As Table 26.3 shows, the root-mean squared error of real bid forecasts is lower when the certainty question is used, but the difference does not appear large.

Table 26.3 Stochastic frontier estimation (sample size = 83)

                                         Normal/half-normal model
Independent variables                    Without certainty question   With certainty question
                                         Parameter estimate (t-statistic)
Intercept                                8.3507 (3.58)***             8.6529 (3.57)***
Male dummy variable                      –4.7209 (–1.84)*             –4.5184 (–1.85)*
Fall dummy variable                      1.4239 (0.59)                1.3611 (0.5369)
σ                                        3.6669 (3.18)***             3.8250 (3.28)***
α_0                                      25.0394 (11.12)***           37.4089 (5.12)***
α_1                                      –                            –16.6951 (–2.05)**
Log-likelihood function                  –321.53                      –318.68
Mean calibrated bid                      6.04                         6.40
Root-mean squared error from using       12.99                        12.68
calibrated bids to forecast true bids

Note: The superscripts *, **, and *** denote significance at the 10 percent, 5 percent, and 1 percent levels, respectively.

To determine whether a significant difference exists, a statistical test is conducted using a bootstrap method that works as follows. Let W_i^NOCERTAIN be the squared forecast error, which is the squared difference between the calibrated bid and the actual bid for the ith person, when α_1 (the indirect hypothetical bias measure) is constrained to equal zero. Similarly, let W_i^CERTAIN be the squared forecast error when the value of α_1 is unrestrained. Finally, let W_i = W_i^NOCERTAIN − W_i^CERTAIN. The contribution of the certainty question to frontier-calibration is measured by testing the null hypothesis that E(W_i) = 0 against the alternative hypothesis that E(W_i) > 0.
Values of W_i are expected to be non-normally distributed, so a non-parametric bootstrap is conducted. A total of 1,000 bootstraps are conducted where, within each bootstrap, individual values of W_i are randomly chosen with replacement to yield 83 simulated W_i values; the average W_i at each bootstrap is denoted W̄. The percent of non-positive W̄s can serve as a p-value for the statistical test. In 100 percent of bootstraps W̄ was positive, indicating that embedding the certainty-calibration within the frontier-calibration reduces the forecast error between predictions of actual bids from hypothetical bids and actual bids. Therefore, we conclude that the inclusion of the certainty question in the hybrid-calibration significantly improves forecasts of actual bids.
The hybrid-calibration does not, however, provide unbiased estimates of true bids – true bids are systematically underestimated. This is confirmed by another non-parametric bootstrap.
Let B^C be the calibrated bid using the hybrid-calibration and B^R be the real bid. A statistic D = B^C − B^R is constructed, which equals the difference between the calibrated and real bid for each subject. A total of 1,000 bootstraps are then conducted where individual values of D are sampled with replacement, and the average of the simulated Ds is calculated at each bootstrap. Across all bootstraps the average simulated D was negative, confirming that calibrated bids under-predict true bids. Identical results are found using the frontier-calibration without the certainty question. This is consistent with the results of Johannesson et al. (1999) and Norwood (2005), who also found calibrated values to under-predict true values. This does not mean that calibration is not useful, though. Uncalibrated values from stated preference methods are known to over-predict true values, so if calibrated values under-predict, researchers can easily construct lower and upper bounds on true values. The contribution of the hybrid-calibration is to decrease the width of this interval, providing more accurate assessments of true values.
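The non-parametric bootstrap used in both tests is simple to code. Below is a minimal Python sketch of the first test (the sign of W̄); the simulated W_i values stand in for the actual squared forecast error differences, which are not reproduced here.

```python
import numpy as np

# Resample paired differences W_i = W_i^NOCERTAIN - W_i^CERTAIN with
# replacement and examine the sign of the bootstrap means.
rng = np.random.default_rng(2)
W = rng.normal(loc=2.0, scale=6.0, size=83)  # illustrative stand-in values

boot_means = np.array([
    rng.choice(W, size=W.size, replace=True).mean()
    for _ in range(1000)
])
# Share of positive bootstrap means; in the chapter this was 100 percent,
# supporting E(W) > 0 over the null hypothesis E(W) = 0.
print((boot_means > 0).mean())
```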
Results and implications

Despite the plethora of research regarding hypothetical bias, no study to our knowledge has attempted to formulate a theory of hypothetical bias and to seek empirical evidence of its validity. This study argues that hypothetical bias is partly caused by uncertainty over the utility to be gained from a good. The presence of uncertainty causes the value of the good to fall. Studies have shown that people are less risk averse in hypothetical than in real settings, so the discount stemming from uncertainty has a stronger effect in real settings than in hypothetical settings, producing a hypothetical bias. When the utility from a good is uncertain, subjects will also place an option value on having more time to gather information before the purchase rather than committing to a purchase now, decreasing a subject's willingness to pay at present. This option value will be taken into consideration more in real than in hypothetical situations, leading hypothetical values to be larger than real values. Thus, uncertainty in the utility from a good leads to hypothetical bias through both risk aversion and commitment costs. Moreover, subjects appear able to express this uncertainty on a one to ten scale, as the experimental data confirm a positive relationship between self-expressed uncertainty and hypothetical bias.
In fact, this "self-uncertainty" appears to be the theoretical foundation for two calibration techniques: the certainty-calibration and the frontier-calibration. Until now, these two calibrations have been used separately, the certainty-calibration for discrete choice experiments and the frontier-calibration for auctions. This study shows that the two can easily be combined to produce a hybrid-calibration in auctions, and that combining the two calibrations does improve forecasts of real bids. This hybrid-calibration could also be extended to the discrete case by modeling utility as a stochastic frontier function as in Norwood (2005) and letting deviations from the frontier be contingent upon answers to the certainty scale. Whether this would
improve inferences in discrete experiments as it improves inferences in this study is left to future research.
This study is only a small step in explaining hypothetical bias. One should use caution in transferring these results to other settings, especially those involving public goods, auctions other than the Vickrey Auction, and cases where the pools of subjects submitting hypothetical and real values differ. Also, the theory offered here should not be interpreted as a complete theory of hypothetical bias; numerous other factors may also contribute. On a final note, although this study exposes the theoretical foundation for calibration, the theory does not in any way imply that calibrated values from stated preference experiments will be unbiased estimates of true values. If anything, they will likely underestimate true values. This has important implications for how calibrated values are interpreted. Calibrated values must be interpreted as a lower bound to true values. When combined with uncalibrated values, which provide an upper bound, they yield an interval capturing true values.
Appendix

Table 26.a.1 Written answers to the question: "If you circled a number less than 10, indicating some uncertainty over how you would bid in a real auction, in your own words, could you please explain why you were uncertain over how you would bid in a real auction?"

Stated responses for those indicating a certainty level less than 10:

I was uncertain because I wasn't sure how much the chair is really worth to me, and so I didn't know the chair's worth to me.
Because I was not certain the real value of the item to me.
Because I wasn't sure I could afford it at this time.
I was not sure of the worth of the chair.
I am not sure if I really want the chair, or I just am caught up in the game. In a real auction I would be held responsible to buy the chair if I won.
Depending on money availability. Right now, yes I would pay for it. A week from now, maybe not.
Depends on the financial situation I would be in at the time, and how bad in a real auction I wanted the chair.
I was a little unsure as to what to bid because I am not sure what it would cost at a store. I don't want to bid more here than what I could buy it for at the store.
It would depend on when the auction was and how much money I had at that time. Also if I wanted the chair or not. Also would depend on where the money went. I would give more to charity than back to OSU.
Because I didn't know exactly how much money I would want to give up. That is the thing with auctions, you have to figure out how much money you are willing to give up for an item. Plus, I already have a lot of those chairs!
Due to the supply out of this room on the market I am able to shop around for another chair.
I was uncertain because I really didn't need the chair but if it was cheap enough I would buy it. I would go to Wal-Mart first and check their prices.
I was not 100 percent sure that I really want the item. Buyers remorse.
Maybe I would and wouldn't want the chair. I wasn't sure if I would bid the same. I did bid a little higher.
It would depend on how much money you had at the time.
I wasn't absolutely certain that I would pay that amount.
There are different and more people at a real auction, and its harder to guess what other people or you might do on a different day in a real auction.
Because there was no money at stake. I am a poor college student and don't have all the money that I need to just put into an auction.
Depends on how many people want it, how bad I want it, and if I have enough money to go higher.
Overall, this is an auction that is composed primarily of college students. Many of us are living off of our parents or working part-time jobs to fund their living expenses or various habits. Giving the preceeding, I was confident that my bid would have won.
I really don't need this product so I would set a price. If I won, good, if not, no problem.
I didn't really have a purpose for buying the chair in the first auction, but in the second auction (only because I had more time to think about it), I decided I could probably buy it cheap and sell it to someone for a higher price.
I was uncertain because it would have to depend on if I really needed the chair, if I had extra money, and how much I felt I could afford.
I was uncertain about the $25 because I don't need the chair for that moment.
I don't know exactly how bad I want the chair. I do know that it is not a necessity and that adds to the uncertainty.
I did not know how the other bidders felt about the chair.
I did not feel that I really needed the chair.
There are different reasons of uncertainty. What your checking account is, what mood you are in, and when the next football game is.
I don't really know what the chair is worth.
Notes
1 The definition of "true" values throughout this chapter is values revealed through actual payments of money in an incentive-compatible valuation mechanism. In our data, the true values are real Vickrey Auction bids.
2 Let f(C_i, β) be a function predicting the probability of a "yes–yes" response based on the ith subject's answer to the certainty question, denoted C_i. A value of C_i = 1
(C_i = 10) indicates low (high) certainty. β is a parameter vector. Let I_i = 1 if the ith subject answers "yes–yes" and zero otherwise. The Johannesson study evaluated forecasts of the I_i using a parameter vector β that was estimated from the true values of the I_i. That is, the same group of individuals was used to estimate β and to judge calibration performance. For this reason, the predictions of the Johannesson study are in-sample predictions, and the value of β was chosen to maximize prediction accuracy. If β were estimated from one group of students, and then used to predict the I_i of another group of students, the predictions would be out-of-sample.
3 Hofler and List also consider an alternative calibration where hypothetical bids are calibrated using the unconditional expectation of µ_i. This entails replacing Step (C) of the frontier-calibration with "Obtain a frontier-calibrated bid Ŷ_i by calculating
⎛ Xi ˆ ˆ =⎜ Y i ⎜ X ˆ + E( ⎝ i
⎞ H ⎟Yi ⎟ i )⎠
^ where β is the maximum likelihood estimate of β.” This method performed equally well at removing hypothetical bias. 4 The likelihood ratio test statistic for the null hypothesis that α1 = 0 is 2(321.53 − 318.68) = 5.7. The p-value for the null hypothesis that α1 = 0 is 0.017, so we conclude α1 is indeed significantly negative.
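To make the calibration in note 3 concrete, here is a minimal numerical sketch. The design matrix, coefficient estimates, and bids below are invented for illustration; this is not the study's code or data.

```python
import numpy as np

# Sketch of the unconditional frontier calibration in note 3, assuming
# the frontier model Y_i^H = X_i*beta + v_i + mu_i has been estimated.
# All numbers below are illustrative assumptions, not estimates.
def frontier_calibrate(X, beta_hat, e_mu_hat, y_hyp):
    """Scale each hypothetical bid by X*beta / (X*beta + E(mu))."""
    frontier = X @ beta_hat                      # systematic part, X_i*beta
    multiplier = frontier / (frontier + e_mu_hat)
    return multiplier * y_hyp

X = np.array([[1.0, 2.0], [1.0, 5.0]])           # intercept plus one covariate
beta_hat = np.array([3.0, 0.5])                  # illustrative MLE estimates
y_hyp = np.array([6.0, 8.0])                     # stated hypothetical bids
print(frontier_calibrate(X, beta_hat, 1.2, y_hyp))  # bids shrink toward X*beta
```

Each hypothetical bid is scaled down toward its frontier value, which is the sense in which the calibration removes the estimated one-sided (hypothetical bias) error term.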
References
Aigner, D., C. A. K. Lovell, and P. Schmidt, 1977. "Formulation and Estimation of Stochastic Frontier Production Function Models." Journal of Econometrics, 6 (1), 21–37.
Beckers, D. E. and C. J. Hammond, 1987. "A Tractable Likelihood Function for the Normal-Gamma Stochastic Frontier Model." Economics Letters, 24 (1), 33–8.
Blumenschein, K., M. Johannesson, G. C. Blomquist, B. Liljas, and R. M. O'Connor, 1998. "Experimental Results on Expressed Certainty and Hypothetical Bias in Contingent Valuation." Southern Economic Journal, 65 (1), 169–77.
Calcagnini, G., 1997. "Small Firm Investment and Financing Decisions: An Option Value Approach." Small Business Economics, 9 (6), 491–502.
Carson, R. T., N. E. Flores, and N. F. Meade, 2001. "Contingent Valuation: Controversies and Evidence." Environmental and Resource Economics, 19 (2), 173–210.
Champ, P. and R. C. Bishop, 2001. "Donation Payment Mechanisms and Contingent Valuation: An Empirical Study of Hypothetical Bias." Environmental and Resource Economics, 19 (4), 383–402.
Dixit, A. K. and R. S. Pindyck, 1994. Investment Under Uncertainty. Princeton University Press, Princeton, NJ.
Fox, J. A., J. F. Shogren, D. J. Hayes, and J. B. Kliebenstein, 1998. "CVM-X: Calibrating Contingent Values with Experimental Auction Markets." American Journal of Agricultural Economics, 80 (3), 455–65.
Greene, W. H. Personal correspondence. 27 April 2004.
Harrison, G. W. and E. E. Rutstrom, 2002. "Experimental Evidence on the Existence of Hypothetical Bias in Value Elicitation Methods." In C. R. Plott and V. L. Smith (eds.) Handbook of Experimental Economics Results. Elsevier Science, New York.
Hofler, R. and J. A. List, 2004. "Valuation on the Frontier: Calibrating Actual and Hypothetical Statements of Value." American Journal of Agricultural Economics, 86 (1), 213–21.
Holt, C. A. and S. K. Laury, 2002. "Risk Aversion and Incentive Effects." American Economic Review, 92 (5), 1644–55.
Johannesson, M., B. Liljas, and P.-O. Johansson, 1998. "An Experimental Comparison of Dichotomous Choice Contingent Valuation Questions and Real Purchase Decisions." Applied Economics, 30 (5), 643–7.
Johannesson, M., G. C. Blomquist, K. Blumenschein, P.-O. Johansson, and B. Liljas, 1999. "Calibrating Hypothetical Willingness to Pay Responses." Journal of Risk and Uncertainty, 18 (1), 21–32.
Jondrow, J., I. Materov, K. Lovell, and P. Schmidt, 1982. "On the Estimation of Technical Inefficiency in the Stochastic Frontier Production Function Model." Journal of Econometrics, 19 (2–3), 233–8.
Kandel, E. and N. D. Pearson, 2002. "Option Value, Uncertainty and the Investment Decision." Journal of Financial and Quantitative Analysis, 37 (3), 341–74.
Kumbhakar, S. C. and C. A. K. Lovell, 2000. Stochastic Frontier Analysis. Cambridge University Press, New York.
Laffont, J., 1995. The Economics of Uncertainty and Information. Massachusetts Institute of Technology Press, Cambridge, MA.
List, J. A. and C. Gallet, 2001. "What Experimental Protocol Influence Disparities Between Actual and Hypothetical Stated Values?" Environmental and Resource Economics, 20 (3), 241–54.
List, J. A., M. Margolis, and J. F. Shogren, 1998. "Hypothetical-Actual Bid Calibration of a Multigood Auction." Economics Letters, 60 (3), 263–8.
Little, J. and R. Berrens, 2004. "Explaining Disparities Between Actual and Hypothetical Stated Values: Further Investigation Using Meta-Analysis." Economics Bulletin, 3 (6), 1–13.
Majd, S. and R. S. Pindyck, 1987. "Time to Build, Option Value, and Investment Decisions." Journal of Financial Economics, 18 (1), 7–27.
Murphy, J. J., P. G. Allen, T. H. Stevens, and D. Weatherhead, 2005. "A Meta-Analysis of Hypothetical Bias in Stated Preference Valuation." Environmental and Resource Economics, 30 (3), 313–25.
Norwood, F. B., 2005. "Can Calibration Reconcile Stated and Observed Preferences?" Journal of Agricultural and Applied Economics, 37 (1), 237–48.
Reifschneider, D. and R. Stevenson, 1991. "Systematic Departures from the Frontier: A Framework for the Analysis of Firm Efficiency." International Economic Review, 32 (3), 715–23.
Umberger, W. J. and D. M. Feuz, 2004. "The Usefulness of Experimental Auctions in Determining Consumers' Willingness-to-Pay for Quality-Differentiated Products." Review of Agricultural Economics, 26 (2), 170–85.
Zhao, J. and C. L. Kling, 2001. "A New Explanation for the WTP/WTA Disparity." Economics Letters, 73 (3), 293–300.
Zhao, J. and C. L. Kling, 2004. "Willingness-to-Pay, Compensating Variation, and the Cost of Commitment." Economic Inquiry, 42 (3), 503–17.
27 Discussion
Valuation and preferences

John C. Whitehead
Introduction

As all environmental economists well know, ad nauseam, on 24 March 1989 the Exxon Valdez spilled 11 million gallons of oil into Prince William Sound. Less well known, on the same day I was busy working on my dissertation on the effects of substitute environmental goods on existence value. For better or for worse, I bumbled into one of the key issues surrounding the magnitude of natural resource damages associated with the oil spill and, more broadly, one of the most active research literatures in the annals of environmental economics. This unfortunate event shaped the research agendas of many environmental economists during the 1990s due to the magnitude of the disaster and the pecuniary and non-pecuniary richness of the intellectual activity.

What became known as the "contingent valuation debate" focused on the values that were first known as existence values, then as non-use values, and finally as passive use values. Passive use values are the values that people have for the environment even if they do not use the environmental resources on-site (e.g. on a fishing trip). People may value the existence of an environmental resource without revealing their value through behavior, which makes the measurement of passive use values a considerable challenge. The contingent valuation method (CVM), a hypothetical survey approach in which "willingness to pay" questions are asked, is still the most commonly used method to assess the extent of passive use values.

Back in those dark days experimental economists were already attempting to debunk the notion that hypothetical willingness to pay statements had much validity at all. The publication of Mitchell and Carson's Contingent Valuation Method book in 1989 was a trumpet call for contingent valuation economists (i.e. those that faithfully "do" CVM) to circle the wagons and claim that hypothetical willingness to pay statements are equivalent to true willingness to pay if CVM is done correctly (i.e. conducted by likeminded folks with significant research budgets). A large number of experimental and survey researchers have disputed this claim, and they are probably correct.
Fifteen years later, almost all contingent valuation economists, including myself, have grudgingly accepted the fact that hypothetical bias exists and is the biggest challenge confronting the use of CVM estimates for policy analysis. Attention has turned to what to do about hypothetical bias and other problems. Grudgingly, I must admit that much has been learned from the experimental economics laboratory. Indeed, many contingent valuation economists (not me) have become real scientists and conducted their own laboratory experiments.

This self-absorbed story serves to illustrate the main contribution of experimental economics to applied environmental valuation research. The CVM is a very simple and powerful valuation methodology. You can estimate the use values and passive use values of the craziest things. But, in the wrong hands, it can be embarrassingly abused. Experimental economists can bring contingent valuation economists out of their orbit around a far-off hypothetical planet (e.g. Pluto) and back down to earth. The valuation-related experimental economics chapters in this book are a good representation of this "back down to earth" literature. The topics covered run the gamut of CVM frailties: hypothetical bias, risk perception, preference asymmetries and incentive compatibility.
Compare and contrast

Data gathered from the real world is messy. Experiments are usually sold as a way to isolate factors that might influence behavior in a controlled setting. As an example, consider a telephone survey respondent who is considering whether policy X is worth $A. As part of the valuation scenario, the respondent is told that there is a P percent chance that the policy goals will be achieved. The contingent valuation economist might naively conclude that the survey respondent actually believes that there is a P percent chance. Yet, despite the best efforts of the survey design team, the survey respondent might actually think there is a P + ∆ percent chance that the policy goals will be achieved. The failure to control adequately the uncertainty of the respondent might lead to valuation responses that contain their own uncertainty or even appear to be irrational. The laboratory economist can explicitly control for the uncertainty faced by subjects in the valuation exercise.

So, what's not to like about experimental economics? The biggest concern about experiments among contingent valuation economists is sample size and composition. Experiments are expensive and time-consuming, so the samples are necessarily small. In order to generate samples large enough for data analysis, convenience samples are often used instead of samples drawn from the general population. Yet, statistical inferences are drawn. Typically, experiments are conducted with undergraduate students who may or may not have partied 'til dawn on the day of the experiment. In fact, the incentive to participate in the experiment might be that night's beer money. Just once I wish that I would hear an experimental economist admit that their sample consists of a small number of 20-somethings nursing hangovers. In contrast, contingent valuation economists
have become adept at obtaining large samples that are fairly representative of the population on small research budgets (this wasn't always the case).

One of the foundations of the experimental method is the avoidance of context. Experimental subjects play dice and card games without a clue as to what the researcher is trying to accomplish. This seems OK until, again, you consider the subjects. Some of the 20-something college males that watch TV poker are bound to be trying to win at gambling instead of answering questions in an incentive compatible manner. At times, economic experiments seem too artificial in a valuation context. It has become routine for contingent valuation economists to rudely and naively roll their eyes as the experimenter describes how the "dice and two-layer card game" (I just made that up) can be applied to the intricacies of estimating the benefits of an environmental policy.

In contrast, contingent valuation economists go overboard on supplying contextual information. In attempts to make valuation scenarios believable and incentive compatible they can become unbelievable. Scenarios are becoming so contextualized that it is difficult to extrapolate values beyond the current situation. This is good news for the practitioner/policy analyst who wants to run another survey instead of resorting to a boring benefit transfer. It is bad news for the efficiency of policy analysis.

Experimental economists argue that their data are based on real economic incentives. This is good, even great, especially in comparison to the valuation context where survey respondents must pretend that they are actually paying money. Sometimes, though, the real money seems like a very small incentive. Is it enough to get the subject to think hard about a problem? If you are paid a $10 show-up fee that covers your beer money, working hard for an extra $2 or $5 might be subject to diminishing returns to beer. Experimental economists will sometimes admit that their subjects are not paying full attention. Yet, contingent valuation economists face the same problem (nod your head if you have ever entered your own survey data into the computer and wondered what the respondent was thinking). In defense of both experimental and stated preference methods, I'm convinced that real, live consumers don't always fully pay attention when they are participating in markets. And yet, they tend to get the job done.

Numerous other quibbles might exist. The laboratory is an artificial environment. But then, so is a telephone interview. If these methods of data collection are invalid then most labor economics, and maybe some macroeconomics, uses bogus data too. Without artificially generated data we'd all be left with scanner data and property value data as the source of our economic insights. Another problem is that much valuation-related laboratory research employs induced values – respondents are told how much they want the poker chip (i.e. product). It becomes a simple behavior to state that you'd be willing to pay 87 if your slip of paper has an 87 written on it. Less simple are experiments with "homegrown" values, i.e. values formulated by the experimental subject. High bidders with real money can be sold private goods, such as candy bars, and even goods with public characteristics.
These are reasons that make the experimental laboratory a tough place in which actually to develop estimates of benefits and costs for policy analysis. As the saying now goes, experiments are test beds. The laboratory is where you begin to understand the incentive structure of an auction, game or valuation question. Experiments and stated preference surveys should not be considered substitutes (if anyone still thinks this way) or opposing approaches (if anyone still thinks this way). Experiments and stated preference surveys should be considered complementary approaches. The strengths of experiments are the weaknesses of the CVM (and, maybe, vice versa). Some of the best CVM research has used experimental and survey methods as complements. Some of the best valuation-related experimental research has been taken out of the laboratory and put into the field with real people as subjects.
The examples in this book

Those that are directly related to CVM

Several chapters in this volume have direct relevance and, in fact, are motivated by the biggest problems faced by CVM-ers. Carson, Chilton and Hutchinson consider the incentive compatibility of dichotomous choice valuation questions. Norwood, Lusk and Boyer tackle the eminent problem of hypothetical bias. Noussair, Robin and Ruffieux use experiments to contrast demand behavior against opinion surveys.

Carson, Chilton and Hutchinson

A valuation question is incentive compatible when respondents have reasons to tell the truth. A major breakthrough in CVM research occurred with the dichotomous choice valuation question (i.e. yes or no). Before its discovery the open-ended valuation question was dominant. With an open-ended question, respondents are asked how much they are willing to pay. The classic free rider problem, common with open-ended questions, is overcome with the dichotomous choice question since it more closely resembles a market transaction (e.g. do I want to purchase this product or not?). In theory, if you would buy or not buy the product you should say as much.

Then, CVM-ers get greedy. Instead of being satisfied that the willingness to pay estimate was about right, we also want its estimation to be as precise as possible. The dichotomous choice question doesn't provide much information about values, only whether willingness to pay is above or below a certain dollar threshold, and so the confidence intervals are fairly wide. The double-bounded dichotomous choice question became the rage. With a double-bounded question, respondents are asked a follow-up question with a higher (lower) dollar threshold if they are willing (not willing) to pay the first. Confidence intervals are much tighter but the second valuation question doesn't generate the same willingness to pay estimate as the first (the second question is incentive incompatible).
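The interval logic of the double-bounded question is easy to state in code. A minimal sketch, my own illustration rather than anything from the chapters themselves:

```python
import math

# Bounds on willingness to pay (WTP) implied by a double-bounded
# dichotomous choice question: a "yes" ("no") to the first bid triggers
# a higher (lower) follow-up bid, and the pair of answers brackets WTP.
def dbdc_interval(first_bid, yes1, higher_bid, lower_bid, yes2):
    if yes1 and yes2:
        return (higher_bid, math.inf)   # yes-yes: WTP at least the higher bid
    if yes1:
        return (first_bid, higher_bid)  # yes-no: WTP between the two bids
    if yes2:
        return (lower_bid, first_bid)   # no-yes: WTP between the two bids
    return (0.0, lower_bid)             # no-no: WTP below the lower bid

# A respondent who accepts $10 but refuses the $20 follow-up:
print(dbdc_interval(10.0, True, 20.0, 5.0, False))  # (10.0, 20.0)
```

The tighter brackets are exactly why the confidence intervals shrink, and why any inconsistency between the two answers matters.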
Carson, Chilton and Hutchinson approach this issue by supposing that the problems with the second question might be due to its hypothetical (i.e. inconsequential) nature. They find that consequentiality does not affect truth telling in the first question but decreases truth telling in the second. The problem is that it is difficult to convince respondents to consider both questions separately.

Norwood, Lusk and Boyer

Hypothetical bias exists when stated preference survey respondents fail to consider fully their preferences and/or income constraints when stating willingness to pay. As a result, hypothetical payments sometimes rise above payments made with real money in the same situation. This is a problem that won't go away with improved valuation scenarios and questions. As Norwood, Lusk and Boyer state: "'hypothetical bias' has been found in close to 90 percent of studies comparing hypothetical to non-hypothetical values." Note, however, that CVM estimates of non-market use values (e.g. recreation, health) don't seem to suffer from hypothetical bias. Curiously, it is only the extremes – market goods and environmental goods that provide passive use values – that seem to suffer the most from hypothetical bias.

In addition to being in the vanguard of discovery of this problem, experimental economists have led the charge to develop mitigation approaches. Two have been advocated. The so-called "cheap talk" approach describes hypothetical bias to research subjects and exhorts them not to fall into this trap. Cheap talk has been successful in the laboratory, with controlled conditions and a captive audience. In the field, cheap talk has had mixed results as the long laboratory cheap talk script is boiled down into something that won't cause a less captive audience to bolt for the door. The second approach is the certainty rating. After respondents state they are willing to pay money for a product they are asked how certain they are about their response. Experimental studies have found that respondents who are fairly certain that they would pay actually do pay. Adjusting hypothetical valuations by a certainty rating adjusts for hypothetical bias.

A third "calibration" approach exists. Frontier-calibration is an empirical approach that adjusts hypothetical willingness to pay statements downward. Norwood, Lusk and Boyer compare certainty rating calibration to frontier-calibration. This is important since both approaches are a bit ad hoc. Certainty adjustment requires that the researcher discard payments below a threshold of certainty chosen by the researcher. Frontier-calibration requires the assumption that the empirical adjustment is an adjustment for hypothetical bias. Norwood, Lusk and Boyer find that the combination of the two approaches may outperform either lone approach.
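A minimal sketch of the certainty adjustment, again my own illustration: the cut-off of 8 below is an assumption, since the appropriate threshold is exactly the open question.

```python
# Certainty-rating calibration: treat a hypothetical "yes" as a "no"
# whenever the respondent's stated certainty (on a 1-10 scale) falls
# below a researcher-chosen cut-off. The cut-off here is illustrative.
def certainty_calibrate(answers, certainty, cutoff=8):
    return [a and (c >= cutoff) for a, c in zip(answers, certainty)]

hypothetical_yes = [True, True, False, True]
certainty_scores = [10, 6, 9, 8]
print(certainty_calibrate(hypothetical_yes, certainty_scores))
# [True, False, False, True] -- the uncertain "yes" is recoded as "no"
```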
Noussair, Robin and Ruffieux

Genetically modified foods raise the red food safety flag in the minds of consumers. If big corporate farms have altered the taste, smell, look and feel of foods then consumers perceive that they must be altered for the worse. Consumers don't want to buy these products, or they will buy them only at a steep discount. The last part of the previous sentence is the good news since genetically modified foods can be produced at lower cost (it is a technological advance, for gosh sake) and sold at lower prices. The policy problem is that governments tend to ban genetically modified foods based on popular opinion instead of food science.

Noussair, Robin and Ruffieux deal with many of the contingent valuation economist's complaints about experimental economics raised above. The most encouraging aspect of their research is the use of a fairly large sample of the general population and home-grown values. As such, it is difficult to criticize the direct applicability of this study's results to policy, unless we get overly hung up on the artificiality of the laboratory. Noussair, Robin and Ruffieux find that some of their laboratory food customers are willing to purchase the genetically modified food product at the same price as the regular product and a majority will purchase it at a discount. Interestingly, willingness to pay for the food falls when the genetically modified food label is shown on an overhead relative to its actual size on the package. The policy implications of this result are clear: an efficient information policy will require overhead projectors and large screens at all food stores that sell genetically modified food.

Finally, they also compare a second-price sealed bid auction, similar to how the US Treasury sells public debt, with a random price market. The theory says that both of these should produce statements of maximum willingness to pay, yet the sealed bid auction performed better. Again the policy implications are clear: the US Treasury should not sell bonds by drawing the actual price out of a lottery urn. Seriously, these results could have important implications for the adoption of genetically modified foods.
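Incidentally, the incentive-compatibility claim for the random price market is easy to verify numerically. In the sketch below (my own illustration, with an assumed value of $12 and prices drawn uniformly from $0 to $20), truthful bidding earns the highest expected payoff:

```python
import random

# Under a random-price (BDM) rule the buyer purchases whenever her bid
# is at least a randomly drawn price, and pays that drawn price -- so
# shading the bid either forgoes profitable purchases or forces
# unprofitable ones. The value and price range are assumptions.
def expected_payoff(bid, value, trials=100_000, price_max=20.0):
    total = 0.0
    for _ in range(trials):
        price = random.uniform(0.0, price_max)
        if bid >= price:
            total += value - price
    return total / trials

random.seed(1)
for bid in (6.0, 12.0, 18.0):   # underbid, truthful bid, overbid
    print(bid, round(expected_payoff(bid, value=12.0), 2))
# roughly 2.7, 3.6, 2.7 -- the truthful bid of 12.0 does best
```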
Those that are indirectly related to CVM

Three chapters in this volume deal with risk and are indirectly related to valuation. Blackwell, Grijalva and Berrens consider decision making when respondents are faced with "hard" uncertainty. Ewing, Kruse and Thompson consider the role that information plays when formulating risk perceptions in the context of wind hazards. These studies can inform valuation research since stated preference survey respondents often must formulate risk perceptions before they formulate willingness to pay contingent upon that risk. Haab and Roe consider why consumers might sometimes change their preferences (when all well-trained economists know that this is not possible).

Blackwell, Grijalva and Berrens

Hard uncertainty exists when an outcome is uncertain and the probabilities are unknown. For example, a decision maker may think there is a chance that an endangered species will become extinct but the chance could be 1 percent, 10 percent or 50 percent. When this much uncertainty exists, a safe minimum standard approach to endangered species protection may be warranted. In contrast to the standard benefit–cost analysis, which would determine that the optimal population of a species is one that maximizes the net benefits of the species to society, the safe minimum standard is a decision rule that would set a minimum threshold below which the species population should not fall as long as the costs are bearable. The threshold might be above or below the optimal population. Climate change might also be subject to hard uncertainty. We think that global warming is happening but we don't know the chances of the various outcomes. The US Endangered Species Act and the Kyoto Protocol have been interpreted as applications of the safe minimum standard.

Blackwell, Grijalva and Berrens investigate the decision rules adopted by subjects faced with hard uncertainty. Consistent with benefit–cost analysis is the maximum expected value rule. Consistent with the safe minimum standard decision rule is the minimax decision rule, where the respondent "maximizes the minimum value." Another is the minimax regret decision rule, where decision makers minimize the maximum possible regret (regret is the difference between the ex post outcome and the ex ante choice). The authors find that, in a variety of situations, respondents are most likely to adopt the minimax regret rule. The least likely choice is the most conservative: the minimax decision rule. What is left to consider is whether these results would be replicated in the laboratory in the endangered species or climate change contexts.
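A small worked example, with made-up payoffs for three actions across three states whose probabilities are unknown, shows how the three rules can each pick a different action:

```python
# Made-up payoff table: rows are actions, columns are states of the
# world with unknown probabilities (hard uncertainty).
payoffs = {
    "conserve": [5, 5, 5],
    "develop":  [12, 1, 3],
    "mixed":    [9, 2, 4],
}
states = range(3)

# Maximum expected value (equal weights), as benefit-cost analysis might use
ev = {a: sum(p) / len(p) for a, p in payoffs.items()}
# Minimax: maximize the minimum (worst-case) payoff
mm = {a: min(p) for a, p in payoffs.items()}
# Minimax regret: regret is the shortfall from the best action per state
best = [max(payoffs[a][s] for a in payoffs) for s in states]
regret = {a: max(best[s] - p[s] for s in states) for a, p in payoffs.items()}

print(max(ev, key=ev.get))           # develop  -- highest average payoff
print(max(mm, key=mm.get))           # conserve -- safest worst case
print(min(regret, key=regret.get))   # mixed    -- least maximum regret
```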
Ewing, Kruse and Thompson

Participants in the markets for natural hazard (i.e. hurricane) insurance must confront the well-known problems of adverse selection and moral hazard. With adverse selection, those most likely to need insurance are those most likely to try to purchase it, while insurance companies would prefer to sell insurance to those who will not need it. With moral hazard, those who purchase insurance are more likely to engage in risky behavior. Those with homeowners insurance might be less likely to board windows before a hurricane. Add to these the problem of risk ambiguity (i.e. "uncertainty about uncertainty"). Consumers who confront risk ambiguity might overestimate or underestimate their ability to deal with hazardous situations.

Consider the housing quality choice. Residents of hazard-prone areas can purchase insurance, self-protect or pursue both strategies. In the hurricane context, self-protection might involve the purchase of hurricane shutters and manufactured home tie-downs and anchors. Households that self-protect and have uncertainty about the uncertainty of the ability of their self-protection to withstand a storm might overestimate the ability of their self-protection products and fail to purchase insurance. Others might underestimate the ability of their self-protection products and purchase too much insurance.

Ewing, Kruse and Thompson investigate risky decision making in the context of the housing quality choice. Both engineering and economic experiments are conducted. The engineering experiment places manufactured homes behind a propeller airplane and determines the damage caused by the winds generated by the propellers. The economic experiment involves respondents making educated guesses about the damage caused by various wind speeds and then allowing them to sell their decisions. In this way the authors simulate the market for insurance with risk ambiguity. They find that respondent decisions are consistent with underinsuring behavior.

Haab and Roe

Early on, environmental valuation researchers discovered the problem of a divergence between willingness to pay and willingness to accept for the same commodity. For example, you might be willing to pay, at most, $5 for the same coffee mug that you would not sell for less than $10. Various theories have been developed to explain this behavior. My current favorite, or, at least, the one that is most related to the Haab and Roe chapter, is self-selection. Consumers who value a product most are more likely to own it. In the laboratory, subjects who are endowed with a commodity come to see themselves as self-selected owners with a reluctance to sell.

This is a problem for environmental policy analysis since it is not always clear which estimate of value, willingness to pay or willingness to accept, should be used in the benefit–cost analysis. An understanding of property rights can help. If property rights are held by the seller (buyer), then the willingness to pay (accept) measure of value is appropriate. Unfortunately, in much environmental policy the appropriate property rights assignment is implicit or even under intense debate. Willingness to pay questions are highly appropriate for fishing and hunting contexts where anglers and hunters have many years of experience of purchasing the rights to fish and hunt. But, some argue that business firms have the right to pollute the air due to a long history of law predating 1970s regulation. Others argue that people have a property right to breathe clean air. Both willingness to pay and willingness to accept questions will generate responses that indicate a protest of the implicit assignment of property rights.

Haab and Roe show that preferences can be manipulated given the context of the experiment. Subjects are confronted with two labor-intensive tasks and asked to rank them in terms of enjoyment if the pay was the same. Respondents are then asked to report their compensation demanded for the tasks. Some respondents are asked a question, prior to reporting their compensation demanded, that provides an anchor for the compensation demanded question. The authors find evidence that the anchors can influence the relative rankings of the labor tasks. Considering the self-selection explanation of the willingness to pay and willingness to accept debate, different anchors can lead to different property rights regimes, which can further muddle the choice of the appropriate value estimate for benefit–cost analysis.
Implications

What have we learned from this body of research that can be used to improve valuation of benefits or costs for environmental policy? The most robust result in all of CVM research is the price effect in the dichotomous choice question. As the hypothetical price goes up, the percentage of hypothetical willingness to pay responses goes down. At the most fundamental level, survey respondents/consumers/citizens are rational decision makers. With this result in hand, any perverse behavior or anomaly that arises can be dealt with in some way.

Some problems aren't big problems anymore

The major problem with the CVM, hypothetical bias, has gone from being seen as a mistake (e.g. perceived poor questionnaire wording or minimal research budgets) to a problem that won't go away and must be dealt with empirically. The experimental literature has identified empirical fixes and is beginning the comparison to determine their validity. The use of follow-up certainty rating questions seems most promising. Future experimental and CVM research should help determine the appropriate certainty cut-off and further explore its validity.

As mentioned earlier, double-bounded dichotomous choice questions were developed because contingent valuation economists got greedy. We weren't satisfied with rational behavior; we wanted rational behavior and tight confidence intervals. Tight confidence intervals don't matter much in real world policy making. In fact, I don't recall ever reading a benefit–cost analysis report that did anything other than report the confidence intervals. So, while I don't object to continued research into double bounds (in fact, I'm currently pursuing some myself), I don't view incentive incompatibility, anchoring or any other source of divergence between first and second willingness to pay estimates to be a major issue. We should settle on the first willingness to pay question and be happy that it works.

Some problems can be hand-waved away with the appropriate level of sanguinity

Consumers have problems dealing with risk and uncertainty. Shoppers might spend 15 minutes agonizing over the tradeoff between products with divergent health benefits and minimal price differentials. Sometimes the uncertainty is so great that they throw up their hands and refuse to make a decision. It is, therefore, not an anomaly, or overly problematic in any way, when non-expected value maximizing behavior arises in an experimental or field, real or hypothetical, or whatever type of setting. It is a wonderful thing if willingness to pay estimates change in the expected ways with probabilities; but, if they don't, the CVM or any other preference generating method is not invalidated. Consumers are free to adopt simple decision rules to help them deal with uncertainty. That said, research that continues to try to understand less than über-rational behavior is highly productive.
More on this theme: consumers are quirky. I've long known that my time-constrained mall shopping behavior depends on which door I enter. My ask price for my 10-year-old beater is higher than my bid price for the same car. Preference reversals are a part of normal consumer behavior. The economist's rigid adherence to the assumption of stable preferences seems absurd to non-economists and, maybe, it is. So, again, I don't view preference reversals as a major problem. The first principles of consumer theory are fine in that they explain 90ish percent of consumer behavior. Attempts at understanding the anomalies are interesting and fun but not referenda on neoclassical theory.
Conclusions

At this point, the only thing that I am sure of is that I've embarrassed myself enough with a few half-thought-through assertions. Hopefully, though, this chapter provides a glimpse into the complementarities between experimental economics and applied valuation research. For what it is worth, I encourage more research with a focus on the complementarities between experiments and surveys, especially within a single research project. For example, collaboration between experimental economists and contingent valuation economists on a two-year project that focuses on experiments in year one and valuation in year two seems especially exciting.
Reference
Mitchell, R.C. and R.T. Carson (1989) Using Surveys to Value Public Goods: The Contingent Valuation Method. Baltimore, MD: Johns Hopkins University Press for Resources for the Future.
Index
Abreu, D. 212 Acheson, J.M. xix, 213, 214 Adams, B.J. 395 adverse selection 472 Agresti, A. 411 Aigner, D.C. 459 air pollution impact reduction schemes, contingent valuation experiment 424, 425, 429–44; socio-economic/ demographic profile of subsamples 432–4; split sample design 429, 430–1, 441; willingness to pay (WTP) measures 426, 434–6 (differences within/across treatments 430, 436–41; event-splitting effects 439, 440; ordering effects 431; part–whole/substitution effects 434, 437, 439, 442; scope sensitivity 436, 437, 442; value consistency 436, 441–2; visible choice set effects 430, 431, 437, 438, 439, 440) Ajzen, I. 345 Akerlof, G.A. 348 Alberini, A. 408 Aldrich, L. 345 Alm, J. 256, 257 Alpizar, F. 307 altruism 308; and voluntary contributions mechanism (VCM) 195, 197, 199, 200, 201, 202, 203, 204 ambient pollution instruments, compliance with 307–23, 326; with enhanced instructions (recommended play) 308, 311, 312–21 (aggregate decision numbers 309, 312–15; efficiency 308, 311–12, 312–15; large capacity subjects 310, 315–16, 317, 318, 319, 320, 321; medium capacity subjects 310, 315–16, 317, 318, 319, 320, 321; Nash equilibrium 307, 308,
309, 310–11; tax instrument treatment 308, 309, 310, 312, 313, 314, 315, 316, 317, 319, 321; tax/subsidy instrument treatment 308, 309, 310, 312, 313, 314, 315, 316, 317, 318) ambiguity 368, 399, 400 ambiguity aversion 369, 400 Ames, R.E. 195 Anderson, C.M. 29–46, 132–3 Anderson, K. 347 Anderson, L.R. 280–92, 326 Anderson, S.P. 312 Andreoni, J. 185, 195, 196, 207, 214, 307, 428 arbitrage, and preference reversals 390, 391, 392 Ariely, D. 332, 337 Arnason, R. 29 Arrow, K.J. 367, 368–9, 378, 379, 407, 426 Atlantic States Marine Fishery Council (ASMFC) 31 auctions: demand revealing 349–50; discriminant price 106–7, 109, 114–15, 117–20, 125, 136; double 80, 81; oral 80, 135; uniform price 106–7, 109, 114–15, 117–20, 122–4, 125, 136; Vickrey 349–50, 352, 358–61, 362, 453, 455 audits and environmental information disclosure: mandatory programs 249–50, 251–3, 254–5, 256, 257, 264; voluntary programs 261, 263, 266, 270, 271, 274, 276, 326 Balasubrimanian, S. 344 Banzhaf, S. 408 Barbera, S. 369 Baron, R. 48, 51
Barrett, S. 47, 48 baseline-and-credit emission permit trading 9–28, 131–2; aggregate emissions 21, 23, 26, 27, 131–2; capacity 21, 22; cost parameters 18; efficiency 19–20, 23, 24, 131; equilibrium 18, 19–20, 27; output volume 21, 22, 26, 27; performance standard 10, 19, 26; permit inventories 21, 26, 27; permit prices 25, 26; surplus, consumer and producer 19–20, 24; variable capacity predictions 18 Bateman, I.J. 408, 424–46 Batie, S. 366 Baumol, W.J. 105 Beard, T.R. 243 Becker–deGroot–Marschak (BDM) mechanism 296, 334, 349–50, 351, 352, 358–61, 362, 398 Beckers, D.E. 454 behavior, patterns of 2 Beil, R.O. 195 Ben-David, S. 81 Bergstrom, J. 408 Bergstrom, T.C. 185, 212 Berrens, R.P. 366–82, 450, 471–2 Bet Bet dryland community, Australia 100, 102–3, 136 Binmore, K. 107, 108, 126 Bishop, R. 234, 366, 367, 380, 447, 449, 455 Blackwell, C. 185, 366–82, 471–2 Blaikie, P. 213 Blamey, R. 345 Blisard, N. 345 Blumenschein, K. 450 Bohm, P. 2, 51, 57, 79, 349, 350 Bolton, G. 294 bonus invariance axiom 367 Boyer, J. 47 Boyer, T. 447–65, 469, 470 Boyle, K.J. 428 Braga, J. 107–8 Brehm, J. 244–5 Bromley, D.W. 108, 212 Brookshire, D.S. 345 Brosig, J. 176 Brown, G. 234 Brown-Kruse, J. 195 Bryan, B. 106, 107 Buckley, N.J. 9–28, 81, 131–2, 137–8, 139 Bulte, E. 366 Burton, A.C. 409 Burtraw, D. 3, 131–9
buyer and seller liability in emission permit trading 47–76, 133–4; efficiency outcomes 59–60, 67, 71, 72, 73, 134; emission outcomes 58, 59, 60, 67, 72, 73; and enforcement environment 68–71; joint liability 134; and overproduction-to-total sales ratio 70, 71, 73, 75 n9; and permit holdings 63, 69, 70; and permit prices 57, 61, 62, 63, 64–6, 67–8, 69, 70; and permit trade volume 58, 66–8, 70, 73; and production choices 60, 61, 62, 63, 64, 69, 70; and production outcomes 58, 60, 73, 133; random ending point sessions 72–3; reputation effects 74 n3, 134 Buzby, J.C. 349 Cadsby, C.B. 195 Calcagnini, G. 452 Camerer, C. 110, 281, 368, 369, 399, 400 Cameron, M.P. 424–46 Cameron, T.A. 408, 411 cap-and-trade programs: emission permit trading 9, 10–28, 131–2 (aggregate emissions 21, 23, 26, 131; capacity 21, 22; cost parameters 18; efficiency 19–20, 23, 24, 131; equilibrium 17–20, 26; output volume 21, 22, 26; permit inventories 21, 25–6, 27; permit prices 25, 26; surplus, consumer and producer 19–20, 24; variable capacity predictions 18); see also fishery management; recharge credit trade scheme Capra, C.M. 143–56, 234–5 Caputo, M. 212 Cardenas, J.C. 110, 143 Carlen, B. 51 Carson, K.S. 407–23, 469–70 Carson, R.T. 408, 411, 424, 426, 427, 430, 442, 447, 466 ‘Carte di Regola’ mechanism 214 Casari, M. 213, 214 Cason, T.N. 2, 10, 48, 49, 51, 77–99, 107, 114, 120, 134–5, 214 Castle, E. 366, 367, 379 certainty-calibration 448, 449–53, 457, 461, 470 certainty equivalents 399, 402, 404 Chamberlain, G. 391 Champ, P. 447, 449, 455 Chan, K.S. 184 charitable organizations 184–5, 236 Charness, G. 293, 294, 296, 308 Chaudhuri, A. 155
Chayes, A. and Chayes, A. 48 Chen, K.-P. 246, 257 Cherry, T.L. 1–6, 184–93, 236, 383–94 Chilton, S.M. 407–23, 469–70 choice see decision making, under hard uncertainty; preferences Choquet Expected Utility Theory 369 Chu, C.Y.C. 257 Ciriacy-Wantrup, S.V. 366 Clark, K. 143 Clarke, J. 207–8 Clean Development Mechanism (Kyoto Protocol) 9, 134 Clifton, C. 104, 106 closed call market 109, 111–12, 115, 120–2, 126, 136 Coase, R.H. xix–xx Cochard, F. 307, 321 Cohen, M.A. 244 Combris, P. 353 commitment costs 453, 454, 456, 461 common-pool resources (CPRs) see common property; natural resource appropriation common property 4, 234–5; distributional concerns 293; see also natural resource appropriation communication 215; chat room versus face-to-face 145; face-to-face 145, 214, 215, 226, 227; and natural resource appropriation 143–56, 213, 214, 226, 227, 234–5; random effects estimation of 154; and strategic uncertainty 150 competence 399, 400 competence hypothesis 400, 403 confusion effects, voluntary contributions mechanism (VCM) experiments 195–211, 237 congestion: entry behavior 280–90; entry fees (tolls) 280, 286–7, 288–90; information provision 280, 287, 288, 290; stylized model of 281–4; welfare consequences of 280, 281–90 Connor, Jeffrey 100–30, 136–7 consequentiality in double referendum surveys 407–23 consolidation, industry see fishery management consumer behavior, and genetically modified foods 344–65, 471–2 context issues 3, 4–5, 109–10, 236, 238, 325, 327, 468 contingent valuation (CV) studies 345, 407, 447, 466–75; event splitting effects
428; list direction and length effects 428, 429; ordering effects 427–8, 429; part–whole effects 427; scope sensitivity 426; substitution effects 427; visible choice set effects 428–9; see also air pollution impact reduction schemes; double-bounded dichotomous choice (DBDC) referenda; hypothetical bias control versus context 4–5 Cooper, J. 408 Cooper, R. 50, 143 cooperation: and distributional preferences 294, 299–300, 303; leadership as inducement to (emissions abatement) 157–8, 160, 163, 173, 176–8; natural resource appropriation 218, 226, 227 coordination games: communication and natural resource extraction 143–56, 235; positive feedback as motivator in 154–5; strategic uncertainty in 150, 154, 155 Coppinger, V.M. 350 Cordell, J. 213 Cornes, R. 184, 185 Cotten, S.J. 194–211, 236–7 Coursey, D.I. 345 Cox, J. 204, 341, 349, 350 Crocker, T. 47 Croson, R. 308 Cubitt, R.P. 428 Cummings, R.G. 3, 79, 251, 345, 408 Dales, J.H. 47, 105 Daniel, K. 399 Davidse, W.P. 30 DeBondt, W.F.M. 399 decision making: under ambiguity 368, 400; expected value max (EV) criterion 371, 372, 373, 374, 375, 376, 377, 378, 472; under hard uncertainty 366–82, 471–2; minimax (MM) criterion 367, 368, 371, 372, 373, 374, 375, 376, 377, 378, 472; minimax regret (MMR) criterion 367, 368, 371, 372, 373, 374, 375, 378, 379, 472; optimal 137–8, 368; Safe Minimum Standard (SMS) rule 366–7, 368, 379, 380, 472; Simultaneous Expected Utility Maximization (SIMEU) 369 demand revelation 349–50, 358, 362; and consequentiality in double referenda 407–23 DeShazo, J.R. 408, 411 Dewees, D. 10 Dhanda, K.K. 257
Index 479 dichotomous choice valuation questions: incentive compatibility of 469; see also double-bounded dichotomous choice (DBDC) referenda Dickinson, D.L. 184–93, 236, 307 diffusion, industry see fishery management disclosure of environmental violations see environmental information disclosure programs; voluntary discovery and disclosure of environmental violations discovered preference hypothesis 107 discriminant price auctions 106–7, 109, 114–15, 117–20, 125, 136 distributional preferences 325; common history and 294, 298, 301, 302; cooperation and 294, 299–300, 303; earnings and 295, 296–7, 302, 303; and regulatory change 293–306; unconditional 293–4, 304 n1 Dixit, A.K. 452 double auction markets 80, 81 double-bounded dichotomous choice (DBDC) referenda: consequentiality and demand revelation in 407–23, 469–70; information effects 419–21, 422 Dubner, S.J. 324, 325 Duke, C. 77–99, 134–5 Dutta, P.K. 212 earnings, and distributional preferences 295, 296–7, 302, 303 Eckel, C. 3, 195 Edmonds, J. 48 efficiency: ambient pollution instruments 308, 311–12, 312–15; baseline-andcredit emission permit trading 19–20, 23, 24, 131; buyer and seller liability in emission permit trading 59–60, 67, 71, 72, 73, 134; cap-and-trade programs 19–20, 23, 24, 131; and industry consolidation and diffusion (fisheries) 38–40; natural resource appropriation 218, 220, 221, 222, 223 ‘El Farol’ dilemma 281 Ellermann, D.A. 293 Ellsberg, D. 369, 400 emission permit trading 2, 131–2, 133–7, 257; and command and control mechanisms, compared 78; distributional concerns 293, 294; overselling 48; see also baseline-andcredit emission permit trading; buyer and seller liability in emission permit
trading; cap-and-trade programs; water and salinity rights emissions: baseline-and-credit plans 21, 23, 26, 27, 131–2; and buyer and seller liability in permit markets 58, 59, 60, 67, 70, 72, 73; cap-and-trade plans 21, 23, 26, 131; efficient abatement condition 16; efficient output condition 16; information disclosure programs see environmental information disclosure programs; intensity of output 10, 16; level of output 10; unilateral abatement; see also ambient pollution instruments; Hoel model emissions-input/output (emission intensity) ratio 9, 10 Endangered Species Act (ESA), 1973 367 Energy Star program 243 enforcement of emission commitments 47, 48; see also environmental information disclosure programs; sanctions Engelmann, D. 296 Environmental Defense Fund 49 environmental information disclosure programs 243–60; compliance/noncompliance with 243, 244–5 (audit probability and 249–50, 251–3, 254–5, 256, 257, 264; managerial incentive structures and 245, 246–60); investor reactions to information 244; effect on subsequent environmental performance 244; see also voluntary discovery and disclosure of environmental violations Environmental Protection Agency (EPA), US 2, 9; Audit Policy 261; Energy Star program 243; Open Market Rule 50; sulfur dioxide auction 9, 79; Toxics Release Inventory (TRI) 243, 244 Erev, I. 281 escrow accounts 51 European Business Council for a Sustainable Energy Future 49 Evans, M.F. 243–60, 325, 326, 327 event splitting 428 Ewing, B.T. 395–406, 471, 472–3 Falck-Zepeda, J.B. 347 Farmer, M. 366, 367, 379, 380 Fehr, E. 184, 212, 213, 214, 215, 294, 307, 308 Ferraro, P.J. 194–211, 236–7, 428 financial disclosure requirements 256 Fischbacher, U. 195, 197–8, 253, 281 Fischer, C. 10, 132
Fisher, J. 184 fishery management: communication and coordination in 143–56, 234–5; monitoring and 227, 235; tradable allowance (permits) systems and industry consolidation and diffusion in 29–46, 132–3 (efficiency 38–40; market shares 30, 32, 34, 40–2, 43, 44; prices 36–8, 44; profit functions 34–5) Ford, W. 30 Fox, J.A. 349, 399, 447 free riding xix, 125, 136, 178, 186, 194, 208–9, 345 Freeman, M.A. 29–40, 132–3 Frey, B. 195 frontier-calibration 448, 453–5, 457–8, 459, 460, 461, 470 Gächter, S. 158, 184, 195, 197–8, 212, 213, 214, 215, 307 Gallet, C.A. 345, 447 'gambler's fallacy' behavior 255, 325 Gangadharan, L. 77–99, 107, 114, 120, 134–5 Gardner, R. 212, 213–14, 215, 217, 226 Gaskell, G. 344 genetically modified foods: consumer preferences and 344–65, 470–1; labeling 346, 347, 348, 357–8, 361, 362, 471; segregation of market 347–8, 362; threshold levels 348–9, 355–6, 361; welfare costs 362; welfare gains 347–8, 362; and willingness to pay 345, 349–50, 351, 358–62, 471 George, J.G. 293–306, 325, 327 Gervais, S. 399 Gilboa, I. 369 Gilpatric, S.M. 243–60 Gintis, H. 108, 110, 125 Glance, N.S. 212 Glimcher, P. 281 Godby, R. 47–76, 133–4 Goeree, J. 195, 197, 199–205, 281 Goodstein, E.S. 426 Gordon, H.S. xx Gordon, S. 212 Grether, D. 332, 341, 349, 383 Grijalva, T. 366–82, 471–2 Grossman, P. 195 groundwater recharge and salinity see recharge credit trade scheme Guala, F. 107 Güth, W. 158
Haab, T. 331–43, 471, 473 Hahn, R. 107 Haites, E. 50 Halloran, M.A. 215 Hamilton, J.T. 244–5 Hammack, J. 234 Hammond, C.J. 454 Hanemann, M. 407, 408, 430 hard uncertainty see decision making, under hard uncertainty Hardin, G. xix, xx, 212 Harper, D.G.C. 280 Harrington, W. 243 Harrison, D. Jr. 293 Harrison, G. 379, 447 Hasselknippe, H. 10 Hayek, F. von xx Hayes, D.J. 348, 349 Heath, C. 400 Heberlein, T. 234 Hechter, M. 212 Heiner, R. 383 Helland, E. 243 Hoehn, J.P. 427 Hoel model of unilateral emissions abatement 158, 159–78, 237; and leadership as inducement to cooperative behavior 157–8, 160, 163, 173, 176–8; mixed-sequential decision protocol (seq) 159–76; (efficiency index 175–6; profits 173–6; subgame perfect equilibrium (SPE) 160, 161, 162, 163, 165, 166, 167, 169, 171–3, 174, 177, 180–1); Pareto optimum (PO) 159–60, 161, 162, 165, 166, 167, 169, 170, 172, 174, 176, 177, 181; and profits 173–6, 178; simultaneous decision protocol (sim) 160–78, 178 (efficiency index 175–6; Nash equilibrium (NE) 159, 160, 161, 162, 163, 165, 166, 167, 169, 170, 171–3, 174, 179–80; profits 173–6) Hoffman, E. 294, 349 Hofler, R. 447, 448, 453, 454, 455, 459 Hogarth, R.M. 369 Hohl, A. 366 Holcomb, J.H. 214, 215 Holt, C. 3, 106, 199–205, 280–92, 451, 452 Horan, R.D. 104 Houser, D. 196, 197 Housing and Urban Development (HUD) Code 395 Hsee, C. 332 Huberman, B.A. 212
Huffman, W. 346, 349 Hummels, D. 195 Humphrey, S.J. 428 Hurwicz, L. 368–9, 378, 379 Hurwicz decision criterion 368, 378 Hutchinson, W.G. 407–23, 469–70 hybrid-calibration 448, 458–61 hypothetical bias 408–9, 447–65, 467, 470, 474; bid residuals and 454–5, 457–8; certainty-calibration and 448, 449–53, 457, 461, 470; cheap talk approach 470; commitment costs and 453, 454, 456, 461; frontier-calibration and 448, 453–5, 457–8, 459, 460, 461, 470; hybrid-calibration and 448, 458–61; risk aversion and 451–2, 454, 461; self-uncertainty and 448, 450, 451–2, 454–5, 456–7, 461 ignorance, comparative 399, 403 individual transferable quota (ITQ) management 29 induced values 349–50, 351–2, 362, 468 industry consolidation and diffusion see fishery management Innes, R. 261, 265, 266, 277 institutional design 2 inter-dependent utility 195 Irwin, J.R. 350 Isaac, R.M. 143, 184, 186, 197, 215, 216 Itaya, J. 184, 185 Jackson, M. 369 Johannesson, M. 449–50, 457, 461 Johnson, E. 333 Johnson, L.T. 293–306 Johnstone, N. 105, 293 Jondrow, J. 454 Jong, E. 238 Joskow, P. 138 Kagel, J. 106, 350 Kahn, E. 138 Kahneman, D. 108, 281, 333, 400, 442 Kallbekken, S. 3 Kandel, E. 452 Kanninen, B. 430 Kaplow, L. 261, 262, 265, 266, 267, 277 Katz, L. 255 Keller, L.R. 350 Kelsey, D. 369 Kemp, M.C. 185 Keohane, N. 105 Khan, F. 214
Khanna, M. 244 Kim, S. 246 Kirchler, E. 399 Kling, C. 1, 3, 4, 234–9, 452, 453 Knetsch, J.L. 442 Knight, F.H. 368, 399 Knox-Lovell, C.A. 454, 459 Kolmogorov–Smirnov test 316, 321 Konar, S. 244 Kopp, R. 49 Krause, K. 110 Kreps, D. 212 Krippendorf, K. 411 Kroll, S. 1–6 Kruse, J.B. 395–406, 471, 472–3 Kruskal–Wallis test 62, 67 Krutilla, J.V. 345 Kumbhakar, S.C. 454, 459 Kunreuther, H. 369 Kurzban, R. 196, 197 Kyoto Protocol 9, 47, 48, 133, 134 Laband, D.N. 195 land management see recharge credit trade scheme; water and salinity rights Langford, I.H. 430 LaPlace decision criterion 368 Larkin, S. 33, 44 Larson, B. 243 Laschke, A. 399 Latacz-Lohman, U. 106 Laury, S. 195, 199–205, 451, 452 Lazear, E.P. 245 leadership, as inducement to cooperative behavior (emissions abatement) 157–8, 160, 163, 173, 176–8 Ledyard, J.O. 124, 125, 184, 308 Lei, V. 149 Lence, S.H. 348 Leonard, G.K. 408 Levin, D. 350 Levitt, S.D. 324, 325 lighthouse services xix–xx Lin, W. 347 List, J. 332, 345, 349, 350, 379, 447, 448, 453, 454, 455, 459 Little, J. 450 Lobster Conservation Management Team (LCMT) 31 Loewenstein, G. 208 Loomes, G. 108, 110 lottery funding of public goods 185 Lovallo, D. 281, 399 Lowenstein, G. 108, 110
Lucking-Reiley, D. 349 Lueck, D. 212 Lusk, J.L. 346, 349, 447–65, 469, 470 McFadden, D. 408 Maciejovsky, B. 399 McKean, M. 213 McKee, M. 185, 243–60 Majd, S. 452 Malik, A. 261, 262, 263, 266, 267 Mann-Whitney U-test 62, 67, 165, 176, 275, 312, 313, 316, 321 marginal per capita return (MPCR), voluntary contributions mechanism (VCM) experiments 186, 187, 188, 189, 190, 204–5 Market Based Instruments (MBI) Pilots Program, Australia 67 market 'bubbles' 138 market failure xix market power 138 market shares, and industry consolidation and diffusion (fisheries) 30, 32, 34, 40–2, 43, 44 Marks, M. 308 Marshall, G.R. 108–9 Marwell, G. 195 Masclet, D. 154–5 Maskin, E. 369 Matulich, S. 30 Maynes, E. 195 Meecham, D. 398 Meier, S. 195 Mestelman, S. 9–28, 50–1, 79, 131–2, 137–8, 139 Meyer, D.J. 281 Milgrom, P. 80, 106, 107, 114, 246 Milnor, J. 367, 368 Milon, J.W. 33, 44 Mishra, B.K. 277 Missfeldt, F. 50 Mitchell, R.C. 424, 466 modular/mobile homes, wind hazard risk perception and 395–406 Moir, R. 185, 212–33 monitoring, and natural resource appropriation 212–33, 235–6 Montgomery, W. 105, 138 Montreal Protocol 47 Moon, W. 344 moral hazard 472 Morgan, D. 281 Morgan, J. 185 Moxnes, E. 158
Muller, R.A. 9–28, 50–1, 79, 131–2, 137–8, 139 multiple public goods, voluntary contributions with 184–93, 236; baseline treatment (single public good) 186, 187, 188, 189, 190; marginal per capita return (MPCR) 186, 187, 188, 189, 190; multiple heterogeneous treatment 186, 187, 188, 189, 190–1; multiple homogeneous treatment 186, 187, 188–90 Munysiwalla, A. 380 Murphy, J.J. 257, 261–79, 326, 327 Murray Darling Basin Commission 77 Murray Darling Basin, water and salinity rights trading 77–81, 135 Myerson, R.B. 107 Nakamura, H. 214 Nalbantian, H. 309 Nalebuff, B. 245 Nash equilibrium (NE): ambient pollution instruments, compliance with 307, 308, 309, 310–11; congestion pricing entry experiment 283, 284, 285; natural resource appropriation 217, 218, 219, 220, 221, 222, 226; unilateral emissions abatement 159, 160, 161, 162, 163, 165, 166, 167, 169, 170, 171–3, 174, 179–80 National Action Plan for Salinity and Water Quality (NAP), Australia 77–8 National Institute of Standards and Technology 396 National Oceanic and Atmospheric Administration (NOAA), Panel on Contingent Valuation 407, 426, 447 national sovereignty 47 natural resource appropriation: communication effects 143–56, 213, 214, 226, 227, 234–5; cooperation 218, 226, 227; distributional concerns 293; efficiency 218, 220, 221, 222, 223; information effects 218, 220, 222, 223–4; monitoring and sanctioning effects 212–33, 235–6; Nash equilibrium 217, 218, 219, 220, 221, 222, 226; symmetric subgame perfect equilibrium (SSPE) 217, 218, 220, 222 Nehring, K. 369, 378–9 Neill, H. 345 Nelson, P.S. 214, 215 neo-classical growth models, Ramsey–Cass–Koopmans (RCK) 144–5 Netting, R. xx
Newell, R. 33, 44 non-point source pollution problem see ambient pollution instruments Nordhaus, W. 47 Norwood, F.B. 447–65, 469, 470 Noussair, C. 42, 149, 344–65, 469, 470–1 Nowell, C. 195 Nyborg, K. 345 Oates, W.E. 105 Ochs, J. 281 Ockenfels, A. 294 Odean, T. 399 OECD 29 Ones, U. 215 open call market 109, 111–12, 115, 120–2, 126, 136 optimal decision making 137–8, 368 option value 452–3, 456, 461 oral auctions 80, 135 Ostrom, E. 102, 108, 124, 125, 143, 212, 213–14, 215, 217, 226, 227 overconfidence 399, 400, 402 Oxoby, R.J. 209, 307–23, 326, 327 Palfrey, T.R. 195, 197, 214 Palmini, D. 367, 379 parallelism 251 Pareto optimum (PO), unilateral emissions abatement 159–60, 161, 162, 165, 166, 167, 169, 170, 172, 174, 176, 177, 181 Payne, J.W. 442 Pearson, N.D. 452 Peña-Torres, J. 213 permit prices 138; baseline-and-credit 25, 26; buyer and seller liability compared 57, 61, 62, 63, 64–6, 67–8, 69, 70; cap-and-trade 25, 26; and industry consolidation and diffusion in fisheries 36–8, 44; water and salinity rights, simultaneous markets for 86, 87–9, 90–5, 96 PERT emission trading market 50 Petrie, R. 185, 214 Pfaff, A. 261, 265, 277 Pindyck, R.S. 452 Platteau, J.P. 212 Plott, C.R. 2, 10, 79, 80, 107–8, 110, 126, 213, 214, 251, 290, 332, 341, 349, 383 Poe, G.L. 108, 110, 124, 125, 205–6, 307, 321 Porter, D.P. 110 preference reversals 383, 390–3, 475; anchoring and adjustment mechanisms
332–3, 335–8, 339–41; arbitrage and 390, 391, 392; gamble reversals (GR) 332–3, 338; socio-economic attributes and 391–2, 393; static choice 331–43 preferences 325, 383, 384, 387; distributional 325 (common history and 294, 298, 301, 302; cooperation and 294, 299–300, 303; earnings and 295, 296–7, 302, 303; unconditional 293–4, 304 n1); reciprocal 293, 294, 295, 300–1, 303, 308, 325; and regulatory change 293–306; risk 267, 278 n5; selfinterested 293, 294; source 400, 403; see also air pollution impact reduction schemes; contingent valuation (CV) studies; double referendum surveys; genetically modified foods Prendergast, C. 246 prices, permit see permit prices Prisbey, J.E. 195, 197 property rights xix, 473 public goods 1, 4, 236–7, 307–8, 327; anonymous versus non-anonymous 185; excludable xx; local versus global 185; lottery funded 185; private provision of xix; willingness to pay for see willingness to pay; see also multiple public goods; voluntary contributions mechanism (VCM) experiments public goods theory xix–xx Putterman, L. 215 Quiggin, J.C. 369, 408, 411 Rabin, M. 293, 294, 297, 308, 399 Rabkin, J. 47, 48 Ramsey–Cass–Koopmans (RCK) neoclassical growth model 144–5 Randall, A. 107, 366, 367, 379, 380, 427 Rapoport, A. 281 Rappoport, E. 395 rationality spillovers 383–94 Ready, R.C. 367 Reardon, G.F. 398 recharge credit trade scheme 100–30, 136–7; communication and social payment treatment 122–4, 125–6; with discriminant and uniform price auctions 106–7, 109, 114–15, 117–20, 122–4, 125, 136; and information processing cost and requirements 107–8, 115; open and closed call treatments 109, 111–12, 115, 120–2, 126, 136; participation rates and non-market motivation 108–9;
with social payments 115–17, 122–4, 125–6, 136 reciprocal preferences 293, 294, 295, 300–1, 303, 308 regulatory change, social preferences and 293–306 Reifschneider, D. 454 Reiley, David 280–92 Renner, E. 158 reputation effects 74 n3, 134 risk 472, 474; and ambiguity, distinction between 399, 400; and uncertainty, distinction between 368 risk aversion 451–2, 454, 461 risk neutrality 131, 399; and voluntary disclosure of environmental violations 263–7 risk perception, wind hazard 395–406, 471, 472–3 risk preferences 267, 278 n5 Roberts, J. 246 Robin, S. 344–65, 469, 470–1 Rocco, E. 145 Roe, B. 331–43, 471, 473 Rondeau, P.J. 205–6 Rosen, S. 245 Rosenthal, H. 214 Roth, A.E. 79 Ruffieux, B. 344–65, 469, 470–1 Rutström, E.E. 293–306, 350, 447 Sadiraj, V. 204 Safe Minimum Standard (SMS) rule 366–7, 368, 379, 380, 472 Sagoff, M. 345, 379 Saijo, T. 214 salinity credit trading see recharge credit trade scheme; water and salinity rights salt abatement costs, private 84, 85 salt interception cost 85 Samuelson, P.A. xix, xx Sanchirico, C.W. 261, 265, 277 sanctions: and emissions reduction 47–8, 50; and natural resource appropriation 212–33, 235–6; and voluntary disclosure of environmental violations 262–3, 266–7, 271, 274, 275–6 Savage decision criterion 368 Schary, C. 104 Schelling, T. 50 Schkade, D. 333, 442 Schlager, E. 293 Schmalensee, R. 47
Schmeidler, D. 369
Schmidt, K.M. 294, 308
Schotter, A. 309
Schweinberger, A.G. 185
Seabright, P. 212
Sefton, M. 185, 215
Segerson, K. 4, 307, 308, 309, 321, 324–8
Seidl, C. 331, 332, 341
self-interest 293, 294
self-uncertainty 448, 449; and bid residuals 457, 458, 459; commitment costs and 453; and hypothetical bias 448, 450, 451–2, 454–5, 456–7, 461
Sethi, R. 212, 217
Settle, C. 383–94
Shavell, S. 243, 261, 262, 265, 266, 267, 277
Shawhan, D. 3, 131–9
Shogren, J.F. 1–6, 47–76, 133–4, 331, 345, 346, 349, 350, 383–94
Shortle, J.S. 104
Simon, H. 107, 383
Simultaneous Expected Utility Maximization (SIMEU) 369
Sky Trust 294
Smith, D.A. 396
Smith, V.L. xix–xxi, 4, 33, 107, 108, 125, 251, 351, 383
Søberg, M. 51
Soboil, M. 29
social preferences see preferences
Solomon, B. 47, 217
Somanathan, E. 212
Spraggon, J. 209, 307–23, 326, 327
Squires, D. 29
Stafford, S.L. 261
Starmer, C. 107–8, 428
Stavins, R. 47, 101, 105
Sterman, J.D. 107
Stevenson, R. 454
Stiglitz, J. 245
Stoneham, G. 106, 107
Stranlund, J.K. 257, 261–79, 326, 327
Strobel, M. 296
Sturm, B. 157–83, 237
subgame perfect equilibrium (SPE): unilateral emissions abatement 160, 161, 162, 163, 165, 166, 167, 169, 171–3, 174, 177, 180–1; see also symmetric subgame perfect equilibrium (SSPE)
Sugden, R. 428
Sundali, J.A. 281
Sundaram, R.K. 212
Sutinen, J.G. 29–46, 132–3
Svedsäter, H. 426
symmetric subgame perfect equilibrium (SSPE), natural resource appropriation 217, 218, 220, 222
Tanaka, T. 143–56, 234–5
tax compliance behavior 256, 257, 258
Taylor, L. 195, 204
technology choice, water and salinity rights trading 81, 83–4, 87, 88, 95–6
Thaler, R.H. 341, 399
theory testing 2
Thompson, M.A. 395–406, 471, 472–3
Thomson, D. 107, 108
Thomson, J. 213
Thöni, C. 281
threshold externalities, natural resource extraction 143–56
Tietenberg, T. 105, 293
Tinkler, S. 195
Tisdell, C.A. 366, 367
Tisdell, J. 100–30, 136–7
Tolley, G.S. 427
Toman, M. 49, 366
Topel, R. 246
Toxics Release Inventory (TRI) 243, 244
tradable permit markets 3, 131–9; see also emission permit trading; fishery management
tragedy of the commons xix, xx, 212
Tsoumas, A. 424–46
Tversky, A. 108, 341, 399, 400, 442
uncertainty 399, 474; hard see decision making, under hard uncertainty; normative theories 368–9; positive theories 369–70; and risk, distinction between 368; soft 368, 369; strategic, in coordination games 150, 154, 155; see also self-uncertainty
uncertainty tolerance 374, 375
uniform price auctions 106–7, 109, 114–15, 117–20, 122–4, 125, 136
unilateral emissions abatement 157–8; see also Hoel model
values: induced 349–50, 351–2, 362, 468; passive use 466
Van der Hamsvoort, C. 106
Van der Heijden, E. 158
Van Kooten, G.C. 366
Varian, H.R. 424, 425
Vatn, A. 108
Vaughn, G. 366
Vercelli, A. 368, 369
Vickrey, W. 107
Vickrey auctions 349–50, 351, 358–61, 362, 453, 455
Victor, D. 48
Victorian Bush Tender Program 78
voluntary contributions mechanism (VCM) experiments 194–211, 214–15, 236–7; across different subpopulations 195, 205–6, 207; all-human treatments 196, 197, 198–9, 200, 201, 203, 204, 205–6; conditional cooperation effects 195, 197–8; confusion effects 195–211, 237; with context-enhanced instructions 208–9; estimated logit equilibrium models 201, 202–3, 204; free riding 194, 208–9; internal and external returns 197, 199–200; marginal per capita return (MPCR) 204–5; pure altruism effects 195, 197, 199, 200, 201, 202, 203, 204; virtual-player method 196, 198–208, 202; warm-glow effects 195, 197, 199, 200, 201, 202, 203; see also multiple public goods, voluntary contributions with
voluntary discovery and disclosure of environmental violations 243, 261–79, 326; audits 261, 263, 266, 270, 271, 274, 276, 326; costs 263, 265, 266, 268–9, 272, 273, 275–6; environmental quality and increases in 262, 276; with risk neutral firms 263–7; and risk preferences 267, 278 n5; sanctions and 262–3, 266–7, 271, 274, 275–6; violation probabilities 263, 264, 265, 267, 270, 271, 272, 273, 274
Vossler, C.A. 194–211, 243–60, 327
Wade, R. 213, 236–7
Wald decision criterion 368
Walker, J. 143, 184, 212, 213–14, 215, 217, 226, 383
Ward, J.R. 100–30, 136–7
water and salinity rights, simultaneous markets for 77–99, 134–5; salt market (opportunity cost 84; transaction prices 86, 90–3, 94–5, 96; transaction quantities 92, 93, 95); and technology choice 81, 83–4, 87, 88, 95–6; water market (demand and supply) 83, 85; random effects estimation models 93–4; transaction prices 86, 87–9, 90, 93–5; transaction quantities 88, 89–90, 95
water quality see recharge credit trade scheme; water and salinity rights
Weber, M. 369, 399, 400
Weimann, J. 157–83, 237
Weissing, F. 227
Whitehead, J.C. 3, 4, 466–75
Wilcoxon signed-rank test 165, 168, 169, 174, 176
willingness to accept (WTA) 424, 473
willingness to pay (WTP) 296, 383, 384, 407, 408, 424, 429, 451, 453–4, 466, 469, 473; and genetically modified foods 345, 349–50, 351, 358–62, 471; see also air pollution impact reduction schemes
Willis, K.G. 424
wind hazard risk perception 395–406, 471, 472–3
Wind Science and Engineering Research Center (WISE) 396
Woerdman, E. 49
Woodward, R.T. 366, 380
Xu, Y.L. 398
Yamagishi, T. 215
Yellowstone Lake 384, 385–8
Yezer, A.M. 195
Young, M. 105
Zelmer, J. 308
Zhao, J. 452, 453