CONTENTS JEAN-MARC ROBIN: On the Dynamics of Unemployment and Wage Distributions . . . . . .
1327
JOSEPH P. KABOSKI AND ROBERT M. TOWNSEND: A Structural Evaluation of a Large-Scale
Quasi-Experimental Microfinance Initiative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . JAN DE LOECKER: Product Differentiation, Multiproduct Firms, and Estimating the Impact of Trade Liberalization on Productivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . JONATHAN EATON, SAMUEL KORTUM, AND FRANCIS KRAMARZ: An Anatomy of International Trade: Evidence From French Firms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ISABELLE PERRIGNE AND QUANG VUONG: Nonparametric Identification of a Contract Model With Adverse Selection and Moral Hazard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT: Nonparametric Instrumental Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ROBERT J. BARRO AND TAO JIN: On the Size Distribution of Macroeconomic Disasters . . JEAN-PIERRE BENOÎT AND JUAN DUBRA: Apparent Overconfidence . . . . . . . . . . . . . . . . . . . . . OLIVIER GOSSNER: Simple Bounds on the Value of a Reputation . . . . . . . . . . . . . . . . . . . . . . ANDREW MCLENNAN, PAULO K. MONTEIRO, AND RABEE TOURKY: Games With Discontinuous Payoffs: A Strengthening of Reny’s Existence Theorem . . . . . . . . . . . . . . . . . . . . .
1357 1407 1453 1499 1541 1567 1591 1627 1643
ANNOUNCEMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665 FORTHCOMING PAPERS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667 REPORT OF THE PRESIDENT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1669
VOL. 79, NO. 5 — September, 2011
An International Society for the Advancement of Economic Theory in its Relation to Statistics and Mathematics Founded December 29, 1930 Website: www.econometricsociety.org EDITOR DARON ACEMOGLU, Dept. of Economics, MIT, E52-380B, 50 Memorial Drive, Cambridge, MA 021421347, U.S.A.;
[email protected] MANAGING EDITOR GERI MATTSON, 2002 Holly Neck Road, Baltimore, MD 21221, U.S.A.; mattsonpublishingservices@ comcast.net CO-EDITORS MATTHEW O. JACKSON, Dept. of Economics, Stanford University, 579 Serra Mall, Stanford, CA 943056072, U.S.A;
[email protected] PHILIPPE JEHIEL, Dept. of Economics, Paris School of Economics, 48 Bd Jourdan, 75014 Paris, France; University College London, U.K.;
[email protected] WOLFGANG PESENDORFER, Dept. of Economics, Princeton University, Fisher Hall, Prospect Avenue, Princeton, NJ 08544-1021, U.S.A.;
[email protected] JEAN-MARC ROBIN, Dept. of Economics, Sciences Po, 28 rue des Saints Pères, 75007 Paris, France and University College London, U.K.;
[email protected] JAMES H. STOCK, Dept. of Economics, Harvard University, Littauer M-26, 1830 Cambridge Street, Cambridge, MA 02138, U.S.A.;
[email protected] ASSOCIATE EDITORS YACINE AÏT-SAHALIA, Princeton University JOSEPH G. ALTONJI, Yale University JUSHAN BAI, Columbia University MARCO BATTAGLINI, Princeton University PIERPAOLO BATTIGALLI, Università Bocconi DIRK BERGEMANN, Yale University YEON-KOO CHE, Columbia University XIAOHONG CHEN, Yale University VICTOR CHERNOZHUKOV, Massachusetts Institute of Technology DARRELL DUFFIE, Stanford University JEFFREY ELY, Northwestern University JIANQING FAN, Princeton University MIKHAIL GOLOSOV, Princeton University PATRIK GUGGENBERGER, University of California, San Diego FARUK GUL, Princeton University PHILIP A. HAILE, Yale University YORAM HALEVY, University of British Columbia JOHANNES HÖRNER, Yale University MICHAEL JANSSON, University of California, Berkeley FELIX KUBLER, University of Zurich JONATHAN LEVIN, Stanford University OLIVER LINTON, Cambridge University BART LIPMAN, Boston University THIERRY MAGNAC, Toulouse School of Economics (GREMAQ and IDEI)
THOMAS MARIOTTI, Toulouse School of Economics (GREMAQ and IDEI) DAVID MARTIMORT, Paris School of Economics– EHESS ROSA L. MATZKIN, University of California, Los Angeles ANDREW MCLENNAN, University of Queensland SUJOY MUKERJI, University of Oxford ULRICH MÜLLER, Princeton University LEE OHANIAN, University of California, Los Angeles WOJCIECH OLSZEWSKI, Northwestern University MICHAEL OSTROVSKY, Stanford University JORIS PINKSE, Pennsylvania State University BENJAMIN POLAK, Yale University SUSANNE M. SCHENNACH, University of Chicago FRANK SCHORFHEIDE, University of Pennsylvania ANDREW SCHOTTER, New York University NEIL SHEPHARD, University of Oxford MARCIANO SINISCALCHI, Northwestern University JEROEN M. SWINKELS, Northwestern University ELIE TAMER, Northwestern University EDWARD J. VYTLACIL, Yale University IVÁN WERNING, Massachusetts Institute of Technology ASHER WOLINSKY, Northwestern University
EDITORIAL COORDINATOR: MARY BETH BELLANDO-ZANIBONI, Dept. of Economics, New York University, 19 West 4th Street, New York, NY 10012, U.S.A.;
[email protected] Information on MANUSCRIPT SUBMISSION is provided in the last two pages. Information on MEMBERSHIP, SUBSCRIPTIONS, AND CLAIMS is provided in the inside back cover.
JOHN H. MOORE President of the Econometric Society, 2010 First Vice-President of the Econometric Society, 2009 Second Vice-President of the Econometric Society, 2008
SUBMISSION OF MANUSCRIPTS TO ECONOMETRICA 1. Members of the Econometric Society may submit papers to Econometrica electronically in pdf format according to the guidelines at the Society’s website: http://www.econometricsociety.org/submissions.asp Only electronic submissions will be accepted. In exceptional cases for those who are unable to submit electronic files in pdf format, one copy of a paper prepared according to the guidelines at the website above can be submitted, with a cover letter, by mail addressed to Professor Stephen Morris, Dept. of Economics, Princeton University, Fisher Hall, Prospect Avenue, Princeton, NJ 085441021, USA. 2. There is no charge for submission to Econometrica, but only members of the Econometric Society may submit papers for consideration. In the case of coauthored manuscripts, at least one author must be a member of the Econometric Society for the current calendar year. Note that Econometrica rejects a substantial number of submissions without consulting outside referees. 3. It is a condition of publication in Econometrica that copyright of any published article be transferred to the Econometric Society. Submission of a paper will be taken to imply that the author agrees that copyright of the material will be transferred to the Econometric Society if and when the article is accepted for publication, and that the contents of the paper represent original and unpublished work that has not been submitted for publication elsewhere. If the author has submitted related work elsewhere, or if he does so during the term in which Econometrica is considering the manuscript, then it is the author’s responsibility to provide Econometrica with details. There is no page fee and no payment made to the authors. 4. Econometrica has the policy that all results (empirical, experimental and computational) must be replicable. 5. Current information on turnaround times is available is published in the Editor’s Annual Report in the January issue of the journal. They are re-produced on the journal’s website at http:// www.econometricsociety.org/editorsreports.asp. 6. Papers should be accompanied by an abstract of no more than 150 words that is full enough to convey the main results of the paper. 7. Additional information on submitting papers is available on the journal’s website at http:// www.econometricsociety.org/submissions.asp. Typeset at VTEX, Akademijos Str. 4, 08412 Vilnius, Lithuania. Printed at The Sheridan Press, 450 Fame Avenue, Hanover, PA 17331, USA. Copyright ©2011 by The Econometric Society (ISSN 0012-9682). Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation, including the name of the author. Copyrights for components of this work owned by others than the Econometric Society must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works, requires prior specific permission and/or a fee. Posting of an article on the author’s own website is allowed subject to the inclusion of a copyright statement; the text of this statement can be downloaded from the copyright page on the website www.econometricsociety.org/permis.asp. 
Any other permission requests or questions should be addressed to Claire Sashi, General Manager, The Econometric Society, Dept. of Economics, New York University, 19 West 4th Street, New York, NY 10012, USA. E-mail:
[email protected]. Econometrica (ISSN 0012-9682) is published bi-monthly by the Econometric Society, Department of Economics, New York University, 19 West 4th Street, New York, NY 10012. Mailing agent: Sheridan Press, 450 Fame Avenue, Hanover, PA 17331. Periodicals postage paid at New York, NY and additional mailing offices. U.S. POSTMASTER: Send all address changes to Econometrica, Journals Department, John Wiley & Sons Inc., 350 Main Street, Malden, MA 02148, USA.
An International Society for the Advancement of Economic Theory in its Relation to Statistics and Mathematics Founded December 29, 1930 Website: www.econometricsociety.org Membership Joining the Econometric Society, and paying by credit card the corresponding membership rate, can be done online at www.econometricsociety.org. Memberships are accepted on a calendar year basis, but the Society welcomes new members at any time of the year, and in the case of print subscriptions will promptly send all issues published earlier in the same calendar year. Membership Benefits • Possibility to submit papers to Econometrica, Quantitative Economics, and Theoretical Economics • Possibility to submit papers to Econometric Society Regional Meetings and World Congresses • Full text online access to all published issues of Econometrica (Quantitative Economics and Theoretical Economics are open access) • Full text online access to papers forthcoming in Econometrica (Quantitative Economics and Theoretical Economics are open access) • Free online access to Econometric Society Monographs, including the volumes of World Congress invited lectures • Possibility to apply for travel grants for Econometric Society World Congresses • 40% discount on all Econometric Society Monographs • 20% discount on all John Wiley & Sons publications • For print subscribers, hard copies of Econometrica, Quantitative Economics, and Theoretical Economics for the corresponding calendar year Membership Rates Membership rates depend on the type of member (ordinary or student), the class of subscription (print and online or online only) and the country of residence (high income or middle and low income). The rates for 2011 are the following:
Ordinary Members Print and Online Online only Print and Online Online only
1 year (2011) 1 year (2011) 3 years (2011–2013) 3 years (2011–2013)
Student Members Print and Online Online only
1 year (2011) 1 year (2011)
High Income
Other Countries
$100 / €80 / £65 $55 / €45 / £35 $240 / €192 / £156 $132 / €108 / £84
$60 / €48 $15 / €12 $144 / €115 $36 / €30
$60 / €48 / £40 $15 / €12 / £10
$60 / €48 $15 / €12
Euro rates are for members in Euro area countries only. Sterling rates are for members in the UK only. All other members pay the US dollar rate. Countries classified as high income by the World Bank are: Andorra, Aruba, Australia, Austria, The Bahamas, Bahrain, Barbados, Belgium, Bermuda, Brunei Darussalam, Canada, Cayman Islands, Channel Islands, Croatia, Cyprus, Czech Republic, Denmark, Equatorial Guinea, Estonia, Faeroe Islands, Finland, France, French Polynesia, Germany, Gibraltar, Greece, Greenland, Guam, Hong Kong (China), Hungary, Iceland, Ireland, Isle of Man, Israel, Italy, Japan, Rep. of Korea, Kuwait, Latvia, Liechtenstein, Luxembourg, Macao (China), Malta, Monaco, Netherlands, Netherlands Antilles, New Caledonia, New Zealand, Northern Mariana Islands, Norway, Oman, Poland, Portugal, Puerto Rico, Qatar, San Marino, Saudi Arabia, Singapore, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Taiwan (China), Trinidad and Tobago, Turks and Caicos Islands, United Arab Emirates, United Kingdom, United States, Virgin Islands (US). Institutional Subscriptions Information on Econometrica subscription rates for libraries and other institutions is available at www.econometricsociety.org. Subscription rates depend on the class of subscription (print and online or online only) and the country classification (high income, middle income, or low income). Back Issues and Claims For back issues and claims contact Wiley Blackwell at
[email protected].
An International Society for the Advancement of Economic Theory in its Relation to Statistics and Mathematics Founded December 29, 1930 Website: www.econometricsociety.org Administrative Office: Department of Economics, New York University, 19 West 4th Street, New York, NY 10012, USA; Tel. 212-9983820; Fax 212-9954487 General Manager: Claire Sashi (
[email protected]) 2011 OFFICERS BENGT HOLMSTRÖM, Massachusetts Institute of Technology, PRESIDENT JEAN-CHARLES ROCHET, University of Zurich, FIRST VICE-PRESIDENT JAMES HECKMAN, University of Chicago, SECOND VICE-PRESIDENT JOHN MOORE, University of Edinburgh and London School of Economics, PAST PRESIDENT RAFAEL REPULLO, CEMFI, EXECUTIVE VICE-PRESIDENT
2011 COUNCIL DARON ACEMOGLU, Massachusetts Institute of Technology (*)MANUEL ARELLANO, CEMFI ORAZIO ATTANASIO, University College London MARTIN BROWNING, University of Oxford DAVID CARD, University of California, Berkeley JACQUES CRÉMER, Toulouse School of Economics MATHIAS DEWATRIPONT, Free University of Brussels DARRELL DUFFIE, Stanford University GLENN ELLISON, Massachusetts Institute of Technology HIDEHIKO ICHIMURA, University of Tokyo (*)MATTHEW O. JACKSON, Stanford University MICHAEL P. KEANE, University of Technology Sydney LAWRENCE J. LAU, Chinese University of Hong Kong CHARLES MANSKI, Northwestern University CESAR MARTINELLI, ITAM
ANDREU MAS-COLELL, Universitat Pompeu Fabra and Barcelona GSE AKIHIKO MATSUI, University of Tokyo HITOSHI MATSUSHIMA, University of Tokyo ROSA MATZKIN, University of California, Los Angeles ANDREW MCLENNAN, University of Queensland COSTAS MEGHIR, University College London and Yale University MARGARET MEYER, University of Oxford STEPHEN MORRIS, Princeton University JUAN PABLO NICOLINI, Universidad Torcuato di Tella (*)ROBERT PORTER, Northwestern University JEAN-MARC ROBIN, Sciences Po and University College London LARRY SAMUELSON, Yale University ARUNAVA SEN, Indian Statistical Institute JÖRGEN W. WEIBULL, Stockholm School of Economics
The Executive Committee consists of the Officers, the Editors of Econometrica (Daron Acemoglu), Quantitative Economics (Orazio Attanasio), and Theoretical Economics (Martin J. Osborne), and the starred (*) members of the Council.
REGIONAL STANDING COMMITTEES Australasia: Andrew McLennan, University of Queensland, CHAIR; Maxwell L. King, Monash University, SECRETARY. Europe and Other Areas: Jean-Charles Rochet, University of Zurich, CHAIR; Helmut Bester, Free University Berlin, SECRETARY; Enrique Sentana, CEMFI, TREASURER. Far East: Hidehiko Ichimura, University of Tokyo, CHAIR. Latin America: Juan Pablo Nicolini, Universidad Torcuato di Tella, CHAIR; Juan Dubra, University of Montevideo, SECRETARY. North America: Bengt Holmström, Massachusetts Institute of Technology, CHAIR; Claire Sashi, New York University, SECRETARY. South and Southeast Asia: Arunava Sen, Indian Statistical Institute, CHAIR.
Econometrica, Vol. 79, No. 5 (September, 2011), 1327–1355
ON THE DYNAMICS OF UNEMPLOYMENT AND WAGE DISTRIBUTIONS BY JEAN-MARC ROBIN1 Postel-Vinay and Robin’s (2002) sequential auction model is extended to allow for aggregate productivity shocks. Workers exhibit permanent differences in ability while firms are identical. Negative aggregate productivity shocks induce job destruction by driving the surplus of matches with low ability workers to negative values. Endogenous job destruction coupled with worker heterogeneity thus provides a mechanism for amplifying productivity shocks that offers an original solution to the unemployment volatility puzzle (Shimer (2005)). Moreover, positive or negative shocks may lead employers and employees to renegotiate low wages up and high wages down when agents’ individual surpluses become negative. The model delivers rich business cycle dynamics of wage distributions and explains why both low wages and high wages are more procyclical than wages in the middle of the distribution. KEYWORDS: Unemployment dynamics, wage distribution, inequality, search-matching.
1. INTRODUCTION THE MAIN MOTIVATION for this paper is to explore the role of heterogeneous worker ability in a search-matching model of unemployment and wages with aggregate productivity shocks. There has recently been considerable interest for noncompetitive macrodynamic models of the labor market with heterogeneous agents. The monumental empirical literature on wage equations has long demonstrated the importance of worker and firm heterogeneity in wage dispersion; and the possibility of separately identifying a distribution of worker characteristics and a distribution of firm characteristics is in itself an indication that the process of matching workers to jobs is not competitive (see Mortensen (2005)). In the macroeconomic literature, the role of search frictions as a cause for unemployment is now well understood (e.g., Mortensen and Pissarides (1994)). It was only a matter of time before the macroeconomic and the microeconomic search-matching literatures converged. This question admittedly calls for different answers, depending on the way search frictions are modelled. For example, Moscarini and Postel-Vinay (2008, 2009) studyied the non-equilibrium dynamics of the Burdett–Mortensen 1 This article is based on my Walras–Bowley Lecture, presented at the North American Summer Meetings of the Econometric Society, Boston University, June 2009. Helpful comments were received from participants in seminars at IFS, Stockholm University, Sciences Po, CRESTINSEE, Cambridge University, the Paris School of Economics, and conferences at the Milton Friedman Institute (University of Chicago), Jerusalem (Bank of Israel), and Venice (CSEifo). I am particularly grateful to Rob Shimer, Iouri Manovskii, Jean-Olivier Hairault, Thomas Piketty, Pierre Cahuc, Francis Kramarz, Guy Laroque, Boyan Jovanovic, Tom Sargent, and Fabien PostelVinay whose questions and comments have definitely influenced this paper.
© 2011 The Econometric Society
DOI: 10.3982/ECTA9070
1328
JEAN-MARC ROBIN
wage posting model. Workers are identical but firms are different. This model yields very interesting insights on the business-cycle dynamics of firm size distributions. Menzio and Shi (2009, 2010a) also considered a wage posting model, but they assumed directed search in lieu of random search. The model is elegant and easy to solve in and out of the steady state because the equilibrium is block-recursive and agents only need to forecast the exogenous aggregate shocks. However, with directed search, the distance that these frictions introduce with a purely competitive equilibrium is rather minimal. For example, with ex ante heterogeneous workers, the labor market is perfectly segmented: any active submarket is visited by only one type of worker (Menzio and Shi (2010b)) so that search frictions do not seem to generate mismatch in this setup. In this paper, as in Moscarini and Postel-Vinay, I also assume random search, but assume wage bargaining instead of wage posting. The recent survey by Hall and Krueger (2010) showed mixed evidence about whether workers bargain over pay before accepting a job (about one hire in three) or whether they had precise information about pay before their first meeting with an employer (another third). Now bargaining offers a considerable gain in simplicity with respect to wage posting as far as modelling is concerned. This is because, with wage posting, complicated strategic interactions have to be accounted for. Hall and Krueger also noted that about 40% of workers could have remained on their earlier jobs at the time that they accept their new job. This indicates that it is important for any model of employment and turnover to allow for on-thejob search. To introduce on-the-job search in a bargaining model, I use Postel-Vinay and Robin’s (2002) sequential auction model in a similar way to Lise, Meghir, and Robin (2009) except than I allow for aggregate shocks to productivity instead of firm-specific shocks. Wage contracts are long term contracts that can be renegotiated by mutual agreement only. Employees search on the job and employers counter outside offers. It is assumed that firms have full monopsony power vis-à-vis unemployed workers and hire them at a wage that is only marginally greater than their reservation wage. A worker paid such a low wage has a strong incentive to look for an alternative employer and trigger Bertrand competition. Bertrand competition between identical firms raises the wage to the maximal wage firms are willing to pay, that is, the wage that gives the employer a negligible share of the surplus. In this environment, any steady-state equilibrium is such that each worker type is associated with only two wages: a starting wage and a promotion wage, which are the lower and the upper bounds of the bargaining set—either the firm or the worker gets all the surplus. Aggregate productivity shocks multiply the number of potential couples of wages, many of which have a positive probability of being observed in any state of nature because of the stickiness induced by long term contracting. The model offers a theoretical framework to study the link between the dynamics of average wages (the usual concern of representative–agent models)
UNEMPLOYMENT AND WAGE DISTRIBUTIONS
1329
and the dynamics of individual wages. In particular, it explains why different wage deciles can exhibit different volatilities and different responses to productivity shocks. A second important application of the model concerns the link between unemployment and business cycle. In a very influential paper, Shimer (2005) argued that the search-matching model of Mortensen and Pissarides (1994) cannot reproduce unemployment dynamics well. The reason is that with wage renegotiation in every period, any productivity shock is immediately absorbed into the wage with little effect on unemployment. A long series of papers have tried to solve the puzzle, essentially by making wages sticky (Hall (2005), Hall and Milgrom (2008), Gertler and Trigari (2009), Pissarides (2009)) or by reducing the match surplus to a very small value (Hagedorn and Manovskii (2008)).2 Interestingly, although endogenous job destruction is at the heart of the Mortensen–Pissarides model, this literature has neglected endogenous job destruction as a possible amplifying mechanism when coupled with worker or match heterogeneity. Yet, it is easy to understand that if frictional unemployment due to exogenous job destruction shocks is about 4% and if a fraction of 6% of low-skilled workers are at risk of negative surplus when aggregate productivity goes below a certain threshold, then one has a simple way to explain how unemployment can vary between 4% and 10% with a volatility that is 10 times greater than the volatility of productivity.3 The paper is organized as follows. A dynamic sequential-auction model with heterogeneous workers and identical firms is first developed. The DSGE model is recursive with only one aggregate state variable, labor productivity, so that it can be easily simulated in and out of the steady-state equilibrium. Then the model’s parameters are estimated by simulated generalized method of moments (GMM), and the results are interpreted. 2. THE MODEL 2.1. Setup 2.1.1. Aggregate Shocks Time is discrete and indexed by t ∈ N. The global state of the economy is an ergodic Markov chain yt ∈ {y1 < · · · < yN } with transition matrix Π = (πij ) 2 Mortensen and Nagypál (2007) reviewed this literature and considered alternative mechanisms. Pissarides (2009) suggested a novel approach to solve the unemployment volatility puzzle by assuming that productivity shocks change entry wages in new jobs differently from wages in ongoing jobs. My model follows Pissarides’s suggestion, as entry changes are also different from later wages. Gertler and Trigari (2009) generated wage stickiness using a Calvo-type mechanism such that only a fraction of contracts are renegotiated in every period. Both models generate cross-sectional wage dispersion, but they do not address the issue of wage inequality dynamics. 3 The specification calibrated in this paper will have a continuum of worker types and a smoother response of unemployment to aggregate productivity shocks.
1330
JEAN-MARC ROBIN
(with a slight abuse of notation, yt denotes the stochastic process and yi denotes an element of the support). Aggregate shocks accrue at the beginning of each period. 2.1.2. Workers
M There are M types of workers and m workers of each type (with m=1 m = 1). Each type m = 1 M is characterized by a time-invariant ability xm , with xm < xm+1 . Workers are paired with identical firms to form productive units. The per-period output of a worker of ability xm when aggregate productivity is yi is denoted as yi (m). In the application we shall set yi (m) = xm yi . This seems the most neutral specification as far as the cyclicality of relative match productivity dispersion is concerned, that is, yi (m)/yi (m ) only depends on worker types m and m , not on the economy’s state i. We denote by St (m) the surplus of a match including a worker of type xm , that is, the present value of the match minus the value of unemployment and minus the value of a vacancy (assumed to be nil). Only matches with positive surplus St (m) > 0 are viable. 2.1.3. Turnover Matches form and break at the beginning of each period, after the aggregate state has been reset. Let ut (m) denote the proportion of unemployed in the population of workers of ability xm at the end of period t − 1, and M let ut = m=1 ut (m)m define the aggregate unemployment rate. A the beginning of period t, a fraction 1{St (m) ≤ 0}[1 − ut (m)]m is endogenously laid off—or temporarily decides to become inactive—and another fraction δ1{St (m) > 0}[1 − ut (m)]m is exogenously destroyed. Firms cannot direct their search to specific worker types. For simplicity and to discipline the model as much as possible, I also assume that workers meet employers at exogenous rates. It is yet straightforward to work out an extension with a general matching function. Thus, a fraction λ0 1{St (m) > 0}ut (m)m of employable unemployed workers meet an employer and a fraction λ1 (1 − δ)1{St (m) > 0}[1 − ut (m)]m of employees meet an alternative employer, where λ0 and λ1 are the respective search intensities of unemployed and employed workers.4 2.1.4. Wages Employers have full monopsony power with respect to workers. Hence, unemployed workers are offered their reservation wage, the employer taking all the surplus. Rent sharing accrues via on-the-job search, triggering competition between employers for workers. Because firms are identical and there is no 4 Unproductive unemployed workers search because they may turn productive in the next period.
UNEMPLOYMENT AND WAGE DISTRIBUTIONS
1331
mobility cost, Bertrand competition transfers the whole surplus to the worker, who then gets paid the firm’s reservation value. This wage dynamics is similar to the optimal wage–tenure contracts studied by Stevens (2004). She showed that an infinity of wage–tenure contracts are optimal. In particular, employers could pay the workers their productivity and charge them an entry fee. All firms being identical, a worker is indifferent between staying with the incumbent employer or moving to the poacher. I assume that the tie is broken in favor of the poacher with probability τ.5 2.1.5. Turnover Rates The following turnover rates can then be computed. • Exit rate from unemployment: 1{St (m) > 0}ut (m)m ft = λ0
m
ut
• Quit rate (job-to-job mobility): 1{St (m) > 0}[1 − ut (m)]m qt = τλ1 (1 − δ)
m
1 − ut
• Job destruction rate: st = δ + (1 − δ)
1{St (m) ≤ 0}(1 − ut (m))m
m
1 − ut
Notice that the quit rate and the job separation rate are related by the deterministic relationship qt = τλ1 (1 − st ) 2.2. Unemployment Dynamics 2.2.1. The Value of Unemployment Let Ui (m) denote the present value of remaining unemployed for the rest of period t for a worker of type m if the economy is in state i. We do not index this value by any state variable other than the state of the economy for reasons that will immediately become clear. An unemployed worker receives a flow payment zi (m) for the period. At the beginning of the next period, the state of 5 The randomness in the eventual mobility may explain why employers engage in Bertrand competition in the first place if part of the outcome of the production process is nontransferable.
1332
JEAN-MARC ROBIN
the economy changes to yj with probability πij and the worker receives a job offer with some probability. However, because the employer has full monopsony power and takes the whole surplus, the present value of a new job to the worker is only marginally better than the value of unemployment. Consequently, the value of unemployment solves the linear Bellman equation Ui (m) = zi (m) +
1 πij Uj (m) 1+r j
2.2.2. The Match Surplus After a productivity shock from i to j, all matches yielding negative surplus are destroyed. Otherwise, if the worker is poached, Bertrand competition transfers the whole surplus to the worker whether she moves or not. The surplus of a match with a worker of type m when the economy is in state i thus solves the Bellman equation Si (m) = yi (m) − zi (m) +
1−δ πij max{Sj (m) 0} 1+r j
This almost linear system can be solved numerically by value-function iterations. As for the unemployment value, the match surplus only depends on the state of the economy, and in particular not on calendar time. Hence the match surplus process for workers of type m, St (m), is also a Markov chain with support {Si (m) i = 1 N} and transition matrix Π . 2.2.3. The Unemployment Process The joint process of ut (m) and St (m) or ut (m) and 1{St (m) > 0} is Markovian, the law of motion of individual-specific unemployment rates being ut+1 (m) = 1 − (1 − δ)(1 − ut (m)) + λ0 ut (m) 1{St (m) > 0} 1 if St (m) ≤ 0, = ut (m) + δ(1 − ut (m)) − λ0 ut (m) if St (m) > 0. Unemployment dynamics is thus independent of how the surplus is split between employers and employees. Thus, aggregate shocks are transmitted to unemployment independently of how wages are determined. 2.2.4. Steady State In state i’s steady-state equilibrium, the unemployment rate in group m is ui (m) =
δ 1{Si (m) > 0} + 1{Si (m) ≤ 0} δ + λ0
UNEMPLOYMENT AND WAGE DISTRIBUTIONS
1333
The aggregate unemployment rate follows as ui =
M
ui (m)m =
m=1
δ λ0 Li + 1 − Li = 1 − Li δ + λ0 δ + λ0
M
where Li = m=1 m 1{Si (m) > 0} is the number of employable workers. The state-contingent equilibrium unemployment values are bounded from δ . The aggregate unemployment rate is greater than this lower below by δ+λ 0 bound when a low aggregate productivity value yi induces endogenous job destruction or nonparticipation (Li < 1 in steady state). 2.3. Wages 2.3.1. The Worker Surplus Let Wi (w m) denote the present value of a wage w in state i to a worker of type m. The surplus flow for the current period is w − zi (m). In the following period, the worker is laid off with probability 1{Sj (m) ≤ 0} + δ1{Sj (m) > 0} and suffers zero surplus. Otherwise, with probability λ1 , the worker receives an outside offer and enjoys the whole surplus. In the absence of poaching (with probability 1 − λ1 ), wage contracts may still be renegotiated if a productivity shock moves the current wage outside the bargaining set. We follow MacLeod and Malcomson (1993) and the recent application by Postel-Vinay and Turon (2010), and assume that the new wage contract is the closest point in the bargaining set to the old, now infeasible wage. That is, if Wj (w m) − Uj (m) < 0, the worker has a credible threat to quit to unemployment and her employer accepts renegotiation of the wage up to the point where the worker obtains zero surplus. If Wj (w m) − Uj (m) > Sj (m), the employer has a credible threat to fire the worker unless she accepts renegotiation downward to the point where she gets the whole surplus and no more. The worker surplus, Wi (w m) − Ui (m), therefore satisfies the Bellman equation Wi (w m) − Ui (m) = w − zi (m) 1−δ + πij 1{Sj (m) > 0} 1+r j × λ1 Sj (m) + (1 − λ1 )(Wj∗ (w m) − Uj (m)) where Wj∗ (w m) − Uj (m) = min max{Wj (w m) − Uj (m) 0} Sj (m) is the renegotiated worker surplus.
1334
JEAN-MARC ROBIN
Note that here again there is only one aggregate state variable: aggregate productivity. The assumption that the rate of offer arrival is exogenous is important to justify this point. With a matching function, the unemployment rate should be included in the state space. However, a very good approximation would be obtained by assuming that the unemployment rate and market tightness jump to their steady-state value after a productivity shock. 2.3.2. The Set of Equilibrium Wages For all aggregate states yi and all worker types xm , there are only two possible wages. Either the worker was offered a job while unemployed and she can only claim a wage w i (m) such that Wi (w i (m) m) = Ui (m) (her reservation wage) or she was already employed and she benefits from a wage rise to wi (m) such that Wi (wi (m) m) = Ui (m) + Si (m) (the employer’s reservation value). I now explain how these wages can be solved for. For all k, let us denote the worker surpluses when the economy is in state k and wages are either w i (m) or wi (m) as W ki (m) = Wk (w i (m) m) − Uk (m) W ki (m) = Wk (wi (m) m) − Uk (m) and let also W ∗ki (m) = min max{W ki (m) 0} Si (m) W ∗ki (m) = min max{W ki (m) 0} Si (m) Making use of the definitions of wages, we find that W ii (m) = 0 and W ii (m) = Si (m). These worker surpluses therefore satisfy the following modified Bellman equations: W ki (m) = W ki (m) − W ii (m) = zi (m) − zk (m) +
1−δ (πkj − πij ) 1+r j
× 1{Sj (m) > 0}[λ1 Sj (m) + (1 − λ1 )W ∗ji (m)] and W ki (m) − Si (m) = W ki (m) − W ii (m) = zi (m) − zk (m) +
1−δ (πkj − πij ) 1+r j
× 1{Sj (m) > 0}[λ1 Sj (m) + (1 − λ1 )W ∗ji (m)]
UNEMPLOYMENT AND WAGE DISTRIBUTIONS
1335
Value-function iteration delivers a simple numerical solution algorithm, using for a starting value the solution of the linear system that is obtained by removing the “stars” from the continuation values. Having determined W ki (m) and W ki (m) for all k i, and m, wages then follow as w i (m) = zi (m) −
1−δ πij 1{Sj (m) > 0} 1+r j
× [λ1 Sj (m) + (1 − λ1 )W ∗ji (m)] and wi (m) = Si (m) + zi (m) −
1−δ πij 1{Sj (m) > 0} 1+r j
× [λ1 Sj (m) + (1 − λ1 )W ∗ji (m)] 2.3.3. The Dynamics of Wage Distributions The support of the wage distribution is the union of all sets Ωm = {w i (m) wi (m) ∀i}. Let gt (w m) denote the measure of workers of ability m employed at wage w ∈ Ω at the end of period t − 1. Conditional on yt = yi (maybe equal to yt−1 ) at the beginning of period t, no worker can be employed if Si (m) ≤ 0. The inflow to the stock of workers paid the minimum wage w i (m) is otherwise made of all unemployed workers drawing an offer (λ0 ut (m)m ) plus all employees paid a wage w such that Wi (w m) − Ui (m) < 0, who were not laid off but still were not lucky enough to get poached. The outflow is made of those workers previously paid w i (m) who are either laid off or poached. That is, gt+1 (w i (m) m) = 1{Si (m) > 0} × λ0 ut (m)m + (1 − δ)(1 − λ1 )
× gt (w i (m) m) + 1{Wi (w m) − Ui (m) < 0}gt (w m) w∈Ωm
The inflow to the stock of workers paid wi (m) has two components. First, any employee paid less than wi (m) (in present value terms) who is contacted by another employer benefits from a pay rise to wi (m). Second, any employee paid more than wi (m) (in present value) has to accept a pay cut to wi (m) to
1336
JEAN-MARC ROBIN
avoid layoff. The only reason to flow out is layoff. Hence, gt+1 (wi (m) m) = 1{Si (m) > 0}(1 − δ) × λ1 (1 − ut (m))m + (1 − λ1 )
× gt (wi (m) m) +
1{Wi (w m) − Ui (m) > Si (m)}gt (w m)
w∈Ωm
For all w ∈ Ωm \ {w i (m) wi (m)}, only those workers paid w greater than w i (m) and less than wi (m) (in value terms), who are not laid off or poached, keep their wage: gt+1 (w m) = 1{Si (m) > 0}(1 − δ)(1 − λ1 ) × 1{0 ≤ Wi (w m) − Ui (m) ≤ Si (m)}gt (w m) Note that summing gt+1 (w m) over all wages and dividing by m yields the law of motion for the unemployment rates ut (m): 1 − ut+1 (m) = 1{Si (m) > 0} λ0 ut (m) + (1 − δ)(1 − ut (m)) The joint process of distributions and surpluses is Markovian with a finite state space. In principle one can certainly calculate its ergodic distribution, but this is a rather cumbersome calculation. In practice, I shall use simulations to approximate the theoretical moments to match with the data moments used for the estimation of structural parameters (Robin (2011)). 3. DATA 3.1. Monthly Unemployment I use the Bureau of Labor Statistics’s (BLS) series of seasonally adjusted monthly employment and unemployment levels of all employees, 16 years and over, constructed from the Current Population Survey (CPS). The BLS also offers to decompose unemployment according to unemployment duration (less than 5 weeks, between 5 and 14, more than 15, and more than 27 weeks).6 6 Series LNS12000000, LNS13000000, LNS13008396, LNS008756, LNS0085145, and LNS008636.
UNEMPLOYMENT AND WAGE DISTRIBUTIONS
1337
Following usual pratice, I then construct the series of exit rates from unemployment and job destruction rates from unemployment flows as f = 1 − (FU − FU5)/U s = FU5/E where U and U5 denote total unemployment and unemployment with less than 5 weeks duration, E is total employment, and F is the forward operator. The Job Openings and Labor Turnover Survey (JOLTS) offers an alternative and independent way to calculate these rates. Let H, S, and Q, respectively, denote the BLS series of hires, total separations, and quits.7 Let then fJOLTS = (H − Q)/U sJOLTS = (S − Q)/E qJOLTS = Q/E Figure 1 shows that both couples of series (f fJOLTS ) and (s sJOLTS ) are sufficiently close to cast doubts on the necessity to adjust f and s for the rupture in the continuity of the CPS in the 1990s. With exponentially distributed unemployment duration and homogeneous workers, the distribution of elapsed duration in the stock is also exponential with the same distribution as in the population. One can thus construct alternative series of exit rates as f 5 = −4 × log(U5p/U)/5 f 15 = −4 × log(U15p/U)/15 f 27 = −4 × log(U27p/U)/27 where U5p, U15p, and U27p denote the number of unemployed workers with duration greater than 5, 15, and 27 weeks, and where the multiple 4 is used to adjust for durations being measured in weeks instead of months. Figure 2 displays these different series. The fact that f 5 > f 15 > f 27 clearly suggests heterogeneous rates among unemployed workers. 3.2. Quarterly Series I use the BLS quarterly series of seasonally adjusted real output per person in the non-farm business sector to construct the aggregate productivity process. For wages, I use hourly compensation divided by the implicit output deflator, 7
Series JTS00000000HIL, JTS00000000TSL, and JTS00000000QUL.
1338
(b) Job destruction rate FIGURE 1.—CPS versus JOLTS.
JEAN-MARC ROBIN
(a) Unemployment exit rate
UNEMPLOYMENT AND WAGE DISTRIBUTIONS
1339
FIGURE 2.—Unemployment exit rate from stocks.
readjusted per person by multiplying by hours and dividing by employment.8 These data cover the period 1947q1–2009q1. Quarterly series of job finding rates and job destruction rates can then be constructed by iterating the law of motion of unemployment at monthly frequency. For example, ut+2 = st+1 (1 − ut+1 ) + (1 − ft+1 )ut+1 = st+1 + [1 − st+1 − ft+1 ]ut+1 = st+1 + (1 − st+1 − ft+1 )[st + (1 − st − ft )ut ] = st+1 + (1 − st+1 − ft+1 )st + (1 − st+1 − ft+1 )(1 − st + ft )ut Hence, we calculate s = F 2 s + (1 − F 2 s − F 2 f ) ∗ (Fs + (1 − Fs − Ff ) ∗ s) f = 1 − s − (1 − s − f ) ∗ (1 − Fs − Ff ) ∗ (1 − F 2 s − F 2 f ) where F 2 = FF . Finally, quarterly series of inflow and outflow rates, as well as quarterly series of employment and unemployment, are obtained by keeping only the first month of each quarter (taking means would reduce the volatility artificially). 8
BLS series PS85006163, PRS85006103, PRS85006113 PRS85006033 and PRS85006013.
1340
JEAN-MARC ROBIN TABLE I QUARTERLY MOMENTSa
Productivity
Mean Std Skewness Kurtosis Autocorrelation Corr. with prod. Corr. with unempl. Reg. on prod. Reg. on unempl.
0.0226 −0.19 3.06 0.91 1 −0.51 1
Unempl. Rate
Unempl. Exit Rate
Job Destruction Rate
0.058 0.214 0.12 2.52 0.95 −0.51 1 −4.79 1
0.785 0.0784 −0.96 5.05 0.94 0.43 −0.91 1.49 −0.33
0.0470 0.163 −0.06 2.69 0.91 −0.54 0.97 −3.88 0.74
Wage
0.0202 0.20 3.06 0.96 0.74 −0.46 0.659
a Balanced sample: 1951q1–2010q3. The row labelled “mean” refers to the mean of levels, while the other rows refer to the log of the variable in each column. Row series are HP-filtered with a smoothing parameter of 25e05.
The raw data are successively log-transformed, Hodrick–Prescott (HP)filtered, and exponentiated. I use a smoothing parameter of 25e05, which is the value that maximizes the correlation between detrended productivity and detrended unemployment.9 We think of quarterly transition rates s and f as data features delivering moments for the model to match. Raw data provide monthly flows that can be time-aggregated at quarterly frequency in a way that preserves the standard law of motion of unemployment rates. The quarterly frequency is dictated by the availability of productivity data and this is why monthly employment series first must be time-aggregated before calculating levels, volatilities, and elasticities. Table I shows the main moments of the quarterly series. The main difference with respect to Shimer’s calculations is the much lower volatility and unemployment elasticity of unemployment outflow rates, contrasting with job destruction rates for which we estimate both higher volatility and elasticity. Notice that, particularly at the quarterly level, s is low and f is large, so a good approximation of the unemployment rate u = U/(U + E) is s/f .10 This explains why regressing log unemployment rates on log exit rates and log destruction rates yields an R2 of 99% and coefficients of −098 and 090, very close to 1. Moreover, as f s/u, if s has an unemployment elasticity of 07, then f should have an unemployment elasticity of approximately −03. The kurtosis is usually close to 3, which suggests a normal shape for the distribution, except for exit rates, which exhibit more peakedness than the normal. 9 Shimer (2005) used a smoothing parameter of 105 instead of the usual 1600 with quarterly data. I tried to find an objective way to choose the smoothing parameter, and I came very close to Shimer’s preferred choice. 10 f = 1 − FU/U + sE/U sE/U as FU U. Gross flows in and out of unemployment are large with respect to net flows. Unemployment thus evolves much like a random walk.
1341
UNEMPLOYMENT AND WAGE DISTRIBUTIONS TABLE II CYCLICAL PATTERNS OF INDIVIDUAL WAGE DECILES—PRODUCTIVITY ELASTICITY, AND VOLATILITYa Males
P100 P90 P80 P70 P60 P50 P40 P30 P20 P10
Females
All
Elasticity
Volatility
Elasticity
Volatility
Elasticity
Volatility
0.42 0.35 0.33 0.31 0.37 0.41 0.50 0.62 0.69 0.92
0.016 0.012 0.011 0.012 0.013 0.015 0.018 0.020 0.025 0.033
0.36 0.30 0.30 0.36 0.43 0.43 0.52 0.54 0.60 0.64
0.015 0.010 0.010 0.010 0.012 0.012 0.014 0.016 0.019 0.032
0.43 0.44 0.41 0.43 0.51 0.54 0.63 0.67 0.76 0.85
0.029 0.013 0.011 0.013 0.014 0.015 0.018 0.020 0.023 0.032
a Wage deciles were calculated by Heathcote, Perri, and Violante (2010) from CPS, 1967–2005. Available at http:// www.economicdynamics.org/RED-cross-sectional-facts.htm. The log of each wage decile was detrended using a linear trend before being regressed on log aggregate productivity (BLS series PS85006163). The volatility is the standard deviation of detrended deciles.
This together with the negative skewness indicates a certain propensity for exit rates to take “abnormally” low values. The estimated elasticity of wages to productivity is close to (yet higher than) that calculated by Gertler and Trigari (2009) from CPS data (series posterior to 1967). 3.3. Wage Inequality I use the data on wage (per hour) inequality constructed by Heathcote, Perri, and Violante (2010) from the series of CPS surveys, available from the website of the Review of Economic Dynamics, to calculate the volatilities and productivity elasticities of wage deciles. In practice, I regress the log of each wage decile on a linear trend and log productivity (mean per year). Table II shows that both volatilities and elasticities are definitely decreasing in rank until about the 6th or 7th decile, and are then approximately constant or moderately increasing above the 7th decile. Figure 3 displays detrended wage deciles over time. 4. PARAMETRIC SPECIFICATION AND ESTIMATION 4.1. Parametrization 4.1.1. The Aggregate Productivity Process Let F denote the equilibrium distribution of yt . Let C denote a parametric copula. Let a1 < · · · < aN define a grid on [ε 1 − ε] ⊂ (0 1) of N linearly spaced points including end points ε and 1 − ε. Then yi = F −1 (ai ) and
1342
JEAN-MARC ROBIN
FIGURE 3.—Dynamics of Detrended Wage Deciles D1–D9. Source: CPS, 1967–2005 and Heathcote, Perri, and Violante (2010).
πij ∝ c(ai aj ), where c denotes the copula density and with the normalization j πij = 1. In practice, I use N = 100 and ε = 0002; F is a log-normal distribution with parameters 0 and σ, and c is a Gaussian copula density with parameter ρ. This may look like a rather convoluted way to define a Gaussian AR(1) process, but it has the advantage of potentially allowing for independent changes in the specifications for the marginal distribution F and the stochastic process of ranks c. For example, it might prove useful in some cases to use a student copula or a marginal distribution with fatter tails than the normal. Moreover, it is a simple way to construct a Markov chain. 4.1.2. Worker Heterogeneity I specify match productivity as yi (m) = yi xm , where xm , m = 1 M, is a grid of M linearly spaced points on [x x + 1]. The choice of the support does not matter much (provided it is large enough and contains one) if the distribution of ability within that support is general enough. I assume a beta distribution, namely m = betapdf(xm − x η μ) with the normalization m m = 1. The beta distribution allows for a variety of shapes for the density (increasing, decreasing, nonmonotone, concave, or convex). I will use a very dense grid of M = 500 points to guarantee a good resolution in the left tail.
UNEMPLOYMENT AND WAGE DISTRIBUTIONS
1343
Note that aggregate productivity is the mean of yt (m) across all m such that St (m) > 0: (1 − ut (m))m yt (m)1{St (m) > 0} yt =
m
=
1 − ut (1 − ut (m))m (x + xm )1{St (m) > 0}
m
= xt yt
1 − ut
yt
(say)
The dynamics of y t differs from the dynamics of yt if composition effects make xt , the mean ability of employees, differ from the mean ability of all workers, employed and unemployed. In practice, only a small fraction of workers face nonparticipation risk and therefore xm does not differ much from the unconditional mean. 4.1.3. Opportunity Cost of Employment Last, the opportunity cost of employment (leisure utility, UI benefits, etc.) is specified as zi (m) = z0 + α[yi (m) − z0 ]. I allow for a potential indexation of unemployment flow utility on productivity. Otherwise the reservation wage of high-skill workers is lower in booms than in busts as unemployed workers face better future prospects in booms than in busts. Also, if α is low, high-skill workers have lower reservation wages than low-skill workers, for exactly the same reason. Note that α is not identified from employment and turnover data. As yi (m) − zi (m) = α[yi (m) − z0 ], changing α into α changes Si (m) into αα Si (m) without changing unemployment rates and composition. 4.1.4. Estimation/Calibration I set the unit of time equal to a quarter. The parameters to be estimated are the turnover parameters λ0 , λ1 , and δ, the probability of moving upon receiving an outside offer τ, the leisure cost parameters z0 and α, and parameters x η, and μ of the distribution of heterogeneity. These parameters will be calibrated so as to fit unemployment, turnover, and wage dynamics as I now explain. Using results in Jolivet, Postel-Vinay, and Robin (2006), I estimate the proportion of employees’ contacts with alternative employers resulting in actual mobility to 53%. I thus set τ = 05. The model predicts a job-to-job mobility rate qt such that qt = τλ1 (1 − st ) Using JOLTS data, an on-the-job offer arrival rate of λ1 = 012 × λ0 can be calculated. For a given normalization of α, the remaining parameters (λ0 δ x η μ z0 ) are estimated by simulating very long series of T = 10000 observations so as to match the following moments:
1344
JEAN-MARC ROBIN
• The mean productivity is 1, the standard deviation of log productivity is equal to 0.0223, and its autocorrelation is 0.91. • The mean unemployment rate is 5.8%, the standard deviation of log unemployment is 0.214, and its kurtosis is 2.52. • The mean exit rate from unemployment is 78.5%. I purposely do not use any wage information. Thus one can determine how much of the observed wage patterns the model can reproduce without programming the model to fit them beforehand. In a second step, a value for α is selected so as to fit the elasticity of wages to productivity.
5. ESTIMATES AND FIT 5.1. Parameter Estimates and Fit of Moments The GMM criterion exhibits many local optima and it was necessary to try many initial values to find that z0 = 077, σ = 0023, ρ = 094, λ0 = 099, s = 0042, x = 073, η = 200, and μ = 556 are values of the structural parameters that best match the empirical moments. Then α = 064 is found to match the elasticity of aggregate wages optimally. Table III displays the simulated moments using these parameter values. The model manages very well to amplify productivity shocks. The elasticity of unemployment to productivity is too high, but that is expected in a model with only one exogenous source of shocks. The job destruction rate is not volatile enough and the exit rate from unemployment is too volatile. The job destruction rate also exhibits too much excess kurtosis.
TABLE III FIT OF EMPLOYMENT AND TURNOVER MOMENTSa
Productivity
Mean Std Skewness Kurtosis Autocorrelation Corr. with prod. Corr. with unempl. Reg. on prod. Reg. on unempl.
1 0.022 −0.10 3.33 0.93 1 −0.97 1
Unempl. Rate
Unempl. Exit Rate
Job Destruction Rate
0.057 0.213 0.69 3.20 0.94 −0.97 1 −9.47 1
0.753 0.220 −0.96 4.30 0.95 0.94 −0.98 9.45 −0.99
0.0435 0.0573 2.27 10.81 0.02 −0.31 0.27 −0.81 0.0023
Wage
0.932 0.0153 0.085 3.81 0.99 0.93 −0.92 0.659
a The rows labelled “mean” refer to the mean of levels, while the other rows refer to the log of the variable in each column. These moments were calculated from a simulation of 10,000 observations.
UNEMPLOYMENT AND WAGE DISTRIBUTIONS
1345
5.2. Worker Heterogeneity Figure 4 shows the distribution of worker heterogeneity and how it affects individual productivity given the state of the economy. Every dotted line in panel (a) of the figure corresponds to a different ability type. The thick line in the middle is the 45-degree line and IT corresponds to the productivity of a worker of ability 1. The other thick line at the bottom indicates the viability threshold: for a given aggregate state i, all individual types m such that Si (m) ≤ 0 have their productivity below the threshold. Only a small fraction, about 6%, of all workers bear the risk of endogenous layoff. Panel (b) displays the distributions of worker types in the whole population and in the subpopulation of unemployed workers. As expected, low ability workers are overrepresented among the unemployed. Figure 5, panel (a) displays the actual unemployment rates within various education groups as a function of the overall unemployment rate (all workers, 25 years and over; 1992m1–2010m10). There is a clear linear relationship, which implies that the composition of unemployment by education should remain stable over time if the composition of the labor force did not change, which is not true. Notice also that less educated workers face a higher risk of unemployment. Panel (b) shows the unemployment rate over time,11 as a function of the aggregate unemployment rate, for various subpopulations of lowskill workers (25%, 50% and 90%, most unskilled). The same linear relationship emerges, which favors the following possible interpretation of the actual pattern by education: The proportion of low-skill workers at risk of nonparticipation via a productivity shock annihilating the surplus is just simply higher among low-education workers. 5.3. The Amplification Mechanism The mechanism that amplifies productivity shocks is simple to understand. In a boom, unemployment is steady and all separations follow from exogenous shocks. When aggregate productivity falls more workers lose their jobs as more match surpluses become negative (see Figure 6). About 4% unemployment accrues because of the 4.2% exogenous layoff rate. One may call this minimum unemployment level frictional unemployment. Unemployment due to business-cycle conditions ranges between 0 and 6% depending on the severity of the recession. Note that the correlation between unemployment rates and productivity shocks is high, because the link shown in Figure 6 is smooth and monotone. A smaller correlation requires more nonlinearity that can be obtained by reducing the aggregate productivity threshold that triggers endogenous layoff. The mean leisure cost zt (m) averaged over worker types and time is 086, somewhere between Hagedorn and Manovskii’s (2008) 0.95 calibration, and 11
Straight lines are obtained in both the steady-state and the out-of-equilibrium simulations.
1346
JEAN-MARC ROBIN
(a) Match productivity
(b) Distribution of types FIGURE 4.—Heterogeneous productivity.
UNEMPLOYMENT AND WAGE DISTRIBUTIONS
1347
(a) Unemployment rate by education (CPS)
(b) Unemployment rate among various low-skill groups (model’s prediction) FIGURE 5.—Composition of unemployment.
Hall and Milgrom’s 0.70 (2008). Workers in the low range of abilities have a mean unemployment benefit/productivity ratio close to 1, whereas high productivity workers have one that is closer to 0.70 (see Figure 7). The ability of
1348
JEAN-MARC ROBIN
FIGURE 6.—Unemployment rate as a function of the aggregate shock. For each step down, a new group of low ability workers becomes employable as aggregate productivity rises. When the aggregate productivity index reaches about 1.04, all workers have positive surplus. The unemployment rate does not quite jump to its state-contingent equilibrium value (continuous line), but scatters around it, as the dots, corresponding to one simulation, indicate.
the model to match the volatilities of aggregate productivity and unemployment depends on there being a small fraction of workers at risk of endogenous job destruction. So, the argument of this paper does not contradict the small surplus argument of Hagedorn and Manovskii. However, with heterogeneous workers, not all workers need to face the same small surplus and not at all times. 5.4. Fit of Main Series Given parameter estimates, I filter out aggregate productivity shocks yt so − xt yt )2 for all t, where Yt denotes observed productivity as to minimize (Y t (1−u (m))m (x+xm )1{St (m)>0} and where xt = m t depends on yt via the sign of St (m).12 1−ut A dynamic simulation consists of implementing the model’s law of motions for unemployment and wage distributions driven by aggregate productivity shocks. With N = 100 and M = 500, I thus keep track of 105 separate trajecto12 For all periods, I could find a value of yt such that the predicted aggregate productivity is essentially indistinguishable from the actual one.
FIGURE 7.—Mean unemployment benefit/output ratio (z_t(m)/y_t(m)) by ability.
Tracking 10^5 trajectories is a lot, but well within the possibilities of modern laptops.13 A few seconds suffice to make these calculations. Estimation was less demanding, as only employment and turnover had to be simulated; a few minutes were enough to reach convergence.

Figure 8 displays the predicted series of unemployment rates (assuming revelation of y_t at the beginning of each period), unemployment exit rates, job destruction rates, and aggregate wages. The overall R² coefficients are, respectively, 27.8%, 18.8%, 2.6%, and 43.8%. For the period 1948–1990, the fit is much better: 45.0%, 37.1%, 3.7%, and 56.3%. After 1990, obviously, other factors intervene to decouple unemployment, turnover, and wages from productivity shocks.

These model fit measures are more informative than traditional impulse–response functions. In an economy that never reaches a steady-state equilibrium, they tell us how good the model is at capturing the directional changes of endogenous variables following the continuous flow of macroeconomic shocks. Of course, with many latent sources of exogenous shocks, given the nonlinear channels of their effects on endogenous outcomes in DSGE models, it may prove more difficult to filter out exogenous innovations.

The model does not capture the dynamics of job destruction rates very well. The predicted dynamics are not smooth enough compared to the true ones.
13 I use standard MacBooks and Matlab.
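Read as the squared correlation between a predicted series and its observed counterpart (consistent with the aside "(the square of the correlation)" below), the fit measure is a one-liner; this reading is my assumption, since the section does not define R² explicitly.

    import numpy as np

    def fit_r2(actual, predicted):
        # Squared correlation between observed and model-predicted series.
        return np.corrcoef(actual, predicted)[0, 1] ** 2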
FIGURE 8.—Prediction of unemployment, turnover, and wages. (a) Unemployment. (b) Unemployment exit rate. (c) Job destruction rate. (d) Aggregate wages.
This absence of smoothness could already be seen from the overestimated kurtosis (11 instead of the normal 3). The unemployment exit rate is also too volatile.

It is worth emphasizing at this point the strikingly good prediction of aggregate wages. Note that it is not very sensitive to the choice of α, the only parameter that was not estimated from turnover data.14 For comparison, note that regressing log wages on log productivity yields an R² of 54% (the square of the correlation).

5.5. Wage Inequality Dynamics

Last, let us turn to wage distribution dynamics. Figure 9 shows how predicted wage deciles change over time. The average interdecile ratios D9/D5 and D5/D1 are around 2 (2.05 and 2.20) in the data; they are close to 1.2 (1.29 and 1.23) in the simulation. There is thus room for additional sources of wage dispersion, yet the model already generates a fair amount of inequality. As in the data, extreme deciles are more elastic and volatile than intermediate ones; a minimum is attained somewhere between the second and third deciles (see Table IV). This is because the lower part of the distribution essentially comprises starting wages, whereas the upper part of the wage distribution essentially comprises promotion wages.15
FIGURE 9.—Dynamics of wage deciles (simulated using filtered productivity shocks).
14 With α = 0.4, I obtain an R² of 40% (53% after 1990); with α = 0.75, I get 45% and 58%.
TABLE IV
FIT OF CYCLICAL PATTERNS OF INDIVIDUAL WAGE DECILES (CALCULATED FROM A SIMULATION OF 10,000 OBSERVATIONS)

              Actual (Males)             Simulated
          Elasticity   Volatility   Elasticity   Volatility
P90          0.35         0.012        0.83        0.0191
P80          0.33         0.011        0.80        0.0186
P70          0.31         0.012        0.78        0.0181
P60          0.37         0.013        0.74        0.0175
P50          0.41         0.015        0.71        0.0169
P40          0.50         0.018        0.65        0.0156
P30          0.62         0.020        0.46        0.0122
P20          0.69         0.025        0.31        0.0070
P10          0.92         0.033        0.35        0.0081
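The following sketch shows one way the Table IV statistics could be computed from a simulated wage panel, assuming that "elasticity" is the OLS slope of a log decile on log aggregate productivity and that "volatility" is the standard deviation of the decile's log first differences; both definitions are my assumptions, as the section does not spell them out.

    import numpy as np

    def decile_cyclicality(wages, productivity, q):
        # wages: (T, N) simulated panel; productivity: length-T series.
        decile = np.quantile(wages, q, axis=1)        # e.g., q = 0.9 for P90
        X = np.column_stack([np.ones(len(decile)), np.log(productivity)])
        beta, *_ = np.linalg.lstsq(X, np.log(decile), rcond=None)
        elasticity = beta[1]                          # slope on log productivity
        volatility = np.std(np.diff(np.log(decile)))  # dispersion of log changes
        return elasticity, volatility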
Moreover, the most able individuals exhibit both the lowest starting wages and the highest promotion wages (see Figure 10). This is inherent to the nature of search models: higher future prospects generate lower reservation wages.16
FIGURE 10.—Starting wages (circles) and promotion wages (squares) for various ability quantiles (1/10, 1/4, 1/2, 3/4, 9/10). The size of the symbol is proportional to ability rank.
15 There may be overlap between the two due to wage rigidities.
At the same time, high-ability workers face higher wage elasticity with respect to aggregate productivity shocks (thanks to the complementarity between x and y in match productivity).

Notice that long-term contracts tend to attenuate the volatility of individual wages. A worker who obtains a promotion because she managed to make her employer compete with another firm will not obtain any additional wage rise if the economy booms, unless she finds another would-be employer. A similar reasoning applies to workers paid their reservation wage. On the other hand, wage rises are expected to be bigger in a boom than in a bust.

To quantify how much wage inequality is generated by worker heterogeneity and how much is left to individual wage dynamics, I next consider the following variance decomposition exercise. Let w_it denote the wage of individual i at time t and let z_it denote some characteristics. Then

Var(w_it) = Var[E(w_it | z_it)] + E[Var(w_it | z_it)],

where the first term is the between component and the second is the within component.
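As a quick numerical check of this identity, the sketch below computes the between and within shares after grouping wages by a discrete characteristic z (an ability bin, say); the column names are illustrative.

    import numpy as np
    import pandas as pd

    def variance_decomposition(df, wage="w", group="z"):
        # Var(w) = Var[E(w|z)] (between) + E[Var(w|z)] (within).
        total = df[wage].var(ddof=0)
        cond_mean = df.groupby(group)[wage].transform("mean")
        between = cond_mean.var(ddof=0)                # variance of means
        within = (df[wage] - cond_mean).pow(2).mean()  # mean squared deviation
        assert np.isclose(total, between + within)     # exact with ddof=0
        return between / total, within / total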
I find that about a third of the cross-sectional wage variance can be explained by ability differences (see Figure 11). This is a bit low compared to available estimates, yet of a similar order of magnitude.
FIGURE 11.—Wage variance decomposition.
16 This is not true if the utility of leisure is sufficiently increasing in worker ability (via its dependence on match productivity).
Abowd, Creecy, and Kramarz (2002) thus estimated a 46% contribution (covariance divided by total variance) of person effects using the State of Washington UI data.17

17 They estimated about 47% for firm effects and 27% for residuals. These do not sum to 1 because of correlations between factors.

6. CONCLUSION

We have proposed a simple dynamic search-matching model with cross-sectional wage dispersion and heterogeneous worker abilities. Worker heterogeneity interacts with aggregate shocks to match productivity in a way that allows for endogenous job destruction. It suffices that a small fraction of the total workforce be at risk of a shock to productivity that renders the match surplus negative to amplify productivity shocks enough to generate the observed unemployment volatility. Moreover, we show that the model can generate sizeable cross-sectional wage inequality, with dynamics similar to the observed pattern: wages in the middle of the distribution are less procyclical than wages at the bottom and the top.

Our prototypical model is extremely simple to simulate outside the steady-state equilibrium and still generates very rich dynamics. This is due to two very strong assumptions: firms have full monopsony power and they are identical. Giving workers some bargaining power, as in Cahuc, Postel-Vinay, and Robin (2006) and Dey and Flinn (2005), and allowing for firm heterogeneity, as in Lise, Meghir, and Robin (2009), in a macrodynamic model are very exciting avenues for further research.

REFERENCES

ABOWD, J. M., R. H. CREECY, AND F. KRAMARZ (2002): "Computing Person and Firm Effects Using Linked Longitudinal Employer–Employee Data," Technical Paper 2002-06, Longitudinal Employer-Household Dynamics, Center for Economic Studies, U.S. Census Bureau. [1353,1354]
CAHUC, P., F. POSTEL-VINAY, AND J.-M. ROBIN (2006): "Wage Bargaining With On-the-Job Search: Theory and Evidence," Econometrica, 74, 323–364. [1354]
DEY, M. S., AND C. J. FLINN (2005): "An Equilibrium Model of Health Insurance Provision and Wage Determination," Econometrica, 73, 571–627. [1354]
GERTLER, M., AND A. TRIGARI (2009): "Unemployment Fluctuations With Staggered Nash Wage Bargaining," Journal of Political Economy, 117, 38–86. [1329,1341]
HAGEDORN, M., AND I. MANOVSKII (2008): "The Cyclical Behavior of Equilibrium Unemployment and Vacancies Revisited," American Economic Review, 98, 1692–1706. [1329,1345]
HALL, R. E. (2005): "Job Loss, Job Finding, and Unemployment in the U.S. Economy Over the Past Fifty Years," in NBER Macroeconomics Annual 2005, ed. by M. Gertler and K. Rogoff. Cambridge, MA: MIT Press, 103–137. [1329]
HALL, R. E., AND A. B. KRUEGER (2010): "Evidence on the Determinants of the Choice Between Wage Posting and Wage Bargaining," Working Paper 16033, NBER. [1328]
HALL, R. E., AND P. R. MILGROM (2008): "The Limited Influence of Unemployment on the Wage Bargain," American Economic Review, 98, 1653–1674. [1329,1347]
HEATHCOTE, J., F. PERRI, AND G. L. VIOLANTE (2010): "Unequal We Stand: An Empirical Analysis of Economic Inequality in the United States: 1967–2006," Review of Economic Dynamics, 13, 15–51. [1341,1342]
JOLIVET, G., F. POSTEL-VINAY, AND J.-M. ROBIN (2006): "The Empirical Content of the Job Search Model: Labor Mobility and Wage Distributions in Europe and the US," European Economic Review, 50, 877–907. [1343]
LISE, J., C. MEGHIR, AND J.-M. ROBIN (2009): "Matching, Sorting and Wages," Discussion Paper, University College London. [1328,1354]
MACLEOD, W. B., AND J. M. MALCOMSON (1993): "Investments, Holdup, and the Form of Market Contracts," American Economic Review, 83, 811–837. [1333]
MENZIO, G., AND S. SHI (2009): "Efficient Search on the Job and the Business Cycle," Discussion Paper 14905, NBER. [1328]
——— (2010a): "Block Recursive Equilibria for Stochastic Models of Search on the Job," Journal of Economic Theory, 145, 1453–1494. [1328]
——— (2010b): "Directed Search on the Job, Heterogeneity, and Aggregate Fluctuations," American Economic Review, 100, 327–332. [1328]
MORTENSEN, D., AND E. NAGYPÁL (2007): "More on Unemployment and Vacancy Fluctuations," Review of Economic Dynamics, 10, 327–347. [1329]
MORTENSEN, D. T. (2005): Wage Dispersion: Why Are Similar Workers Paid Differently? Cambridge, MA: MIT Press. [1327]
MORTENSEN, D. T., AND C. A. PISSARIDES (1994): "Job Creation and Job Destruction in the Theory of Unemployment," The Review of Economic Studies, 61, 397–415. [1327,1329]
MOSCARINI, G., AND F. POSTEL-VINAY (2008): "The Timing of Labor Market Expansions: New Facts and a New Hypothesis," in NBER Macroeconomics Annual 2008. University of Chicago Press. [1327]
——— (2009): "Large Employers Are More Cyclically Sensitive," Working Paper 14740, NBER. [1327]
PISSARIDES, C. A. (2009): "The Unemployment Volatility Puzzle: Is Wage Stickiness the Answer?" Econometrica, 77, 1339–1369. [1329]
POSTEL-VINAY, F., AND J.-M. ROBIN (2002): "Equilibrium Wage Dispersion With Worker and Employer Heterogeneity," Econometrica, 70, 2295–2350. [1327,1328]
POSTEL-VINAY, F., AND H. TURON (2010): "On-the-Job Search, Productivity Shocks and the Individual Earnings Process," International Economic Review, 51, 599–629. [1333]
ROBIN, J.-M. (2011): "Supplement to 'On the Dynamics of Unemployment and Wage Distributions'," Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/9070_data and programs.zip. [1336]
SHIMER, R. (2005): "The Cyclical Behavior of Equilibrium Unemployment and Vacancies," American Economic Review, 95, 25–49. [1327,1329,1340]
STEVENS, M. (2004): "Wage-Tenure Contracts in a Frictional Labour Market: Firms' Strategies for Recruitment and Retention," Review of Economic Studies, 71, 535–551. [1331]
Dept. of Economics, Sciences Po, 28 rue des St. Pères, 75007 Paris, France and University College London;
[email protected]. Manuscript received January, 2010; final revision received January, 2011.
Econometrica, Vol. 79, No. 5 (September, 2011), 1357–1406
A STRUCTURAL EVALUATION OF A LARGE-SCALE QUASI-EXPERIMENTAL MICROFINANCE INITIATIVE

BY JOSEPH P. KABOSKI AND ROBERT M. TOWNSEND1

This paper uses a structural model to understand, predict, and evaluate the impact of an exogenous microcredit intervention program, the Thai Million Baht Village Fund program. We model household decisions in the face of borrowing constraints, income uncertainty, and high-yield indivisible investment opportunities. After estimation of parameters using preprogram data, we evaluate the model's ability to predict and interpret the impact of the village fund intervention. Simulations from the model mirror the data in yielding a greater increase in consumption than credit, which is interpreted as evidence of credit constraints. A cost–benefit analysis using the model indicates that some households value the program much more than its per household cost, but overall the program costs 30 percent more than the sum of these benefits.

KEYWORDS: Microfinance, structural estimation, policy evaluation.
1. INTRODUCTION

THIS PAPER USES A STRUCTURAL MODEL to understand, predict, and evaluate the impact of an exogenous microcredit intervention program, the Thai Million Baht Village Fund program. Understanding and evaluating microfinance interventions, especially such a large-scale government program, is a matter of great importance. Proponents argue that microfinance allows the provision of credit that is both effective in fighting poverty and more financially viable than other means; detractors point to high default rates, reliance on (implicit and explicit) subsidies, and the lack of hard evidence of their impacts on households. The few efforts to evaluate the impacts of microfinance institutions using reduced form methods and plausibly exogenous data have produced mixed and even contradictory results.2 To our knowledge, this is the first structural attempt to model and evaluate the impact of microfinance. Three key advantages of the structural approach are the potential for quantitative interpretation of the data, counterfactual policy and out-of-sample prediction, and well defined normative program evaluation.
1 Research funded by NICHD Grant R03 HD04776801, the Bill and Melinda Gates Foundation grant to the University of Chicago Consortium on Financial Systems and Poverty, John Templeton Foundation, and NSF. We thank Sombat Sakuntasathien, Aleena Adam, Francisco Buera, Flavio Cunha, Xavier Gine, Donghoon Lee, Audrey Light, Ben Moll, Masao Ogaki, Anan Pawasutipaisit, Mark Rosenzweig, Shing-Yi Wang, Bruce Weinberg, and participants at FRB-Chicago, FRB-Minneapolis, Harvard–MIT, Michigan, NIH, Ohio State, U.W. Milwaukee, NYU, Yale, NEUDC 2006, 2006 Econometric Society, BREAD 2008, World Bank Microeconomics of Growth 2008, UC-UTCC, NYU Development Conference, and SED 2009 presentations. Bin Yu, Taehyun Ahn, and Jungick Lee provided excellent research assistance on this project.
2 Pitt and Khandker (1998), Pitt, Khandker, Chowdury, and Millimet (2003), Morduch (1998), Coleman (1999), Gertler, Levine, and Moretti (2003), Karlan and Zinman (2009), and Banerjee, Duflo, Glennerster, and Kinnan (2010) are examples. Kaboski and Townsend (2005) estimated positive impacts of microfinance in Thailand using nonexperimental data.
© 2011 The Econometric Society
DOI: 10.3982/ECTA7079
The Thai Million Baht Village Fund program is one of the largest-scale government microfinance initiatives of its kind.3 Started in 2001, the program involved the transfer of one million baht to each of the nearly 80,000 villages in Thailand to start village banks. The transfers themselves sum to about 1.5 percent of Thai gross domestic product (GDP) and substantially increased available credit. We study a panel of 960 households from 64 rural Thai villages in the Townsend Thai Survey (Townsend, Paulson, Sakuntasathien, Lee, and Binford (1997)). In these villages, funds were founded between the 2001 and 2002 survey years, and village fund loans amounted to 80 percent of new short-term loans and one-third of total short-term credit in the 2002 data. If we count village funds as part of the formal sector, participation in the formal credit sector jumps from 60 to 80 percent.

Though not a randomized treatment, the program is viewed as a quasi-experiment that produced plausibly exogenous variation in credit over time and across villages. The program was unanticipated and rapidly introduced. More importantly, the total amount of funding given to each village was the same (one million baht) regardless of the number of households in the village. Although village size shows considerable variation within the rural regions we study, villages are administrative geopolitical units and are often subdivided or joined for administrative or political purposes. Indeed, using GIS maps, we have verified that village size patterns are not much related to underlying geographic features and vary from year to year in biannual data. Hence, there are a priori grounds for believing that this variation and the magnitude of the per capita intervention are exogenous with respect to the relevant variables. Finally, village size is not significantly related to preexisting differences (in levels or trends) in credit market or relevant outcome variables.

Our companion paper (Kaboski and Townsend (2009)) examines impacts of the program using a reduced form regression approach, and many of the impacts are puzzling without an explicit theory of credit-constrained behavior.4 In particular, households increased their borrowing and their consumption roughly one for one with each dollar put into the funds. A perfect credit model, such as a permanent income model, would have trouble explaining the large increase in borrowing, since reported interest rates on borrowing did not fall as a result of the program.

3 The Thai program involves approximately $1.8 billion in initial funds. This injection of credit into the rural sector is much smaller than Brazil's experience in the 1970s, which saw a growth in credit from about $2 billion in 1970 to $20.5 billion in 1979. However, in terms of a government program implemented through village institutions and using microlending techniques, the only comparable government program in terms of scale would be Indonesia's KUPEDES village bank program, which was started in 1984 at a cost of $20 million and supplemented by an additional $107 million in 1987 (World Bank (1996)).
4 This companion paper also provides additional evidence on the exogeneity of village size, examines impacts in greater detail, and looks for general equilibrium effects on wages and interest rates.
Similarly, even if households treated loans as a shock to income rather than a loan, they would only consume the interest of the shock (roughly 7 percent) perpetually. Moreover, households were not initially more likely to be in default after the program was introduced, despite the increase in borrowing. Finally, household investment is an important aspect of household behavior. We observe an increase in the frequency of investment, but, oddly, impacts of the program on the level of investment were difficult to discern. This is a priori puzzling in a model with divisible investment, if credit constraints are deemed to play an important role.

The structural model we develop in this paper sheds light on many of these findings. Given the prevalence of income shocks that are not fully insured in these villages (see Chiappori, Schulhofer-Wohl, Samphantharak, and Townsend (2008)), we start with a standard precautionary savings model (e.g., Aiyagari (1994), Carroll (1997), Deaton (1991)). We then add important features central to the evaluation of microfinance that are also key characteristics of the preprogram data: borrowing, default, investment, and growth. Short-term borrowing exists but is limited, and so we naturally allow borrowing, but only up to limits. Similarly, default exists in equilibrium, as does renegotiation of payment terms, and so our model incorporates default. Investment is relatively infrequent in the data but is sizable when it occurs. To capture this lumpiness, we allow households to make investments in indivisible, illiquid, high-yield projects whose size follows an unobserved stochastic process.5 Finally, income growth is high but variable, averaging 7 percent but varying greatly over households, even after controlling for life-cycle trends. Allowing for growth requires writing a model that is homogeneous in the permanent component of income, so that a suitably normalized version attains a steady-state solution, giving us time-invariant value functions and (normalized) policy functions.

In an attempt to quantitatively match central features of the environment, we estimate the model using a method of simulated moments (MSM) on only the preprogram data. The parsimonious model broadly reproduces many important aspects of the data, closely matching consumption and investment levels, and investment and default probabilities. Nonetheless, two features of the model are less successful, and the overidentifying restrictions of the model are rejected.6

5 An important literature in development examined the interaction between financial constraints and indivisible investments. See, for example, Banerjee and Newman (1993), Galor and Zeira (1993), Gine and Townsend (2004), Lloyd-Ellis and Bernhardt (2000), and Owen and Weil (1997).
6 The income process of the model has trouble replicating the variance in the data, which is affected by the Thai financial crisis in the middle of our pre-intervention data, and the borrowing and lending rates differ in the data but are assumed equal in the model. Using the model to match year-to-year fluctuations is also difficult.
For our purposes, however, a more relevant test of the estimated model's usefulness is its ability to predict out-of-sample responses to an increase in available credit, namely the village fund intervention. Methodologically, we model the microfinance intervention as the introduction of a borrowing–lending technology that relaxes household borrowing limits. These limits are relaxed differentially across villages so as to induce an additional one million baht of short-term credit in each village; hence, small villages get larger reductions of their borrowing constraint. Given the relaxed borrowing limits, we then simulate the model with the stochastic income process to create 500 artificial data sets of the same size as the actual Thai panel. These simulated data do remarkably well in reproducing the above impact estimates. In particular, they predict an average response in consumption that is close to the dollar-for-dollar response in the data. Similarly, the model reproduces the fact that effects on average investment levels and investment probabilities are difficult to measure in the data.

In the simulated data, however, these aggregate effects mask considerable heterogeneity across households, much of which we treat as unobservable to us as econometricians. Increases in consumption come from roughly two groups. First, hand-to-mouth consumers are constrained in their consumption because either they have low current liquidity (income plus savings) or they are using current (preprogram) liquidity to finance lumpy investments. These constrained households use the additional availability of credit to finance current consumption. Second, households that are not constrained may increase their consumption even without borrowing, since the increase in available credit in the future lowers their desired buffer-stock savings. Third, for some households, increased credit induces them to invest in their high-yield projects. Some of these households may actually reduce their consumption, however, as they supplement credit with reduced consumption so as to finance sizable indivisible projects. (Again, the evidence we present for such behavior in the pre-intervention data is an important motivation for modeling investment indivisibility.) Finally, for households that would have defaulted without the program, available credit may simply be used to repay existing loans and so have little effect on consumption or investment. Perhaps most surprising is that these different types of households may all appear ex ante identical in terms of their observables.

The estimated model not only highlights this underlying heterogeneity, but also shows the quantitative importance of these behaviors. Namely, the large increase in consumption indicates the relative importance of the first two types of households, both of which increase their consumption. Also, the estimated structural parameters capture the relatively low investment rates and large skew in investment sizes. Hence, overall investment relationships are driven by relatively few large investments, and so very large samples are needed to accurately measure effects on average investment. The model generates these effects, but only in samples larger than the actual Thai sample. Second and related, given the lumpiness of projects, small amounts of credit are relatively unlikely to change investment decisions on the large projects that drive aggregate investment.
Finally, our normative evaluation compares the costs of the Million Baht program to the costs of a direct transfer program that is equivalent in the sense of providing the same utility benefit. The heterogeneity of households plays an important role, and indeed the welfare benefits of the program vary substantially across households and villages. Essentially, there are two major differences between the microfinance program and a well directed transfer program. First, the microfinance program is potentially less beneficial because households face the interest costs of credit. To access liquidity, households borrow more, and while they can always carry forward more debt into the future, they are left with larger interest payments. Interest costs are particularly high for otherwise defaulting households, whose debts are augmented to the more liberal borrowing limit, and so they bear higher interest charges. On the other hand, the microfinance program is potentially more beneficial than a direct transfer program because it can also provide more liquidity to those who potentially have the highest marginal valuation of liquidity by lowering the borrowing constraint. Hence, the program is relatively more cost-effective for nondefaulting households with urgent liquidity needs for consumption and investment. Quantitatively, given the high frequency of default in the data7 and the high interest rate, the benefits (i.e., the equivalent transfer) of the program are 30 percent less than the program costs, but this masks the interesting variation among losers and gainers.

Beyond the out-of-sample and normative analyses, we also perform several alternative exercises that build on the strengths of the structural model: long-run out-of-sample predictions showing the time-varying impacts; a counterfactual "investment-contingent credit" policy simulation that greatly outperforms the actual policy; and reestimation using the pooled sample, which confirmed the robustness of our exercise.

The paper contributes to several literatures. First, we add a structural modeling approach to a small literature that uses theory to test the importance of credit constraints in developing countries (e.g., Banerjee and Duflo (2008)). Second, we contribute to an active literature on consumption and liquidity constraints, and the buffer-stock model in particular. Studies with U.S. data have also found a high sensitivity of consumption to current available liquidity (e.g., Zeldes (1989), Gross and Souleles (2002), Aaronson, Agarwal, and French (2008)), but like Burgess and Pande (2005), we study this response with quasi-experimental data in a developing country.8
7 Default rates on short-term credit overall were 19 percent of households, but less than 3 percent of village fund credit was in default, and one-fourth to one-third of households reported that they borrowed from other sources to repay the loans.
8 Banerjee et al. (2010) found large impacts on durable expenditures using a randomized microfinance experiment in Hyderabad, India.
Their study used a relaxation of branching requirements in India that allowed for differential bank expansion across regions of India over 20 years so as to assess impacts on poverty head count and wage data. Third, methodologically, our quasi-experimental analysis builds on an existing literature that uses out-of-sample prediction, and experiments in particular, to evaluate structural models (e.g., Lise, Seitz, and Smith (2004, 2005), Todd and Wolpin (2006)). Finally, we contribute to the literature on measuring and interpreting treatment effects (e.g., Heckman, Urzua, and Vytlacil (2006)), which emphasizes unobserved heterogeneity, nonlinearity, and time-varying impacts. We develop an explicit behavioral model where all three play a role.

The remainder of the paper is organized as follows. The next section discusses the underlying economic environment—the Million Baht village fund intervention—and reviews the facts from reduced form impact regressions that motivate the model. The model, and resulting value and policy functions, are presented in Section 3. Section 4 discusses the data and presents the MSM estimation procedure and resulting estimates. Section 5 simulates the Million Baht intervention, performs policy counterfactuals, and presents the welfare analysis. Section 6 concludes. Data and programs are available as Supplemental Material (Kaboski and Townsend (2011)).

2. THAI MILLION BAHT CREDIT INTERVENTION

The intervention that we consider is the founding of village-level microcredit institutions by the Thai government—the Million Baht Fund program. Former Thai Prime Minister Thaksin Shinawatra implemented the program in Thailand in 2001, shortly after winning election. One million baht (about $24,000) was distributed to each of the 77,000 villages in Thailand to found self-sustaining village microfinance banks. Every village, whether poor or wealthy, urban9 or rural, was eligible to receive the funds. The size of the transfers alone, about $1.8 billion, amounts to about 1.5 percent of GDP in 2001.

The program was overwhelmingly a credit intervention: no training or other social services were tied to the program, and although the program did increase the fraction of households with formal savings accounts, savings constituted a small fraction (averaging 14,000 baht, or less than 2 percent) of available funds, and we measured no effect on the actual levels of formal savings during the years we study.

The design of the program was peculiar in that the money was a grant to the village funds (because no repayment from the funds to the government was expected or made), yet the money reaches borrowers as microcredit loans with an obligation to repay to the fund.

9 The village (moo ban) is an official political unit in Thailand, the smallest such unit, and is under the subdistrict (tambon), district (amphoe), and province (changwat) levels. Thus, "villages" can be thought of as just small communities of households that exist in both urban and rural areas.
As noted earlier, default rates to these funds themselves were low (less than 3 percent up through the available 2005 data), and all village funds in the sample we use continue in operation, indicating that the borrowers' obligation to repay was well understood in the rural villages we study. (In contrast, default rates to village funds in urban areas are substantially higher, roughly 15 percent.) Also, the quasi-experiment is quite different and less clean than typical randomizations, since the villagers themselves get to organize the funds, and in randomizations there is typically much greater control over what happens. Thus, one must be careful not to extrapolate our results across all environments and microfinance interventions. We are not evaluating a microfinance product via randomized trials.

The design and organization of the funds were intended to allow all existing villagers equal access to these loans through competitive application and loan evaluation handled at the village level. Villages elected committees, who then drew up the rules for operation. These rules needed to satisfy government standards, however, and the village fund committees were relatively large (consisting of 9–15 members) and representative (e.g., half women, no more than one member per household), with short, 2-year terms. To obtain funds from the government, the committees wrote proposals to the government administrators outlining the proposed policies for the fund.10 For these rural villages, funds were disbursed to and held at the Thai Bank of Agriculture and Agricultural Cooperatives, and funds could only be withdrawn with a withdrawal slip from the village fund committee. Residence in the village was the only official eligibility requirement for membership, so although migrating villagers or newcomers would likely not receive loans, there was no official targeting of any subpopulation within villages. Loans were uncollateralized, though most funds required guarantors.

Repayment rates were quite high; less than 3 percent of funds lent to households in the first year of the program were 90 days behind by the end of the second year. Indeed, based on the household-level data, 10 percent more credit was given out in the second year than in the first, presumably partially reflecting repaid interest plus principal. There were no firm rules regarding the use of funds, but reasons for borrowing, ability to repay, and the need for funds were the three most common loan criteria used. Indeed, many households were openly granted loans for consumption. The funds make short-term loans—the vast majority of lending is annual—with an average nominal interest rate of 7 percent. This was about a 5 percent real interest rate in 2001, and about 5 percent above the average money market rate in Bangkok.11
10 These policies varied somewhat, but were not related to village size. For example, some funds required membership fees, but all were under 100 baht ($2.50); interest rates averaged 7 percent, with a standard deviation of 2 percent; and the number of required guarantors varied, with an average of 2.6 and a standard deviation of 1.
11 More details on the funds and the program are presented in Kaboski and Townsend (2009).
2.1. Quasi-Experimental Elements of the Program

As described in the Introduction, the program design was beneficial for research in two ways. First, it arose from a quick election, after the Thai parliament was dissolved in November 2000, and was rapidly implemented in 2001. None of the funds had been founded by our 2001 (May) survey date, but by our 2002 survey, each of our 64 villages had received and lent funds, lending 950,000 baht on average.12 Households would not have anticipated the program in earlier years. We therefore model the program as a surprise. Second, the same amount was given to each village, regardless of its size, so villages with fewer households received more funding per household. Regressions below report a highly significant relationship between a household's credit from a village fund and inverse village size in 2002, after the program.

Our policy intervention is not a clean randomized experiment, so we cannot have the same level of certainty about the exogeneity of the program. Several potential problems could contaminate the results. First, variables of interest for households in small villages could differ from those in large villages even before the program. Second, different trends in these variables across small and large villages would also be problematic, since the program occurs in the last years of the sample. If large villages had faster growth rates, we would see level differences at the end of the period and attribute these to the intervention during those years. Third, other policies or economic conditions during the same years could have affected households in small and large villages differentially.13

Other issues and caveats arise from all of our variation coming at the village level. On the one hand, village-level variation has important benefits because, in many ways, each village is viewed as its own small economy. These village economies are open but not entirely integrated with one another and the rest of the broader economy (nearby provinces, regions, etc.) in terms of their labor, credit, and risk-sharing markets and institutions. This gives us confidence that program impacts are concentrated at the village level.14 On the other hand, one could certainly envision potential risks involved with our use of village size. For example, even if credit itself were exogenous, its impact could differ in small and large villages. Small villages might be more closely connected, with better information or less corruption, and so might show larger impacts not only because they received more credit per household, but because the credit was used more efficiently.
12 We know the precise month that the funds were received, which varies across villages. This month was uncorrelated with the amount of credit disbursed, but may be an additional source of error in predicting the impacts of credit.
13 Other major policies initiated by the Thaksin government included the "30 Baht Health Plan" (which set a price control at 30 baht per medical visit) and "One Tambon–One Product" (a marketing policy for local products). However, neither was operated at the village level, since the former is an individual-level program while the latter is at the tambon (subdistrict) level.
14 GIS analysis including neighboring villages in Kaboski and Townsend (2009) supports this claim.
Conversely, small villages might have smaller markets, and so credit might have smaller impacts. Keeping this caveat in mind, our approach is to take a stand on a plausible structural model in Section 3. Within this structural model, village size will be fully excluded from all equations, so that when we introduce the policy in Section 4, the only role of village size will be in determining the expansion of credit. We are encouraged that the simple model does well to replicate the out-of-sample patterns in the data.

Despite the potential risks and caveats, there are both a priori and a posteriori reasons to pursue our exclusion restriction and accept inverse village size as exogenous with respect to important variables of interest. First, villages are geopolitical units, and villages are divided and redistricted for administrative purposes. These decisions are fairly arbitrary and unpredictable, since the decision processes are driven by conflicting goals of multiple government agencies (see, for example, Puginier (2001) and Arghiros (2001)), and splitting of villages is not uncommon. Data for the relevant period (1997–2003, or even the years directly preceding it, which might perhaps be more relevant) are unavailable, but growth data are available for 1960–2007 and for 2002–2007, so we know that the number of villages grew on average by almost 1 percent a year, both between 1960 and 2007 and during the more recent period. Clearly, overall trends in new village creation are driven in part by population growth, but the above literature indicates that the patterns of this creation are somewhat arbitrary.

Second, because inverse village size is the variable of interest, the most important variation comes from a comparison among small villages (e.g., between 50 and 250 households). Indeed, the companion paper focuses its baseline estimates on these villages, but shows that results are robust to including the whole sample. That is, the analysis is not based on comparing urban areas with rural areas, and we are not picking up the effects of other policies biased toward rural areas and against Bangkok.

Third, village size is neither spatially autocorrelated nor correlated with underlying geographic features like roads or rivers, which might arise if villages were larger near population centers or fertile areas. Using data from the Community Development Department (CDD), Figure 1 shows the random geographical distribution of villages by decile of village size in the year 2001 over the four provinces for which we have Townsend Thai data (Chachoengsao, Lopburi, Buriram, and Sisaket). The Moran spatial autocorrelation statistics in these provinces are 0.019 (standard error of 0.013), 0.001 (0.014), 0.002 (0.003), and 0.016 (0.003), respectively.15
FIGURE 1.—Number of households per village in four provinces in Thailand.
Only the Sisaket autocorrelation is statistically significant, and the magnitudes of all of them are quite small. For comparison, the spatial autocorrelation of the daily wage in villages ranges from 0.12 to 0.21. We also checked whether village size was correlated with other underlying geographic features by running separate regressions of village size on distance to the nearest two-lane road or river (conditioning on changwat dummies). The estimated coefficients were 0.26 (standard error of 0.32) and −0.25 (0.24), so neither was statistically significant. Small villages did tend to be located closer to forest areas, however, where the coefficient of 0.35 (0.03) was highly significant, indicating that forest area may limit the size of villages.16 Nonetheless, these regressions explain at most 5 percent of the variation in village size, so the variation is not particularly well explained by geographic features. We have included roads, rivers, and forest in Figure 1.

Finally, the regression analysis in our companion paper (Kaboski and Townsend (2009)) strengthens our a posteriori confidence in the exogeneity of village size. Specifically, we present reduced form regressions on a large set of potential outcome variables. Using 7 years of data (1997–2003, so that t = 6 is the first post-program year, 2002), we run a first-stage regression to predict village fund credit VFCR of household (HH) n in year t,

VFCR_nt = Σ_{j=6,7} α_1VFCR,j [1,000,000 / (# HHs in village_v6)] I_{t=j} + α_2VFCR Control_nt + γ_VFCR X_nt + θ_VFCR,t + θ_VFCR,n + ε_VFCR,nt,

and a second-stage outcome equation of the form
(2.1)  Z_nt = α_1Z VFCR_nt + α_2Z Control_nt + γ_Z X_nt + θ_Z,t + θ_Z,n + ε_Z,nt,
where Z_nt represents an outcome variable of interest for household n in year t.
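A sketch of these two stages, with hypothetical column names (vfcr for village fund credit, hh_v6 for households in the village at t = 6, z for the outcome, control and x1 for the controls); fixed effects enter as dummies via C(). In practice the two stages would be combined by instrumenting VFCR (e.g., two-stage least squares), which this illustration does not do.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("thai_panel.csv")                # hypothetical file
    df["inv_size"] = 1_000_000 / df["hh_v6"]          # baht per household
    for j in (6, 7):                                  # post-program years
        df[f"inv_size_t{j}"] = df["inv_size"] * (df["t"] == j)

    # First stage: village fund credit on inverse village size in post years.
    first = smf.ols("vfcr ~ inv_size_t6 + inv_size_t7 + control + x1"
                    " + C(t) + C(hh)", data=df).fit()

    # Second-stage outcome equation (2.1), with the same controls.
    second = smf.ols("z ~ vfcr + control + x1 + C(t) + C(hh)", data=df).fit()
    print(second.params["vfcr"])                      # the alpha_1Z estimate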
15 The general formula for Moran's statistic is

I = [n / (Σ_i Σ_j w_ij)] × [Σ_i Σ_j w_ij (z_i − z̄)(z_j − z̄)] / [Σ_i (z_i − z̄)²],

where n is the number of observations (villages), z_i is the statistic for observation i (village size of village i), and w_ij is the weight given villages depending on their spatial distance. Here we use inverse Cartesian distance between villages.
16 Forest conservation efforts have driven some redistricting decisions, but these decisions have been largely haphazard and unsystematic. For discussions, see Puginier (2001) and Gine (2005).
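A direct transcription of footnote 15's statistic, with w the matrix of inverse pairwise Cartesian distances (diagonal zeroed) and z the village sizes; the random inputs are only for illustration.

    import numpy as np

    def morans_i(z, w):
        # I = [n / sum_ij(w_ij)] * sum_ij w_ij dev_i dev_j / sum_i dev_i^2.
        dev = np.asarray(z, dtype=float) - np.mean(z)
        return (len(dev) / w.sum()
                * (w * np.outer(dev, dev)).sum() / (dev ** 2).sum())

    # Illustration: 50 villages at random coordinates.
    rng = np.random.default_rng(1)
    xy = rng.uniform(size=(50, 2))
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    w = np.divide(1.0, d, out=np.zeros_like(d), where=d > 0)
    print(morans_i(rng.poisson(150, size=50), w))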
Comparing the two equations, the crucial variable in the first stage is inverse village size in the post-intervention years (the latter captured by the indicator function I_{t=j}), since it creates variation in VFCR_nt but is excluded from the second-stage outcome equation. Although there is heterogeneity across households and nonlinearity in the impact of credit, α̂_1Z captures (a linear approximation of) the average impact of a dollar of credit on the outcome Z_nt. The sets of controls in the above equations are X_nt, a vector of demographic household controls; year fixed effects (θ_VFCR,t and θ_Z,t); household fixed effects (θ_VFCR,n and θ_Z,n); and Control_nt, which captures the general role of village size, so as to emphasize that the impact identified is specific to the post-intervention years. We used two alternative specifications for Control_nt:

1,000,000 / (# HHs in village_v6)   and   [1,000,000 / (# HHs in village_v6)] × t.

Given the first specification, α̂_2Z,"levels" captures the relationship between village size and the level of the outcome that is common to both the pre- and post-intervention years. In the latter specification, α̂_2Z,"trends" captures the relationship between village size and the trend in the outcome variable. The level specification is of less interest, since our results are unlikely to be contaminated by level differences; household fixed effects θ_Z,n already capture persistent level differences (across households and villages), and our analysis utilizes household fixed effects. Moreover, α̂_2Z,"levels" is only identified from within-village variation in village size (i.e., the sizes of given villages varying over the years of the panel), which constitutes only 5 percent of the total variation in village size, and our analysis only uses village size in one year, the first year of the intervention (t = 6). The trend specification is therefore of more relevance.

Table I presents α̂_2Z,"trends" results for the 37 different outcome variables Z_nt from Kaboski and Townsend (2009), and three additional variables relevant to this study: investment probability, default probability, and total consumption. Together, these regressions cover the details of household income, consumption, investment, and borrowing activities. Only 2 of these 40 estimates are significant, even at a conservative 10 percent level; smaller villages were associated with higher growth in the fraction of income coming from rice and faster growth in the amount of credit from commercial banks. In terms of economic significance, this means that for the average village, the rice fraction falls by 1 percentage point a year less than in the largest village. Similarly, the amount of commercial bank credit rises by 500 baht (12 dollars) a year more than in the largest village.17
TABLE I
PRE-EXISTING TRENDS BY INVERSE VILLAGE SIZE (STANDARD ERRORS IN PARENTHESES)

Outcome Variable, Z                           α̂_2Z,"trends"
Village fund short-term credit                0.01 (0.02)
Total short-term credit                       0.09 (0.15)
BAAC credit                                   0.04 (0.10)
Commercial bank credit                        0.05b (0.03)
Agricultural credit                           −0.07 (0.04)
Business credit                               0.04 (0.10)
Fertilizer credit                             0.14 (0.10)
Consumption credit                            0.05 (0.08)
Short-term interest rate                      −1.6e−7 (5.3e−7)
Probability in default                        −9.8e−7 (1.3e−6)
Credit in default                             −1.1e−6 (1.5e−6)
Informal credit                               0.00 (0.09)
Income growth                                 −7.2e−6 (4.5e−6)
Fraction of net income from business          2.0e−7 (3.9e−7)
Fraction of income from wages                 9.1e−7 (6.1e−7)
Fraction of income from rice                  1.0e−6a (5.6e−7)
Fraction of income from other crops           7.9e−8 (4.1e−7)
Fraction of income from livestock             6.2e−8 (3.8e−7)
Log asset growth                              6.0e−7 (2.9e−6)
Number of new businesses                      4.9e−7 (1.1e−6)
Business investment                           0.03 (0.19)
Agricultural investment                       0.04 (0.13)
Investment probability                        5.1e−5 (2.1e−4)
Fertilizer expenditures                       −0.04 (0.06)
Total wages paid to laborers                  0.19 (0.12)
Consumption                                   0.19 (0.27)
Nondurable consumption                        0.09 (0.21)
Grain consumption                             −0.03 (0.04)
Milk consumption                              0 (0.01)
Meat consumption                              0.01 (0.01)
Alcohol consumption, in the house             −0.01 (0.01)
Alcohol consumption, outside of the house     −0.01 (0.01)
Fuel consumption                              −0.02 (0.03)
Tobacco consumption                           −0.01 (0.01)
Education expenditures                        0.03 (0.02)
Ceremony expenditures                         −0.01 (0.03)
Housing repair expenditures                   0.06 (0.14)
Vehicle repair expenditures                   0.00 (0.03)
Clothing expenditures                         0.00 (0.01)
Meal expenditures away from home              0.00 (0.01)

a Significant at a 10% level.
Though not presented in the table, the estimates for α̂_2Z,"levels" also show few significant relationships.18

17 More generally, one million divided by the number of households averages roughly 10,000 in our sample, so the economic magnitude on a per-year basis is the coefficient multiplied by 10,000. Note that the coefficient on investment probability is positive and an order of magnitude larger than our results in Section 5, but the standard deviation is 2 orders of magnitude larger, and so it is insignificant.
We also note that our results are robust to whether these controls are included. Thus, we have a measure of confidence that preexisting differences in levels or trends associated with inverse village size are few and small.

2.2. Reduced Form Impacts

The above regressions produce several interesting "impact" estimates α̂_1Z, as reported in detail in our companion paper (Kaboski and Townsend (2009)).19 With regard to credit, the program expanded village fund credit roughly one for one, with the coefficient α̂_1VFCR close to 1. Second, total credit overall appears to have had a similar expansion, with an α̂_1Z near 1, and there is no evidence of crowding out in the credit market. Finally, the expansion did not occur through a reduction in interest rates. Indeed, the α̂_1Z is positive, though small, for interest rates.

Household consumption was obviously and significantly affected by the program, with an α̂_1Z point estimate near 1. The higher level of consumption was driven by nondurable consumption and services, rather than by durable goods. While the frequency of agricultural investments did increase mildly, total investment showed no significant response to the program. The frequency of households in default increased mildly in the second year, but default rates remained less than 15 percent of loans. Asset levels (including savings) declined in response to the program, while income growth increased weakly.20

Together, these results are puzzling. In a perfect credit, permanent income model, with no changes in prices, unsubsidized credit should have no effect, while subsidized credit would simply have an income effect. If credit did not need to be repaid, this income effect would be bounded above by the amount of credit injected. Yet repayment rates were actually quite high, with only 3 percent of village fund credit in default in the last year of the survey. But again, even if credit were not repaid, an income effect would produce at most a coefficient of the market interest rate (less than 0.07); that is, the household would keep the principal of the one-time wealth shock and consume the interest.

18 Again using a more conservative 10 percent level of significance, only 3 out of 39 coefficients (8 percent) were significant. Small villages tended to have higher levels of short-term credit in fertilizer (α̂_2Z,"levels" = 1.14 with a standard error of 0.50) and higher shares of total income from rice (8.3e−6, std. error 3.0e−6) and other crops (4.1e−6, std. error 2.2e−6). Thus, as villages grow, they appear to become somewhat less agrarian.
19 The sample in Kaboski and Townsend (2009) varies slightly from the sample in this paper. Here we necessarily exclude 118 households that did not have a complete set of data for all 7 years. To avoid confusion, we do not report the actual Kaboski and Townsend (2009) estimates here.
20 Wage income also increased in response to the shock, which is a focus of Kaboski and Townsend (2009). The increase is quite small relative to the increase in consumption, however, and so this has little promise in explaining the puzzles. We abstract from general equilibrium effects on the wage and interest rate in the model we present.
The fact that households appear to have simply increased their consumption by the value of the funds lent is therefore puzzling. Given the positive level of observed investment, the lack of a response of investment might point to well functioning credit markets, but the large responses of credit and consumption indicate the opposite. Thus, the coefficients overall require a theoretical and quantitative explanation.

2.3. Underlying Environment

Growth, savings–credit, default, and investment are key features in the Thai villages during the pre-intervention period (as well as afterward). Household income growth averages 7 percent over the panel, but both income levels and growth rates are stochastic. Savings and credit are important buffers against income shocks (Samphantharak and Townsend (2009)), but credit is limited (Puentes (2011)). Income shocks are neither fully insured nor fully smoothed (Chiappori et al. (2008)), and Karaivanov and Townsend (2010) concluded that savings and borrowing models and savings-only models fit the data better than alternative mechanism design models. High income households appear to have access to greater credit. That is, among borrowing households, regressions of log short-term credit on log current income yield a coefficient of 0.32 (std. error = 0.02).

Related, default occurs in equilibrium and appears to be one way to smooth against shocks. In any given year, 19 percent of households are over 3 months behind in their payments on short-term (less than 1 year) debts. Default is negatively related to current income, but household consumption is substantial during periods of default, averaging 164 percent of current income, and positively related to income. Using only years of default, regressions of log consumption on log income yield a regression coefficient of 0.41 (std. error = 0.03).

Finally, investment plays an important role in the data, averaging 10 percent of household income. It is lumpy, however. On average, only 12 percent of households invest in any given year. Investment is large in years when investment occurs and highly skewed, with a mean of 79 percent of total income and a median of 15 percent. When they invest, high income households make larger investments; a regression of log investment on the (log) predictable component of income yields a significant regression coefficient of 0.57 (std. error = 0.15).21 High-income households still invest infrequently, however, and indeed the correlation between investment and predictable income is 0.02 and insignificant.

21 These predictions are based on a regression of log income on age of head of household, squared age, number of males, number of females, number of kids, and log assets.
Related, investment is not concentrated among the same households each year. If the average probability of investing (0.12) were independent across years and households, one would predict that 47 percent (= 1 − 0.88^5) of households would invest at least once over the 5 years of pre-intervention data. This is quite close to the 42 percent that is observed.

The next section develops a model broadly consistent with this underlying environment.

3. MODEL

We address key features of the data by developing a model of a household facing permanent and transitory income shocks and making decisions about consumption, low-yield liquid savings, high-yield illiquid investment, and default. The household is infinitely lived and, to allow for growth, tractability requires that we make strong functional form assumptions.22 In particular, the problem is written so that all constraints are linear in the permanent component of income, so that the value function and policy functions can all be normalized by permanent income. We do this to attain a stationary, recursive problem.

3.1. Sequential Problem

At t + 1, liquid wealth L_{t+1} includes the principal and interest on liquid savings from the previous period, (1 + r)S_t (negative for borrowing), and current realized income Y_{t+1}:
(3.1)  L_{t+1} ≡ Y_{t+1} + S_t(1 + r).
Following the literature on precautionary savings (e.g., Zeldes (1989), Carroll (1997), Gourinchas and Parker (2002)), current income Y_{t+1} consists of a permanent component of income P_{t+1} and a transitory one-period shock U_{t+1}, additive in logs:
(3.2)  Y_{t+1} ≡ P_{t+1} U_{t+1}.
We follow the same literature in modeling an exogenous component of permanent income that follows a random walk (again in logs) based on shock N_t with drift G. Meghir and Pistaferri (2004) presented strong evidence for the importance of permanent income shocks in the United States.

22 We model an infinitely lived household for several reasons. Using a life-cycle approach in the United States, Gourinchas and Parker (2002) showed that life-cycle savings plays a relatively smaller role until the last 10 years before retirement. In the rural Thai context, there is no set retirement age or pension system, and households often include family from multiple generations. Deaton and Paxson (2000) showed that profiles of household head age versus household savings do not fit the life-cycle theory well.
We believe that the standard ideas of permanent income shocks (e.g., long-term illness or disability, obsolescence of specialized human capital, shocks affecting the profitability of businesses or capital) are at least as important in a developing country context. Nonetheless, our innovation in this paper is also to allow for endogenous increases in permanent income through investment.23 Investment is indivisible: the household makes a choice D_It ∈ {0, 1} whether to undertake a lumpy investment project of size I_t* or not to invest at all. In sum,
(3.3)  $P_{t+1} = P_t G N_{t+1} + R\, D_{I,t}\, I_t^*$
Investment is also illiquid and irreversible, but again it increases permanent income, at a rate $R$ that is higher than the interest rate on liquid savings, $r$, and sufficiently high to induce investment for households with high enough liquidity. Having investment increase the permanent component of future income simplifies the model by allowing us to track only $P_t$ rather than multiple potential capital stocks.^{24} While we have endogenized an important element of the income process, we abstract from potentially endogenous decisions such as labor supply, and the linearity in $R$ abstracts from any diminishing returns that would follow from a nonlinear production function. Project size is stochastic, governed by an exogenous shock $i_t^*$, and proportional to the permanent component of income:
(3.4)  $I_t^* = i_t^* P_t$
We assume that investment opportunities $I_t^*$ are increasing in permanent income $P_t$, which the data seem to support. A more flexible specification would be $I_t^* = i_t^* P_t^{\omega}$. A regression of log investment on the log of the component of income predicted by observables (a proxy for $P_t$) yields a coefficient of 0.57, indicating an $\omega < 1$. Still, our assumption of linearity ($\omega = 1$) is necessary for analytical tractability, and it yields results consistent with investment decisions being uncorrelated with the predictable component of income (as described in Section 2.3).^{25}
^{23} Low, Meghir, and Pistaferri (2010) endogenize permanent income in the U.S. context through participation and occupational mobility decisions.
^{24} This approach ignores many issues of investment "portfolio" decisions and risk diversification. Still, the lumpy investment does capture the important portfolio decision between a riskless, low-yield, liquid asset and a risky, illiquid asset, which is already beyond what is studied in a standard buffer-stock model. We can show this by defining $A_t \equiv P_t/R$ and using (3.1), (3.2), (3.3), and (3.6) to write $A_t + S_t = (RU_t + GN_t)A_{t-1} + S_{t-1}(1+r) - C_t$. Physical assets $A_t$ pay a stochastic gross return of $(RU_t + GN_t)$, while liquid savings pay a fixed return of $(1+r)$.
The linearity we assume is consistent with the empirical literature, where large firms invest higher amounts, and so investment is typically scaled by size. Liquid savings can be negative, but borrowing is bounded by a limit that is a multiple $\underline{s}$ of the permanent component of income. That is, when $\underline{s}$ is negative, borrowing is allowed, and the more negative it is, the more that can be borrowed. This is the key parameter that we calibrate to the intervention:
(3.5)  $S_t \ge \underline{s} P_t$
For the purposes of this partial equilibrium analysis, this borrowing constraint is exogenous. It is not a natural borrowing constraint as in Aiyagari (1994) and, therefore, it is somewhat ad hoc, but such a constraint can arise endogenously in models with limited commitment (see Wright (2002)) or where lenders have rights to garnish a fraction of future wages (e.g., Lochner and Monge-Naranjo (2008)). Most importantly, it allows for default (see below), which is observed in the data and is of central interest to microfinance interventions. In period 0, the household begins with a potential investment project of size $I_0^*$, a permanent component of income $P_0$, and liquid wealth $L_0$, all as initial conditions. The household's problem is to maximize expected discounted utility by choosing a sequence of consumption $C_t > 0$, savings $S_t$, and decisions $D_{I,t} \in \{0,1\}$ of whether or not to invest:
(3.6)  $V(L_0, I_0^*, P_0; s) = \max_{\{C_t>0\},\,\{S_{t+1}\},\,\{D_{I,t}\}} E_0\left[\sum_{t=0}^{\infty} \beta^t \frac{C_t^{1-\rho}}{1-\rho}\right]$
subject to (3.1), (3.2), (3.3), (3.4), (3.5), and $C_t + S_t + D_{I,t} I_t^* \le L_t$. The expectation is taken over sequences of permanent income shocks $N_t$, transitory income shocks $U_t$, and investment size shocks $i_t^*$. These shocks are each independent and identically distributed (i.i.d.) and orthogonal to one another:
• $N_t$ is a random walk shock to permanent income, $\ln N_t \sim N(0, \sigma_N^2)$.
• $U_t$ is a temporary (one-period) income shock, $u_t \equiv \ln U_t \sim N(0, \sigma_u^2)$.
• $i_t^*$ is project size (relative to permanent income), $\ln i_t^* \sim N(\mu_i, \sigma_i^2)$.
Household policies are to invest in all projects below a threshold $I_t^*$, call it $\tilde{I}_t^*$.

^{25} If investment opportunities did not increase with $P_t$ (i.e., $\omega = 0$), then high $P_t$ households would invest at a higher rate than poor households, since the threshold $\tilde{I}_t^*$ would be higher for high $P_t$. We cannot solve this case, but we conjecture that it would be quantitatively important, since given the relatively low frequency of investment (12 percent), the cutoff $\tilde{I}_t^*$ would typically fall on the left tail of the lognormal $i_t^*$ distribution, where the density and inverse Mills ratio are high.
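For concreteness, the sketch below simulates the exogenous shock processes and the laws of motion (3.1)–(3.4). The parameter values and the consumption and investment rules are illustrative placeholders only; the optimal policies must be computed as in Section 3.2, and the borrowing limit (3.5) and default option are not enforced here.

```python
import numpy as np

# Illustrative placeholder parameters; the estimated values appear in Table III.
G, R, r = 1.05, 0.10, 0.05           # drift, return on investment, interest rate
sigma_N, sigma_u = 0.31, 0.42        # log std. dev. of permanent/transitory shocks
mu_i, sigma_i = 0.0, 1.0             # log project-size distribution (placeholder)
T = 20
rng = np.random.default_rng(0)

P, S = 1.0, 0.0                      # initial permanent income and liquid savings
for t in range(T):
    U = rng.lognormal(0.0, sigma_u)             # transitory shock, eq. (3.2)
    Y = P * U                                   # current income
    L = Y + S * (1 + r)                         # liquid wealth, eq. (3.1)
    I_star = rng.lognormal(mu_i, sigma_i) * P   # project size, eq. (3.4)
    D_I = 1 if I_star < 0.5 * L else 0          # placeholder rule, NOT the optimal policy
    C = 0.7 * (L - D_I * I_star)                # placeholder consumption rule
    S = L - C - D_I * I_star                    # budget constraint from (3.6)
    N = rng.lognormal(0.0, sigma_N)             # permanent shock
    P = P * G * N + R * D_I * I_star            # law of motion, eq. (3.3)
```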
If $\underline{s} < 0$, an agent with debt (i.e., $S_{t-1} < 0$) and a sufficiently low income shock may need to default. That is, with $L_t = Y_t + S_{t-1}(1+r)$, even with zero consumption and investment, the liquid assets budget constraint (3.6) could imply $S_t < \underline{s} P_t$. Essentially, given (3.5), a bad enough shock to permanent income (i.e., a low $N_t$) can produce a "margin call" on credit that exceeds current liquidity. In this case, we assume default allows for a minimum consumption level that is proportional to permanent income ($c P_t$). Defining the default indicator $D_{def,t} \in \{0,1\}$, this condition for default is expressed as
(3.7)  $D_{def,t} = \begin{cases} 1 & \text{if } (\underline{s}+c)P_t > L_t, \\ 0 & \text{otherwise,} \end{cases}$
and the defaulting household's policy for the period becomes $C_t = cP_t$, $S_t = \underline{s}P_t$, $D_{I,t} = 0$.

This completes the model. The above modeling assumptions are strong and not without costs. Still, as we have seen, they are motivated by the data, and they do have analytical benefits beyond allowing us to deal easily with growth. First, the model is simple and has limited heterogeneity, but consequently it has a low-dimensional, tractable state space $\{L, I^*, P\}$ and parameter space $\{r, \sigma_N, \sigma_u, G, c, \beta, \rho, \mu_i, \sigma_i, \underline{s}\}$. Hence, the role of each state and parameter can be more easily understood. Furthermore, the linearity of the constraints in $P_t$ reduces the dimensionality of the state space to two, which allows for graphical representation of policy functions (in Section 5.2). The next subsection derives the normalized, recursive representation.^{26}

3.2. Normalized and Recursive Problem

Above, we explicitly emphasized the value function's dependence on $s$, since this is the parameter of most interest in considering the microfinance intervention in Section 5. We drop this emphasis in the simplifying notation that follows.

^{26} Since all conditions are linear in $P_t$, we avoid the problems that unbounded returns lead to in representing infinite horizon models in recursive fashion (see Stokey, Lucas, and Prescott (1989)). In particular, the conditions for the equivalence of the recursive and sequential problems and existence of the steady state are straightforward extensions of conditions given in Alvarez and Stokey (1998) and Carroll (2004). In particular, for $\rho < 1$, $G$ and $RE[i^*]$ must be sufficiently bounded.
Using lowercase variables to indicate variables normalized^{27} by permanent income, the recursive problem becomes
$V(L, I^*, P) \equiv P^{1-\rho}\, v(l, i^*)$,
$v(l, i^*) = \max_{c,\, s,\, d_I} \left\{ \frac{c^{1-\rho}}{1-\rho} + \beta E\left[(p')^{1-\rho}\, v(l', i^{*\prime})\right] \right\}$
subject to
(3.8)  $\lambda:\; c + s + d_I i^* \le l$   [from (3.6)]
       $\phi:\; s \ge \underline{s}$   [from (3.5)]
       $p' = G N' + R\, d_I\, i^*$   [from (3.3)]
(3.9)  $l' = y' + \dfrac{s(1+r)}{p'}$   [from (3.1)]
(3.10)  $y' = U'$   [from (3.2)]
We further simplify by substituting $l'$ and $y'$ into the continuation value using (3.9) and (3.10), and substituting $s$ out using the liquidity budget constraint (3.8), which holds with equality, to yield
(3.11)  $v(l, i^*) = \max_{c,\, d_I} \frac{c^{1-\rho}}{1-\rho} + \beta E\left[(p')^{1-\rho}\, v\!\left(U' + \frac{(1+r)(l - c - d_I i^*)}{p'},\; i^{*\prime}\right)\right]$
subject to
(3.12)  $\phi:\; (l - c - d_I i^*) \ge \underline{s}$,
(3.13)  $p' = G N' + R\, d_I\, i^*$.
The normalized form of the problem has two advantages. First, it lowers the dimensionality of the state variable to two. Second, it allows the problem to have a steady state solution. Using an asterisk (*) to signify optimal decision rules, the necessary conditions for optimal consumption $c^*$ and investment decisions $d_I^*$ are^{28}
(3.14)  $(c^*)^{-\rho} = \beta(1+r)\, E\left[(p')^{-\rho}\, \frac{\partial v}{\partial l'}\!\left(U' + \frac{(1+r)(l - c^* - d_I^* i^*)}{p'},\; i^{*\prime}\right)\right] + \phi$
^{27} Here the decision whether to invest, $d_I$, is not a normalized variable and is, in fact, identical to $D_I$ in the earlier problem. We denote it in lowercase to emphasize that it depends only on the normalized states $l$ and $i^*$.
^{28} Although the value function is kinked, it is differentiable almost everywhere, and the smooth expectation removes any kink in the continuation value.
(3.15)  $\frac{(c^*)^{1-\rho}}{1-\rho} + \beta E\left[(p')^{1-\rho}\, v\!\left(U' + \frac{(1+r)(l - c^* - d_I^* i^*)}{p'},\; i^{*\prime}\right)\right] \;\ge\; \frac{(c^{**})^{1-\rho}}{1-\rho} + \beta E\left[(p')^{1-\rho}\, v\!\left(U' + \frac{(1+r)[l - c^{**} - (1 - d_I^*) i^*]}{p'},\; i^{*\prime}\right)\right]$
Equation (3.14) is the usual credit-constrained Euler equation. The multiplier $\phi$ is nonzero only when the credit constraint (3.12) binds, that is, when $c^* = l - \underline{s} - d_I^* i^*$. Equation (3.15) ensures that the value given the optimal investment decision $d_I^*$ exceeds the maximum value given the alternative $1 - d_I^*$; $c^{**}$ indicates the optimal consumption under this alternative investment decision (i.e., $c^{**}$ satisfies the analog to (3.14) for $1 - d_I^*$). In practice, the value function and optimal policy functions must be solved numerically, and indeed the indivisible investment decision complicates the computation.^{29} Figure 2 presents a three-dimensional graph of a computed value function. The flat portion at very low levels of liquidity $l$ comes from the minimum consumption and default option. The dark line highlights a groove that goes through the middle of the value function surface along the critical values at which households first decide to invest in the lumpy project.
FIGURE 2.—Value function versus liquidity ratio and project size.
^{29} Details of the computational approach and codes are available from the authors on request.
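As a rough illustration of how such a computation might proceed, the following deliberately unoptimized value function iteration solves a simplified version of (3.11)–(3.13) on coarse grids, integrating out next period's project size with discretized lognormal weights and omitting the default option. All parameter values and grid choices are placeholders, not the authors' code.

```python
import numpy as np

# Illustrative placeholders throughout; not the estimated parameter values.
beta, rho, r, G, R, s_bar = 0.92, 1.2, 0.05, 1.05, 0.10, -0.08
mu_i, sig_i = 0.0, 1.0
rng = np.random.default_rng(1)
N_draw = np.exp(rng.normal(0, 0.31, 100))   # permanent shocks N'
U_draw = np.exp(rng.normal(0, 0.42, 100))   # transitory shocks U'

l_grid = np.linspace(s_bar + 0.05, 10.0, 30)
i_grid = np.geomspace(0.05, 20.0, 10)
# discretized lognormal weights to integrate out next period's project size i*'
pdf = np.exp(-(np.log(i_grid) - mu_i) ** 2 / (2 * sig_i ** 2)) / i_grid
w_i = pdf / pdf.sum()

v = np.zeros((l_grid.size, i_grid.size))    # guess for v(l, i*)

for it in range(100):
    Ev = v @ w_i                            # w(l) = E over i*' of v(l, i*')
    v_new = np.full_like(v, -np.inf)
    for j, i_star in enumerate(i_grid):
        for d in (0, 1):                    # invest or not, eq. (3.13)
            p1 = G * N_draw + R * d * i_star
            for k, l in enumerate(l_grid):
                c_max = l - d * i_star - s_bar      # constraint (3.12)
                if c_max <= 1e-8:
                    continue
                for c in np.linspace(1e-3, c_max, 15):
                    l1 = U_draw + (1 + r) * (l - c - d * i_star) / p1
                    cont = np.interp(l1, l_grid, Ev)  # flat extrapolation at edges
                    val = c ** (1 - rho) / (1 - rho) \
                        + beta * np.mean(p1 ** (1 - rho) * cont)
                    v_new[k, j] = max(v_new[k, j], val)
    if np.max(np.abs(v_new - v)) < 1e-6:
        break
    v = v_new
```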
Naturally, these threshold levels of liquidity are increasing in the size of the project. The slope of the value function with respect to $l$ increases at this point because the marginal utility of consumption increases at the point of investment.^{30} Consumption actually falls as liquidity increases beyond this threshold.

FIGURE 3.—Consumption policy for fixed $i^*$, baseline, and reduced borrowing constraint.

Figure 3, panel A, illustrates this more clearly by showing a cross section of the optimal consumption policy as a function of normalized liquidity for a given value of $i^*$. At the lowest values, households are in default. At low values of liquidity, no investment is made, households consume as much as possible given the borrowing constraint, and hence the borrowing constraint holds with equality. At higher liquidity levels, this constraint is no longer binding, as savings $s$ exceeds the lower bound $\underline{s}$. At some crucial level of liquidity $l^*$, the household chooses to invest in the lumpy project, at which point consumption falls and the marginal propensity to consume out of additional liquidity increases. Although not pictured, for some parameter values (e.g., very high $R$), the borrowing constraint can again hold with equality, and marginal increases in liquidity are used purely for consumption.^{31}
^{30} Given the convex kink in the value function, households at or near the kink would benefit from lotteries, which we rule out, consistent with the idea that borrowing and lending subject to limits are the only forms of intermediation.
^{31} Using a buffer-stock model, Zeldes (1989) derived reduced form equations for consumption growth and found that consumption growth was significantly related to current income, but only for low wealth households, interpreted as evidence of credit constraints. We run similar consumption growth equations that also contain investment as an explanatory variable:
$\ln(C_{n,t+1}/C_{n,t}) = X_{n,t}\beta_1 + \beta_2 Y_{n,t} + \beta_3 I_{n,t} + \varepsilon_{n,t}$.
For the low wealth sample, we find significant estimates $\hat{\beta}_2 < 0$ and $\hat{\beta}_3 > 0$, which is consistent with the prediction of investment lowering current consumption (thereby raising future consumption growth).
Panel B of Figure 3 shows the effect of a surprise permanent decrease in $\underline{s}$ on the optimal consumption policy for the same given value of $i^*$. Consumption increases for liquidity levels in every region, except for the region that is induced into investing by more access to borrowing.

An additional interesting prediction of the model is that, for a given level of borrowing ($s_t < 0$), a household that invests ($d_{I,t} = 1$) has a lower probability of default next period. Conditional on investing, the default probability is further decreasing in the size of investment. Thus, other things equal, borrowing to invest leads to less default than borrowing to consume, because investment increases future income and, therefore, the ability to repay. The maximum amount of debt that can be carried over into the next period (i.e., $-\underline{s}P_t$) is proportional to permanent income. Because investment increases permanent income, it increases the borrowing limit next period and, therefore, reduces the probability of a "margin call" on outstanding debt. One can see this formally by substituting the definitions of liquidity (3.1) and income (3.2), and the law of motion for permanent income (3.3), into the condition for default (3.7) to yield
(3.16)  $E(D_{def,t+1} \mid S_t, P_t, D_{I,t}, I_t^*) = \Pr\left(U_{t+1} < (\underline{s}+c) - \frac{S_t(1+r)}{P_t G N_{t+1} + R\, D_{I,t}\, I_t^*}\right)$
Since $S_t$ is negative and $R$ is positive, the right-hand side of the inequality is decreasing in both $D_{I,t}$ and $I_t^*$. Since both $N_{t+1}$ and $U_{t+1}$ are independent of investment, the probability is therefore decreasing in $D_{I,t}$ and $I_t^*$.
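Since $\ln U_{t+1}$ and $\ln N_{t+1}$ are normal, the probability in (3.16) is straightforward to evaluate by simulation. A minimal sketch, with hypothetical parameter and state values:

```python
import numpy as np

# Evaluate eq. (3.16) by Monte Carlo. All values here are illustrative placeholders.
s_bar, c, G, R, r = -0.08, 0.52, 1.05, 0.10, 0.05
sigma_N, sigma_u = 0.31, 0.42
S_t, P_t, D_I, I_star = -0.5, 1.0, 1, 2.0   # hypothetical state: borrowing and investing

rng = np.random.default_rng(2)
N1 = np.exp(rng.normal(0, sigma_N, 100_000))   # draws of N_{t+1}
U1 = np.exp(rng.normal(0, sigma_u, 100_000))   # independent draws of U_{t+1}
cutoff = (s_bar + c) - S_t * (1 + r) / (P_t * G * N1 + R * D_I * I_star)
prob = np.mean(U1 < cutoff)                    # Pr(default next period)
print(f"default probability: {prob:.4f}")
```

Raising `D_I` or `I_star` lowers `cutoff` draw by draw, so the computed probability falls, which is the monotonicity argued in the text.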
4. ESTIMATION

This section addresses the data used and then the estimation approach. The model is quite parsimonious, with a total of 11 parameters. Due to poor identification, we calibrate the return on investment parameter, $R$, using a separate data source. After adding classical measurement error on income with log variance $\sigma_E^2$, we estimate the remaining parameters $\theta = \{r, \sigma_N, \sigma_u, \sigma_E, G, c, \beta, \rho, \mu_i, \sigma_i, \underline{s}\}$ via MSM using the optimal weighting matrix. This estimation is performed using 5 years (1997–2001) of pre-intervention data, so that $t = 1$ corresponds to the year 1997.

4.1. Data

The data come from the Townsend Thai data project, an ongoing panel data set of a stratified, clustered, random sample of institutions (256 in 2002),
households (960 each year; 715 with complete data in the pre-experiment balanced panel used for estimation; and 720 and 700 in 2002 and 2003, respectively, which are used to evaluate the model's predictions), and key informants for the village (64, one in each village). The data are collected from 64 villages in four provinces: Buriram and Srisaket in the northeast region, and Lopburi and Chachoengsao in the central region. The components used in this study include detailed data from households and household businesses on their consumption, income, investment, credit, liquid assets, and the interest income from these assets, as well as village population data from the village key informants. All data have been deflated using the Thai consumer price index to the middle of the pre-experiment data, 1999.

The measure of household consumption we use (denoted $\tilde{C}_{nt}$ for household $n$ at time $t$) is calculated using detailed data on monthly expenditures for 13 key items and scaled up using weights derived from the Thai Socioeconomic Survey.^{32} In addition, we include household durables in consumption, though durables play no role in the observed increases in consumption. The measure of investment ($\tilde{I}_{nt}$) we use is total farm and business investments, including livestock and shrimp/fish farm purchases.

We impute default each year for households that report one or more loans due in the previous 15 months that are outstanding at least 3 months. Note that (i) this includes all loans, not just short-term loans, since any (nonvoluntary) default indicates a lack of available liquidity, and (ii) due dates are based on the original terms of the loan, since changes in duration are generally a result of default.^{33} This only approximates default in the model: it may underestimate default because of underreporting, but overestimate default as defined in the model or to the extent that late loans are eventually repaid.

The income measure we use (denoted $\tilde{Y}_{nt}$) includes all agricultural, wage, business, and financial income (net of agricultural and business expenses), but excludes interest income on liquid assets such as savings deposits, as in the model. Our savings measure ($S_{nt}$) includes not only savings deposits in formal and semiformal financial institutions, but also the value of rice holdings in the household. Cash holdings are unfortunately not available. The measure of liquid credit ($CR_{nt}$) is short-term credit with loan durations of 1 year or less. The measure of interest income on liquid savings ($\text{EARNED\_INT}_{nt}$) is interest income in year $t$ on savings in formal and semiformal institutions. The interest owed on credit ($\text{OWED\_INT}_{nt}$) is the reported interest owed on short-term credit.
^{32} The tildes represent raw data, which are normalized in Section 4.1.1.
^{33} According to this definition, the default probability is about 19 percent, but alternative definitions can produce different results. The probability for short-term loans alone is just 12 percent, for example. On the other hand, relabeling all loans from nonfamily sources that have no duration data whatsoever as in default yields a default probability of 23 percent. Our results for consumption and default hold for the higher rates of default.
While the data are high quality and detailed, measurement error is an important concern. Net income measures are complicated when expenditures and corresponding income do not coincide in the same year, for example. If income is measured with error, the amount of true income fluctuation will be overstated in the data, and household decisions may appear to be less closely tied to transitory income shocks; hence credit constraints may not appear to be important. Consumption and investment may also suffer from measurement error, but classical measurement error just adds additional variation to these endogenous variables and does not affect the moments, only the weighting matrix. A major source of measurement error for interest is that savings and borrowing may fluctuate within the year, so that the annual flow of both earned and paid interest may not accurately reflect interest on the end-of-year stocks contained in the data. This measurement error assists in the estimation. Table II presents key summary statistics for the data.

4.1.1. Adjusting the Data for Demographic and Cyclical Variation

The model is of infinitely lived dynasties that are heterogeneous only in their liquidity, permanent income, and potential investment. That is, in the model, the exogenous sources of variation among households come from given differences in initial liquidity or permanent income, and histories of shocks to permanent income, transitory income, and project size. Clearly, however, the data contain important variation due to heterogeneity in household composition, business cycle and regional variation, and unmodeled aspects of unobserved household heterogeneity. Ignoring these sources of variation would be problematic.

For household composition, to the extent that changes in household composition are predictable, the variance in income changes may not be capturing uncertainty but rather predictable changes in household composition. Likewise, consumption variation may not be capturing household responses to income shocks but rather predictable responses to changes in household composition. Failure to account for this would likely exaggerate both the size of income shocks and the response of household consumption to these shocks. In the data, the business cycle (notably the financial crisis in 1997 and subsequent recovery) also plays an important role in household behavior—investment and savings behavior, in particular. Although our post-program analysis focuses on the across-village differential impacts of the village fund program, and not merely the time changes, we do not want to confound the impacts with business cycle movements. Finally, differences in consumption across households, for example, may tell us less about past and current income shocks, and more about unobserved differences in preferences or consumption needs.

A common approach in structural modeling is to account for these sources of heterogeneity and predictable variation across households explicitly in the model and estimation (see Keane and Wolpin (1994, 1997, 2001)). These methods have the advantage of incorporating this heterogeneity into the household decision-making process, but they typically require finite horizons and discretizing the choice variables (e.g., consumption or savings).
TABLE II
SUMMARY STATISTICS OF PRE-INTERVENTION HOUSEHOLD DATA

Variable | Obs. | Mean | Std. Dev. | Min | Median | Max
Primary variables
Noninterest household income^a | 3575 | 87,200 | 202,000 | 500 | 50,300 | 6,255,500
Log growth of income^a | 2860 | 0.04 | 0.98 | −4.94 | 0.01 | 10.28
Household consumption^a | 3575 | 75,200 | 93,000 | 750 | 49,800 | 1,370,300
Dummy variable for agr./business investment | 3575 | 0.12 | 0.34 | 0 | 0 | 1
Value of agr./business investment^a | 3575 | 4760 | 30,200 | 0 | 0 | 715,700
Dummy variable for short-term default | 2860 | 0.194 | 0.395 | 0 | 0 | 1
Short-term credit^a | 3575 | 17,900 | 51,100 | 0 | 0 | 1,021,000
Interest paid^a | 3575 | 1300 | 3900 | 0 | 0 | 108,400
Liquid savings^a | 2860 | 25,000 | 132,000 | 0 | 5100 | 4,701,600
Interest earned^a | 3575 | 700 | 7200 | 0 | 0 | 18,000
Number of households in village | 3575 | 166 | 295 | 21 | 110 | 3194
Regressors for demographic/cyclical variation
Number of male adults | 3575 | 1.46 | 0.9 | 0 | 1 | 7
Number of female adults | 3575 | 1.56 | 0.75 | 0 | 1 | 6
Number of children | 3575 | 1.59 | 1.21 | 0 | 1 | 9
Dummy variable for male head of household | 3575 | 0.74 | 0.44 | 0 | 1 | 1
Years of education of head of household | 3575 | 6 | 3 | 0 | 7 | 15
Age of head of household | 3575 | 41 | 15 | 22 | 40 | 84

^a All values are in baht deflated to 1999. The 1999 PPP conversion rate is 31.6 baht/dollar.
Within the buffer-stock literature, a common approach has been to instead purge business cycle and household composition variation from the data (e.g., Gourinchas and Parker (2002), Carroll and Samwick (1997)). Though the former approach is certainly of interest, given the continuity of consumption, our infinite horizon, and the precedent within the buffer-stock literature, we follow the latter approach. We return to the issue of heterogeneity in the concluding section.

Specifically, we run linear regressions of log income, liquidity over income, log consumption, and default. (We do not take logs of liquidity, since it takes both positive and negative values, but instead normalize by income so that high values do not carry disproportionate weight.^{34}) The estimated equations are
$\ln \tilde{Y}_{nt} = \gamma_Y X_{nt} + \theta_{Yjt} + e_{Ynt}$,
$\tilde{L}_{nt}/\tilde{Y}_{nt} = \gamma_L X_{nt} + \theta_{Ljt} + e_{Lnt}$,
$\ln \tilde{C}_{nt} = \gamma_C X_{nt} + \theta_{Cjt} + e_{Cnt}$,
$\tilde{D}_{nt} = \gamma_D X_{nt} + \theta_{Djt} + e_{Dnt}$,
where $X_{nt}$ is a vector of household composition variables (i.e., number of adult males, number of adult females, number of children, a male head of household dummy, linear and squared terms in age of head of household, years of education of head of household, and a household-specific fixed effect) for household $n$ at time $t$, and $\theta_{\cdot jt}$ is a time-$t$-specific effect that varies by region $j$ and captures the business cycle. These regressions are run using only the pre-program data, 1997–2001, which ensures that we do not filter out the effects of the program itself. Unfortunately, the pre-program, time-specific effects cannot be extrapolated to the post-program data, so we rely on across-village, within-year variation to evaluate the model's predictions. The $R^2$ values for the four regressions are 0.63, 0.34, 0.76, and 0.31, respectively, so the regressions indeed account for a great deal of heterogeneity and variation.

For the full sample, 1997–2003, we construct the adjusted data for a household with mean values of the explanatory variables ($\bar{X}$ and $\bar{\theta}_{\cdot j}$) using the estimated coefficients and residuals:
$\ln Y_{nt} = \hat{\gamma}_Y \bar{X} + \bar{\theta}_{Yj} + g_y(t - 1999) + \hat{e}_{Ynt}$,
$L_{nt}/Y_{nt} = \hat{\gamma}_L \bar{X} + \bar{\theta}_{Lj} + \hat{e}_{Lnt}$,
$\ln C_{nt} = \hat{\gamma}_C \bar{X} + \bar{\theta}_{Cj} + g_c(t - 1999) + \hat{e}_{Cnt}$,
$D_{nt} = \hat{\gamma}_D \bar{X} + \bar{\theta}_{Dj} + \hat{e}_{Dnt}$,
where $g_y$ and $g_c$ are the average growth rates of the trending variables, income and consumption, respectively, in the pre-program data.

^{34} As noted before, 79 of the original 960 households realized negative income (net of business and agricultural expenses) at some point in the pre-intervention sample. The model yields only positive income, so these households were dropped.
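Schematically, the adjustment estimates the filtering regression on pre-program observations only and then rebuilds each series from a mean household's fitted value, a trend, and each observation's own residual. The sketch below is a deliberately simplified version that omits the household fixed effects and region-by-year effects:

```python
import numpy as np

def adjust(y, X, year, pre_mask, trend_rate=0.0, base_year=1999):
    """Filter demographic variation from a series y (e.g., log income).

    y:        (n,) outcome; X: (n, k) demographic regressors (as in Table II);
    pre_mask: boolean array, True for pre-program observations (1997-2001).
    The paper additionally includes household fixed effects and region-by-year
    effects; both are omitted here for brevity.
    """
    X1 = np.column_stack([np.ones(len(y)), X])
    # estimate the filter on pre-program years only, so program effects survive
    gamma, *_ = np.linalg.lstsq(X1[pre_mask], y[pre_mask], rcond=None)
    resid = y - X1 @ gamma                    # household-specific residuals, all years
    xbar = X1[pre_mask].mean(axis=0)          # mean household characteristics
    return xbar @ gamma + trend_rate * (year - base_year) + resid
```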
Next, we use a multiplicative scaling term to ensure that average income, liquidity ratios, consumption, and default are equal in the raw and adjusted data. Finally, we construct investment data $I_{nt}$ by multiplying the measured investment/income ratios ($\tilde{I}_{nt}/\tilde{Y}_{nt}$) by the newly constructed income data $Y_{nt}$.

4.2. Returns on Investment

In principle, income growth and investment data should tell us something about the return on investment, $R$. In practice, however, the parameter cannot be well estimated, because investment data are endogenous to current income and also because investment occurs relatively infrequently. We instead use data on physical assets rather than investment, and we calibrate $R$ to match the cross-sectional relationship between assets and income. To separate the effects of assets and labor quality on income, we assume that all human capital investments are made prior to investments in physical assets. Let $t - J$ indicate the first year of investing in physical assets. Then, substituting the law of motion for permanent income (equation (3.3)) $J$ times recursively into the definition of actual income (equation (3.2)) yields
$Y_t = \underbrace{P_{t-J}\, G^J \left(\prod_{j=1}^{J} N_{t+1-j}\right) U_t}_{\text{income from investment prior to } t-J} \;+\; \underbrace{R \sum_{j=1}^{J} I_{t-j}\, G^{j-1} \left(\prod_{k=1}^{j-1} N_{t+1-k}\right) U_t}_{\text{income from investment after } t-J}$
The first term captures income from the early human capital investments, which we measure by imputing wage income from linear regressions of wages on household characteristics (sex, age, education, region). The second term involves the return $R$ multiplied by the sum of the past $J$ years of investments (weighted by the deterministic and random components of growth). We measure this term using current physical assets. That is, $R$ is calibrated using the operational formula
$\varepsilon_R = Y_t - \text{imputed labor income}_t - R\,(\text{physical assets}_t)$.
We have the additional issue of how to deal with the value of housing and unused land. Neither source of assets contributes to $Y_t$, so we would ideally exclude them from the stock of assets.^{35} Using data on the (i) value of the home, (ii) value of the plot of land including the home, and (iii) value of unused or community-use land, we construct three variants of physical assets.
^{35} Our measure of $Y_t$ does not include imputed owner-occupied rent.
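Because $\varepsilon_R$ is linear in $R$, setting its sample average to zero has a closed-form solution; a minimal sketch of this moment-matching step, with hypothetical input arrays:

```python
import numpy as np

def calibrate_R(income, imputed_labor_income, physical_assets):
    """Pick R so that the sample mean of eps_R = Y - labor - R*assets is zero.

    Linearity in R gives the closed form R = mean(Y - labor) / mean(assets).
    Inputs are hypothetical per-household arrays (here, from the monthly survey
    described below).
    """
    return np.mean(income - imputed_labor_income) / np.mean(physical_assets)

# Per-household solutions of eps_R = 0, whose median the text compares:
#   R_n = (Y_n - labor_n) / assets_n
```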
We use a separate data set, the Townsend Thai Monthly Survey, to calibrate this return. The data are obtained from different villages, but the same overall survey area. The monthly survey has the advantage of including the wage data used to impute the labor income portion of total income. We use a procedure that is analogous to GMM: we choose $R$ to set the average $\varepsilon_R$ to zero in the sample of households. The baseline value (which excludes categories (i)–(iii) from assets) yields $R = 0.11$, while including (iii) or (ii) and (iii) yields $R = 0.08$ and $R = 0.04$, respectively. If we instead choose $R$ to solve $\varepsilon_R = 0$ for each household, the median $R$ values are identical to our estimates. Not surprisingly, however, $R$ varies substantially across households. This is likely in part because permanent shock histories and current transitory shocks differ across households, but also in part because households face different ex ante returns to investment.

4.3. Method of Simulated Moments

In estimating, we introduce multiplicative measurement error in income, which we assume is log normally distributed with zero log mean and standard deviation $\sigma_E$. Since liquidity $L_t$ is calculated using current income, measurement error in income also produces measurement error in liquidity. We therefore have 11 remaining parameters, $\theta = \{r, G, \sigma_N, \sigma_u, \sigma_E, c, \beta, \rho, \mu_i, \sigma_i, \underline{s}\}$, which are estimated using a method of simulated moments. The model parameters are identified jointly by the full set of moments. We include, however, an intuitive discussion of the specific moments that are particularly important for identifying each parameter.

The first two types of moments help identify the return to liquid savings, $r$:
$\varepsilon_s(X; r) = \text{EARNED\_INT}_t - r S_{t-1}$,
$\varepsilon_{cr}(X; r) = \text{OWED\_INT}_t - r\, CR_{t-1}$.
In $\varepsilon_s$, $S_{t-1}$ is liquid savings in the previous year, while $\text{EARNED\_INT}_t$ is the interest income received on this savings. Likewise, in $\varepsilon_{cr}$, $CR_{t-1}$ is outstanding short-term credit in the previous year and $\text{OWED\_INT}_t$ is the subsequent interest owed on this short-term credit in the following year.^{36}

The remaining moments require solving for consumption $C(L_t, P_t, I_t^*; \theta) = P_t\, c(l_t, i_t^*; \theta)$, investment decisions $D_I(L_t, P_t, I_t^*; \theta) = d_I(l_t, i_t^*; \theta)$, and default decisions $D_{def}(L_t, P_t; \theta) = d_{def}(l_t; \theta)$, where we now explicitly denote the dependence of the policy functions on the parameter set $\theta$. We observe data on decisions $C_t$, $I_t$, $D_{def,t}$, and states $L_t$ and $Y_t$. Our strategy is to use these policy functions to define deviations of actual variables (policy decisions and income growth) from the corresponding expectations of these variables conditional on $L_t$ and $Y_t$.^{37}

^{36} In the data there are many low interest loans, and the average difference between households' interest rates on short-term borrowing and saving is small, just 2 percent.
By the law of iterated expectations, these deviations are zero in expectation and, therefore, are valid moment conditions. With simulated moments, we calculate these conditional expectations by drawing series of shocks for $U_t$, $N_t$, $i_t^*$, and measurement error for a large sample, simulating, and taking sample averages. Details are available on request.

The income growth moments help to identify the income process parameters and are derived from the definition of income and the law of motion for permanent income, equations (3.2) and (3.3).^{38} Average income growth helps identify the drift component of income growth, $G$:
$\varepsilon_g(L_t, Y_t, Y_{t+1}; \theta) = \ln(Y_{t+1}/Y_t) - E[\ln(Y_{t+1}/Y_t) \mid L_t, Y_t]$.
The variance of income growth over different horizons ($k$-year growth rates, $k = 1, 2, 3$) helps identify the standard deviations of transitory and permanent income shocks, $\sigma_u$ and $\sigma_N$, since transitory income shocks add the same amount of variance to income growth regardless of horizon $k$, whereas the variance contributed by permanent income shocks increases with $k$. The standard deviation of measurement error, $\sigma_E$, also plays a strong role in measured income growth. The deviations are defined as
$\varepsilon_{vk}(L_t, Y_t, Y_{t+k}; \theta) = \left(\ln(Y_{t+k}/Y_t) - E[\ln(Y_{t+k}/Y_t) \mid L_t, Y_t]\right)^2 - E\left[\left(\ln(Y_{t+k}/Y_t) - E[\ln(Y_{t+k}/Y_t) \mid L_t, Y_t]\right)^2 \,\middle|\, L_t, Y_t\right]$
for $k = 1, 2, 3$.

We identify minimum consumption $c$, the investment project size distribution parameters $\mu_i$ and $\sigma_i$, the preference parameters $\beta$ and $\rho$, and the variance of measurement error $\sigma_E$ using moments on consumption decisions, investment decisions, and the size of investments. Focusing on both investment probability and investment size should help in separately identifying the mean ($\mu_i$) and standard deviation ($\sigma_i$) of the project size distribution. Focusing on deviations in log consumption, investment decisions, and log investments (when investments are made),
$\varepsilon_C(C_t, L_t, Y_t; \theta) = C_t - E[C_t \mid L_t, Y_t]$,
$\varepsilon_D(D_{I,t}, L_t, Y_t; \theta) = D_{I,t} - E[D_{I,t} \mid L_t, Y_t]$,
$\varepsilon_I(D_{I,t}, I_t, L_t, Y_t; \theta) = D_{I,t} I_t - E[D_{I,t} I_t^* \mid L_t, Y_t]$,

^{37} Since $L_t$ requires the previous year's savings $S_{t-1}$, these moments are not available in the first year.
^{38} Carroll and Samwick (1997) provided techniques for estimating the income process parameters $G$, $\sigma_N$, and $\sigma_u$ without solving the policy function. These techniques cannot be directly applied in our case, however, since income depends on endogenous investment decisions.
we are left with essentially three moment conditions for five parameters:
$E[\varepsilon_C] = 0$, $E[\varepsilon_D] = 0$, $E[\varepsilon_I] = 0$.
However, we gain additional moment conditions by realizing that, since these deviations are conditional on income and liquidity, their interactions with functions of income and liquidity should also be zero in expectation. Omitting the functional dependence of these deviations, we express the remaining six valid moment conditions as
$E[\varepsilon_C \ln Y_t] = 0$, $E[\varepsilon_D \ln Y_t] = 0$, $E[\varepsilon_I \ln Y_t] = 0$,
$E[\varepsilon_C (L_t/Y_t)] = 0$, $E[\varepsilon_D (L_t/Y_t)] = 0$, $E[\varepsilon_I (L_t/Y_t)] = 0$.
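Mechanically, the interacted conditions multiply each conditional deviation by functions of the conditioning variables. A schematic construction of the nine policy moments, where the conditional expectations would come from solving and simulating the model under a candidate $\theta$ (hypothetical inputs here):

```python
import numpy as np

def policy_moments(C, D, I, L, Y, EC, ED, EI):
    """Stack the nine policy-moment deviations.

    C, D, I:    observed consumption, investment dummy, investment level.
    EC, ED, EI: simulated conditional expectations E[.|L, Y] of the same
                objects under a candidate parameter vector theta (these would
                come from the model solution; hypothetical inputs here).
    Returns an (n, 9) array whose column means are the moment conditions.
    """
    eC, eD, eI = C - EC, D - ED, I - EI
    inter = [np.ones_like(Y), np.log(Y), L / Y]     # 1, ln Y, L/Y interactions
    cols = [e * f for e in (eC, eD, eI) for f in inter]
    return np.column_stack(cols)

# MSM then chooses theta to minimize gbar' W gbar, where gbar stacks the column
# means of this array with the interest, income growth, and default moments.
```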
Intuitively, in expectation, the model should match average log consumption, probability of investing, and log investment across all income and liquidity levels, for example, not overpredicting at low income or liquidity levels while underpredicting at high levels. These moments play particular roles in identifying the measurement error parameter $\sigma_E$ and minimum consumption $c$. If the data show less response of these policy variables to income than predicted, that could be due to a high level of measurement error in income. Similarly, high consumption at low levels of income and liquidity in the data indicates a high level of minimum consumption $c$. Finally, given $c$, default decision moments are used to identify the borrowing constraint $\underline{s}$, as can be clearly seen from equation (3.7):
$\varepsilon_{def}(L_t, Y_t, D_{def,t}) = D_{def,t} - E[D_{def,t} \mid L_t, Y_t]$.
In total, we have 16 moments to estimate 11 parameters.

4.4. Estimation Results

Table III presents the estimation results for the structural model as well as some measures of model fit. The interest rate $\hat{r}$ (0.054) is midway between the average rates on credit (0.073) and savings (0.035), and is quite similar to the 6 percent interest rate typically charged by village funds. The estimated discount factor $\hat{\beta}$ (0.915) and elasticity of substitution $\hat{\rho}$ (1.16) are within the range of usual values for buffer-stock models. The estimated standard deviations of permanent, $\hat{\sigma}_N$ (0.31), and transitory, $\hat{\sigma}_U$ (0.42), income shocks are about twice those for wage earners in the United States (see Gourinchas and Parker (2002)), but reflect the higher level of income uncertainty of predominantly self-employed households in a rural, developing economy. In contrast, the standard deviation of measurement error $\hat{\sigma}_E$ (0.15) is much smaller than that of actual transitory income shocks and is the only estimated parameter that is not significantly different from zero. The average log project size $\hat{\mu}_i$ greatly exceeds the average size of actual investments (i.e., $\log I_t/Y_t$) in the data (1.47 vs. −1.96),
TABLE III
PARAMETER ESTIMATES AND MODEL FIT

Parameter Estimates
Parameter | Estimate | Std. Error
Borrowing/savings interest rate, r | 0.054 | 0.003
Deviation of log permanent income shock, σN | 0.31 | 0.11
Deviation of log transitory income shock, σU | 0.42 | 0.07
Deviation of log measurement error shock, σE | 0.15 | 0.09
Exogenous income growth, G | 1.047 | 0.006
Minimum consumption, c | 0.52 | 0.01
Discount factor, β | 0.926 | 0.006
Intertemporal elasticity, ρ | 1.20 | 0.01
Mean log project size, μi | 1.47 | 0.09
Deviation of log project size, σi | 6.26 | 0.72
Borrowing limit, s | −0.08 | 0.03

Pre-Intervention Averages
Variable | Data | Model
Ct | 75,200 | 75,800
Dt | 0.116 | 0.116
It | 4600 | 4600
DEFt | 0.194 | 0.189
ln(Yt+1/Yt) | 0.044 | 0.049

Test for Overidentifying Restrictions
 | Actual value | 0.05 critical value
J-statistic | 113.5 | 12.6
and there is a greater standard deviation in project size $\hat{\sigma}_i$ than in investments in the data (2.50 vs. 1.22). In the model, these differences between the average sizes of realized investments and potential projects stem from the fact that larger potential projects are much less likely to be undertaken.^{39} The estimated borrowing constraint parameter $\hat{\underline{s}}$ indicates that agents could borrow up to about 8 percent of their annual permanent income as short-term credit in the baseline period. (In the summary statistics of Table II, credit averages about 20 percent of annual income, but liquid savings net of credit—the relevant measure—is actually positive and averages 9 percent of income.)

^{39} In the model, the average standard deviation of log investment (when investment occurs) is 1.37, close to the 1.22 in the data.
The value of $\hat{c}$ indicates that consumption in default is roughly half of the permanent component of income. Standard errors on the model are relatively small.

We attempt to shed light on the importance of each of the 16 moments to the identification of each of the 11 parameters, but this is not trivial to show. Let $\varepsilon$ be the (16-by-1) vector of moments and let $W$ be the (16-by-16) symmetric weighting matrix. Then the criterion function is $\varepsilon' W \varepsilon$ and the variance–covariance matrix is $\left[\frac{\partial \varepsilon'}{\partial \theta} W \frac{\partial \varepsilon}{\partial \theta}\right]^{-1}$. The minimization condition for the derivative of the criterion function is then $2\varepsilon' W \frac{\partial \varepsilon}{\partial \theta} = 0$. Table IV presents $\frac{\partial \varepsilon}{\partial \theta}$, a 16-by-11 matrix that shows the sensitivity of each moment to any given parameter. The influence of a parameter on the criterion function involves $2\varepsilon' W$, however, which has both positive and negative elements. Hence, the magnitudes of the elements in Table IV vary substantially across parameters and moments. $W$ is also not a simple diagonal matrix, so the parameters are jointly identified. Some moments are strongly affected by many parameters (e.g., income growth and variances), while some parameters have strong effects on many moments (e.g., $r$, $G$, and $\beta$).

Still, the partial derivatives confirm the intuition above, in that the moments play a role in pinning down the parameters we associate with them. In particular, the interest rate $r$ is the only parameter in the interest moments (rows $\varepsilon_S$ and $\varepsilon_{CR}$). While $\sigma_N$ is relatively more important for the variance of 2- and 3-year growth rates (rows $\varepsilon_{V2}$ and $\varepsilon_{V3}$), $\sigma_U$ is important for the variance of 1-year growth rates (row $\varepsilon_{V1}$). $\sigma_E$ has important effects on the variance of income growth (rows $\varepsilon_{V1}$, $\varepsilon_{V2}$, and $\varepsilon_{V3}$), but also on the interactions of consumption and investment decisions with $Y$ (rows $\varepsilon_C * \ln Y$, $\varepsilon_D * \ln Y$, and $\varepsilon_I * \ln Y$) and $L/Y$ (rows $\varepsilon_C * L/Y$, $\varepsilon_D * L/Y$, and $\varepsilon_I * L/Y$). (These moments are even more strongly affected by $r$, $\sigma_N$, $G$, $\beta$, and $\rho$, however.) The utility function parameters $\beta$ and $\rho$ have the most important effects on the consumption and investment moments (rows $\varepsilon_C$–$\varepsilon_I * L/Y$). Also, while $\mu_i$ and $\sigma_i$ also affect income growth variance (rows $\varepsilon_{V1}$, $\varepsilon_{V2}$, and $\varepsilon_{V3}$), the investment probability and investment level moments (rows $\varepsilon_D$–$\varepsilon_I * L/Y$) also help identify them. Finally, both $\underline{s}$ and $c$ affect default similarly, but have opposite-signed effects on the interactions of measured income and liquidity ratios with investment (rows $\varepsilon_D * \ln Y$, $\varepsilon_D * L/Y$, $\varepsilon_I * \ln Y$, and $\varepsilon_I * L/Y$) and, especially, consumption (rows $\varepsilon_C * \ln Y$ and $\varepsilon_C * L/Y$) decisions.

In terms of fit, the model does well in reproducing the average default probability, consumption, investment probability, and investment levels (presented in Table III), and indeed deviations are uncorrelated with log income or liquidity ratios. Still, we can easily reject the overidentifying restrictions of the model, which tells us that the model is not the real world. The large J-statistic in the bottom right of Table III is driven by two sets of moments.^{40}

^{40} The J-statistic is the number of households (720) times $\bar{\varepsilon}' W \bar{\varepsilon}$. Since $W$ is symmetric, we can rewrite this as $\tilde{\varepsilon}'\tilde{\varepsilon}$. The major elements of the summation $\tilde{\varepsilon}'\tilde{\varepsilon}$ are 0.02 ($\varepsilon_s$), 0.02 ($\varepsilon_{cr}$), 0.03 ($\varepsilon_{v1}$), and 0.04 ($\varepsilon_{v2}$), while the others are all less than 0.01.
TABLE IV
IDENTIFICATION: PARTIAL DERIVATIVES OF MOMENTS WITH RESPECT TO PARAMETERS

Moments | r | σN | σU | σE | G | c | β | ρ | μi | σi | s
Savings interest, εS | −6.5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Credit interest, εCR | −10.7 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
Income growth, εg | −0.6 | 2.5 | −0.5 | −0.8 | −4.4 | −0.2 | 1.5 | 0.2 | −1.8 | −1.5 | −1.6
1-year variance, εV1 | 1.3 | −0.1 | 1.1 | −0.8 | 0.3 | 2.0 | −2.4 | −1.1 | −2.4 | 0.8 | −0.4
2-year variance, εV2 | 1.0 | 0.3 | −1.1 | −1.4 | −0.2 | 0.6 | −2.3 | 1.0 | −1.0 | −0.6 | 0.7
3-year variance, εV3 | 0.5 | −0.3 | 0.8 | −2.1 | 0.8 | 2.2 | −0.3 | −2.3 | −2.4 | −2.5 | −0.1
Consumption, εC | 1.4 | 0.8 | 0.1 | −0.3 | −2.0 | −0.5 | 13.3 | −7.1 | 0.0 | 0.0 | 0.7
Income covariance, εC ∗ ln Y | −3.7 | −2.2 | −0.2 | 0.9 | 5.3 | 1.4 | −35.0 | 18.6 | 0.0 | 0.0 | −1.9
Liquidity covariance, εC ∗ L/Y | −0.1 | −0.1 | 0.0 | 0.0 | 0.2 | 0.1 | −1.4 | 0.8 | 0.0 | 0.0 | −0.1
Investment probability, εD | 51.5 | 18.7 | −0.8 | −0.2 | −40.0 | −0.7 | −16.1 | 1.4 | −0.5 | 0.1 | 0.2
Income covariance, εD ∗ ln Y | −155.2 | −55.9 | 2.5 | 0.5 | 120.0 | 1.9 | 47.9 | −4.1 | 1.3 | −0.3 | −0.6
Liquidity covariance, εD ∗ L/Y | −23.2 | −11.0 | 2.1 | 0.0 | 18.1 | 0.4 | 9.4 | −0.6 | 0.3 | −0.1 | −0.1
Investment level, εI | 28.0 | 10.0 | 0.1 | −0.2 | −22.1 | −0.3 | −8.6 | 0.6 | −0.7 | 0.1 | 0.1
Income covariance, εI ∗ ln Y | −80.0 | −28.5 | −0.2 | 0.6 | 63.1 | 0.8 | 24.5 | −1.7 | 1.9 | −0.4 | −0.2
Liquidity covariance, εI ∗ L/Y | −9.9 | −2.8 | 0.1 | 0.5 | 8.2 | 0.1 | 4.3 | −1.6 | 0.1 | 0.0 | 0.0
Default, εDEF | 0.0 | 0.0 | −1.6 | 0.2 | 0.0 | −3.6 | 0.0 | 0.0 | 0.0 | 0.0 | −3.6
First, the estimation rejects the notion that the savings and borrowing rates are equal.^{41} Second, the model does poorly in replicating the volatility of the income growth process, yielding too little volatility. We suspect this is the result of the failure of the income process and our statistical procedures to adequately capture cyclical effects on income growth, in particular the Thai financial crisis and recovery of 1997 and 1998 (survey years 1998 and 1999, respectively).^{42} Only time-varying means are extracted from the data using our regression techniques, but the crisis presumably affected the variance as well. Excluding the crisis from the presample is not possible, since it would leave us just 1 year of income growth to identify both transitory and permanent income shocks. An alternative estimation that used only data from 2000 and 2001 (except for 1999 data used to create the 2-year income growth variance moments) produced estimates with wide standard errors that were not statistically different from the estimates above. The only economically significant difference was a much lower borrowing constraint ($\hat{\underline{s}} = -0.25$), which is consistent with an expansion of credit observed in the Thai villages even pre-intervention. Recall that this trend is not related to village size, however.

Another way to evaluate the within-sample fit of the model is to notice that it is comparable to what could be obtained using a series of simple linear regressions to estimate 11 coefficients (rather than the 11 parameters estimated by the structural model). By construction, the nine moments defined on consumption, investment probability, and investment levels could be set equal to zero by simply regressing each on a constant, log income, and liquidity ratios. This would use nine coefficients. The two remaining coefficients could simply be linear regressions of growth and default on constant terms (i.e., simple averages). These linear regressions would exactly match the 11 moments that we only nearly fit. On the other hand, these linear predictors would predict no income growth volatility and would have nothing to say about the interest on savings and credit. So the results on the fit of the model are mixed. However, we view the model's ability to make policy predictions on the impact of credit as a stronger basis for evaluating its usefulness. We consider this in the next section.
^{41} It would be straightforward to allow for different borrowing and saving rates. This would lead to a kink in the budget constraint, however. The effect would be that one would never observe simultaneous borrowing and saving, and there would be a region where households neither save nor borrow. In the data, simultaneous short-term borrowing and saving is observed in 45 percent of observations, while having neither savings nor credit is observed in only 12 percent.
^{42} We know from alternative estimation techniques that the model does poorly in matching year-to-year fluctuations in variables. In the estimation we pursue, we construct moments for consumption, investment, and so forth that are based only on averages across the 4 years. For income growth volatility, the moments necessarily have a year-specific component.
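For reference, the overidentification test behind the J-statistic in Table III takes the standard MSM form; a minimal sketch, with the moment vector and weighting matrix as hypothetical inputs:

```python
import numpy as np
from scipy import stats

def j_test(gbar, W, n_households, n_params):
    """J = n * gbar' W gbar, compared to a chi-square critical value.

    gbar: sample means of the moment deviations; W: the weighting matrix.
    Degrees of freedom equal the number of overidentifying restrictions.
    """
    J = n_households * float(gbar @ W @ gbar)
    dof = len(gbar) - n_params
    crit = stats.chi2.ppf(0.95, dof)       # 5 percent critical value
    return J, crit, J > crit               # reject if J exceeds the critical value
```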
5. MILLION BAHT FUND ANALYSIS

This section introduces the Million Baht Fund intervention into the model, examines the model's predictions relative to the data, presents a normative evaluation of the program, and then presents alternative analyses using the structural model.

5.1. Relaxation of Borrowing Constraints

We incorporate the injection of credit into the model as a surprise decrease in $\underline{s}$.^{43} That is, for each of the 64 villages, indexed by $v$, we calibrate the new, reduced constraint under the Million Baht Fund intervention, $s_v^{mb}$, as the level for which our model would predict one million baht of additional credit relative to the baseline at $\underline{s}$. We explain this mathematically below. Define first the expected borrowing of a household $n$ with the Million Baht Fund intervention,
$E[B_{ntv}^{mb} \mid L_{nt}, Y_{nt}; s_v^{mb}] = E\left[ I_{<0}\left[L_t - C(L_t, P_t, I_t^*; s_v^{mb}) - D_I(L_t, P_t, I_t^*; s_v^{mb})\, I_t^*\right] \,\middle|\, L_{nt}, Y_{nt}\right]$,
and in the baseline without the intervention,
$E[B_{ntv} \mid L_{nt}, Y_{nt}; \underline{s}] = E\left[ I_{<0}\left[L_t - C(L_t, P_t, I_t^*; \underline{s}) - D_I(L_t, P_t, I_t^*; \underline{s})\, I_t^*\right] \,\middle|\, L_{nt}, Y_{nt}\right]$,
where $I_{<0}$ is shorthand notation for the indicator function that the bracketed expression is negative (i.e., borrowing and not saving). On average, village funds lent out 950,000 baht in the first year, so we choose $s_v^{mb}$ so that we would have hypothetically predicted an additional 950,000 baht of borrowing in each village in the pre-intervention data^{44}:
$\frac{1}{N}\sum_{n=1}^{N} \left( E[B_{ntv}^{mb} \mid L_{nt}, Y_{nt}; s_v^{mb}] - E[B_{ntv} \mid L_{nt}, Y_{nt}; \underline{s}] \right) = \frac{950{,}000}{\#\,\text{HHs in village}_v}$.
Here $N$ represents the number of surveyed households in the pre-intervention data.
^{43} Microfinance is often viewed as a lending technology innovation, which is consistent with the reduction in $\underline{s}$. An alternative would be to model the expansion of credit through a decrease in the interest rate on borrowing, but recall that we did not measure a decline in short-term interest rates in response to the program.
^{44} Since 1999 is the base year used, the 950,000 baht is deflated to 1999 values. Predicted results are similar if we use the one million baht which might have been predicted ex ante.
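Given a routine that returns a village's predicted average borrowing under a candidate constraint (the hypothetical `expected_borrowing` below), the calibration of $s_v^{mb}$ is a one-dimensional root-finding problem, for example by bisection:

```python
def calibrate_s_mb(expected_borrowing, s_base, target, lo=-2.0, hi=-0.08, tol=1e-6):
    """Find s_v^mb < s_base so that predicted extra borrowing hits the target.

    expected_borrowing(s): hypothetical model routine returning average
        E[B | L, Y; s] over the village's surveyed households.
    target: 950,000 baht divided by the number of households in the village.
    Extra borrowing is decreasing in s (a lower s permits more credit).
    """
    base = expected_borrowing(s_base)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        extra = expected_borrowing(mid) - base
        if abs(extra - target) < tol:
            break
        if extra > target:       # constraint too loose: move s toward zero
            lo = mid
        else:                    # too little extra borrowing: loosen further
            hi = mid
    return mid
```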
The resulting $s_v^{mb}$ values average −0.28 across the villages, with a standard deviation of 0.14, a minimum of −0.91, and a maximum of −0.09. Hence, for most villages, the post-program ability to borrow is substantial relative to the baseline ($\underline{s} = -0.08$), averaging about one-fifth of permanent income after the introduction of the program.^{45}

5.2. Predictive Power

Using the calibrated values of the borrowing limits, we evaluate the model's predictions for 2002 and 2003 (i.e., $t = 6$ and 7) on five dimensions: log consumption, probability of investing, log investment levels, default probability, and income growth. Using the observed liquidity ($L_{n5}$) and income data ($Y_{n5}$) for year five (i.e., 2001), the last pre-intervention year, we draw series of $U_{nt}$, $N_{nt}$, $i_{nt}^*$, and measurement error shocks from the estimated distributions, and simulate the model for 2002 and 2003. We do this 500 times and combine the simulated data with the actual pre-intervention data to create 500 artificial data sets.

We then ask whether reduced-form regressions would produce similar impact estimates using the simulated data as they would using the actual post-intervention data, even though statistically the model is rejected. We do not have a theory of actual borrowing from the village fund, so rather than using a first-stage equation for village fund credit, we put $950{,}000/\#\,\text{HHs in village}_v$, the average injection per household, directly into the outcome equations in place of predicted village fund credit. The reduced form regressions are then
$C_{nt} = \sum_{j=6,7} \alpha_{Cj} \frac{950{,}000}{\#\,\text{HHs in village}_v} I_{t=j} + \theta_{Ct} + e_{Cnt}$,
$D_{nt} = \sum_{j=6,7} \alpha_{Dj} \frac{950{,}000}{\#\,\text{HHs in village}_v} I_{t=j} + \theta_{Dt} + e_{Dnt}$,
$I_{nt} = \sum_{j=6,7} \alpha_{Ij} \frac{950{,}000}{\#\,\text{HHs in village}_v} I_{t=j} + \theta_{It} + e_{Int}$,
$DEF_{nt} = \sum_{j=6,7} \alpha_{DEFj} \frac{950{,}000}{\#\,\text{HHs in village}_v} I_{t=j} + \theta_{DEFt} + e_{DEFnt}$,
$\ln(Y_{nt}/Y_{n,t-1}) = \sum_{j=6,7} \alpha_{\ln Y, j} \frac{950{,}000}{\#\,\text{HHs in village}_v} I_{t=j} + \theta_{\ln Y, t} + e_{\ln Y, nt}$.
^{45} These large changes are in line with the size of the intervention, however. In the smallest village, the ratio of program funds to village income in 2001 is 0.42. If half the households borrow, this would account for the 0.83 drop in $\underline{s}$.
Here $\hat{\alpha}_{Cj}$, $\hat{\alpha}_{Dj}$, $\hat{\alpha}_{Ij}$, $\hat{\alpha}_{DEFj}$, and $\hat{\alpha}_{\ln Y, j}$ would be estimates of the year-$j$ impact of the program on consumption, investment probability, average investment, default probability, and log income growth, respectively. Beyond replacing village fund credit ($VFCR_{nt}$) and its first-stage regression with $950{,}000/\#\,\text{HHs in village}_v$, the above equations differ from the motivating regressions (equation (2.1)) in two other ways. First, the impact coefficients $\alpha_{Zj}$ now vary by year $j$. Second, the regressions above omit the household-level controls and household fixed effects, but recall Section 4.1.1, where we filtered the data of variation correlated with household-level demographic data. We also filtered year-to-year variation out of the pre-program data, so the year fixed effects will be zero for the pre-program years. For the post-program years, however, the year fixed effects will capture the aggregate effect of the program as well as any cyclical component not filtered out of the actual post-program data. We run these regressions on both the simulated and the actual data, and compare the estimates and standard errors.

Table V compares the regression results of the model to the data and shows that the model does generally quite well in replicating the results, particularly for consumption, investment probability, and investment. The top panel presents the estimates from the actual data. These regressions yield the surprisingly high, and highly significant, estimates for consumption of 1.39 and 0.90 in the first and second years, respectively. The estimate on investment probability is significant and positive, but only in the first year. For a village with the average village fund credit per household of 9600, the point estimate of 6.3e−6 would translate into an increase in investment probability of 6 percentage points. Nonetheless, and perhaps surprising in a world without lumpy investment, the regressions find no significant impact on investment, and the standard errors on those estimates are very large. The impact effects on default are significant, but negative in the first year and positive in the second year, reflecting transitional dynamics. Finally, the impact of the program on log income growth is positive and significant, but only in the second year. Again, given the average village fund credit per household, this coefficient would translate into a 10 percentage point higher growth rate in the second year.

The second panel of Table V presents the regressions using the simulated data. The first row shows the average (across the 500 samples) estimated coefficient, and the second row shows the average standard error on these estimates. The main point is that the estimates in the data are typical of the estimates the model produces for consumption, investment probability, and investment. In particular, the model yields a large and significant estimate of the coefficient on consumption that is close to 1 in the first year, and a smaller though still large estimate in the second year. The standard errors are also quite similar to what is observed. The model also finds a comparably sized significant coefficient on the investment probabilities, although its average coefficients are more similar in both the first and second years, whereas the data show a steep drop-off in the magnitude and significance after the first year.
TABLE V
REDUCED FORM REGRESSION ESTIMATES: ACTUAL DATA VERSUS MILLION BAHT SIMULATED DATA^a

 | Consumption | | Investment Probability | | Investment | | Default Probability | | Income Growth |
 | γC2002 | γC2003 | γD2002 | γD2003 | γI2002 | γI2003 | γDEF2002 | γDEF2003 | γlnY2002 | γlnY2003
Actual data
"Impact" coefficient^b | 1.39 | 0.90 | 6.3e−6 | −0.2e−6 | −0.04 | −0.17 | −5.0e−6 | 6.4e−6 | −9.4e−6 | 12.6e−6
Standard error | 0.39 | 0.39 | 2.4e−6 | 2.4e−6 | 0.19 | 0.19 | 2.4e−6 | 2.4e−6 | 6.1e−6 | 6.1e−6
Simulated data
Average "impact" coefficient^a | 1.10 | 0.73 | 5.6e−6 | 3.6e−6 | 0.41 | 0.35 | −9.0e−6 | −0.2e−6 | 0.3e−6 | 0.3e−6
Average standard error | 0.48 | 0.48 | 2.5e−6 | 2.5e−6 | 0.23 | 0.23 | 2.3e−6 | 2.3e−6 | 5.9e−6 | 5.9e−6
Chow test significance level^c | 0.55 | | 0.51 | | 0.99 | | 0.27 | | 0.30 |

^a Boldface represents significance at a 5 percent level.
^b The impact coefficient is the coefficient on 950,000/(number of households in the village) interacted with a year dummy, that is, the credit injection per household.
^c This is the significance level of a Chow test on the actual post-intervention data and the pooled simulated data, where the null hypothesis is no structural break in the impact coefficients.
The model's predictions for default and income growth are less aligned with the data. For default, both the model and the data show a marked and significant decrease in default in the first year, though the model's is much larger. While the data show a significant increase in default in the second year, the model produces no effect.^{46} The data also show a significant increase in income growth in the second year, whereas regressions from the model measure no impact on income growth. Perhaps both of these shortcomings result from the model's inability to fully capture year-to-year fluctuations in the volatility of the income growth process in the estimation.

The final panel shows formally that the estimates from the model are statistically similar to those in the data. It shows the significance level of a Chow test on the combined sample of the actual post-program data and the simulated post-program data (from all simulations), where the null is no structural break between the actual and simulated data. Using a 5 (or even 10) percent level of significance, the Chow test would not detect a structural break in any of the regressions.

One further note is that while the impact coefficients in the data are quite similar to those in the simulated structural model, they differ substantially from what would be predicted using reduced form regressions. For example, if we added credit ($CR_{nt}$) as a right-hand side variable in a regression on consumption, a reduced form approach might use the coefficient (say $\delta_1$) on credit to predict the per baht impact of the village fund credit injection. That is, we might predict a change in consumption of $\delta_1 \cdot 950{,}000/\#\,\text{HHs in village}_v$. However, in the regression
950000 It=j + θCt + eCnt # HHs in villagev
an F -test does indeed reject that δ1 = δ2 . Parallel regressions that replace credit with consumption, investment probability, or default also reject this restriction, and these restrictions are also rejected if credit is replaced with liquidity or income. In sum, we measure large average effects on consumption and insignificant effects on investment, but the structural model helps us in quantitatively interpreting these impacts. First, these average coefficients mask a great deal of unobserved heterogeneity. Consider Figure 4, which shows the estimated policy function for consumption (normalized by permanent income) c as a function of (normalized) project size i∗ and (normalized) liquidity l. Again, the cliff-like drop in consumption running diagonally through the middle of the 46 For the alternative definition of default, where all loans not from relatives with an unstated duration are considered in default, the data actually show a small decrease in the second year.
STRUCTURAL MICROFINANCE EVALUATION
1397
FIGURE 4.—Consumption policy as a function of liquidity and project size.
graph represents the threshold level of liquidity that induces investment. In the simulations, households in a village are distributed along this graph, and the distribution depends on the observables (Y and L) and stochastic draws of the shocks (i∗ and U since P = UY ). We have plotted examples of five potential households, all of which could appear ex ante identical in terms of their observables Y and L (i.e., their state), but resemble a leftward shift in the graphed decision (recall Figure 3, panel B). A small decrease in s can yield qualitatively different responses to the five households labeled. Household (i)’s income is lower than expected, and so would respond to a small decrease in s by borrowing to the limit and increasing consumption. Household (ii) is a household that had higher than expected income. Without the intervention, the household invests and is not constrained in its consumption. Given the lower s, it does not borrow, but nevertheless increases its consumption. Given the lower borrowing constraint in the future, it no longer requires as large a buffer stock today. Household (iii), though not investing, will similarly increase consumption without borrowing by reducing its buffer stock given a small decrease in s. Thus, in terms of consumption, households (i)–(iii) would increase consumption, and households (ii) and (iii) would do so without borrowing. If these households were the only households, the model would deliver the surprising result that consumption increases more than credit, but households (iv) and (v) work against this. Household (iv) is a household in default. A small decrease in s would have no affect on its consumption or investment, but simply would increase the indebtedness of the household and reduce the amount of credit that would have been defaulted. Finally, household (v) is perhaps the target household of microcredit rhetoric: a
small increase in credit would induce the household to invest. But if (as drawn) the household invests in a sizable project, it would finance this not only by increasing its borrowing, but also by reducing its current consumption. One can also see that the effects of changes in s are not only heterogeneous, but also nonlinear. For example, if the decrease in s were large enough relative to i*, household (v) would not only invest, but also would increase consumption.

Quantitatively, draws from the distributions of i* and U (together with the empirical distribution of L/Y) determine the scattering of households in each village across Figure 4. The high level of transitory income growth volatility leads to a high variance in U, hence a diffuse distribution in the L/P dimension (given L/Y). We know that in the baseline distribution, the model calibrates that 19 percent of households are in default (like household (iv)) and an additional 26 percent are hand-to-mouth consumers (like household (i), though 3 of the 26 percent are investing).47 Based on the presample years, the relaxation of s would lead to fewer defaulters (12 percent of households) but the same number of hand-to-mouth consumers (26 percent total, 4 percent of whom are investing). Hence, the large share of hand-to-mouth consumers, together with the large share (51 percent) of unconstrained households (like households (ii) and (iii)) that drive down their buffer stocks, explains the big increase in consumption.

Similarly, the low investment probability but sizable average investment levels in the data lead to high estimated mean and variance of the i* distribution. Given these estimates, most households in the model have very large projects (with a log mean of 6.26), but investment is relatively infrequent (11.6 percent of observations in the model and data). The median investment is 14 percent (22 percent) of annual income in the data (model), so that most investments are relatively small, but these constitute only 4 percent (8 percent) of all investment in the data (model).48 In contrast, a few very large i* investments (e.g., a large truck or a warehouse) have large effects on overall investment levels. For example, the top percentile of investments accounts for 36 percent (24 percent) of all investment in the data (model). Hence, while some households lie close enough to the threshold that changes in s induce investment (4 percent of households in the presample years), the vast majority of these investments are small. That is, the density of households resembling household (v) is low, especially for large investments (high levels of i*).

47 Many buffer-stock models (e.g., Aiyagari (1994)) yield a very low level of constrained households in equilibrium. Relative to these models, our model has three important differences. First, we allow for default with minimum consumption, which is empirically observed, so the costs of being liquidity constrained are much lower. Second, investment also causes households to be constrained. Third, we are not modeling a stationary, general equilibrium, but estimating parameters in a partial equilibrium model.

48 An alternative interpretation of the data is that most households do not have potential projects that are on the relevant scale for microfinance. Households with unrealistically large projects may correspond, in the real world, to households that simply have no potential project in which to invest.
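To make the taxonomy concrete, here is a minimal sketch of how a simulated household's state could be mapped into the five cases above. The inputs (default status, local marginal propensity to consume, and the liquidity gap to the investment cliff) and the threshold value are illustrative stand-ins for objects from the solved policy function, not the authors' code.

```python
# Illustrative classification into the cases (i)-(v) discussed above;
# all thresholds and inputs are hypothetical summaries of the state.
def household_case(in_default: bool, mpc: float, gap_to_cliff: float,
                   window: float = 0.05) -> str:
    if in_default:
        return "(iv) defaulter: more credit only raises indebtedness"
    if 0.0 < gap_to_cliff <= window:
        return "(v) marginal investor: a small credit expansion triggers investment"
    if mpc >= 1.0:
        return "(i) hand-to-mouth: borrows to the limit and consumes"
    return "(ii)/(iii) unconstrained: consumes more by running down its buffer"
```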
Since a lower s can never reduce investment, the theoretical effect of increased liquidity on investment levels is clear. It is simply that the samples are too small to measure it. Given enough households, small amounts of available credit will eventually decide whether a very large investment is made or not, and this will occur more often as the decrease in s grows larger. Indeed, when the 500 samples are pooled together, the pooled estimate of 0.40 (standard error = 0.04) for γ_{I,2002} is highly significant. The estimate is also sizable. Given the average credit injection per household, this would be an increase in investment of 3,800 baht per household (relative to a pre-sample average of 4,600 baht/household).

5.3. Normative Analysis

We evaluate the Million Baht program by comparing its benefits to those of a simple liquidity transfer. As our analysis in Figure 4 indicates, reductions in s (leftward shifts in the policy function from the Million Baht program) are similar to increases in liquidity (rightward shifts in the households from the transfer). Both provide additional liquidity. The advantage of the Million Baht program is that it provides more than a million baht in potential liquidity (−(s_v^mb − s)P). That is, (by construction) borrowers choose to increase their credit by roughly a million baht, but nonborrowers also benefit from the increased potential liquidity from the relaxed borrowing constraint in the future. More generally, those who borrow have access to a disproportionate amount of liquidity relative to what they would get if the money were distributed equally as transfers. The disadvantage of the Million Baht program is that it provides this liquidity as credit, and hence there are interest costs, which are substantial given r = 0.054. A household that receives a transfer of, say, 10,000 baht earns interest on that transfer relative to a household that has access to 10,000 baht in credit, even if it can be borrowed indefinitely.

The relative importance of these two differences depends on households' need for liquidity. Consider again the households in Figure 4. Households (ii) and (iii), which are not locally constrained (i.e., their marginal propensity to consume is less than 1), benefit little from a marginal decrease in s, since they have no need for it in the current period and may not need it for quite some time. Household (iv), which is defaulting, is actually hurt by a marginal reduction in s, since the household will now hold more debt and be forced to pay more interest next period. On the other hand, households (i) and (v) benefit greatly from the reduction in s, since both are locally constrained in consumption and investment, respectively.

A quantitative cost–benefit analysis is done by comparing the cost of the program (the reduction in s) to a transfer program (an increase in l) that is equivalent in terms of providing the same expected level of utility (given L_nt
and Y_nt in 2001, just before the program was introduced). That is, we solve the equivalent transfer T_n for each household using the equation

E[V(L, P, I*; s_v^mb) | Y_n5v, L_n5v] = E[V(L + T_n, P, I*; s) | Y_n5v, L_n5v].
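A minimal sketch of how this equation can be solved for T_n household by household, assuming an interpolated expected value function EV(l, s) that is increasing in liquidity l; EV, the toy example, and the bracket are hypothetical stand-ins, not the authors' solution code.

```python
# Minimal sketch: solve EV(l + T, s_baseline) = EV(l, s_mb) for the transfer T.
# Assumes the household weakly benefits from the program, so T >= 0.
import numpy as np
from scipy.optimize import brentq

def equivalent_transfer(EV, l_n, s_baseline, s_mb, t_max=10.0):
    target = EV(l_n, s_mb)                         # expected utility under the program
    gap = lambda t: EV(l_n + t, s_baseline) - target
    return brentq(gap, 0.0, t_max)                 # transfer giving equal utility

EV = lambda l, s: np.log(1.0 + l - s)              # toy value function, illustration only
print(equivalent_transfer(EV, 1.0, -0.08, -0.28))  # ~0.20 units of normalized liquidity
```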
The average equivalent liquidity transfer per household in the sample is just 7,000 baht, which is about 30 percent less than the 10,100 baht per household that the Million Baht program cost.49 Again, this average masks a great deal of heterogeneity across households, even in expectation. Ten percent of households value the program at 16,200 baht or more, while another 10 percent value the program at 900 baht or less. Twenty-four percent of households value the program at more than its cost (10,100 baht), but the median equivalent transfer is just 5,300 baht. Thus, many households benefit disproportionately from the program because of the increased availability of liquidity, but most benefit much less. Although the Million Baht program is able to offer the typical household more liquidity (e.g., in the median-sized village, −(s_v^mb − s)P = 13,400 baht for a household with average income, while the average cost per household in that village is 9,100 baht), this benefit is swamped by the interest costs to households.

5.4. Alternative Structural Analyses

The structural model allows for several alternative analyses, including comparison with reduced form predictions, robustness checks with respect to the return on investment R, estimation using post-intervention data, long-run predictions, and policy counterfactuals. We briefly summarize the results here, but details are available upon request.

5.4.1. Return on Investment

Our baseline value of R was 0.11. Recall that two alternative calibrations of the return on assets were calculated based on whether our measure of productive assets included uncultivated or community use land (R = 0.08) or the value of the plot of land containing the home (R = 0.04). We redid both the estimation and the simulation using these alternative values. For R = 0.08, the estimates were quite similar; only a higher β (0.94), a lower r (0.031), and a lower risk aversion (1.12) were statistically different from the baseline. The model had even more difficulty matching income growth and volatility, so that the overall fit was substantially worse (J-statistic = 21.2 vs. 11.3 in the baseline). The simulation regression estimates were nearly identical. For the low value of R = 0.04, the estimation required that the return on liquidity be substantially lower than in the data (r = 0.018) and that β be substantially higher (0.97)

49 This includes only the seed fund and omits any administrative or monitoring costs of the village banks.
than typical for buffer-stock models. The fit was also substantially worse (J-statistic = 32.3). Finally, the regression estimates on the simulated data were qualitatively similar but smaller (e.g., a consumption coefficient of 0.68 in the first year). Indeed, only the reduction of default in the first year was statistically significant at the 0.05 level.

5.4.2. Estimation Using Ex Post Data

In this analysis, rather than use the post-intervention data to test the model under calibrated borrowing constraints, we use it to estimate the new borrowing constraints and better identify the other parameters in the model. We proceed by specifying a reasonably flexible but parametric function for s_v^mb in the post-program years,
s_v^mb = s_1 + s_2 (1 / # HHs in village_v)^{s_3},

where s_1, s_2, and s_3 are the parameters of interest.50 The moments for the post-program years cover interest on savings and borrowing (two moments); income growth (two) and income growth volatility (three); consumption (two), investment probability (two), investment (two), and their interactions with measured income and liquidity ratios (twelve); and default (two). All but the interest moments are year-specific, and the only use of pre-program data is to construct the four income growth moments that require income in 2001. In total, the estimation now includes 27 moments and 14 parameters.

The estimated results from the sample are strikingly similar to the baseline estimates from the pre-program sample and the calibration from the post-program sample, all within 2 standard deviation bands.51 The resulting estimates are ŝ_1 = −0.10, ŝ_2 = −45, and ŝ_3 = 1.15. The model fit is comparable to the baseline, performing well along the same dimensions and not well at all along the same dimensions. Finally, the average, standard deviation, and minimum and maximum of s_v^mb implied by the estimates are −0.32 (−0.28 in the baseline calibration), 0.16 (0.14), −0.98 (−0.91), and −0.10 (−0.09), respectively. The correlation between the two is very close to 1 by construction, since both increase monotonically with village size. That is, the estimated s_v^mb are quite similar to the calibrated values. The fact that the estimates and calibrated values are quite close indicates that, cross sectionally, the simulated predictions of the model on average approximate a best fit to the variation in the actual data.

50 If all households borrowed every period and had identical permanent income, then the extra borrowing per household (950,000/# HHs in village_v) would translate into borrowing constraints with s_1 = s (the pre-intervention borrowing constraint), s_2 = −950,000/P, and s_3 = 1.

51 For comparison, the point estimates of the full-sample (baseline) estimation are r̂ = 0.059 (0.054), σ̂_N = 0.41 (0.31), σ̂_U = 0.52 (0.42), σ̂_E = 0.30 (0.15), Ĝ = 1.052 (1.047), ĉ = 0.51 (0.52), β̂ = 0.926 (0.926), ρ̂ = 1.16 (1.20), μ̂_i = 1.42 (1.47), σ̂_i = 2.54 (2.50), and ŝ = −0.09 (−0.08).
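A minimal sketch of this borrowing-limit function evaluated at the estimates above; note that the decimal placement of ŝ_3 is reconstructed here (an assumption) so that the implied average and extremes of s_v^mb match the reported −0.32, −0.98, and −0.10. The village sizes are hypothetical.

```python
# Minimal sketch of s_v^mb = s1 + s2 * (1 / #HHs_v)^s3 at the quoted estimates.
import numpy as np

def s_mb(n_households, s1=-0.10, s2=-45.0, s3=1.15):  # s3 decimal is an assumption
    # Limits are more negative (more borrowing capacity) in smaller villages,
    # where the fixed million-baht injection is larger per household.
    return s1 + s2 * (1.0 / n_households) ** s3

villages = np.array([60, 100, 250, 600])    # hypothetical village sizes
print(np.round(s_mb(villages), 2))           # roughly -0.51, -0.33, -0.18, -0.13
```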
5.4.3. Long-Run Predictions

The differences between the α̂_Zj estimates in the first and second year (i.e., j = 1, 2) of the program indicate that impacts are time-varying, since there are transitional dynamics as households approach desired buffer stocks. The structural model allows for simulation and longer run horizon estimates of impact. We therefore simulate data sets that include five additional years of data and run the analogous regressions. Four years out, none of the α̂_Z4 estimates is statistically significant on average, and the average point estimates are quite small for investment probability (0.27e−6) and default probability (0.01e−6) relative to the first year. Nevertheless, the average α̂_C4 for consumption is still sizable (0.78), and that for investment is nearly the same as in the first year (0.32). By the seventh year, however, the consumption coefficient α̂_C7 has actually risen and is greater than in the first year (1.19). The U-shaped dynamics are a result of the transitional lowering of the buffer stock combined with the compounding growth effect of the higher investment and exogenous growth. Still, alternative regression estimates that simply measure a single coefficient α_Z (common to all post-program years j) do not capture any statistically significant impact on consumption when 7 years of long-run data are used. This shows the importance of considering the potential time-varying nature of impacts in evaluation.

5.4.4. Policy Counterfactuals

From the perspective of policymakers, the Million Baht Village Fund Program may appear problematic along two fronts. Its most discernible impacts are on consumption rather than investment, and it appears less cost-effective than a simple transfer, mainly because funds may simply go to prevent default and the increased borrowing limit actually hurts defaulting households. An alternative policy that one might attempt to implement would be to allow borrowing only for investment. We would assume that the village can observe investment, but since money is fungible, it would be unclear whether these investments would have been undertaken even without the loans, in which case the loans are really consumption loans. Since defaulting households cannot undertake investments, such a policy would prevent households in default from borrowing. Nevertheless, it would also prevent households like household (i) in Figure 4 from borrowing.

The ability to model policy counterfactuals is another strength of a structural model. In a model with this particular policy, households face the constraint s_v^mb,alternative in any period in which they decide to invest, while facing the baseline s if they decide not to invest. The default threshold is also moved to s_v^mb,alternative, however, to prevent households from investing and borrowing in one period, and then purposely not investing in the next period so as to default. Under this policy, the new borrowing constraints are even lower and have wider variation (a maximum, minimum, and mean of −0.16, −1.13, and −0.67, respectively, vs. −0.09, −0.91, and −0.28 for the actual policy) but only for those who borrow.
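A minimal sketch of how the investment-contingent limits enter the household's problem in this counterfactual; the names are illustrative, not the authors' code.

```python
# Minimal sketch: borrowing limit and default threshold under the
# invest-only counterfactual (s_alt is the relaxed, more negative limit).
def counterfactual_limits(invests: bool, s_base: float, s_alt: float):
    borrow_limit = s_alt if invests else s_base  # credit expansion only for investors
    default_threshold = s_alt                    # moved for everyone, so a household
    return borrow_limit, default_threshold       # cannot invest, borrow, then default
```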
The policy increases both the impact on consumption and the impact on investment. Pooling all 500 simulated samples yields a significant estimate for consumption that is similar to the actual million baht intervention (1.35 vs. 1.38 in the first year). It also yields a much larger and significant estimate for investment levels (0.66 in the first year), which is expected since the borrowing constraints of investors are much lower under this policy. Naturally, this policy offers less flexibility for constrained households that would rather not invest, but the relatively larger benefits to defaulters and investors help outweigh this loss. The average equivalent transfer for this policy is substantially higher (38,600 vs. 8,200), and indeed 84 percent of households value the program at more than its per household cost of 10,100 baht. Thus, this alternative policy outperforms the actual policy.

6. CONCLUSIONS

We have developed a model of buffer-stock saving and indivisible investment, and used it to evaluate the impact of the Million Baht program as a quasi-experiment. The correct prediction that consumption increases more than one for one with the credit injection is a “smoking gun” for the existence of credit constraints and is strong support for the importance of buffer-stock savings behavior. Nevertheless, the microfinance intervention appears to be less cost effective on average than a simpler transfer program because it saddles households with interest payments. This masks considerable heterogeneity, however, including some households that gain substantially. Finally, we have emphasized the relative strengths of a quasi-experiment, a structural model, and reduced form regressions.

One limitation of the model is that although project size is stochastic, the quality of investments, modeled through R, is assumed constant across projects and households. In the data, R varies substantially across households. Heterogeneity in project quality may be an important dimension for analysis, especially since microfinance may change the composition of project quality. Ongoing research by Banerjee, Breza, and Townsend finds that high return households do borrow more from the funds, but they also invest less often, which indicates that the data may call for a deeper model of heterogeneity and, relatedly, a less stylized model of the process for project sizes. Potential projects may not arrive each year, they may be less transient (which allows for important anticipatory savings behavior as in Buera (2008)), or households might hold multiple projects ordered by their profitability. Such extensions might help explain the investment probability results in the second year of the program: a positive impact in the model but no impact in the data.

Relatedly, the analysis has also been a purely partial equilibrium analysis of household behavior. In a large-scale intervention, one might suspect that general equilibrium effects on income, wage rates, rates of return to investment, and interest rates on liquidity may be important (see Kaboski and Townsend
(2009)). Finally, we did not consider the potential interactions between villagers or between villages; nor were the intermediation mechanism or default contracting explicitly modeled. These are all avenues for future research.

REFERENCES

AARONSON, D., S. AGARWAL, AND E. FRENCH (2008): “The Consumption Response to Minimum Wage Increases,” Working Paper 2007-23, FRB Chicago. [1361]
AIYAGARI, S. R. (1994): “Uninsured Idiosyncratic Risk and Aggregate Saving,” Quarterly Journal of Economics, 109, 659–684. [1359,1374,1398]
ALVAREZ, F., AND N. STOKEY (1998): “Dynamic Programming With Homogeneous Functions,” Journal of Economic Theory, 82, 167–189. [1375]
ARGHIROS, D. (2001): Democracy, Development and Decentralization in Provincial Thailand. Democracy in Asia, Vol. 8. Richmond Surrey, U.K.: Curzon Press. [1365]
BANERJEE, A., AND E. DUFLO (2008): “Do Firms Want to Borrow? Testing Credit Constraints Using a Direct Lending Program,” Mimeo, Massachusetts Institute of Technology. [1361]
BANERJEE, A., AND A. NEWMAN (1993): “Occupational Choice and the Process of Development,” Journal of Political Economy, 101, 274–298. [1359]
BANERJEE, A., E. DUFLO, R. GLENNERSTER, AND C. KINNAN (2010): “The Miracle of Microfinance? Evidence From a Randomized Evaluation,” Mimeo, Massachusetts Institute of Technology. [1357,1361]
BUERA, F. J. (2008): “Persistency of Poverty, Financial Frictions, and Entrepreneurship,” Mimeo, Northwestern University. [1403]
BURGESS, R., AND R. PANDE (2005): “Do Rural Banks Matter? Evidence From the Indian Social Banking Experiment,” American Economic Review, 95, 780–795. [1361]
CARROLL, C. D. (1997): “Buffer Stock Saving and the Life Cycle/Permanent Income Hypothesis,” Quarterly Journal of Economics, 112, 1–55. [1359,1372]
(2004): “Theoretical Foundations of Buffer Stock Saving,” Unpublished Manuscript, The Johns Hopkins University. [1375]
CARROLL, C. D., AND A. A. SAMWICK (1997): “The Nature of Precautionary Wealth,” Journal of Monetary Economics, 40, 41–71. [1383,1386]
CHIAPPORI, P., S. SCHULHOFER-WOHL, K. SAMPHANTHARAK, AND R. TOWNSEND (2008): “On Homogeneity and Risk Sharing in Thai Villages,” Unpublished Manuscript, University of Chicago. [1359,1371]
COLEMAN, B. (1999): “The Impact of Group Lending in Northeast Thailand,” Journal of Development Economics, 60, 105–141. [1357]
DEATON, A. (1991): “Savings and Liquidity Constraints,” Econometrica, 59, 1221–1248. [1359]
DEATON, A., AND C. PAXSON (2000): “Growth and Saving Among Individuals and Households,” Review of Economics and Statistics, 82, 212–225. [1372]
GALOR, O., AND J. ZEIRA (1993): “Income Distribution and Macroeconomics,” Review of Economic Studies, 60, 35–52. [1359]
GERTLER, P., D. LEVINE, AND E. MORETTI (2003): “Do Microfinance Programs Help Insure Families Against Illness?” Research Paper C03-129, Center for International and Development Economics. [1357]
GINE, X. (2005): “Cultivate or Rent Out? Land Security in Rural Thailand,” Policy Research Working Paper Series 3734, The World Bank. [1367]
GINE, X., AND R. TOWNSEND (2004): “Evaluation of Financial Liberalization: A General Equilibrium Model With Constrained Occupation Choice,” Journal of Development Economics, 74, 269–307. [1359]
GOURINCHAS, P.-O., AND J. PARKER (2002): “Consumption Over the Life Cycle,” Econometrica, 70, 47–89. [1372,1383,1387]
GROSS, D. B., AND N. S. SOULELES (2002): “Do Liquidity Constraints and Interest Rates Matter for Consumer Behavior? Evidence From Credit Card Data,” The Quarterly Journal of Economics, 117 (1), 149–185. [1361]
HECKMAN, J. J., S. URZUA, AND E. VYTLACIL (2006): “Understanding Instrumental Variables in Models With Essential Heterogeneity,” Review of Economics and Statistics, 88, 389–432. [1362]
KABOSKI, J. P., AND R. M. TOWNSEND (2005): “Policies and Impact: An Analysis of Village Microfinance Institutions,” Journal of the European Economic Association, 3, 1–50. [1357]
(2009): “The Impacts of Credit on Village Economies,” Mimeo, University of Notre Dame, available at http://www.nd.edu/~jkaboski/impactofcredit.pdf. [1358,1363,1364,1367,1368,1370,1404]
(2011): “Supplement to ‘A Structural Evaluation of a Large-Scale Quasi-Experimental Microfinance Initiative’,” Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/7079_data and programs.zip. [1362]
KARAIVANOV, A., AND R. TOWNSEND (2010): “Dynamic Financial Constraints: Distinguishing Mechanism Design From Exogenously Incomplete Contracts,” available at http://www.robertmtownsend.net/sites/default/files/files/papers/working_papers/DynamicFinancialConstraints.pdf. [1371]
KARLAN, D., AND J. ZINMAN (2009): “Expanding Credit Access: Using Randomized Supply Decisions to Estimate the Impacts,” Review of Financial Studies, 23, 433–464. [1357]
KEANE, M., AND K. WOLPIN (1994): “The Solution and Estimation of Discrete Choice Dynamic Programming Models by Simulation and Interpolation: Monte Carlo Evidence,” Review of Economics and Statistics, 76, 648–672. [1381]
(1997): “The Career Decisions of Young Men,” Journal of Political Economy, 105, 473–522. [1381]
(2001): “The Effect of Parental Transfers and Borrowing Constraints on Educational Attainment,” International Economic Review, 42, 1051–1103. [1381]
LISE, J., S. SEITZ, AND J. SMITH (2004): “Equilibrium Policy Experiments and the Evaluation of Social Programs,” Working Paper 10283, NBER. [1362]
(2005): “Evaluating Search and Matching Models Using Experimental Data,” Working Paper, University of Michigan. [1362]
LLOYD-ELLIS, H., AND D. BERNHARDT (2000): “Enterprise, Inequality, and Economic Development,” Review of Economic Studies, 67, 147–168. [1359]
LOCHNER, L., AND A. MONGE-NARANJO (2008): “The Nature of Credit Constraints and Human Capital,” Working Paper W13912, NBER. [1374]
LOW, H., C. MEGHIR, AND L. PISTAFERRI (2010): “Wage Risk and Employment Risk Over the Life Cycle,” American Economic Review, 100, 1432–1467. [1373]
MEGHIR, C., AND L. PISTAFERRI (2004): “Income Variance Dynamics and Heterogeneity,” Econometrica, 72, 1–32. [1372]
MORDUCH, J. (1998): “Does Microfinance Really Help the Poor? New Evidence From Flagship Programs in Bangladesh,” Working Paper 198, Princeton University, Woodrow Wilson School of Public and International Affairs. [1357]
OWEN, A., AND D. WEIL (1997): “Intergenerational Earnings Mobility, Inequality, and Growth,” Working Paper 6070, NBER. [1359]
PITT, M., AND S. KHANDKER (1998): “The Impact of Group-Based Credit Programs on Poor Households in Bangladesh: Does the Gender of Participants Matter?” Journal of Political Economy, 106, 958–996. [1357]
PITT, M., S. KHANDKER, O. CHOWDURY, AND D. MILLIMET (2003): “Credit Programs for the Poor and the Health Status of Children in Rural Bangladesh,” International Economic Review, 44, 87–118. [1357]
PUENTES, E. (2011): “Use of Financial Instruments in Rural Thailand,” Working Paper Series 344, Universidad de Chile. [1371]
PUGINIER, O. (2001): “Hill Tribes Struggling for a Land Deal: Participatory Land Use Planning in Northern Thailand amid Controversial Policies,” Ph.D. Thesis, Humboldt University. [1365,1367]
SAMPHANTHARAK, K., AND R. TOWNSEND (2009): Households as Corporate Firms: An Analysis of Household Finance Using Integrated Household Surveys and Corporate Financial Accounting. New York: Cambridge University Press. [1371]
STOKEY, N., R. E. LUCAS, JR., AND E. C. PRESCOTT (1989): Recursive Methods in Economic Dynamics. Cambridge, MA: Harvard University Press. [1375]
TODD, P., AND K. WOLPIN (2006): “Assessing the Impact of a School Subsidy Program in Mexico: Using a Social Experiment to Validate a Dynamic Behavioral Model of Child Schooling and Fertility,” American Economic Review, 96, 1384–1417. [1362]
TOWNSEND, R. M., A. PAULSON, S. SAKUNTASATHIEN, T. J. LEE, AND M. BINFORD (1997): “Questionnaire Design and Data Collection for NICHD Grant ‘Risk, Insurance and the Family’ and NSF Grants,” Unpublished Manuscript, University of Chicago. [1358]
WORLD BANK (1996): “KUPEDES: Indonesia’s Model Small Credit Program,” OED Precis 104, World Bank. [1358]
WRIGHT, M. (2002): “Investment and Growth With Limited Commitment,” Mimeo, Stanford University. [1374]
ZELDES, S. (1989): “Consumption and Liquidity Constraints: An Empirical Investigation,” Journal of Political Economy, 97, 305–346. [1361,1372,1378]
Dept. of Economics, University of Notre Dame, 717 Flanner Hall, Notre Dame, IN 46556, U.S.A. and NBER;
[email protected] and Dept. of Economics, Massachusetts Institute of Technology, Cambridge, MA 02138, U.S.A. and NBER;
[email protected]. Manuscript received April, 2007; final revision received May, 2010.
Econometrica, Vol. 79, No. 5 (September, 2011), 1407–1451
PRODUCT DIFFERENTIATION, MULTIPRODUCT FIRMS, AND ESTIMATING THE IMPACT OF TRADE LIBERALIZATION ON PRODUCTIVITY BY JAN DE LOECKER1 This paper studies whether removing barriers to trade induces efficiency gains for producers. Like almost all empirical work which relies on a production function to recover productivity measures, I do not observe physical output at the firm level. Therefore, it is imperative to control for unobserved prices and demand shocks. I develop an empirical model that combines a demand system with a production function to generate estimates of productivity. I rely on my framework to identify the productivity effects from reduced trade protection in the Belgian textile market. This trade liberalization provides me with observed demand shifters that are used to separate out the associated price, scale, and productivity effects. Using a matched plant–product level data set and detailed quota data, I find that correcting for unobserved prices leads to substantially lower productivity gains. More specifically, abolishing all quota protections increases firm-level productivity by only 2 percent as opposed to 8 percent when relying on standard measures of productivity. My results beg for a serious reevaluation of a long list of empirical studies that document productivity responses to major industry shocks and, in particular, to opening up to trade. My findings imply the need to study the impact of changes in the operating environment on productivity together with market power and prices in one integrated framework. The suggested method and identification strategy are quite general and can be applied whenever it is important to distinguish between revenue productivity and physical productivity. KEYWORDS: Production function, imperfect competition, trade liberalization.
1. INTRODUCTION

OVER THE LAST DECADE, a large body of empirical work has emerged that relies on the estimation of production functions to evaluate the impact of trade policy changes on the efficiency of producers and industries as a whole. There are two main reasons for this development. First, there is a great interest in evaluating policy changes and, more precisely, whether a change in trade policy had any impact on the efficiency of firms in the economy. In this context, being able to estimate a production function using microdata is imperative to recover a measure for (firm-level) productivity. Second, the increased availability of international trade data and firm-level data sets for various countries and

1 An earlier version of this paper was circulated under the title “Product Differentiation, MultiProduct Firms and Structural Estimation of Productivity,” and is a revised version of Chapter 4 of my Ph.D. thesis. This paper has benefited from comments and suggestions of the editor and three anonymous referees. I am especially grateful to Dan Ackerberg, Steve Berry, Penny Goldberg, Joep Konings, Marc Melitz, Ariel Pakes, and Amil Petrin for comments and suggestions. A second special thanks to Amil Petrin for sharing his programs and code. Comments on an earlier version from David Blackburn, Mariana Colacelli, Frank Verboven, Patrick Van Cayseele, and Hylke Vandenbussche helped improve the paper. Finally, I would like to thank various seminar participants for their comments.
© 2011 The Econometric Society
DOI: 10.3982/ECTA7617
industries has further boosted empirical work analyzing the trade–productivity relationship. A seemingly robust result from this literature is that opening up to trade, measured by either tariff or quota reductions, is associated with measured productivity gains, and firms engaged in international trade (through export or Foreign Direct Investment (FDI)) have higher measured productivity.2 The productivity measures used to come to these conclusions are recovered after estimating a production function where output is replaced by sales, because physical output is usually not observed. The standard solution in the literature has been to deflate firm-level sales by an industrywide producer price index in the hope to eliminate price effects. This has two major implications.

First, it will potentially bias the coefficients of the production function if inputs are correlated with the price error, that is, the omitted price variable bias. More precisely, the coefficients of the production function are biased if the price error, which captures the difference between a firm’s price and the industry price index, is correlated with the firm’s input choices. The use of production function estimation on industries with differentiated products therefore requires controlling for these unobserved prices. Second, relying on deflated sales in the production function will generate productivity estimates that contain price and demand variation.3 This potentially introduces a relationship between measured productivity and trade liberalization simply through the liberalization’s impact on prices and demand. This implies that the impact on actual productivity cannot be identified, which invalidates evaluation of the welfare implications. Given that trade policy is evaluated on the basis of its impact on welfare, the distinction between physical productivity and prices is important.

This paper analyzes the productivity effects of reduced quota protection using detailed production and product-level data for Belgian textile producers.4 The trade protection change in my application plays two roles. First, we want to investigate its effects on productivity. Second, it is used to construct a set of plausibly exogenous demand shifters at the firm level. I develop an empirical model to estimate productivity by introducing a demand system in the standard production function framework. While the point that productivity estimates obtained by using revenue data may be poor measures of true efficiency had been made before (Klette and Griliches (1996), Katayama, Lu, and Tybout (2009), Levinsohn and Melitz (2006)), we do not know how important it is in

2 Pavcnik (2002) documented productivity gains from trade liberalization in Chile, Javorcik (2004) found positive spillovers from FDI in Lithuania, and Van Biesebroeck (2005) found learning by exporting in sub-Saharan African manufacturing. Bartelsman and Doms (2000) and Tybout (2000) reviewed related studies.

3 Productivity growth measures are not biased under the assumption that input variation is not correlated with the price deviation when every firm’s price relative to the industry producer price index (PPI) does not change over time.

4 These data have become fairly standard in international trade applications. For instance, similar data are available for India (Goldberg, Khandelwal, Pavcnik, and Topalova (2008)), Colombia (Kugler and Verhoogen (2008)), and the United States.
practice.5 My contribution is to empirically quantify the productivity response to a reduction in trade protection while relying on demand shifters and exogenous trade protection measures to control for demand and price effects. My methodology is therefore able to isolate the productivity response to reduced trade protection from the price and demand responses. The suggested method and identification strategy are quite general and can be applied whenever it is important to distinguish between revenue and physical productivity.

Recently, the literature has focused almost exclusively on controlling for the simultaneity bias when estimating production functions by relying on proxy methods (Olley and Pakes (1996), Levinsohn and Petrin (2003)). A series of papers used this approach to verify the productivity gains from changes in the operating environment of firms, such as trade liberalization. In almost all of the empirical applications, the omitted price variable bias was ignored or assumed away.6 I build on this framework by introducing observed demand shifters, product(-group) controls, and exogenous trade policy changes. This allows me to consider a demand system where elasticities of demand differ by product segment, and to recover estimates for productivity and returns to scale. Using specific functional forms for production and demand, I back out estimates for true productivity and estimate its response to the specific trade liberalization process that took place in the textile market. I find that the estimated productivity gains from relaxing protection are reduced substantially when relying on my empirical methodology. My results further imply that abolishing all quotas would lead to only a 2 percent change in productivity, on average, as opposed to about 8 percent when relying on standard measures of productivity. To my knowledge, this paper is the first to analyze productivity responses to trade liberalization while controlling for price and demand responses, by relying on the insight that changes in quota protection serve as exogenous demand shifters.

The remainder of this paper is organized as follows. In the next section, I introduce the empirical framework of production and demand which generates a revenue generating production function. Before turning to estimation and identification of this model, I provide some essential background information on the trade liberalization process in the textile market and describe the main data sources. This data section highlights the constraints most, if not all, empirical work faces in estimating production functions. Section 4 introduces the specific econometric strategy to estimate and identify the parameters of my model. The main results are presented in Section 5 and I provide various robustness checks in Section 6. I collect some final remarks and implications of my results together with the main conclusion in the final section. The Supplemental Material (De Loecker (2011)) contains Appendixes A–C.

5 Klette and Griliches (1996) were mostly interested in estimating returns to scale for selected U.S. manufacturing sectors while correcting for imperfect competition in output markets.

6 Some authors explicitly reinterpreted the productivity measures as sales per input measures. For instance, see footnote 3 of Olley and Pakes (1996, p. 1264).
2. EMPIRICAL MODEL: A FRAMEWORK OF PRODUCTION AND DEMAND

I start out with a model of single product firms to generate the estimating equation of interest. However, given the prevalence of multiproduct and multisegment firms in the data, I discuss the extension toward multiproduct firms. I stress that this extension allows me to use all firms in the data and therefore increase the efficiency of the estimates. The product mix has no role in the production function, but I emphasize the importance of observing the product mix of each firm in my sample to recover segment-specific demand parameters.

2.1. Single Product Producers

I consider a standard Cobb–Douglas production function

(1)   Q_it = L_it^{α_l} M_it^{α_m} K_it^{α_k} exp(ω_it + u_it),
where a firm i produces a unit of output Q_it at time t using labor (L_it), intermediate inputs (M_it), and capital (K_it). In addition to the various inputs, production depends on a firm-specific productivity shock (ω_it), which captures a constant term, and u_it, which captures measurement error and idiosyncratic shocks to production. As in most applications, physical output Q_it is not observed; therefore, empirical researchers have relied on measures of sales (R_it) or value added. The standard solution in the literature has been to deflate firm-level sales by an industrywide producer price index in the hope of eliminating price effects. This has two major implications. First, it potentially biases the coefficients of the production function if inputs are correlated with prices, that is, the omitted price variable bias. Second, it generates productivity estimates that contain price and demand variation. This potentially introduces a relationship between measured productivity and trade liberalization simply through the liberalization's impact on prices and demand.

To single out the productivity response to trade liberalization, I introduce a demand system for a firm i's variety in segment s into the production framework. I consider a standard horizontal product differentiation demand system (constant elasticity of substitution (CES)), where I allow for different substitution patterns by segment s:

(2)   Q_it = Q_st (P_it / P_st)^{η_s} exp(ξ_it).

This demand system implies that demand for a firm's product depends on its own price (P_it), an average price in the industry (P_st), an aggregate demand shifter (Q_st), and unobserved demand shocks (ξ_it). The CES demand system coupled with monopolistic competition implies a constant markup over
marginal cost for every specific segment of the industry of η_s/(η_s + 1) or, alternatively, a segment-specific Lerner index 1/|η_s|. Producers of textiles therefore face different demand elasticities depending on the product segment. My model therefore departs from the single demand parameter model used by Klette and Griliches (1996) and Levinsohn and Melitz (2006), who relied on the number of firms or aggregate industry output to estimate a single markup.7 It is worth noting that the aggregate demand shifter Q_st represents total demand for textile products (in a given segment) in the market.8

As discussed by Klette and Griliches (1996) and Levinsohn and Melitz (2006), I use the demand system (2) to obtain an expression for the price P_it. Under the single product case, every firm produces a single variety and, in equilibrium, quantity produced equals quantity demanded. A firm's revenue is simply R_it = Q_it P_it and, using the expression for price,
(3)   R_it = Q_it^{(η_s+1)/η_s} Q_st^{−1/η_s} P_st (exp(ξ_it))^{−1/η_s}.

A final step is to plug the specific production function (1) into equation (3) and consider log deflated revenue (r̃_it ≡ r_it − p_st), where lowercase letters denote logs. This implies the following estimating equation for the sales generating production function:

(4)   r̃_it = β_l l_it + β_m m_it + β_k k_it + β_s q_st + ω*_it + ξ*_it + u_it.
The coefficients of the production function are reduced form parameters that combine production and demand, and I denote them by β as opposed to the true technology parameters α. The parameters of interest are β_h = ((η_s + 1)/η_s) α_h for h = {l, m, k}, and the segment-specific demand parameters, β_s = 1/|η_s|, which are direct estimates of the segment-specific elasticity of demand. Returns to scale in production γ is obtained by summing the production function parameters, that is, γ = α_l + α_m + α_k. Note that unobserved prices are already picked up through the correlation with inputs and by including the segment-specific output. In Section 4, I discuss the structure of unobserved productivity and demand shocks in detail. For now, I note that both unobservables enter the estimating equation scaled by the relevant demand parameter, just like the production function coefficients; that is, ω*_it ≡ ω_it ((η_s + 1)/η_s) and ξ*_it ≡ ξ_it |η_s|^{−1}. The difference between the two only becomes important when recovering estimates of productivity after estimating the parameters of the model.

7 These single markup models imply that aggregate technology shocks or factor utilization cannot be controlled for because year dummies can no longer be used if one wants to identify the demand parameter.

8 I refer to Appendix B for a detailed discussion on the exact construction of this variable in my empirical application. For now, it is sufficient to note that Q_st measures total demand for textile products (in segment s) at time t in the relevant market. In my context, it will be important to include imports as well.
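As a numeric illustration of this mapping, the following sketch recovers the technology parameters and the demand elasticity from hypothetical reduced-form estimates; none of the numbers are estimates from the paper.

```python
# Minimal sketch: beta_s = 1/|eta_s| pins down eta_s, and
# alpha_h = beta_h * eta_s / (eta_s + 1) undoes the markup scaling.
def recover_structural(beta_s, beta):
    eta = -1.0 / beta_s                  # CES elasticity (demand slopes down)
    scale = eta / (eta + 1.0)            # inverse of (eta_s + 1)/eta_s
    alpha = {h: b * scale for h, b in beta.items()}
    gamma = sum(alpha.values())          # returns to scale
    return eta, alpha, gamma

eta, alpha, gamma = recover_structural(0.25, {"l": 0.20, "m": 0.50, "k": 0.10})
# eta = -4, so each alpha_h = (4/3) * beta_h and gamma = (4/3) * 0.80 ~ 1.07
```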
2.2. Multiproduct Producers

Anticipating the significant share of multiproduct firms in my data, I explicitly allow for multiproduct firms in my empirical model so as to use the entire sample of firms. However, to use the full data, more structure is required given that standard production data do not record input usage by product. Therefore, to use the product-level information I require an extra step of aggregating the data at the product level to the firm level, where the relevant variables such as sales and inputs are observed. I aggregate the production function to the firm level by assuming identical production functions across products produced, which is a standard assumption in empirical work; see, for instance, Foster, Haltiwanger, and Syverson (2008) and more discussion in Appendix B. Under this assumption, and given that I observe the number of products each firm produces, I can relate the production of a given product j of firm i, Q_ijt, to its total input use and the number of products produced. The production function for product j of firm i is given by
(5)   Q_ijt = (c_ijt L_it)^{α_l} (c_ijt M_it)^{α_m} (c_ijt K_it)^{α_k} exp(ω_it + u_it)

(6)         = J_it^{−γ} Q_it,
where c_ijt is defined as the share of product j in firm i's total input use; for instance, labor used on product j at firm i at time t, L_ijt, is given by c_ijt L_it. The second line uses the concept that inputs are spread across products in exact proportion to the number of products produced, J_it, such that c_ijt = J_it^{−1}. Introducing multiproduct firms in this framework therefore only requires controlling for the number of products produced, which scales the production from the product level to the firm level. This restriction is imposed to accommodate a common data constraint of not observing input use by product.

I follow the same steps as above by relying on the demand system to generate an expression for firm-level revenue. The demand system is as before, but at the level of a product. The only difference is that to use the equilibrium condition that a firm's total production is equal to its total demand, I have to consider the firm's total revenue over all products it owns, J(i), that is, R_it = Σ_{j∈J(i)} P_ijt Q_ijt. Combining the production function and the expression for price from (2) leads to an expression for total deflated revenue as a function of inputs, observed demand shifters, productivity, the number of products, and unobserved demand shocks, and is given by
(7)   r̃_it = β_np np_it + β_l l_it + β_m m_it + β_k k_it + β_s q_st + ω*_it + ξ*_it + u_it,
where np_it = ln(J_it). Refer to Appendix B.3 for an explicit derivation. For a single product firm, ln(J_it) = 0, whereas for multiproduct firms an additional term is introduced. I can take this last equation to the data, except that in addition to firms producing multiple products, they can also be active in different segments. The variation of activity across segments is discussed in the next section.
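A minimal sketch of constructing np_it = ln(J_it) from a product-level table with one row per firm–year–product; the toy data are hypothetical.

```python
# Minimal sketch: count products per firm-year and take logs, so the
# multiproduct term drops out (np_it = 0) for single-product firms.
import numpy as np
import pandas as pd

products = pd.DataFrame({
    "firm":    [1, 1, 1, 2],
    "year":    [2000, 2000, 2000, 2000],
    "product": ["carpet", "rug", "yarn", "yarn"],
})
J = products.groupby(["firm", "year"])["product"].nunique()
np_it = np.log(J).rename("np_it")
print(np_it)  # firm 1: ln(3) ~ 1.10; firm 2: ln(1) = 0
```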
For single segment firms, it suffices to stack observations across segments, and to cluster the standard errors by segment to obtain consistent estimates and to verify significance levels. Alternatively, I can estimate this regression segment by segment, but I can increase efficiency by pooling over all single segment firms, since the production coefficients are assumed not to vary across segments, and simply expand the term on q_st to Σ_s β_s s_is q_st, where a dummy variable s_is is switched on per firm.9 This generates estimates for all parameters of interest, including the segment-specific demand parameters, from estimating
(8)   r̃_it = β_np np_it + β_l l_it + β_m m_it + β_k k_it + Σ_{s=1}^{S} s_is β_s q_st + ω*_it + ξ*_it + u_it.
Going to firms with multiple segments, s_is can be anywhere between 0 and 1. In this way, all firms now potentially face S different demand conditions, and the product mix (s_is) variance helps to identify the segment demand elasticities. Estimating the revenue production function across all firms in the data, which consist of single product producers, multiproduct–single segment producers, and multiproduct–multisegment producers, requires additional restrictions on the underlying production and demand structure. To estimate my model on multiproduct/single segment firms, I rely on proportionality of inputs, which follows the standard in the literature. However, to enlarge the sample to multisegment firms, I need to measure, at the level of a firm, the share of each segment s in the firm's total demand. I consider various approaches to measure the latter and, in addition, I estimate my model on the different sets of firms in the data to verify whether my results are robust to the extra assumptions and potential measurement errors.

Before I discuss the estimation procedure and the identification strategy, I introduce the various data sets I rely on in my empirical analysis. The discussion of the data guides my identification strategy, which requires controlling for unobserved demand shocks (ξ_it) and productivity shocks (ω_it). I control for unobserved productivity shocks by relying on a modification of recently developed proxy estimators initiated by Olley and Pakes (1996) and subsequent work of Levinsohn and Petrin (2003). While segment-specific demand shifters (q_st) are directly observed, I rely on detailed product data and firm-specific protection rates to control for unobserved demand shocks which could otherwise bias the production function and demand parameters. I develop an estimation strategy that combines both controls and generates estimates of productivity purged from price variation by jointly estimating production and demand parameters. This approach allows for a separate identification of the productivity and residual demand effects of quota protection.

9 This assumption is a consequence of not observing inputs broken down by products, a standard restriction in microdata.
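A minimal sketch of the pooled regression in equation (8), interacting segment activity shares with segment demand shifters and clustering standard errors by segment; it deliberately omits the proxy-variable control for unobserved productivity developed in Section 4, and all column names are hypothetical.

```python
# Minimal sketch of equation (8) as pooled OLS with segment-clustered errors;
# the productivity proxy step is omitted here for brevity.
import statsmodels.formula.api as smf

def estimate_eq8(df, n_segments=5):
    # Interact each firm's segment share s_is with the segment shifter q_st
    for s in range(1, n_segments + 1):
        df[f"sq{s}"] = df[f"share_seg{s}"] * df[f"q_seg{s}"]
    rhs = "np_it + l + m + k + " + " + ".join(f"sq{s}" for s in range(1, n_segments + 1))
    return smf.ols(f"r_tilde ~ {rhs}", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["main_segment"]})
```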
3. BACKGROUND ON TEXTILE MARKET AND DATA

I provide some background on the operating environment of Belgian textile producers in the European Union (EU-15) market during my sample period, 1994–2002. I show that reduced quota protection put downward pressure on output prices and led to a reallocation of imports toward low wage textile producers. Furthermore, I describe my three main data sources and highlight the importance of observing a firm's product mix and product-specific protection data.

3.1. Trade Protection in the Textile Market

Belgian producers shipped about 85 percent of their total production to the EU-15 market throughout the sample period, or about 8.5 billion euros. I therefore consider the EU-15 as the relevant market for Belgian producers. This has an important implication for my empirical analysis. I can hereby rely on the drastic change in quota protection that took place in the EU-15 textile market during my sample period to serve two distinct roles in the analysis. First of all, I want to examine its impact on firm-level productivity and the efficiency of the industry. Second, quota protection acts as firm- and time-specific demand shifters which play an important role in the analysis. Given that the protection is decided at the EU level, I treat changes in trade policy as exogenous, as it is hard to argue that a single Belgian producer can impact EU-wide trade policy.

The EU textile industry experienced a dramatic change between 1990 and 2005. Both a significant reduction in quota protection and a reorientation of imports toward low wage countries had an impact on the operating environment of firms. All of this together had an impact on prices, investment, and production in this market. Table I shows the number of quotas and the average quota levels. I describe these data in more detail later in the paper. For now, it suffices to observe the drop in the number of quota restrictions over the sample period. By 2002, the number of quotas had fallen by 54 percent over a 9 year period; these numbers refer to the number of product–supplier restricted imports. The last four columns of Table I present the evolution by unit of measurement and the same pattern emerges: the average quota level for products protected throughout the sample period increased by 72 and 44 percent for products measured in kilograms and number of pieces, respectively. Both the enormous drop in the number of quotas and the increase in the levels of existing quotas point to a period of significant trade liberalization in the EU textile industry.

This table does not show the variation in trade protection across products, which is ultimately what determines how an individual firm reacts to such a change. With respect to this dimension, the raw data also suggest considerable variation in the change of quota protection across products. Over the sample period, the unweighted average (across products) drop in the number of supplying countries facing a quota is 36 percent with a standard deviation of 36 percent. For instance, the number of quotas on “track suits of
TABLE I

NUMBER OF QUOTAS AND AVERAGE QUOTA LEVELS (IN MILLIONS)

Year      No. of Quota Protections    kg: No. of Quotas    kg: Level    Pieces: No. of Quotas    Pieces: Level
1994      1,046                       466                  3.10         580                      8.58
1995      936                         452                  3.74         484                      9.50
1996      824                         411                  3.70         413                      7.95
1997      857                         413                  3.73         444                      9.28
1998      636                         329                  4.21         307                      9.01
1999      642                         338                  4.25         304                      10.53
2000      636                         333                  4.60         303                      9.77
2001      574                         298                  5.41         276                      11.06
2002      486                         259                  5.33         227                      12.37

Change    −54%                        −44%                 72%          −60%                     44%
knitted or crocheted fabric, of wool, of cotton or of man-made textile fibres” dropped by 80 percent, whereas quotas on “women’s or girls’ blouses, shirts and shirt-blouses, whether or not knitted or crocheted, of wool, of cotton or man-made fibres” only dropped by 30 percent. My empirical analysis relies on this product (and time) variation of protection by adding product information to the firm-level production data. Table II reports the top suppliers of textile products to the EU-15 and their share in total EU-15 imports in 1995 and 2002, as well as the number of quotas they faced. Total imports increased by almost 60 percent from 1995 to 2002 and the composition of supplying countries changed quite a bit. China is responsible for 16 percent of EU imports of textile products by 2002, while only facing about half of the quota restrictions compared to 1995. The Europe Agreements signed with (at the time) EU candidate countries reorientated imports as well. Both Romania and the Czech Republic significantly increased their share in EU imports due to the abolishment of quota protection. Firms operating in this environment clearly faced different demand conditions that in turn are expected to impact optimal price setting. Finally, when analyzing producer prices of textile products, I find that during 1996–1999 and 2000–2005, prices of textile products remained relatively unchanged and even slightly decreased. Producer prices in the textile sector further diverged from the manufacturing sector as a whole. In fact, in relative terms, Belgian textile producers saw their real prices, compared to the manufacturing sector, drop by almost 15 percent by 2007. This then suggests a potential relationship between producer prices and trade protection. It is important to note that these aggregate prices, PPI, mask the heterogeneity of prices across goods. It is precisely the variation across producers and products that needs to be controlled for, and relying on deflated sales to proxy for output in a production function only corrects for aggregate price shocks. The
TABLE II

TOP SUPPLIERS OF TEXTILE PRODUCTS TO THE EU^a

Top in 1995 (63%):
Country          Share   No. of Quotas
China            10.6    66
Turkey           9.4     35
India            6.5     19
Hong Kong        5.7     32
Poland           4.2     18
Tunisia          4.1     2
United States    4.1     0
Switzerland      4.0     0
Morocco          3.8     0
Indonesia        3.3     14
Pakistan         2.7     15
Bangladesh       2.3     3
Romania          2.3     30
Total imports: 45,252    Imports top 13: 28,509

Top in 2002 (68.3%):
Country            Share   No. of Quotas
China              15.9    35
Turkey             12.5    0
India              5.6     17
Romania            5.4     0
Tunisia            4.4     0
Bangladesh         4.0     3
Morocco            3.8     0
Poland             3.4     0
Hong Kong          3.3     18
Indonesia          2.8     12
Pakistan           2.7     14
Czech Republic     2.3     0
Republic of Korea  2.2     27
Total imports: 71,414    Imports top 13: 48,776

a Total imports are in millions of euros and are from the author's own calculations and EUROSTAT.
product-level quota data highlight the variation of protection across products and suppliers over time, and I will allow prices to vary with changes in protection.

3.2. Production, Product, and Protection Data

I rely on detailed plant-level production data, in addition to unique information on the product mix of firms and product-specific quota protection data, to analyze the productivity effects of decreased quota protection. I describe the three data sets and the main variables of interest in turn.

3.2.1. Production Data

My data cover firms active in the Belgian textile industry during the period 1994–2002. The firm-level data are collected from tax records by the National Bank of Belgium and the database is commercialized by BvD BELFIRST. The data contain the entire balance sheets of all Belgian firms that have to report to the tax authorities. In addition to traditional variables, such as revenue, value added, employment, various capital stock measures, investments, and material inputs, the data set also provides detailed information on firm entry and exit behavior.

Table III reports summary statistics of some of the variables used in the analysis. The last column repeats the finding of a decreased producer price
TABLE III

SUMMARY STATISTICS OF PRODUCTION DATA^a

Year   Employment   Total Sales   VA p/w   Materials   PPI
1994   89           3,185         45.1     13,160      100.00
1995   87           3,562         44.4     14,853      103.40
1996   83           3,418         45.2     14,313      99.48
1997   85           4,290         53.0     16,688      99.17
1998   90           4,482         50.9     17,266      98.86
1999   88           4,248         51.6     15,546      98.77
2000   90           4,763         52.5     17,511      102.98
2001   92           4,984         51.5     17,523      102.67
2002   99           4,500         53.7     17,053      102.89

a Employment is average number of full time employees, total sales is given in millions of euros, VA p/w is value added per worker, and materials is given in thousands of euros.
index (PPI) during my sample period. In fact, the employers' organization, FEBELTEX, suggests in its various annual reports that the downward pressure on prices was largely due to increased competition from low wage countries, most notably the Central and Eastern European Countries (CEECs), Turkey, and China. Together with a drop in the average price, the industry as a whole experienced a downward trend in sales at the end of the nineties, as shown in the third column. Average value added per worker, often used as a crude measure of productivity, increased by 19 percent. The raw data clearly indicate a positive correlation between decreased quota protection and productivity, on the one hand, and a downward pressure on prices, on the other hand. This paper provides an empirical model to quantify both effects, and to isolate the productivity response from the price and demand effects.

3.2.2. Product-Level Data

The employers' organization of the Belgian textile industry (FEBELTEX (2003)) reports product-level information for around 310 Belgian textile producers.10 More precisely, it lists all products produced by every textile producer, and the products are classified into five segments and various subsegments based on the relevant markets of the products. From this, I constructed the product mix of each firm: Table A.I lists the five segments of the textile market with their corresponding product categories. I matched the product information with the production data set (BELFIRST) and ended up with 308 firms for which I observe both firm-level and product-level information. After matching both data sets, I cover about 75 percent of total employment in the textile industry.

10 The firms listed in FEBELTEX account for more than 85 percent of value added and employment, and the data were downloaded online (www.febeltex.be) during 2003–2004.
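As an illustration of this matching step, the following minimal Python sketch merges a firm-level panel with a product-level list on a common firm identifier. All names and values here are hypothetical and only mimic the structure of the BELFIRST–FEBELTEX match described above, not the actual data layout.

    import pandas as pd

    # Hypothetical firm-year panel in the spirit of BELFIRST (toy values).
    belfirst = pd.DataFrame({
        "firm_id": [1, 1, 2, 2, 3, 3],
        "year":    [1994, 1995, 1994, 1995, 1994, 1995],
        "revenue": [3.1, 3.4, 4.0, 4.2, 2.5, 2.6],
    })

    # Hypothetical product list in the spirit of FEBELTEX; firm 3 is unmatched.
    febeltex = pd.DataFrame({
        "firm_id": [1, 1, 2],
        "product": ["carpets", "rugs", "workwear"],
        "segment": ["Interior", "Interior", "Clothing"],
    })

    # An inner merge keeps only firms observed in both sources, mirroring
    # the 308 matched firms in the text.
    matched = belfirst.merge(febeltex, on="firm_id", how="inner")
    print(matched["firm_id"].nunique())  # 2 matched firms in this toy example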
TABLE IV
NUMBER OF FIRMS ACROSS DIFFERENT SEGMENTS^a

               Interior   Clothing   Technical   Finishing   Spinning
Interior         77.0        4.8        15.8         7.3        1.8
Clothing                    58.9        33.9         7.1        1.8
Technical                               35.1        19.6       17.5
Finishing                                           39.6       12.5
Spinning                                                       47.5
No. of firms      165         56          97          48         40

a Cells do not sum to 100 percent by row or column, as a firm can be active in more than one segment.
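A matrix like Table IV can be computed from a firm-by-segment activity indicator. The sketch below uses simulated indicators and, since the exact base of each cell is not restated here, adopts one plausible convention as an assumption: off-diagonal cells are shares of all firms active in both segments, and diagonal cells are shares of firms active in that segment only.

    import numpy as np
    import pandas as pd

    segments = ["Interior", "Clothing", "Technical", "Finishing", "Spinning"]
    rng = np.random.default_rng(0)
    # Simulated firm-by-segment activity indicators (0/1) for 308 toy firms.
    active = pd.DataFrame(rng.integers(0, 2, size=(308, 5)), columns=segments)

    n = len(active)
    # Off-diagonal: percentage of firms active in both segments s and s'.
    co = 100.0 * (active.T @ active) / n
    # Diagonal: percentage of firms active in that segment and nowhere else.
    specialized = active[active.sum(axis=1) == 1].sum(axis=0)
    np.fill_diagonal(co.values, 100.0 * specialized / n)
    print(co.round(1))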
For each firm, I observe the number of products produced, which products, and in which segment(s) the firm is active. There are five segments: (i) Interior, (ii) Clothing, (iii) Technical Textiles, (iv) Finishing, and (v) Spinning and Preparing (see Appendix A for more on the data). In total there are 563 different products: on average, a firm produces nine products, and 50 percent of firms have three or fewer products. Furthermore, around 25 percent of firms are active in more than one segment and they are, on average, 20 percent larger in terms of sales and employment. This information is in itself interesting and relates to recent work by Bernard, Redding, and Schott (2010), who looked at the importance of differences in product mix across firms and time, where they relied on a 5 digit industry code to define a product. Given that I rely on a less aggregated definition of a product, it is not surprising that I find a higher average number of products per firm. Table IV presents a matrix where each cell denotes the percentage of firms that are active in both segments. For instance, 4.8 percent of firms are active in both the Interior and Clothing segments. The high percentages on the diagonal reflect that most firms specialize in one segment; however, firms active in the Technical and Finishing segments tend to be less specialized, as they capture applying and supplying segments, respectively. The same exercise can be done based on the number of products and is shown in Table V. The concentration of activities in one segment is even more pronounced when relying on the product-level data. The number in each cell denotes the average (across firms) share of a firm's products in a given segment relative to its total number of products. The table has to be interpreted in the following way: firms that are active in the Interior segment have (on average) 83.72 percent of all their products in the Interior segment. The analysis based on the product information reveals that some firms concentrate their activity in one segment, while others combine different product segments. Firms active in any of the segments tend to have quite a large fraction of their products in Technical textiles, between 8.27 and 27.2 percent. Finally, the last two
TABLE V
FIRMS' PRODUCT MIX ACROSS DIFFERENT SEGMENTS

            Interior   Clothing   Technical   Finishing   Spinning
Interior      83.72       2.78        8.27        4.41       0.80
Clothing       3.03      79.28       15.36        1.86       0.48
Technical      7.01       8.97       70.16        9.06       4.79
Finishing      5.75       3.52       15.53       72.85       2.35
Spinning       3.72       0.65       27.20        7.40      61.04
Median            2          6           8          11          9
Min               1          2           1           2          1
rows of Table V show the median and minimum number of products owned by a firm across the different segments. Firms that produce only two (or fewer) products are present in all five segments, but the median varies somewhat across segments.

3.2.3. Protection Data

The quota data come directly from the Systeme Integre de Gestion de Licenses (SIGL) data base constructed by the European Commission and are publicly available online (see Appendix A). This data base began in 1993 and reports all products that hold a quota at some point in time for a given supplying country. For each product, the following data are available: the supplying country, product, year, quota level, working level, licensed quantity, and quantity actually used by the supplying country.11 From this I constructed a data base that lists product–country–year specific information on quotas relevant to the EU market. In total there are 182 product categories and 56 supplying countries for which at least one quota on a product from a supplying country in a given year applies. As discussed before, Table I presents the change in protection in the raw data and highlights the sharp drop in the number of quota protections as well as the increase in average quota levels.12 I create a composite variable that measures the extent to which a firm is protected (across its products). A first and most straightforward measure is a dummy variable that is 1 if a quota protection applies for a certain product category c on imports from country e in year t (qr_ect) and switches to 0 when the quota no longer applies.
11 Appendix A.4 describes the quota data in more detail and provides two cases of how quota protection changed.
12 For the remainder of this paper, I work under the assumption that no remaining tariffs are in place after quotas are abolished. The quotas in my model therefore impact residual demand, after which no more protection occurs. This working assumption is consistent with EU trade policy in the textile market, where the bulk of imported goods are exempt from tariffs and where quotas are the predominant trade policy tool.
The average quota restriction that applies to a given product c is given by

(9)    qr_ct = Σ_e a_et qr_ect,

where a_et is the weight of supplier e in period t.13 I experiment with various weights for a_et, ranging from gross domestic product (GDP) shares, total output in textile products, and export shares of a given supplier e, to simple averages. It is important to stress that my weights take into account the production potential of each supplier and are not related to the actual size of the quota. More specifically, I construct a_et as the share of country e's production (of textile products) in total production across all supplying countries. When a quota is abolished for a very small country, that is, one with a small production potential, a_et is very close to zero, and therefore its impact on my (firm-level) liberalization variable is extremely small. This measure, qr_ct, is 0 if not a single quota applies to imports of product c from any of the supplying countries at a given time and is 1 if a quota holds for all supplying countries.

A final step is to relate the quota restriction measure to the firm-level data.14 I aggregate the product-level quota measure qr_ct to the level of a firm i using the product mix information and recover a firm-specific quota protection measure (qr_it). More precisely, I take a weighted sum over all products c that a firm produces, c ∈ J(i), to obtain

(10)    qr_it = Σ_{c∈J(i)} a_c qr_ct,

where a_c represents the share of product c in a firm's production, so as to weight the protection across products accordingly. Because of differences in product classification between the quota data and the production data, I can only rely on simple averages to compute qr_it, that is, I consider the average of qr_ct across a firm's products. The protection rate variable (qr_it) lies between 0 and 1: it is 0 if not a single product is protected and is 1 if all of the firm's products are protected from all supplying countries. Figure 1 shows the evolution of the quota restriction variable given by (10) for each segment.

13 As noted by a referee, a more general treatment would allow for a product–supplier weight a_ect. However, due to data restrictions, I consider a constant share across product categories within a supplier–year. To compute a more general share which is product–supplier specific, I would need to observe production at the product level for all 52 supplying countries, and these data are very hard to come by.
14 The 182 different quota product categories map into 390 different 8 digit product codes. The latter correspond to 23 different 4 digit industry classifications (equivalent to the 5 digit Standard Industrial Classification (SIC) level in the U.S.) that allow me to relate the quota restriction variable to the firm-level variables.
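A minimal sketch of how (9) and (10) can be computed, assuming a tidy table of quota dummies with invented suppliers, products, weights, and product mixes; it is meant only to make the two aggregation steps concrete.

    import numpy as np
    import pandas as pd

    # Hypothetical quota dummies qr_ect with supplier production shares a_et.
    q = pd.DataFrame({
        "product":  ["c1", "c1", "c2", "c2"],
        "supplier": ["e1", "e2", "e1", "e2"],
        "year":     [1995, 1995, 1995, 1995],
        "qr_ect":   [1, 0, 1, 1],
        "a_et":     [0.7, 0.3, 0.7, 0.3],   # shares sum to 1 across suppliers
    })

    # Equation (9): product-level protection, weighted over suppliers.
    qr_ct = (q["a_et"] * q["qr_ect"]).groupby([q["product"], q["year"]]).sum()

    # Equation (10), using the simple averages described in the text because
    # the product shares a_c are unobserved across the two classifications.
    product_mix = {"firm1": ["c1", "c2"], "firm2": ["c2"]}
    qr_it = {firm: np.mean([qr_ct[(c, 1995)] for c in products])
             for firm, products in product_mix.items()}
    print(qr_ct.to_dict(), qr_it)  # qr lies in [0, 1] by construction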
FIGURE 1.—Evolution of quota restriction measure by segment (1994–2002).
In all segments, the average quota restriction has gone down considerably over the sample period. However, there is significant variation across segments, which will be important to identify segment-specific demand elasticities. Quota levels are not reflected in my measure of trade liberalization; however, I relate changes in quota levels to firm-level productivity in the empirical analysis. The latter picks up changes in protection intensity, conditional on having a quota in place.15 In the next section, I discuss my empirical strategy to deal with unobserved productivity and demand shocks. The estimation procedure relies on observing demand shifters, such as the protection rate variable qr_it, so as to jointly estimate demand and production parameters.

4. ESTIMATION AND IDENTIFICATION

In this section, I describe the estimation procedure used to obtain estimates of the production function and demand parameters, which are needed to obtain estimates of productivity. Finally, I discuss some remaining identification issues.

15 In principle, I could use my method to construct an alternative measure of protection where I rely on the quota levels and weight them by production potential. The problem then lies in aggregating quota levels measured in various units (kg, m3, etc.). I therefore choose to rely on a cleaner measure, that is, a quota dummy, to construct a measure for protection.
4.1. Estimation Strategy

I lay out the estimation procedure that provides me with estimates of the production function coefficients and the demand parameters. The estimating equation for single product firms is given by
(11)    r_it = β_l l_it + β_m m_it + β_k k_it + β_s q_st + ω*_it + ξ*_it + u_it,
where the ultimate goal is to recover estimates of productivity. Note that the asterisks on the unobservables only keep track of the difference between the actual productivity (demand) unobservable and that unobservable multiplied by the inverse demand parameter, as discussed in Section 2. To obtain consistent estimates of the revenue production function, I need to control for both unobserved productivity shocks and unobserved demand shocks. Both the estimation procedure and the identification strategy are identical for the multiproduct and multisegment producers, where an additional term, β_np np_i, enters the estimating equation. This has no implications for the ability to identify the parameters of interest, as the product mix is assumed to be fixed over time. When estimating this equation over multisegment firms, I consider the same expansion of the output variable as before, Σ_s β_s s_is q_st, to account for the different demand conditions across segments: Appendix B.3 provides more details. As noted before, under the CES demand structure, unobserved prices are picked up by the variation in inputs and by aggregate demand (q_st). However, other factors that impact firm-level prices and that are unaccounted for will potentially bias the coefficients of interest. In my setting, a likely candidate is quota protection. Differences in protection across producers and over time are expected to impact firm-level residual demand and hence prices. I follow Goldberg (1995) and decompose the unobserved demand shock ξ_it, based on the nesting structure of the product data, into three observable components and an unobserved firm-specific demand shock. The observed components are based on the firm-specific protection rate, the products a firm produces, and the subsegment of the industry in which the firm is active (Table A.II). Formally, let ξ_it be decomposed into the components
(12)    ξ_it = ξ_j + ξ_g + τ qr_it + ξ̃_it,
where j refers to a product, g refers to a product group (subsegment), and τ is a coefficient.16 In this way, I capture persistent product-group differences in demand as well as differences in protection. It will be important to keep track of the properties of the protection variable, qr_it, for the specific estimation procedure. As discussed in Section 3, protection differs by product, and because firms have different product mixes, the protection rate varies across firms and acts as a firm-specific residual demand shock.

16 The demand unobservable ξ*_it is again simply related to ξ_it via a common parameter and does not affect the results. All coefficients on the product, product–group, and protection variables capture the relevant segment Lerner index |η_s|^{−1}.
I assume that the remaining demand shocks (ξ̃_it) are independent and identically distributed (i.i.d.) across producers and time. In terms of the data, j is a product within a subsegment g, which belongs to a given segment s. To illustrate the various components of ξ_it, I present the detailed structure for the Mobiltech subsegment within the Technical segment in Table A.II. This implies that, in this example, I control for differences in demand (and thus prices) across the nine subsegments (g) of the Technical segment and, within each subsegment, for differences across the products, in addition to (potential) differences in protection rates. This leads to the main estimating equation of interest,
(13)    r_it = β_l l_it + β_m m_it + β_k k_it + β_s q_st + Σ_{j∈J(i)} δ_j D_ij + Σ_{g∈G(i)} δ_g D_ig + τ qr_it + ω*_it + ε_it,

where J(i) and G(i) denote the sets of products and product groups a firm markets, respectively. The variables D_ij and D_ig are dummy variables that take on the value 1 if firm i produces product j (product group g) and are 0 otherwise. In what follows, I collect all product and product-group dummies in δD = Σ_{j∈J(i)} δ_j D_ij + Σ_{g∈G(i)} δ_g D_ig. Finally, ε_it captures idiosyncratic shocks to production (u_it) and demand (ξ̃_it).

The product and group effects enter my model to control for unobserved demand shocks and, given my assumption on the production function, they should not reflect any difference in technology across products. However, if this assumption fails, the product and group effects might capture technology differences across producers of different products and product groups. This distinction becomes unimportant when I evaluate the productivity–protection relationship, because time invariant factors are eliminated as I am interested in relating protection to productivity changes. I revisit this when I discuss my productivity estimates.

To estimate the parameters of the production function and the demand system, I rely on the insight of Olley and Pakes (1996) and Levinsohn and Petrin (2003) to proxy for unobserved productivity. I present both approaches in the context of my setting and provide conditions under which the parameters of interest are identified. The exact underlying assumptions depend on whether I rely on a static input (e.g., materials) or a dynamic control (investment) to proxy for productivity. The empirical literature has been vague about how trade shocks enter the productivity process. Given the research question at hand, it is clear that allowing reduced quota protection to impact (firm-level) demand is desirable, and here I allow quotas to impact firm-level residual demand instantaneously
and therefore to impact equilibrium prices. However, the mechanism through which quotas impact productivity is less obvious. Standard approaches implicitly rely on a story of X-inefficiency: firms can react to increased competition by eliminating inefficiencies, whereas aggregate productivity increases can come about through reallocation and exit. In my case, plant managers who face less quota protection could cut slack, which might show up as higher productivity. This does not capture productivity changes due to active investment decisions based on expectations of future product market toughness, which would allow quota protection to causally impact future productivity. I consider a process whereby lagged quota protection is allowed to impact productivity and thereby affect productivity changes as
(14)    ω_it = g_t(ω_it−1, qr_it−1) + υ_it.
The law of motion for productivity highlights the two distinct effects of quotas. Firm-level productivity can only react to quota protection with a lag. The lag captures the idea that it takes time for firms to reorganize, cut slack, hire a new manager, or introduce better production–supply management without affecting input use, all of which can lead to a higher ω_it. On the other hand, quotas can impact residual demand (and prices) instantaneously and create variation in firm-level revenue. As discussed in the Introduction, I work under the assumption that an individual producer has no power over setting quotas, and this exogeneity of quotas is important for my identification strategy: the shocks to productivity, υ_it, are not correlated with current quota levels qr_it and, by construction, are not correlated with lagged quotas qr_it−1. After describing the estimation procedures, I briefly discuss the identification of the variable and static inputs (labor and intermediate inputs) in both approaches, given the recent state of the literature. I evaluate the different assumptions in my application and test them where possible. It is important to note that my approach does not require a specific proxy estimator. In fact, the discussion below should help subsequent empirical work trade off these various assumptions.

4.1.1. Using a Static Input

The main advantages of relying on the insight of Levinsohn and Petrin (2003; LP hereafter) in my setting are twofold. First of all, I do not have to revisit the underlying dynamic problem of the firm; put differently, I do not have to take a stand on the exact role and process of the protection variable qr_it that enters as a residual demand shock and potentially impacts productivity. The use of a static input does not require me to model additional serially correlated variables (state variables); it simply exploits the static input demand conditions. Second, I can rely on intermediate inputs, for which all firms report positive values at each point in time, as opposed to investment data.
The starting point is to observe that the choice of materials m_it is directly related to a firm's productivity level, its capital stock, and all demand variables (qr_it, q_st, D), including quota protection, segment demand, and the product–group dummies, which impact a firm's residual demand and hence determine optimal input demand. This gives rise to the material demand equation
(15)    m_it = m_t(k_it, ω_it, qr_it, q_st, D).
Before proceeding, I have to verify whether input demand is monotonically increasing in productivity under imperfect competition. In Appendix C.1, I show that monotonicity is preserved under the monopolistic competition setup with constant markups. This finding is intuitive, as under the constant markup assumption, markups are not related to productivity. I rely on a function h_t(·) to proxy for productivity:
(16)    ω_it = h_t(k_it, m_it, qr_it, q_st, D).
The estimation procedure consists of two stages as in the standard LP case, except for the fact that I obtain both demand and supply parameters. I will highlight the differences with respect to the original LP framework and refer to Levinsohn and Petrin (2003) for more details. The first stage consists of the partial linear model
(17)    r_it = β_l l_it + φ_t(m_it, k_it, qr_it, q_st, D) + ε_it,
where φ_t(·) = β_m m_it + β_k k_it + β_s q_st + δD + τ qr_it + h_t(·).17 The first stage could then, in principle, identify the labor coefficient; I discuss the identification of the labor coefficient in the first stage in a separate section. For now, I want to focus on the correction for price variation when estimating the production function. The second stage provides the moments to identify the parameters of interest after constructing the innovation in the productivity process. Relying on the productivity process (14), where past quota protection can impact current productivity, I can obtain the innovation in productivity, υ_it+1(β_m, β_k, β_s, τ, δ), as a residual by nonparametrically regressing ω_it+1(β_m, β_k, β_s, τ, δ) on ω_it(β_m, β_k, β_s, τ, δ) and qr_it, where productivity is known from the first stage for given values of the parameters:

ω_it+1(β_m, β_k, β_s, τ, δ) = φ̂_it+1 − β_m m_it+1 − β_k k_it+1 − β_s q_st+1 − τ qr_it+1 − δD.

17 The control function φ_t(·) always contains the demand parameter as well, and reflects the difference between the structural error ω_it and how it enters the main estimating equation, ω*_it.
The parameters are obtained by generalized method of moments (GMM) using the moment conditions

(18)    E[ υ_it+1(β_m, β_k, β_s, τ, δ) · (m_it, k_it+1, q_st, qr_it+1, D)′ ] = 0.

More precisely, the demand parameter τ is identified by the moment E(qr_it+1 υ_it+1) = 0, relying on the exogeneity of quotas as motivated in the Introduction of this paper. The demand parameter β_s is identified by the condition that shocks to productivity are not correlated with lagged total (segment) output.18 The coefficients on materials and capital are identified using the standard moment conditions in the literature. For the practical implementation, I do not consider interaction terms between the product dummies D and the other variables. However, the first stage coefficients of these product dummies contain both the demand parameters and the inverse productivity parameters. Therefore, I cannot simply use the first stage estimates to subtract out these product effects. Instead, I obtain an estimate of productivity that contains the product effects and control for product-group effects when constructing the innovation in productivity, υ_it+1, by including D in the nonparametric regression of ω_it+1 on ω_it and qr_it. This implies that I do not obtain estimates of δ, which are not of direct interest but serve as product-specific demand controls. The sample analogue of (18), given by

(19)    (1/(NT)) Σ_i Σ_t υ_it+1(β_m, β_k, β_s, τ, δ) (m_it, k_it+1, q_st, qr_it+1, D)′,

is minimized using standard GMM techniques to obtain estimates of the production and demand parameters. The above procedure generates a separate estimate of the quota's effect on productivity (through g(·)) and on demand (τ). The intuition behind this result comes from the fact that in my model, protection can only affect productivity with a lag, while current quota protection can impact prices through residual demand.

18 In principle, I could rely on q_st+1 as well, but this would depend on there being no correlation between total output produced in a segment and shocks to productivity, which is clearly a stronger assumption. In particular, since total output is a weighted sum of producer-level output, the correlation between the latter and productivity shocks is the reason why we need to control for unobserved productivity shocks when estimating production functions in the first place. We would, therefore, require the correlation to get washed out in the aggregate, which is not likely.
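To fix ideas, the following self-contained sketch runs the two-stage procedure of this subsection on simulated data. The data generating process, the quadratic sieve approximating φ_t(·) and g_t(·), and all parameter values are illustrative assumptions, not the paper's actual implementation (which also includes the product dummies D).

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    N, T = 500, 6

    def ar1(rho, shape):
        # Serially correlated draws so that lagged values carry information.
        x = np.zeros(shape)
        x[:, 0] = rng.normal(size=shape[0])
        for t in range(1, shape[1]):
            x[:, t] = rho * x[:, t - 1] + rng.normal(size=shape[0])
        return x

    l, m, k = ar1(0.8, (N, T)), ar1(0.8, (N, T)), ar1(0.8, (N, T))
    qr = rng.uniform(size=(N, T))                 # protection rates in [0, 1]
    qs = np.tile(ar1(0.8, (1, T)), (N, 1))        # common segment output

    # Law of motion (14): lagged quota shifts productivity.
    w = np.zeros((N, T))
    w[:, 0] = rng.normal(scale=0.5, size=N)
    for t in range(1, T):
        w[:, t] = 0.7 * w[:, t - 1] + 0.3 * qr[:, t - 1] \
                  + rng.normal(scale=0.2, size=N)

    bl, bm, bk, bs, tau = 0.2, 0.6, 0.1, 0.25, 0.3   # invented "true" values
    r = bl*l + bm*m + bk*k + bs*qs + tau*qr + w \
        + rng.normal(scale=0.1, size=(N, T))         # adds eps_it

    # First stage (17): r on l and a quadratic sieve in (m, k, qr, qs).
    base = np.column_stack([m.ravel(), k.ravel(), qr.ravel(), qs.ravel()])
    X1 = np.column_stack([l.ravel(), base, base**2, np.ones(N * T)])
    b1, *_ = np.linalg.lstsq(X1, r.ravel(), rcond=None)
    phi_hat = (X1 @ b1 - b1[0] * l.ravel()).reshape(N, T)  # phi, eps stripped

    def gmm_obj(theta):
        bm_, bk_, bs_, tau_ = theta
        w_hat = phi_hat - bm_*m - bk_*k - bs_*qs - tau_*qr  # omega up to params
        # Innovation from (14): quadratic regression of w_{t+1} on (w_t, qr_t).
        y = w_hat[:, 1:].ravel()
        Z = np.column_stack([w_hat[:, :-1].ravel(), qr[:, :-1].ravel()])
        G = np.column_stack([Z, Z**2, np.ones(len(y))])
        g, *_ = np.linalg.lstsq(G, y, rcond=None)
        v = y - G @ g
        # Moments (18): v orthogonal to (m_t, k_{t+1}, qs_t, qr_{t+1}).
        inst = np.column_stack([m[:, :-1].ravel(), k[:, 1:].ravel(),
                                qs[:, :-1].ravel(), qr[:, 1:].ravel()])
        mbar = inst.T @ v / len(v)
        return mbar @ mbar

    res = minimize(gmm_obj, x0=np.array([0.5, 0.2, 0.2, 0.1]),
                   method="Nelder-Mead")
    print(res.x)  # estimates of (beta_m, beta_k, beta_s, tau)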
4.1.2. Using a Dynamic Control

The Olley and Pakes (1996; hereafter OP) model relies crucially on the notion that the investment policy function i_t(·) can be inverted to proxy unobserved productivity by a function of investment and capital. However, as mentioned in LP, given that my problem is not the standard OP model, I have to verify whether invertibility is preserved with the introduction of quotas into the model. That is, if quotas are correlated over time and not simply i.i.d. shocks, the quota variable qr_it will constitute a new state variable in the dynamic problem of the firm. The investment equation in my setting is
(20)    i_it = i_t(k_it, ω_it, qr_it, D),
where the product–group dummies need to be included as well. The data suggest that the segment output variable q_st is not serially correlated conditional on qr_it and D; therefore, I do not treat it as a state variable.19 To invert the investment equation, I need to restrict the role of quotas in the model. The proof by Pakes (1994) goes through under the assumption that qr_it is an exogenous state variable with known support: here it lies between 0 and 1. This implies that the level of protection at time t provides a sufficient statistic for future values of quotas. Although restrictive, it does get at an important mechanism through which differences in quotas across firms (through products) lead to differences in investment. Note that this rules out differences in beliefs about future quotas for firms with identical quotas at time t. I discuss this restriction and report a robustness check in Appendix C.2. Inverting equation (20) generates the basis for estimation, since I can use
(21)    ω_it = h_t(k_it, i_it, qr_it, D)
to proxy for unobserved productivity. It is important to highlight that quotas enter both through shocks to residual demand and through the investment equation. If τ = 0 and quotas do not impact residual demand, quota protection still enters the investment proxy because quotas potentially impact the law of motion for productivity. The first stage is then given by
(22)    r_it = β_l l_it + β_m m_it + β_s q_st + φ_t(i_it, k_it, qr_it, D) + ε_it,
where φ_t(·) = β_k k_it + τ qr_it + δD + h_t(i_it, k_it, qr_it, D). This first stage identifies the coefficients on labor, intermediate inputs, and the demand elasticity. I revisit the identification of the variable (static) inputs (β_l and β_m) below.
19 I can easily accommodate q_st being serially correlated over time by including it in the investment equation and identifying its coefficient in the final stage by considering the moment E(υ_it+1 q_st) = 0, just as in the material demand approach discussed before.
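In the same simulated spirit as before, the first stage (22) can be run by replacing the material proxy with a flexible function of (i, k, qr). The sketch below, with invented data and a quadratic sieve, shows how β_l, β_m, and β_s are read off the linear part while φ_t(·) absorbs β_k k_it + τ qr_it + h_t(·).

    import numpy as np

    rng = np.random.default_rng(2)
    n = 2000
    l, m, k, inv, qr, qs = rng.normal(size=(6, n))
    # Toy revenue equation; omega is an invented function of (inv, k, qr)
    # mimicking the inverted investment policy h_t(inv, k, qr).
    omega = 0.4 * inv + 0.2 * k - 0.3 * qr
    r = 0.2*l + 0.6*m + 0.25*qs + 0.1*k + 0.3*qr + omega \
        + rng.normal(scale=0.1, size=n)

    proxy = np.column_stack([inv, k, qr])
    sieve = np.column_stack([proxy, proxy**2,
                             proxy[:, [0]] * proxy[:, [1, 2]]])
    X = np.column_stack([l, m, qs, sieve, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, r, rcond=None)
    beta_l, beta_m, beta_s = coef[:3]        # approximately 0.2, 0.6, 0.25
    phi_hat = X[:, 3:] @ coef[3:]            # absorbs beta_k, tau, and h_t
    print(beta_l.round(2), beta_m.round(2), beta_s.round(2))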
The second stage rests on the estimated parameters of the first stage and identifies the remaining production and demand parameters in a similar way as the static input model discussed before. More precisely, I consider the moments

(23)    E[ υ_it+1(β_k, τ, δ) · (k_it+1, qr_it+1, D)′ ] = 0.

Productivity is known up to parameters, ω_it(β_k, τ, δ) = φ̂_it − β_k k_it − τ qr_it − δD, and given the law of motion on productivity, I can recover the residual υ_it+1(β_k, τ, δ) by nonparametrically regressing ω_it+1(β_k, τ, δ) on ω_it(β_k, τ, δ) and qr_it. As in the material demand approach, I assume, for computational convenience, that the product effects do not interact with the other state variables and, therefore, I do not have to search over all product dummy coefficients together with the coefficients of interest. The only difference with the static input version is that productivity shocks are controlled for by variation in investment choices rather than by variation in material demand choices. However, my setting also echoes the argument made by Levinsohn and Petrin (2003) on the advantage of using a static input to control for productivity by avoiding the extra complexity of relying on a dynamic control. In my setting, I need to specify the role of protection and incorporate an additional state variable in the underlying framework of Olley and Pakes (1996): Appendix C.2 discusses this complication further. However, I show that my framework can accommodate both approaches and recover estimates of productivity and of the productivity effect of quota reduction in this setup, which is the ultimate goal of this paper.

4.2. Identifying Variable Inputs' Coefficients

There has been some recent discussion of the ability to identify the variable input coefficients (labor and material inputs in my case) in the first stage of OP and LP. I briefly present how I can accommodate these concerns in my approach, either by estimating the coefficient on labor (and materials under the OP approach) in a second stage or by following the Wooldridge (2011) one step GMM version of OP and LP. I briefly present the three cases and refer to Wooldridge (2011) and Ackerberg, Caves, and Frazer (2006) for more details. In the empirical part of the analysis, I verify the robustness of the parameters of interest by considering the modifications suggested below.

4.2.1. Static Input Control

The static input control approach described above can, in principle, identify the labor coefficient in a first stage, while identifying the other parameters in a second stage. Before laying out a more general identification strategy, where
essentially all parameters are estimated jointly using moments on the productivity shock, I briefly discuss what data generating process (DGP) could provide identification in the approach under Section 4.1. There is a separate literature on the ability to identify variable inputs in an OP–LP setting. However, in my setting the demand variation across firms is brought to the forefront, highlighting that it is hard to come up with a suitable DGP where labor can move around independently from all other inputs of production and all demand variables of the model captured by φ_t(m, k, q_st, qr, D). Without going into great detail, a DGP that delivers identification is one where firms make material choices, followed by a labor choice, but both are made within the period t − 1 to t, with an optimization error added to labor but none to materials. This would create variation in labor choices that is related to the variance in output conditional on the nonparametric function in (m, k, q_st, qr, D).20 I can relax this strong identification requirement by simply not identifying the labor coefficient in a first stage. Instead, I identify it together with the other parameters by forming a moment on the productivity shock. When relying on materials to proxy for productivity, I collect all inputs in φ_t(·) in a first stage:

r_it = φ_t(k_it, l_it, m_it, q_st, qr_it, D) + ε_it.

From the first stage, I have an expression for productivity given all parameters, and I can rely on the same moments to identify the production function coefficients, where the extra moment on labor, E(υ_it+1 l_it) = 0, provides identification. The only difference with the approach outlined before is that the labor coefficient is identified in a second stage.

4.2.2. Dynamic Input Control

The investment approach can, in principle, identify the variable input coefficients in a first stage. However, as under the static input model, we need a DGP which allows both labor and material choices to move around independently from the control function φ_t(i_it, k_it, qr_it, D). In this setting, variation in l_it and m_it can be obtained through an additional timing assumption on when exactly the productivity shocks hit the firm. Let variable inputs be chosen after the firm observes its productivity shock ω_it−b, where 0 < b < 1. Furthermore, an additional productivity shock occurs between t − b and t. The latter creates variation in variable input choices conditional on the proxy for productivity φ_t(·). The intuition behind this result comes from the idea that labor (material) is chosen at t − b without perfect information about what productivity at t will be, and this incomplete information is what moves labor (material) around independently of the nonparametric function, where investment was set at t − 1 as it is a dynamic input decision.

20 It is instructive to derive optimal material demand given my explicit demand and production structure, and to plug the inverse material equation into the revenue production function. This highlights the identification issue in the first stage. In fact, if I were to directly observe output, α_l l_it would cancel out, illustrating the nonidentification result. The use of revenue data introduces additional variation, but it does not help identification much since all relevant demand variation is controlled for through the productivity proxy φ_t(·).
To relax the extra timing assumption needed to obtain identification, I follow the same approach as under the static input control. I collect all production function inputs in φ_t(·) in addition to investment (the proxy), the product dummies, and quota protection:

r_it = β_s q_st + φ_t(i_it, k_it, l_it, m_it, qr_it, D) + ε_it.

From the first stage, I have an expression for productivity given parameters, and I can proceed as before and obtain estimates of the labor and material coefficients from E(υ_it+1 l_it) = 0 and E(υ_it+1 m_it) = 0, respectively. The second stage provides estimates of the variable input coefficients by noting that none of the lagged input choices should be correlated with the innovation in the productivity process, while lagged inputs are correlated with current inputs through serially correlated input prices, therefore qualifying them as instruments. Both the static and dynamic input approaches can therefore be adjusted to accommodate the recent debate on identification concerns in proxy estimators, as summarized by Wooldridge (2011). However, the identification of a perfectly variable input, such as materials in my setting, is still not guaranteed, as discussed by Bond and Soderbom (2005). I deal with this concern by considering a value added production function and imposing a fixed proportion technology on the production function, whereby I eliminate the need to estimate the material coefficient. I present this robustness check in Section 6.

4.2.3. Alternative Approach

Wooldridge (2011) proposed an alternative implementation that deals with the identification of the production function coefficients and is robust to the criticism of Ackerberg, Caves, and Frazer (2006). The approach relies on joint estimation of a system of two equations using GMM, specifying different instruments for each equation. I consider the Wooldridge approach while relying on materials m_it to proxy for productivity, which the literature refers to as the Wooldridge–LP approach, and I briefly discuss some important features that apply in my setting. Wooldridge considers moments on the joint error term E((ε_it + υ_it)|I_it) = 0, where I_it is the information set of firm i at t. Applying the Wooldridge approach to my setting implies having moment conditions on both the idiosyncratic production and demand shocks, ε_it, and the productivity shocks, υ_it. However, because of the joint estimation of both stages, this procedure requires estimation of all polynomial coefficients, β_ht and β_gt, that approximate h_t(·) and g_t(·),
together with all production and demand coefficients, including the large number of product dummies. The sample analogue of these moments generates estimates for (β_l, β_m, β_k, β_s, δ, τ) in addition to the polynomial coefficients on the functions h_t(·) and g(·). The advantage of this approach is that bootstrapping is not required to obtain standard errors on the production function coefficients, and it produces more efficient estimators by using cross-equation correlations. However, this comes at the cost of searching over a larger parameter space, since I have to search jointly over the production function coefficients, the demand coefficients, and all polynomial coefficients used to approximate the functions h_t(·) and g(·). The alternative approach I rely on requires searching over fewer parameters. In addition, I can control for the importance of product effects outside the GMM procedure using my approach, which is important given the high dimension of the product dummy parameters. Furthermore, since all demand variables enter the proxy for productivity, h_t(·), and the quota protection variable enters the productivity evolution g(·), I further increase the dimension of the parameter space. These concerns are not present in the standard production function case where one either assumes output quantities are observed or directly observes them in the data.21

4.3. Productivity Estimates

I compute productivity ω̂_it using the estimated parameters by plugging them into

(24)    ω̂_it = (η̂_s/(η̂_s + 1)) (r_it − β̂_l l_it − β̂_m m_it − β̂_k k_it − β̂_s q_st − τ̂ qr_it),

where I rescale by the relevant markup, as indicated by the difference between ω_it and ω*_it throughout the text. Note that I do not subtract the product and group effects, since they become unimportant when I analyze the impact of protection on productivity changes, because time invariant factors are eliminated. For the cross-sectional analysis, I can consider the productivity measure both with and without the product and product–group effects, and evaluate the role of the product and group dummies.22 The same is true for the role of the number of products when considering multiproduct producers. I could directly rely on the second stage of my algorithm to compute productivity. However, to be able to compare my results across various proxy estimators, I could only rely on those observations with positive investment, and I would lose at least 1 year of data since differences in capital stock are used to construct the investment data (which is standard).
21 However, the Wooldridge–LP estimator can be easily implemented using publicly available STATA code (ivreg2). I estimate this on my data as a robustness check, while ignoring the product dummies.
22 For multisegment producers, the segment demand variable is expanded to Σ_s β̂_s s_is q_st and the markup term is a share weighted average across segments.
Moreover, omitting plants with zero investment would imply omitting the relatively lower productivity firms, which are important to include to verify the relationship between reduced quota protection and productivity; that is, those might be the firms with an initially high level of quota protection for which we want to verify the productivity impact of reducing protection. Olley and Pakes (1996) followed the same procedure in their study of productivity dynamics in the telecommunications equipment industry. The only difference between the productivity estimates that come directly out of the estimation procedure and those out of equation (24) is the role of measurement error and idiosyncratic shocks (u_it). The variance of my productivity estimates then contains the variance of the i.i.d. production and demand shocks, whereas the productivity estimates obtained inside the algorithm are by construction purged of these shocks in the first stage. The averages of both productivity measures are in fact identical. The only concern is therefore that I use a less precise measure of productivity. If my parameters are correctly estimated, then ω̂_it = ω_it + u_it and, therefore, u_it should not be correlated with qr_it. I rely on (24) to obtain estimates for productivity and check whether my results are robust to using this alternative measure. My approach generates productivity estimates purged of price effects while relying on the correct returns to scale parameter. I exploit the time series variation in production and demand to estimate the demand parameters; by giving up the ability to estimate a change in the slope of the demand curve, I can estimate the slope for various product segments. However, this restriction does buy me more flexibility on the production function side by allowing for dynamic inputs and additional state variables of the underlying firm's problem. It is useful to compare my productivity estimates directly to the standard sales per input measures of productivity. Let me collect all inputs into x_it and the corresponding coefficients into β. In my notation, those standard estimates for productivity are obtained as
(25)    ω̂^st_it = r_it − x_it β̂.
Note that price variation that is correlated with input variation is not part of this productivity measure: that correlation leads to biased coefficients of the production function and therefore to incorrect productivity estimates. Relying on my methodology delivers estimates for productivity corrected for price variation, since they are based on the correct input coefficients by accounting for both the simultaneity and the price bias; they are given by
(26)    ω̂_it = q_it − x_it α̂.
It is helpful to consider the difference between the standard sales per input measure and my corrected estimate,
(27)    ω̂^st_it − ω̂_it = (p_it − p_t) + x_it (α̂/|η̂_s|),
where I use the notion that β̂ = α̂ (η̂_s + 1)/η̂_s. This expression shows that my approach purges the standard measure of productivity of price variation (uncorrelated with input variation) through the CES structure (through the terms β_s q_st and ξ_it), and corrects for the appropriate returns to scale in production by weighting the input variation by the correct input coefficients.

4.4. Some Identification Issues

The identification strategy for the structural parameters is very similar to that in Klette and Griliches (1996). However, there are two important differences. First of all, I control for unobserved productivity shocks by relying on a proxy for productivity. Second, I incorporate the product mix of a firm so that my identification strategy allows for a richer demand structure. More specifically, as I project firm-specific revenue on segment-specific output (weighted by the share of products in a given segment), product-group fixed effects, and firm-level protection rates, I control for unobserved prices and identify the parameters of both the production function and the demand system. A crucial assumption throughout my identification strategy is that the product mix does not react to changes in protection and is time invariant. This condition is the result of a data constraint, that is, I only observe the product mix information for a cross section. However, given this limitation, I fully exploit this information and use it to put more structure on the demand system. This assumption seems to contradict recent evidence by Bernard, Redding, and Schott (2010) on product switching in U.S. manufacturing plants. However, my panel is 9 years, and the scope for adding and dropping products is therefore smaller. In further support of this assumption, I do observe the product mix over time for a set of firms in the Knitwear subsegment (see Table A.I). I see very little adjustment in the number of products over time and there is almost no adjustment in the number of segments in which a firm is active. In fact, year to year, only 5 percent of the firms adjust their product mix and about 15 percent adjust over a 5 year horizon.23 Whenever the fixed product mix assumption does not hold, it can potentially impact the ability to identify the production and demand parameters. Conditional on the input proportionality assumption, it only introduces deviations around the observed number of products over time in the error term (np_it − np_i). However, my framework incorporates the firm-level protection rate qr_it, and this controls for changes in the product mix that are correlated with changes in protection. Therefore, I can identify the coefficients as long as changes in the product mix are picked up by changes in trade protection. When relying on the dynamic control to proxy for productivity, a changing product mix additionally affects the exact conditions for invertibility in the Olley and Pakes (1996) framework

23 Relying on this small sample, I could not find a strong relationship between labor productivity and the change in product mix.
and requires modeling the product mix choice as well. I refer to Das, Roberts, and Tybout (2007), where a dynamic model with two controls is introduced with an application to international trade. If the product line changes over time, my estimates of productivity pick up changes in the product mix. Therefore, when projecting the productivity estimates on the change in protection, those coefficients capture changes in the product mix due to a change in protection. As a consequence, the results presented in this paper are consistent with the work of Bernard, Redding, and Schott (2011), who showed that firm-level productivity can increase after trade liberalization because firms adjust their product mix accordingly. Given the data constraint, I cannot further separate the pure productivity effect from this product reallocation and selection dimension. My approach still provides consistent estimates of the demand and production structure, and could potentially decompose the productivity effect into an intensive margin (change in productivity, holding the number of products fixed) and an extensive margin (change in the number of products, holding productivity fixed). However, the focus of my paper is on correctly estimating the impact of trade liberalization on firm-level productivity while controlling for unobserved prices; therefore, my results can be interpreted through the lens of product mix adjustment. Finally, it is worth mentioning that if my assumption on ξ̃_it fails and it captures persistent demand shocks, I can accommodate this by simply collapsing the unobserved demand shock ξ_it with unobserved productivity ω_it into a new state variable ω̃_it. The same control function h_t(·) can be used as long as the unobserved demand shock ξ_it follows the exact same Markov process as productivity.24 This would imply that ξ_it no longer enters any of the expressions, since it would be part of φ_t(·). However, it would change the interpretation of the productivity estimates, since they would contain those demand shocks (ξ̃_it). Levinsohn and Melitz (2006) explicitly assumed that a control function in materials and capital is sufficient to control for both unobserved productivity and all demand shocks to identify the coefficients of the production function and the elasticity of substitution. In contrast, I allow for unobserved demand shocks to be correlated with segment-level output and the various inputs. Note that in my estimation algorithm, described above, I rely on three different (observed) demand shifters (at the product, group, and firm level) and I verify the importance of this control by comparing my estimates of β_s to those where only the proxy for productivity is included.

24 In the case where ξ̃_it captures serially correlated demand shocks but follows a different process over time, the investment proxy approach is directly affected by the introduction of an additional serially correlated unobserved state variable in the model, and this affects both the invertibility conditions and the ability to identify the parameters.
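Before turning to the results, here is a compact sketch of equations (24)–(27): given invented parameter estimates, it computes the corrected productivity measure, the standard sales per input measure, and their gap. All values are illustrative, not estimates from the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1000
    l, m, k, qs, qr = rng.normal(size=(5, n))
    r = 0.2*l + 0.6*m + 0.1*k + 0.25*qs + 0.3*qr + rng.normal(size=n)

    eta_s = -4.0                                    # invented segment elasticity
    bl, bm, bk, bs, tau = 0.2, 0.6, 0.1, 0.25, 0.3  # invented estimates

    # Equation (24): rescale the revenue residual by eta_s/(eta_s + 1).
    omega_hat = (eta_s / (eta_s + 1)) * (r - bl*l - bm*m - bk*k - bs*qs - tau*qr)

    # Equation (25): the standard measure leaves price and demand variation in.
    omega_std = r - bl*l - bm*m - bk*k

    # Equation (27): the gap equals price deviations plus x_it (alpha/|eta_s|),
    # summarized here by its sample mean.
    print((omega_std - omega_hat).mean().round(3))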
5. RESULTS

In this section, I compare the coefficients of my augmented production function to a few benchmark estimators. I demonstrate the importance of controlling for both unobserved demand and productivity shocks so as to recover direct estimates of segment-specific elasticities.25 I recover estimates of firm productivity and relate them to the drastic change in trade protection in the textile market. In addition, I discuss the inability of the currently available approaches to single out the productivity response to decreased trade protection.

5.1. Production Function Coefficients and Demand Parameters26

In this section, I show how the estimated coefficients of a revenue production function are reduced form parameters of a demand and supply structure. As a consequence, the actual production function coefficients and the resulting returns to scale parameter are underestimated. I compare my results with a few baseline specifications in Table VI: a simple ordinary least squares (OLS) estimation [1] and the Klette and Griliches (1996) specification in differences, KG [2]. Furthermore, I compare my results with the Olley and Pakes (1996) and Levinsohn and Petrin (2003) estimation techniques that correct for the simultaneity bias [3]. I first compare the various benchmark coefficients to a highly aggregated version of my empirical model [4], where I only consider one demand parameter to characterize the textile industry. This model is similar to that used by Levinsohn and Melitz (2006), and this specification is useful to illustrate the importance of controlling for unobserved price variation around the aggregate producer price index. Going from specification [1] to [2] illustrates that OLS produces reduced form parameters from a demand and a supply structure. As expected, the omitted price variable biases the estimates on the inputs downward and hence understates the returns to scale elasticity. Specification [2] takes care of unobserved heterogeneity by taking a first difference of the production function, as in the original Klette and Griliches (1996) paper. The coefficient on capital goes to zero, as expected for a fixed input.

25 With the additional assumption that firms are cost minimizing and do not face adjustment costs, one can, in principle, obtain estimates of markups from the production function coefficients; see De Loecker and Warzynski (2010). In this paper I am concerned with obtaining the correct productivity estimates.
26 All the results reported here are robust with respect to estimating the variable input coefficients (labor and materials) either using the GMM approach of Wooldridge (2011) or the second stage approach described in Section 4. I thank Amil Petrin for making the LP–Wooldridge code available. Interestingly, the coefficients on the freely chosen variables are very stable across methods. The returns to scale parameter differs slightly but is not statistically different, due to different capital coefficients. This shows that my correction for unobserved prices is robust to the use of a specific proxy estimator.
TABLE VI
PRODUCTION FUNCTION ESTIMATES^a

                  [1] OLS       [2] KG          [3] Proxy    [4] Augmented      [5] Single Segment
Coefficient on       β         β        α           β          β         α              β

Labor              0.230     0.245    0.334       0.211      0.213     0.308          0.230
                  (0.009)   (0.012)  (0.034)     (0.011)    (0.011)   (0.062)        (0.017)
Materials          0.630     0.596    0.812       0.628      0.627     0.906          0.650
                  (0.007)   (0.013)  (0.052)     (0.009)    (0.008)   (0.175)        (0.013)
Capital            0.088     0.019    0.026       0.093      0.104     0.150          0.090
                  (0.007)   (0.011)  (0.014)     (0.008)    (0.006)   (0.034)        (0.006)
Output                       0.266                           0.309                    0.260
                            (0.046)                         (0.134)                  (0.137)
η                            −3.76                           −3.24                    −3.85
No. of obs         1,291     1,291             985/1,291      985                      735

a Bootstrapped standard errors are given in parentheses. Results under [3] Proxy are obtained using intermediate inputs as a proxy, as suggested by Levinsohn and Petrin (2003), and are compared with the Olley and Pakes (1996) approach. The LP estimator was implemented using their estimator in STATA (levpet) and the Wooldridge (2011) version of LP using ivreg2 in STATA.
In specification [3], the impact of correcting for the simultaneity bias changes my coefficients in the direction predicted by theory, that is, the labor coefficient is estimated somewhat lower and the capital coefficient is estimated higher. The omitted price variable bias is not addressed in the OP and LP frameworks, as they are only interested in a sales per input productivity measure.27 Both biases are addressed in specification [4], and the corrections for the simultaneity and omitted price variable biases go in opposite directions, making it hard to sign the total bias a priori. As expected, the estimate of the capital coefficient does not change much when introducing the demand shifter, since the capital stock at t is predetermined by investments at t − 1; however, it is considerably higher than in the Klette and Griliches (1996) approach. The correct estimate of the scale elasticity (α_l + α_m + α_k) is of the most concern in the latter, and indeed, when correcting for the price variation, the estimated scale elasticity goes from 0.9477 in the OLS specification to 1.1709 in the KG specification. The latter specification does not control for the simultaneity bias, which results in an upward bias on the variable inputs, labor and materials. This is exactly what I find in specification [4], that is, when correcting for unobserved productivity shocks, the implied coefficient on labor drops from 0.334 to 0.308.

27 A downside is that the product-level information (number of products produced, segments, and which products) is time invariant and leaves me with a panel of firms active until the product information was available. Therefore, I check whether my results are sensitive to this by considering a full unbalanced data set where I also control for the selection bias (exit before 2000), as suggested in Olley and Pakes (1996). I can do this as the BELFIRST data set provides me with the entire population of textile producers. The results turn out to be very similar.
The estimated coefficient on the industry output variable is a direct estimate of the Lerner index, and I also report the implied elasticity of demand. In Appendix B, I discuss in detail how the segment-specific shifters q_st are constructed using a firm's product mix. Moving across the various specifications, the estimate of the Lerner index (or the markup) increases as I control for unobserved firm productivity shocks.28 The implied demand elasticities are around −3. These estimates are worth discussing for several reasons. First of all, they provide me with a check on the economic relevance of the demand model I assumed. Second, they can be compared to other methods that estimate markups using firm-level production data.29 It is because of the specific demand system that I obtain direct estimates of the markup on the total output variable. This is in contrast to the approach due to Hall (1988) where, in addition to standard cost minimization, we require that firms face no adjustment costs so as to estimate markups directly from the production function coefficients. The last column of Table VI presents the estimates of the augmented production function for firms active in a single segment, specification [5]. As discussed in Section 2.2, I require an extra assumption on how inputs are allocated across segments and products so as to estimate my model on multi-segment producers. The estimates indicate that my results are not sensitive to this. For the remainder of the paper, I consider all firms in my data unless explicitly mentioned. In Table VII, I demonstrate the importance of controlling for unobserved demand shocks by estimating the full model as described in Section 4. I rely on product and product-group fixed effects and firm-specific protection measures to control for ξ_it in the augmented production function. As expected, I find significantly more elastic demand, confirming that demand is higher for firms active in relatively more protected segments, thereby inducing a positive correlation between the segment demand variables (q_st) and the error term, which contains variation in protection (qr_it). It is useful to refer back to Figure 1, where the difference in the rate of protection across segments and time is highlighted. In addition to the protection measure, my model controls for unobserved differences in demand conditions, such as (average over the sample period) differences in demand for products and product groups, reflecting consumer taste.

28 The industry output variable captures variation over time in total deflated revenue and, as Klette and Griliches (1996) mention, therefore potentially picks up industry productivity growth and changes in factor utilization. If all firms had an upward shift in their production frontier, the industry output variable would pick up this effect, attribute it to a shift in demand, and lead to an overestimation of the scale elasticity. In my preferred empirical model, I consider different demand elasticities at the segment level and can allow for aggregate (across all segments) productivity shocks.
29 In particular, Konings, Van Cayseele, and Warzynski (2001) used the Hall method and found a Lerner index of 0.26 for the Belgian textile industry, which is well within the range of my estimates.
TABLE VII
SEGMENT-SPECIFIC DEMAND PARAMETERS AND RETURNS TO SCALE^a

                      Estimated Coefficient (βs)     Implied Elasticity (ηs)
Demand Controls:      Not Included    Included       Not Included    Included
Industry (I)          0.35            0.26           −2.86           −3.85
Interior (s = 1)      0.24            0.16           −4.17           −6.25
Clothing (s = 2)      0.35            0.23           −2.86           −4.35
Technical (s = 3)     0.31            0.21           −3.23           −4.76
Finishing (s = 4)     0.33            0.22           −3.03           −4.55
Spinning (s = 5)      0.26            0.18           −3.85           −5.56
RTS                   1.30            1.11

^a Demand controls are product, product-group, and protection rate variables (ξj, ξg, qrit). All coefficients are significant at the 1 percent level.
It is interesting to note that the coefficients on the segment output variables are estimated more precisely when I control for unobserved productivity using the control function. Including the demand side controls has little impact on the estimated reduced form coefficients (βl, βm, and βk) of the production function, but they obviously matter for the correct marginal product estimates (αl, αm, αk), which require the estimates of the segment demand elasticities (ηs).30 The results in Table VII demonstrate that controlling for the differences in protection rates across segments and over time is crucial to recover reliable measures for segment-specific elasticities and hence productivity. The fact that the coefficients become much smaller implies higher (in magnitude) elasticities of demand after controlling for unobserved demand shocks. This is a standard finding in the empirical demand literature. For example, for the Interior segment, the estimated demand elasticity goes from −4.17 to −6.25 after I control for unobserved demand shocks, implying a positive correlation between demand for products in a given segment and the rate of protection as measured by qrit. Given the functional form for demand and supply, these lower markup estimates have a direct impact on my estimates of returns to scale. I find significantly increasing returns to scale in production, with a coefficient of 1.11. This is in itself an important result and supports the results of Klette and Griliches (1996), who discussed the downward bias of the production function coefficients due to unobserved prices.

30 I do not list the estimated coefficients of the production function again, as they hardly change compared to the results in the last column of Table VI. The coefficients on labor, materials, and capital are obtained by multiplying the reduced form estimates by the segment-specific inverse markup estimate.
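A minimal sketch of the adjustment in footnote 30, using the relation βh = ((η + 1)/η)αh spelled out in Section 6.2; the input values below are hypothetical placeholders rather than the paper's estimates:

```python
# Recover structural input coefficients alpha_h from reduced-form betas:
# beta_h = ((eta + 1)/eta) * alpha_h  =>  alpha_h = beta_h * eta/(eta + 1).
def structural_alpha(beta_h: float, eta: float) -> float:
    assert eta < -1.0, "demand must be elastic for the markup to be positive"
    return beta_h * eta / (eta + 1.0)

eta = -4.55  # e.g., the Finishing segment elasticity from Table VII
for name, beta_h in [("labor", 0.20), ("materials", 0.45), ("capital", 0.10)]:
    print(f"alpha_{name} = {structural_alpha(beta_h, eta):.3f}")
# Each alpha_h exceeds beta_h (here by the factor 4.55/3.55, about 1.28),
# which is the sense in which unobserved prices bias the estimated production
# function coefficients, and hence the scale elasticity, downward.
```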
In sum, my results indicate that both the omitted price variable bias and the simultaneity bias are important to correct for, although the latter is somewhat smaller in my sample. The estimated reduced form parameters (β) do not change much when controlling for the omitted price variable, after correcting for the simultaneity bias. However, the correction has a big impact on the estimated production function parameters (α), which by itself is important if one is interested in obtaining the correct marginal products. Finally, the estimated coefficients of the production function and the demand elasticities, and hence productivity, are not sensitive to the specific proxy estimator approach (OP, LP, and the modifications discussed in Section 4). The various proxy approaches produce very similar reduced form production function coefficients (β) and do not affect the demand parameters. The different approaches do, however, imply different modeling assumptions, as discussed in detail in Section 4, and therefore show that my results are robust to them. In Section 6, I discuss two additional robustness checks.

5.2. Estimating the Effect of Trade Liberalization on Productivity

I rely on my estimates of productivity to estimate the firm-level productivity response to relaxing quota protection in the EU textile market. As discussed in Section 4, my estimation procedure provides a direct nonparametric estimate of the effect of quotas on productivity. Once I have estimated all the model's parameters, I can compute productivity and therefore obtain an estimate of g(·), which describes how past productivity and quotas affect current productivity. Before presenting my results, I briefly discuss the main approach in the current literature and contrast it with my setting.

5.2.1. Standard Approach

Attempts in the literature to identify the productivity effects of trade liberalization can be classified into two main approaches: a two-stage and a single equation approach. Based on my empirical framework, I briefly discuss how neither approach is able to single out productivity responses to the reduced quota protection or, in general, to any change in the operating environment of firms that impacts demand for the goods marketed.

The Two-Stage Approach. In this approach, the strategy follows two steps, where the first is estimating the production function while ignoring the unobserved price problem. The second stage then simply projects the recovered "productivity" estimate ωstit against the shift in trade regime. It is clear that this approach does not allow the productivity response to be isolated from the demand response, since the productivity estimates capture price and demand variation. This approach typically considers a variant of the regression

(28)
ωstit = c + λqrit + eit
where the interest lies in estimating λ. Equation (28) shows the very strong assumptions one needs to invoke to identify the productivity impact of decreased quota protection: protection cannot impact prices, except through productivity, and cannot be related to returns to scale in production. Using the expression derived in (27), this two-stage approach then amounts to correlating the entire term (ωit + pit − pt + xit(α/|η|)) with quota protection, and leads to incorrect estimates of the quota effect. In addition, this approach does not allow protection to impact the evolution of productivity when estimating the production function itself, which is at odds with the research question.

The Single Equation Approach. This procedure starts out by specifying a parametric function for productivity in the variable of interest, here trade liberalization (qrit). For example, consider a simple linear relationship ωit = λqrit + eit, where distributional assumptions are made on eit. Substituting this expression for productivity into the production function generates the estimating equation

(29)
rit = βl lit + βm mit + βk kit + λqrit + e∗it
where λ is meant to capture the productivity effect of a reduction in protection, and where e∗it now includes eit and unobserved prices. The coefficient λ is identified only if the protection rate variable, qrit, is uncorrelated with (unobserved) prices.

Both the single equation and the two-stage approach are expected to lead to an incorrect estimate of the impact of trade liberalization on productivity. In fact, the single and two-stage approaches should provide identical results on the estimate for λ. However, the two-stage approach has the potential to produce productivity estimates that are obtained while correcting for the simultaneity bias using the standard OP–LP approach.

I compare my estimates to the two main approaches in the literature. Putting all the different approaches side by side allows me to shed light on the importance of controlling for unobserved productivity shocks, as well as of controlling for unobserved prices, in obtaining the correct estimated protection effect on productivity. My approach relaxes the strong assumption needed for both approaches (that changes in quota protection do not affect prices and firm-level demand) while controlling for unobserved productivity shocks. The ability to separately identify the demand from the productivity effect comes from the notion that prices reflect differences in residual demand and hence differences in quota protection, whereas the supply effect comes through productivity, which takes time to materialize. This identification strategy implies that we expect to see separate roles for the contemporaneous and the lagged quota variables in a simple OLS regression of the production function, while ignoring the simultaneity and omitted price variable bias. When running such a regression on my data, I obtain a strong effect on the contemporaneous quota variable, while the coefficient on
the lagged quota variable is smaller in magnitude and not significant. The next section produces the structural estimates while controlling explicitly for both unobserved productivity and unobserved prices, as outlined in Section 4.

5.2.2. Controlling for Price and Demand Effects

To verify the extent to which trade liberalization, measured by a decrease in quota restrictions, has impacted the efficiency of textile producers, I compare my results to several different specifications. I start by following the standard practice in the literature and consider both a single equation approach and a two-stage approach, as discussed above, which both ignore unobserved prices and productivity. In particular, the single-stage approach estimate of λ is obtained by running the regression (29) using OLS (Approach I).31 I then proceed by considering a two-stage approach (28) where I rely on the standard OP–LP productivity estimates (Approach II).32 The final benchmark model is a two-stage approach which relies on productivity estimates from an adjusted OP–LP framework where quota protection enters the model through the productivity process, ωit = g(ωit−1, qrit−1) + υit, and hence in the control function (Approach III). This benchmark model, Approach III, generates nonparametric estimates of the productivity effect of quotas through the estimate of g(·). I report the average to compare with the other models and, in addition, list the support of the estimated effect. It is important to note that the first two benchmark results are not directly comparable to my results, as they report the level effect of quotas on productivity, whereas I am concerned with obtaining the correct estimates of within-firm productivity changes. Benchmark III, however, is directly comparable.

Table VIII presents the results of the various estimates, where it is important to note that the sign of λ is expected to be negative if trade liberalization affects productivity at all, since qrit = 1 under full protection and is 0 if not a single product–supplier combination faces a quota. The first row presents the results from a simple OLS regression of deflated revenue on the protection variable, controlling for input use. The coefficient is highly significant and negative, suggesting about 16 percent higher productivity from eliminating all quotas on all products. As discussed before, this regression cannot separate the impact of protection on demand (and hence prices) from its impact on productivity. In addition, it treats quotas as directly impacting the level of productivity, and would suggest a very high effect when ignoring price and demand effects. Approach II.1 shows the very small impact on the estimate of λ of relying on productivity estimates obtained from an OP–LP approach, which corrects for unobserved productivity shocks. Both Approaches I and II are not directly comparable to my estimates, as suggested before.

31 The two-stage approach without any correction for the unobserved productivity shocks produces the same estimate for λ, but differs in precision.

32 These estimates rely on an exogenous Markov process for productivity and control for unobserved productivity shocks using either material demand or investment, in addition to capital.
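For concreteness, here is a schematic version of the two-stage benchmark (Approach II) on synthetic data; the OP–LP first stage that would deliver the productivity series is not reproduced, and all generated numbers are placeholders:

```python
import numpy as np

# Two-stage benchmark: project a recovered "productivity" series on the
# protection rate qr_it by OLS. The arrays below are synthetic stand-ins.
rng = np.random.default_rng(0)
n = 1000
qr = rng.uniform(0.0, 1.0, n)                 # protection rate, 1 = fully protected
omega_hat = 0.5 - 0.15 * qr + rng.normal(0.0, 0.3, n)  # stand-in productivity

X = np.column_stack([np.ones(n), qr])         # regression omega_hat = c + lambda*qr
coef, *_ = np.linalg.lstsq(X, omega_hat, rcond=None)
print(f"lambda_hat = {coef[1]:.3f}")
# Because omega_hat still contains prices and demand variation, lambda_hat
# conflates the demand and productivity responses to protection.
```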
TABLE VIII
IMPACT OF TRADE LIBERALIZATION ON PRODUCTIVITY^a

Approach    Description              Estimate              Support
I           OLS levels               −0.161* (0.021)       n.a.
II.1        Standard proxy-levels    −0.153* (0.021)       n.a.
II.2        Standard proxy-LD        −0.135* (0.030)       n.a.
III         Adjusted proxy           −0.086 (0.006)        [−0.129, −0.047]
IV          Corrected                −0.021 (sd: 0.067)    [−0.27, 0.100]
V           Corrected LD             −0.046** (0.027)      n.a.

^a I report standard errors in parentheses for the regressions, while I report the standard deviation (sd) of the estimated nonparametric productivity effect in my empirical model (given by g(·)). * and ** denote significance at the 5 percent level or lower and at the 10 percent level, respectively. LD refers to a 3 year differencing of a two-stage approach, where Approach II.2 relies on standard productivity measures, as opposed to Approach V, which relies on my corrected estimates of productivity.
I therefore consider an adjusted proxy estimator where I nonparametrically estimate the lagged quota effect relying on my productivity process, while still ignoring unobserved prices. The productivity effects now reflect productivity changes and how these within-firm productivity changes are related to trade protection. The average effect of quotas on productivity is reduced to −0.086. The support of the estimated effect indicates that eliminating quotas impacts productivity positively throughout the distribution.

Moving to the results from my approach, Approach IV, where unobserved productivity and prices are jointly controlled for, I find a substantially lower average quota effect of −0.021. This result indicates that eliminating quotas on all products would, on average, only raise productivity by about 2 percent, as opposed to almost 9 percent when ignoring the price effect. Just as in Approach III, I obtain a nonparametric estimate of the productivity effect, and the support suggests a wide range of outcomes, with a large mass around zero. Interestingly, my estimates of g(·) also indicate that the productivity effect of quotas is positive for relatively low productivity firms, albeit still much lower in magnitude. The latter comes from the estimated coefficients of g(·), βg, that capture the interaction between quota protection and productivity.

A final set of results is presented under Approaches II.2 and V, where I compare a simple long difference version of (28), relying on standard productivity estimates and on my corrected estimates, respectively. My results imply that abolishing all quotas over a 3 year period would merely increase firm-level productivity by 4.5 percent, as opposed to 13.5 percent when relying on standard measures of productivity.
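To fix ideas, here is a sketch of how the productivity process behind Approaches III and IV can be estimated, with g(·) approximated by a low-order polynomial; the basis choice and the simulated data are assumptions of this illustration, not the paper's exact implementation:

```python
import numpy as np

# Endogenous productivity process: omega_it = g(omega_it-1, qr_it-1) + v_it,
# with g(.) approximated here by a quadratic polynomial with an interaction.
rng = np.random.default_rng(1)
n = 5000
omega_lag = rng.normal(0.0, 1.0, n)
qr_lag = rng.uniform(0.0, 1.0, n)
omega = (0.7 * omega_lag - 0.08 * qr_lag
         + 0.03 * omega_lag * qr_lag + rng.normal(0.0, 0.2, n))  # synthetic DGP

# Polynomial basis in (omega_lag, qr_lag), including the interaction term.
B = np.column_stack([np.ones(n), omega_lag, qr_lag,
                     omega_lag**2, qr_lag**2, omega_lag * qr_lag])
beta_g, *_ = np.linalg.lstsq(B, omega, rcond=None)

# The quota effect is the partial derivative of g with respect to qr_lag; it
# varies with omega_lag, so report its average and spread over the sample.
effect = beta_g[2] + 2.0 * beta_g[4] * qr_lag + beta_g[5] * omega_lag
print(f"mean effect = {effect.mean():.3f}, sd = {effect.std():.3f}")
```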
5.2.3. Additional Results

I briefly discuss a few other results that emerge from my analysis: the time pattern of the productivity effects, the potentially separate role of enlarging quotas as opposed to abolishing them, and the implications for the effect of protection on prices charged.

As discussed in Section 3.1, a significant part of the reduction in protection took place after the Europe Agreements were signed with various Central and Eastern European countries (CEECs). The sharp fall in the number of quotas between 1994 and 1997 is consistent with the preparation process for EU enlargement toward Central and Eastern Europe (CEE). By the year 1998, almost all trade barriers between the EU and the candidate countries of CEE were abolished, which implied that industrial products from the associated countries (mostly CEE) had virtually free access to the EU, with restrictions in only a few sectors, such as agriculture and textiles. Using my estimates, I find that, if anything, the productivity effects are not constant over time and are somewhat higher during the first half of the sample, although still relatively small in magnitude compared to those from standard approaches.

Another channel through which EU trade policy relaxed quota restrictions was by increasing the level of existing quotas for a set of supplying countries. To verify the impact of this on productivity, I can only consider product categories where I observe a positive level of protection. Furthermore, I only consider quotas where the unit of measurement of a quota level is constant within a given industry code (23 categories). This dimension of opening up to trade mainly applied to imports from outside CEE. Table A.IV lists the supplying countries for which relaxing import restrictions mainly occurred through higher quota levels. I report the increase in the average level per quota during my sample period 1994–2002 and list the countries that gained access to the EU textile market. For instance, the average quota level on textile products from Pakistan more than doubled over a 9 year period (increases of 129 and 144 percent, depending on product category). This process is not captured by the quota restriction variable, which picks up whenever a quota on a given product from a supplying country is abolished. To verify the impact of increased quota levels, I simply correlated my estimate of productivity with a variable that measures the total level of quotas (in logs), while controlling for qrit. The coefficient on the level variable is estimated with a positive sign, indicating that an increase in the level of quotas had a small but positive effect on productivity. The point estimate suggests an elasticity of 1.9 and implies, on average, a moderate effect of increasing quota levels on firm-level productivity.

Finally, in standard approaches, the coefficient on protection measures the effect of quota protection on revenues, including both the demand and the productivity effect. Using my structural model of demand and production, I can
back out estimates of firm-level prices, and I find that these are highly positively correlated with protection, suggesting that firms facing stronger competition on average charge lower prices.33

5.3. Collecting Results

In sum, across all specifications, using standard measures of productivity leads to overestimating the productivity response to trade liberalization. These results indicate that correcting for unobserved price and demand effects leads to substantially lower productivity gains, between one-fourth and one-third of the standard estimates, and suggests that trade liberalization substantially impacts firm-level demand and hence prices. These observations then imply a very different interpretation of how opening up to trade impacts individual firms.34 My results can be interpreted as a decomposition of measured productivity gains from relaxing trade protection into productivity effects and price–scale effects, and indicate that only a small share (ranging up to 20 percent) is associated with true productivity gains. The fact that, after my correction, reduced quota protection is still associated with productivity gains, although small, is consistent with the work of Bernard, Redding, and Schott (2011), since my productivity estimates potentially capture product mix responses, as discussed in Section 4.4.

In the next section, I check whether my results are sensitive to relaxing some of the assumptions related to the production technology and the demand system.

6. ROBUSTNESS ANALYSIS

In this section, I verify whether my results are robust to the specific production function and demand system. Specifically, I test whether the demand elasticities are sensitive to a specific production function. On the demand side, I show that my approach is consistent with assuming a restricted version of a nested CES model of demand, and I test the importance of this restriction.

6.1. Relaxing Production Technology35

I verify whether my estimated demand parameters are sensitive to the specific choice of a production function. A more flexible approach allows for a

33 This result is obtained by first computing prices using pit = ωstit − ωit − xit(α/|η|) + pt and then running a simple OLS regression of pit on qrit while controlling for unobserved firm-specific shocks.

34 However, average productivity of an industry can still increase due to the elimination of inefficient producers from the productivity distribution. In a different context, Syverson (2004) demonstrated the importance of demand shocks in increasing average productivity of producers through exit of inefficient producers.

35 All the results on the relationship between productivity and trade liberalization are robust to the use of productivity estimates under the various extensions and alternatives discussed in this section.
general production function where productivity shocks are additive in the log specification, Qit = F(Lit, Mit, Kit) exp(ωit + uit), and thus allows for flexible substitution patterns among inputs, such as the translog production function. However, to recover the production function parameters, I have to specify moment conditions on Lit, Mit, and Kit. I consider the results in Section 5 (baseline) and compare them with estimates obtained from a more flexible approach, while relying on investment to proxy for productivity.36 The first stage is then reduced to the regression

(30)
rit = ∑s βs (sis qst) + φt (lit, mit, kit, iit, qrit, D) + εit,
where I am only interested in the coefficients on the segment output variables. Table IX shows that the estimated demand parameters are well within the range of the less flexible model used in the main text. The estimates of the demand parameter at the industry and segment level are robust with respect to relaxing the assumptions on the production technology. For example, for the Interior segment, the estimated markup lies between 0.14 and 0.17. I also check whether the estimated demand parameters are robust to the use of a value added production function, whereby I allow for a fixed proportion of materials per unit of output in the production technology.

TABLE IX
ESTIMATED DEMAND PARAMETERS UNDER ALTERNATIVE SPECIFICATIONS^a

Specification    Industry      Interior    Clothing    Technical    Finishing    Spinning
Baseline         0.26          0.16        0.23        0.21         0.22         0.18
Flexible         0.22          0.14        0.23        0.19         0.22         0.21
Value added      0.29          0.20        0.21        0.27         0.25         0.21
Unrestricted     σ = 0.008     0.17        0.25        0.21         0.23         0.19
                 (0.113)

^a The Baseline specification is compared to (i) Flexible, which allows for a general production function that is log additive in productivity, (ii) a Value added production function specification, and (iii) an unrestricted nested CES demand system. For the latter, I report the estimated cross-segment correlation coefficient σ and its standard error. I suppress the bootstrapped standard errors on the demand parameters, since they are all significant at the 1 percent level.
36 In addition, the flexible approach no longer takes a stand on whether labor and materials are static or dynamic inputs, and coincides with the approach discussed in Section 4.2. More importantly, it does not rely on a specific timing of the productivity shock and inputs to obtain consistent estimates of the demand elasticities. I do not change the properties of the segment output variables and still rely on their not being serially correlated conditional on all other demand variables in the model.
The last row in Table IX shows that the estimated demand parameters hardly change. In addition, the results presented in Section 5 are robust with respect to relying on productivity estimates from a value added production function. This robustness check deals directly with the observation of Bond and Soderbom (2005), who argued that it may be hard to identify coefficients on a perfectly variable input, such as materials in my case, in a Cobb–Douglas production function.

6.2. Alternative Demand Systems

I briefly show how my approach is equivalent to a discrete choice model often used in the empirical demand estimation literature. The parameters have slightly different interpretations, but all of the main results on the productivity response to liberalizing trade are unchanged. More precisely, I show that the empirical model I rely on in the main text can be generated from a restricted nested CES or logit model of consumer choice. It is important to stress that the inability to measure prices and quantities directly limits the demand system I can rely on. An important feature of the demand system I suggest is the ability to relate log price to log quantity and observed demand shocks that vary over time and segments, which allows identification of the demand parameter by segment.

The discrete choice model consistent with my main empirical model has an indirect utility function Vljt for consumer l buying good j at time t, consisting of a mean utility component and an idiosyncratic preference shock ζljt, which is assumed to be drawn from a Type I extreme value distribution as in Berry, Levinsohn, and Pakes (1995). The mean utility component in my framework is simply a function of the logarithmic price pjt, product fixed effects δj, and an unobserved demand shock ξjt.37 This leads to the following specification for indirect utility:

(31)
Vljt = δj Dj + ηpjt + ξjt + ζljt
The next step is to aggregate over individual choices to buy good j, and I recover a well known expression for the market share of good j relative to the outside good of the model, (32)
ln(msjt ) − ln(msot ) = δj Dj + ηpjt + ξjt
Starting from the log market share and simply rearranging terms using ln(msjt) = ln Qjt − ln Qst, I obtain an expression for log price and plug it into the revenue expression to retrieve the same estimating equation,

(33)
rit = β0 + βl lit + βm mit + βk kit + βs qst + δi Di + ω∗it + ξ∗it + uit,
where I consider single product producers for simplicity (i.e., i = j),38 and where now β0 = (1/|η|) ln(mso), βh = ((η + 1)/η)αh for h = {l, m, k}, and, importantly, βs = 1/|η|. It is important to note that total output enters in exactly the same way, leading to identification of η in the production function framework.

The difference between the CES model and the logit is clearly the interpretation of the estimated demand parameter η. In the CES case, the estimated coefficient βs is a direct estimate of the Lerner index (1/|η|) or of the elasticity of demand (η). When relying on the logit demand structure, the estimated parameter η is used to compute own and cross-price elasticities. However, in this setup with log prices in the indirect utility function, they are identical (Anderson, de Palma, and Thisse (1992)).

It is well known that the logit model makes very strong assumptions about the substitution patterns across the various goods in the market, just like the CES demand structure. However, I allow for different substitution patterns across the various segments of the market. In this way, I am estimating a restricted version of a nested CES model, whereby the correlation of unobserved demand shocks across segments is assumed to be zero, after controlling for subsegment and product fixed effects. Working through the same steps as above, I recover a similar estimating equation for a nested CES demand system where σ measures the correlation across segments. As Berry (1994) mentioned, we can interpret this model as a random coefficients model involving group-specific dummy variables. The advantage of the nested structure over the standard CES (or logit) is that I can allow for correlation of utility between groups of similar goods. This advantage comes at the cost of having to order the goods in a market into well defined nests. I rely on the classification presented in Table A.II to create segments and subsegments of the textile industry. It is important to note that relaxing the substitution patterns among the goods further reinforces the argued importance of controlling for unobserved prices and demand shocks. The ultimate constraint on modeling substitution patterns is the inability to observe quantities and prices for individual goods, which is a standard constraint in this literature. Proceeding as before by rearranging terms to get an expression for pjt, I obtain the same expression as before (i.e., (33)), with only an extra term that captures the share of segment s in total demand, qs|It (where subscript I denotes the textile industry), which provides information on σ.39 The coefficient on this extra term is a direct estimate of the correlation across the various segments.

37 Introduction of the logarithmic price into the indirect utility function is the key difference between the (nested) CES and the (nested) logit model. Anderson, de Palma, and Thisse (1992) provided a good overview of the similarities of the two models. To see this, it is useful to rewrite the CES demand system to make it directly comparable to the logit (i.e., ln(Qjt/Qst) = η(pjt − pt) + ξjt).

38 I show this for a single product producer. For multiproduct firms, I need an additional step to aggregate to the firm level.

39 The reduced form parameters of the production function are now given by βh = ((ηs + 1 − σ)/ηs)αh for h = {l, m, k}. I rely on the same demand controls to estimate both the segment-specific demand parameters and the additional parameter σ.
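A small numerical illustration of this parameter mapping (all values hypothetical) shows how the output coefficient identifies η directly:

```python
import math

# Mapping in equation (33): beta_0 = ln(ms_o)/|eta|,
# beta_h = ((eta + 1)/eta)*alpha_h, and beta_s = 1/|eta|.
eta = -4.0                  # hypothetical demand elasticity
ms_o = 0.30                 # hypothetical outside-good share
alphas = {"l": 0.25, "m": 0.50, "k": 0.15}

beta_0 = math.log(ms_o) / abs(eta)
beta_s = 1.0 / abs(eta)
betas = {h: (eta + 1.0) / eta * a for h, a in alphas.items()}

print(f"beta_0 = {beta_0:.3f}, beta_s = {beta_s:.3f}")
print({h: round(b, 3) for h, b in betas.items()})
# Recovering eta from the output coefficient: eta = -1/beta_s = -1/0.25 = -4.
```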
The last row in Table IX reports the coefficients of an unrestricted nested CES demand system, and I recover an insignificant estimate of 0.008 for the cross-segment correlation parameter σ. Therefore, in my particular application, I cannot reject the restricted version where the cross-segment correlation is set to zero. Given the different product categories listed in the various segments, this result is not surprising (see Table A.I). I also considered the demand system one level lower in the structure of products, whereby I estimate a demand parameter for each subsegment (e.g., Mobiltech). However, this implies that more than 50 extra demand parameters need to be estimated in addition to the control function and the product dummies, and leads to low testing power. I only find a few subsegments for which the estimated markup differs.

In sum, my demand specification is consistent with a random utility discrete choice demand system. The results further support the specification relied on throughout the paper, where I allow for different parameters per segment. However, it is clear that my methodology does not rule out richer substitution patterns across products.

7. REMARKS AND CONCLUSION

In this section, I briefly discuss a few important implications that emerge from my findings. Detailed discussions of these implications are separate research questions in their own right and are beyond the scope of this paper. This section is meant to highlight a few important avenues for research that directly follow from this paper. I conclude with an overview of my main results and findings.

7.1. Remarks

This paper shows the importance of correctly estimating the effect of trade policy on productivity. The suggested correction is also expected to be important for any reallocation analysis where researchers are interested in the covariance between productivity and the weight of an individual producer in the industry or economy at large. Regardless of the specific method used to aggregate productivity across individual producers, it is clear that relying on productivity measures that contain prices and demand shocks leads to an incorrect assessment of the underlying sources of aggregate productivity growth. I find a strong covariance between market share and my implied estimates of prices, controlling for the correct returns to scale, which suggests that revenue based productivity measures are not sufficient to measure the extent to which resources get reallocated across producers based on productivity differences. This helps me further evaluate how trade liberalization impacts the productivity of individual producers and that of industries as a whole. A full analysis of the reallocation
effects lies beyond the scope of this paper, and I refer to Petrin and Levinsohn (2010) for more details on the aggregation of producer-level productivity estimates.

Second, my estimates can directly shed light on whether traditional productivity measures are good measures of productivity. Using my estimates, I find a positive correlation between the standard measures of productivity and prices, whereas prices are negatively correlated with my corrected productivity estimates, as predicted by a large class of economic models in international trade and industrial organization. It is interesting to compare the correlation coefficients using my results with those of Foster, Haltiwanger, and Syverson (2008), who observed prices at the plant level for a subset of U.S. producers in the U.S. census data set. I obtain similar correlation coefficients without observing prices and quantities produced at the firm level. More precisely, I obtain a positive correlation of 0.12 between prices and standard revenue based productivity estimates, while prices and my corrected productivity estimates are negatively correlated (−0.35). Foster, Haltiwanger, and Syverson (2008) found correlation coefficients of 0.16 and −0.54, respectively. My results therefore show that correcting for unobserved prices and demand effects is absolutely critical in obtaining measures for productivity.

7.2. Conclusion

I analyze productivity responses to a reduction in trade protection in the Belgian textile industry using a matched product–firm-level data set. By introducing demand shifters, I am able to decompose the traditional measured productivity gains into real productivity gains and price and demand effects. The empirical method sheds light on other parameters of interest, such as the price elasticity of demand, and my results highlight the importance of both productivity and price responses to a change in a trade regime.

Combining a production function and a demand system into one framework provides other interesting results and insights with respect to product mix and market power. I find that including the product mix of a firm is an important dimension to consider when analyzing productivity dynamics. Even if this has no impact on the aggregation of production across products, it allows estimation of different elasticities across product segments. In the context of the estimation of production functions, multiproduct firms have not received a lot of attention. The main reason for this is the lack of detailed product-level production data: input (labor, material, and capital) usage and output by product and firm. I observe the number of products produced per firm and where these products are located in product space (segments of the industry). This allows specifying a richer demand system while investigating the productivity response to trade liberalization, controlling for price and demand shocks.

While I find positive and significant productivity gains from relaxing quota restrictions, the effects are estimated to be substantially lower, up to 20 percent,
compared to using standard productivity estimates. The latter still capture price and demand variation (across product segments and time) that is correlated with the change in trade policy, leading to an overestimation of the productivity gains from opening up to trade. The suggested method and identification strategy are quite general and can be applied whenever it is important to distinguish between prices and physical productivity.

REFERENCES

ACKERBERG, D., K. CAVES, AND G. FRAZER (2006): "Structural Estimation of Production Functions," Mimeo, U.C.L.A. [1428,1430]
ANDERSON, S. A., A. DE PALMA, AND J.-F. THISSE (1992): Discrete Choice Theory of Product Differentiation. Cambridge: MIT Press. [1446,1447]
BARTELSMAN, E. J., AND M. DOMS (2000): "Understanding Productivity: Lessons From Longitudinal Micro Data," Journal of Economic Literature, 38, 569–594. [1408]
BERNARD, A. B., S. REDDING, AND P. K. SCHOTT (2010): "Multi-Product Firms and Product Switching," American Economic Review, 100, 70–97. [1418,1433]
(2011): "Multi-Product Firms and Trade Liberalization," Quarterly Journal of Economics (forthcoming). [1434,1444]
BERRY, S. (1994): "Estimating Discrete-Choice Models of Product Differentiation," Rand Journal of Economics, 25, 242–262. [1447]
BERRY, S., J. LEVINSOHN, AND A. PAKES (1995): "Automobile Prices in Market Equilibrium," Econometrica, 63, 841–890. [1446]
BOND, S., AND M. SODERBOM (2005): "Adjustment Costs and the Identification of Cobb–Douglas Production Functions," Working Paper 05/04, The Institute for Fiscal Studies. [1430,1446]
DAS, S., M. ROBERTS, AND J. TYBOUT (2007): "Market Entry Costs, Producer Heterogeneity and Export Dynamics," Econometrica, 75, 837–873. [1434]
DE LOECKER, J. (2011): "Supplement to 'Product Differentiation, Multiproduct Firms, and Estimating the Impact of Trade Liberalization on Productivity'," Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/7617_extensions.pdf; http://www.econometricsociety.org/ecta/Supmat/7617_data and programs.zip. [1409]
DE LOECKER, J., AND F. WARZYNSKI (2010): "Markups and Firm-Level Export Status," Mimeo, Princeton. [1435]
FEBELTEX (2003): "Annual Report 2002–2003," FEBELTEX, Belgium. [1417]
FOSTER, L., J. HALTIWANGER, AND C. SYVERSON (2008): "Reallocation, Firm Turnover and Efficiency: Selection on Productivity or Profitability?" American Economic Review, 98, 394–425. [1412,1449]
GOLDBERG, P. (1995): "Product Differentiation and Oligopoly in International Markets: The Case of the U.S. Automobile Industry," Econometrica, 63, 891–951. [1422]
GOLDBERG, P., A. KHANDELWAL, N. PAVCNIK, AND P. TOPALOVA (2008): "Multi-Product Firms and Product Turnover in the Developing World: Evidence From India," Working Paper 14127, NBER. [1408]
HALL, R. E. (1988): "The Relation Between Price and Marginal Cost in US Industry," Journal of Political Economy, 96, 921–947. [1437]
JAVORCIK, B. S. (2004): "Does Foreign Direct Investment Increase the Productivity of Domestic Firms? In Search of Spillovers Through Backward Linkages," American Economic Review, 94, 605–627. [1408]
KATAYAMA, H., S. LU, AND J. TYBOUT (2009): "Firm-Level Productivity Studies: Illusions and a Solution," International Journal of Industrial Organization, 27, 403–413. [1408]
KLETTE, T. J., AND Z. GRILICHES (1996): "The Inconsistency of Common Scale Estimators When Output Prices Are Unobserved and Endogenous," Journal of Applied Econometrics, 11, 343–361. [1408,1409,1411,1433,1435-1438]
KONINGS, J., P. VAN CAYSEELE, AND F. WARZYNSKI (2001): "The Dynamics of Industrial Markups in Two Small Open Economies: Does National Competition Policy Matter?" International Journal of Industrial Organization, 19, 841–859. [1437]
KUGLER, M., AND E. VERHOOGEN (2008): "The Quality-Complementarity Hypothesis: Theory and Evidence From Colombia," Working Paper 14418, NBER. [1408]
LEVINSOHN, J., AND M. MELITZ (2006): "Productivity in a Differentiated Products Market Equilibrium," Mimeo. [1408,1411,1434,1435]
LEVINSOHN, J., AND A. PETRIN (2003): "Estimating Production Functions Using Inputs to Control for Unobservables," Review of Economic Studies, 70, 317–342. [1409,1413,1423-1425,1428,1435,1436]
OLLEY, S., AND A. PAKES (1996): "The Dynamics of Productivity in the Telecommunications Equipment Industry," Econometrica, 64, 1263–1298. [1409,1413,1423,1427,1428,1432,1433,1435,1436]
PAKES, A. (1994): "The Estimation of Dynamic Structural Models: Problems and Prospects, Part II," in Advances in Econometrics: Proceedings of the 6th World Congress of the Econometric Society, ed. by J. J. Laffont and C. Sims. Cambridge: Cambridge University Press, 171–259. [1427]
PAVCNIK, N. (2002): "Trade Liberalization, Exit, and Productivity Improvement: Evidence From Chilean Plants," Review of Economic Studies, 69, 245–276. [1408]
PETRIN, A., AND J. LEVINSOHN (2010): "Measuring Aggregate Productivity Growth Using Plant-Level Data," Mimeo, University of Minnesota. [1449]
SYVERSON, C. (2004): "Market Structure and Productivity: A Concrete Example," Journal of Political Economy, 112, 1181–1222. [1444]
TYBOUT, J. (2000): "Manufacturing Firms in Developing Countries," Journal of Economic Literature, 38, 11–44. [1408]
VAN BIESEBROECK, J. (2005): "Exporting Raises Productivity in Sub-Saharan African Manufacturing Firms," Journal of International Economics, 67, 373–391. [1408]
WOOLDRIDGE, J. (2011): "On Estimating Firm-Level Production Functions Using Proxy Variables to Control for Unobservables," Economics Letters (forthcoming). [1428,1430,1435,1436]
Dept. of Economics, Princeton University, Princeton, NJ 08540, U.S.A. and NBER and CEPR;
[email protected]. Manuscript received December, 2007; final revision received October, 2010.
Econometrica, Vol. 79, No. 5 (September, 2011), 1453–1498
AN ANATOMY OF INTERNATIONAL TRADE: EVIDENCE FROM FRENCH FIRMS

BY JONATHAN EATON, SAMUEL KORTUM, AND FRANCIS KRAMARZ1

We examine the sales of French manufacturing firms in 113 destinations, including France itself. Several regularities stand out: (i) the number of French firms selling to a market, relative to French market share, increases systematically with market size; (ii) sales distributions are similar across markets of very different size and extent of French participation; (iii) average sales in France rise systematically with selling to less popular markets and to more markets. We adopt a model of firm heterogeneity and export participation which we estimate to match moments of the French data using the method of simulated moments. The results imply that over half the variation across firms in market entry can be attributed to a single dimension of underlying firm heterogeneity: efficiency. Conditional on entry, underlying efficiency accounts for much less of the variation in sales in any given market. We use our results to simulate the effects of a 10 percent counterfactual decline in bilateral trade barriers on French firms. While total French sales rise by around $16 billion (U.S.), sales by the top decile of firms rise by nearly $23 billion (U.S.). Every lower decile experiences a drop in sales, due to selling less at home or exiting altogether.

KEYWORDS: Export destinations, firms, efficiency, sales distribution, simulated moments.
1. INTRODUCTION

WE EXPLOIT DETAILED DATA on the exports of French firms to confront a new generation of trade theories. These theories resurrect technological heterogeneity as the force that drives international trade. In Eaton and Kortum (2002), differences in efficiencies across countries in making different goods determine aggregate bilateral trade flows. Since they focused only on aggregate data, underlying heterogeneity across individual producers remains hidden. Subsequently, Melitz (2003) and Bernard, Eaton, Jensen, and Kortum (2003; henceforth BEJK) have developed models in which firm heterogeneity explicitly underlies comparative advantage. An implication is that data on individual firms can provide another window on the determinants of international trade.

On the purely empirical side, a literature has established a number of regularities about firms in trade.2 Another literature has modeled and estimated the

1 We thank Costas Arkolakis, Christopher Flinn, Jeremy Fox, Xavier d'Haultfoeuille, and Azeem Shaikh for extremely useful comments, and Dan Lu and Sebastian Sotelo for research assistance. The paper has benefitted enormously from the reports of three anonymous referees and from comments received at many venues, in particular the April 2007 conference to celebrate the scientific contributions of Robert E. Lucas, Jr. Eaton and Kortum gratefully acknowledge the support of the National Science Foundation under Grant SES-0339085.

2 For example, Bernard and Jensen (1995), for the United States, and Aw, Chung, and Roberts (2000), for Taiwan and Korea, documented the size and productivity advantage of exporters.
© 2011 The Econometric Society
DOI: 10.3982/ECTA8318
export decision of individual firms in partial equilibrium.3 However, the task of building a structure that can simultaneously embed behavior at the firm level into aggregate relationships and dissect aggregate shocks into their firm-level components remains incomplete. This paper seeks to further this mission.

To this end, we examine the sales of French manufacturing firms in 113 destinations, including France itself. Combining these data with observations on aggregate trade and production reveals striking regularities in (i) patterns of entry across markets, (ii) the distribution of sales across markets, (iii) how export participation connects with sales at home, and (iv) how sales abroad relate to sales at home.

We adopt the approach of Melitz (2003), as augmented by Helpman, Melitz, and Yeaple (2004) and Chaney (2008), as a basic framework for understanding these relationships. Core elements of the model are that firms' efficiencies follow a Pareto distribution, demand is Dixit–Stiglitz, and markets are separated by iceberg trade barriers and require a fixed cost of entry. The model is the simplest one we can think of that can square with the facts.

The basic model fails, however, to come to terms with some features of the data: (i) Firms do not enter markets according to an exact hierarchy. (ii) Their sales where they do enter deviate from the exact correlations the basic model insists on. (iii) Firms that export sell too much in France. (iv) In the typical destination, there are too many firms selling small amounts. To reconcile the basic model with the first two failures, we introduce market and firm-specific heterogeneity in entry costs and demand. We deal with the last two by incorporating Arkolakis's (2010) formulation of market access costs. The extended model, while remaining very parsimonious and transparent, is one that we can connect more formally to the data. We describe how the model can be simulated and we estimate its main parameters using the method of simulated moments.

Our parameter estimates imply that the forces underlying the basic model remain powerful. Simply knowing a firm's efficiency improves our ability to explain the probability it sells in any market by 57 percent. Conditional on a firm selling in a market, knowing its efficiency improves our ability to predict how much it sells there, but by much less. While these results leave much to be explained by the idiosyncratic interaction between individual firms and markets, they tell us that any theory that ignores features of the firm that are universal across markets misses much.

We conclude by using our parameterized model to examine a world with 10 percent lower trade barriers. To do so, we embed our model, with our parameter estimates, into a general equilibrium framework, calibrating it to aggregate data on production and bilateral trade. A striking finding is the extent to which lower trade barriers, while raising welfare in every country, favor the largest firms at the expense of others. Total output of French firms rises by

3 A pioneering paper here is Roberts and Tybout (1997).
3.8 percent, with all of the growth accounted for by firms in the top decile. Sales in every other decile fall. Import competition leads to the exit of 11.5 percent of firms, 43 percent of which are in the smallest decile.

Section 2, which follows, explores four empirical regularities. With these in mind, we turn, in Section 3, to a model of exporting by heterogeneous firms. Section 4 explains how we estimate the model and considers some implications of the parameters. Section 5 explores the consequences of lower trade costs.

2. EMPIRICAL REGULARITIES

Our data are the sales, translated into U.S. dollars, of 230,423 French manufacturing firms to 113 markets in 1986. (Table III in Section 5.3 lists the countries.) Among them, only 34,558 sell outside France. The firm that exports most widely sells to 110 out of the 113 destinations. All but 523 of these exporters indicate positive sales in France. Since much of our analysis focuses on the relationship between exporting and activity in France, we exclude the 523 firms without French sales from much of our analysis, leaving 34,035 exporters also selling in France.4 We cut the data in four different ways, each revealing sharp regularities.

2.1. Market Entry

Figure 1A plots the number of French manufacturing firms NnF selling to a market against total manufacturing absorption Xn in that market across our 113 markets.5 While the number of firms selling to a market tends to increase with the size of the market, the relationship is cloudy. The relationship comes into focus, however, when the number of firms is normalized by the share of France in a market. Figure 1B continues to report market size across the 113 destinations along the x axis. The y axis replaces NnF, the number of French firms selling to a market, with NnF divided by French market share, πnF, defined as

πnF = XnF /Xn,
where XnF is total exports of our French firms to market n. 4 Appendix A describes the data. Appendices, data, and programs are available in the Supplemental Material (Eaton, Kortum, and Kramarz (2011)). Biscourp and Kramarz (2007) and Eaton, Kortum, and Kramarz (2004; EKK) used the same sources. EKK (2004) partitioned firms into 16 manufacturing sectors. While features vary across industries, enough similarity remains to lead us to ignore the industry dimension here. If a firm’s total exports declared to French customs exceed its total sales from mandatory reports to the French fiscal administration, we treat the firm as not selling in France. The 523 such firms represent 1.51 percent of all French exporters and account for 1.23 percent of the total French exports to our 112 export destinations. 5 Manufacturing absorption is calculated as total production plus imports minus exports. See EKK (2004) for details.
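A minimal sketch of this normalization (the inputs are made-up numbers, not entries from the data set):

```python
# Normalization behind Figure 1B: divide the count of French sellers in a
# market by France's market share there, pi_nF = X_nF / X_n.
def normalized_entry(N_nF: int, X_nF: float, X_n: float) -> float:
    """N_nF / pi_nF, where pi_nF is French market share in market n."""
    pi_nF = X_nF / X_n
    return N_nF / pi_nF

# e.g., 1,000 French firms selling $50m into a $10bn market:
print(f"{normalized_entry(1000, 50e6, 10e9):,.0f}")  # prints 200,000
```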
FIGURE 1.—Entry and sales by market size.
TABLE I
FRENCH FIRMS EXPORTING TO THE SEVEN MOST POPULAR DESTINATIONS

Export Destination                         Number of Exporters    Fraction of Exporters
Belgium^a (BE)                             17,699                 0.520
Germany (DE)                               14,579                 0.428
Switzerland (CH)                           14,173                 0.416
Italy (IT)                                 10,643                 0.313
United Kingdom (UK)                        9752                   0.287
Netherlands (NL)                           8294                   0.244
United States (US)                         7608                   0.224
Any destination (all French exporters)     34,035

^a Belgium includes Luxembourg.
Note that the relationship is not only very tight, but linear in logs. Correcting for market share pulls France from the position of a large positive outlier to a slightly negative one. A regression line has a slope of 0.65.6

While the number of firms selling to a market rises with market size, so do sales per firm. Figure 1C shows the 95th, 75th, 50th, and 25th percentile sales in each market (on the y axis) against market size (on the x axis). The upward drift is apparent across the board.

We now turn to firm entry into different sets of markets. As a starting point for this examination, suppose firms obey a hierarchy in the sense that any firm selling to the (k + 1)st most popular destination necessarily sells to the kth most popular destination as well. Not surprisingly, firms are less orderly in their choice of destinations. Consider exporters to the top seven foreign destinations. Table I reports these destinations and the number of firms selling to each, as well as the total number of exporters. The last column of the table reports, for each top-seven destination, the unconditional probability of a French exporter selling there. Table II lists each of the strings of top-seven destinations that obey a hierarchical structure, together with the number of firms selling to each string (irrespective of their export activity outside the top 7). Overall, 27 percent of exporters (9260/34,035) obey a hierarchy among these most popular destinations. The next column of Table II uses the probabilities from Table I to predict the number of firms selling to each hierarchical string, if selling in one market is independent of selling in any other of the top seven.

6 If we make the assumption that French firms do not vary systematically in size from other (non-French) firms selling in a market, the measure on the y axis indicates the total number of firms selling in a market. We can then interpret Figure 1B as telling us how the number of sellers varies with market size.
TABLE II
FRENCH FIRMS SELLING TO STRINGS OF TOP-SEVEN COUNTRIES

                                     Number of French Exporters
Export String^a           Data       Under Independence      Model
BE                        3988       1700                    4417
BE–DE                     863        1274                    912
BE–DE–CH                  579        909                     402
BE–DE–CH–IT               330        414                     275
BE–DE–CH–IT–UK            313        166                     297
BE–DE–CH–IT–UK–NL         781        54                      505
BE–DE–CH–IT–UK–NL–US      2406       15                      2840
Total                     9260       4532                    9648

^a The string BE means selling to Belgium but no other among the top 7; BE–DE means selling to Belgium and Germany but no other, and so forth.
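The "Under Independence" column can be reproduced, up to rounding of the probabilities in Table I, with a short check of our own:

```python
# Expected number of firms selling to exactly a given string of top-seven
# markets under independent entry: product of p for markets in the string
# and (1 - p) for the rest, times the 34,035 exporters.
p = {"BE": 0.520, "DE": 0.428, "CH": 0.416, "IT": 0.313,
     "UK": 0.287, "NL": 0.244, "US": 0.224}
order = ["BE", "DE", "CH", "IT", "UK", "NL", "US"]
N = 34035

for k in range(1, 8):
    string = order[:k]
    prob = 1.0
    for m in order:
        prob *= p[m] if m in string else (1.0 - p[m])
    print("-".join(string), round(N * prob))
# The first line gives about 1700 firms selling to Belgium alone, and the
# last about 15 selling to all seven, matching the column in Table II.
```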
Under independence, the number of firms adhering to a hierarchy would be less than half of what we see in the data.

2.2. Sales Distributions

Our second exercise expands on Figure 1C by looking at the entire distribution of sales within individual markets. We plot the sales of each firm in a particular market (relative to mean sales there) against the fraction of firms selling in the market who sell at least that much.7 By doing so for all our 113 destinations, a remarkable similarity emerges. Figure 2 plots the results for Belgium, France, Ireland, and the United States on common axes. The basic shape is common for size distributions.8
7 Following Gabaix and Ibragimov (2011), we construct the x axis as follows. Denote the rank in terms of sales of French firm j in market n, among the NnF French firms selling there, as rn(j), with the firm with the largest sales having rank 1. For each firm j, we calculate (rn(j) − 0.5)/NnF. To preserve confidentiality, our plots report the geometric mean of a group of adjacent sales. For the top 1000 sales, a group contains four observations, with larger groups used for lower ranks.

8 Sales distributions are often associated with a Pareto distribution, at least in the upper tail (Simon and Bonini (1958)). To interpret Figure 2 as distributions, let xqn be the qth percentile of French sales in market n normalized by mean sales in that market. We can write Pr[xn ≤ xqn] = q, where xn is sales of a firm in market n relative to the mean. If the distribution is Pareto with parameter a > 1 (so that the minimum sales relative to the mean is (a − 1)/a), we have

1 − (axqn /(a − 1))^(−a) = q,

or

ln(xqn) = ln((a − 1)/a) − (1/a) ln(1 − q),

implying a straight line with slope −1/a. Considering only sales by the top 1 percent of French firms selling in the four destinations depicted in Figure 2, regressions yield slopes of −0.74 (Belgium), −0.87 (France), −0.69 (Ireland), and −0.82 (United States). Note, however, that the distributions appear to deviate from a Pareto distribution, especially at the lower end.

FIGURE 2.—Sales distributions of French firms: Graphs by country.
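A simulated sanity check of the log-rank regression in footnote 8 (the Pareto shape below is chosen to be near the reported slopes; no actual sales data are involved):

```python
import numpy as np

# Under Pareto sales, regressing log normalized sales on the log of
# (rank - 1/2) recovers a slope of -1/a. Simulated data only.
rng = np.random.default_rng(2)
a = 1.35                                  # Pareto shape; implied slope -1/a
x = rng.pareto(a, 50000) + 1.0            # Pareto draws with minimum 1
x /= x.mean()                             # normalize by mean sales

x_sorted = np.sort(x)[::-1]               # rank 1 = largest firm
ranks = np.arange(1, x_sorted.size + 1)
top = slice(0, x_sorted.size // 100)      # top 1 percent, as in the footnote
y = np.log(x_sorted[top])
# Gabaix-Ibragimov correction: use rank - 1/2 on the right-hand side.
X = np.column_stack([np.ones(y.size), np.log(ranks[top] - 0.5)])
slope = np.linalg.lstsq(X, y, rcond=None)[0][1]
print(f"estimated slope = {slope:.2f} (theory: {-1/a:.2f})")
```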
2.3. Export Participation and Size in France

How does a firm's participation in export markets relate to its sales in France? We organize our firms in two different ways based on our examination of their entry behavior above.

First, we group firms according to the minimum number of destinations where they sell. All of our firms, of course, sell to at least one market, while none sells to all 113 destinations. Figure 3A depicts average sales in France on the y axis for the group of firms that sell to at least k markets, with k on the x axis. Note the near monotonicity with which sales in France rise with the number of foreign markets served. Figure 3B reports, on a log scale, average sales in France of firms selling to k or more markets against the number of firms selling to k or more markets. The relationship is strikingly linear, with a regression slope of −0.66.

Second, we rank countries according to their popularity as destinations for exports. The most popular destination is France itself, where all of our firms sell, followed by Belgium with 17,699 exporters. The least popular is Nepal, where only 43 French firms sell. Figure 3C depicts average sales in France on the y axis plotted against the number of firms selling to the kth most popular market on the x axis. The relationship is tight and linear in logs as in Figure 3B, although slightly flatter, with a slope of −0.57.

FIGURE 3.—Sales in France and market entry.

Firms selling to less popular markets and to more markets systematically sell more in France. Delving further into the French sales of exporters to markets of varying popularity, Figure 3D reports the 95th, 75th, 50th, and 25th percentile sales in France (on the y axis) against the number of firms selling to each market. Note the tendency of sales in France to rise with the unpopularity of a destination across all percentiles (less systematically so for the 25th percentile).9
2.4. Export Intensity

Having looked separately at what exporters sell abroad and what they sell in France, we now examine the ratio of the two.

9 We were able to observe the relationship between market popularity and sales in France for the 1992 cross section as well. The analog (not shown) of Figure 3C is nearly identical. Furthermore, the changes between 1986 and 1992 in the number of French firms selling in a market correlate negatively with changes in the mean sales in France of these firms. The only glaring discrepancy is Iraq, where the number of French exporters plummeted between the two years, while average sales in France did not skyrocket, as the relationship would dictate.
FIGURE 4.—Distribution of export intensity, by market.
We introduce the concept of firm j's normalized export intensity in market n, which we define as

    [X_{nF}(j)/\bar{X}_{nF}] / [X_{FF}(j)/\bar{X}_{FF}].

Here X_{nF}(j) is French firm j's sales in market n and \bar{X}_{nF} is average sales by French firms in market n (X_{FF}(j) and \bar{X}_{FF} are the corresponding magnitudes in France). Scaling by \bar{X}_{nF} normalizes by the effect of market n on the size of all French firms there. Scaling by X_{FF}(j) normalizes by firm j's size in France.

Figure 4 plots the median and 95th percentile normalized export intensity for each foreign market n (on the y axis) against the number of firms selling to that market (on the x axis), on log scales. Two aspects stand out: (i) As a destination becomes more popular, normalized export intensity rises. The slope for the median is 0.38, but the relationship is noisy. (ii) Normalized "export" intensity for France itself is identically 1, while median export intensity in Figure 4 is usually 2 orders of magnitude or more below 1. Even among exporting firms, sales abroad are small compared to sales at home.

3. THEORY

In seeking to interpret these relationships, we turn to a parsimonious model which explains where firms sell and how much they sell there. The basic structure is monopolistic competition: Goods are differentiated, with each one corresponding to a firm. Selling in a market requires a fixed cost, while moving
goods from country to country incurs iceberg transport costs. Firms are heterogeneous in efficiency as well as in other characteristics, while countries vary in size, location, and fixed cost of entry.10

3.1. Producer Heterogeneity

A potential producer of good j in country i has efficiency z_i(j). A bundle of inputs there costs w_i, so that the unit cost of producing good j is w_i/z_i(j). Countries are separated by iceberg trade costs, so that delivering 1 unit of a good to country n from country i requires shipping d_{ni} ≥ 1 units, where we set d_{ii} = 1 for all i. Combining these terms, the unit cost to this producer of delivering 1 unit of good j to country n from country i is

    (1)    c_{ni}(j) = w_i d_{ni}/z_i(j).

The measure of potential producers in country i who can produce their good with efficiency at least z is

    (2)    μ_i^z(z) = T_i z^{−θ},    z > 0,

where θ > 0 and T_i ≥ 0 are parameters.11 Using (1), the measure of goods that can be delivered from country i to country n at unit cost below c is

    (3)    μ_{ni}(c) = μ_i^z(w_i d_{ni}/c) = Φ_{ni} c^θ,

where Φ_{ni} = T_i (w_i d_{ni})^{−θ}. We now turn to demand and market structure in a typical destination.

10 We go with Melitz's (2003) monopolistic competition approach rather than the Ricardian framework with a fixed range of commodities used in BEJK (2003), since it more readily delivers the feature that a larger market attracts more firms, as we see in our French data.

11 We follow Chaney (2008) in taking T_i as exogenous and Helpman, Melitz, and Yeaple (2004) and Chaney (2008) in treating the underlying heterogeneity in efficiency as Pareto. A Pareto distribution of efficiencies can arise naturally from a dynamic process that is a history of independent shocks, as shown by Simon (1955), Gabaix (1999), and Luttmer (2007). The Pareto distribution is closely linked to the type II extreme value (Fréchet) distribution used in Kortum (1997), Eaton and Kortum (1999, 2002), and BEJK (2003). Say that the range of goods is limited to the interval j ∈ [0, J], with the measure of goods produced with efficiency at least z given by μ_i^z(z; J) = J{1 − exp[−(T/J)z^{−θ}]} (where J = 1 in these previous papers). This generalization allows us to stretch the range of goods while compressing the distribution of efficiencies for any given good. Taking the limit as J → ∞ gives (2). (To take the limit, rewrite the expression as {1 − exp[−(T/J)z^{−θ}]}/J^{−1} and apply l'Hôpital's rule.)
3.2. Demand, Market Structure, and Entry

A market n contains a measure of potential buyers. To sell to a fraction f of them, a producer in country i selling good j in country n must incur a fixed cost

    (4)    E_{ni}(j) = ε_n(j) E_{ni} M(f).

Here ε_n(j) is a fixed-cost shock specific to good j in market n, and E_{ni} is the component of the cost shock faced by all sellers from country i in destination n. The function M(f), the same across destinations, relates a seller's fixed cost of entering a market to the share of consumers it reaches there. Any given buyer in the market has a chance f of accessing the good, so that f is also the fraction of buyers reached. In what follows, we use the specification for M(f) derived by Arkolakis (2010),

    M(f) = [1 − (1 − f)^{1−1/λ}] / (1 − 1/λ),

where the parameter λ > 0 reflects the increasing cost of reaching a larger fraction of potential buyers.12 This function has the desirable properties that the cost of reaching zero buyers in a market is zero and that the total cost increases continuously (and the marginal cost weakly increases) in the fraction f of buyers reached. This formulation can explain why some firms sell a very small amount in a market while others stay out entirely.

Each potential buyer in market n has the same probability f of being reached by a particular seller, which is independent across sellers. Hence each buyer can purchase the same measure of goods, although the particular goods in question vary across buyers. Buyers combine goods according to a constant elasticity of substitution (CES) aggregator with elasticity σ, where we require θ + 1 > σ > 1. Hence we can write the aggregate demand for good j, if it has price p and reaches a fraction f of the buyers in market n, as

    (5)    X_n(j) = α_n(j) f X_n (p/P_n)^{−(σ−1)},

12 By l'Hôpital's rule, at λ = 1, the function becomes M(f) = −ln(1 − f). Arkolakis (2010) provided an extensive discussion of this functional form, deriving it from a model of the microfoundations of marketing. He parameterized the function in terms of β = 1/λ. The case β > 0 corresponds to an increasing marginal cost of reaching additional buyers, which always results in an outcome with 0 ≤ f < 1. The case β ≤ 0 means a constant or decreasing marginal cost of reaching additional buyers, in which case the firm would go to a corner (f = 0 or f = 1) as in Melitz (2003). Our restriction λ > 0 thus covers all cases (including Melitz as λ → ∞) that we could observe.
where X_n is total spending there. The term α_n(j) reflects an exogenous demand shock specific to good j in market n. The term P_n is the CES price index, which we derive below.

The producer of good j from country i selling in market n with a unit cost of c_n(j), charging a price p, and reaching a fraction f of buyers, earns a profit

    (6)    Π_{ni}(p, f) = [1 − c_n(j)/p] α_n(j) f X_n (p/P_n)^{−(σ−1)} − ε_n(j) E_{ni} M(f).

To maximize (6), the producer sets the standard Dixit–Stiglitz (1977) markup over unit cost,

    (7)    p_n(j) = m c_n(j),

where

    m = σ/(σ − 1).

It seeks a fraction

    (8)    f_{ni}(j) = max{1 − [η_n(j) (X_n/(σE_{ni})) (m c_n(j)/P_n)^{−(σ−1)}]^{−λ}, 0}

of buyers in the market, where

    η_n(j) = α_n(j)/ε_n(j)

is the entry shock in market n, given by the ratio of the demand shock to the fixed-cost shock. Note that it will not sell at all, hence avoiding any fixed cost there, if

    η_n(j) (X_n/σ) (m c_n(j)/P_n)^{−(σ−1)} ≤ E_{ni}.

We can now describe a seller's behavior in market n in terms of its unit cost c_n(j) = c, demand shock α_n(j) = α, and entry shock η_n(j) = η. From the condition above, a firm enters market n if and only if
    (9)    c ≤ \bar{c}_{ni}(η),

where

    (10)    \bar{c}_{ni}(η) = [η X_n/(σE_{ni})]^{1/(σ−1)} (P_n/m).
We can use expression (10) to simplify expression (8) for the fraction of buyers reached by a producer with unit cost c ≤ \bar{c}_{ni}(η):

    (11)    f_{ni}(η, c) = 1 − [c/\bar{c}_{ni}(η)]^{λ(σ−1)}.

Substituting (7), (11), and (10) into (5) gives

    (12)    X_{ni}(α, η, c) = (α/η) {1 − [c/\bar{c}_{ni}(η)]^{λ(σ−1)}} [c/\bar{c}_{ni}(η)]^{−(σ−1)} σE_{ni}.
Conditioning on \bar{c}_{ni}(η), α/η replaces α in (5) as the shock to sales. Since the firm charges a markup m = σ/(σ − 1) over unit cost, its total gross profit is simply X_{ni}(α, η, c)/σ. Some of this profit is eaten up by its fixed cost, which, from (4) and (11), is

    (13)    E_{ni}(α, η, c) = (α/η) E_{ni} {1 − [c/\bar{c}_{ni}(η)]^{(λ−1)(σ−1)}} / (1 − 1/λ).
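To fix ideas, here is a minimal numerical sketch of the seller's problem in equations (10)–(12); all parameter values and shock draws below are hypothetical illustrations, not estimates from the paper:

    # Hypothetical parameter values, for illustration only
    sigma, lam = 3.0, 1.5              # elasticity of substitution, Arkolakis parameter
    X_n, P_n, E_ni = 100.0, 1.0, 0.5   # market size, price index, common fixed cost
    m = sigma / (sigma - 1.0)          # Dixit-Stiglitz markup, equation (7)

    def entry_cutoff(eta):
        """Unit-cost entry hurdle, equation (10)."""
        return (eta * X_n / (sigma * E_ni)) ** (1.0 / (sigma - 1.0)) * P_n / m

    def fraction_reached(eta, c):
        """Fraction of buyers reached, equation (11); zero at or above the cutoff."""
        return max(1.0 - (c / entry_cutoff(eta)) ** (lam * (sigma - 1.0)), 0.0)

    def sales(alpha, eta, c):
        """Sales conditional on entry, equation (12); zero if the firm stays out."""
        rel = c / entry_cutoff(eta)
        if rel > 1.0:
            return 0.0
        return (alpha / eta) * (1.0 - rel ** (lam * (sigma - 1.0))) \
               * rel ** (-(sigma - 1.0)) * sigma * E_ni

    # A firm with demand shock 1.2, entry shock 0.8, and unit cost 0.6:
    print(entry_cutoff(0.8), fraction_reached(0.8, 0.6), sales(1.2, 0.8, 0.6))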
To summarize, the relevant characteristics of market n that apply across sellers are total purchases X_n, the price index P_n, and, for sellers from country i, the common component of the fixed cost E_{ni}. The particular situation of a potential seller of product j in market n is captured by three magnitudes: the unit cost c_n(j), the demand shock α_n(j), and the entry shock η_n(j). We treat α_n(j) and η_n(j) as the realizations of producer-specific shocks drawn from a joint density g(α, η) that is the same across destinations n and independent of c_n(j).13

Equations (9) and (10), which govern entry, and (12), which governs sales conditional on entry, link our theory to the data on French firms' entry and sales in different markets of the world described in Section 2. Before returning to the data, however, we need to solve for the price index P_n in each market.

13 We require E[η^{θ/(σ−1)}] and E[α η^{[θ/(σ−1)]−1}] both to be finite. For any given g(α, η), these restrictions imply an upper bound on the parameter θ.

3.3. The Price Index

As described above, each buyer in market n has access to the same measure of goods (even though they are not necessarily the same goods). Every buyer faces the same probability f_{ni}(η, c) of purchasing a good with cost c and entry shock η, for any value of α. Hence we can write the price index faced by a
representative buyer in market n as

    P_n = m [ Σ_{i=1}^N ∫∫ ( ∫_0^{\bar{c}_{ni}(η)} α f_{ni}(η, c) c^{1−σ} dμ_{ni}(c) ) g(α, η) dα dη ]^{−1/(σ−1)}.

To solve, we use (3), (10), (11), and the laws of integration to get

    P_n = m { κ_0 Σ_{i=1}^N Φ_{ni} ∫∫ α [ (η X_n/(σE_{ni}))^{1/(σ−1)} (P_n/m) ]^{θ−(σ−1)} g(α, η) dα dη }^{−1/(σ−1)},

where

    κ_0 = θ/[θ − (σ − 1)] − θ/[θ + (σ − 1)(λ − 1)].
Moving P_n to one side of the equation gives

    (14)    P_n = m (κ_1 Ψ_n)^{−1/θ} X_n^{(1/θ)−1/(σ−1)},

where

    (15)    κ_1 = κ_0 ∫∫ α η^{[θ−(σ−1)]/(σ−1)} g(α, η) dα dη

and

    (16)    Ψ_n = Σ_{i=1}^N Φ_{ni} (σE_{ni})^{−[θ−(σ−1)]/(σ−1)}.

Note that the price index has an elasticity of (1/θ) − 1/(σ − 1) with respect to total expenditure (given the terms in Ψ_n). Our restriction that θ > σ − 1 makes the effect negative: A larger market enjoys lower prices, for reasons similar to the price index effect in Krugman (1980) and common across models of monopolistic competition.14

14 In models of monopolistic competition with homogeneous firms, consumers anywhere consume all varieties regardless of where they are made. Since more varieties are produced in a larger market, local consumers benefit from their ability to buy these goods without incurring trade costs. With heterogeneous firms and a fixed cost of market entry, an additional benefit to consumers in a large market is greater variety.
3.4. Entry, Sales, and Fixed Costs

An individual firm enters market n if its cost c is below the threshold given by (10), which we can now rewrite using (14) as

    (17)    \bar{c}_{ni}(η) = η^{1/(σ−1)} [X_n/(κ_1 Ψ_n)]^{1/θ} (σE_{ni})^{−1/(σ−1)}.

For firms from country i with a given value of η, a measure μ_{ni}(\bar{c}_{ni}(η)) pass the entry hurdle. To obtain the total measure of firms from i that sell in n, denoted J_{ni}, we integrate across the marginal density g_2(η):

    (18)    J_{ni} = ∫ μ_{ni}(\bar{c}_{ni}(η)) g_2(η) dη = (κ_2/κ_1) π_{ni} X_n/(σE_{ni}),

where

    (19)    κ_2 = ∫ η^{θ/(σ−1)} g_2(η) dη
and

    (20)    π_{ni} = Φ_{ni} (σE_{ni})^{−[θ−(σ−1)]/(σ−1)} / Ψ_n.

From (12), integrating over the measure of costs given by (3) and substituting (17), firms from country i with a given value of α and η sell a total amount

    X_{ni}(α, η) = α η^{[θ−(σ−1)]/(σ−1)} (κ_0/κ_1) π_{ni} X_n

in market n. To obtain total sales by firms from country i in market n, X_{ni}, we integrate across the joint density g(α, η) to get X_{ni} = π_{ni} X_n. Hence the trade share of source i in market n is just π_{ni}.15

15 Of the total measure J_n of firms selling in n, the fraction from i is

    J_{ni}/J_n = (π_{ni}/E_{ni}) / Σ_{k=1}^N (π_{nk}/E_{nk}).
To obtain an expression for the fixed costs incurred by firms from i in n, given their values of α and η, we integrate (13) over the measure with c below \bar{c}_{ni}(η) to get

    E_{ni}(α, η) = α η^{[θ−(σ−1)]/(σ−1)} {λ/([θ/(σ − 1)] + λ − 1)} π_{ni} X_n/(σκ_1).

Integrating across the joint density g(α, η), total fixed costs incurred in market n by firms from i are

    \bar{E}_{ni} = π_{ni} X_n [θ − (σ − 1)]/(σθ).

Summing across sources i, the total fixed costs incurred in market n are simply

    (21)    \bar{E}_n = {[θ − (σ − 1)]/θ} (X_n/σ).
Note that \bar{E}_n (spending on fixed costs) does not depend on the E_{ni}, the country-pair components of the fixed cost per firm, just as in standard models of monopolistic competition. A drop in E_{ni} leads to more entry and ultimately the same total spending on fixed costs. If total spending in a market is X_n, then gross profits earned by firms in that market are X_n/σ. If firms were homogeneous, then fixed costs would fully dissipate gross profits. With producer heterogeneity, firms with a unit cost below the entry cutoff in a market retain a net profit there. Total entry costs are a fraction [θ − (σ − 1)]/θ of gross profits, and net profits are X_n/(mθ).

3.5. A Streamlined Representation

We now employ a change of variables that simplifies the model in two respects. First, it allows us to characterize unit cost heterogeneity in terms of a uniform measure. Second, it allows us to consolidate parameters. To isolate the heterogeneous component of unit costs, we transform the efficiency of any potential producer in France as

    (22)    u(j) = T_F z_F(j)^{−θ}.
If E_{ni} does not vary according to i, then a country's share in the measure of firms selling in n and its share in total sales there are both π_{ni}, where

    π_{ni} = T_i (w_i d_{ni})^{−θ} / Σ_{k=1}^N T_k (w_k d_{nk})^{−θ}.
We refer to u(j) as firm j's standardized unit cost. From (2), the measure of firms with standardized unit cost below u equals the measure with efficiency above (T_F/u)^{1/θ}, which is simply μ_F^z((T_F/u)^{1/θ}) = u. Hence standardized costs have a uniform measure that does not depend on any parameter. Substituting (22) into (1) and using (20), we can write unit cost in market n in terms of u(j) as

    (23)    c_{nF}(j) = w_F d_{nF}/z_F(j) = [u(j)/Φ_{nF}]^{1/θ}.

Associated with the entry hurdle \bar{c}_{nF}(η) is a standardized entry hurdle \bar{u}_{nF}(η) satisfying

    (24)    \bar{c}_{nF}(η) = [\bar{u}_{nF}(η)/Φ_{nF}]^{1/θ}.

Firm j enters market n if its u(j) and η_n(j) satisfy

    (25)    u(j) ≤ \bar{u}_{nF}(η_n(j)) = η_n(j)^{\tilde{θ}} π_{nF} X_n/(κ_1 σE_{nF}),

where

    \tilde{θ} = θ/(σ − 1) > 1.
Conditional on firm j's passing this hurdle, we can use (23) and (24) to rewrite firm j's sales in market n, expression (12), in terms of u(j) as

    (26)    X_{nF}(j) = ε_n(j) {1 − [u(j)/\bar{u}_{nF}(η_n(j))]^{λ/\tilde{θ}}} [u(j)/\bar{u}_{nF}(η_n(j))]^{−1/\tilde{θ}} σE_{nF}.

Equations (25) and (26) reformulate the entry and sales equations (17) and (12) in terms of u(j) rather than c_n(j). Since standardized unit cost u(j) applies across all markets, it gets to the core of a firm's underlying efficiency as it applies to its entry and sales in different markets. Notice that in reformulating the model as (25) and (26), the parameters θ, σ, and E_{nF} enter only collectively through \tilde{θ} and σE_{nF}. The parameter \tilde{θ} translates unobserved heterogeneity in u(j) into observed heterogeneity in sales. A higher value of θ implies less heterogeneity in efficiency, while a higher value of σ means that a given level of heterogeneity in efficiency translates into greater heterogeneity in sales. By observing just entry and sales, we are able to identify only \tilde{θ}.
3.6. Connecting the Model to the Empirical Regularities

We now show how the model can deliver the features of the data about entry and sales described in Section 2. We equate the measure of French firms J_{nF} selling in each destination with the actual (integer) number N_{nF} and equate \bar{X}_{nF} with their average sales there.

3.6.1. Entry

From (18), we get

    (27)    N_{nF}/π_{nF} = (κ_2/κ_1) X_n/(σE_{nF}),

a relationship between the number of French firms selling to market n, relative to French market share, and the size of market n, just like the one plotted in Figure 1B. The fact that the relationship is tight, with a slope that is positive but less than 1, suggests that the entry cost σE_{nF} rises systematically with market size, but not proportionately so. We do not impose any such relationship, but rather employ (27) to calculate

    (28)    σE_{nF} = (κ_2/κ_1) π_{nF} X_n/N_{nF} = (κ_2/κ_1) \bar{X}_{nF}

directly from the data.16 Using (28), we can write (25) as

    (29)    u(j) ≤ \bar{u}_{nF}(η_n(j)) = η_n(j)^{\tilde{θ}} N_{nF}/κ_2.

Without variation in the firm-specific and market-specific entry shock η_n(j), (29) implies that efficiency is all that matters for entry, dictating a deterministic ranking of destinations, with a less efficient firm (with a higher u(j)) selling to a subset of the destinations served by any more efficient firm. Hence deviations from market hierarchies, as we see in Table II, identify variation in η_n(j).

16 We can use equation (28) to infer how fixed costs vary with country characteristics. Simply regressing \bar{X}_{nF} against our market size measure X_n (both in logs) yields a coefficient of 0.35 (1 minus the slope in Figure 1B, in which N_{nF}/π_{nF} rises with market size with an elasticity of 0.65). (The connection between the two regressions is a result of the accounting identity \bar{X}_{nF} · N_{nF}/π_{nF} = X_n.) If gross domestic product (GDP) per capita is added to the regression, it has a negative effect on entry costs. French data from 1992, and data from Denmark (from Pedersen (2009)) and from Uruguay (compiled by Raul Sampognaro), show similar results for market size but not a robust effect of GDP per capita. We also find that mean sales are higher in the home country. Appendix B reports these results.
3.6.2. Sales in a Market

Conditional on a firm's entry into market n, the term

    (30)    v_{nF}(j) = u(j)/\bar{u}_{nF}(η_n(j))

is distributed uniformly on [0, 1]. Replacing u(j) with v_{nF}(j) in expression (26), and exploiting (28), we can write sales as

    (31)    X_{nF}(j) = ε_n(j) [1 − v_{nF}(j)^{λ/\tilde{θ}}] v_{nF}(j)^{−1/\tilde{θ}} (κ_2/κ_1) \bar{X}_{nF}.

Not only does v_{nF} have the same distribution in each market n, so does ε_n.17 Hence the distribution of sales in any market n is identical up to a scaling factor equal to \bar{X}_{nF} (reflecting variation in σE_{nF}), consistent with the common shapes of sales distributions exhibited in Figure 2. If the only source of variation in sales were the term v_{nF}(j)^{−1/\tilde{θ}}, the sales distribution would be Pareto with parameter \tilde{θ}. The term in square brackets, however, implies a downward deviation from a Pareto distribution that is more pronounced as v_{nF}(j) approaches 1, consistent with the curvature in the lower end of the sales distributions that we observe in Figure 2. The sales distribution also inherits properties of the distribution of ε_n(j).

3.6.3. Sales in France Conditional on Entry in a Foreign Market

We can also look at the sales in France of French firms selling to any market n. To condition on these firms, we take (31) as it applies to France and use (30) and (29) to replace v_{FF}(j) with v_{nF}(j):

    (32)    X_{FF}(j)|_n = [α_F(j)/η_n(j)] {1 − v_{nF}(j)^{λ/\tilde{θ}} (N_{nF}/N_{FF})^{λ/\tilde{θ}} [η_n(j)/η_F(j)]^λ}
                × [v_{nF}(j) N_{nF}/N_{FF}]^{−1/\tilde{θ}} (κ_2/κ_1) \bar{X}_{FF}.
Since both v_{nF}(j) and η_n(j) have the same distributions across destinations n, the only systematic source of variation across n is N_{nF}.

17 To see that the distribution of ε_n(j) is the same in any n, consider the joint density of α and η conditional on entry into market n:

    \tilde{g}(α, η) = \bar{u}_{nF}(η) g(α, η) / ∫ \bar{u}_{nF}(η′) g_2(η′) dη′ = η^{\tilde{θ}} g(α, η)/κ_2,

which does not depend on n. The term η^{\tilde{θ}}/κ_2 captures the fact that entrants are a selected sample with typically better than average entry shocks.
Consider first the presence of N_{nF} in the term in square brackets, representing the fraction of buyers reached in France. Since N_{nF}/N_{FF} is near zero everywhere but France, the term in square brackets is close to 1 for all n ≠ F. Hence the relationship between N_{nF} and X_{FF}(j) is dominated by the appearance of N_{nF} outside the square brackets, implying that firms' sales in France fall with N_{nF} with an elasticity of −1/\tilde{θ}. Interpreting Figure 3C in terms of equation (32), the slope of −0.57 implies \tilde{θ} = 1.75.

Expression (32) also suggests how we can identify other parameters of the model. The gap between the percentiles in Figure 3D is governed by the variation in the demand shock α_F(j) in France together with the variation in the entry shock η_n(j) in country n.

Together (31) and (32) reconcile the near log linearity of sales in France with N_{nF} and the extreme curvature at the lower end of the sales distribution in any given market. An exporting firm may be close to the entry hurdle in the export market and hence sell to a small fraction of buyers there, while reaching most consumers at home. Hence looking at the home sales of exporters isolates firms that reach most of the French market. These equations also explain why France itself is somewhat below the trend line in Figures 3A and 3B: The many nonexporting firms that reach just a small fraction of the French market appear only in the data point for France.

3.6.4. Normalized Export Intensity

Finally, we can calculate firm j's normalized export intensity in market n:

    (33)    [X_{nF}(j)/\bar{X}_{nF}] / [X_{FF}(j)/\bar{X}_{FF}]
                = [α_n(j)/α_F(j)] × {1 − v_{nF}(j)^{λ/\tilde{θ}}} / {1 − v_{nF}(j)^{λ/\tilde{θ}} (N_{nF}/N_{FF})^{λ/\tilde{θ}} [η_n(j)/η_F(j)]^λ} × (N_{nF}/N_{FF})^{1/\tilde{θ}}.

Note first how the presence of the sales shock α_n(j) accommodates random variation in sales in different markets conditional on entry. As in (32), the only systematic source of cross-country variation on the right-hand side of (33) is N_{nF}. The relationship is consistent with three features of Figure 4. First, trivially, the observation for France is identically 1. Second, normalized export intensity is substantially below 1 for destinations served by only a small fraction of French firms, as is the case for all foreign markets. Third, normalized export intensity increases with the number of French firms selling there. According to (33), the elasticity of normalized export intensity with respect to N_{nF}/N_{FF} is 1/\tilde{θ} (ignoring the role of N_{nF}/N_{FF} in the denominator of the term in braces, which is tiny since N_{nF}/N_{FF} is close to zero for
n ≠ F). The slope coefficient of 0.38 reported in Section 2.4 suggests a value of \tilde{θ} = 2.63.18

4. ESTIMATION

We estimate the parameters of the model by the method of simulated moments. We simulate firms that make it into at least one foreign market and into France as well.19 Given parameter estimates, we later explore the implications of the model for nonexporters as well.

We first complete our parameterization of the model. Second, we explain how we simulate a set of artificial French exporters given a particular set of parameter values, with each firm assigned a cost draw u, and an α and η in each market. Third, we describe how we calculate a set of moments from these artificial data to compare with moments from the actual data. Finally, we explain our estimation procedure, report our results, and examine the model's fit.

4.1. Parameterization

To complete the specification, we assume that g(α, η) is joint log normal. Specifically, ln α and ln η are normally distributed with zero means, variances σ_α^2 and σ_η^2, and correlation ρ.20 Under these assumptions, we can write (15) and (19) as
    (34)    κ_1 = [\tilde{θ}/(\tilde{θ} − 1) − \tilde{θ}/(\tilde{θ} + λ − 1)] exp{[σ_α^2 + 2ρσ_α σ_η(\tilde{θ} − 1) + σ_η^2(\tilde{θ} − 1)^2]/2}

18 Equations (32) and (33) apply to the latent sales in France of firms that sell in n but do not enter France. In Figures 3C and 3D we can only look at the firms that sell in both places, of course. Since the French share in France is so much larger than the French share elsewhere, our theory predicts that a French firm selling in another market but not in France is very unlikely. Indeed, the number of such firms is small.

19 The reason for the selling-in-France requirement is that key moments in our estimation procedure involve sales in France by exporters, which we can compute only for firms that enter the home market. The reason for the foreign-market requirement is that firms selling only in France are very numerous, so that capturing them would consume a large portion of simulation draws. However, since their activity is so limited, they add little to parameter identification. We also estimated the model matching moments of nonexporting firms. Coefficient estimates were similar to those we report below, but the estimation algorithm, given estimation time, was much less precise.

20 Since the entry-cost shock is given by ln ε = ln α − ln η, the implied variance of the entry-cost shock is σ_ε^2 = σ_α^2 + σ_η^2 − 2ρσ_α σ_η, which is decreasing in ρ.
and

    (35)    κ_2 = exp[(\tilde{θ} σ_η)^2/2].

As above, in estimating the model, we equate the measure of French firms J_{nF} selling in each destination with the actual (integer) number N_{nF} and equate \bar{X}_{nF} with their average sales there.21 We are left with only five parameters to estimate: Θ = {\tilde{θ}, λ, σ_α, σ_η, ρ}. For a given Θ, we use (28) to back out the cluster of parameters σE_{nF} using our data on \bar{X}_{nF} = X_{nF}/N_{nF}, and the κ_1 and κ_2 implied by (34) and (35). Similarly, we use (29) to back out a firm's entry hurdle \bar{u}_{nF}(η_n) in each market given its η_n and the κ_2 implied by (35).

4.2. Simulation Algorithm

To estimate parameters, to assess the implications of those estimates, and to perform counterfactual experiments, we need to construct sets of artificial French firms that operate as the model tells them, given some Θ. We denote an artificial French exporter by s and the number of such exporters by S. The number S does not bear any relationship to the number of actual French exporters. A larger S implies less sampling variation in our simulations. As we search over different parameters Θ, we want to hold fixed the realizations of the stochastic components of the model. Hence, prior to running any simulations, (i) we draw S realizations of v(s) independently from the uniform distribution U[0, 1], putting them aside to construct standardized unit costs u(s) below, and (ii) we draw S × 113 realizations of a_n(s) and h_n(s) independently from the standard normal distribution N(0, 1), putting them aside to construct the α_n(s) and η_n(s) below.

A given simulation of the model requires a set of parameters Θ, data for each destination n on total sales X_{nF} by French exporters, and the number N_{nF} of French firms selling there. It involves the following eight steps (a code sketch of Steps 3–8 follows the list):

Step 1. Using (34) and (35), we calculate κ_1 and κ_2.

Step 2. Using (28), we calculate σE_{nF} for each destination n.

Step 3. We use the a_n(s)'s and h_n(s)'s to construct S × 113 realizations for each of ln α_n(s) and ln η_n(s) as

    ln α_n(s) = σ_α √(1 − ρ^2) a_n(s) + σ_α ρ h_n(s),
    ln η_n(s) = σ_η h_n(s).

21 The model predicts that some French firms export but do not sell domestically. Consequently, the data for N_{nF} and X_{nF} that we condition on in our estimation include the 523 French exporters (mentioned in footnote 4) who do not enter the domestic market.
Step 4. We construct the S × 113 entry hurdles

    (36)    \bar{u}_n(s) = η_n(s)^{\tilde{θ}} N_{nF}/κ_2,

where \bar{u}_n(s) stands for \bar{u}_{nF}(η_n(s)).

Step 5. We calculate

    \bar{u}_X(s) = max_{n≠F} {\bar{u}_n(s)},

the maximum u consistent with exporting somewhere, and

    (37)    \bar{u}(s) = min{\bar{u}_F(s), \bar{u}_X(s)},

the maximum u consistent with selling in France and exporting somewhere.

Step 6. To simulate exporters that sell in France, u(s) should be a realization from the uniform distribution over the interval [0, \bar{u}(s)]. Therefore, we construct u(s) = v(s)\bar{u}(s), using the v(s)'s that were drawn prior to the simulation.

Step 7. In the model, a measure u of firms have standardized unit cost below u. Our artificial French exporter s therefore gets an importance weight \bar{u}(s). This importance weight is used to construct statistics on artificial French exporters that relate to statistics on actual French exporters.22

Step 8. We calculate δ_{nF}(s), which indicates whether artificial exporter s enters market n, as determined by the entry hurdles:

    δ_{nF}(s) = 1 if u(s) ≤ \bar{u}_n(s), and 0 otherwise.

Wherever δ_{nF}(s) = 1, we calculate sales as

    (38)    X_{nF}(s) = [α_n(s)/η_n(s)] {1 − [u(s)/\bar{u}_n(s)]^{λ/\tilde{θ}}} [u(s)/\bar{u}_n(s)]^{−1/\tilde{θ}} σE_{nF}.

This procedure gives us the behavior of S artificial French exporters. We know three things about each one: where it sells, δ_{nF}(s); how much it sells there, X_{nF}(s); and its importance weight, \bar{u}(s). From these terms, we can compute any moment that could have been constructed from the actual French data.23

22 See Gouriéroux and Monfort (1994, Chap. 5) for a discussion of the use of importance weights in simulation.

23 In principle, in any finite sample, the number of simulated firms that overcome the entry hurdle for a destination n could be zero, even though the distribution of efficiencies is unbounded from above. Helpman, Melitz, and Rubinstein (2008) accounted for zeros by truncating the upper tail of the Pareto distribution. Zero exports to a destination then arise simply because not even the most efficient firm could surmount the entry barrier there.
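As flagged above, Steps 3–8 can be vectorized in a few lines. The sketch below uses placeholder parameter values and hypothetical market data (N_{nF}, σE_{nF}), not the French data or the estimates reported later:

    import numpy as np

    rng = np.random.default_rng(0)
    S, D = 100000, 113                       # artificial firms, destinations (0 = France)
    theta_t, lam = 2.5, 1.0                  # placeholder tilde(theta) and lambda
    sigma_a, sigma_e, rho = 1.5, 0.3, -0.5   # placeholder sigma_alpha, sigma_eta, rho
    kappa2 = np.exp((theta_t * sigma_e) ** 2 / 2)        # equation (35)
    N_nF = rng.integers(50, 20000, size=D).astype(float)  # hypothetical seller counts
    N_nF[0] = 230000.0                       # market 0 plays the role of France
    sigmaE_nF = rng.uniform(0.1, 1.0, size=D)             # hypothetical sigma*E_nF

    # Draws held fixed across parameter searches
    v = rng.uniform(size=S)
    a = rng.standard_normal((S, D))
    h = rng.standard_normal((S, D))

    # Step 3: correlated log-normal demand and entry shocks
    alpha = np.exp(sigma_a * np.sqrt(1 - rho**2) * a + sigma_a * rho * h)
    eta = np.exp(sigma_e * h)

    u_bar_n = eta**theta_t * N_nF / kappa2   # Step 4: entry hurdles (36)
    u_bar_X = u_bar_n[:, 1:].max(axis=1)     # Step 5: best foreign hurdle
    u_bar = np.minimum(u_bar_n[:, 0], u_bar_X)   # (37)
    u = v * u_bar                            # Step 6: u(s) ~ U[0, u_bar(s)]
    weight = u_bar                           # Step 7: importance weights

    delta = u[:, None] <= u_bar_n            # Step 8: entry indicators delta_nF(s)
    rel = np.where(delta, u[:, None] / u_bar_n, 1.0)
    X = np.where(delta,
                 (alpha / eta) * (1 - rel**(lam / theta_t))
                 * rel**(-1 / theta_t) * sigmaE_nF,
                 0.0)                        # sales (38), zero where the firm stays out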
4.3. Moments

For a candidate value Θ, we use the algorithm above to simulate the sales of 500,000 artificial French exporting firms in 113 markets. From these artificial data, we compute a vector of moments \hat{m}(Θ) analogous to particular moments m in the actual data. Our moments are the number of firms that fall into sets of exhaustive and mutually exclusive bins, where the number of firms in each bin is counted in the data and is simulated from the model. Let N^k be the number of firms achieving some outcome k in the actual data and let \hat{N}^k be the corresponding number in the simulated data. Using δ^k(s) as an indicator for when artificial firm s achieves outcome k, we calculate \hat{N}^k as

    (39)    \hat{N}^k = (1/S) Σ_{s=1}^S \bar{u}(s) δ^k(s).

We now describe the moments that we seek to match.24 We have chosen our moments to capture the four features of French firms' behavior described in Section 2:

24 Notice, from (36), (37), and (19), that the average of the importance weights \bar{u}(s) is the simulated number of French firms that export and sell in France.

• The first set of moments relate to the entry strings discussed in Section 2.1. We compute the proportion \hat{m}_k(1; Θ) of simulated exporters selling to each possible combination k of the seven most popular export destinations (listed in Table I). One possibility is exporting yet selling to none of the top seven, giving us 2^7 = 128 possible combinations (so that k = 1, ..., 128). The corresponding moments from the actual data are simply the proportions m_k(1) of exporters selling to each combination k. Stacking these proportions gives us \hat{m}(1; Θ) and m(1), each with 128 elements (subject to 1 adding-up constraint).

• The second set of moments relate to the sales distributions presented in Section 2.2. For firms selling in each of the 112 export destinations n, we compute the qth percentile sales s_n^q(2) in that market (i.e., the level of sales such that a fraction q of firms selling in n sells less than s_n^q(2)) for q = 50, 75, 95. Using these s_n^q(2), we assign firms that sell in n into four mutually exclusive and exhaustive bins determined by these three sales levels. We compute the proportions \hat{m}_n(2; Θ) of artificial firms falling into each bin, analogous to the actual proportions m_n(2) = (0.5, 0.25, 0.2, 0.05)′. Stacking across the 112 countries gives us \hat{m}(2; Θ) and m(2), each with 448 elements (subject to 112 adding-up constraints).

• The third set of moments relate to the sales in France of exporting firms as discussed in Section 2.3. For firms selling in each of the 112 export destinations
n, we compute the qth percentile sales s_n^q(3) in France for q = 50, 75, 95. Proceeding as above, we get \hat{m}(3; Θ) and m(3), each with 448 elements (subject to 112 adding-up constraints).

• The fourth set of moments relate to normalized export intensity by market as discussed in Section 2.4. For firms selling in each of the 112 export destinations n, we compute the qth percentile ratio s_n^q(4) of sales in n to sales in France for q = 50, 75. Proceeding as above, we get \hat{m}(4; Θ) and m(4), each with 336 elements (subject to 112 adding-up constraints).

For the last three sets, we emphasize higher percentiles because they (i) appear less noisy in the data and (ii) account for much more of total sales. Stacking the four sets of moments gives us a 1360-element vector of deviations between the moments of the actual and artificial data:

    y(Θ) = m − \hat{m}(Θ) = [m(1) − \hat{m}(1; Θ), m(2) − \hat{m}(2; Θ), m(3) − \hat{m}(3; Θ), m(4) − \hat{m}(4; Θ)]′.

By inserting the actual data on \bar{X}_{nF} (to get σE_{nF}) and N_{nF} (to get \bar{u}_n(s)) in our simulation routine, we are ignoring sampling error in these measures. Inserting \bar{X}_{nF} has no effect on our estimate of Θ. The reason is that a change in σE_{nF} shifts the sales in n of each artificial firm, the mean sales \bar{X}_{nF}, and the percentiles of sales in that market, all by the same proportion, leaving our moments unchanged. We only need \bar{X}_{nF} to get estimates of σE_{nF} given Θ. Our estimate of Θ does, however, depend on the data for N_{nF}. Appendix C reports on Monte Carlo simulations examining the sensitivity of our parameter estimates to this form of sampling error.25

4.4. Estimation Procedure

We base our estimation procedure on the moment condition

    E[y(Θ_0)] = 0,

where Θ_0 is the true value of Θ.
25 The coefficient of variation of N_{nF} is approximated by 1/√N_{nF}, which is highest for Nepal, with 43 sellers, at 0.152. The median number of sellers is 686 (Malaysia), implying a coefficient of variation of 0.038.
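In code, the importance-weighted counts (39) and the quadratic-form criterion minimized below are immediate. Continuing the simulation sketch from Section 4.2 (delta, weight, and S as defined there; the example bin is a stand-in for the four moment sets):

    def weighted_count(delta_k, weight, S):
        """N-hat of equation (39): importance-weighted count of an outcome k."""
        return (weight * delta_k).sum() / S

    # Example bin: the (weighted) number of simulated exporters entering market 5
    N_hat_5 = weighted_count(delta[:, 5].astype(float), weight, S)

    def objective(y, W):
        """The criterion y(Theta)' W y(Theta) minimized over Theta."""
        return y @ W @ y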
We thus seek a \hat{Θ} that achieves

    \hat{Θ} = arg min_Θ {y(Θ)′ W y(Θ)},

where W is a 1360 × 1360 weighting matrix.26 We search for \hat{Θ} using the simulated annealing algorithm.27 At each function evaluation involving a new value of Θ, we compute a set of 500,000 artificial firms and construct the moments for them as described above. The simulated annealing algorithm converges in 1–3 days on a standard PC. We calculate standard errors using a bootstrap technique, taking into account both sampling error and simulation error.28
26 The weighting matrix is the generalized inverse of the estimated variance–covariance matrix Ω of the 1360 moments calculated from the data m. We calculate Ω using the following bootstrap procedure: (i) We resample, with replacement, 229,900 firms from our initial data set 2000 times. (ii) For each resampling b, we calculate m^b, the proportion of firms that fall into each of the 1360 bins, holding the destination strings fixed to calculate m^b(1) and holding the s_n^q(τ) fixed to calculate m^b(τ) for τ = 2, 3, 4. (iii) We calculate

    Ω = (1/2000) Σ_{b=1}^{2000} (m^b − m)(m^b − m)′.

Because of the adding-up constraints, this matrix has rank 1023, forcing us to take its generalized inverse to compute W.

27 Goffe, Ferrier, and Rogers (1994) described the algorithm. We use a version developed specifically for GAUSS by Goffe (1996).

28 To account for sampling error, each bootstrap b replaces m with a different m^b. To account for simulation error, each bootstrap b samples a new set of 500,000 v^b's, a_n^b's, and h_n^b's as described in Section 4.2, thus generating a new \hat{m}^b(Θ). (Just like m^b, \hat{m}^b is calculated according to the bins defined from the actual data.) Defining y^b(Θ) = m^b − \hat{m}^b(Θ), for each b, we search for

    \hat{Θ}^b = arg min_Θ {y^b(Θ)′ W y^b(Θ)}
using the same simulated annealing procedure. Doing this exercise 25 times, we calculate

    V(\hat{Θ}) = (1/25) Σ_{b=1}^{25} (\hat{Θ}^b − \hat{Θ})(\hat{Θ}^b − \hat{Θ})′

and take the square roots of the diagonal elements as the standard errors. Since we pursue our bootstrapping procedure only to calculate standard errors rather than to perform tests, we do not recenter the moments to account for the initial misfit of our model. Recentering would involve setting

    y^b(Θ) = m^b − \hat{m}^b(Θ) − (m − \hat{m}(\hat{Θ}))

above. In fact, our experiments with recentered moments yielded similar estimates of the standard errors. See Horowitz (2001) for an authoritative explanation.
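The bootstrap covariance of footnote 26 and its generalized inverse reduce to two NumPy operations. A sketch, where m_boot stands for the hypothetical B × 1360 array of resampled moment vectors:

    import numpy as np

    def weighting_matrix(m_boot, m):
        """Omega of footnote 26 and W = its Moore-Penrose generalized inverse."""
        dev = m_boot - m                   # rows are (m^b - m)
        omega = dev.T @ dev / len(m_boot)  # rank-deficient under the adding-up constraints
        return np.linalg.pinv(omega)       # generalized inverse handles the deficiency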
4.5. Results

The best fit is achieved at the parameter values (with bootstrapped standard errors in parentheses):

    \tilde{θ}       λ          σ_α        σ_η        ρ
    2.46       0.91       1.69       0.34       −0.65
    (0.10)     (0.12)     (0.03)     (0.01)     (0.03)
As a check on our procedure, given our sample size, we conduct a Monte Carlo analysis, which is described in Appendix C. A basic finding is that the standard errors above are good indicators of the ability of our procedure to recover parameters. We also analyze the sensitivity of our results to different moments, as described in Appendix D. A basic finding is that the results are largely insensitive to the alternatives we explore.

We turn to some implications of our parameter estimates. Our discussion in Section 3.6 foreshadowed our estimate of \tilde{θ}, which lies between the values implied by the slopes in Figure 3C and Figure 4. From equations (31), (29), and (30), the characteristic of a firm determining both entry and sales conditional on entry is v^{−1/\tilde{θ}}, where v ∼ U[0, 1]. Our estimate of \tilde{θ} implies that the ratio of the 75th to the 25th percentile of this term is 1.56. Another way to assess the magnitude of \tilde{θ} is by its implication for aggregate fixed costs of entry. Using expression (21), our estimate of 2.46 implies that fixed costs dissipate 59 percent of gross profit in any destination.

Our estimate of σ_α implies enormous idiosyncratic variation in a firm's sales across destinations. In particular, the ratio of the 75th to the 25th percentile of the sales shock α is 9.78. In contrast, our estimate of σ_η means much less idiosyncratic variation in the entry shock η, with the ratio of the 75th to 25th percentile equal to 1.58. Given σ_α and σ_η, the variance of sales within a market decreases in ρ, as can be seen from equation (38). Hence the negative estimate of ρ reflects high variation of sales in a market.

A feature of the data is the entry of firms into markets where they sell very little, as seen in Figure 1C. Two features of our estimates reconcile these small sales with a fixed cost of entry. First, our estimate of λ, which is close to 1, means that a firm that is close to the entry cutoff incurs a very small entry cost.29 Second, the negative covariance between the sales and entry shocks explains why a firm with a given u might enter a market and sell relatively little. The first feature applies to firms that differ systematically in their efficiency, while the second applies to the luck of the draw in individual markets.
29 Arkolakis (2010) found a value around 1, consistent with various observations from several countries.
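The interquartile magnitudes cited above follow directly from the point estimates; a quick arithmetic check:

    import math
    from scipy.stats import norm

    theta_t, sigma_a, sigma_e = 2.46, 1.69, 0.34   # estimates from Section 4.5

    # Interquartile ratio of v**(-1/theta_t) with v ~ U[0, 1]
    print((0.75 / 0.25) ** (1 / theta_t))          # ~1.56

    # Interquartile ratio of a zero-median log-normal shock: exp(2 * z_75 * sigma)
    z75 = norm.ppf(0.75)                           # ~0.6745
    print(math.exp(2 * z75 * sigma_a))             # ~9.78 for the sales shock alpha
    print(math.exp(2 * z75 * sigma_e))             # ~1.58 for the entry shock eta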
4.6. Model Fit

We can evaluate the model by seeing how well it replicates features of the data described in Section 2. To glean a set of predictions from our model, we use our parameter estimates \hat{Θ} to simulate a set of artificial firms, including nonexporters.30 We then compare four features of these artificial firms with corresponding features of the actual firms.

Entry

Since our estimation routine conditions entry hurdles on the actual number of French firms selling in each market, our simulation would hit these numbers were it not for simulation error. The total number of exporters is a different matter, since the model determines the extent to which the same firms are selling to multiple countries. We simulate 31,852 exporters, somewhat below the actual number of 34,035.

Table II displays all the export strings that obey a hierarchy out of the 128 subsets of the seven most popular export destinations. The Data column is the actual number of French firms selling to that string of countries, while the last column displays the simulated number. In the actual data, 27.2 percent of exporters adhere to hierarchies, compared with 30.3 percent in the model simulation. In addition, the model captures very closely the number of firms selling to each of the seven different strings that obey a hierarchy.

Sales in a Market

Equation (31) motivates Figure 5A, which plots the simulated (×'s) and the actual (circles) values of the median and the 95th percentile sales to each market against actual mean French sales in that market. The model captures very well both the distance between the two percentiles in any given market and the variation of each percentile across markets. The model also nearly matches the amount of noise in these percentiles, especially in markets where mean sales are small.

Sales in France Conditional on Entry in a Foreign Market

Equation (32) motivates Figure 5B, which plots the median and the 95th percentile sales in France of firms selling to each market against the actual number of firms selling there. Again, the model picks up the spread in the distribution as well as the slope. It also captures the fact that the datum point for France is below the line, reflecting the role of λ. The model understates noise in these percentiles in markets served by a small number of French firms.

30 Here we simulate the behavior of S = 230,000 artificial firms, both nonexporters and exporters that sell in France, to mimic more closely features of the raw data behind our analysis. Thus in Step 5 in the simulation algorithm, we reset \bar{u}(s) = \bar{u}_F(s).
FIGURE 5.—Model versus data.
Export Intensity

Equation (33) motivates Figure 5C, which plots median normalized export intensity in each market against the actual number of French firms selling there. The model picks up the low magnitude of normalized export intensity and how it varies with the number of firms selling in a market. Despite our high estimate of σ_α, however, the model understates the noisiness of the relationship.

4.7. Sources of Variation

In our model, variation across firms in entry and sales reflects both differences in their underlying efficiency, which applies across all markets, and idiosyncratic entry and sales shocks in individual markets. We ask how much of the variation in entry and in sales can be explained by the universal rather than the idiosyncratic components.

4.7.1. Variation in Entry

We first calculate the fraction of the variance of entry in each market that can be explained by the cost draw u alone. A natural measure (similar to R^2 in a regression) of the explanatory power of the firm's cost draw for market entry is

    R_n^E = 1 − E[V_n^C(u)]/V_n^U,

where V_n^U is the variance of entry and E[V_n^C(u)] is the expected variance of entry given the firm's standardized unit cost u.31 We simulated the term E[V_n^C(u)] using the techniques employed in our estimation routine with 230,000 simulated firms and obtained a value of R_n^E for each of our 112 export markets. Looking across markets, the results indicate that we can attribute on average 57 percent (with a standard deviation of 0.01) of the variation in entry in a market to the core efficiency of the firm rather than to its draw of η in that market.

31 By the law of large numbers, the fraction of French firms selling in n approximates the probability that a French firm will sell in n. Writing this probability as q_n = N_{nF}/N_{FF}, the unconditional variance of entry for a randomly chosen French firm is V_n^U = q_n(1 − q_n). Conditional on its standardized unit cost u, a firm enters market n if its entry shock η_n satisfies η_n ≥ (uκ_2/N_{nF})^{1/\tilde{θ}}. Since ln η_n is distributed N(0, σ_η^2), the probability that this condition is satisfied is

    q_n(u) = 1 − Φ( ln(uκ_2/N_{nF}) / (\tilde{θ}σ_η) ),

where Φ is the standard normal cumulative distribution function. The variance conditional on u is therefore V_n^C(u) = q_n(u)[1 − q_n(u)].
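A sketch of this calculation using the closed forms in footnote 31 (the distribution of u below, uniform beneath the firm's French entry hurdle, is an illustrative simplification of the estimation routine's simulator, and the seller counts are hypothetical):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    theta_t, sigma_e = 2.46, 0.34
    kappa2 = np.exp((theta_t * sigma_e) ** 2 / 2)  # equation (35)
    N_nF, N_FF = 686.0, 230000.0                   # illustrative seller counts

    q_n = N_nF / N_FF
    V_U = q_n * (1 - q_n)                          # unconditional entry variance

    eta_F = np.exp(sigma_e * rng.standard_normal(500000))
    u = rng.uniform(size=500000) * eta_F ** theta_t * N_FF / kappa2

    q_u = 1 - norm.cdf(np.log(u * kappa2 / N_nF) / (theta_t * sigma_e))  # q_n(u)
    R_E = 1 - (q_u * (1 - q_u)).mean() / V_U       # share of entry variance due to u
    print(R_E)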
4.7.2. Variation in Sales

Looking at the firms that enter a particular market, how much does variation in u explain sales variation? Consider firm j selling in market n. Inserting (29) into (26), the log of sales is

    ln X_{nF}(j) = ln α_n(j) + ln{1 − [u(j)κ_2/(N_{nF} η_n(j)^{\tilde{θ}})]^{λ/\tilde{θ}}} − (1/\tilde{θ}) ln u(j) + ln[(N_{nF}/κ_2)^{1/\tilde{θ}} σE_{nF}],

where we have divided sales into four components (in order: the demand shock, the fraction of buyers reached, the cost draw, and a market-wide constant). Component 4 is common to all firms selling in market n, so it does not contribute to variation in sales there. The first component involves firm j's idiosyncratic sales shock in market n, while component 3 involves its efficiency shock that applies across all markets. Complicating matters is component 2, which involves both firm j's idiosyncratic entry shock in market n, η_n(j), and its overall efficiency shock, u(j). We deal with this issue by first asking how much of the variation in ln X_{nF}(j) is due to variation in component 3 and then in the variation in components 2 and 3 together.

We simulate sales of 230,000 firms across our 113 markets and divide the contribution of each component to its sales in each market where it sells. Looking across markets, we find that component 3 itself contributes only 4.8 percent of the variation in ln X_{nF}(j), and components 2 and 3 together contribute around 39 percent (with very small standard deviations).32 The dominant role of α is consistent with the shapes of the sales distributions in Figure 2. Even though the Pareto term u must eventually dominate the log-normal α at the very upper tail, such large observations are unlikely in a reasonably sized sample. With our parameter values, the log-normal term is evident in both the curvature and the slope, even among the biggest sellers.33

Together these results indicate that the general efficiency of a firm is very important in explaining its entry into different markets, but makes a much smaller contribution to the variation in the sales of firms actually selling in a market.34

32 In comparison, Munch and Nguyen (2009) found that firm effects drive around 40 percent of the sales variation of Danish sellers across markets.

33 Our estimate of \tilde{θ} implies that the sales distribution asymptotes to a slope with absolute value of around 0.41, much lower than the values reported in footnote 8 among the top 1 percent of firms selling in a market. Sornette (2000, pp. 80–82) discussed how a log normal with a large variance can mimic a Pareto distribution over a very wide range of the upper tail. Hence α continues to play a role, even among the largest sellers in our sample.

34 Our finding that u plays a larger role in entry than in sales conditional on entry is consistent with our higher estimate of σ_α relative to σ_η. A lower value of \tilde{θ} (implying more sales heterogeneity attributable to efficiency) for given σ_α and σ_η would lead us to attribute more variation both in entry and in sales to the firm's underlying efficiency.
4.8. Productivity

Our methodology so far has allowed us to estimate \tilde{θ}, which incorporates both underlying heterogeneity in efficiency, as reflected in θ, and how this heterogeneity in efficiency gets translated into sales, through σ. To separate \tilde{θ} into these components, we turn to data on firm productivity.

A common observation is that exporters are more productive (according to various measures) than the average firm.35 The same is true of our French exporters: The average value added per worker of exporters is 1.22 times the average for all firms. Moreover, value added per worker, like sales in France, tends to rise with the number of markets served, but not with nearly as much regularity. A reason for this relationship in our model is that a more efficient firm typically both enters more markets and sells more widely in any given market. As its fixed costs are not proportionately higher, larger sales get translated into higher value added relative to inputs used, including those used in fixed costs. An offsetting factor is that more efficient firms enter tougher markets where sales are lower relative to fixed costs. Determining the net effect of efficiency on productivity requires a quantitative assessment.

We calculate productivity among our simulated firms as value added per unit of factor cost, proceeding as follows.36 The value added V_i(j) of firm j from country i is its gross production Y_i(j) = Σ_n X_{ni}(j) less spending on intermediates I_i(j): V_i(j) = Y_i(j) − I_i(j). We calculate intermediate spending as I_i(j) = (1 − β)m^{−1} Y_i(j) + E_i(j), where β is the share of factor costs in variable costs and E_i(j) = Σ_n E_{ni}(j).37 Value added per unit of factor cost is

    (40)    q_i(j) = V_i(j)/[β m^{−1} Y_i(j)] = {[m − (1 − β)] − m[E_i(j)/Y_i(j)]}/β.
35 See, for example, Bernard and Jensen (1995), Clerides, Lach, and Tybout (1998), and BEJK (2003).

36 In our model, value added per worker and value added per unit of factor cost are proportional, since all producers in a country face the same wage and input costs, with labor having the same share.

37 We treat all fixed costs as purchased services. See EKK (2008) for a more general treatment.
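Equation (40) is a one-line function of a firm's gross production and total fixed costs. A sketch, with σ and β defaulting to the values found below:

    def productivity(Y, E, sigma=2.98, beta=0.34):
        """Value added per unit of factor cost, equation (40)."""
        m = sigma / (sigma - 1.0)
        return ((m - (1.0 - beta)) - m * (E / Y)) / beta

Applied to simulated Y_i(j) and E_i(j), the exporters' average productivity relative to that of all firms can then be matched to the 1.22 observed in the data by searching over σ.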
The only potential source of heterogeneity is in the ratio E_i(j)/Y_i(j).38 The expression for firm productivity (40) depends on the elasticity of substitution σ (through m = σ/(σ − 1)) independently of θ. We can thus follow BEJK (2003) and find the σ that makes the productivity advantage of exporters in our simulated data match their productivity advantage in the actual data (1.22). To find the productivity of a firm in the simulated data, we take its sales X_{ni}(j) and fixed cost E_{ni}(j), using (13), in each market where it sells and then sum to get Y_i(j) and E_i(j).39 We then calculate the average q_i(j) for exporters relative to that for all firms. The σ that delivers the same advantage in the simulated data as the actual is σ = 2.98, implying β = 0.34. Using our estimate of \tilde{θ} = θ/(σ − 1) = 2.46, the implied value of θ is 4.87.40

4.9. The Price Index Effect

With values for the individual parameters θ and σ, we can return to equation (14) and ask how much better off buyers are in a larger market. Taking

38 An implication of (40) is that the distribution of productivity of sellers in supplying any given market is invariant to any market-specific feature such as size, location, or trade openness. To see this result, replace E_i(j)/Y_i(j) in (40) with E_{ni}(j)/X_{ni}(j). Using (13) for the numerator and (26) for the denominator, exploiting (30), we can write this ratio in terms of v_{ni}(j) as

    E_{ni}(j)/X_{ni}(j) = [λ/(σ(λ − 1))] [v_{ni}(j)^{(1−λ)/\tilde{θ}} − 1]/[v_{ni}(j)^{−λ/\tilde{θ}} − 1]    for λ ≠ 1,
    E_{ni}(j)/X_{ni}(j) = −ln v_{ni}(j)/[σ\tilde{θ}(v_{ni}(j)^{−1/\tilde{θ}} − 1)]    for λ = 1.

Since v_{ni}(j) is distributed uniformly on [0, 1] in any market n, the distribution of the ratio of fixed costs to sales revenue depends only on λ, σ, and θ, and nothing specific to destination n.

39 We calibrate β from data on the share of manufacturing value added in gross production. Denoting the value-added share as s_V, averaging across data from the United Nations Industrial Development Organization (UNIDO (1999)) gives us s_V = 0.36. Taking into account profits and fixed costs, we calculate β = m s_V − 1/θ, so that β is determined from s_V simultaneously with our estimates of m and θ.

40 While the model here is different, footnote 11 suggests that the parameter θ plays a similar role here to its role in Eaton and Kortum (2002) and in BEJK (2003). Our estimate of θ = 4.87 falls between the estimates of 8.28 and 3.60, respectively, from those papers. Our estimate of σ = 2.98 is not far below the estimate of 3.79 from BEJK (2003). Using Chaney (2008) as a basic framework, Crozet and Koenig (2010) estimated θ and σ (along with a parameter δ that reflects the effect of distance D on trade costs) using French firm-level data on exports to countries contiguous to France. They employed a three-step procedure, estimating (i) a probit equation for firm entry, retrieving δθ from the coefficient on ln D, (ii) an equation that estimates firm sales, retrieving δ(σ − 1) from the coefficient on ln D, and (iii) a relationship between productivity and sales, retrieving [θ − (σ − 1)]. Performing this procedure for 34 industries, they obtained estimates of θ ranging from 1.34 to 4.43 (mean 2.59) and estimates of σ ranging from 1.11 to 3.63 (mean 1.72).
the E_{ni}'s as given, the elasticity of P_n with respect to X_n is 1/θ − 1/(σ − 1) = −0.30: Doubling market size leads to a 30 percent lower manufacturing price index.41

5. GENERAL EQUILIBRIUM AND COUNTERFACTUALS

We now use our framework to examine how counterfactual aggregate shocks in policy and the environment affect individual firms. To do so, we need to consider how such changes affect wages and prices. So far we have conditioned on a given equilibrium outcome. We now have to ask how the world reequilibrates.

5.1. Embedding the Model in a General Equilibrium Framework

Embedding our analysis in general equilibrium requires additional assumptions:

A1. Factors are as in Ricardo (1821). Each country is endowed with an amount L_i of labor (or a composite factor), which is freely mobile across activities within a country but does not migrate. Its wage in country i is W_i.

A2. Intermediates are as in Eaton and Kortum (2002). Consistent with Section 4.8, manufacturing inputs are a Cobb–Douglas combination of labor and intermediates, where intermediates have the price index P_i given in (14). Hence an input bundle costs w_i = W_i^β P_i^{1−β}, where β is the labor share.

A3. Nonmanufacturing is as in Alvarez and Lucas (2007). Final output, which is nontraded, is a Cobb–Douglas combination of manufactures and nonmanufactures, with manufactures having a share γ. Labor is the only input to nonmanufactures. Hence the price of final output in country i is proportional to P_i^γ W_i^{1−γ}.

A4. Fixed costs pay for labor in the destination. We thus decompose the country-specific component of the entry cost as E_{ni} = W_n F_{ni}, where F_{ni} reflects the labor required for entry for a seller from i in n.42

A5. Each country i's manufacturing deficit D_i and overall trade deficit D_i^A are held at their 1986 values (relative to world GDP).

41 Recall that our analysis in footnote 16 suggested that entry costs for French firms rise with market size with an elasticity of 0.35, attenuating this effect. Assuming that this elasticity applies to the entry costs E_{ni} for all sources i, a calculation using (14) and (16) still leaves us with an elasticity of (1 − 0.35) × (−0.30) = −0.20.

42 Combined with our treatment of fixed costs as intermediates in our analysis of firm-level productivity, A4 implies that these workers are outsourced manufacturing labor.
Equilibrium in the world market for manufactures requires that, summing across destinations n, absorption of manufactures from each country i equal its gross output Y_i, or

    (41)    Y_i = Σ_{n=1}^N π_{ni} X_n.
We can turn these equilibrium conditions for manufacturing output into conditions for labor market equilibrium that determine wages W_i in each country as follows: As shown in Appendix E, we can solve for Y_i in terms of wages W_i and exogenous terms as

    (42)    Y_i = [γσ(W_i L_i + D_i^A) − σD_i] / [1 + (σ − 1)(β − γ/θ)].

Since X_n = Y_n + D_n,

    (43)    X_n = [γσ(W_n L_n + D_n^A) − (σ − 1)(1 − β + γ/θ)D_n] / [1 + (σ − 1)(β − γ/θ)].

From expression (20), we can write

    (44)    π_{ni} = T_i(W_i^β P_i^{1−β} d_{ni})^{−θ} (σF_{ni})^{−[θ−(σ−1)]/(σ−1)} / Σ_{k=1}^N T_k(W_k^β P_k^{1−β} d_{nk})^{−θ} (σF_{nk})^{−[θ−(σ−1)]/(σ−1)}.
Substituting (42), (43), and (44) into (41) gives us a set of equations that determine wages W_i given prices P_i. From expression (14), we have

    (45)    P_n = m κ_1^{−1/θ} [ Σ_{i=1}^N T_i(W_i^β P_i^{1−β} d_{ni})^{−θ} (σF_{ni})^{−[θ−(σ−1)]/(σ−1)} ]^{−1/θ} (X_n/W_n)^{(1/θ)−1/(σ−1)},

giving us prices P_i given wages W_i. Together the two sets of equations deliver W_i and P_i around the world in terms of exogenous variables.
5.2. Calculating Counterfactual Outcomes

We apply the method used in Dekle, Eaton, and Kortum (2008; henceforth DEK) to equations (41) and (45) to calculate counterfactuals.43 Denote the counterfactual value of any variable x as x′ and define \hat{x} = x′/x as its change. Equilibrium in world manufactures in the counterfactual requires

    (46)    Y_i′ = Σ_{n=1}^N π_{ni}′ X_n′.
We can write each of the components in terms of each country's baseline labor income Y_i^L = W_i L_i, baseline trade shares π_{ni}, baseline deficits, and the changes in wages \hat{W}_i and prices \hat{P}_i, using (42), (43), and (44), as

    Y_i′ = [γσ(Y_i^L \hat{W}_i + D_i^A) − σD_i] / [1 + (σ − 1)(β − γ/θ)],

    X_n′ = [γσ(Y_n^L \hat{W}_n + D_n^A) − (σ − 1)(1 − β + γ/θ)D_n] / [1 + (σ − 1)(β − γ/θ)],

    π_{ni}′ = π_{ni} \hat{W}_i^{−βθ} \hat{P}_i^{−(1−β)θ} \hat{d}_{ni}^{−θ} \hat{F}_{ni}^{−[θ−(σ−1)]/(σ−1)} / Σ_{k=1}^N π_{nk} \hat{W}_k^{−βθ} \hat{P}_k^{−(1−β)θ} \hat{d}_{nk}^{−θ} \hat{F}_{nk}^{−[θ−(σ−1)]/(σ−1)}.
Substituting these three equations into (46) yields a set of equations involving the $\hat{W}_i$'s for given $\hat{P}_i$'s. From (45), we can get a set of equations that involve the $\hat{P}_i$'s for given $\hat{W}_i$'s:

(47)   $\hat{P}_n = \left[\sum_{i=1}^{N} \pi_{ni} \hat{W}_i^{-\beta\theta} \hat{P}_i^{-(1-\beta)\theta} \hat{d}_{ni}^{-\theta} \hat{F}_{ni}^{-[\theta-(\sigma-1)]/(\sigma-1)}\right]^{-1/\theta} \left(\dfrac{\hat{X}_n}{\hat{W}_n}\right)^{(1/\theta)-1/(\sigma-1)}$
We implement counterfactual simulations for our 113 countries in 1986, aggregating the rest of the world (ROW) into a 114th country. We calibrate the $\pi_{ni}$ with data on trade shares. We calibrate $\beta = 0.34$ (common across countries) as described in Section 4.8, and take $Y_i^L$ and country-specific $\gamma_i$'s from data on GDP, manufacturing production, and trade. Appendix F describes our sources of data and our procedures for assembling them to execute the counterfactual.

43 DEK (2008) calculated counterfactual equilibria in a perfectly competitive 44-country world. Here we adapt their procedure to accommodate the complications posed by monopolistic competition and firm heterogeneity, expanding coverage to 114 countries (our 113 plus the rest of the world).
A simple iterative algorithm solves jointly for changes in wages and prices, giving us $\hat{W}_n$, $\hat{P}_n$, $\hat{\pi}_{ni}$, and $\hat{X}_n$ (a schematic sketch of such an iteration follows the list of simulation tweaks below). From these values, we calculate (i) the implied change in French exports in each market $n$, using the French price index for manufactures as the numéraire, as $\hat{X}_{nF} = \hat{\pi}_{nF}\hat{X}_n/\hat{P}_F$, and (ii) the change in the number of French firms selling there, using (27), as $\hat{N}_{nF} = (\hat{\pi}_{nF}\hat{X}_n)/\hat{W}_n$.

We then calculate the implications of this change for individual firms. We hold fixed all of the firm-specific shocks that underlie firm heterogeneity so as to isolate the microeconomic changes brought about by general-equilibrium forces. We produce a data set, recording both baseline and counterfactual firm-level behavior, as follows:
• We apply the changes $\hat{X}_{nF}$ and $\hat{N}_{nF}$ derived above to our original data set to get counterfactual values of total French sales in each market $X_{nF}^C$ and the number of French sellers there $N_{nF}^C$.
• We run the simulation algorithm described in Section 4.2 with $S = 500{,}000$ and the same stochastic draws applying both to the baseline and to the counterfactual. We set $\Theta$ to the parameter estimates reported in Section 4.5. To accommodate counterfactuals, we tweak our algorithm in four places:
(a) In Step 2, we use our baseline values $X_{nF}$ and $N_{nF}$ to calculate baseline $\sigma E_{nF}$'s, and use our counterfactual values $X_{nF}^C$ and $N_{nF}^C$ to calculate counterfactual $\sigma E_{nF}^C$'s in each destination.
(b) In Step 4, we use our baseline values $X_{nF}$ and $N_{nF}$ to calculate baseline $u_n(s)$'s, and use our counterfactual values $X_{nF}^C$ and $N_{nF}^C$ to calculate counterfactual $u_n^C(s)$'s for each destination and firm, using (28) and (36).
(c) In Step 5, we set $\bar{u}(s) = \max_n \max\{u_n(s), u_n^C(s)\}$. A firm for which $u(s) \le u_n(s)$ sells in market $n$ in the baseline, while a firm for which $u(s) \le u_n^C(s)$ sells there in the counterfactual. Hence our simulation allows for entry, exit, and survival.
(d) In Step 8, we calculate entry and sales in each of the 113 markets in the baseline and in the counterfactual.
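For concreteness, the following minimal sketch in Python illustrates one way to organize the joint solution of (46) and (47). It is not the authors' code: the tâtonnement update rule, the damping factor, the wage normalization, and the array conventions (trade shares stored as an N × N matrix with importer n in rows) are our own assumptions, and it holds $\hat{F}_{ni} = 1$ as in the experiment of Section 5.3.

```python
import numpy as np

def solve_counterfactual(pi, YL, D, DA, gamma, d_hat, theta, sigma, beta,
                         tol=1e-10, damp=0.2, max_iter=100_000):
    """Iterate on (46)-(47) for the changes (W_hat, P_hat) after a trade-cost
    shock d_hat (N x N, importers n in rows), with F_hat = 1. pi[n, i] are
    baseline trade shares; YL, D, DA are baseline labor incomes and
    manufacturing/aggregate deficits. Hats are counterfactual/baseline ratios.
    The update rule, damping, and normalization are illustrative choices."""
    N = pi.shape[0]
    denom = 1.0 + (sigma - 1.0) * (beta - gamma / theta)
    a = 1.0 / theta - 1.0 / (sigma - 1.0)                 # exponent in (47)
    xc = (sigma - 1.0) * (1.0 - beta + gamma / theta)
    X_base = (gamma * sigma * (YL + DA) - xc * D) / denom  # (43) at W_hat = 1
    W_hat = np.ones(N)
    P_hat = np.ones(N)
    for _ in range(max_iter):
        # Counterfactual absorption and output: (42)-(43) in changes
        X_cf = (gamma * sigma * (YL * W_hat + DA) - xc * D) / denom
        Y_cf = (gamma * sigma * (YL * W_hat + DA) - sigma * D) / denom
        X_hat = X_cf / X_base
        # Inner fixed point: price changes from (47), given W_hat
        for _ in range(1000):
            num = (W_hat ** (-beta * theta)
                   * P_hat ** (-(1.0 - beta) * theta))[None, :] \
                  * d_hat ** (-theta)
            P_new = (pi * num).sum(axis=1) ** (-1.0 / theta) \
                    * (X_hat / W_hat) ** a
            done = np.max(np.abs(P_new / P_hat - 1.0)) < tol
            P_hat = P_new
            if done:
                break
        # Counterfactual trade shares and market clearing (46)
        num = (W_hat ** (-beta * theta)
               * P_hat ** (-(1.0 - beta) * theta))[None, :] * d_hat ** (-theta)
        pi_cf = pi * num / (pi * num).sum(axis=1, keepdims=True)
        gap = (pi_cf.T @ X_cf) / Y_cf       # sum_n pi'_{ni} X'_n over Y'_i
        if np.max(np.abs(gap - 1.0)) < tol:
            break
        W_hat = W_hat * gap ** damp         # raise wages where demand exceeds output
        W_hat = W_hat * YL.sum() / (YL * W_hat).sum()   # fix world labor income
    return W_hat, P_hat, pi_cf, X_hat
```

Given the solution, the firm-level inputs $\hat{X}_{nF} = \hat{\pi}_{nF}\hat{X}_n/\hat{P}_F$ and $\hat{N}_{nF} = \hat{\pi}_{nF}\hat{X}_n/\hat{W}_n$ follow directly, where $\hat{\pi}_{nF}$ is the ratio of the counterfactual to the baseline French share.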
5.3. Counterfactual: Implications of Globalization

We consider a 10 percent drop in trade barriers, that is, $\hat{d}_{ni} = 1/(1.1)$ for $i \ne n$ and $\hat{d}_{nn} = \hat{F}_{ni} = 1$. This change roughly replicates the increase in French import share over the decade following 1986.44

44 Using time-series data from the Organization for Economic Cooperation and Development's (OECD's) STAN data base, we calculated the ratio of manufacturing imports to manufacturing absorption (gross production + imports − exports) for the 16 OECD countries with uninterrupted annual data from 1986 through 2000. By 1997, this share had risen for all 16 countries, with a minimum increase of 2.4 percentage points for Norway and a maximum of 21.1 percentage points for Belgium. France, with a 10.0, and Greece, with an 11.0 percentage point increase, straddled the median.
TABLE III
AGGREGATE OUTCOMES OF COUNTERFACTUAL EXPERIMENT
Counterfactual Changes (ratio of counterfactual to baseline)

                                        Real    Relative    French Firms
Country                         Code    Wage    Wage        Sales   Number
Afghanistan                     AFG     1.01    0.92        1.22    1.23
Albania                         ALB     1.01    0.94        1.35    1.34
Algeria                         ALG     1.00    0.90        1.09    1.12
Angola                          ANG     1.00    0.90        1.08    1.10
Argentina                       ARG     1.01    0.96        1.57    1.52
Australia                       AUL     1.02    0.96        1.35    1.29
Austria                         AUT     1.04    1.04        1.49    1.32
Bangladesh                      BAN     1.01    0.95        1.37    1.33
Belgiuma                        BEL     1.09    1.11        1.44    1.19
Benin                           BEN     1.02    0.94        1.12    1.09
Bolivia                         BOL     1.02    0.94        1.21    1.18
Brazil                          BRA     1.01    0.96        1.64    1.57
Bulgaria                        BUL     1.02    0.95        1.38    1.34
Burkina Faso                    BUK     1.01    0.93        1.17    1.17
Burundi                         BUR     1.01    0.92        1.21    1.21
Cameroon                        CAM     1.01    0.92        1.19    1.19
Canada                          CAN     1.04    1.05        1.43    1.26
Central African Republic        CEN     1.02    1.03        1.33    1.20
Chad                            CHA     1.01    0.90        1.07    1.10
Chile                           CHI     1.03    1.02        1.53    1.38
China                           CHN     1.01    0.94        1.38    1.36
Colombia                        COL     1.01    0.92        1.22    1.23
Costa Rica                      COS     1.02    0.94        1.22    1.20
Cote d'Ivoire                   COT     1.03    0.98        1.36    1.28
Cuba                            CUB     1.01    0.93        1.26    1.24
Czechoslovakia                  CZE     1.03    1.01        1.52    1.38
Denmark                         DEN     1.04    1.06        1.46    1.27
Dominican Republic              DOM     1.04    0.99        1.36    1.28
Ecuador                         ECU     1.02    0.96        1.33    1.28
Egypt                           EGY     1.02    0.92        1.12    1.12
El Salvador                     ELS     1.02    0.93        1.10    1.10
Ethiopia                        ETH     1.01    0.92        1.08    1.09
Finland                         FIN     1.03    1.02        1.53    1.38
France                          FRA     1.02    1.00        0.95    0.88
Germany, East                   GEE     1.01    0.96        1.58    1.52
Germany, West                   GER     1.03    1.02        1.60    1.45
Ghana                           GHA     1.02    0.99        1.38    1.28
Greece                          GRE     1.02    0.97        1.37    1.30
Guatemala                       GUA     1.01    0.92        1.16    1.17
Honduras                        HON     1.02    0.95        1.19    1.16
Hong Kong                       HOK     1.14    1.20        1.33    1.02
Hungary                         HUN     1.05    1.04        1.41    1.25
India                           IND     1.01    0.95        1.40    1.37
Indonesia                       INO     1.02    0.96        1.44    1.38
Iran                            IRN     1.01    0.93        1.16    1.16
Iraq                            IRQ     1.04    0.94        1.08    1.06
Ireland                         IRE     1.07    1.09        1.43    1.21
Israel                          ISR     1.04    1.01        1.44    1.31
Italy                           ITA     1.02    0.99        1.57    1.46
Jamaica                         JAM     1.05    1.02        1.35    1.22
Japan                           JAP     1.01    0.98        1.80    1.69
Jordan                          JOR     1.03    0.95        1.16    1.13
Kenya                           KEN     1.01    0.93        1.18    1.18
Korea, South                    KOR     1.04    1.04        1.58    1.40
Kuwait                          KUW     1.02    0.94        1.14    1.12
Liberia                         LIB     1.49    1.03        1.27    1.14
Libya                           LIY     1.02    0.95        1.10    1.07
Madagascar                      MAD     1.01    0.94        1.22    1.20
Malawi                          MAW     1.01    0.92        1.17    1.17
Malaysia                        MAY     1.07    1.08        1.45    1.23
Mali                            MAL     1.02    0.95        1.15    1.12
Mauritania                      MAU     1.08    1.19        1.36    1.05
Mauritius                       MAS     1.07    1.05        1.47    1.29
Mexico                          MEX     1.01    0.94        1.32    1.30
Morocco                         MOR     1.02    0.98        1.37    1.30
Mozambique                      MOZ     1.01    0.94        1.29    1.26
Nepal                           NEP     1.01    1.01        1.35    1.24
Netherlands                     NET     1.06    1.14        1.41    1.14
New Zealand                     NZE     1.03    1.00        1.45    1.33
Nicaragua                       NIC     1.01    0.89        1.06    1.09
Niger                           NIG     1.02    1.09        1.47    1.25
Nigeria                         NIA     1.00    0.89        1.07    1.12
Norway                          NOR     1.04    1.03        1.38    1.23
Oman                            OMA     1.04    0.99        1.11    1.03
Pakistan                        PAK     1.02    0.97        1.41    1.34
Panama                          PAN     1.09    0.96        1.15    1.10
Papua New Guinea                PAP     1.07    1.09        1.33    1.13
Paraguay                        PAR     1.01    0.93        1.21    1.20
Peru                            PER     1.02    0.97        1.39    1.32
Philippines                     PHI     1.02    0.97        1.50    1.43
Portugal                        POR     1.03    1.03        1.47    1.32
Romania                         ROM     1.01    0.97        1.68    1.61
Rwanda                          RWA     1.00    0.90        1.14    1.17
Saudi Arabia                    SAU     1.02    0.95        1.15    1.11
Senegal                         SEN     1.03    1.01        1.36    1.24
Sierra Leone                    SIE     1.03    1.17        1.36    1.08
Singapore                       SIN     1.24    1.15        1.37    1.10
Somalia                         SOM     1.03    0.96        1.09    1.05
South Africa                    SOU     1.03    1.01        1.56    1.43
Spain                           SPA     1.02    0.97        1.49    1.42
Sri Lanka                       SRI     1.03    0.99        1.34    1.24
Sudan                           SUD     1.00    0.91        1.13    1.15
Sweden                          SWE     1.04    1.05        1.51    1.33
Switzerland                     SWI     1.05    1.05        1.48    1.31
Syria                           SYR     1.02    0.96        1.20    1.15
Taiwan                          TAI     1.04    1.05        1.64    1.44
Tanzania                        TAN     1.01    0.94        1.15    1.13
Thailand                        THA     1.03    0.99        1.50    1.40
Togo                            TOG     1.03    0.96        1.11    1.07
Trinidad and Tobago             TRI     1.04    1.01        1.22    1.12
Tunisia                         TUN     1.04    1.00        1.36    1.26
Turkey                          TUR     1.01    0.95        1.37    1.33
Uganda                          UGA     1.00    0.90        1.06    1.08
United Kingdom                  UNK     1.03    1.00        1.46    1.35
United States                   USA     1.01    0.96        1.45    1.40
Uruguay                         URU     1.02    1.00        1.65    1.53
USSR                            USR     1.00    0.92        1.32    1.33
Venezuela                       VEN     1.01    0.91        1.18    1.20
Vietnam                         VIE     1.01    0.95        1.37    1.33
Yugoslavia                      YUG     1.02    0.97        1.48    1.41
Zaire                           ZAI     1.06    1.21        1.37    1.04
Zambia                          ZAM     1.03    1.12        1.49    1.22
Zimbabwe                        ZIM     1.02    0.97        1.43    1.36
a Belgium includes Luxembourg.
Table III shows the aggregate general-equilibrium consequences of this change: (i) the change in the real wage $\hat{\omega}_n = (\hat{W}_n/\hat{P}_n)^{\gamma_n}$, (ii) the change in the relative wage $\hat{W}_n/\hat{W}_F$, (iii) the change in the sales of French firms to each market $\hat{X}_{nF}$, and (iv) the change in the number of French firms selling to each market $\hat{N}_{nF}$.
TABLE IV
COUNTERFACTUALS: FIRM TOTALSa

                         Baseline    Change From Baseline    Percentage Change
Number
  All firms               231,402        −26,589                 −11.5
  Exporting                32,969         10,716                  32.5
Values ($ millions)
  Total sales             436,144         16,442                   3.8
  Domestic sales          362,386        −18,093                  −5.0
  Exports                  73,758         34,534                  46.8

a Counterfactual simulation of a 10% decline in trade costs.
Lower trade barriers raise the real wage in every country, typically by less than 5 percent.45 Relative wages move quite a bit more, capturing terms-of-trade effects from globalization.46 The results that matter at the firm level are French sales and the number of French firms active in each market. While French sales decline by 5 percent in the home market, exports increase substantially, with a maximum 80 percent increase in Japan.47 The number of French exporters increases roughly in parallel with French exports.48

Table IV summarizes the results, which are dramatic. Total sales by French firms rise by $16.4 billion, the net effect of a $34.5 billion increase in exports and an $18.1 billion decline in domestic sales. Despite this rise in total sales,

45 There are several outliers on the upper end, with Belgium experiencing a 9 percent gain, Singapore a 24 percent gain, and Liberia a 49 percent gain. These results are associated with anomalies in the trade data due to entrepôt trade or (for Liberia) ships. These anomalies have little consequence for our overall results.
46 A good predictor of the change in country $n$'s relative wage is its baseline share of exports in manufacturing production, as the terms of trade favor small open economies as trade barriers decline. A regression in logs across the 112 export destinations yields an $R^2$ of 0.72.
47 A good predictor of the change in French exports to $n$ comes from log-linearizing $\hat{\pi}_{nF}$, noting that $\ln \hat{X}_{nF} = \ln \hat{\pi}_{nF} + \ln \hat{X}_n$. The variable that captures the change in French cost advantage relative to domestic producers in $n$ is $x_1 = \pi_{nn}[\beta \ln(\hat{W}_n/\hat{W}_F) - \ln \hat{d}_{nF}]$, with a predicted elasticity of $\theta$ (we ignore changes in the relative value of the manufacturing price index, since they are small). The variable that captures the percentage change in $n$'s absorption is $x_2 = \ln \hat{W}_n$, with a predicted elasticity of 1. A regression across the 112 export destinations of $\ln \hat{X}_{nF}$ on $x_1$ and $x_2$ yields an $R^2$ of 0.88 with a coefficient on $x_1$ of 5.66 and on $x_2$ of 1.30.
48 We can compare these counterfactual outcomes with the actual change in the number of French sellers in each market between 1986 and 1992. We tend to overstate the increase and to understate heterogeneity across markets. The correlation between the counterfactual change and the change in the data is nevertheless 0.40. Our counterfactual, treating all parameters as given except for the iceberg trade costs, obviously misses much of what happened in those 6 years.
TABLE V
COUNTERFACTUALS: FIRM ENTRY AND EXIT BY INITIAL SIZE

                                  All Firms                                Exporters
Initial Size             Baseline      Change From    Change     Baseline      Change From    Change
Interval (percentile)    No. of Firms  Baseline       (%)        No. of Firms  Baseline       (%)
Not active                     0          1,118         —              0          1,118         —
0 to 10                   23,140        −11,551       −49.9          767             15        2.0
10 to 20                  23,140         −5,702       −24.6          141             78       55.1
20 to 30                  23,140         −3,759       −16.2          181            192      106.1
30 to 40                  23,140         −2,486       −10.7          357            357      100.0
40 to 50                  23,140         −1,704        −7.4          742            614       82.8
50 to 60                  23,138         −1,141        −4.9        1,392            904       65.0
60 to 70                  23,142           −726        −3.1        2,450          1,343       54.8
70 to 80                  23,140           −405        −1.8        4,286          1,829       42.7
80 to 90                  23,140           −195        −0.8        7,677          2,290       29.8
90 to 99                  20,826            −38        −0.2       12,807          1,915       15.0
99 to 100                  2,314              0         0.0        2,169             62        2.8
Total                    231,402        −26,589                   32,969         10,716
competition from imports drives almost 27,000 firms out of business, although almost 11,000 firms start exporting. Tables V and VI decompose these changes into the contributions of firms of different baseline size, with Table V considering the counts of firms. Nearly half the firms in the bottom decile are wiped out, while only the top percentile avoids any attrition. Because so many firms in the top decile already export, the greatest number of new exporters emerge from the second highest decile. The biggest percentage increase in the number of exporters is for firms in the third decile from the bottom.

Table VI decomposes sales revenues. All of the increase is in the top decile, and most of that in the top percentile. For every other decile, sales decline. Almost two-thirds of the increase in export revenue is from the top percentile, although lower deciles experience much higher percentage increases in their export revenues. Comparing the numbers in Tables V and VI reveals that, even among survivors, revenue per firm falls in every decile except the top. In summary, the decline in trade barriers improves the performance of the very top firms at the expense of the rest.49

In results not shown, we decompose the findings according to the number of markets where firms initially sold. Most of the increase in export revenues is

49 The first row of the tables pertains to firms that entered only to export. There are only 1,118 of them, selling a total of $4 million.
TABLE VI
COUNTERFACTUALS: FIRM GROWTH BY INITIAL SIZE

                                  Total Sales                              Exports
Initial Size             Baseline      Change From    Change     Baseline      Change From    Change
Interval (percentile)    ($ millions)  Baseline       (%)        ($ millions)  Baseline       (%)
Not active                     0              3         —              0              3         —
0 to 10                       41            −24       −58.0            1              2      345.4
10 to 20                     190            −91       −47.7            1              2      260.3
20 to 30                     469           −183       −39.0            1              3      266.7
30 to 40                     953           −308       −32.3            2              7      391.9
40 to 50                   1,793           −476       −26.6            6             18      307.8
50 to 60                   3,299           −712       −21.6           18             48      269.7
60 to 70                   6,188         −1,043       −16.9           58            130      223.0
70 to 80                  12,548         −1,506       −12.0          206            391      189.5
80 to 90                  31,268         −1,951        −6.2        1,085          1,501      138.4
90 to 99                 148,676          4,029         2.7       16,080         11,943       74.3
99 to 100                230,718         18,703         8.1       56,301         20,486       36.4
Total                    436,144         16,442                   73,758         34,534
among the firms that were already exporting most widely, but the percentage increase falls with the initial number of markets served. For firms that initially export to few markets, a substantial share of export growth comes from entering new markets. Finally, we can also look at what happens in each foreign market. Very little of the increase in exports to each market is due to entry (the extensive margin). Turning to growth in sales by incumbent firms, as Arkolakis (2010) would predict, sales by firms with an initially smaller presence grow substantially more than those at the top.50

6. CONCLUSION

A literature that documents the superior performance of exporters, notably Bernard and Jensen (1995), inspired a new generation of trade theories that incorporate producer heterogeneity. These theories, in turn, deliver predictions beyond the facts that motivated them. To account for the data presented here,

50 We can examine another counterfactual analytically: a uniform change $\hat{F}$ in the labor required for entry. Proportional changes in $\hat{F}$ cancel out in the expression for $\hat{\pi}_{ni}$, so that the only action is in equation (47) for the manufacturing price index:
$\hat{P} = \hat{F}^{[\theta-(\sigma-1)]/[\beta\theta(\sigma-1)]}$

With $\hat{F} = 1/1.1$, the number of French firms selling in every market rises by 10 percent and total sales increase by the factor $1/0.919 = 1.088$ in each market.
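To trace where this exponent comes from (our own intermediate algebra, not spelled out in the footnote), impose $\hat{d}_{ni} = \hat{W}_n = \hat{X}_n = 1$ and a common $\hat{F}_{ni} = \hat{F}$ in (47), so that all the $\hat{P}_n$ equal a common $\hat{P}$; since the baseline shares $\pi_{ni}$ sum to 1,
\[
\hat{P} = \left[\hat{P}^{-(1-\beta)\theta}\,\hat{F}^{-[\theta-(\sigma-1)]/(\sigma-1)}\right]^{-1/\theta}
        = \hat{P}^{\,1-\beta}\,\hat{F}^{[\theta-(\sigma-1)]/[\theta(\sigma-1)]},
\]
and solving $\hat{P}^{\beta} = \hat{F}^{[\theta-(\sigma-1)]/[\theta(\sigma-1)]}$ for $\hat{P}$ gives the displayed expression.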
which break down firms' foreign sales into individual export destinations, these theories are confronted with a formidable challenge. We find that the Melitz (2003) model, with parsimonious modification, succeeds in explaining the basic features of such data along a large number of dimensions: entry into markets, sales distributions conditional on entry, and the link between entry and sales in individual foreign destinations and sales at home. Not only does the framework explain the facts, it provides a link between firm-level and aggregate observations that allows for a general-equilibrium examination of the effect of aggregate shocks on individual firms.

Our framework does not, however, constitute a reductionist explanation of what firms do in different markets around the world. In particular, it leaves the vastly different performance of the same firm in different markets as a residual. Our analysis points to the need for further research into accounting for this variation.

REFERENCES

ALVAREZ, F., AND R. E. LUCAS (2007): "A General Equilibrium Analysis of the Eaton–Kortum Model of International Trade," Journal of Monetary Economics, 54, 1726–1768. [1486]
ARKOLAKIS, K. (2010): "Market Penetration Costs and the New Consumers Margin in International Trade," Journal of Political Economy, 118, 1151–1199. [1454,1463,1479,1495]
AW, B. Y., S. CHUNG, AND M. J. ROBERTS (2000): "Productivity and Turnover in the Export Market: Micro-Level Evidence From the Republic of Korea and Taiwan (China)," World Bank Economic Review, 14, 65–90. [1453]
BERNARD, A. B., AND J. B. JENSEN (1995): "Exporters, Jobs, and Wages in U.S. Manufacturing: 1976–1987," Brookings Papers on Economic Activity: Microeconomics, 1995, 67–119. [1453,1484,1495]
BERNARD, A. B., J. EATON, J. B. JENSEN, AND S. KORTUM (2003): "Plants and Productivity in International Trade," American Economic Review, 93, 1268–1290. [1453,1462,1484,1485]
BISCOURP, P., AND F. KRAMARZ (2007): "Employment, Skill Structure, and International Trade: Firm-Level Evidence for France," Journal of International Economics, 72, 22–51. [1455]
CHANEY, T. (2008): "Distorted Gravity: Heterogeneous Firms, Market Structure, and the Geography of International Trade," American Economic Review, 98, 1707–1721. [1454,1462,1485]
CLERIDES, S., S. LACH, AND J. R. TYBOUT (1998): "Is Learning by Exporting Important? Micro-Dynamic Evidence From Colombia, Mexico, and Morocco," Quarterly Journal of Economics, 113, 903–947. [1484]
CROZET, M., AND P. KOENIG (2010): "Structural Gravity Equations With Intensive and Extensive Margins," Canadian Journal of Economics, 43, 41–62. [1485]
DEKLE, R., J. EATON, AND S. KORTUM (2008): "Global Rebalancing With Gravity," IMF Staff Papers, 55, 511–540. [1488]
DIXIT, A., AND J. E. STIGLITZ (1977): "Monopolistic Competition and Optimum Product Diversity," American Economic Review, 67, 297–308. [1464]
EATON, J., AND S. KORTUM (1999): "International Technology Diffusion: Theory and Measurement," International Economic Review, 40, 537–570. [1462]
——— (2002): "Technology, Geography, and Trade," Econometrica, 70, 1741–1780. [1453,1462,1485,1486]
EATON, J., S. KORTUM, AND F. KRAMARZ (2004): "Dissecting Trade: Firms, Industries, and Export Destinations," American Economic Review, Papers and Proceedings, 94, 150–154. (Data sources and tables available in NBER Working Paper 10344.) [1455]
——— (2008): "An Anatomy of International Trade: Evidence From French Firms," Working Paper 14610, NBER. [1484]
——— (2011): "Supplement to 'An Anatomy of International Trade: Evidence From French Firms'," Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/8318_miscellaneous.pdf; http://www.econometricsociety.org/ecta/Supmat/8318_data and programs-1.zip; http://www.econometricsociety.org/ecta/Supmat/8318_data and programs-2.zip. [1455]
GABAIX, X. (1999): "Zipf's Law for Cities: An Explanation," Quarterly Journal of Economics, 114, 739–767. [1462]
GABAIX, X., AND R. IBRAGIMOV (2011): "Rank-1/2: A Simple Way to Improve the OLS Estimation of Tail Exponents," Journal of Business & Economic Statistics, 29, 24–39. [1458]
GOFFE, W. L. (1996): "SIMANN: A Global Optimization Algorithm Using Simulated Annealing," Studies in Nonlinear Dynamics and Econometrics, 1, Algorithm 1. Available at http://www.bepress.com/snde/vol1/iss3/algorithm1. [1478]
GOFFE, W. L., G. D. FERRIER, AND J. ROGERS (1994): "Global Optimization of Statistical Functions With Simulated Annealing," Journal of Econometrics, 60, 65–99. [1478]
GOURIÉROUX, C., AND A. MONFORT (1994): Simulation Based Econometric Methods. CORE Lecture Series. Louvain-la-Neuve: Université catholique de Louvain. [1475]
HELPMAN, E., M. J. MELITZ, AND Y. RUBINSTEIN (2008): "Estimating Trade Flows: Trading Partners and Trading Volumes," Quarterly Journal of Economics, 123, 441–487. [1476]
HELPMAN, E., M. J. MELITZ, AND S. R. YEAPLE (2004): "Exports vs. FDI With Heterogeneous Firms," American Economic Review, 94, 300–316. [1454,1462]
HOROWITZ, J. (2001): "The Bootstrap," in Handbook of Econometrics, Vol. 5, ed. by J. J. Heckman and E. E. Leamer. Amsterdam: Elsevier Science B.V., 3159–3228, Chapter 52. [1478]
KORTUM, S. (1997): "Research, Patenting, and Technological Change," Econometrica, 65, 1389–1419. [1462]
KRUGMAN, P. R. (1980): "Scale Economies, Product Differentiation, and the Pattern of Trade," American Economic Review, 70, 950–959. [1466]
LUTTMER, E. (2007): "Selection, Growth, and the Size Distribution of Firms," Quarterly Journal of Economics, 122, 1103–1144. [1462]
MELITZ, M. J. (2003): "The Impact of Trade on Intra-Industry Reallocations and Aggregate Industry Productivity," Econometrica, 71, 1695–1725. [1453,1454,1462,1463,1496]
MUNCH, J. R., AND D. X. NGUYEN (2009): "Decomposing Firm-Level Sales Variation," Working Paper 2009-05, EPRU. [1483]
OECD (1995): The OECD STAN Database. [1489]
PEDERSEN, N. K. (2009): "Essays in International Trade," Ph.D. Thesis, Northwestern University, Evanston, IL. [1470]
RICARDO, D. (1821): On The Principles of Political Economy and Taxation. London: Murray. [1486]
ROBERTS, M., AND J. R. TYBOUT (1997): "The Decision to Export in Colombia: An Empirical Model of Entry With Sunk Costs," American Economic Review, 87, 545–564. [1454]
SIMON, H. A. (1955): "On a Class of Skew Distribution Functions," Biometrika, 42, 425–440. [1462]
SIMON, H. A., AND C. P. BONINI (1958): "The Size Distribution of Business Firms," American Economic Review, 48, 607–617. [1458]
SORNETTE, D. (2000): Critical Phenomena in Natural Sciences. Berlin: Springer. [1483]
UNIDO (1999): Industrial Statistics Database. [1485]
Dept. of Economics, Pennsylvania State University, 608 Kern Graduate Building, University Park, PA 16802, U.S.A.;
[email protected], Dept. of Economics, University of Chicago, 1126 East 59th Street, Chicago, IL 60637, U.S.A.;
[email protected],
and CREST (ENSAE), 15 Boulevard Gabriel Péri, 92245, Malakoff, France;
[email protected]. Manuscript received December, 2008; final revision received December, 2010.
Econometrica, Vol. 79, No. 5 (September, 2011), 1499–1539
NONPARAMETRIC IDENTIFICATION OF A CONTRACT MODEL WITH ADVERSE SELECTION AND MORAL HAZARD

BY ISABELLE PERRIGNE AND QUANG VUONG1

This paper studies the nonparametric identification of a contract model with adverse selection and moral hazard. Specifically, we consider the false moral hazard model developed by Laffont and Tirole (1986). We first extend this model to allow for general random demand and cost functions. We establish the nonparametric identification of the demand, cost, deterministic transfer, and effort disutility functions as well as the joint distribution of the random elements of the model, which are the firm's type and the demand, cost, and transfer shocks. The cost of public funds is identified with the help of an instrument. Testable restrictions of the model are characterized.

KEYWORDS: Contracts, adverse selection, moral hazard, procurement, regulation, nonparametric identification.
1. INTRODUCTION

OVER THE PAST 30 YEARS, economists have emphasized the fundamental role played by asymmetric information in economic relationships. The imperfect knowledge of key economic variables induces strategic behavior among economic agents. Contracts provide an important example of how information asymmetry and incentives govern relationships between a principal and an agent. Imperfect information takes the form of an agent's hidden characteristic or type and an agent's hidden action or effort, leading to the so-called adverse selection and moral hazard problems, respectively; see Laffont and Martimort (2002) and Bolton and Dewatripont (2005).

In this paper, we focus on the mixed model, combining adverse selection and moral hazard, developed by Laffont and Tirole (1986) for procurement and regulation.2 Our interest is motivated as follows.

1 This paper is dedicated to the memory of Jean-Jacques Laffont. We feel fortunate to have been his friends and are grateful for the time we had. We also dedicate this paper to Jean-Jacques' family, especially Colette, Cécile, Bénédicte, Bertrand, and Charlotte, and his childhood friend, Michel Sistac. Preliminary versions under the title "Nonparametric Identification of Incentive Regulation Models" were presented at Johns Hopkins University, Pennsylvania State University, the SITE Meeting, the European University Institute in Firenze, Université de Toulouse, Columbia University, New York University, Princeton University, Northwestern University, and Yale University. We thank P. Courty, A. Estache, P. Grieco, J. Harrington, D. Martimort, A. Pakes, R. Porter, M. Whinston, and the participants at seminars and conferences for helpful discussions. We are grateful to a co-editor and three referees for detailed comments that have significantly improved our paper. Financial support from the National Science Foundation under Grant SES-0452154 is gratefully acknowledged. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2 This model is also associated with false moral hazard, as observation of the contractual variable allows the principal to disentangle imperfectly the agent's type and effort level. An alternative approach relies on the obedience principle, in which the principal induces the agent to produce a given level of effort. See Myerson (1982).
© 2011 The Econometric Society
DOI: 10.3982/ECTA6954
The Laffont and Tirole (1986) model is a control problem under incomplete information in which the payment to the agent is based on an agent's performance indicator observed ex post by the principal; see Laffont (1994). In the case of procurement, the performance indicator can take several forms such as cost, service quality, or output level.3 In the case of regulation, Thomas (1995) analyzed the control of water pollution emission by industries with incentive contracts, while Gagnepain and Ivaldi (2002) studied regulation of public transit with cost-sharing contracts. The Laffont and Tirole (1986) model has broader impacts than procurement and regulation. For example, the use of performance-based contracts is a standard practice in labor, as further discussed in the conclusion.

Despite fundamental theoretical developments, data availability, and the wide use of contracts in various sectors such as agriculture, corporate finance, credit markets, insurance, and retailing, few empirical studies analyzing contract data rely on a structural model. Until recently, the empirical literature has mostly adopted a reduced form approach, as surveyed by Chiappori and Salanié (2003) for the analysis of contracts in general and by Joskow and Rose (1989) for regulation. Some notable exceptions incorporating aspects of optimal contracting are Paarsch and Shearer (2000), d'Haultfoeuille and Février (2007), and Gayle and Miller (2008) for labor contracts; Ivaldi and Martimort (1994), Miravete (2002), Crawford and Shum (2007), and Perrigne and Vuong (2010) for nonlinear pricing; Einav, Jenkins, and Levin (2008) for credit markets; Ho, Ho, and Mortimer (2011) for retailing; and Wolak (1994), Thomas (1995), Gagnepain and Ivaldi (2002), and Brocas, Chan, and Perrigne (2006) for regulation. This situation did not meet theorists' expectations. For instance, Laffont and Tirole (1993, p. 669) concluded that "econometric analyses are badly needed in the area," while they "do wish that such a core of empirical analysis will develop in the years to come." Such expectations have not been met because asymmetric information models lead to complex econometric models whose estimation requires suitable econometric tools.4 Moreover, identification needs to be addressed. Although the analyst may end up

3 In 2003, the World Bank launched a program to favor payment schemes based on achievements for various services and utilities such as phone, water, electricity, road maintenance, health care, and education, to name a few, across all continents; see Brook and Smith (2001) and International Development Association (2006). In the United States, the General Accounting Office (2002) strongly encourages performance-based contracts for federal procurements, with the use of cost-sharing and other contractual arrangements providing incentives for cost effectiveness. In 2001, it was reported that 11% of service contracts in the United States were performance-based, totaling $28.6 billion. Such contracts are also used in several states for a wide range of services including social, health, and family services. The performance indicator can also assess energy efficiency, as discussed by Singh, Limaye, Henderson, and Shi (2010).
4 Einav, Jenkins, and Levin (2008) and Ho, Ho, and Mortimer (2011) derived moment inequalities from optimal contracting to estimate their models using Pakes, Porter, Ho, and Ishii (2006).
specifying a parametric model as in the above papers, studying nonparametric identification is valuable for thinking carefully about which information in the data allows identification of each unknown function. Another important related question is how to characterize the restrictions imposed by the model on observables so as to test the model's validity. Without such restrictions, the model could rationalize any data. In this paper, we adopt a nonparametric approach in the spirit of Guerre, Perrigne, and Vuong (2000) to address identification of the Laffont and Tirole (1986) procurement model. Other contract models can be identified following similar ideas, although some adjustments will be needed because of the specificity of each model. In particular, as in auctions, the one-to-one mapping between the agent's unobserved type and the observed contractual variable is key to identifying the model. Moreover, given the numerous functions that typically need to be recovered in a contract model, it is important to exploit all the information provided by the model and the data, including the principal's objective. Nonetheless, our approach permits some deviations from contract optimality that can arise from behavioral, legal, and institutional constraints. We first adapt the Laffont and Tirole (1986) model by considering a private good with random demand. The contract design then considers expected demand, while the firm has to fulfill the realized demand. We verify contract implementation and second-order conditions. We then derive the corresponding structural econometric model. The structural elements are the demand function, the cost function, which depends on the unobserved agent's type and effort, the cost of public funds, the effort disutility function, and the agent's type distribution. The observables are the (ex post) demand, the (ex post) cost, the price, and the payment to the agent. Because the observed payment may differ from the optimal one, we introduce an additive random function to the transfer that captures deviations from the optimal payment due to constraints faced by the principal that arise from politics, legal systems, and external environments. The main difficulty for identification arises from the unobservability of the agent's effort and type. We first show that the model is nonparametrically identified given the cost of public funds. Identification relies on the bijective mapping between the price and the agent's type. The relation with auction models is clear: the bidder's private value and his bid are analogous to the agent's type and the price in a contract. Next, we derive all the restrictions imposed by the model on observables and address the nonparametric identification of the cost of public funds using an instrument. Our nonparametric identification result represents a step toward the development of the econometrics of contract models. Our results also offer a new approach to the estimation of cost efficiency that explicitly incorporates the effects of information asymmetry.5

5 See Park, Sickles, and Simar (1998) for a survey of the classical literature on production frontier models with recent developments in a semiparametric setting.
The paper is organized as follows. Section 2 presents the Laffont and Tirole (1986) model with stochastic demand, its implementation through linear contracts, and the verification of second-order conditions. Section 3 addresses its nonparametric identification given the cost of public funds. Section 4 derives the model restrictions on observables, addresses the nonparametric identification of the cost of public funds, and discusses some extensions. Section 5 concludes. The results of Sections 3 and 4 are proved in the Appendix, while those of Section 2 are proved in the Supplemental Material (Perrigne and Vuong (2011a)).

2. THE MODEL

In this section, we extend the Laffont and Tirole (1986) procurement model to a monopolist producing a private good subject to general random demand and cost functions.6 The demand for the private good and the cost for producing it are
\[
y = y(p, \varepsilon_d), \qquad c = (\theta - e)c_o(y, \varepsilon_c),
\]
where $y$ is the quantity of private good with a price per unit $p$, $c$ is the realized cost, $\theta$ represents the firm's (inefficiency) type, $e$ is the level of effort exerted by the firm, and $(\varepsilon_d, \varepsilon_c)$ are the random demand and cost shocks, respectively. As usual, $\theta$ and $e$ are private information to the firm, where $\theta$ is the (scalar) adverse selection parameter known to be distributed as $F(\cdot)$. The random shocks $(\varepsilon_d, \varepsilon_c)$ are known to be jointly distributed as $G(\cdot, \cdot)$. Note that in Laffont and Tirole (1986), $\varepsilon_c$ is independent of $\theta$, while $\varepsilon_d$ is voided because the demand $y$ is fixed. Laffont and Tirole (1986) considered a constant marginal cost function, namely $c = (\theta - e)y + \varepsilon_c$, while Laffont and Tirole (1993, p. 171) considered the cost function $c = H(\theta - e)c_o(y) + \varepsilon_c$, which is, except for the additive separability of $\varepsilon_c$, the one we consider with $H(\cdot)$ the identity function; see Perrigne and Vuong (2011b) for the nonidentification of $H(\cdot)$. The function $c_o(\cdot, \cdot)$ is the baseline cost function, while $\theta - e$ is interpreted as the firm's cost inefficiency. We make the following assumption.

ASSUMPTION A1: The random demand and cost functions satisfy $y(\cdot, \cdot) \ge 0$, $c_o(\cdot, \cdot) \ge 0$, and $\theta - e > 0$. Moreover, the random shocks $(\varepsilon_d, \varepsilon_c)$ are independent of $\theta$, which is distributed as $F(\cdot)$ with density $f(\cdot) > 0$ on its support $[\underline{\theta}, \bar{\theta}]$, $\underline{\theta} < \bar{\theta}$.

6 This section owes much to Jean-Jacques Laffont's comments and arose from a paper we wrote for a course he taught in Spring 2003. Further extensions of the model with investment and a weighted objective function for the principal are discussed in Section 4.3.
The principal offers a price schedule $p(\cdot) \ge 0$ based on the firm's announcement $\tilde{\theta}$ about its true type $\theta$, as well as a net transfer function $t(\cdot, \cdot)$. The realized cost $c$ is paid by the principal, so that $t = t(\tilde{\theta}, c)$ is the net transfer. In the Laffont and Tirole (1986) model, ex post observability of a firm's performance indicator, such as its cost, is used in the contract design to improve efficiency relative to the Baron and Myerson (1982) model in which no ex post information is used. Such an observed performance indicator leads to false moral hazard. In the context of regulation, Joskow and Schmalensee (1986) and Baron (1989) argued that cost observability by the regulator is a reasonable assumption, as firms submit annual audits of their financial results and costs to regulatory commissions. The random shocks $(\varepsilon_d, \varepsilon_c)$ are realized ex post, that is, after contractual arrangements are made between the principal and the firm. Consequently, the contract is designed ex ante based on expected values with respect to $(\varepsilon_d, \varepsilon_c)$. On accepting the contract, the firm must satisfy the realized demand $y = y[p(\tilde{\theta}), \varepsilon_d]$ at the price $p(\tilde{\theta})$ that corresponds to its announcement $\tilde{\theta}$. The principal and the firm are both risk neutral. We assume that all functions are at least twice continuously differentiable and that integration and differentiation can be interchanged. Whenever $a(\cdot)$ is a function of more than one variable, we denote its derivative with respect to the $k$th argument by $a_k(\cdot)$.

2.1. The Firm's Problem

Given price $p(\cdot)$ and payment $t(\cdot, \cdot)$, the utility for a firm with type $\theta$ when it announces $\tilde{\theta}$ and exerts effort $e$ with monetary cost $\psi(e)$ is

(1)   $U(\tilde{\theta}, \theta, e, \varepsilon_d, \varepsilon_c) = t\bigl(\tilde{\theta}, (\theta - e)c_o(y[p(\tilde{\theta}), \varepsilon_d], \varepsilon_c)\bigr) - \psi(e)$

The firm's optimization problem is

(F)   $\max_{\tilde{\theta}, e} E[U(\tilde{\theta}, \theta, e, \varepsilon_d, \varepsilon_c)|\theta] = \int U(\tilde{\theta}, \theta, e, \varepsilon_d, \varepsilon_c)\,dG(\varepsilon_d, \varepsilon_c)$

where the independence between $\theta$ and $(\varepsilon_d, \varepsilon_c)$ is used. This problem can be solved in two steps. In the first step, the effort level $e$ is chosen optimally given $(\tilde{\theta}, \theta)$:

(FE)   $\max_{e} E[U(\tilde{\theta}, \theta, e, \varepsilon_d, \varepsilon_c)|\theta] = \int U(\tilde{\theta}, \theta, e, \varepsilon_d, \varepsilon_c)\,dG(\varepsilon_d, \varepsilon_c)$

This gives $e = e(\tilde{\theta}, \theta)$, which solves the first-order condition (FOC)

(2)   $0 = E[U_3(\tilde{\theta}, \theta, e, \varepsilon_d, \varepsilon_c)|\theta] = \int U_3(\tilde{\theta}, \theta, e, \varepsilon_d, \varepsilon_c)\,dG(\varepsilon_d, \varepsilon_c)$
that is, using (1), $e = e(\tilde{\theta}, \theta)$ solves

(3)   $\displaystyle\int t_2\bigl(\tilde{\theta}, (\theta - e)c_o(y[p(\tilde{\theta}), \varepsilon_d], \varepsilon_c)\bigr)\,c_o\bigl(y[p(\tilde{\theta}), \varepsilon_d], \varepsilon_c\bigr)\,dG(\varepsilon_d, \varepsilon_c) = -\psi'(e)$

Denote the corresponding expected utility by

(4)   $\bar{U}(\tilde{\theta}, \theta) \equiv E\bigl[U(\tilde{\theta}, \theta, e(\tilde{\theta}, \theta), \varepsilon_d, \varepsilon_c)|\theta\bigr] = \displaystyle\int U(\tilde{\theta}, \theta, e(\tilde{\theta}, \theta), \varepsilon_d, \varepsilon_c)\,dG(\varepsilon_d, \varepsilon_c)$
In the second step, the firm solves $\max_{\tilde{\theta}} \bar{U}(\tilde{\theta}, \theta)$, giving $\tilde{\theta} = \tilde{\theta}(\theta)$, which solves the FOC $\bar{U}_1(\tilde{\theta}, \theta) = 0$.

2.2. Incentive Constraint

We now consider the incentive constraint (IC) arising from the firm telling the truth, that is, $\tilde{\theta}(\theta) = \theta$ for any $\theta \in [\underline{\theta}, \bar{\theta}]$. Thus, $\bar{U}_1(\theta, \theta) = 0$ for any $\theta$. Equivalently, denoting $U(\theta) \equiv \bar{U}(\theta, \theta)$ and $e(\theta) \equiv e(\theta, \theta)$, and using $U'(\theta) = \bar{U}_1(\theta, \theta) + \bar{U}_2(\theta, \theta)$, we obtain
\begin{align*}
U'(\theta) = \bar{U}_2(\theta, \theta) &= \int U_2(\theta, \theta, e(\theta), \varepsilon_d, \varepsilon_c)\,dG(\varepsilon_d, \varepsilon_c) + e_2(\theta, \theta)\int U_3(\theta, \theta, e(\theta), \varepsilon_d, \varepsilon_c)\,dG(\varepsilon_d, \varepsilon_c) \\
&= \int U_2(\theta, \theta, e(\theta), \varepsilon_d, \varepsilon_c)\,dG(\varepsilon_d, \varepsilon_c) \\
&= \int t_2\bigl(\theta, (\theta - e(\theta))c_o(y[p(\theta), \varepsilon_d], \varepsilon_c)\bigr)\,c_o\bigl(y[p(\theta), \varepsilon_d], \varepsilon_c\bigr)\,dG(\varepsilon_d, \varepsilon_c)
\end{align*}
where the third equality follows from (2), since $e(\theta) = e(\theta, \theta)$, and the fourth equality follows from (1). Hence, using (3) at $\tilde{\theta} = \theta$ and $e = e(\theta) = e(\theta, \theta)$ gives the incentive constraint

(5)   $U'(\theta) = -\psi'(e)$
where

(6)   $U(\theta) = \displaystyle\int t\bigl(\theta, (\theta - e(\theta))c_o(y(p(\theta), \varepsilon_d), \varepsilon_c)\bigr)\,dG(\varepsilon_d, \varepsilon_c) - \psi(e(\theta))$

(7)   $e(\theta) = \arg\max_{e}\left\{\displaystyle\int t\bigl(\theta, (\theta - e)c_o(y(p(\theta), \varepsilon_d), \varepsilon_c)\bigr)\,dG(\varepsilon_d, \varepsilon_c) - \psi(e)\right\}$

The latter reduces moral hazard to a "false" moral hazard problem.

2.3. The Principal's Problem

The principal chooses the price schedule and the transfer $[p(\cdot), t(\cdot, \cdot)]$. Suppose that $[p(\cdot), t(\cdot, \cdot)]$ is such that (i) it is truth telling, that is, (5) is satisfied and the firm exerts optimal effort $e = e(\theta)$, and (ii) the firm participates for any level of its type $\theta$. Given that the good is private, the ex post social welfare (SW) when $\theta$ is the firm's true type is
\begin{align*}
SW(\theta, \varepsilon_d, \varepsilon_c) ={} & \int_{p(\theta)}^{\infty} y(v, \varepsilon_d)\,dv + (1 + \lambda)\Bigl[p(\theta)y(p(\theta), \varepsilon_d) \\
& - t\bigl(\theta, (\theta - e(\theta))c_o(y(p(\theta), \varepsilon_d), \varepsilon_c)\bigr) - (\theta - e(\theta))c_o\bigl(y(p(\theta), \varepsilon_d), \varepsilon_c\bigr)\Bigr] \\
& + t\bigl(\theta, (\theta - e(\theta))c_o(y(p(\theta), \varepsilon_d), \varepsilon_c)\bigr) - \psi(e(\theta))
\end{align*}
where $\lambda > 0$ is the cost of public funds, as the payment to the firm requires raising taxes, which are costly to society.7 Using the independence of $\theta$ and $(\varepsilon_d, \varepsilon_c)$, its expectation is
\begin{align*}
(8)\quad \int\!\!\int SW(\theta, \varepsilon_d, \varepsilon_c)\,dG(\varepsilon_d, \varepsilon_c)\,dF(\theta)
= \int_{\underline{\theta}}^{\bar{\theta}} \biggl[\int \Bigl(\int_{p(\theta)}^{\infty} y(v, \varepsilon_d)\,dv + (1 + \lambda)\bigl\{p(\theta)y(p(\theta), \varepsilon_d) \\
- \psi(e(\theta)) - (\theta - e(\theta))c_o\bigl(y(p(\theta), \varepsilon_d), \varepsilon_c\bigr)\bigr\}\Bigr)\,dG(\varepsilon_d, \varepsilon_c) - \lambda U(\theta)\biggr]\,dF(\theta)
\end{align*}
1506
I. PERRIGNE AND Q. VUONG
where we have used (6) to rewrite the expected transfer as U(θ) + ψ(e(θ)). Therefore, the principal’s optimization problem is max SW(θ εd εc ) dG(εd εc ) dF(θ) (P) [p(·)e(·)U(·)]
subject to the incentive and participation constraints
(9)
U (θ) = −ψ (e(θ))
(10)
U(θ) ≥ 0
for all θ ∈ [θ θ], where e(·) is given by (7). Without loss of generality, the optimization problem (P) includes e(·), since this function is determined by p(·) and U(·) through (6) and (7). In view of (9), U (θ) < 0 because ψ (·) > 0 (see A2 below). Hence, the participation constraint (10) can be written as U(θ) ≥ 0 or (11)
U(θ) = 0
because the expected social welfare (8) decreases with U(·). This suggests the simpler optimization problem max SW(θ εd εc ) dG(εd εc ) dF(θ) (P ) [p(·)e(·)U(·)]
subject to (9) and (11) only. In Section 2.4, we verify that there exists a trans∗ fer t ∗ (· ·) that satisfies (6) and (7) for the solution [p∗ (·) e∗ (·) U (·)] of the optimization problem (P ).
We denote the expected demand at price p by y(p) ≡ E[y(p εd )] = y(p εd ) dG(εd ), where G(·) is the marginal distribution of εd . Let E[a(εd εc θ)] = a(εd εc θ) dG(εd εc ) denote the expectation of a function a(· · ·) with respect to (εd εc ) for fixed θ or conditional on θ in view of the independence of θ and (εd εc ). PROPOSITION 1: Given A1, the price p∗ (·) and effort e∗ (·) that solve the FOC of the optimization problem (P ) satisfy (12)
c(p) 1 p−m =μ p η(p) ˜
(13)
ψ (e) = c o (p) − μ
F(θ) ψ (e) f (θ)
IDENTIFICATION OF A CONTRACT MODEL
1507
˜ = −py (p)/y(p), and where p = p∗ (θ), e = e∗ (θ), μ = λ/(1 + λ), η(p) E[(θ − e)co1 (y(p εd ) εc )y1 (p εd )] E[y1 (p εd )] c o (p) = E co (y(p εd ) εc ) c(p) = m
Note that $\tilde{\eta}(p)$ is the elasticity of the expected demand $\bar{y}(p)$, which differs from the expected elasticity of demand $\eta(p) \equiv E[-py_1(p, \varepsilon_d)/y(p, \varepsilon_d)]$. Moreover, $\widetilde{mc}(p)$ differs from the expected marginal cost $mc(p) \equiv E[(\theta - e)c_{o1}(y(p, \varepsilon_d), \varepsilon_c)]$ for producing one additional unit at price $p$. On the other hand, $\bar{c}_o(p)$ is the expected baseline cost at price $p$, or also the expected cost saving for one additional unit of effort. Thus, (12) can be viewed as generalized Ramsey pricing, while (13) leads to a downward distortion in effort due to the second term arising from asymmetric information, as $\psi''(\cdot) > 0$ (see A2 below). When $\theta = \underline{\theta}$, (13) gives $\psi'(e) = \bar{c}_o(p)$ and the first-best is achieved for the most efficient firm $\underline{\theta}$.8 It remains to determine the optimal firm's rent $U^*(\theta)$ by integrating the incentive constraint (9) from $\theta$ to $\bar{\theta}$ subject to the participation constraint (11), which pins down the constant of integration at $U^*(\bar{\theta}) = 0$. This gives

(14)   $U^*(\theta) = \displaystyle\int_{\theta}^{\bar{\theta}} \psi'[e^*(\beta)]\,d\beta$

which is strictly positive whenever $\theta < \bar{\theta}$, since $\psi'(\cdot) > 0$ (see A2 below).

2.4. Implementation

PROPOSITION 2: Given A1, consider the transfer

(15)   $t^*(\tilde{\theta}, c) = A(\tilde{\theta}) - \psi'[e^*(\tilde{\theta})]\left\{\dfrac{c}{\bar{c}_o[p^*(\tilde{\theta})]} - \bigl(\tilde{\theta} - e^*(\tilde{\theta})\bigr)\right\}$
˜ + ˜ = ψ[e∗ (θ)] A(θ)
θ˜
θ
˜ + U ∗ (θ) ˜ ψ [e∗ (β)] dβ = ψ[e∗ (θ)]
Thus, announcing its true type θ and exerting the optimal effort e∗ (θ) satisfy the ∗ FOC of problem (F). Moreover, [p∗ (·) t ∗ (· ·) e∗ (·) U (·)] solves the FOC of problem (P). 8 When demand is not random, that is, εd has a degenerate distribution so that y(p εd ) = y(p), c(p) = mc(p) and η(p) we have m ˜ = η(p) so that (12) and (13) reduce to the FOC in Laffont and Tirole (1986) with a constant marginal cost function and an additive random shock, namely c = (θ − e)y + εc .
From (15) and (16), the transfer is equal to the cost of effort plus the firm's expected rent minus a fraction of the cost overrun, which is the discrepancy between the realized cost and the expected cost. Thus (15) can be viewed as a menu of linear cost-sharing contracts. Moreover, $\psi'[e^*(\underline{\theta})] = \bar{c}_o[p^*(\underline{\theta})]$, so that the slope coefficient in (15) equals $-1$ when $\tilde{\theta} = \underline{\theta}$. Hence, the most efficient firm chooses a fixed-price contract.
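To make the fixed-price property concrete (a one-line check we add here; it is immediate from (15)), differentiate the transfer with respect to realized cost:
\[
\frac{\partial t^*(\tilde{\theta}, c)}{\partial c} = -\frac{\psi'[e^*(\tilde{\theta})]}{\bar{c}_o[p^*(\tilde{\theta})]},
\]
which equals $-1$ at $\tilde{\theta} = \underline{\theta}$ since $\psi'[e^*(\underline{\theta})] = \bar{c}_o[p^*(\underline{\theta})]$; the principal's total payment $t^* + c$ is then invariant to realized cost, that is, a fixed-price contract.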
2.5. Second-Order Conditions

Following Laffont and Tirole (1986), we verify ex post that our solution satisfies the second-order conditions (SOC) for a local maximum and that these SOC extend globally.9 Let $V(p, \varepsilon_d) = \int_{p}^{\infty} y(v, \varepsilon_d)\,dv + (1 + \lambda)py(p, \varepsilon_d)$ be the social value of producing $y(p, \varepsilon_d)$. The expected social value is then $\bar{V}(p) \equiv \int V(p, \varepsilon_d)\,dG(\varepsilon_d) = \int_{p}^{\infty} \bar{y}(v)\,dv + (1 + \lambda)p\bar{y}(p)$.

ASSUMPTION A2: The demand, cost, and effort functions satisfy the following inequalities:
(i) $\bar{V}(p) > 0$, $\bar{V}'(\cdot) < 0$, $\bar{V}''(\cdot) < 0$.
(ii) $\bar{c}_o(\cdot) > 0$, $\bar{c}_o'(\cdot) < 0$, $\bar{c}_o''(\cdot) \ge 0$.
(iii) $\psi(\cdot) \ge 0$, $\psi'(\cdot) > 0$, $\psi''(\cdot) > 0$, $\psi'''(\cdot) \ge 0$.

Assumption A2(i) is satisfied if the expected demand is not too inelastic and if the expected demand is not too convex, while A2(ii) and (iii) are standard. We begin with the firm's problem (F). In particular, for any $(\tilde{\theta}, \theta)$, we consider the optimization problem (FE) with respect to $e$.

LEMMA 1: Suppose that the transfer $t(\cdot, \cdot)$ is weakly decreasing and concave in realized cost $c$. Given A1 and A2, the effort $e(\tilde{\theta}, \theta)$, which solves the FOC (3), is uniquely defined and corresponds to a global maximum of the problem (FE). Moreover, $0 \le e_2(\tilde{\theta}, \theta) < 1$.

Because $t^*(\cdot, \cdot)$ is weakly decreasing and concave in realized cost $c$, Lemma 1 applies. Second, the local SOC for $\tilde{\theta} = \theta$ to be a local maximum is $\bar{U}_{11}(\theta, \theta) \le 0$, which is equivalent to $\bar{U}_{12}(\theta, \theta) \ge 0$, where $\bar{U}_{12}(\theta, \theta) = -\psi''[e(\theta, \theta)]e_1(\theta, \theta)$. Thus, the local SOC is equivalent to $e_1(\theta, \theta) \le 0$, that is, $e'(\theta) \le e_2(\theta, \theta)$, since $e'(\theta) = e_1(\theta, \theta) + e_2(\theta, \theta)$. Hence, when the transfer $t(\cdot, \cdot)$ is weakly decreasing and concave in $c$ as in (15), Lemma 1 implies that a sufficient condition for the local SOC for truth telling is $e'(\cdot) \le 0$.

9 The following assumptions are sufficient but not necessary. In Section 4.1, we provide necessary and sufficient assumptions for the second-order conditions and implementation to hold.
ASSUMPTION A3: For any $\theta \in [\underline{\theta}, \bar{\theta}]$, the following inequalities hold:
(i) $\psi''[e^*(\theta)]\bar{V}''[p^*(\theta)] + (1 + \lambda)\{\bar{c}_o'[p^*(\theta)]\}^2 < 0$.
(ii) $d[F(\theta)/f(\theta)]/d\theta \ge 0$, that is, $F(\cdot)$ is log concave.

Assumption A3(i) is reminiscent of Assumption 1(iii) in Laffont and Tirole (1986).

LEMMA 2: Given A1–A3, transfer $t^*(\cdot, \cdot)$, and price $p^*(\cdot)$, the local SOC for truth telling are satisfied as $e^{*\prime}(\cdot) < 0$. Moreover, $p^{*\prime}(\cdot) > 0$. Effort $e^*(\cdot)$ and price $p^*(\cdot)$ are strictly decreasing and increasing in $\theta$, respectively.

It remains to show that $\tilde{\theta} = \theta$ provides a global maximum of the firm's utility (4).

PROPOSITION 3: Given A1–A3, transfer $t^*(\cdot, \cdot)$, and price $p^*(\cdot)$, truth telling provides the global maximum of $\bar{U}(\tilde{\theta}, \theta)$. Moreover, the expected transfer $\bar{t} \equiv E[t^*(\theta, (\theta - e^*(\theta))c_o[y(p^*(\theta), \varepsilon_d), \varepsilon_c])]$ is strictly decreasing in the firm's cost inefficiency $\theta - e^*(\theta)$.

Proposition 3 ensures that the principal can use the menu of contracts defined by (15). Because $\theta - e^*(\theta)$ is increasing in $\theta$, the expected payment is decreasing in the firm's type.

3. IDENTIFICATION OF $[y, c_o, \psi, F, G, m]$

In this section, we specify the econometric model for the observables, taking into account observed heterogeneity. We study the identification of the functions $[y, c_o, \psi, F, G, m]$ given $\lambda$ from the distribution of observables. An additional function $m(\cdot)$, which captures systematic deviations from the optimal transfer, is introduced in Section 3.1.

3.1. The Econometric Model

The observables are $(Y, C, P, T)$, respectively, the quantity of good, the production cost, the price per unit of product, and the transfer/payment, as well as a vector of exogenous variables $Z \in \mathcal{Z} \subset \mathbb{R}^d$ that characterizes the principal, firm, and/or market. Hereafter, we use capital letters to distinguish random variables from their realizations. Hence, the demand, baseline cost, and effort disutility functions are defined as $y(p, z, \varepsilon_d)$, $c_o(y, z, \varepsilon_c)$, and $\psi(e, z)$ when $Z = z$. For instance, $Z$ may contain exogenous variables such as average income in demand and input prices in baseline cost. Similarly, the firm's type $\theta$ and the demand and cost shocks $(\varepsilon_d, \varepsilon_c)$ may depend on some firm's and market characteristics included in $Z$, leading to the conditional distributions $F(\cdot|z)$ and $G(\cdot, \cdot|z)$ for $\theta$ and $(\varepsilon_d, \varepsilon_c)$ given $Z = z$. We let $[\underline{\theta}(z), \bar{\theta}(z)]$ denote
the support of $F(\cdot|z)$. The cost of public funds $\lambda$ may also depend on $z$, that is, $\lambda = \lambda(z)$. For instance, the cost of public funds may depend on local economic conditions as in Perrigne (2002). From such dependencies on $z$, it follows that the price, transfer, and effort functions are of the form $p^*(\cdot, z)$, $t^*(\cdot, \cdot, z)$, and $e^*(\cdot, z)$. Assumptions A1–A3 are assumed to hold for every value of $Z \in \mathcal{Z}$.

In the context of procurement and regulation, two examples may fix ideas on data availability: water utilities studied by Wolak (1994) and Brocas, Chan, and Perrigne (2006), and public transit studied by Gagnepain and Ivaldi (2002), Perrigne (2002), and Perrigne and Surana (2004). In the first case, $Y$ represents the total amount of water (measured in hundreds of cubic feet (HCF)) delivered to residential consumers by a privately owned utility in California, $P$ is the price per HCF charged to consumers, including a fixed access fee, $C$ is the production cost, including energy and labor costs as well as other components such as chemicals and pipeline repair, and $T$ is based on the rate of return on the firm's capital.10 In the second case, $Y$ represents the number of passengers who take public transit operated by a firm in France, $P$ is the average bus fare paid by a passenger, $C$ is the operating cost, including energy and labor costs as well as various maintenance costs, and $T$ takes the form of a subsidy paid to the firm by the transportation authority.11 For other industries such as electricity, Joskow and Schmalensee (1986) indicated that utilities are required by regulatory commissions to make frequent reports on their cost, price, financial, and production variables.

Regarding the interpretation of $(\theta, e)$ and $\psi(\cdot)$, $\theta$ captures the firm's management and production skills, $e$ captures the firm's actions taken to reduce production costs, while $\psi(\cdot)$ is the opportunity cost for exerting effort. The random shocks $(\varepsilon_d, \varepsilon_c)$ could arise from macroeconomic variables, weather, and all other conditions that are unanticipated by the principal and the firm.

Our econometric model is based on the theoretical model of Section 2, but differs from it by allowing the observed transfer to deviate from the optimal transfer $T^* = t^*(\theta, C, Z)$ given in (15). Specifically, we consider that the observed transfer $T$ takes the form $T^* + \tilde{T}$, that is, it may deviate from the optimal $T^*$ by a random quantity $\tilde{T}$, which is uncorrelated with the firm's type $\theta$ given $Z$, namely $E[\tilde{T}|\theta, Z] = E[\tilde{T}|Z]$. Letting $\tilde{T} \equiv m(Z) + \varepsilon_t$, where $E[\varepsilon_t|Z] = 0$, $m(\cdot)$ represents the deterministic component that is common knowledge. The component $\varepsilon_t$ is a random term unknown to both parties. As noted by Joskow and Schmalensee (1986) and Joskow (2005), $m(\cdot)$ can capture possible systematic departures from the model of Section 2 due to behavioral, legal, and institutional constraints. It can also capture institutional weaknesses in developing

10 Wolak (1994) and Brocas, Chan, and Perrigne (2006) relied on the Besanko (1984) model in which capital is used as a contractual variable. In addition to the variables mentioned above, detailed information is provided on capital and its depreciation.
11 Gagnepain and Ivaldi (2002), Perrigne (2002), and Perrigne and Surana (2004) relied on the Laffont and Tirole (1986) model. Public transit is heavily subsidized. Moreover, capital plays a minor role, as buses are provided to the firm by the transportation authority.
countries as argued by Estache and Wren-Lewis (2009). These possible departures, however, do not question the optimality of the transfer $t^*(\theta, C, Z)$, as the principal–agent model of Section 2 is based on optimization at both levels of the hierarchy.12 We make the following assumption on the random elements $(\theta, \varepsilon_d, \varepsilon_c, \varepsilon_t)$.

ASSUMPTION B1: $\theta$ is independent of $(\varepsilon_d, \varepsilon_c)$ conditional on $Z$ and $E[\varepsilon_t|\theta, Z] = 0$.

The first part of B1 follows the theoretical model of Section 2. The condition $E[\varepsilon_t|\theta, Z] = 0$ holds under the normalization $E[\varepsilon_t|Z] = 0$ and under the independence of $\varepsilon_t$ from $\theta$ given $Z$. The introduction of $m(Z) + \varepsilon_t$ in the payment does not modify the agent's behavior. Although $\varepsilon_t$ enters (additively) in the firm's utility (1), it disappears from the firm's problem (F) given B1. Given $Z$, the term $m(Z)$ plays a similar role as a constant in the participation constraint and thus does not affect the FOC of Proposition 1.

The econometric model for the endogenous variables $(Y, C, P, T)$ given the exogenous variables $Z$ retains all the equations of the theoretical model with the exception of the transfer equation. Noting that $\widetilde{mc}(P, Z) = [(\theta - e)\bar{c}_o'(P, Z)]/\bar{y}'(P, Z)$ in (12) and combining (15) and (16) evaluated at $\tilde{\theta} = \theta$ gives the nonlinear nonparametric simultaneous equation model implicit in $P$ with random terms $(\theta, \varepsilon_d, \varepsilon_c, \varepsilon_t)$:

(17)   $Y = y(P, Z, \varepsilon_d)$

(18)   $C = (\theta - e)c_o(Y, Z, \varepsilon_c)$

(19)   $P\bar{y}'(P, Z) + \mu\bar{y}(P, Z) = (\theta - e)\bar{c}_o'(P, Z)$

(20)   $\psi'(e, Z) + \mu\,\dfrac{F(\theta|Z)}{f(\theta|Z)}\,\psi''(e, Z) = \bar{c}_o(P, Z)$

(21)   $T = \psi(e, Z) + \displaystyle\int_{\theta}^{\bar{\theta}(Z)} \psi'[e^*(\tilde{\theta}, Z), Z]\,d\tilde{\theta} - \psi'(e, Z)\left[\dfrac{C}{\bar{c}_o(P, Z)} - (\theta - e)\right] + m(Z) + \varepsilon_t$

where $\mu = \mu(Z)$, $P = p^*(\theta, Z)$, and $e = e^*(\theta, Z)$ solve (19) and (20), and a prime denotes differentiation with respect to the first argument of a function. Following Section 2, $\bar{y}(p, z)$ and $\bar{c}_o(p, z)$ in (19)–(21) are, conditional on $Z = z$, the expected demand at price $p$ and the expected baseline cost for producing
Optimization at the principal level is further discussed in Sections 4 and 5.
the random quantity $y(p, \varepsilon_d)$ at price $p$, that is,

(22)   $\bar{y}(p, z) = \displaystyle\int y(p, z, \varepsilon_d)\,dG(\varepsilon_d|z)$

(23)   $\bar{c}_o(p, z) = \displaystyle\int c_o[y(p, z, \varepsilon_d), z, \varepsilon_c]\,dG(\varepsilon_d, \varepsilon_c|z)$
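As a consistency check (our own restatement, not part of the original text), (19) is simply the generalized Ramsey rule (12) rearranged: writing (12) conditional on $Z = z$ and substituting $\tilde{\eta}(p, z) = -p\bar{y}'(p, z)/\bar{y}(p, z)$ and $\widetilde{mc}(p, z) = (\theta - e)\bar{c}_o'(p, z)/\bar{y}'(p, z)$,
\[
p - \frac{(\theta - e)\bar{c}_o'(p, z)}{\bar{y}'(p, z)} = -\mu\,\frac{\bar{y}(p, z)}{\bar{y}'(p, z)}
\quad\Longrightarrow\quad
p\bar{y}'(p, z) + \mu\bar{y}(p, z) = (\theta - e)\bar{c}_o'(p, z),
\]
after multiplying through by $\bar{y}'(p, z)$.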
The firm's type $\theta$ can be viewed as unobserved heterogeneity distributed according to $F(\cdot|\cdot)$. A complication in the above econometric model is that the effort $e$ is endogenous and unobserved.13 In short, the model structure is given by the vector of seven functions $[y, c_o, \psi, F, G, \lambda, m]$. The structural elements of the model are the cost of public funds $\lambda(\cdot)$, the deterministic transfer function $m(\cdot)$, the demand $y(\cdot, \cdot, \cdot)$, the baseline cost $c_o(\cdot, \cdot, \cdot)$, the effort disutility $\psi(\cdot, \cdot)$, the conditional type distribution $F(\cdot|\cdot)$, and the joint distribution $G(\cdot, \cdot, \cdot|\cdot)$ of the random terms $(\varepsilon_d, \varepsilon_c, \varepsilon_t)$ given $Z$. The identification problem is to assess whether these primitives can be uniquely recovered from the conditional distribution of $(Y, C, P, T)$ given $Z$. For definitions of identification in nonparametric contexts, see, for example, Roehrig (1988) and Prakasa Rao (1992).

3.2. Identification of $\psi(\cdot, \cdot)$, $F(\cdot|\cdot)$, and $m(\cdot)$ Given $\lambda(\cdot)$

In this subsection, we study the nonparametric identification of the effort disutility $\psi(\cdot, \cdot)$, the conditional distribution of the firm's type $F(\cdot|\cdot)$, and the deterministic transfer $m(\cdot)$, assuming that $\lambda(\cdot)$ is known. Identification of $\lambda(\cdot)$ is addressed in Section 4.

LEMMA 3: Let $\alpha = \alpha(\cdot)$, $\beta = \beta(\cdot) > 0$, and $\gamma = \gamma(\cdot)$ be some functions of $Z$. Consider two structures $S \equiv [y, c_o, \psi, F, G, \lambda, m]$ and $\tilde{S} \equiv [\tilde{y}, \tilde{c}_o, \tilde{\psi}, \tilde{F}, \tilde{G}, \tilde{\lambda}, \tilde{m}]$ that satisfy A1–A3 and B1, where $\tilde{y}(\cdot, \cdot, \cdot) = y(\cdot, \cdot, \cdot)$, $\tilde{c}_o(\cdot, \cdot, \cdot) = c_o(\cdot, \cdot, \cdot)/\beta$, $\tilde{\psi}(\cdot, \cdot) = \psi[(\cdot - \alpha)/\beta, \cdot] - \gamma(\cdot)$, $\tilde{F}(\cdot|\cdot) = F[(\cdot - \alpha)/\beta|\cdot]$, $\tilde{G}(\cdot, \cdot, \cdot|\cdot) = G(\cdot, \cdot, \cdot|\cdot)$, $\tilde{\lambda}(\cdot) = \lambda(\cdot)$, and $\tilde{m}(\cdot) = m(\cdot) + \gamma(\cdot)$. Then the structures $S$ and $\tilde{S}$ lead to the same conditional distribution of $(Y, C, P, T)$ given $Z$; that is, they are observationally equivalent.

As the proof of Lemma 3 indicates, the observational equivalence between $S$ and $\tilde{S}$ arises because the unknown firm's type $\theta$ can be linearly transformed into a new type $\tilde{\theta} = \alpha(z) + \beta(z)\theta$ for each value $z$ of $Z$. Thus, the firm's type $\theta$ is identified at best up to location and scale. Furthermore, any shift in $\psi(e, z)$ that depends on $z$ can be compensated by an appropriate shift of $m(z)$.

13 We note that the econometric model would be singular without the random term $\varepsilon_t$, because three unobserved random variables $(\theta, \varepsilon_d, \varepsilon_c)$ would determine four endogenous variables $(Y, C, P, T)$.
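To see the mechanics of this equivalence (a verification we add for the reader; the formal argument is in the proof), let $\tilde{\theta} = \alpha(z) + \beta(z)\theta$ and $\tilde{e} = \alpha(z) + \beta(z)e$, so that $\tilde{\theta} - \tilde{e} = \beta(z)(\theta - e)$. Then the realized cost and the disutility are
\[
(\tilde{\theta} - \tilde{e})\,\tilde{c}_o = \beta(\theta - e)\,\frac{c_o}{\beta} = (\theta - e)c_o = C,
\qquad
\tilde{\psi}(\tilde{e}, z) = \psi(e, z) - \gamma(z),
\]
and the shift $-\gamma(z)$ in the disutility terms of the transfer (21) is exactly offset by $\tilde{m}(z) = m(z) + \gamma(z)$, leaving the distribution of $(Y, C, P, T)$ given $Z$ unchanged.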
In view of Lemma 3, we must impose some location and scale assumptions. For instance, we can fix how two quantiles of $\theta$ vary with $z$, such as $\underline{\theta}(z)$ and $\bar{\theta}(z)$, which correspond to the most and least efficient firms, respectively. This corresponds to setting $\underline{\theta}(z) = \underline{\theta}_o(z)$ and $\bar{\theta}(z) = \bar{\theta}_o(z)$, where $\underline{\theta}_o(\cdot)$ and $\bar{\theta}_o(\cdot)$ are chosen functions. These assumptions, however, must be consistent with the model. In particular, they must satisfy $\theta - e^*(\theta, z) \ge 0$ for all $(\theta, z)$ so that $C = [\theta - e^*(\theta, Z)]c_o(Y, Z, \varepsilon_c) \ge 0$. To ensure $\theta - e^*(\theta, z) \ge 0$ for all $(\theta, z)$, we set the cost inefficiency of the most efficient firm to be 1 and the optimal effort of the least efficient firm to be 0. Regarding $\psi(\cdot, \cdot)$, a natural assumption is to impose $\psi(0, \cdot) = 0$, following Laffont and Tirole (1993), among others.

ASSUMPTION B2: For every value $z$ of $Z$,

(24)   $\underline{\theta}(z) - e^*[\underline{\theta}(z), z] = 1$,   $e^*[\bar{\theta}(z), z] = 0$,   and   $\psi(0, z) = 0$
Assumption B2 determines θ̲(z) and θ̄(z) through the optimal effort function e*(·, z). Moreover, because the optimal effort e*(θ, z) is strictly decreasing in θ, (24) implies that e*(θ, z) ≥ 0 and θ − e*(θ, z) ≥ 1 for every type, so that c_o(y, z, ε_c) is the cost frontier for producing y given (z, ε_c), while θ − e = [θ − e*(θ, z)]/[θ̲(z) − e*[θ̲(z), z]] is the cost inefficiency of a firm with type θ relative to the efficient firm with type θ̲(z).

We now turn to the nonparametric identification of the effort disutility ψ(·, ·) and the conditional type distribution F(·|·). The next lemma establishes that the expected demand y(·, ·) and the expected baseline cost c̄_o(·, ·) are identified nonparametrically from observations on quantity, price, and costs given the cost of public funds λ(·). Moreover, the firm's relative cost inefficiency can be uniquely recovered from these observables. Let Δ(p, z) = E[C|P = p, Z = z]/c̄_o(p, z) be the ratio of the expected cost over the expected baseline cost, and let [p̲(z), p̄(z)] denote the support of the conditional distribution G_{P|Z}(·|·) of P given Z.

LEMMA 4: Suppose that A1–A3, B1, and B2 hold, and that λ(·) is known. Then the expected demand y(·, ·) and the expected baseline cost c̄_o(·, ·) are uniquely determined by p̲(·) and the conditional means of (Y, C) given (P, Z) as

(25)  y(p, z) = E[Y|P = p, Z = z],

(26)  c̄_o(p, z) = E[C|P = p̲(z), Z = z] × exp( ∫_{p̲(z)}^{p} [p̃ y′(p̃, z) + μ y(p̃, z)] / E[C|P = p̃, Z = z] dp̃ ),

where μ = μ(z). Moreover, the relative cost inefficiency is identified as

(27)  θ − e*(θ, z) = Δ(p, z),
where p = p*(θ, z), and the function Δ(·, ·) satisfies Δ(·, ·) ≥ 1 and ∂Δ(·, ·)/∂p > 0.

The expected demand (25) is just the expectation of Y given (P, Z), despite a possible correlation between the demand shock ε_d and P through Z in the demand (17). However, the expectation of C (or log C) given (P, Z), as used in the estimation of production/cost frontiers (see, e.g., Gagnepain and Ivaldi (2002)), is not the expected baseline cost (26). For example, consider the typical cost specification log C = s(Y, Z) + ε_c + log(θ − e), where log(θ − e) ≥ 0 in view of (24). The composite error term ε_c + log(θ − e) is correlated with both Y = y(P, Z, ε_d) and Z. Instrumental variable (IV) or maximum likelihood (ML) methods estimate the cost frontier s(y, z), which is different from the expected baseline cost c̄_o(p, z) that is relevant in the FOCs (19) and (20) that determine price and effort; see Perrigne and Vuong (2009). Nevertheless, by exploiting the generalized Ramsey pricing rule in (19), (26) indicates that the expected baseline cost is identified from the expectations of Y and C given (P, Z) and the knowledge of p̲(·).¹⁴ Moreover, (27) shows that the firm's relative cost inefficiency θ − e*(θ, z) can be recovered from (p, z), as Δ(·, ·) is known from the expectation of C given (P, Z) and the identified expected baseline cost c̄_o(·, ·).

Using Lemma 4, the next result establishes the nonparametric identification of the effort disutility ψ(·, ·), the conditional type distribution F(·|·), and the deterministic transfer deviation m(·) from observations on (Y, C, P, T, Z) given λ(·). In the spirit of Guerre, Perrigne, and Vuong (2000), we exploit the bijective mapping between the price P and the firm's type θ from Lemma 2. The parallel with auction models becomes clear: in auction models, the unobserved private value can be expressed in terms of the corresponding observed bid, the bid distribution, and its density, from which one can identify the private value distribution. A similar strategy is used here. Let θ*(·, z) be the inverse of p*(·, z). Because θ = θ*(P, Z), we replace in (20) the ratio F(θ|Z)/f(θ|Z) by [G_{P|Z}(P|Z)/g_{P|Z}(P|Z)] × (∂θ*(P, Z)/∂p), where G_{P|Z}(·|·) is the conditional distribution of P given Z and g_{P|Z}(·|·) is its density. Using (20) and the identification of ψ(·, ·) from (21), we derive an expression for θ as a function of the observed price, its distribution, and its density. Thus, we identify the type distribution F(·|·). In particular, the unobserved firm's type θ can be recovered from the observed pair (p, z) once the various unknown functions have been recovered from the data.

¹⁴ This result is reminiscent of recovering the marginal cost in mark-up models; see Bresnahan (1989). The generalized Ramsey pricing rule plays a similar role as a mark-up equation, with the difference that recovering c̄_o(·, ·) is somewhat more complicated here because of the unobserved cost inefficiency θ − e.
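Although the paper is concerned with identification rather than estimation, (25)–(27) suggest direct sample analogues. The following Python sketch illustrates one way this could be implemented at a fixed covariate value z, with μ(z) treated as known; the kernel smoother, bandwidth, grid, and function names are our illustrative choices, not the authors' procedure.

```python
import numpy as np

def kreg(x0, x, y, h):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return (w * y).sum() / w.sum()

def lemma4_estimates(P, Y, C, mu, grid, h=0.05):
    """Sample analogue of (25)-(27) at a fixed z, so P, Y, C are the
    observations with Z = z. `grid` is an increasing price grid whose first
    point estimates the lower support point of P given z."""
    Ey = np.array([kreg(p, P, Y, h) for p in grid])   # (25): y(p, z)
    EC = np.array([kreg(p, P, C, h) for p in grid])   # E[C | P = p, Z = z]
    yprime = np.gradient(Ey, grid)                    # y'(p, z), finite differences
    integrand = (grid * yprime + mu * Ey) / EC        # Ramsey term in (26)
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid)  # trapezoid rule
    c_bar = EC[0] * np.exp(np.concatenate(([0.0], np.cumsum(steps))))  # (26)
    Delta = EC / c_bar                                # (27): theta - e*(theta, z)
    return Ey, c_bar, Delta
```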
For any value (p, z), we define

(28)  Γ(p, z) = − [∂E[T|P = p, Z = z]/∂p] / [∂Δ(p, z)/∂p],

(29)  R(p, z) = { μ [G_{P|Z}(p|z)/g_{P|Z}(p|z)] × [∂Γ(p, z)/∂p] × [∂Δ(p, z)/∂p] } / { Γ(p, z) − c̄_o(p, z) + μ [G_{P|Z}(p|z)/g_{P|Z}(p|z)] × [∂Γ(p, z)/∂p] }.
The functions Γ(·, ·) and R(·, ·) are known from the joint distribution of (Y, C, P, T) conditional on Z in view of Lemma 4. In particular, Γ(·, ·) can be interpreted as the marginal decrease in the expected transfer T due to a one-unit increase in relative cost inefficiency Δ. As (30) and (31) show, Γ(p, z) is also the marginal cost of effort, while R(p, z) is the marginal decrease in effort due to a one-unit increase in price. Let e*₊(·, z) = e*[θ*(·, z), z] and let p*₊(·, z) be the inverse of the optimal effort e*₊(·, z).

PROPOSITION 4: Suppose that A1–A3, B1, and B2 hold, and that λ(·) is known. Then the deterministic transfer deviation m(·) and the effort disutility ψ(·, ·) are uniquely determined by p̄(z) and the conditional mean of T given (P, Z) as

(30)  m(z) = E[T|P = p̄(z), Z = z]  and  ψ(e, z) = ∫₀^e Γ[p*₊(ẽ, z), z] dẽ,

where Γ(·, ·) > 0 and ∂Γ(·, ·)/∂p < 0, and e*₊(·, z) satisfies

(31)  e*₊(p, z) = ∫_p^{p̄(z)} R(p̃, z) dp̃

with ∂Δ(·, ·)/∂p > R(·, ·) > 0. Moreover, the conditional mean of T given (P, Z) and the conditional distribution of P given Z uniquely determine the conditional type distribution F(·|·) as F(θ|z) = G_{P|Z}[p*(θ, z)|z], where p*(·, z) is the inverse of

(32)  θ*(p, z) = Δ(p, z) + ∫_p^{p̄(z)} R(p̃, z) dp̃.
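The construction in (28)–(32) is likewise amenable to plug-in computation. Continuing the previous sketch (and reusing its kreg helper), the following hypothetical routine recovers the effort schedule, the type, and the effort disutility by numerical integration; it is a sketch under the same fixed-z simplification, not the authors' estimator.

```python
def proposition4_estimates(P, T, mu, grid, Ey, c_bar, Delta, h=0.05):
    """Plug-in version of (28)-(32) at a fixed z, reusing kreg and the output
    of lemma4_estimates (function and argument names are ours, not the paper's)."""
    ET = np.array([kreg(p, P, T, h) for p in grid])   # E[T | P = p, Z = z]
    dT, dDelta = np.gradient(ET, grid), np.gradient(Delta, grid)
    Gamma = -dT / dDelta                              # (28)
    dGamma = np.gradient(Gamma, grid)
    G = np.array([(P <= p).mean() for p in grid])     # G_{P|Z}: empirical CDF
    g = np.array([np.exp(-0.5 * ((p - P) / h) ** 2).mean() / (h * np.sqrt(2 * np.pi))
                  for p in grid])                     # g_{P|Z}: kernel density
    R = mu * (G / g) * dGamma * dDelta / (Gamma - c_bar + mu * (G / g) * dGamma)  # (29)
    seg = 0.5 * (R[1:] + R[:-1]) * np.diff(grid)
    tail = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))  # integral of R up to p_bar
    effort, theta = tail, Delta + tail                # (31) and (32)
    order = np.argsort(effort)                        # effort is 0 at p_bar and rises as p falls
    e_s, Gam_s = effort[order], Gamma[order]
    psi = np.concatenate(([0.0], np.cumsum(0.5 * (Gam_s[1:] + Gam_s[:-1]) * np.diff(e_s))))
    return Gamma, R, effort, theta, e_s, psi          # (30): psi tabulated on e_s
```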
The key to Proposition 4 is the bijection between the observed price P and the unobserved type θ given Z = z. Thus, conditioning on (P, Z) is equivalent to conditioning on (θ, Z). As a matter of fact, (31) and (32) show that the firm's type θ and effort e can be recovered from the observed pair (P, Z) associated with the firm's contract. In particular, while the minimal effort e*[θ̄(z), z] = 0 by (24), (31) implies that the maximal effort exerted by the most efficient firm given Z = z is

(33)  e*[θ̲(z), z] = ∫_{p̲(z)}^{p̄(z)} R(p̃, z) dp̃ > 0.

Similarly, the lower and upper bounds of the conditional type distribution F(·|z) are

(34)  θ̲(z) = 1 + ∫_{p̲(z)}^{p̄(z)} R(p̃, z) dp̃ > 1

and

(35)  θ̄(z) = ( E[C|P = p̄(z), Z = z] / E[C|P = p̲(z), Z = z] ) × exp( − ∫_{p̲(z)}^{p̄(z)} [p̃ y′(p̃, z) + μ y(p̃, z)] / E[C|P = p̃, Z = z] dp̃ ) > θ̲(z)
from (32), in view of (26)–(27) and Δ[p̲(z), z] = 1 from (24). The inequality in (35) follows from θ = θ*(p, z), where ∂θ*(p, z)/∂p = ∂Δ(p, z)/∂p − R(p, z) > 0.

3.3. Identification of y(·, ·, ·), c_o(·, ·, ·), and G(·, ·, ·|·) Given λ(·)

Lemma 4 establishes identification of the expected demand y(·, z) and the expected baseline cost c̄_o(·, z) for every price p ∈ [p̲(z), p̄(z)] and z ∈ Z. For counterfactual exercises or policy evaluations, one may need to identify the demand y(·, ·, ·), the baseline cost c_o(·, ·, ·), and the conditional distribution G(·, ·, ·|·) of (ε_d, ε_c, ε_t) given Z. This is the purpose of this subsection, assuming that λ(·) is known.

The demand shock ε_d and the cost shock ε_c do not enter additively in (17) and (18). If one is willing to consider a specification of the demand and baseline cost in which the random shocks ε_d and ε_c enter additively, so that y(p, z, ε_d) = δ(p, z) + ε_d and c_o(y, z, ε_c) = γ(y, z) + ε_c, identification becomes standard under the usual normalizations E[ε_d|Z] = 0 and E[ε_c|Z] = 0. Specifically, under B1, ε_d is independent of P given Z, implying E[ε_d|P, Z] = E[ε_d|Z] = 0. Hence E[Y|P = p, Z = z] = δ(p, z), showing that y(·, ·, ·) and G_{ε_d|Z}(·|·) are identified. Moreover, let the (random) baseline cost be C_o = c_o(Y, Z, ε_c) = C/(θ − e*(θ, Z)) = C/Δ(P, Z). Thus C_o can be recovered from (C, P, Z), as Δ(·, ·) is identified by Lemma 4. Now, using the control function approach to endogeneity (see Blundell and Powell (2003)), we have E[C_o|Y, P, Z] = γ(Y, Z) + E[ε_c|ε_d, P, Z] = γ(Y, Z) + ρ(ε_d, Z), where the second equality follows from the independence of ε_c and P given (ε_d, Z) from B1. Hence, E[C_o|Y, P, Z = z] is additively separable in Y and ε_d with E[ρ(ε_d, Z)|Z] = E[E(ε_c|ε_d, Z)|Z] = E[ε_c|Z] = 0, thereby showing that γ(·, ·) and hence c_o(·, ·, ·) are identified. See Hastie and Tibshirani (1990). The identification of G_{ε_c|ε_d,Z}(·|·, ·) follows, as sketched below.
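As an illustration of this control function argument, the sketch below implements the two regression steps with simple polynomial series approximations standing in for fully nonparametric estimators; the basis, degree, and function names are hypothetical choices on our part.

```python
import numpy as np

def polybasis(x, d=3):
    """Polynomial series basis in one variable (a crude sieve)."""
    return np.column_stack([x ** k for k in range(1, d + 1)])

def control_function_fit(P, Z, Y, Co):
    """Sketch of the additive case: Y = delta(P, Z) + eps_d and
    Co = gamma(Y, Z) + eps_c, with control function rho(eps_d, Z)."""
    # Step 1: regress Y on (P, Z) to estimate delta and recover the control eps_d.
    Xd = np.column_stack([np.ones_like(P), polybasis(P), polybasis(Z)])
    bd, *_ = np.linalg.lstsq(Xd, Y, rcond=None)
    eps_d = Y - Xd @ bd
    # Step 2: additively separable regression of Co on (Y, Z) and the control.
    Xc = np.column_stack([np.ones_like(Y), polybasis(Y), polybasis(Z), polybasis(eps_d)])
    bc, *_ = np.linalg.lstsq(Xc, Co, rcond=None)
    # The (Y, Z)-block of bc estimates gamma up to a constant; the normalization
    # E[rho(eps_d, Z) | Z] = 0 (not enforced explicitly here) pins that constant down.
    return bd, bc, eps_d
```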
As Matzkin (2003) argued, the structural specification of a random demand or cost function seldom leads to an additive error term in the relationship between the endogenous variable and the exogenous variables. Matzkin (2003) then showed that this relationship is nonidentified. For instance, consider the demand Y = y(P, Z, ε_d), where P = p*(θ, Z) is independent of ε_d given Z in view of B1. Clearly, y(·, ·, ·) is nonidentified, as a monotonic transformation of ε_d can be compensated by a transformation of y(·, ·, ·).¹⁵

We introduce some notation and further assumptions. Let [ε̲_d(z), ε̄_d(z)] × [ε̲_c(z), ε̄_c(z)] be the support of the distribution G_{ε_d,ε_c|Z}(·, ·|z) of (ε_d, ε_c) given Z = z. Similarly, let [y̲(z), ȳ(z)] and [y̲(p, z), ȳ(p, z)] denote the supports of the distributions G_{Y|Z}(·|z) and G_{Y|PZ}(·|p, z) of Y = y(P, Z, ε_d) given Z = z and (P, Z) = (p, z), respectively.

ASSUMPTION B3: (i) For all z ∈ Z and all (ε_d, ε_c) ∈ [ε̲_d(z), ε̄_d(z)] × [ε̲_c(z), ε̄_c(z)], there exist p_o(z) ∈ [p̲(z), p̄(z)] and y_o(z) ∈ [y̲(z), ȳ(z)] such that

(36)  y[p_o(z), z, ε_d] = ε_d  and  c_o[y_o(z), z, ε_c] = ε_c.
Moreover, the functions p_o(·) and y_o(·) are known. (ii) The demand and baseline cost y(p, z, ·) and c_o(y, z, ·) are strictly increasing in ε_d and ε_c, respectively, for all values (y, p, z), while the conditional distributions G_{ε_d|Z}(·|·) and G_{ε_c|ε_d,Z}(·|·, ·) are nondegenerate and strictly increasing in their first arguments. (iii) There exists p†(z, ε_d) ∈ [p̲(z), p̄(z)] such that y_o(z) = y[p†(z, ε_d), z, ε_d].

Assumptions B3(i) and (ii) follow Matzkin's (2003) first assumption, although B3(i) is slightly more general by allowing p_o and y_o to depend on z.¹⁶

¹⁵ Our problem differs from Matzkin's framework in two respects. A first question is whether the knowledge of y(·, ·) and c̄_o(·, ·) helps to identify y(·, ·, ·), c_o(·, ·, ·), and G_{ε_d,ε_c|Z}(·, ·|·). Lemma 5 in the Appendix shows that this is not the case. Second, our simultaneous equation model creates potential endogeneity, as Y may be correlated with ε_c in c_o(·, ·, ·). Matzkin (2008) studied nonparametric identification of nonlinear simultaneous equation models with nonadditive error terms assumed to be independent of Z. This assumption may not be satisfied in our case. Chesher (2003) and Imbens and Newey (2009) studied the nonparametric identification of marginal effects in triangular systems.

¹⁶ The second assumption in Matzkin (2003) relates to homogeneity of degree 1 in z and ε. Such a restriction seems natural for the baseline cost, which is homogeneous of degree 1 in input prices. See also Matzkin (1994). This restriction requires choosing some values or functions z_o, ε_{co}, and C_o, and a factor of homogeneity γ. Moreover, because Matzkin's proof relies on using γ = ε_c/ε_{co}, the conditional distribution of ε_c can be recovered along a specific value of z, that is, (ε_c, z)/ε_{co}. Thus G_{ε_d,ε_c|Z}(·, ·|·) cannot be identified everywhere.
The first part of B3(i) restricts the functional forms that can be considered for the demand and baseline cost. Without the second part of B3(i), identification of y(·, ·, ·) and c_o(·, ·, ·) in Proposition 5 would hold up to [p_o(·), y_o(·)]. Assumption B3(iii) is used only to establish Proposition 5(ii) and is discussed later.

The next result establishes nonparametric identification of y(·, ·, ·), c_o(·, ·, ·), and the distribution G(·, ·|·) of (ε_d, ε_c) given Z from the quantiles of quantity and baseline cost given (P, Z). Recall that the baseline cost C_o can be recovered as C/Δ(P, Z). Thus, its distribution G_{C_o|YPZ}(·|·, ·, ·) given (Y, P, Z) is identified. Formally, because Δ(P, Z) ≥ 1, we have G_{C_o|YPZ}(c_o|y, p, z) = G_{C|YPZ}[c_o Δ(p, z)|y, p, z] for any c_o, where G_{C|YPZ}(·|·, ·, ·) is the distribution of C given (Y, P, Z).

PROPOSITION 5: Suppose that A1–A3 and B1–B3 hold, and that λ(·) is known. (i) The demand y(·, ·, ·) and the distribution G_{ε_d|Z}(·|·) of ε_d given Z are uniquely determined by the conditional distribution G_{Y|PZ}(·|·, ·) as

(37)  y(p, z, ε_d) = G⁻¹_{Y|PZ}( G_{Y|PZ}[ε_d|p_o(z), z] | p, z ),

(38)  G_{ε_d|Z}(·|z) = G_{Y|PZ}[·|p_o(z), z].

(ii) The baseline cost c_o(·, ·, ·) and the distribution G_{ε_c|ε_d,Z}(·|·, ·) of ε_c given (ε_d, Z) are uniquely determined by the conditional distribution G_{C_o|YPZ}(·|·, ·, ·) as

(39)  c_o(y, z, ε_c) = G⁻¹_{C_o|YPZ}( G_{C_o|YPZ}[ε_c|y_o(z), p†(z, ε_d), z] | y, p, z ),

(40)  G_{ε_c|ε_d,Z}(·|ε_d, z) = G_{C_o|YPZ}[·|y_o(z), p†(z, ε_d), z],
where p†(·, ·) is identified and y = y(p, z, ε_d).

The proof of (i) follows Matzkin (2003), as P is independent of ε_d given Z from B1. Although Y = y(P, Z, ε_d) is not independent of ε_c given Z, the proof of (ii) is only slightly more involved, as it exploits B3(iii). The latter says that for any (z, ε_d) there exists a price p†(z, ε_d) for which the output y[p†(z, ε_d), z, ε_d] is equal to the reference output y_o(z) in B3(i). Assumption B3(iii) can be equivalently written as: the set ∩_{ε_d ∈ [ε̲_d(z), ε̄_d(z)]} {y = y(p, z, ε_d), p ∈ [p̲(z), p̄(z)]} is nonempty for every z ∈ Z.¹⁷

¹⁷ The latter can be relaxed, in which case we obtain only partial identification of c_o(·, ·, ·) and G_{ε_c|ε_d,Z}(·|·, ·). Specifically, from the proof of (ii), one obtains identification of G_{ε_c|ε_d,Z}(·|ε_d, z) for those values of (ε_d, z) for which there exists a price p†(z, ε_d) satisfying y_o(z) = y[p†(z, ε_d), z, ε_d]. Similarly, one obtains identification of c_o(y, z, ε_c) for those values of (y, z), where y = y(p, z, ε_d) and (z, ε_d) satisfies y_o(z) = y[p†(z, ε_d), z, ε_d].
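The quantile transforms in (37)–(40) have straightforward empirical counterparts. The following sketch illustrates (37)–(38) with kernel-weighted empirical conditional distribution and quantile functions; the weighting scheme and bandwidth are hypothetical, and a smoothed estimator would be used in practice.

```python
import numpy as np

def wcdf(y0, y, w):
    """Kernel-weighted empirical conditional CDF evaluated at y0."""
    return (w * (y <= y0)).sum() / w.sum()

def wquantile(tau, y, w):
    """Inverse of wcdf: weighted empirical conditional quantile."""
    order = np.argsort(y)
    cw = np.cumsum(w[order]) / w.sum()
    return y[order][min(np.searchsorted(cw, tau), len(y) - 1)]

def demand_at(p, z, eps_d, P, Z, Y, p_o, h=0.1):
    """Sketch of (37): a quantile-quantile transform of Y. Gaussian weights
    proxy the conditioning on (P, Z); p_o is the normalization price p_o(z)
    of Assumption B3(i), assumed known."""
    w_ref = np.exp(-0.5 * (((P - p_o) / h) ** 2 + ((Z - z) / h) ** 2))
    tau = wcdf(eps_d, Y, w_ref)       # (38): G_{Y|PZ}(eps_d | p_o(z), z)
    w = np.exp(-0.5 * (((P - p) / h) ** 2 + ((Z - z) / h) ** 2))
    return wquantile(tau, Y, w)       # (37): invert G_{Y|PZ}(. | p, z) at rank tau
```

Equation (39) admits the same construction applied to the recovered baseline cost C_o = C/Δ(P, Z), conditioning additionally on Y.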
Last, the distribution G_{ε_t|ε_d,ε_c,Z}(·|·, ·, ·) of ε_t given (ε_d, ε_c, Z) is identified from observations on (Y, C, P, T, Z). This follows from (21), since ε_t can be expressed as a function of (C, P, T, Z) that is identified by Lemma 4 and Proposition 4. For a simpler expression than (21), see (43) below. Moreover, because G_{ε_c|ε_d,Z}(·|·, ·) and G_{ε_d|Z}(·|·) are identified by Proposition 5, the joint distribution G(·, ·, ·|·) of (ε_d, ε_c, ε_t) given Z is identified.

4. MODEL RESTRICTIONS AND THE COST OF PUBLIC FUNDS

In this section, we first characterize all the model restrictions on observables. These restrictions can be used to test the validity of the model. Second, we address identification of λ(·). Our previous identification results can be used when the cost of public funds is known. Several studies provide macroeconomic estimates based on general equilibrium models. For instance, Ballard, Shoven, and Whalley (1985) found that the cost of public funds in the United States ranges from 0.17 to 0.56 per dollar, while Jorgenson and Yun (1991) found 0.46 per dollar. See also Walters and Auriol (2007) for a recent survey on such estimates in various countries. Microeconomic data can complement these studies. In particular, one can expect local economic conditions, such as the local unemployment rate used in Perrigne (2002), to affect the cost of public funds. We show that λ(·) is not identified. We then discuss some identifying conditions and derive a simple expression for λ(·). This section concludes with several extensions of our results.

4.1. Model Restrictions

We first define the random shocks as identified functions of the observables given λ(·). Specifically, from (37) and (39), ε_d and ε_c can be expressed as identified functions of (Y, P, Z) and (Y, C, P, Z), namely φ_d(Y, P, Z) and φ_c(Y, C, P, Z), respectively. From (21) and Proposition 4, ε_t can be expressed as an identified function of (Y, C, P, T, Z), namely φ_t(Y, C, P, T, Z). Their expressions are given in the next lemma.

LEMMA 6: Given A1–A3 and B1–B3, the random shocks (ε_d, ε_c, ε_t) are given by ε_d = φ_d(Y, P, Z), ε_c = φ_c(Y, C, P, Z), and ε_t = φ_t(Y, C, P, T, Z), where

(41)  φ_d(Y, P, Z) = G⁻¹_{Y|PZ}( G_{Y|PZ}(Y|P, Z) | p_o(Z), Z ),

(42)  φ_c(Y, C, P, Z) = [1/Δ(p†(Z, ε_d), Z)] × G⁻¹_{C|YPZ}( G_{C|YPZ}(C|Y, P, Z) | y_o(Z), p†(Z, ε_d), Z ),

(43)  φ_t(Y, C, P, T, Z) = T − E[T|P, Z] − { [∂E[T|P, Z]/∂p] / [∂E[C|P, Z]/∂p − (P y′(P, Z) + μ y(P, Z))] } × (C − E[C|P, Z]),

with Δ(P, Z) given by (27).
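Given estimates of the conditional means and of the expected demand, (43) is a simple residual construction. A minimal sketch, assuming the function arguments below (names are ours) have been estimated beforehand, for example by the routines sketched after Lemma 4 and Proposition 4:

```python
def eps_t_hat(T, C, P, ET, EC, dET, dEC, y_fun, yprime_fun, mu):
    """Sketch of (43) at a fixed z. ET, EC, dET, dEC, y_fun, yprime_fun are
    previously estimated functions of p (hypothetical names), e.g. splines
    fitted to the outputs of the earlier sketches."""
    K = dET(P) / (dEC(P) - (P * yprime_fun(P) + mu * y_fun(P)))
    return (T - ET(P)) - K * (C - EC(P))   # residualized transfer shock
```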
Next we weaken A2 and A3. Our new assumptions are expressed in terms of the observables (Y, C, P, T, Z). We note that ε̲_d(z) = G⁻¹_{Y|PZ}(0|p_o(z), z) and ε̄_d(z) = G⁻¹_{Y|PZ}(1|p_o(z), z).

ASSUMPTION C1: Let ∩_{ε_d ∈ [ε̲_d(z), ε̄_d(z)]} {y = y(p, z, ε_d), p ∈ [p̲(z), p̄(z)]} be nonempty for every z ∈ Z, where y(·, ·, ·) is given by (37). The cost of public funds λ(·) and the joint distribution of (Y, C, P, T) given Z satisfy the following statements:
(i) E[Y|P = p, Z = z] > 0 and E[C|P = p, Z = z] > 0.
(ii) Γ(p, z) > 0 and ∂Γ(p, z)/∂p < 0.
(iii) ∂Δ(p, z)/∂p > 0 and Γ(p, z) < c̄_o(p, z), where c̄_o(p, z) is given by (26), for any p ∈ [p̲(z), p̄(z)] and z ∈ Z.
(iv) φ_d(Y, P, Z) and φ_c(Y, C, P, Z), defined in (41) and (42), respectively, are conditionally independent of P given Z, while E[φ_t(Y, C, P, T, Z)|P, Z] = 0.
(v) G_{P|Z}(·|·) has a strictly positive density on its support {(p, z) : p ∈ [p̲(z), p̄(z)], z ∈ Z} with p̲(·) < p̄(·), while G_{Y|PZ}(·|·, ·) and G_{C|YPZ}(·|·, ·, ·) are nondegenerate and strictly increasing in their first arguments.

Note that μ(·) = λ(·)/(1 + λ(·)), while Δ(p, z) and Γ(p, z) are defined in (27) and (28), respectively. Assumption C1(i) is implied by A1. Assumption C1(ii) ensures that the expected transfer is strictly decreasing in the firm's cost inefficiency θ − e*(θ, z), as required by Proposition 3. Assumption C1(iii) is actually equivalent to ∂Δ(p, z)/∂p > R(p, z) > 0 in Proposition 4, which ensures a strictly decreasing effort function and a strictly increasing price schedule, respectively, as required by Lemma 2.¹⁸ Assumption C1(iv) is a direct consequence of B1, as P = p*(θ, z) is strictly increasing in θ. Last, the first part of C1(v) follows from the assumptions on F(·|·) in A1, while the second part of C1(v) parallels the second part of B3(ii).

We define the set of structures S ≡ {S = [y, c_o, ψ, F, G, λ, m] : A1, B1–B3 hold}. Thus A2 and A3 need not be satisfied by structures in S. The next lemma shows that C1 provides necessary and sufficient conditions for the properties of Lemmas 1 and 2 and Proposition 3 to hold.

¹⁸ Specifically, from (29), R(p, z) > 0 is equivalent to having its denominator negative, that is, Γ(p, z) < c̄_o(p, z) − μ(z)(G_{P|Z}(p|z)/g_{P|Z}(p|z)) × (∂Γ(p, z)/∂p), in view of ∂Γ(p, z)/∂p < 0 by C1(ii) and ∂Δ(p, z)/∂p > 0. On the other hand, ∂Δ(p, z)/∂p > R(p, z) is equivalent to Γ(p, z) < c̄_o(p, z) after some elementary algebra. Hence the combination of the two inequalities holds if and only if ∂Δ(p, z)/∂p > 0 and Γ(p, z) < c̄_o(p, z).
Namely, the following properties hold:

P1: The firm's objective function is strictly concave in effort, so that there is a unique solution to the firm's effort maximization problem.
P2: The optimal effort is strictly decreasing in θ, so that the local SOC e′(θ) ≤ e₂(θ, θ) is satisfied.
P3: The optimal price schedule is strictly increasing in θ.
P4: Truth telling provides the global maximum of the firm's problem.
P5: The expected transfer is strictly decreasing in the firm's cost inefficiency.

In other words, A2 and A3 are sufficient but not necessary for such properties. Hereafter, the model is defined as the set of structures {S ∈ S ; S satisfies P1–P5}. A conditional distribution for (Y, C, P, T) given Z is induced by a structure S ∈ S if (Y, C, P, T, Z) satisfies (17)–(21) for some optimal effort e*(θ, Z).

LEMMA 7: If S ∈ S satisfies P1–P5, then the conditional distribution of (Y, C, P, T) given Z induced by S satisfies C1. Conversely, if the cost of public funds λ(·) and the conditional distribution of (Y, C, P, T) given Z satisfy C1, then there exists a structure in S that satisfies P1–P5 and rationalizes the observations (Y, C, P, T) given Z.

The first part of Lemma 7 shows that A2 and A3 are stronger than C1, as any structure in S that satisfies A2 and A3 must satisfy P1–P5 by Lemmas 1 and 2 and Proposition 3. This arises because C1 provides parsimonious conditions on observables that rely on first-order and second-order conditions and implementation. The two parts of Lemma 7 show that C1 characterizes all the restrictions imposed by the model on observables. In other words, the model rationalizes the data if and only if one can find a function λ(·) that satisfies C1. Such a characterization is crucial in the structural approach, as it allows the analyst to test the model validity. In particular, violation of any of these restrictions rejects the model, as illustrated below.
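Because C1 is stated directly in terms of estimable objects, informal diagnostics are easy to compute. The sketch below checks the inequality restrictions in C1(i)–(iii), together with the implied ∂Δ/∂p > R > 0 and the sign of Cov(C, T|P, Z) used in Proposition 7 below, on a price grid; it reuses the estimated objects from the earlier sketches (names are ours) and ignores sampling error, which a formal test would have to account for.

```python
import numpy as np

def check_C1(Ey, EC, Gamma, dGamma, Delta, dDelta, R, c_bar, covCT):
    """Informal diagnostics for C1 at a fixed z, evaluated on a price grid."""
    checks = {
        "C1(i):  E[Y|P,Z] > 0":        bool((Ey > 0).all()),
        "C1(i):  E[C|P,Z] > 0":        bool((EC > 0).all()),
        "C1(ii): Gamma > 0":           bool((Gamma > 0).all()),
        "C1(ii): dGamma/dp < 0":       bool((dGamma < 0).all()),
        "C1(iii): dDelta/dp > 0":      bool((dDelta > 0).all()),
        "C1(iii): Gamma < c_bar":      bool((Gamma < c_bar).all()),
        "implied: dDelta/dp > R > 0":  bool(((dDelta > R) & (R > 0)).all()),
        "Prop. 7: Cov(C,T|P,Z) < 0":   bool((covCT < 0).all()),
    }
    for name, ok in checks.items():
        print(f"{name}: {'satisfied' if ok else 'VIOLATED'}")
    return checks
```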
4.2. Identification of λ(·)

The next proposition shows that the cost of public funds is not identified.

PROPOSITION 6: In the model that consists of structures S ∈ S satisfying P1–P5, the cost of public funds λ(·) is not identified.

This result is a consequence of Lemma 7. Given a structure S ∈ S that satisfies P1–P5, the proof shows that there exists another cost of public funds λ̃(·) in a structure S̃ ∈ S that satisfies C1 and is observationally equivalent to S. Given that λ(·) is not identified, we consider some identifying restrictions. Another strategy would be to consider set identification, that is, the restrictions on λ(·) embodied in C1, and to derive some bounds for λ(·) given the distribution of (Y, C, P, T, Z), in the spirit of Manski and Tamer (2002) and Chernozhukov, Hong, and Tamer (2007).

Up to now, we have imposed a weak assumption on ε_t, namely E[ε_t|P, Z] = 0. Imposing additional assumptions on ε_t is a natural way to achieve identification of λ(·).¹⁹ For instance, identification can be achieved through an instrument W that is conditionally independent of ε_t given (θ, Z), leading to the conditional moment restriction E[Wε_t|P, Z] = 0 since E[ε_t|P, Z] = 0 by B1. In this case, we have

E[W(T − E(T|P, Z))|P, Z] + K(P, Z; μ) E[W(C − E(C|P, Z))|P, Z] = 0,

where K(P, Z; μ) is the coefficient of C − E(C|P, Z) in (43). We can then solve for μ provided E[W(C − E(C|P, Z))|P, Z] ≠ 0, that is, the instrument W must be correlated with C given (P, Z). Thus, μ(·) is identified as

(44)  μ(z) = [1/y(p, z)] × { −[∂E[T|P = p, Z = z]/∂p] × [Cov(W, C|P = p, Z = z)/Cov(W, T|P = p, Z = z)] + ∂E[C|P = p, Z = z]/∂p − p y′(p, z) }

for any p ∈ [p̲(z), p̄(z)] and z ∈ Z, provided Cov(W, T|P, Z) ≠ 0. In other words, a valid instrument W should be uncorrelated with ε_t, but correlated with the transfer T and the cost C given (P, Z). In particular, this excludes P or any variable in Z, as these variables are uncorrelated with T and C given (P, Z). A natural candidate for a valid instrument is the cost C, as it is correlated with itself and T, provided it is uncorrelated with ε_t.

ASSUMPTION C2: E[Cε_t|θ, Z] = 0.

Under B1, C2 requires that C and ε_t are uncorrelated given (θ, Z) or, equivalently, given (P, Z). In particular, because P = p*(θ, Z) and e = e*(θ, Z), it follows from (17) and (18) that C2 is satisfied if (ε_d, ε_c) are independent of ε_t given (θ, Z). The next proposition establishes nonparametric identification of λ(·) from observations on (Y, C, P, T, Z).

¹⁹ In view of B1, one could strengthen the mean independence of ε_t from θ given Z to the independence of ε_t and θ given Z. Then the conditional independence of ε_t = φ_t(Y, C, P, T, Z; λ) and θ = θ*(P, Z; λ) given Z leads to some conditional higher moment restrictions equal to zero, which can be used to identify λ(·). It is unlikely, however, that λ(·) can be obtained in explicit form.
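The moment condition behind (44) can be evaluated locally around any price p. A minimal sketch with kernel-weighted local moments and a finite-difference derivative, all tuning constants hypothetical; with W = C it is the sample analogue of (45) in Proposition 7 below.

```python
import numpy as np

def mu_hat(p, P, Y, C, T, W, h=0.05, dp=1e-2):
    """Local sample analogue of (44) at a fixed z (data pre-conditioned on z).
    W is a candidate instrument; bandwidth h and step dp are hypothetical."""
    def loc(stat, at):
        w = np.exp(-0.5 * ((P - at) / h) ** 2)
        w = w / w.sum()
        return stat(w)
    mean = lambda V: (lambda w: (w * V).sum())
    cov = lambda U, V: (lambda w: (w * U * V).sum() - (w * U).sum() * (w * V).sum())
    dET = (loc(mean(T), p + dp) - loc(mean(T), p - dp)) / (2 * dp)
    dEC = (loc(mean(C), p + dp) - loc(mean(C), p - dp)) / (2 * dp)
    dEY = (loc(mean(Y), p + dp) - loc(mean(Y), p - dp)) / (2 * dp)
    ratio = loc(cov(W, C), p) / loc(cov(W, T), p)
    return (-dET * ratio - p * dEY + dEC) / loc(mean(Y), p)   # (44)
```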
PROPOSITION 7: Under A1, B1–B3, C1, and C2, the cost of public funds λ(·) is uniquely determined by λ(z) = μ(z)/[1 − μ(z)], where

(45)  μ(z) = [1/E[Y|P = p, Z = z]] × { −[∂E[T|P = p, Z = z]/∂p] × [Var[C|P = p, Z = z]/Cov[C, T|P = p, Z = z]] − p ∂E[Y|P = p, Z = z]/∂p + ∂E[C|P = p, Z = z]/∂p }
with Cov[C, T|P = p, Z = z] < 0 for every p ∈ [p̲(z), p̄(z)] and z ∈ Z.

Because p can be arbitrarily chosen, (45) shows that μ(·) and hence λ(·) are overidentified. Thus, assumptions weaker than C2 can be exploited to achieve identification of the cost of public funds. For instance, we could assume that C2 holds for the most efficient firm only. In this case, (45) holds at p = p̲(z) only.

4.3. Extensions²⁰

While acknowledging the richness of procurement and regulation models based on asymmetric information, Joskow and Schmalensee (1986) and Joskow (2005) have also emphasized the importance of investment and dynamic aspects. The extensions in this section attempt to take these issues into account. We maintain the same assumptions as before, namely A1, B1–B3, C1, and C2.²¹

In addition to the random transfer deviation introduced in Section 3.1 to take into account political and legal factors that affect the observed payment, the principal may not weigh consumers' interests and the firm's profit equally, as the principal may be elected or captured by interest groups. This aspect can be formally introduced in the model through a weight κ(·) on consumer surplus in the social welfare (8). The additional random transfer and the weighted consumer surplus could respond to some critiques of the normative approach taken by Laffont and Tirole (1986). In particular, the introduction of a weight κ(·) affects the Ramsey pricing (19) by replacing μ with ν = (λ + 1 − κ)/(1 + λ). The rest of the model remains the same. Following Sections 3.2 and 3.3, we
²⁰ This section owes much to discussions with David Martimort.
²¹ In particular, we still assume a risk-neutral agent. The agent may, however, exhibit "mean-variance" preferences as in Laffont and Tirole (1993, p. 105). When the transfer is restricted to be linear in the ex post performance indicator, the fraction borne by the firm then decreases, as expected. Identification of risk aversion is difficult. See Guerre, Perrigne, and Vuong (2009) and Campo, Guerre, Perrigne, and Vuong (2011) for identification of risk aversion in auctions.
first study identification of this extended model given (λ(·), κ(·)). Lemma 4 still holds with μ(·) replaced by ν(·) in (26). In particular, y(·, ·) remains the same, while c̄_o(·, ·) and Δ(·, ·) depend on ν(·). Moreover, R(·, ·) in (29) remains the same. Hence, ψ(·, ·), F(·|·), and m(·) are identified given (λ(·), κ(·)) by Proposition 4. Thus, the results of Section 3.3 apply, thereby establishing identification of y(·, ·, ·), c_o(·, ·, ·), and G(·, ·, ·|·) given (λ(·), κ(·)). Turning to identification of (λ(·), κ(·)) or, equivalently, (μ(·), ν(·)), we note that (44) remains valid with μ replaced by ν. Thus, Proposition 7 applies, thereby identifying ν(·). To identify μ(·), we use exclusion restrictions. Specifically, we partition the vector Z into some variables Z₁ that affect only μ(·) or, equivalently, λ(·), some variables Z₂ that affect only F(·|·), and other variables Z₃. As noted previously, Z₁ can be the local unemployment rate affecting the cost of public funds. To identify μ(·), we use (34), which gives θ̲(z) = θ̲(z₂) by the exclusion restriction F(·|z) = F(·|z₂). Evaluating the right-hand side of (34) at two values of z₃ gives an equation in μ(z₁), thereby identifying the latter.

Regarding investment, consider the case of a contractible sunk investment. The principal offers a contract to the firm. The firm then chooses its sunk investment and learns its type after the investment is made. The rest of the timing remains the same in terms of effort choice and ex post performance observation. An interesting feature of this case is that investment I affects the distribution of types, that is, F(θ|I, Z). As shown in Laffont and Tirole (1993, p. 90) with an ex post participation constraint, the first-order conditions for price and effort are still given by (12) and (13). Implementation can be achieved by the optimal payment (15), linear in ex post cost. Moreover, the optimal investment is a function of F(·|·, ·), ψ(·), and λ(·). It is denoted I*(Z). Consider first the case where observed investment I is optimal, that is, I = I*(Z). Letting F(·|I*(Z), Z) ≡ F*(·|Z), Propositions 4 and 5 identify m(·), ψ(·, ·), F*(·|·), y(·, ·, ·), c_o(·, ·, ·), and G(·, ·, ·|·) given λ(·). To identify F(·|·, ·), assume that a subset of variables Z₁ affects only F(·|·, ·), namely F(·|I, Z) = F(·|I, Z₁). Provided I*(Z) varies sufficiently given Z₁, F(·|·, ·) is identified. The equation that determines the optimal investment can then be used to identify λ(·) without the need for an instrument as in Section 4.2. In the case where the observed investment differs from the optimal one by a random noise, that is, I = I*(Z) + ε_I, assuming that E[ε_I|Z] = 0 identifies I*(Z), and the preceding argument applies to identify the model.

Last, we discuss dynamic extensions in the case of full commitment by the principal, that is, the contracts are signed ex ante for all periods and are not subject to renegotiation.²² In this case, the false moral hazard applies and the contract of the first period depends on a performance indicator observed at the end of the first period, while the contract of the second period depends on performance indicators of both periods, and so on.

²² Renegotiation does not always lead to a separating equilibrium. As is well known, no commitment leads to the ratchet effect.
Following Laffont and Martimort (2002, Chapter 8), we distinguish three cases and consider two periods to fix ideas. We discuss identification of each case given the discount factor.

First, if the firm's type remains the same over the two periods, contracts are offered ex ante, that is, before the firm learns its type, and the firm's rents are guaranteed for both periods, then the static contract applies and the transfer of every period remains identical to that discussed in Section 2. Thus, our previous identification results apply. We can analyze cross-section, panel, or time series data. Contracts change over time due to variation in the exogenous variables Z that affect the model primitives. Such variables may also contain lagged exogenous variables.

Second, if the types are independent over time, the optimal contracts combine the optimal static contract in the first period and the optimal contract written ex ante for the second period. Thus, the firm's rent and transfer vary over the periods. Cross-section or panel data can be analyzed as long as the analyst observes the period to which the contract applies. In particular, if the firm has the possibility of leaving the relationship at the end of the first period, the transfer in the first period is still given by (15). In this case, our previous results apply for the observations in the first period, while our results would need to be adjusted for observations in the second period due to a different transfer. Nonetheless, data from the first period would allow us to identify all the model primitives given the exogenous variables of the first period Z₁, while data from the second period would allow us to identify them given the exogenous variables of both periods (Z₁, Z₂).

Third, if the types are (imperfectly) correlated over periods, the theoretical analysis becomes more complex, as the principal can use the observation of the first period to update his prior on the firm's type in the second period.²³ Both contracts are offered in the first period after the firm learns its first-period type θ₁, while the second-period type θ₂ is learned after the first-period contract realization. Regarding the first period, the first-order conditions (12) and (13) apply. For the second period, an additional distortion to the optimal effort in (13) is introduced because the principal revises his prior on θ₂ after the firm reveals θ₁. Moreover, transfers take different forms in the two periods. Consequently, our results need to be adjusted. Here again, we can analyze panel data as long as the contract period is observed. Combining data from the first and second periods can identify the model primitives and, in particular, F(θ₁|Z₁) and F(θ₂|θ₁, Z₁, Z₂). An interesting feature is that we can use lagged endogenous variables of the first period, such as P₁ = p*(θ₁, Z₁), as a conditioning variable to identify F(θ₂|θ₁, Z₁, Z₂) without the need to identify θ₁ explicitly.

²³ See Pavan, Segal, and Toikka (2008) for a recent contribution on dynamic mechanism design with full commitment.
5. CONCLUSION

This paper studies nonparametric identification of a contract model with adverse selection and moral hazard and, more specifically, the Laffont and Tirole (1986) procurement model with ex post observed cost. Exploiting the bijective mapping between the observed price and the firm's unknown type, we show that, for a given cost of public funds, we can recover the model structure, namely the demand and baseline cost, the effort disutility, the firm's type distribution, the deterministic transfer deviation, and the joint distribution of the random shocks. We can then identify the cost of public funds using the realized cost as an instrument. We also characterize all the restrictions that must be satisfied by the cost of public funds and the observables so that the latter can be rationalized by the model.

Our paper complements the structural analysis of data subject to asymmetric information such as in contracts, nonlinear pricing, and insurance. Our analysis can be extended to other models of incomplete information. As a matter of fact, the procurement model we consider includes some functions that do not appear in models of asymmetric information under adverse selection only. This is the case for nonlinear pricing models as studied in Perrigne and Vuong (2010). See also D'Haultfoeuille and Février (2007) for general principal–agent models. Regarding insurance, Aryal, Perrigne, and Vuong (2009) studied identification of an insurance model with multidimensional screening and a finite number of contracts. Also of interest, Lazear (2000) relied on a false moral hazard model to analyze the effects of performance pay on workers' productivity. Similarly, a mixed model was used by Gayle and Miller (2008) to analyze CEO compensation. Our results can be applied to study a model of compensation based on an ex post performance indicator such as a worker's productivity or abnormal returns of the firm. More generally, our approach can be extended to cases in which the principal's behavior arises from equilibrium conditions associated with a game, as long as they involve the hazard rate as in (13).

The problem of estimating and testing the procurement model needs to be addressed. First, tests of the model validity can be developed based on Lemma 7, which provides all the restrictions imposed by the model. Second, incomplete information is generally assumed. It would be interesting to assess the relevance of such a hypothesis, as first performed by Wolak (1994) in a parametric setting. Characterizing all the restrictions imposed by a complete information model would allow one to test which model (incomplete or complete information) is the most accurate for explaining the data. The problem of testing for adverse selection and moral hazard has recently attracted a lot of interest, and tests have been developed based on some model predictions. See Chiappori and Salanié (2000) and Abbring, Chiappori, Heckman, and Pinquet (2003).

Third, regarding estimation, our results express the model primitives from the reduced-form probability distribution of observables through various conditional expectations. A multistep estimation procedure can be pursued, as outlined below. A first step consists of estimating the cost of public funds using (45) and nonparametric regression estimators. The demand is then estimated by a nonparametric quantile regression of Y on (P, Z) given p_o(·) using (37). A second step consists of estimating the expected baseline cost using (26) and the deterministic transfer (30). With an estimate for the expected baseline cost at hand, we then obtain estimates of the relative cost inefficiency by (27) and the two functions (28) and (29) that are needed in the estimation of the effort disutility (30) and the firm's type (32). This step relies on nonparametric estimators for the conditional price density and distribution. A third step consists of estimating the conditional type density using the estimated types, following Guerre, Perrigne, and Vuong (2000). The baseline cost is estimated by a nonparametric quantile regression of the recovered baseline cost value C_o on (Y, P, Z) given y_o(·) using (39). Last, the joint conditional distribution of the error terms is obtained from the estimated error terms in (41), (42), and (43). A difficulty is that conditional expectations need to be estimated at some boundaries of the support. Nonparametric estimation is known to introduce boundary effects, which can be alleviated by local polynomial estimators. Establishing the asymptotic properties of this estimation procedure needs to take into account its multistep nature and the fact that some functions are estimated from recovered values instead of observed ones. Alternatively, given that nonparametric identification has been established, the analyst can adopt a fully parametric or semiparametric econometric specification.
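To fix ideas, the multistep procedure just described can be chained as follows, reusing the hypothetical routines sketched in Sections 3 and 4 (mu_hat, lemma4_estimates, proposition4_estimates; all names are ours). The skeleton operates at a fixed z and ignores the boundary and multistep-inference issues discussed above.

```python
import numpy as np

def estimate_model(Y, C, P, T, grid):
    """Skeleton of the multistep procedure, at a fixed covariate value z."""
    mu = mu_hat(grid[len(grid) // 2], P, Y, C, T, W=C)        # step 1: (45)
    Ey, c_bar, Delta = lemma4_estimates(P, Y, C, mu, grid)    # step 2: (25)-(27)
    Gamma, R, effort, theta, e_s, psi = proposition4_estimates(
        P, T, mu, grid, Ey, c_bar, Delta)                     # (28)-(32)
    Co = C / np.interp(P, grid, Delta)                        # step 3: recovered
    # baseline-cost values; quantile steps (37) and (39) and the residual shocks
    # (41)-(43) would complete the procedure.
    return dict(mu=mu, Ey=Ey, c_bar=c_bar, Delta=Delta, Gamma=Gamma,
                R=R, effort=effort, theta=theta, psi=(e_s, psi), Co=Co)
```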
APPENDIX

This appendix gives the proofs of the propositions and lemmas stated in Sections 3 and 4.

PROOF OF LEMMA 3: Let Ỹ, C̃, P̃, and T̃ denote the endogenous variables under the structure S̃. Let θ̃ ≡ α + βθ, so that θ̃ is distributed as F̃(·|·) = F[(· − α)/β|·] conditional on Z. Let (ε̃_d, ε̃_c, ε̃_t) ≡ (ε_d, ε_c, ε_t), so that (ε̃_d, ε̃_c, ε̃_t) is jointly distributed as G̃(·, ·, ·|·) = G(·, ·, ·|·) conditional on Z. We show that (Ỹ, C̃, P̃, T̃) = (Y, C, P, T), which implies the desired result. Using ỹ(·, ·, ·) = y(·, ·, ·), c̃_o(·, ·, ·) = c_o(·, ·, ·)/β, (22), and (23), we note that

(A.1)  ỹ(·, ·) = ∫ ỹ(·, ·, ε̃_d) dG̃(ε̃_d|·) = ∫ y(·, ·, ε_d) dG(ε_d|·) = y(·, ·),

(A.2)
c˜o (· ·) = =
1 β
˜ ε˜ d ε˜ c |·) ˜ · ε˜ d ) · ε˜ c ] d G( c˜o [y(· co [y(· · εd ) · εc ] dG(εd εc |·) =
1 c o (· ·) β
˜ e), ˜ and use (A.1) and (A.2), Now, we consider the FOC (19) and (20) for (P ˜ ˜ ·) = ψ[(· − α)/β ·] − γ(·), θ˜ = α + βθ, F(·|·) ψ(· = F[(· − α)/β|·], f˜(·|·) = ˜ (1/β)f [(· − α)/β|·], and λ(·) = λ(·) to obtain e˜ − α ˜ ˜ ˜ ˜ Py (P Z) + μy(P Z) = θ − c o (P Z) β ˜−α F(θ|Z) e˜ − α e ˜ Z) ψ Z + μ ψ Z = c o (P β f (θ|Z) β From the solution p∗ (θ z), and from e∗ (θ z) of (19) and (20), it follows that P˜ = p∗ (θ Z) = P and (e˜ − α)/β = e∗ (θ Z) = e. In particular, the lat˜ Z) = α + βe∗ [(θ˜ − α)/β Z]. Moreover, because Y˜ = ter implies that e˜ ∗ (θ ˜ ˜ ˜ P Z ε˜ d ) = y(P Z ε˜ d ), we obtain Y˜ = Y since P˜ = P and ε˜ d = εd . y( Next, we turn to cost and transfer. From (18), and using (A.2), θ˜ = α + βθ, and e˜ = α + βe, we have ˜ c˜o (Y˜ Z ε˜ c ) = (θ − e)co (Y Z εc ) = C C˜ = (θ˜ − e) since Y˜ = Y and ε˜ c = εc . Moreover, from (20) and the previous results we obtain ˜ e ˜ Z) + T˜ = ψ(
θ˜
˜ θ(Z)
˜ Z) Z] d u˜ ψ˜ [e˜ ∗ (u
C˜ ˜ ˜ + m(Z) ˜ − (θ − e) + ε˜ t ˜ Z) c˜o (P
α+βθ(Z) 1 ∗ u˜ − α ψ e Z Z d u˜ = ψ(e Z) + β β α+βθ 1 βC˜ − β(θ − e) + m(Z) + ε˜ t − ψ (e Z) ˜ Z) β c o (P θ(Z) = ψ(e Z) + ψ [e∗ (u Z) Z] du ˜ Z) − ψ˜ (e
θ
− ψ (e Z)
C˜ − (θ − e) + m(Z) + ε˜ t ˜ Z) c o (P
˜ ·) = ψ[(· − α)/β ·] − γ(·), θ˜ = α + where the second equality uses (A.2), ψ(· ∗ ˜ ∗ ˜ ˜ βθ, e˜ = α + βe, e˜ (θ Z) = α + βe [(θ − α)/β Z], and m(Z) = m(Z) + γ(Z), while the third equality follows from the change of variable u = (u˜ − α)/β. Thus, (20) implies that T˜ = T since C˜ = C, P˜ = P, and ε˜ t = εt .
Last, because of the linear transformation given every value of Z, it is easy to verify that the structure S˜ satisfies A1–A3 and B1 as soon as the structure S satisfies these assumptions. Q.E.D. PROOF OF LEMMA 4: Recall that P = p∗ (θ Z), where p∗ (· ·) is the optimal price schedule. Assumption B1 implies that P is independent of εd given Z. Hence, (17) gives E[Y
|P = p Z = z] = E[y(p z εd )|P = p Z = z] = E[y(p z εd )|Z = z] = y(p z εd ) dG(εd |z) = y(p z) by (22). This establishes (25). Regarding (23), we recall that θ can be expressed as a function θ∗ (P Z) = ∗−1 p (P Z), which is strictly increasing in P since P = p∗ (θ Z) is strictly increasing in θ by Lemma 2. Thus, e can be expressed as a function e∗ (P Z), which is strictly decreasing in P because e = e∗ (θ Z) is strictly decreasing in θ by Lemma 2, while θ = θ∗ (P Z) is strictly increasing in P. Now, from (18) and using θ − e = θ∗ (P Z) − e∗ (P Z), we obtain (A.3)
E[C|P = p Z = z] = (θ − e)E co [y(p z εd ) z εc ]|P = p Z = z = (θ − e)E co [y(p z εd ) z εc ]|Z = z = (θ − e) co [y(p z εd ) z εc ] dG(εd εc |z) = (θ − e)c o (p z)
where θ − e = θ∗ (p z) − e∗ (p z). The second equality follows from B1 and the last equality follows from (23). In particular, (A.3) establishes (27) with Δ(· ·) satisfying Δ(· ·) ≥ 1 and ∂Δ(· ·)/∂p > 0, because θ − e = θ∗ (p z) − e∗ (p z) is strictly increasing in p with strictly positive derivative with respect to p by Lemma 2, implying θ∗ (p z) − e∗ (p z) ≥ θ∗ [p(z) z] − e∗ [p(z) z] = θ(z) − e∗ [θ(z) z] = 1 by (24). Moreover, writing (A.3) at p = p(z), which is the price for the most efficient firm with type θ(z), and exerting the maximal effort e(z) = e∗ [θ(z) z], we obtain (A.4)
E[C|P = p(z) Z = z] = c o [p(z) z]
because [θ(z) − e(z)] = 1 by the normalization (24). Next, we write (19) at Z = z so that P = p∗ (θ z) = p and e = e∗ (θ z). Dividing the resulting equation by (A.3), we obtain py (p z) + μy(p z) c o (p z) = E[C|P = p Z = z] c o (p z)
Integrating this differential equation from p(z) to some arbitrary p ∈ [p(z) p(z)], where p(z) ≡ p∗ [θ(z) z)] is the price for the least efficient type when Z = z, we obtain p ˜ z) + μy(p ˜ z) ˜ (p c o (p z) py ˜ = d p log ˜ c o (p(z) z) E[C|P = p Z = z] p(z) Solving for c o (p z) and using the boundary condition (A.4) gives (26). Q.E.D. PROOF OF PROPOSITION 4: Because θ − e = θ∗ (p z) − e∗+ (p z), differentiating (27) with respect to p gives (A.5)
∂θ∗ (p z) ∂e∗+ (p z) ∂Δ(p z) − = > 0 ∂p ∂p ∂p
where ∂θ∗ (· ·)/∂p > 0 and ∂e∗+ (· ·)/∂p < 0 from Lemma 2. In particular, θ = θ∗ (P Z) and e = e∗+ (P Z) are in bijections with P given Z. Thus, taking the conditional expectation of (21) given (P Z) = (p z) and using (A.3) together with E[εt |P = p Z = z] = E[εt |θ Z = z] = 0 by B1, we obtain (A.6)
θ(z)
E[T |P = p Z = z] = ψ(e z) +
˜ z) z] d θ˜ + m(z) ψ [e∗ (θ
θ
where θ = θ∗ (p z), θ(z) = θ∗ [p(z) z], and e = e∗+ (p z). Differentiating (A.6) gives ∗ ∂e+ (p z) ∂θ∗ (p z) ∂E[T |P = p Z = z] (A.7) = ψ (e z) − ∂p ∂p ∂p where we have used e∗ (θ z) = e∗+ (p z) = e. Thus, (A.5), (A.7) and (28) give (A.8)
ψ (e z) = Γ (p z) > 0
because ψ (· z) > 0 by A2. Differentiating (A.8) again gives (A.9)
ψ (e z)
∂e∗+ (p z) ∂Γ (p z) = < 0 ∂p ∂p
because ψ (· z) > 0 by A2 and ∂e∗+ (p z)/∂p < 0 by Lemma 2. Substituing (A.8) and (A.9) into (20) at Z = z so that P = p∗ (θ z) = p and e = e∗ (θ z), we obtain (A.10)
Γ (p z) + μ
GP|Z (p z) ∂Γ (p z) ∂θ∗ (p z)/∂p = c o (p z) gP|Z (p z) ∂p ∂e∗+ (p z)/∂p
IDENTIFICATION OF A CONTRACT MODEL
1531
where we have used the property that F(θ|z)/f (θ|z) = [GP|Z (p z)/gP|Z (p z)][∂θ∗ (p z)/∂p], because θ = θ∗ (p z) is strictly increasing in p from Lemma 2. We now solve (A.5) and (A.10) for ∂e∗+ (p z)/∂p and ∂θ∗ (p z)/∂p to obtain, after some algebra, (A.11)
∂e∗+ (p z) = −R(p z) < 0 ∂p
(A.12)
∂θ∗ (p z) ∂Δ(p z) = − R(p z) > 0 ∂p ∂p
where R(p z) is given in (29) with R(p z) > 0 because ∂e∗+ (· z)/∂p < 0 by Lemma 2. Similarly, the right-hand side of (A.12) must be strictly positive, because ∂θ∗ (· z)/∂p > 0 by Lemma 2, leading to ∂Δ(p z)/∂p > R(p z). Now note that e∗+ [p(z) z] = e∗ [θ(z) z] = 0 by (24). Moreover, from (27), we have θ∗ [p(z) z] − 0 = Δ[p(z) z]. Thus, integrating (A.11) and (A.12) from some arbitrary p ∈ [p(z) p(z)] to p(z), and using the preceding boundary conditions, we obtain (31) and (32). As all the functions on the right-hand side of (32) are identified, it follows that θ∗ (· ·) is identified. This establishes that F(·|·) is identified through F(θ|z) = GP|Z [p∗ (θ z)|z]. Last, let e(z) ≡ e∗+ [p(z) z] = e∗ [θ(z) z] = 0 by the normalization (24) and let e(z) ≡ e∗+ [p(z) z] = e∗ [θ(z) z]. Integrating (A.8) from 0 to some arbitrary e ∈ [e(z) e(z)], where p = p∗+ (· z), and using ψ(0 z) = 0 by B2 gives e ˜ z) z] d e ˜ Γ [p∗+ (e ψ(e z) = 0
which establishes (30) since (A.6) evaluated at p = p(z) gives E[T |P = p(z) Z = z] = m(z), as e = e∗+ [p(z) z] = 0 and θ = θ∗ [p(z) z] = θ(z) when Q.E.D. p = p(z). LEMMA 5: Let Y = y(P Z εd ) and Co = co (Y Z εc ) satisfy B3(ii) with P conditionally independent of (εd εc ) given Z. There exists an observationally ˜ equivalent system Y = y(P Z ε˜ d ) and Co = c˜o (Y Z ε˜ c ) with P conditionally ˜ independent of (ε˜ d ε˜ c ) given Z satisfying B3(ii) such that y(p z) = y(p z) ˜ o (z) z ε˜d ) = ε˜ d and c˜o (yo (z) z ε˜ c ) = ε˜ c and c o (y z) = c˜o (y z). Moreover, y(p for any (p z) and (ε˜ d ε˜ c ) for some known po (z) ∈ [p(z) p(z)] and yo (z) ∈ [y(z) y(z)]. The first part of Lemma 5 shows that the knowledge of expected demand and expected baseline cost does not help to identify the demand and baseline cost as well as the joint distribution of error terms. The second part shows that the ˜ · ·) c˜o (· · ·)] can also satisfy B3(i). observationally equivalent structure [y(·
1532
I. PERRIGNE AND Q. VUONG
PROOF OF LEMMA 5: Let ε˜ d = y(po (Z) Z εd ) for some arbitrary po (·) ∈ [p(·) p(·)]. Thus, εd = y −1 [po (Z) Z ε˜ d ] since y(· · ·) is strictly increasing in ˜ εd by assumption. Moreover, let y(p z y −1 [po (z) z ε˜ d ]) = y(p z ε˜ d ), which ˜ o (z) z ε˜ d ) = ε˜ d , thereby satisis strictly increasing in ε˜ d . Thus, we have y(p ˜ fying the first equality in (36). We need to verify that y(p z) = y(p z). In particular, ˜ ˜ y(p z) = E[y(p z ε˜ d )|p z] = E y p z y −1 [p0 (z) z ε˜ d ] |p z = E[y(p z εd )|p z] = y(p z) We can apply the same reasoning for c˜o (y z ε˜ c ). Let ε˜ c = co (yo (Z) Z εc ) for an arbitrary yo (·) ∈ [y(·) y(·)]. Thus, εc = co−1 [yo (Z) Z ε˜ c ] since co (· · ·) is strictly increasing in εc by assumption. Let co (y z co−1 [yo (z) z ε˜ c ]) = c˜o (y z ε˜ c ), which is strictly increasing in ε˜ c . Thus, we have c˜o (y0 (z) z ε˜ c ) = ε˜ c , thereby satisfying the second inequality in (36). We need to verify that c o (p z) = c˜o (p z). In particular, c˜o (y z) = E[c˜o (y z ε˜ c )|p z] ˜ z ε˜ d ) z co−1 [yo (z) z ε˜ c ] |p z = E co y(p = E co (y(p z εd )z εc )|p z = c o (p z) It remains to show that these two models are observationally equivalent, ˜ which is straightforward. Since ε˜ d = y(po (Z) Z εd ), we have y(P Z ε˜ d ) = y(P Z y −1 [po (Z) Z y(po (Z) Z εd )]) = y(P Z εd ). Similarly, since ε˜ c = co (yo (Z) Z εc ), we have c˜o (Y Z ε˜ c ) = co (Y Z co−1 [yo (Z) Z co (yo (Z) Z ˜ εc )]) = co (Y Z εc ) Thus, we have Y = y(P Z εd ) = y(P Z ε˜ d ) and Co = co (Y Z εc ) = c˜o (Y Z ε˜ c ). Last, P is conditionally independent of (ε˜ d ε˜ c ) given Z because P is conditionally independent of (εd εc ) given Z combined with ε˜ d = y(po (Z) Z εd ) and ε˜ c = co (yo (Z) Z εc ). Because the latter functions are strictly increasing in εd and εc , respectively, it follows that Gε˜d |Z (·|·) and Gε˜c |ε˜ d Z(·|· ·) are strictly increasing in their first arguments. Q.E.D. PROOF OF PROPOSITION 5: Part (i) follows Matzkin’s (2003) identification argument. Because εd is independent of θ given Z by B1, then εd is independent of P = p∗ (θ Z) given Z. Thus, if Gεd |PZ (·|· ·) denotes the conditional distribution of εd given (P Z), then for every (p z), we have Gεd |Z (·|z) = Gεd |PZ (·|p z) = GY |PZ [y(p z ·)|p z], because y(p z ·) is strictly increasing in εd . In particular, this shows that GY |PZ (·|p z) is strictly increasing in its first argument in view of B3(ii). Hence, for every (p z), we have (A.13) y(p z ·) = G−1 Y |PZ Gεd |Z (·|z)|p z
Moreover, letting p = po (z) we obtain Gεd |Z (·|z) = GY |PZ [y(po (z) z ·)| po (z) z] = GY |PZ [·|po (z) z], where the second equality follows from the first normalization in (36). This establishes (38) and hence (37) using (A.13). To prove (ii), we extend Matzkin’s argument, as Y = y(P Z εd ) is not independent from εc given Z in Co = co (Y Z εc ). On the other hand, we exploit the fact that P is independent from εc given (εd Z), because P = p∗ (θ Z) and θ is independent of εc given (εd Z) by B1. Thus, similarly to above, we obtain Gεc |εd Z (·|εd z) = Gεc |εd PZ (·|εd p z) = GCo |εd PZ {co [y(p z εd ) z ·]|εd p z} = GCo |YPZ [co (y z ·)|y p z], because co (y z ·) is strictly increasing in εc and y ≡ y(p z εd ). In particular, GCo |YPZ (·|y p z) is strictly increasing in its first argument in view of B3(ii). Hence, we have (A.14) co (y z ·) = G−1 Co |YPZ Gεc |εd Z (·|εd z)|y p z for every (y p z εd ) satisfying y = y(p z εd ). We now exploit the second normalization in (36). Given B3(iii), let p = p† (z εd ) in (A.14). We obtain Gεc |εd Z (·|εd z) = GCo |YPZ co y[p† (z) z εd ] z · | y[p† (z) z εd ] p† (z εd ) z = GCo |YPZ co {yo (z) z ·}|yo (z) p† (z εd ) z = GCo |YPZ [·|yo (z) p† z] where the second equality follows from B3(iii), while the third equality follows from the second normalization in (36). This establishes (40). Equation (39) follows from (A.14). Last, p† (· ·) is identified, as yo (·) is known and y(· · ·) is identified by (i). Q.E.D. PROOF OF LEMMA 6: Equation (41) follows directly from (37), noting that Y = y(P Z εd ). Similarly, noting that C0 = co (Y Z εc ), (39) gives εc = G−1 Co |YPZ GCo |YPZ (Co |Y P Z)|yo (Z) p† (Z εd ) Z Recall that Co = C/Δ(P Z) so that GC|YPZ (·|Y P Z) = GCo |YPZ (·/Δ(P Z)| −1 Y P Z) and G−1 C|YPZ (·|Y P Z) = Δ(P Z)GCo |YPZ (·|Y P Z). This establishes (42). ˜ z) in (30) and To establish (43), we make the change of variable p˜ = p∗+ (e we obtain p∗+ (ez) p(z) ˜ z) ∂e∗ (p ˜ z) + ˜ z)R(p ˜ z) d p ˜ d p˜ = (A.15) ψ(e z) = Γ (p Γ (p ∂p p∗+ (0z) p where the second equality follows from p∗+ (0 z) = p(z) and (A.11). Thus, using (A.3), (A.6), and (A.8) in (21) we obtain (A.16)
T = E[T |P Z] −
Γ (P Z) (C − E[C|P Z]) + εt c o (P Z)
We now compute Γ (P Z)/c o (P Z). From (27), we note that Δ(P Z)c o (P Z) = E[C|P Z]. Thus, differentiating with respect to p gives c o (P Z)
∂c o (P Z) ∂Δ(P Z) ∂E[C|P Z] = − Δ(P Z) ∂p ∂p ∂p
where ∂c o (P Z) c o (P Z) = [Py (P Z) + μy(P Z)] ∂p E[C|P Z] =
1 [Py (P Z) + μy(P Z)] Δ(P Z)
from (26) and (27). Hence, c o (P Z)
∂Δ(P Z) ∂E[C|P Z] = − [Py (P Z) + μy(P Z)] ∂p ∂p
It follows from (28) that Γ (P Z) ∂E[T |P Z]/∂p =− c o (P Z) ∂E[C|P Z]/∂p − [Py (P Z) + μy(P Z)] This establishes (43) in view of (A.16).
Q.E.D.
PROOF OF LEMMA 7: To prove the first part, we need to show that the conditional distribution of (Y C P T ) given Z induced by a structure S ∈ S satisfying P1–P5 also satisfies C1. Regarding C1(i), the first condition follows from (22) and A1 together with B3(ii). The second condition follows by taking the conditional expectation of (18) given (P Z) and using θ − e > 0 by A1 and c o (p z) > 0. The latter inequality follows from (23) using co (· · ·) ≥ 0 together with B3(ii). Regarding C1(ii), from the second part of the proof of Proposition 3, we have dt/d(θ − e∗ (θ z)) = −ψ [e∗ (θ z) z] < 0, implying ψ [e∗ (θ z) z] > 0. From (30), ψ (e z) = Γ [p∗+ (e z) z]. Hence, Γ [p∗+ (e z) z] > 0. Regarding Γ [p∗+ (e z) z], the proof of Lemma 1 shows that the second partial derivative of the firm’s objective function is equal to −ψ (e) because of the linearity in C of the transfer (21). Thus, strict concavity implies ψ (e) > 0. As ψ (e) = Γ [p∗+ (e z) z]p∗+ (e z) from (30), where p∗+ (e z) < 0 because p∗ (θ z) > 0 and e∗ (θ z) < 0 by assumption, it follows that Γ [p∗+ (e z) z] < 0. Regarding C1(iii), we have Δ (p z) = θ∗ (p z) − e∗+ (p z) > 0 because ∗ p (θ z) > 0 and e∗+ (p z) < 0 by assumption. Moreover, from (32), we have θ∗ (p z) = Δ (p z) − R(p z) > 0 by assumption. Thus Δ (p z) > R(p z) or, equivalently, Γ (p z) < c o (p z) as discussed in the text. Regarding C1(iv), Lemma 6 shows that (εd εc ) can be recovered through identified functions
φd (Y P Z) and φc (Y C P Z). Since P and θ are in a bijective relationship, the latter are conditionally independent of P given Z in view of B1. Moreover, (43) shows that E[εt |P Z] = 0. Regarding C1(v), the first part follows from p∗ (θ z) > 0 and fθ|Z (·|·) > 0, while the second part follows from B3(ii), GY |PZ (·|p z) = Gεd |Z [y −1 (p z ·)|z], and GC|YPZ (·|y p z) = Gεc |εd Z [co−1 (y z ·/(θ − e))|εd z], where the latter uses the bijective mapping between P and θ given z and B1 and B2. Turning to the second part, let the observations (Y P C T Z) and a function μ(·) satisfy C1. We need to define [y co ψ F G m μ] from the observables and show that these functions satisfy A1 and B1–B3 as well as P1–P5. In view of Proposition 5 with εd ∈[εd (z)εd (z)] {y = y(p z εd ) p ∈ [p(z) p(z)]} nonempty for every z ∈ Z , let y(p z εd ) = G−1 Y |PZ GY |PZ (εd |po (z) z)|p z co (y z εc ) = G−1 Co |YPZ GCo |YPZ [εc |yo (z) p† (z εd ) z]|y p z for some [po (·) yo (·)], where y = y(p z εd ) and Co = C/Δ(P Z) with Δ(p z) given in (27). Note that B1(i) is satisfied by construction. By C1(v), y(p z ·) and co (y z ·) are strictly increasing in their last arguments. Thus, we can define εd = y −1 (P Z Y ) and εc = co−1 (Y Z Co ). Note that C = Δ(P Z)Co = Δ(P Z)co (Y Z εc ), where Δ(P Z) = θ − e and co (Y Z εc ) ≥ 0 because C ≥ 0. Using C1(i), E[C|P Z] = Δ(P Z)c o (P Z) > 0, leading to Δ(P Z) > 0 because c o (P Z) > 0 by (26). Thus the first part of A1 is satisfied. Using C1(iv), we have Gεd |Z (·|z) = Gεd |PZ (·|po (z) z) = GY |PZ (·|po (z) z). Thus, Gεd |Z (·|z) is nondegenerate and strictly increasing in its first argument by C1(v). Similarly, using C1(iv), we have Gεc |εd Z (·|εd z) = Gεc |εd PZ (·|εd p† (z εd ) z) = Gεc |YPZ (·|yo (z) p† (z εd ) z) = GCo |YPZ [·|yo (z) p† (z εd ) z], which is nondegenerate and strictly increasing in its first argument by C1(v) and Co = C/Δ(P Z) with Δ(P Z) > 0. Hence, B3(ii) is satisfied. In view of Proposition 4, let m(z) = E[T |P = p(z) Z = z] and e ˜ z)] d e ˜ ψ(e z) = Γ [p∗+ (e 0
p(Z)
θ = θ∗ (P Z) ≡ Δ(P Z) +
˜ Z) d p ˜ R(p P
p(z) ˜ z) d p, ˜ which is strictly where p∗+ (· z) is the inverse of e∗+ (· z) = · R(p decreasing as R(· ·) > 0 from C1(iii). The distribution F(θ|Z) is then defined as the distribution of θ∗ (P Z) given Z. Note that θ∗ (p z) > 0 because Δ (p z) > R(p z) by C1(iii). Thus F(θ|z) admits a density, which is strictly positive on [θ(z) θ(z)] as defined in (34) and (35) by C1(v). Moreover, P2 and P3 hold from C1(iii). Thus, P3 follows from θ∗ (p z) > 0 and P2 follows from e∗+ (p z) = −R(p z) < 0 by C1(iii). In addition, B2 is satisfied, as
θ(z) − e∗ (θ(z) z) = Δ(p(z) z) = 1 by (26) and (27), while e∗ (θ(z) z) = 0 because e∗+ (p(z) z) = 0 by construction and p∗ (· z) is strictly increasing, leading to p(z) = p∗ (θ z). Note that (εd εc ) are conditionally independent of P given Z by Lemma 6 and C1(iv), and hence conditionally independent of θ given Z. Moreover, (43) implies E[εt |P Z] = 0. Thus, B1 and the second part of A1 are satisfied. Last, P1, P4, and P5 hold from C1(ii). With the transfer (43), the proof of Lemma 1 shows that P1 is equivalent to ψ (e z) > 0, which is ensured by ψ (e z) = Γ (p z)e∗+ (p z), Γ (p z) < 0 by C1(ii), and e∗+ (p z) < 0 as above. Similarly, the proof of Proposition 3 shows that P4 holds when e∗ (θ z) < 0, which is established above. Moreover, the proof of Proposition 3 shows that P5 holds Q.E.D. when ψ (e z) > 0, which follows from C1(ii). PROOF OF PROPOSITION 6: Let S = [y co ψ F G m λ] be a structure satisfying P1–P5 and thus inducing a distribution for (Y C P T ) given Z satisfy˜ = λ(·) + ε, with ε = 0 sufficiently small so that ing C1 by Lemma 7. Define λ(·) ˜λ(·) and the distribution of (Y C P T ) given Z satisfy C1(ii) and (iii). Note ˜ that C1(i) and (v) are still satisfied, as these assumptions do not involve λ(·). ˜ We need to show that C1(iv) is satisfied as well. We have φd (· · ·) = φd (· · ·) and φ˜ c (· · · ·) = φc (· · · ·) from (41) and (42), respectively. On the other hand, φ˜ t (· · · · ·) = φt (· · · · ·), but E[φ˜ t (Y C P T Z)|P Z] = 0 by (43). ˜ m ˜ G ˜ F ˜ that ˜ λ] ˜ c˜o ψ Thus, from Lemma 7, there exists a structure S˜ = [y satisfies A1 and B1–B3 as well as P1–P5 and that rationalizes the observ˜ ables (Y C P T ) given Z. The structure S˜ differs from S because λ(·)
= ˜ ε |Z (·|·) = Gε |Z (·|·), and ˜ · ·) = y(· · ·), m(·) ˜ λ(·). Although y(· = m(·), G d d ˜ εc |ε Z (·|· ·) = Gεc |εc Z (·|· ·), the remaining functions differ. G Q.E.D. d PROOF OF PROPOSITION 7: It suffices to show that C is a valid instrument, as (45) follows from (44) with W = C. By C2, we have Cov[C εt |P Z] = 0, while C is perfectly correlated with itself given (P Z). Last, Cov[C T |P Z] = E{C(T − E[T |P Z])|P Z} = −[Γ (P Z)/c o (P Z)] Var[C|P Z] by (A.16). Thus, Cov[C T |P Z] < 0, as Γ (· ·) > 0 and c o (· ·) > 0. Q.E.D. REFERENCES ABBRING, J., P. A. CHIAPPORI, J. HECKMAN, AND J. PINQUET (2003): “Adverse Selection and Moral Hazard in Insurance: Can Dynamic Data Help to Distinguish?” Journal of the European Economic Association, Papers and Proceedings, 1, 512–521. [1526] ARYAL, G., I. PERRIGNE, AND Q. VUONG (2009): “Identification of Insurance Models With Multidimensional Screening,” Unpublished Manuscript, Pennsylvania State University. [1526] BALLARD, C., J. SHOVEN, AND J. WHALLEY (1985): “General Equilibrium Computations of the Marginal Welfare Costs of Taxes in the United States,” American Economic Review, 75, 128–138. [1519]
IDENTIFICATION OF A CONTRACT MODEL
1537
BARON, D. (1989): “Design of Regulatory Mechanisms and Institutions,” in Handbook of Industrial Organization, Vol. II, ed. by R. Schlamensee and R. Willig. Amsterdam, Netherlands: North Holland, 1347–1448. [1503] BARON, D., AND R. MYERSON (1982): “Regulating a Monopolist With Unknown Costs,” Econometrica, 50, 911–930. [1503,1505] BESANKO, D. (1984): “On the Use of Revenue Requirements Regulation Under Imperfect Information,” in Analyzing the Impact of Regulatory Change in Public Utilities, ed. by M. Crew. Lanham, MD: Lexington Books, 39–55. [1510] BLUNDELL, R., AND J. POWELL (2003): “Endogeneity in Semiparametric and Nonparametric Regression Models,” in Advances in Economics and Econometrics: Eighth World Congress, Vol. II, ed. by M. Dewatripont, L. P. Hansen, and S. Turnovky. Cambridge, U.K.: Cambridge University Press, 312–357. [1516] BOLTON, P., AND M. DEWATRIPONT (2005): Contract Theory. Cambridge, MA: MIT Press. [1499] BRESNAHAN, T. (1989): “Empirical Studies of Industries With Market Power,” in Handbook of Industrial Organization, Vol. II, ed. by R. Schmalensee and R. Willig. Amsterdam, Netherlands: North Holland, 1011–1058. [1514] BROCAS, I., K. CHAN, AND I. PERRIGNE (2006): “Regulation Under Asymmetric Information in Water Utilities,” American Economic Review, Papers and Proceedings, 96, 62–66. [1500,1510] BROOK, P., AND S. SMITH (2001): Contracting for Public Services: Output-Based Aid and Its Applications. Washington, DC: The World Bank. [1500] CAMPO, S., E. GUERRE, I. PERRIGNE, AND Q. VUONG (2011): “Semiparametric Estimation of First-Price Auctions With Risk Averse Bidders,” Review of Economic Studies, 78, 112–147. [1523] CHERNOZHUKOV, V., H. HONG, AND E. TAMER (2007): “Estimation and Confidence Regions for Parameter Sets in Econometric Models,” Econometrica, 75, 1243–1284. [1522] CHESHER, A. (2003): “Identification in Nonseparable Models,” Econometrica, 71, 1405–1441. [1517] CHIAPPORI, P. A., AND B. SALANIÉ (2000): “Testing for Asymmetric Information in Insurance Markets,” Journal of Political Economy, 108, 56–78. [1526] (2003): “Testing Contract Theory: A Survey of Some Recent Work,” in Advances in Economics and Econometrics: Eighth World Congress, Vol. I, ed. by M. Dewatripont, L. P. Hansen, and S. Turnovky. Cambridge, U.K.: Cambridge University Press, 115–149. [1500] CRAWFORD, G., AND M. SHUM (2007): “Monopoly Quality Degradation and Regulation in Cable Television,” Journal of Law and Economics, 50, 181–219. [1500] D’HAULTFOEUILLE, X., AND P. FÉVRIER (2007): “Identification and Estimation of Incentive Problems: Adverse Selection,” Unpublished Manuscript, CREST-INSEE. [1500,1526] EINAV, L., M. JENKINS, AND J. LEVIN (2008): “Contract Pricing in Consumer Credit Markets,” Unpublished Manuscript, Stanford University. [1500] ESTACHE, A., AND L. WREN-LEWIS (2009): “Toward a Theory of Regulation for Developing Countries: Following Jean-Jacques Laffont’s Lead,” Journal of Economic Literature, 47, 729–770. [1511] GAGNEPAIN, P., AND M. IVALDI (2002): “Incentive Regulatory Policies: The Case of Public Transit Systems in France,” Rand Journal of Economics, 33, 605–629. [1500,1510,1514] GAYLE, G. L., AND R. MILLER (2008): “Identifying and Testing Generalized Moral Hazard Models of Managerial Compensation,” Unpublished Manuscript, Carnegie Mellon University. [1500,1526] GENERAL ACCOUNTING OFFICE (2002): “Contract Management Guidance Needed for Using Performance-Based Service Contracting,” GAO-02-1049. [1500] GUERRE, E., I. PERRIGNE, AND Q. 
VUONG (2000): “Optimal Nonparametric Estimation of FirstPrice Auctions,” Econometrica, 68, 525–574. [1501,1514,1527] (2009): “Nonparametric Identification of Risk Aversion in First-Price Auctions Under Exclusion Restrictions,” Econometrica, 77, 1193–1227. [1523]
1538
I. PERRIGNE AND Q. VUONG
HASTIE, T., AND R. TIBSHIRANI (1990): Generalized Additive Models. New York: Chapman and Hall. [1517] HO, J., K. HO, AND J. MORTIMER (2011): “The Use of Full-Line Forcing Contracts in the Video Rental Industry,” American Economic Review (forthcoming). [1500] IMBENS, G., AND W. NEWEY (2009): “Identification and Estimation of Triangular Simultaneous Equations Models Without Additivity,” Econometrica, 77, 1481–1512. [1517] INTERNATIONAL DEVELOPMENT ASSOCIATION (2006): “A Review of the Use of Output-Based Aid Approaches,” Technical Report. [1500] IVALDI, M., AND D. MARTIMORT (1994): “Competition Under Nonlinear Pricing,” Annales d’Economie et de Statistique, 34, 71–114. [1500] JORGENSON, D., AND K. Y. YUN (1991): “The Excess Burden of US Taxation,” Journal of Accounting, Auditing and Finance, 6, 487–509. [1519] JOSKOW, N., AND N. ROSE (1989): “The Effects of Economic Regulation,” in Handbook of Industrial Organization, Vol. II, ed. by R. Schlamensee and R. Willig. Amsterdam, Netherlands: North Holland, 1449–1506. [1500] JOSKOW, P. (2005): “Regulation and Deregulation After 25 Years: Lessons Learned for Research in Industrial Organization,” Review of Industrial Organization, 26, 169–193. [1510,1523] JOSKOW, P., AND R. SCHMALENSEE (1986): “Incentive Regulation for Electric Utilities,” Yale Journal on Regulation, 4, 1–49. [1503,1510,1523] LAFFONT, J. J. (1994): “The New Economics of Regulation Ten Years After,” Econometrica, 62, 507–537. [1500] LAFFONT, J. J., AND D. MARTIMORT (2002): The Theory of Incentives: The Principal-Agent Model. Princeton, NJ: Princeton University Press. [1499,1525] LAFFONT, J. J., AND J. TIROLE (1986): “Using Cost Observation to Regulate Firms,” Journal of Political Economy, 94, 614–641. [1499-1503,1507-1510,1523,1526] (1993): A Theory of Incentives in Procurement and Regulation. Cambridge, MA: MIT Press. [1500,1502,1513,1523,1524] LAZEAR, E. (2000): “Performance Pay and Productivity,” American Economic Review, 90, 1346–1361. [1526] MANSKI, C., AND E. TAMER (2002): “Inference on Regressions With Interval Data on a Regressor or Outcome,” Econometrica, 70, 519–546. [1522] MATZKIN, R. (1994): “Restrictions of Economic Theory in Nonparametric Methods,” in Handbook of Econometrics, Vol. IV, ed. by R. Engle and D. McFadden. Amsterdam, Netherlands: North Holland. [1517] (2003): “Nonparametric Estimation of Nonadditive Random Functions,” Econometrica, 71, 1339–1375. [1517,1518,1532] (2008): “Identification in Nonparametric Simultaneous Equations,” Econometrica, 76, 945–978. [1517] MIRAVETE, E. (2002): “Estimating Demand for Local Telephone Service With Asymmetric Information and Optional Calling Plans,” Review of Economic Studies, 69, 943–971. [1500] MYERSON, R. (1982): “Optimal Coordination Mechanisms in Generalized Principal-Agent Models,” Journal of Mathematical Economics, 10, 67–81. [1500] PAARSCH, H., AND B. SHEARER (2000): “Piece Rates, Fixed Wages and Incentive Effects: Statistical Evidence From Payroll Records,” International Economic Review, 41, 59–92. [1500] PAKES, A., J. PORTER, K. HO, AND J. ISHII (2006): “Moment Inequalities and Their Application,” Unpublished Manuscript, Harvard University. [1500] PARK, B., R. SICKLES, AND L. SIMAR (1998): “Stochastic Panel Frontiers: A Semiparametric Approach,” Journal of Econometrics, 84, 273–301. [1501] PAVAN, A., I. SEGAL, AND J. TOIKKA (2008): “Dynamic Mechanism Design: Revenue Equivalence, Profit Maximization and Information Disclosure,” Unpublished Manuscript, Stanford University. [1525] PERRIGNE, I. 
(2002): “Incentive Regulatory Contracts in Public Transportation: An Empirical Study,” Unpublished Manuscript, Pennsylvania State University. [1510,1519]
IDENTIFICATION OF A CONTRACT MODEL
1539
PERRIGNE, I., AND S. SURANA (2004): “Politics and Regulation: The Case of Public Transit,” Unpublished Manuscript, Pennsylvania State University. [1510] PERRIGNE, I., AND Q. VUONG (2009): “Estimating Cost Efficiency Under Asymmetric Information,” Unpublished Manuscript, Pennsylvania State University. [1514] (2010): “Nonlinear Pricing in Yellow Pages,” Unpublished Manuscript, Pennsylvania State University. [1500,1526] (2011a): “Supplement to ‘Nonparametric Identification of a Contract Model With Adverse Selection and Moral Hazard’,” Econometrica Supplemental Material, 79, http://www. econometricsociety.org/ecta/Supmat/6954_proofs.pdf. [1502] (2011b): “On the Identification of the Procurement Model,” Economics Letters (forthcoming). [1502] PRAKASA RAO, B. (1992): Identifiability in Stochastic Models. New York: Academic Press. [1512] ROEHRIG, C. (1988): “Conditions for Identification in Nonparametric and Parametric Models,” Econometrica, 56, 433–447. [1512] SINGH, J., D. LIMAYE, B. HENDERSON, AND X. SHI (2010): Public Procurement of Energy Efficiency Services: Lessons From International Experience. Washington, DC: The World Bank. [1500] THOMAS, A. (1995): “Regulating Pollution Under Asymmetric Information: The Case of Industrial Wastewater Treatment,” Journal of Environmental Economics and Management, 28, 357–373. [1500] WALTERS, M., AND E. AURIOL (2007): “The Marginal Cost of Public Funds in Developing Countries: An Application to 38 African Countries,” Unpublished Manuscript, IDEI. [1519] WOLAK, F. (1994): “An Econometric Analysis of the Asymmetric Information, Regulator-Utility Interaction,” Annales d’Economie et de Statistique, 34, 13–69. [1500,1510,1526]
Dept. of Economics, Pennsylvania State University, University Park, PA 16802, U.S.A.;
[email protected] and Dept. of Economics, Pennsylvania State University, University Park, PA 16802, U.S.A.;
[email protected]. Manuscript received January, 2007; final revision received February, 2011.
Econometrica, Vol. 79, No. 5 (September, 2011), 1541–1565
NONPARAMETRIC INSTRUMENTAL REGRESSION BY S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT1 The focus of this paper is the nonparametric estimation of an instrumental regression function ϕ defined by conditional moment restrictions that stem from a structural econometric model E[Y − ϕ(Z) | W ] = 0, and involve endogenous variables Y and Z and instruments W . The function ϕ is the solution of an ill-posed inverse problem and we propose an estimation procedure based on Tikhonov regularization. The paper analyzes identification and overidentification of this model, and presents asymptotic properties of the estimated nonparametric instrumental regression function. KEYWORDS: Instrumental variables, integral equation, ill-posed problem, Tikhonov regularization, kernel smoothing.
1. INTRODUCTION AN ECONOMIC RELATIONSHIP between a response variable Y and a vector Z of explanatory variables is often represented by the equation (1.1)
Y = ϕ(Z) + U
where the function ϕ should define the relationship of interest while U is an error term.2 The relationship (1.1) does not characterize the function ϕ if the residual term is not constrained. This difficulty is solved if it is assumed that E[U | Z] = 0 or, equivalently, ϕ(Z) = E[Y | Z]. However, in numerous structural econometric models, the conditional expectation function is not the parameter of interest. The structural parameter is a relation between Y and Z, where some of the Z components are endogenous. This is, for example, the case in various situations such as simultaneous equations, error-in-variables models, and treatment models with endogenous selection. This paper considers an instrumental variables treatment of the endogeneity. The introduction of instruments may be done in several ways. Our framework is based on the introduction of a vector W of instruments such that ϕ is defined as the solution of (1.2)
E[U | W ] = E[Y − ϕ(Z) | W ] = 0
1 We first want to thank our coauthors on papers strongly related with this one: M. Carrasco, C. Gourieroux, J. Johannes, J. Heckman, C. Meghir, S. Van Bellegem, A. Vanhems, and E. Vytlacil. We also acknowledge helpful comments from a co-editor, the referees, and D. Bosq, X. Chen, L. Hansen, P. Lavergne, J. M. Loubes, W. Newey, and J. M. Rolin. We thank the participants at conferences and seminars in Chicago, Harvard–MIT, London, Louvain-la-Neuve, Montreal, Paris, Princeton, Santiago, Seattle, Stanford, Stony Brook, and Toulouse. We also thank R. Lestringand, who performed the numerical illustration given in Section 5. 2 We remain true to the tradition in Econometrics of additive error terms. See, for example, Florens (2005), Horowitz and Lee (2007), and Imbens and Newey (2009) for alternative structural approaches.
© 2011 The Econometric Society
DOI: 10.3982/ECTA6539
1542
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
Instrumental variables estimation may be also introduced using control functions (for a systematic treatment, see Newey, Powell, and Vella (1999)) or local instrumental variables (see, e.g., Florens, Heckman, Meghir, and Vytlacil (2008)). Equation (1.2) characterizes ϕ as the solution of a Fredholm integral equation of the first kind, and this inverse problem is known to be ill-posed and to need a regularization method. The connection between instrumental variables estimation and ill-posed inverse problems was pointed out by Florens (2003) and Newey and Powell (2003). Florens (2003) proposed to address this question using a Tikhonov regularization approach, which is also used in Carrasco and Florens (2000) to treat generalized method of moments (GMM) estimation with an infinite number of moment conditions. The Tikhonov approach was also adopted by Hall and Horowitz (2005). Newey and Powell (2003) resorted to a different analysis based on sieve estimation under regularization by compactness. The literature on ill-posed inverse problems is huge, particularly in numerical analysis and image processing. The deconvolution problem is one of the main uses of inverse problems in statistics (see Carrasco and Florens (2011)). The main features of the instrumental variables estimation come from the necessity of the estimation of the equation itself (and not only the right hand side) and from the combination of parametric and nonparametric rates of convergence. The theory of inverse problems introduces to econometrics a different, albeit related, class of concepts of regularity of functions. Source conditions extend standard differentiability assumptions used, for example, in kernel smoothing. Even though the present paper is self-contained, we refer to Carrasco, Florens, and Renault (2007) for a general discussion on inverse problem in econometrics. This paper is organized as follows. In Section 2 the instrumental regression problem (1.2) is precisely defined and the identification of ϕ is discussed. Section 3 discusses the ill-posedness and presents regularization methods and regularity spaces. The estimator is defined in Section 4, and consistency and rate of convergence are analyzed. Section 5 briefly considers practical questions about the implementation of our estimator and displays some simulations. Some extensions are suggested in the conclusion section. Proofs and other details of mathematical rigor are provided in two appendices. Appendix A contains the proofs of the theorems while Appendix B, provided in the Supplemental Material (Darolles, Fan, Florens, and Renault (2011)) shows that the maintained assumptions can be derived from more primitive conditions on the Data Generating Process (DGP). Throughout the rest of this paper, all the limits are taken as the sample size N goes to infinity, unless otherwise stated. We use fA (·) and fAB (· ·) to denote the density function of the random variable A and the joint density function of the random variables A B, respectively. In addition, we use fA|B (·|b) and fA|BC (·|b c) to denote the conditional density functions of A given B = b and B = b C = c, respectively. For two numbers α and β, we let α ∧ β = min(α β).
NONPARAMETRIC INSTRUMENTAL REGRESSION
1543
2. THE INSTRUMENTAL REGRESSION AND ITS IDENTIFICATION 2.1. Definitions We denote by S = (Y Z W ) a random vector partitioned into Y ∈ R, Z ∈ Rp , and W ∈ Rq . The probability distribution on S is characterized by its joint cumulative distribution function (c.d.f.) F . We assume that Y , the first coordinate of S, is square integrable. This condition is actually a condition on F ; F denotes the set of all c.d.f.s that satisfy this integrability condition. For a given F , we consider the Hilbert space L2F of square integrable functions of S, and we denote by L2F (Y ), L2F (Z), and L2F (W ) the subspaces of L2F of real valued functions that depend on Y , Z, or W only. We denote by · and · · the norm and scalar product in these spaces. Typically, F is the true distribution function from which the observations are generated and these L2F spaces are related to this distribution. In this section, no additional restriction is maintained on the functional spaces, but more conditions are necessary, in particular, for the analysis of the asymptotic properties. These restrictions are only introduced when necessary. DEFINITION 2.1: We call instrumental regression any function ϕ ∈ L2F (Z) that satisfies the condition (2.1)
Y = ϕ(Z) + U
E[U | W ] = 0
Equivalently, ϕ corresponds to any solution of the functional equation (2.2)
E[Y − ϕ(Z) | W ] = 0
If Z and W are identical, ϕ is equal to the conditional expectation of Y given Z and then it is uniquely defined. In the general case, additional conditions are required to identify ϕ uniquely by (21) or (22). EXAMPLE 2.1: We assume that S ∼ N(μ Σ) and we restrict our attention to linear instrumental functions ϕ, ϕ(z) = Az +b. Conditions (21) are satisfied if and only if AΣZW = ΣY W , where ΣZW = cov(Z W ) and ΣY W = cov(Y W ). If Z and W have the same dimension and if ΣZW is nonsingular, then A = ΣY W Σ−1 ZW and b = μY − AμZ . We see later that this linear solution is the unique solution of (22) in the normal case. If Z and W do not have the same dimension, more conditions are needed for the existence and uniqueness of ϕ. It is useful to introduce the following two notations: (i) T : L2F (Z) → L2F (W ), ϕ → T ϕ = E[ϕ(Z) | W ]. (ii) T ∗ : L2F (W ) → L2F (Z), ψ → T ∗ ψ = E[ψ(W ) | Z]. These two linear operators satisfy: ϕ(Z) ψ(W ) = E[ϕ(Z)ψ(W )] = T ϕ(W ) ψ(W ) = ϕ(Z) T ∗ ψ(Z)
1544
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
and then T ∗ is the adjoint (or dual) operator of T ; the reciprocal also holds. Using these notations, ϕ corresponds to any solution of the functional equation: (2.3)
A(ϕ F) = T ϕ − r = 0
where r(W ) = E[Y | W ]. This implicit definition of the parameter of interest ϕ as a solution of an equation that is dependent on the data generating process is the main characteristic of the structural approach in econometrics. In our case, note that equation (23) is linear in ϕ. If the joint c.d.f. F is characterized by its density f (y z w) with respect to (w.r.t.) the Lebesgue measure, equation (23) is an integral Fredholm type I equation, fZW (z w) ϕ(z) (2.4) dz = r(w) fW (w) where r(w) = yfYW (y w) dy/fW (w). The estimation of a function by solving an integral equation is a usual problem in nonparametric statistics. The simpler issue of nonparametric estimation of a density function is actually an ill-posed inverse problem. From the empirical counterpart of the cumulative distribution function, we have a root-n consistent estimator of the integral of the density function on any interval of the real line. It is precisely the necessary regularization of the ill-posed characterization of the density function, which leads to nonparametric rates of convergence for density estimation (see, e.g., Hardle and Linton (1994) and Vapnik (1998)). The inverse problem (24) is an even more difficult issue since its inputs for statistical estimation of ϕ are nonparametric estimators of the functions fZW , fW , and r, which also involve nonparametric speeds of convergence. However, one contribution of this paper is to show that the dimension of W has no negative impact on the resulting speed of convergence of the estimator of ϕ. Roughly speaking, increasing the dimension of W increases the speed of convergence. The usual dimensionality curse in nonparametric estimation is only dependent on the dimension of Z 2.2. Identification The c.d.f. F and the regression function r are directly identifiable from the random vector S. Our objective is then to study the identification of the function of interest ϕ. The solution of equation (23) is unique if and only if T is one-to-one (or, equivalently, the null space N (T ) of T is reduced to zero). This abstract condition on F can be related to a probabilistic point of view using the fact that T is a conditional expectation operator.
NONPARAMETRIC INSTRUMENTAL REGRESSION
1545
This concept is well known in statistics and it corresponds to the notion of a complete statistic3 (see Lehman and Scheffe (1950) and Basu (1955)). A systematic study is made in Florens and Mouchart (1986) and Florens, Mouchart, and Rolin (1990, Chap. 5), under the name of strong identification (in a L2 sense), of the σ-field generated by the random vector Z by the σ-field generated by the random vector W . The characterization of identification in terms of completeness of the conditional distribution function of Z given W was already provided by Newey and Powell (2003). They also discussed the particular case detailed in Example 2.2 below. Actually, the strong identification assumption can be interpreted as a nonparametric rank condition as it is shown in the following example that deals with the normal case. EXAMPLE 2.2: Following Example 2.1, let us consider a random normal vector (Z W ). The vector Z is strongly identifiable by W if one of the three following equivalent conditions is satisfied (see Florens, Mouchart, and Rolin (1993)): (i) N (ΣZZ ) = N (ΣW Z ). (ii) N (ΣW Z ) ⊂ N (ΣZZ − ΣZW Σ−1 W W ΣW Z ). (iii) Rank(ΣZZ ) = Rank(ΣW Z ). In particular, if ΣZZ is nonsingular, the dimension of W must be greater than or equal to the dimension of Z. If the joint distribution of (Y Z W ) is normal and if a linear instrumental regression is uniquely defined as in Example 2.1, then it is the unique instrumental regression. The identification condition can be checked in specific models (see, e.g., Blundell, Chen, and Kristensen (2007)). It is also worth interpreting it in terms of the adjoint operator T ∗ of T PROPOSITION 2.1: The three following conditions are equivalent: (i) ϕ is identifiable. (ii) T ∗ T is one-to-one. (iii) R(T ∗ ) = L2F (Z), where E is the closure of E ⊂ L2F (Z) in the Hilbert sense and R(T ∗ ) is the range of T ∗ . We now introduce an assumption which is only a regularity condition when Z and W have no element in common. However, this assumption cannot be satisfied if there are some elements in common between Z and W . For an extension, see Feve and Florens (2010). 3 A statistic t is complete in a probability model dependent on θ if E[λ(t) | θ] = 0 ∀θ implies λ(t) = 0
1546
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
ASSUMPTION A.1: The joint distribution of (Z W ) is dominated by the product of its marginal distributions and its density is square integrable w.r.t. the product of margins. Assumption A.1 amounts to assuming that T and T ∗ are Hilbert–Schmidt operators, and is a sufficient condition of the compactness of T , T ∗ , T T ∗ , and T ∗ T , (see Lancaster (1958), and Darolles, Florens, and Renault (1998)). Therefore, there exists a singular values decomposition (see Kress (1999, Sec. 15.4)), that is, a sequence of nonnegative real numbers λ0 = 1 ≥ λ1 ≥ λ2 · · · and two sequences of functions ϕi , i ≥ 0, and ψj , j ≥ 0, such that: Singular Values Decomposition (SVD) (i) ϕi , i ≥ 0, is an orthonormal sequence of L2F (Z) (i.e., ϕi ϕj = δij , i j ≥ 0, where δij is the Kronecker symbol) and ψj , j ≥ 0, is an orthonormal sequence of L2F (W ). (ii) T ϕi = λi ψi , i ≥ 0. (iii) T ∗ ψi = λi ϕi , i ≥ 0. (iv) ϕ0 = 1 and ψ0 = 1. (v) ϕi ψj = λi δij , i j ≥ 0. ∞ ¯ where g¯ ∈ N (T ). (vi) ∀g ∈ L2F (Z), g(z) = i=0 g ϕi ϕi (z) + g(z), ∞ 2 ¯ (vii) ∀h ∈ LF (W ), h(w) = i=0 h ψi ψi (w) + h(w), where h¯ ∈ N (T ∗ ). Thus T [g(Z)](w) = E[g(Z) | W = w] =
∞
λi g ϕi ψi (w)
i=0
and T ∗ [h(W )](z) = E[h(W ) | Z = z] =
∞
λi h ψi ϕi (z)
i=0
The strong identification assumption of Z by W can be characterized in terms of the singular values decomposition of T . Actually, since ϕ is identifiable if and only if T ∗ T is one-to-one, we have a corollary: COROLLARY 2.1: Under Assumption A.1, ϕ is identifiable if and only if 0 is not an eigenvalue of T ∗ T . Note that the two operators T ∗ T and T T ∗ have the same nonnull eigenvalues λ2i , i ≥ 0. But, for example, if W and Z are jointly normal, 0 is an eigenvalue of T T ∗ as soon as dim W > dim Z and Σ is nonsingular.4 But if ΣW Z is of full-column rank, 0 is not an eigenvalue of T ∗ T 4
In this case, a ΣW Z = 0 ⇒ T ∗ (a W ) = 0
NONPARAMETRIC INSTRUMENTAL REGRESSION
1547
The strong identification assumption corresponds to λi > 0 for any i. It means that there is a sufficient level of nonlinear correlation between the two sets of random variables Z and W . Then we can directly deduce the Fourier decomposition of the inverse of T ∗ T from that of T ∗ T by inverting the λi s. Note that in these Fourier decompositions, the sequence of eigenvalues, albeit all positive, decreases fast to zero due to the Hilbert–Schmidt property. It should be stressed that the compactness and the Hilbert–Schmidt assumptions are not simplifying assumptions, but describe a realistic framework (we can consider, for instance, the normal case). These assumptions formalize the decline to zero of the spectrum of the operator, and make the inverse problem ill-posed and then more involved for statistical applications. Assuming that the spectrum is bounded from below may be relevant for other econometric applications, but is not a realistic assumption for the continuous nonparametric instrumental variable (IV) estimation. We conclude this section with a result that illustrates the role of the instruments in the decline of the λj . The following theorem shows that increasing the number of instruments increases the singular values and then the dependence between the Z and the W . THEOREM 2.1: Let us assume that W = (W1 W2 ) ∈ Rq1 × Rq2 (q1 + q2 = q). Denote by T1 the operator ϕ ∈ L2F (Z) → E[ϕ | W1 ] ∈ L2F (W1 ) and by T1∗ its dual. Then T1 is still a Hilbert–Schmidt operator and the eigenvalues of T1∗ T1 λ2j1 , satisfy λj1 ≤ λj where the eigenvalues are ranked as a nondecreasing sequence and each eigenvalue is repeated according to its multiplicity order. 3 EXAMPLE 2.3: Consider the case (Z W1 W2 ) ∈ R endowedwith a joint nor-
mal distribution with a zero mean and a variance
1 ρ1 ρ2
ρ1 1 0
T ∗ T is a conditional expectation operator characterized by
ρ2 0 1
. The operator
Z | u ∼ N[(ρ21 + ρ22 )u 1 − (ρ21 + ρ22 )2 ] and its eigenvalues λ2j are (ρ21 + ρ22 )j . The eigenvectors of T ∗ T are the Hermite polynomials of the invariant distribution of this transition, that is, the 1−(ρ4 +ρ4 )
2j
N(0 1−(ρ12 +ρ22 ) ). The eigenvalues of T1∗ T1 are λ2j1 = ρ1 and the eigenvectors are 1 2 the Hermite polynomials of the N(0 1) distribution.
1548
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
3. EXISTENCE OF THE INSTRUMENTAL REGRESSION: AN ILL-POSED INVERSE PROBLEM
The focus of our interest in this section is to characterize the solution of the IV equation (23), (3.1)
T ϕ = r
under the maintained identification assumption that T is one-to-one. The following result is known as the Picard theorem (see, e.g., Kress (1999)). 3.1: r belongs to the range R(T ) if and only if the series PROPOSITION 1 2 < r ψ > ϕ i i converges in LF (Z). Then r = T ϕ with i≥0 λi ϕ=
1 r ψi ϕi λi i≥0
Although Proposition 3.1 ensures the existence of the solution ϕ of the inverse problem (3.1), this problem is said to be ill-posed because a noisy measurement of r—r + δψi , say (with δ arbitrarily small)—leads to a perturbed solution ϕ + λδi ϕi which can be infinitely far from the true solution ϕ, since λi can be arbitrarily small (λi → 0 as i → ∞). This is actually the price to pay to be nonparametric, that is, not to assume a priori that r = E[Y | W ] is in a given finite dimensional space. Although in the finite dimensional case, all linear operators are continuous, the inverse of the operator T , albeit well defined by Proposition 3.1 on the range of T , is not a continuous operator. Looking for one regularized solution is a classical way to overcome this problem of noncontinuity. A variety of regularization schemes are available in the literature5 (see e.g., Kress (1999) and Carrasco, Florens, and Renault (2007) for econometric applications) but we focus in this paper on the Tikhonov regularized solution (3.2)
ϕα = (αI + T ∗ T )−1 T ∗ r =
i≥0
λi r ψi ϕi α + λ2i
or, equivalently, (3.3)
ϕα = arg min[r − T ϕ2 + αϕ2 ] ϕ
5 More generally, there is a large literature on ill-posed inverse problems (see, e.g., Wahba (1973), Nashed and Wahba (1974), Tikhonov and Arsenin (1977), Groetsch (1984), Kress (1999), and Engl, Hanke, and Neubauer (2000)). For other econometric applications see Carrasco and Florens (2000), Florens (2003), Carrasco, Florens, and Renault (2007), and references therein.
NONPARAMETRIC INSTRUMENTAL REGRESSION
1549
By comparison with the exact solution of Proposition 3.1, the intuition of the regularized solution (3.2) is quite clear. The idea is to control the decay of λi eigenvalues λi (and implied explosive behavior of λ1i ) by replacing λ1i with α+λ 2. i
Equivalently, this result is obtained by adding a penalty term αϕ2 to the minimization of T ϕ − r2 , which leads to the (noncontinuous) generalized inverse. Then α is chosen to be positive and converging to zero with a speed well tuned with respect to both the observation error on r and the convergence of λi . Actually, it can be shown (see Kress (1999, p. 285)) that lim ϕ − ϕα = 0
α→0
Note that the regularization bias is (3.4)
ϕ − ϕα = [I − (αI + T ∗ T )−1 T ∗ T ]ϕ = α(αI + T ∗ T )−1 ϕ
To control the speed of convergence to zero of the regularization bias ϕ − ϕα , it is worth restricting the space of possible values of the solution ϕ. This is the reason why we introduce the spaces ΦFβ β > 0. DEFINITION 3.1: For any positive β, ΨβF (resp. ΦFβ ) denotes the set of functions ψ ∈ L2F (W ) (resp. ϕ ∈ L2F (Z)) such that ψ ψi 2 i≥0
λ2β i
< +∞
resp.
ϕ ϕi 2 i≥0
λ2β i
< +∞
It is then clear that the following statements hold: (i) β ≤ β ⇒ ΨβF ⊃ ΨβF and ΦFβ ⊃ ΦFβ . (ii) T ϕ = r admits a solution that implies r ∈ Ψ1F . (iii) r ∈ ΨβF , β > 1 ⇒ ϕ ∈ ΦFβ−1 . (iv) ΦFβ = R[(T ∗ T )β/2 ] and ΨβF = R[(T T ∗ )β/2 ]6 The condition ϕ ∈ ΦFβ is called a source condition (see, e.g., Engl, Hanke, and Neubauer (2000)). It involves both the properties of the solution ϕ (through its Fourier coefficients ϕ ϕi ) and the properties of the conditional expectation operator T (through its singular values λi ). As an example, Hall and Horowitz (2005) assumed ϕ ϕi ∼ i1a and λi ∼ i1b . Then ϕ ∈ ΦFβ if β < b1 (a − 12 ). However, it can be shown that choosing b is akin to choose the degree of smoothness of the joint probability density function of (Z W ). This is the reason why 6 The fractional power of an operator is trivially defined through its spectral decomposition, as in the elementary matrix case.
1550
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
here we maintain a high-level assumption ϕ ∈ ΦFβ , without being tightly constrained by specific rates. Generally speaking, it can be shown that the maximum value allowed for β depends on the degrees of smoothness of the solution ϕ (rate of decay of ϕ ϕi ) as well as the degree of ill-posedness of the inverse problem (rate of decay of singular values λi ).7 It is also worth stressing that the source condition is even more restrictive when the inverse problem is severely ill-posed, because the rate of decay of the singular values λi of the conditional expectation operator is exponential. This is the case, in particular, when the random vectors Z and W are jointly normally distributed (see Example 2.3). In this case, the source condition requires that the Fourier coefficients ϕ ϕi (in the basis of Hermite polynomials ϕi ) of the unknown function ϕ are themselves going to zero at an exponential rate. In other words, we need to assume that the solution ϕ is extremely well approximated by a polynomial expansion. If, on the contrary, these Fourier coefficients have a rate of decay that is only like 1/ia for some a > 0, our maintained source condition is no longer fulfilled in the Gaussian case and we would no longer obtain the polynomial rates of convergence documented in Section 4.2. Instead, we would get rates of convergence that are polynomial in log n as in super-smooth deconvolution problems (see, e.g., Johannes, Van Bellegem, and Vanhems (2011)). ASSUMPTION A.2: For some real β, we have ϕ ∈ ΦFβ . The main reason why the spaces ΦFβ are worthwhile to consider is the following result (see Carrasco, Florens, and Renault (2007, p. 5679)). PROPOSITION 3.2: If ϕ ∈ ΦFβ for some β > 0 and ϕα = (αI + T ∗ T )−1 T ∗ T ϕ, then ϕ − ϕα 2 = O(αβ∧2 ) when α goes to zero. Even though the Tikhonov regularization scheme is the only scheme used in all the theoretical developments of this paper, its main drawback is obvious from Proposition 3.2. It cannot take advantage of a degree of smoothness β for ϕ larger than 2: its so-called qualification is 2 (see Engl, Hanke, and Neubauer (2000) for more details about this concept). However, iterating the Tikhonov regularization allows us to increase its qualification. Let us consider the sequence of iterated regularization schemes ϕα(1) = (αI + T ∗ T )−1 T ∗ T ϕ ϕα(k) = (αI + T ∗ T )−1 T ∗ T ϕ + αϕα(k−1) 7 A general study of the relationship between smoothness and Fourier coefficients is beyond the scope of this paper. It involves the concept of Hilbert scale (see Engl, Hanke, and Neubauer (2000), Chen and Reiss (2011), and Johannes, Van Bellegem, and Vanhems (2011)).
NONPARAMETRIC INSTRUMENTAL REGRESSION
1551
Then it can be shown (see Engl, Hanke, and Neubauer (2000, p. 123)) that the qualification of ϕα(k) is 2k, that is, ϕ − ϕα(k) = O(αβ∧2k ). To see this, note that ϕα(k) =
(λ2 + α)k − αk i
i≥0
λi (α + λ2i )k
ϕ ϕi ϕi
Another way to increase the qualification of the Tikhonov regularization is to replace the norm of ϕ in (3.3) by a Sobolev norm (see Florens, Johannes, and Van Bellegem (2011a)). 4. STATISTICAL INVERSE PROBLEM 4.1. Estimation To estimate the regularized solution (3.2) by a Tikhonov method, we need to estimate T , T ∗ , and r. In this section, we introduce the kernel approach. We assume that Z and W take values in [0 1]p and [0 1]q , respectively. This assumption is not really restrictive, up to some monotone transformations. We start by introducing univariate generalized kernel functions of order l. DEFINITION 4.1: Let h ≡ hN → 0 denote a bandwidth8 and let Kh (· ·) denote a univariate generalized kernel function with the properties Kh (u t) = 0 if u > t or u < t − 1; for all t ∈ [0 1],
t 1 if j = 0, −(j+1) j h u Kh (u t) du = 0 if 1 ≤ j ≤ l − 1. t−1 We call Kh (· ·) a univariate generalized kernel function of order l. The following example is taken from Muller (1991), where examples of K+ (· ·) and K− (· ·) are provided. EXAMPLE 4.1: Define
M0l ([a1 a2 ]) = g ∈ Lip([a1 a2 ])
a2 a1
1 if j = 0, x g(x) dx = 0 if 1 ≤ j ≤ l − 1 j
where Lip([a1 a2 ]) denotes the space of Lipschitz continuous functions on [a1 a2 ]. Define K+ (· ·) and K− (· ·) as follows: 8
We use h and hN interchangeably in the rest of this paper.
1552
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
(i) The support of K+ (x q ) is [−1 q ] × [0 1] and the support of K− (x q ) is [−q 1] × [0 1]. (ii) K+ (· q ) ∈ M0l ([−1 q ]) and K− (· q ) ∈ M0l ([−q 1]) We note that K+ (· 1) = K− (· 1) = K(·) ∈ M0l ([−1 1]). Now let ⎧ K+ (u 1) if h ≤ t ≤ 1 − h, ⎪ ⎪ ⎪ ⎪ u t ⎨ K+ if 0 ≤ t ≤ h, h h Kh (u t) = ⎪ ⎪ ⎪ u 1−t ⎪ ⎩ K− if 1 − h ≤ t ≤ 1 h h Then we can show that Kh (· ·) is a generalized kernel function of order l. A special class of multivariate generalized kernel functions of order l is given by a class of products of univariate generalized kernel functions of order l. Let KZh and KW h denote two generalized multivariate kernel functions of respective dimensions p and q. First we estimate the density functions fZW (z w), fW (w), and fZ (z)9 : fZW (z w) = fW (w) = fZ (z) =
N 1 KZh (z − zn z)KW h (w − wn w) Nhp+q n=1
N 1 KW h (w − wn w) Nhq n=1
N 1 KZh (z − zn z) Nhp n=1
Then the estimators of T , T ∗ , and r are fZW (z w) ˆ dz (T ϕ)(w) = ϕ(z) fW (w) fZW (z w) ∗ ˆ dw (T ψ)(z) = ψ(w) fZ (z) N
rˆ(w) =
yn KW h (w − wn w)
n=1 N
KW h (w − wn w)
n=1 9 For simplicity of notation and exposition, we use the same bandwidth to estimate fZW , fW , and fZ . This can obviously be relaxed.
NONPARAMETRIC INSTRUMENTAL REGRESSION
1553
Note that Tˆ (resp. Tˆ ∗ ) is a finite rank operator from L2F (Z) into L2F (W ) (resp. L2F (W ) into L2F (Z)). Moreover, rˆ belongs to L2F (W ) and thus Tˆ ∗ rˆ is a well defined element of L2F (Z). However, Tˆ ∗ is not, in general, the adjoint operator of Tˆ . In particular, while T ∗ T is a nonnegative self-adjoint operator and thus αI + T ∗ T is invertible for any nonnegative α, it may not be the case for αI + Tˆ ∗ Tˆ . However, for given α and consistent estimators Tˆ and Tˆ ∗ , αI + Tˆ ∗ Tˆ is invertible for N sufficiently large. The estimator of ϕ is then obtained by plugging consistent estimators of T ∗ , T , and r in the solution (3.2) of the minimization (3.3). DEFINITION 4.2: For an (αN )N>0 given sequence of positive real numbers, we call the function ϕˆ αN = (αN I + Tˆ ∗ Tˆ )−1 Tˆ ∗ rˆ an estimated instrumental regression function. This estimator can be basically obtained by solving a linear system of N equations with N unknowns ϕˆ αN (zi ), i = 1 N, as explained in Section 5 below. 4.2. Consistency and Rate of Convergence Estimation of the instrumental regression as defined in Section 4.1 requires consistent estimation of T ∗ , T , and r ∗ = T ∗ r. The main objective of this section is to derive the statistical properties of the estimated instrumental regression function from the statistical properties of the estimators of T ∗ , T , and r ∗ . Following Section 4.1, we use kernel smoothing techniques to simplify the exposition, but we could generalize the approach and use any other nonparametric techniques (for a sieve approach, see Ai and Chen (2003)). The crucial issue is actually the rate of convergence of nonparametric estimators of T ∗ , T , and r ∗ . This rate is specified by Assumptions A.3 and A.4 below in relation with the bandwidth parameter chosen for all kernel estimators. We propose in Appendix B a justification of high-level Assumptions A.3 and A.4 through a set of more primitive sufficient conditions. ASSUMPTION A.3: There exists ρ ≥ 2 such that 1 1 2ρ 2ρ ∗ ∗ 2 ˆ Tˆ − T 2 = OP T + h − T = O + h P N N p+q p+q NhN NhN where the norm in the equation is the supremum norm (T = supϕ T ϕ with ϕ ≤ 1). ASSUMPTION A.4: Tˆ ∗ rˆ − Tˆ ∗ Tˆ ϕ2 = OP ( N1 + h2ρ N ). Assumption A.4 is not about estimation of r = E[Y | W ], but rather about estimation of r ∗ = E[E[Y | W ] | Z]. The situation is even more favorable, since
1554
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
we are not really interested in the whole estimation error about r ∗ , but only one part of it: Tˆ ∗ rˆ − Tˆ ∗ Tˆ ϕ = Tˆ ∗ [ˆr − Tˆ ϕ] The smoothing step by application of Tˆ ∗ allows us to get a parametric rate of convergence 1/N for the variance part of the estimation error. We can then state the main result of the paper. THEOREM 4.1: Under Assumptions A.1–A.4, we have 1 1 1 2ρ 2ρ (β−1)∧0 β∧2 αN 2 + hN + + αN ϕˆ − ϕ = OP 2 p+q + hN αN αN N NhN COROLLARY 4.1: Under Assumptions A.1–A.4, if (i) αN → 0 with Nα2N → ∞, p+q p+q 1−β (ii) hN → 0 with NhN → ∞, Nh2ρ → N → c < ∞, and (iii) β ≥ 1 or NhN αN ∞, then ϕˆ αN − ϕ2 → 0 To simplify the exposition, Corollary 4.1 is stated under the maintained assumption that h2ρ N goes to zero at least as fast as 1/N. Note that this assumpp+q tion, jointly with the condition NhN → ∞, implies that the degree ρ of regularity (order of differentiability of the joint density function of (Z W ) and . This constant is minimally binding. order of the kernel) is greater than p+q 2 For instance, it is fulfilled with ρ = 2 when considering p = 1 explanatory variable and q = 2 instruments. The main message of Corollary 4.1 is that it is only when the relevance of instruments, (i.e., the dependence between explanatory variables Z and instruments W ) is weak (β < 1) that consistency of our estimator takes more than the standard conditions on bandwidth (for the joint distribution of (Z W )) and a regularization parameter. Moreover, the cost of the nonparametric estimation of conditional expectations (see terms involving the bandwidth hN ) will, under very general conditions, be negligible compared with the two other terms Nα1 2 and αβ∧2 N . To see N this, first note that the optimal trade-off between these two terms leads us to choose αN ∝ N −1/((β∧2)+2) The two terms are then equivalent, 1 −(β∧2)/((β∧2)+2) ∼ αβ∧2 N ∼N Nα2N
NONPARAMETRIC INSTRUMENTAL REGRESSION
1555
and, in general, dominate the middle term,
(β−1)∧0 α 1 2ρ (β−1)∧0 = O N p+q p+q + hN αN NhN NhN
1 under the maintained assumption h2ρ N = O( N ). More precisely, it is always possible to choose a bandwidth hN such that
β∧2 1 αN p+q = O (β−1)∧0 NhN αN For αN ∝ N −1/((β∧2)+2) , it takes
(β+1)/(β+2) 1 when β < 1, O N p+q = 2/((β∧2)+2) hN O(N ) when β ≥ 1, p+q
which simply reinforces the constraint10 NhN 1 assumption h2ρ N = O( N ), it simply takes
→ ∞. Since we maintain the
⎧ β+1 ⎪ ⎪ if β < 1, p+q ⎨ β+2 ≤ 2 ⎪ 2ρ ⎪ ⎩ if β ≥ 1. (β ∧ 2) + 2 To summarize, we have proved the following corollary. COROLLARY 4.2: Under Assumptions A.1–A.4, take one of the following conditions to be fulfilled: . (i) β ≥ 1 and ρ ≥ [(β ∧ 2) + 2] p+q 4 p+q (ii) β < 1 and ρ ≥ ( β+2 )( ). β+1 2 Then, for αN proportional to N −1/((β∧2)+2) , there exist bandwidth choices such that ϕˆ αN − ϕ2 = OP N −(β∧2)/((β∧2)+2) was always sufficient for the In other words, while the condition ρ ≥ p+q 2 validity of Theorem 4.1, the stronger condition ρ ≥ p + q is always sufficient for Corollary 4.2. p+q
10 In fact, the stronger condition (NhN )−1 log N → 0 (see Assumption B.4 in Appendix B) is satisfied with this choice of hN .
1556
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
5. NUMERICAL IMPLEMENTATION AND EXAMPLES Let us come back on the computation of the estimator ϕˆ αN . This estimator is a solution of the equation11 (5.1)
(αN I + Tˆ ∗ Tˆ )ϕ = Tˆ ∗ rˆ
where the estimators of T ∗ and T are linear forms of ϕ ∈ L2F (Z) and ψ ∈ L2F (W ): Tˆ ϕ(w) =
N
an (ϕ)An (w)
n=1
Tˆ ∗ ψ(z) =
N
bn (ψ)Bn (z)
n=1
and rˆ(w) =
N
yn An (w)
n=1
with
1 KZh (z − zn z) dz hp 1 bn (ψ) = ψ(w) q KW h (w − wn w) dw h KW h (w − wn w) An (w) = N KW h (w − wk w) an (ϕ) =
ϕ(z)
k=1
Bn (z) =
KZh (z − zn z) N KZh (z − zk z) k=1
Equation (5.1) is then equivalent to N N (5.2) bm an (ϕ)An (w) Bm (z) αN ϕ(z) + m=1
=
N m=1
11
bm
N
n=1
yn An (w) Bm (z)
n=1
A more detailed presentation of the practice of nonparametric instrumental variable is given in Feve and Florens (2009).
NONPARAMETRIC INSTRUMENTAL REGRESSION
1557
This equation is solved in two steps: first integrate the previous equation multiplied by h1p KZh (z − zn z) to reduce the functional equation to a linear system where the unknowns are al (ϕ), l = 1 n αN al (ϕ) +
N
an (ϕ)bm (An (w))al (Bm (z))
mn=1
N
=
yn bm (An (w))al (Bm (z))
mn=1
or αN a + EFa = EFy with a = (al (ϕ))l y = (yn )n E = bm (An (w)) nm F = al (Bm (z)) lm The last equation can be solved directly to get the solution a = (αN + EF)−1 EFy. In a second step, equation (5.2) is used to compute ϕ at any value of z. These computations can be simplified if we use the approximations al (ϕ) ϕ(zl ) and bl (ψ) ψ(wl ). Equation (5.2) is then a linear system where the unknowns are the ϕ(zn ), n = 1 N. To illustrate the power of our approach and its simplicity, we present the following simulated example. The data generating process is Y = ϕ(Z) + U Z = 01W1 + 01W2 + V where
W =
W1 W2
0 1 03 ∼N 0 03 1
V ∼ N(0 (027)2 ) U = −05V + ε
ε ∼ N(0 (005)2 )
W V ε mutually independent. The function ϕ(Z) is chosen equal to Z 2 (which represents a maximal order of regularity in our model, i.e., β = 2) or e−|Z| (which is highly irregular). The bandwidths for kernel estimation are chosen equal to 0.45 (kernel on the Z variable) or 0.9 (kernel on the W variable) for ϕ(Z) = Z 2 , and 0.45 and 0.25
1558
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
in the e−|Z| case. For each selection of ϕ, we show the estimation for αN varying in a very large range, and selection of this parameter appears naturally. All the kernels are Gaussian.12 For ϕ(Z) = Z 2 , we present in Figure 1, graph 1, the set of data (N = 1000) in the (Z Y ) space, the true function, the kernel estimation of the regression, and our estimation. In graph 2, we show the evolution of our estimator for different values of αN . In graph 3, a Monte Carlo analysis is performed: a sample is generated 150 times and the estimation of ϕ is performed with the same bandwidths and same regularization parameter as in graph 1. All these curves are plotted and give an illustration of their distribution. Finally, graph 4 is identical to graph 1 with ϕ(Z) = e−|Z| and graph 5 corresponds to Graph 2 in the same case. Let us stress that the endogeneity bias in the estimation of the regression by kernel smoothing clearly appears. The estimated ϕ curve is not obviously related to the sample of Z and Y , and depends on the instrumental variables W . Even though they cannot be represented, the instruments play a central role in the estimation. The main question about the practical use of nonparametric instrumental variables estimation is the selection of the bandwidth and of the αN parameter. This question is complex and the construction of a data driven procedure for the simultaneous selection of hN and αN is still an open question. We propose the following sequential method: (i) Fix first the bandwidths for the estimation of r and of the joint density of Z and W (for the estimation of T and T ∗ ) by usual methods. Note that these bandwidths do not need to be equal for the two estimations. (ii) Select αN by a data driven method. We suggest the following method based on a residual approach that extends the discrepancy principle of Morozov (1993). We consider the “extended residuals” of the model defined by εαN = Tˆ ∗ rˆ − Tˆ ∗ Tˆ ϕˆ (2)N α
α
where ϕˆ (2)N is the iterated Tikhonov estimation of order 2. Then εαN ≤ Tˆ ∗ rˆ − Tˆ ∗ Tˆ ϕ + Tˆ ∗ Tˆ ϕ − Tˆ ∗ Tˆ ϕˆ α(2) To simplify the exposition, let us assume h2ρ N goes to zero at least as fast as 1/N. Then Assumption A.4 implies that the first term on the right hand side of the above displayed inequality is OP ( √1N ). Under the previous assumptions, it α α can be shown that Tˆ ∗ Tˆ (ϕˆ (2)N − ϕ)2 = (Tˆ ∗ Tˆ ϕ)(2)N − Tˆ ∗ Tˆ ϕ2 = OP (αN ). This last property requires a regularization method of qualification at least 4 to 12
Note that for simplicity we have not used generalized kernels in the simulation.
NONPARAMETRIC INSTRUMENTAL REGRESSION
FIGURE 1.—Numerical implementation.
1559
1560
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
characterize a β not greater than 2, and this is the motivation for the use of an iterated Tikhonov estimation at the first stage. Then we have 1 1 (β+2)∧4 αN 2 + α ε = O P N α2N α2N N and a minimization of this value with respect to αN gives an αN with an optimal speed (N −1/(β+2) ) for use in a noniterated Tikhonov estimation. In practice, 1 εαN 2 can be computed for different values of αN and the minimum can be α2 N
selected. In graph 6, we give this curve for the example of ϕ(Z) = e−|Z| . 6. CONCLUSION This paper considers the nonparametric estimation of a regression function in the presence of a simultaneity problem. We establish a set of general sufficient conditions to ensure consistency of our nonparametric instrumental variables estimator. The discussion of rates of convergence emphasizes the crucial role of the degree of ill-posedness of the inverse problem whose unique solution defines the regression function. A Monte Carlo illustration shows that our estimator is rather easy to implement and able to correct for the simultaneity bias displayed by the naive kernel estimator. This paper treats essentially the purely nonparametric basic model and is, in particular, focused on kernel-based estimators. Numerous extensions are possible and relevant for the practical implementation of this procedure. • A first extension is to analyze the case where the explanatory variables Z contain exogenous variables that are also included in the instrumental variables W . These variables may be introduced in a nonparametric way or seminonparametrically (ϕ(Z) becomes ϕ(Z) + X β with X exogenous). In the general case, results are essentially the same as in our paper by fixing these variables (see Hall and Horowitz (2005)). In the semiparametric case, the procedure is described in Feve and Florens (2010). • The treatment of semiparametric models (additive, partially linear, index models ) (see Florens, Johannes, and Van Bellegem (2011b) and Ai and Chen (2003)) or nonparametric models with constraints is helpful to reduce the curse of dimensionality. • We need to improve and to study more deeply the adaptive selection of the bandwidths and of the regularization parameter. • The structure L2 of the spaces may be modified. In particular, Sobolev spaces can be used and the penalty norm can incorporate the derivatives (see Gagliardini and Scaillet (2006) and Blundell, Chen, and Christensen (2007)). This approach is naturally extended in terms of Hilbert scales (see Florens, Johannes, and Van Bellegem (2011a)). • Separable models can be extended to nonseparable models or, more generally, to nonlinear problems (duration models, auctions, GMM, dynamic models) (see Ai and Chen (2003)).
NONPARAMETRIC INSTRUMENTAL REGRESSION
1561
• A Bayesian approach to the nonparametric instrumental variables estimation (Florens and Simoni (2011)) enhances the use of a Gaussian process prior similar to machine learning. • In a preliminary version of this paper, we gave a proof of the asymptotic normality of ϕαN − ϕ δ. This result is now presented in a separate paper. APPENDIX A: PROOFS OF MAIN RESULTS PROOF OF PROPOSITION 2.1: (i) ⇐⇒ (ii) Part (ii) implies (i). Conversely, let us consider ϕ such that T ∗ T [ϕ(Z)] = E E[ϕ(Z) | W ] | Z = 0 Then E E[ϕ(Z) | W ]2 = E ϕ(Z)E[ϕ(Z) | W ] = E ϕ(Z)E E[ϕ(Z) | W ] | Z = 0 We obtain E[ϕ(Z) | W ] = 0 and ϕ = 0 using the strong identification condition. (i) ⇐⇒ (iii) This property can be deduced from Florens, Mouchart, and Rolin (1990, Theorem 5.4.3) or Luenberger (1969, Theorem 3, Section 6.3). Since R(T ∗ ) = N (T )⊥ R(T ∗ ) = L2F (Z) is tantamount to N (T ) = {0}. Q.E.D. PROOF OF THEOREM 2.1: Let us first remark that
2 (z w1 ) fZW 1
fZ2 (z)fW2 1 (w1 ) =
fZ (z)fW1 (w1 ) dz dw1
fZW (z w1 w2 ) fW |W (w2 | w1 ) dw2 fZ (z)fW (w1 w2 ) 2 1
2
× fZ (z)fW1 (w1 ) dz dw1 2 fZW (z w1 w2 ) fZ (z)fW (w1 w2 ) dz dw1 dw2 ≤ 2 fZ (z)fW2 (w1 w2 ) by Jensen’s inequality for conditional expectations. The first term is the Hilbert–Schmidt norm of T1∗ T1 and the last term is the Hilbert–Schmidt norm of T ∗ T Then T1∗ T1 is a Hilbert–Schmidt operator and j λ2j1 ≤ j λ2j
1562
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
The eigenvalues may be compared pairwise. Using the Courant theorem (see Kress (1999, Chap. 15, Theorem 15.14, p. 276)), we get λ2j = = ≥ ≥
min
max
T ∗ T ϕ ϕ
ρ0 ρ1 ρj−1 ∈L2z
ϕ=1 ϕ⊥(ρ0 ρ1 ρj−1 )
max
E(ϕ|w)2
max
E(ϕ|w1 )2
ϕ=1 ϕ⊥(ρ0 ρ1 ρj−1 )
ϕ=1 ϕ⊥(ρ0 ρ1 ρj−1 )
min
ρ0 ρ1 ρj−1 ∈L2z
max
T 1∗ T 1 ϕ ϕ
ϕ=1 ϕ⊥(ρ0 ρ1 ρj−1 )
= λ2j1
Q.E.D.
PROOF OF THEOREM 4.1: The proof of Theorem 4.1 is based on the decomposition ϕˆ αN − ϕ = A1 + A2 + A3 with A1 = (αN I + Tˆ ∗ Tˆ )−1 Tˆ ∗ rˆ − (αN I + Tˆ ∗ Tˆ )−1 Tˆ ∗ Tˆ ϕ A2 = (αN I + Tˆ ∗ Tˆ )−1 Tˆ ∗ Tˆ ϕ − (αN I + T ∗ T )−1 T ∗ T ϕ A3 = (αN I + T ∗ T )−1 T ∗ T ϕ − ϕ By Proposition 3.2, A3 2 = O(αβ∧2 N ) and by virtue of Assumption A.4, we have directly 1 1 2ρ 2 + hN A1 = OP 2 αN N To assess the order of A2 , it is worth rewriting it as A2 = αN [(αN I + Tˆ ∗ Tˆ )−1 − (αN I + T ∗ T )−1 ]ϕ = −αN (αN I + Tˆ ∗ Tˆ )−1 (Tˆ ∗ Tˆ − T ∗ T )(αN I + T ∗ T )−1 ϕ = −(B1 + B2 )
NONPARAMETRIC INSTRUMENTAL REGRESSION
1563
with B1 = αN (αN I + Tˆ ∗ Tˆ )−1 Tˆ ∗ (Tˆ − T )(αN I + T ∗ T )−1 ϕ B2 = αN (αN I + Tˆ ∗ Tˆ )−1 (Tˆ ∗ − T ∗ )T (αN I + T ∗ T )−1 ϕ By Assumption A.3, 1 2ρ + h N p+q NhN 1 2ρ ∗ ∗ 2 ˆ T − T = OP p+q + hN NhN Tˆ − T 2 = OP
and by Proposition 3.2, αN (αN I + T ∗ T )−1 ϕ2 = O(αβ∧2 N ) αN T (αN I + T ∗ T )−1 ϕ2 = O αN(β+1)∧2 while
1 αN 1 ∗ ˆ −1 2 ˆ (αN I + T T ) = OP 2 αN (αN I + Tˆ ∗ Tˆ )−1 Tˆ ∗ 2 = OP
Therefore A2 = OP 2
1 2ρ p+q + hN NhN
αβ∧2 α(β+1)∧2 N + N 2 αN αN
1 2ρ (β−1)∧1 (β−1)∧0 αN + αN = OP p+q + hN NhN 1 2ρ (β−1)∧0 = OP p+q + hN αN NhN
Q.E.D.
REFERENCES AI, C., AND X. CHEN (2003): “Efficient Estimation of Conditional Moment Restrictions Models Containing Unknown Functions,” Econometrica, 71, 1795–1843. [1553,1560] BASU, D. (1955): “On Statistics Independent of a Sufficient Statistic,” Sankhya, 15, 377–380. [1545] BLUNDELL, R., X. CHEN, AND D. KRISTENSEN (2007): “Semi-Nonparametric IV Estimation of Shape-Invariant Engel Curves,” Econometrica, 75, 1613–1669. [1545,1560] CARRASCO, M., AND J. P. FLORENS (2000): “Generalization of GMM to a Continuum of Moment Conditions,” Econometric Theory, 16, 797–834. [1542,1548]
1564
S. DAROLLES, Y. FAN, J. P. FLORENS, AND E. RENAULT
DRM—Université Paris Dauphine, Place du Maréchal de Lattre de Tassigny, 75775 Paris Cedex 16, France and Lyxor Asset Management; [email protected], Dept. of Economics, Vanderbilt University, VU Station B 351819, 2301 Vanderbilt Place, Nashville, TN 37235-1819, U.S.A.;
[email protected], Toulouse School of Economics, Université Toulouse 1 Capitole, Manufacture des Tabacs, 21 Allée de Brienne, 31000 Toulouse, France;
[email protected], and Dept. of Economics, P.O. Box B, Brown University, Providence, RI 02912, U.S.A. and CENTER, Tilburg;
[email protected]. Manuscript received June, 2002; final revision received December, 2010.
Econometrica, Vol. 79, No. 5 (September, 2011), 1567–1589
ON THE SIZE DISTRIBUTION OF MACROECONOMIC DISASTERS

BY ROBERT J. BARRO AND TAO JIN1

The coefficient of relative risk aversion is a key parameter for analyses of behavior toward risk, but good estimates of this parameter do not exist. A promising place for reliable estimation is rare macroeconomic disasters, which have a major influence on the equity premium. The premium depends on the probability and size distribution of disasters, gauged by proportionate declines in per capita consumption or gross domestic product. Long-term national-accounts data for 36 countries provide a large sample of disasters of magnitude 10% or more. A power-law density provides a good fit to the size distribution, and the upper-tail exponent, α, is estimated to be around 4. A higher α signifies a thinner tail and, therefore, a lower equity premium, whereas a higher coefficient of relative risk aversion, γ, implies a higher premium. The premium is finite if α > γ. The observed premium of 5% generates an estimated γ close to 3, with a 95% confidence interval of 2 to 4. The results are robust to uncertainty about the values of the disaster probability and the equity premium, and can accommodate seemingly paradoxical situations in which the equity premium may appear to be infinite.

KEYWORDS: Power law, rare disaster, equity premium, risk aversion.
THE COEFFICIENT OF RELATIVE RISK AVERSION, γ, is a key parameter for analyses of behavior toward risk, but good estimates of this parameter do not exist. A promising area for reliable estimation is rare macroeconomic disasters, which have a major influence on the equity premium; see Rietz (1988), Barro (2006), and Barro and Ursua (2008). For 17 countries with long-term data on returns on stocks and short-term government bills, the average annual (arithmetic) real rates of return were 0.081 on stocks and 0.008 on bills (Barro and Ursua (2008, Table 5)). Thus, if we approximate the risk-free rate by the average real bill return, the average equity premium was 0.073. An adjustment for leverage in corporate financial structure, using a debt–equity ratio of 0.5, implies that the unlevered equity premium averaged around 0.05.

Previous research (Barro and Ursua (2008)) sought to explain an equity premium of 0.05 in a representative-agent model calibrated to fit the long-term history of macroeconomic disasters for up to 36 countries. One element in the calibration was the disaster probability, p, measured by the frequency of macroeconomic contractions of magnitude 10% or more. Another feature was the size distribution of disasters, gauged by the observed histogram in the range of 10% and above. Given p and the size distribution, a coefficient of relative risk aversion, γ, around 3.5 accorded with the target equity premium.

The present paper shows that the size distribution of macroeconomic disasters can be characterized by a power law in which the upper-tail exponent, α, is the key parameter. This parametric approach generates new estimates of the coefficient of relative risk aversion, γ, needed to match the target equity premium.

1. This research was supported by the National Science Foundation. We appreciate helpful comments from Xavier Gabaix, Rustam Ibragimov, Chris Sims, Jose Ursua, and Marty Weitzman.
© 2011 The Econometric Society
DOI: 10.3982/ECTA8827
We argue that the parametric procedure can generate more accurate estimates than the sample-average approach because of selection problems related to missing data for the largest disasters. In addition, confidence sets for the power-law parameters translate into confidence intervals for the estimates of γ.

Section 1 reviews the determination of the equity premium in a representative-agent model with rare disasters. Section 2 specifies a familiar, single power law to describe the size distribution of disasters and applies the results to estimate the coefficient of relative risk aversion, γ. Section 3 generalizes to a double power law to get a better fit to the observed size distribution of disasters. Section 4 shows that the results are robust to reasonable variations in the estimated disaster probability, the target equity premium, and the threshold for disasters (set initially at 10%). Section 5 considers possible paradoxes involving an infinite equity premium. Section 6 summarizes the principal findings, with emphasis on the estimates of γ.

1. THE EQUITY PREMIUM IN A MODEL WITH RARE DISASTERS

Barro (2009) worked out the equity premium in a Lucas (1978) tree model with rare but large macroeconomic disasters. (Results for the equity premium are similar in a model with a linear, AK, technology, in which saving and investment are endogenous.) In the Lucas-tree setting, (per capita) real gross domestic product (GDP), Y_t, and consumption, C_t = Y_t, evolve as
(1)    log(Y_{t+1}) = log(Y_t) + g + u_{t+1} + v_{t+1}.
The parameter g ≥ 0 is a constant that reflects exogenous productivity growth. The random term u_{t+1}, which is independent and identically distributed (i.i.d.) normal with mean 0 and variance σ², reflects “normal” economic fluctuations. The random term v_{t+1} picks up low-probability disasters, as in Rietz (1988) and Barro (2006). In these rare events, output and consumption jump down sharply. The probability of a disaster is the constant p ≥ 0 per unit of time. In a disaster, output contracts by the fraction b, where 0 < b ≤ 1. The distribution of v_{t+1} is given by

    v_{t+1} = 0               with probability 1 − p,
    v_{t+1} = log(1 − b)      with probability p.

The disaster size, b, follows some probability density function. In previous research, the density for b was gauged by the observed histogram. The present analysis specifies the form of this distribution—as a power law—and estimates the parameters, including the exponent of the upper tail. Note that the expected growth rate, g*, of consumption and GDP is

    g* = g + (1/2)·σ² − p·Eb,

where Eb is the mean disaster size.
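As a concreteness check, the process in equation (1) is easy to simulate. The sketch below is illustrative only: the values of g, σ, and p, and the use of a single fixed disaster size b, are assumptions for the example, not the paper's estimates.

```python
# A minimal simulation sketch of equation (1); parameter values and the
# fixed disaster size are illustrative assumptions, not estimates.
import numpy as np

rng = np.random.default_rng(0)
g, sigma, p, T = 0.025, 0.02, 0.038, 200_000

u = rng.normal(0.0, sigma, size=T)   # i.i.d. "normal" fluctuations
hit = rng.random(T) < p              # a disaster occurs with probability p
b = np.where(hit, 0.21, 0.0)         # single fixed disaster size, for simplicity
v = np.log1p(-b)                     # v_{t+1} = log(1 - b) in disasters, else 0

gross_growth = np.exp(g + u + v)     # Y_{t+1} / Y_t
# sample mean growth is close to g* = g + (1/2)sigma^2 - p*Eb (about 0.017 here)
print(gross_growth.mean() - 1.0)
```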
Barro (2009) showed that, with a representative agent with Epstein–Zin (1989) and Weil (1990) preferences, the formula for the unlevered equity premium, when the period length approaches zero, is

(2)    r^e − r^f = γσ² + p · E{b · [(1 − b)^{−γ} − 1]},

where r^e is the expected rate of return on unlevered equity (a claim on aggregate consumption flows), r^f is the risk-free rate, and γ is the coefficient of relative risk aversion.2 The term in curly brackets has a straightforward interpretation under power utility, where γ equals the reciprocal of the intertemporal elasticity of substitution (IES) for consumption. Then this term is the product of the proportionate decline in equity value during a disaster, b, and the excess of marginal utility of consumption in a disaster state compared to that in a normal state, (1 − b)^{−γ} − 1. Note that in the present setting, the proportionate fall in equity value during a disaster, b, equals the proportionate fall in consumption and GDP during the disaster. Equation (2) can be expressed as
(3)    r^e − r^f = γσ² + p · [E(1 − b)^{−γ} − E(1 − b)^{1−γ} − Eb].
Equation (3) shows that the key properties of the distribution of b are the expectations of the variable 1/(1 − b) taken to the powers γ and γ − 1. (The Eb term has a minor impact.)

Barro and Ursua (2008) studied macroeconomic disasters by using long-term annual data for real per capita consumer expenditure, C, for 24 countries and real per capita GDP (henceforth, called GDP) for 36 countries.3 These data go back at least to 1914 and as far back as 1870, depending on availability, and end in 2006. The annual time series, including sources, are available at www.economics.harvard.edu/faculty/barro/data_sets_barro.

Barro and Ursua (2008) followed Barro (2006) by using an NBER (National Bureau of Economic Research)-style peak-to-trough measurement of the sizes of macroeconomic contractions. Starting from the annual time series, proportionate contractions in C and GDP were computed from peak to trough over periods of 1 or more years, and declines by 10% or greater were considered. This method yielded 99 disasters for C (for 24 countries) and 157 for GDP (36 countries).

2. The present analysis assumes that the representative agent’s relative risk aversion is constant. Empirical support for this familiar specification appears in Brunnermeier and Nagel (2008) and Chiappori and Paiella (2008).

3. This approach assumes that the same process for generating macroeconomic disasters and the same model of household risk aversion apply to all countries at all points in time. In general, reliable estimation of parameters for a rare-disasters model requires a lot of data coming from a population that can be viewed as reasonably homogeneous. However, Barro and Ursua (2008) found that results on the determinants of the equity premium were similar if the sample were limited to Organization for Economic Cooperation and Development (OECD) countries.
The average disaster sizes, subject to the threshold of 10%, were similar for the two measures: 0.215 for C and 0.204 for GDP. The mean durations of the disasters were also similar: 3.6 years for C and 3.5 years for GDP. The list of the disaster events—by country, timing, and size—is given in Barro and Ursua (2008, Tables C1 and C2).4

Equation (1) is best viewed as applying to short periods, approximating continuous time. In this setting, disasters arise as downward jumps at an instant of time, and the disaster size, b, has no time units. In contrast, the underlying data on C and GDP are annual flows. In relating the data to the theory, there is no reason to identify disaster sizes, b, with large contractions in C or GDP observed particularly from one year to the next. In fact, the major disaster events—exemplified by the world wars and the Great Depression—clearly feature cumulative declines over several years, with durations of varying length. In Barro (2006) and Barro and Ursua (2008), the disaster jump sizes, b, in the continuous-time model—corresponding to equation (3) for the equity premium—were approximated empirically by the peak-to-trough measures of cumulative, proportionate decline. Barro (2006, Section V) showed that this procedure would be reasonably accurate if the true model were one with discrete periods with length corresponding to the duration of disasters (all with the same duration, say 3½ years).

The peak-to-trough method for gauging disaster sizes has a number of shortcomings, addressed in ongoing research by Nakamura, Steinsson, Barro, and Ursua (2011). In this work, the underlying model features a probability per year, p, of entering into a disaster state. (Disaster events are allowed to be correlated across countries, as in the world wars and the Great Depression.) Disasters arise in varying sizes (including occasional bonanzas), and the disaster state persists stochastically. This specification generates frequency distributions for the cumulative size and duration of disasters. In addition, as in Gourio (2008), post-disaster periods can feature recoveries in the form of temporarily high growth rates.5

The most important implication of Nakamura et al. (2011) for the equity premium comes from the recoveries. Since, on average, only half the decline in consumption during a disaster turns out to be permanent, the model’s predicted equity premium falls short of the value in equation (3). The other extensions have less influence on the equity premium, although the stochastic duration of disasters matters because of effects on the correlation between consumption growth and stock returns during disasters.

4. These data on disaster sizes are the ones used in the current study, except for a few minor corrections. The values used are in the Supplemental Material (Barro and Jin (2011)).

5. One nonissue (raised by Constantinides (2008, pp. 343–344) and Donaldson and Mehra (2008, p. 84)) is the apparent mismatch between the units for rates of return—per year—and the measurement of disaster sizes by cumulative declines over multiple years (with a mean duration around 3½ years). As already noted, the peak-to-trough measures of macroeconomic decline are approximations to the model’s jump declines, which have no time units.
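For illustration, a peak-to-trough pass over an annual series can be sketched as follows. This is a simplified reading of the measurement described above, not the authors' code, and the series used is hypothetical.

```python
# A rough sketch of peak-to-trough contraction measurement on an annual
# per capita series; a simplified reading of the procedure, not the
# authors' code. The series below is hypothetical.
import numpy as np

def peak_to_trough_disasters(series, threshold=0.10):
    """Proportionate declines b = 1 - trough/peak that are >= threshold."""
    disasters = []
    peak = trough = series[0]
    for y in series[1:]:
        if y > peak:                    # new peak: close any open contraction
            b = 1.0 - trough / peak
            if b >= threshold:
                disasters.append(b)
            peak = trough = y
        else:
            trough = min(trough, y)     # still below the running peak
    b = 1.0 - trough / peak             # contraction still open at sample end
    if b >= threshold:
        disasters.append(b)
    return disasters

c = np.array([100.0, 104, 108, 95, 80, 84, 90, 101, 106])
print(peak_to_trough_disasters(c))      # [0.259...]: one disaster, 1 - 80/108
```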
FIGURE 1.—Histogram of the empirical density of transformed C disasters and the density estimated by the single power law. The threshold is z_0 = 1.105, corresponding to b = 0.095. For the histogram, multiplication of the height on the vertical axis by the bin width of 0.04 gives the fraction of the total observations (99) that fall into the indicated bin. The results for the single power law for C, shown by the curve, correspond to Table I.
The present analysis uses the peak-to-trough measures of declines in C and GDP to generate an empirical distribution of disaster sizes, b. Figures 1 and 2 show the corresponding histograms for transformed disaster sizes, 1/(1 − b), for C and GDP, respectively. The findings in Nakamura et al. (2011) suggest that these measures will be satisfactory for characterizing the distribution of disaster sizes, but that some downward adjustment to the equity premium in equation (3) would be appropriate to account particularly for the partly temporary nature of the typical disaster.

As in previous research, the estimated disaster probability, p, equals the ratio of the number of disasters to the number of nondisaster years. This calculation yields p = 0.0380 per year for C and p = 0.0383 for GDP. Thus, disasters (macroeconomic contractions of 10% or more) typically occur around three times per century. The United States experience for C is comparatively mild, featuring only two contractions of 10% or more over 137 years—with troughs in 1921 and 1933. However, for GDP, the U.S. data show five contractions of 10% or more, with troughs in 1908, 1914, 1921, 1933, and 1947.6
6. The 1947 GDP contraction was associated with the demobilization after World War II and did not involve a decline in C. The 1908 and 1914 GDP contractions featured declines in C, but not up to the threshold of 10%.
FIGURE 2.—Histogram of the empirical density of transformed GDP disasters and the density estimated by the single power law. The threshold is z_0 = 1.105, corresponding to b = 0.095. For the histogram, multiplication of the height on the vertical axis by the bin width of 0.04 gives the fraction of the total observations (157) that fall into the indicated bin. The results for the single power law for GDP, shown by the curve, correspond to Table I.
Barro and Ursua (2008, Tables 10 and 11) used the observed histograms for disaster sizes from the C and GDP data (Figures 1 and 2) to compute the expectation (that is, the sample average) of the expression in brackets on the right side of equation (3) for alternative coefficients of relative risk aversion, γ. The resulting values were multiplied by the estimated p to calculate the disaster term on the right side of the equation. The other term on the right side, γσ², was computed under the assumption σ = 0.02 per year. However, as in Mehra and Prescott (1985), this term was trivial, compared to the equity premium of around 0.05, for plausible values of γ (even with higher, but still reasonable, values of σ). Hence, the disaster term ended up doing almost all the work in explaining the equity premium. A key finding was that a γ around 3.5 got the model’s equity premium into the neighborhood of the target value of 0.05.

2. SINGLE-POWER-LAW DISTRIBUTION

We work with the transformed disaster size

    z ≡ 1/(1 − b),

which is the ratio of normal to disaster consumption or GDP. This variable enters into the formula for the equity premium in equation (3). The threshold for b, taken to be 0.095, translates into one for z of z_0 = 1.105.
As b approaches 1, z approaches ∞, a limiting property that accords with the usual setting for a power-law distribution. We start with a familiar, single power law, which specifies the density function as
(4)    f(z) = A·z^{−(α+1)}
for z ≥ z_0, where A > 0 and α > 0. The condition that the density integrate to 1 from z_0 to ∞ implies
(5)    A = α·z_0^α.
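Equations (4) and (5) define a standard Pareto distribution on [z_0, ∞), which is easy to evaluate and to sample by inverting the CDF F(z) = 1 − (z_0/z)^α. A short sketch, with illustrative parameter values:

```python
# A sketch of the single power law in equations (4)-(5): density on
# [z0, inf) and inverse-CDF sampling; parameter values are illustrative.
import numpy as np

def power_law_pdf(z, alpha, z0):
    """f(z) = A z^{-(alpha+1)} with A = alpha * z0^alpha (equation (5))."""
    z = np.asarray(z, dtype=float)
    A = alpha * z0 ** alpha
    return np.where(z >= z0, A * z ** -(alpha + 1), 0.0)

def power_law_sample(n, alpha, z0, rng):
    """Invert the CDF F(z) = 1 - (z0/z)^alpha: z = z0 * U^(-1/alpha)."""
    return z0 * rng.random(n) ** (-1.0 / alpha)

rng = np.random.default_rng(1)
draws = power_law_sample(100_000, alpha=6.0, z0=1.105, rng=rng)
print(draws.min(), power_law_pdf([1.2, 1.5], alpha=6.0, z0=1.105))
```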
The power-law distribution in equation (4) has been applied widely in physics, economics, computer science, ecology, biology, astronomy, and so on. For a review, see Mitzenmacher (2004a). Gabaix (2009) provided examples of power laws in economics and finance, and discussed forces that can generate these laws. The examples include sizes of cities (Gabaix and Ioannides (2004)), stock-market activity (Gabaix, Gopikrishnan, Plerou, and Stanley (2003, 2006)), chief executive officer compensation (Gabaix and Landier (2008)), and firm size (Luttmer (2007)). The power-law distribution has been given many names, including heavy-tail distribution, Pareto distribution, Zipfian distribution, and fractal distribution. Pareto (1897) observed that, for large populations, a graph of the logarithm of the number of incomes above a level x against the logarithm of x yielded points close to a straight line with slope −α. This property corresponds to a density proportional to x^{−(α+1)}; hence, Pareto’s α corresponds to ours in equation (4). The straight-line property in a log–log graph can be used to estimate α, as was done by Gabaix and Ibragimov (2011) using least squares. A more common method uses maximum-likelihood estimation (MLE), following Hill (1975). We use MLE in our study.

In some applications, such as the distribution of income, the power law gives a poor fit to the observed frequency data over the whole range, but provides a good fit to the upper tail.7 In many of these cases, a double power law—with two different exponents over two ranges of z—fits the data well. For uses of this method, see Reed (2003) on the distribution of income and Mitzenmacher (2004b) on computer file sizes. The double power law requires estimation of a cutoff value, δ, for z, above which the upper-tail exponent, α, for the usual power law applies. For expository purposes, we begin with the single power law, but problems in fitting aspects of the data eventually motivate a switch to the richer specification.

7. There have been many attempts to explain this Paretian tail behavior, including Champernowne (1953), Mandelbrot (1960), and Reed (2003).
The single-power-law density in equations (4) and (5) implies that the equity premium in equation (3) is given by

(6)    r^e − r^f = γσ² + p · [ (α/(α − γ))·z_0^γ − (α/(α + 1 − γ))·z_0^{γ−1} + (α/(α + 1))·(1/z_0) − 1 ]

if α > γ. (This formula makes no adjustment for the partially temporary nature of disasters, as described earlier.) For given p and z_0, the disaster term on the right side involves a race between γ, the coefficient of relative risk aversion, and α, the tail exponent. An increase in γ raises the disaster term, but a rise in α implies a thinner tail and, therefore, a smaller disaster term. If α ≤ γ, the tail is sufficiently thick that the equity premium is infinite. This result corresponds to a risk-free rate, r^f, of −∞. We discuss these possibilities later. For now, we assume α > γ.

We turn now to estimation of the tail exponent, α. When equation (4) applies, the log likelihood for N independent observations on z (all at least as large as the threshold, z_0) is
(7)    log(L) = N·[α·log(z_0) + log(α)] − (α + 1)·[log(z_1) + ··· + log(z_N)],
where we used the expression for A from equation (5). The MLE condition for α follows readily as
(8)    N/α = log(z_1/z_0) + ··· + log(z_N/z_0).
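Equation (8) gives the estimator in closed form, α̂ = N / Σ_i log(z_i/z_0), the Hill (1975) estimator with a known threshold. A minimal sketch, checked on simulated data:

```python
# The closed-form MLE from equation (8): alpha_hat = N / sum(log(z_i / z0)).
import numpy as np

def alpha_mle(z, z0):
    z = np.asarray(z, dtype=float)
    return len(z) / np.log(z / z0).sum()

rng = np.random.default_rng(2)
z = 1.105 * rng.random(99) ** (-1.0 / 6.27)   # 99 draws, true alpha = 6.27
print(alpha_mle(z, z0=1.105))                  # close to 6.27 (sampling noise)
```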
We obtained standard errors and 95% confidence intervals for the estimate of α from bootstrap methods.8 Table I shows that the point estimate of α for the 99 C disasters is 6.27, with a standard error of 0.81 and a 95% confidence interval of (4.96, 8.12). Results for the 157 GDP disasters are similar: the point estimate of α is 6.86, with a standard error of 0.76 and a 95% confidence interval of (5.56, 8.48).

Given an estimate for α—and given σ = 0.02, z_0 = 1.105, and a value for p (0.0380 for C and 0.0383 for GDP)—we need only a value for γ in equation (6) to determine the predicted equity premium, r^e − r^f. To put it another way, we can find the value of γ needed to generate r^e − r^f = 0.05 for each value of α. (The resulting γ has to satisfy γ < α for r^e − r^f to be finite.) In Table I, the point estimate for α of 6.27 from the single power law for the C data requires γ = 3.97. The corresponding standard error for the estimated γ is 0.51, with a 95% confidence interval of (3.13, 5.13). For the GDP data, the point estimate of γ is 4.33, with a standard error of 0.48 and a 95% confidence interval of (3.50, 5.33).

8. See Efron and Tibshirani (1993). We get similar results based on −2·log(likelihood ratio) being distributed asymptotically as a chi-squared distribution with 1 degree of freedom (see Greene (2002)).
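The bootstrap step can be sketched as follows. Resampling the observed disasters with replacement and taking percentile intervals is one standard recipe, used here as an assumption; the authors' exact bootstrap variant is not spelled out in the text.

```python
# A sketch of a nonparametric bootstrap for alpha_hat: resample disasters
# with replacement and take the percentile interval (one standard recipe;
# the authors' exact bootstrap variant is not spelled out in the text).
import numpy as np

def bootstrap_alpha(z, z0, reps=10_000, seed=3):
    rng = np.random.default_rng(seed)
    z = np.asarray(z, dtype=float)
    n = len(z)
    draws = np.empty(reps)
    for r in range(reps):
        zb = rng.choice(z, size=n, replace=True)
        draws[r] = n / np.log(zb / z0).sum()    # equation (8) on the resample
    return draws.std(ddof=1), tuple(np.percentile(draws, [2.5, 97.5]))

# usage: se, ci = bootstrap_alpha(z, z0=1.105)
```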
TABLE I
SINGLE AND DOUBLE POWER LAWS; THRESHOLD IS z_0 = 1.105^a

Parameter    Point Estimate    Standard Error    95% Confidence Interval

C Data (99 disasters)
Single power law
α            6.27              0.81              (4.96, 8.12)
γ            3.97              0.51              (3.13, 5.13)
Double power law
α            4.16              0.87              (2.66, 6.14)
β            10.10             2.40              (7.37, 15.17)
δ            1.38              0.13              (1.24, 1.77)
γ            3.00              0.52              (2.16, 4.15)

GDP Data (157 disasters)
Single power law
α            6.86              0.76              (5.56, 8.48)
γ            4.33              0.48              (3.50, 5.33)
Double power law
α            3.53              0.97              (2.39, 6.07)
β            10.51             3.81              (8.67, 20.98)
δ            1.47              0.15              (1.21, 1.69)
γ            2.75              0.56              (2.04, 4.21)

^a The single power law, given by equations (4) and (5), applies to transformed disaster sizes, z ≡ 1/(1 − b), where b is the proportionate decline in C (real personal consumer expenditure per capita) or real GDP (per capita). Disasters are at least as large as the threshold, z_0 = 1.105, corresponding to b ≥ 0.095. The table shows the maximum-likelihood estimate of the tail exponent, α. The standard error and 95% confidence interval come from bootstrap methods. The corresponding estimates of γ, the coefficient of relative risk aversion, come from calculating the values needed to generate an unlevered equity premium of 0.05 in equation (6) (assuming σ = 0.02 and p = 0.0380 for C and 0.0383 for GDP). For the double power law, given by equations (10)–(12), the table shows the maximum-likelihood estimates of the two exponents, α (above the cutoff) and β (below the cutoff), and the cutoff value, δ. The corresponding estimates of γ come from calculating the values needed to generate an unlevered equity premium of 0.05 in a more complicated version of equation (6).
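The γ estimates in Table I come from inverting the premium formula: for a given α, equation (6) is solved for the γ in (0, α) that delivers the 0.05 target. A sketch using the C-data point estimate; the root-finder is an illustrative choice.

```python
# A sketch inverting equation (6): find gamma in (0, alpha) such that the
# single-power-law premium equals the 0.05 target. Inputs follow Table I
# for the C data; the root-finder (brentq) is an illustrative choice.
from scipy.optimize import brentq

def premium(gamma, alpha, z0=1.105, p=0.0380, sigma=0.02):
    disaster_term = (alpha / (alpha - gamma)) * z0 ** gamma \
        - (alpha / (alpha + 1 - gamma)) * z0 ** (gamma - 1) \
        + (alpha / (alpha + 1)) / z0 - 1.0
    return gamma * sigma ** 2 + p * disaster_term

# bracket kept inside (0, alpha) so the premium stays finite
gamma_hat = brentq(lambda g: premium(g, alpha=6.27) - 0.05, 1e-6, 6.0)
print(gamma_hat)   # about 3.97, matching the single-power-law row for C
```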
To assess these results, we now evaluate the fit of the single power law. Figure 1 compares the histogram for the C disasters with the frequency distribution implied by the single power law in equations (4) and (5), using z_0 = 1.105 and α = 6.27 from Table I. An important inference is that the single power law substantially underestimates the frequency of large disasters. Similar results apply for GDP in Figure 2.

The failures in the single power law are clearer in diagrams for cumulative densities. The straight lines in Figures 3 and 4 show, for C and GDP, respectively, fitted logs of probabilities that transformed disaster sizes exceed the values shown on the horizontal axes. The lines connecting the points show logs of normalized ranks of disaster sizes (as in Gabaix and Ibragimov (2011)).
FIGURE 3.—Estimated log-scale tail distribution and log of transformed ranks of C disaster sizes versus log of transformed C disaster sizes. The straight line corresponding to the log-scale tail distribution comes from the estimated single power law for C in Table I. The ranks of the disaster sizes are transformed as log[(rank − 1/2)/(N − 1/2)], in accordance with Gabaix and Ibragimov (2011). The lines connecting these points should—if the estimated power law is valid—converge pointwise in probability to the log-scale tail distribution, as N approaches infinity.
FIGURE 4.—Estimated log-scale tail distribution and log of transformed ranks of GDP disaster sizes versus log of transformed GDP disaster sizes. The straight line corresponding to the log-scale tail distribution comes from the estimated single power law for GDP in Table I. See note to Figure 3 for further information.
If the specified single power law were valid, the two graphs in each figure should be close to each other over the full range of z. However, the figures demonstrate that the single power laws underestimate the probabilities of being far out in the upper tails.

One way to improve the fits is to allow for a smaller tail exponent at high disaster sizes by generalizing to a double power law. This form specifies an upper-tail exponent, α, that applies for z at or above a cutoff value, δ ≥ z_0, and a lower-tail exponent, β, that applies below the cutoff value, for z_0 ≤ z < δ. This generalization requires estimation of three parameters: the exponents, α and β, and the cutoff, δ. We still treat the threshold, z_0, as known and equal to 1.105. We should note that the critical parameter for the equity premium is the upper-tail exponent, α. The lower-tail exponent, β, is unimportant; in fact, the distribution need not follow a power law in the lower part. However, we have to specify a reasonable form for the lower portion to estimate the cutoff, δ, which influences the estimate of α.

3. DOUBLE-POWER-LAW DISTRIBUTION

The double-power-law distribution, with exponents β and α, takes the form

(9)    f(z) = 0                   if z < z_0,
            = B·z^{−(β+1)}        if z_0 ≤ z < δ,
            = A·z^{−(α+1)}        if δ ≤ z,

where β, α > 0, A, B > 0, z_0 > 0 is the known threshold, and δ ≥ z_0 is the cutoff separating the lower and upper parts of the distribution. The conditions that the density integrate to 1 over [z_0, ∞) and that the densities be equal just to the left and right of δ imply
(10)    B = A·δ^{β−α},
(11)    1/A = (δ^{β−α}/β)·(z_0^{−β} − δ^{−β}) + δ^{−α}/α.
The single power law in equations (4) and (5) is the special case of equations (9)–(11) when β = α. The position of the cutoff, δ, determines the number, K, among the total observations, N, that lie below the cutoff. The remaining N − K observations are at or above the cutoff. Therefore, the log likelihood can be expressed as a generalization of equation (7) as
(12)    log(L) = N·log(A) + K·(β − α)·log(δ) − (β + 1)·[log(z_1) + ··· + log(z_K)] − (α + 1)·[log(z_{K+1}) + ··· + log(z_N)],
where A satisfies equation (11).
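A direct numerical maximization of equation (12) can be sketched as below. A derivative-free method (Nelder–Mead here, an illustrative choice) sidesteps the kinks in the derivatives at the cutoff discussed next; the starting values and the placeholder data are assumptions for the example.

```python
# A sketch of maximizing the double-power-law likelihood in equations
# (11)-(12). Nelder-Mead (derivative-free) is an illustrative choice; the
# data below are placeholders, not the paper's disaster sample.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, z, z0=1.105):
    alpha, beta, delta = params
    if alpha <= 0 or beta <= 0 or delta < z0:
        return np.inf
    lower, upper = z[z < delta], z[z >= delta]
    K, N = len(lower), len(z)
    inv_A = delta ** (beta - alpha) / beta * (z0 ** -beta - delta ** -beta) \
        + delta ** -alpha / alpha                     # 1/A from equation (11)
    logL = -N * np.log(inv_A) + K * (beta - alpha) * np.log(delta) \
        - (beta + 1) * np.log(lower).sum() - (alpha + 1) * np.log(upper).sum()
    return -logL

rng = np.random.default_rng(4)
z = 1.105 * rng.random(99) ** (-1.0 / 5.0)            # placeholder disaster sizes
res = minimize(neg_loglik, x0=[4.0, 10.0, 1.4], args=(z,), method="Nelder-Mead")
print(res.x)                                          # (alpha, beta, delta)
```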
1578
R. J. BARRO AND T. JIN
We use maximum likelihood to estimate α, β, and δ. One complication is that small changes in δ cause discrete changes in K when one or more observations lie at the cutoff. These jumps do not translate into jumps in log(L) because the density is equal just to the left and right of the cutoff. However, jumps arise in the derivatives of log(L) with respect to the parameters. This issue does not cause problems in finding numerically the values of (α, β, δ) that maximize log(L) in equation (12). Moreover, we get virtually the same answers if we rely on the first-order conditions for maximizing log(L) calculated while ignoring the jump problem for the cutoff. These first-order conditions are generalizations of equation (8).9

In Table I, the sections labeled double power law show the point estimates of (α, β, δ) for the C and GDP data. We again compute standard errors and 95% confidence intervals using bootstrap methods. A key finding is that the upper-tail exponent, α, is estimated to be much smaller than the lower-tail exponent, β. For example, for C, the estimate of α is 4.16, standard error equal to 0.87, with a confidence interval of (2.66, 6.14), whereas that for β is 10.10, standard error equal to 2.40, with a confidence interval of (7.37, 15.17). The estimates reject the hypothesis α = β in favor of α < β at low p-values (for C and GDP).

Table I shows that the estimated cutoff value, δ, for the C disasters is 1.38; recall that this value corresponds to the transformed disaster size, z ≡ 1/(1 − b). The corresponding cutoff for b is 0.275. With this cutoff, 77 of the C crises fall below the cutoff, whereas 22 are above. The corresponding cutoff for b with the GDP crises is 0.320, implying that 136 events fall below the cutoff, whereas 21 are above. Despite the comparatively small number of crises above the cutoffs, we know from previous research (Barro and Ursua (2008, Tables 10 and 11)) that the really large crises have the main influence on the equity premium. That assessment still holds for the present analysis.

Figure 5 compares the histogram for the C disasters with the frequency distribution implied by the double power law in equations (9)–(11), using z_0 = 1.105, α = 4.16, β = 10.10, and δ = 1.38 from Table I. Unlike the single power law in Figure 1, the double power law accords well with the histogram. Results are similar for the GDP data (not shown). Figures 6 and 7 provide corresponding information for cumulative densities.
9. The expressions are

    1/α = (1/(N − K))·[log(z_{K+1}/δ) + ··· + log(z_N/δ)],
    N = α·[log(z_{K+1}/z_0) + ··· + log(z_N/z_0)] + β·[log(z_1/z_0) + ··· + log(z_K/z_0)],
    δ = z_0·[(Nα + K·(β − α))/(α·(N − K))]^{1/β}.
FIGURE 5.—Histogram of the empirical density of transformed C disasters and the density estimated by the double power law (z_0 = 1.105). For the histogram, multiplication of the height shown on the vertical axis by the bin width of 0.04 gives the fraction of the total observations (99) that fall into the indicated bin. The results for the double power law for C, shown by the curve, are based on Table I.
FIGURE 6.—Estimated log-scale tail distribution and log of transformed ranks of C disaster sizes versus log of transformed C disaster sizes. The line with two segments corresponding to the log-scale tail distribution comes from the estimated double power law for C in Table I. See note to Figure 3 for further information.
FIGURE 7.—Estimated log-scale tail distribution and log of transformed ranks of GDP disaster sizes versus log of transformed GDP disaster sizes. The line with two segments corresponding to the log-scale tail distribution comes from the estimated double power law for GDP in Table I. See note to Figure 3 for further information.
Compared with the single power laws in Figures 3 and 4, the double power laws accord much better with the upper-tail behavior. The improved fits suggest that the double power law would be superior for estimating the coefficient of relative risk aversion, γ.

With respect to the equity premium, the key difference in Table I between the double and single power laws is the substantially smaller upper-tail exponents, α. Since the estimated α is now close to 4, rather than exceeding 6, the upper tails are much fatter when gauged by the double power laws. These fatter tails mean that a substantially lower coefficient of relative risk aversion, γ, accords with the target equity premium of 0.05.

Equation (3) still determines the equity premium, r^e − r^f. For given γ, a specification of (α, β, δ), along with z_0 = 1.105, determines the relevant moments of the disaster-size distribution. That is, we get a more complicated version of equation (6). (As before, this formulation does not adjust for the partially temporary nature of macroeconomic disasters.) Crucially, a finite r^e − r^f still requires α > γ. The results determine the estimate of γ that corresponds to those for (α, β, δ) in Table I (still assuming σ = 0.02 and p = 0.0380 for C and 0.0383 for GDP). This procedure yields point estimates for γ of 3.00 from the C disasters and 2.75 from the GDP disasters.

As before, we use bootstrap methods to determine standard errors and 95% confidence intervals for the estimates of γ. Although the main parameter that matters is the upper-tail exponent, α, we allow also for variations in β and δ. For the C disasters, the estimated γ of 3.00 (Table I) has a standard error of 0.52, with a 95% confidence interval of (2.16, 4.15). For GDP, the estimate of
2.75 has a standard error of 0.56, with a confidence interval of (2.04, 4.21). Thus, γ is estimated to be close to 3, with a 95% confidence band of roughly 2 to 4.10

Because of the fatter upper tails, the estimated γ around 3 is well below the values around 4 estimated from single power laws (Table I). Given the much better fit of the double power law, we concentrate on the estimated γ around 3. As a further comparison, results based on the observed histograms for C and GDP disasters (Barro and Ursua (2008, Tables 10 and 11)) indicated that a γ in the vicinity of 3.5 was needed to generate the target equity premium of 0.05.

The last comparison reflects interesting differences in the two methods: the moments of the size distribution that determine the equity premium in equation (3) can be estimated from a parametric form (such as the double power law) that accords with the observed distribution of disaster sizes or from sample averages of the relevant moments (corresponding to histograms). A disadvantage of the parametric approach is that misspecification of the functional form—particularly for the far upper tails that have few or no observations—may give misleading results. In contrast, sample averages seem to provide consistent estimates for any underlying functional form. However, the sample-average approach is sensitive to a selection problem, whereby data tend to be missing for the largest disasters (sometimes because governments have collapsed or are fighting wars). This situation must apply to an end-of-world (or, at least, end-of-country) scenario, discussed later, where b = 1. The tendency for the largest disasters to be missing from the sample means that the sample-average approach tends to underestimate the fatness of the tails, thereby leading to an overstatement of γ.11 In contrast, the parametric approach (with a valid functional form) may be affected little by missing data in the upper tail. That is, the estimate of the upper-tail exponent, α, is likely to have only a small upward bias due to missing extreme observations, which have to be few in number. This contrast explains why our estimated γ around 3 from the double power laws (Table I) is noticeably smaller than the value around 3.5 generated by the observed histograms.

10. For the threshold corresponding to b = 0.095, there are 99 C crises, with a disaster probability, p, of 0.0380 per year and an average for b of 0.215. Using γ = 3.00, the average of (1 − b)^{−γ} is 2.90 and that for (1 − b)^{1−γ} is 1.87. For b ≥ 0.275, corresponding to the cutoff, there are 22 C crises, with p = 0.0077, average for b of 0.417, average for (1 − b)^{−γ} of 7.12, and average for (1 − b)^{1−γ} of 3.45. For GDP, with the threshold corresponding to b = 0.095, there are 157 crises, with p = 0.0383 and an average for b of 0.204. Using γ = 2.75, the average of (1 − b)^{−γ} is 2.58 and that for (1 − b)^{1−γ} is 1.68. For b ≥ 0.320, corresponding to the cutoff, there are 21 GDP crises, with p = 0.0046, average for b of 0.473, average for (1 − b)^{−γ} of 8.43, and average for (1 − b)^{1−γ} of 3.60.

11. The magnitude of this selection problem has diminished with Ursua’s (2010) construction of estimates of GDP and consumer spending for periods, such as the world wars, where standard data were missing. Recent additions to his data set—not included in our current analysis—are Russia, Turkey, and China (for GDP). As an example, the new data imply that the cumulative contraction in Russia from 1913 to 1921 was 62% for GDP and 71% for C.
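For the double power law there is no compact analogue of equation (6), but the moments in equation (3) are one-dimensional integrals against the density (9) and are easy to compute numerically. A sketch, with values from the C column of Table I; the numerical choices (quadrature, root-finding) are illustrative.

```python
# A sketch of the "more complicated version of equation (6)": compute the
# moments in equation (3) under the double power law (9)-(11) by quadrature,
# then solve for gamma. Inputs follow the C column of Table I; the numerical
# choices (quad, brentq) are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def dpl_moment(power, alpha, beta, delta, z0=1.105):
    """E[z**power] under the double power law (9)-(11)."""
    inv_A = delta ** (beta - alpha) / beta * (z0 ** -beta - delta ** -beta) \
        + delta ** -alpha / alpha
    A = 1.0 / inv_A
    B = A * delta ** (beta - alpha)                   # equation (10)
    low, _ = quad(lambda z: z ** power * B * z ** -(beta + 1), z0, delta)
    high, _ = quad(lambda z: z ** power * A * z ** -(alpha + 1), delta, np.inf)
    return low + high

def dpl_premium(gamma, alpha=4.16, beta=10.10, delta=1.38, p=0.0380, sigma=0.02):
    m_g = dpl_moment(gamma, alpha, beta, delta)         # E(1 - b)^(-gamma)
    m_g1 = dpl_moment(gamma - 1.0, alpha, beta, delta)  # E(1 - b)^(1 - gamma)
    Eb = 1.0 - dpl_moment(-1.0, alpha, beta, delta)
    return gamma * sigma ** 2 + p * (m_g - m_g1 - Eb)

# upper bracket kept safely below alpha = 4.16 so the moments stay finite
print(brentq(lambda g: dpl_premium(g) - 0.05, 1e-6, 4.0))   # about 3.0
```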
4. VARIATIONS IN DISASTER PROBABILITY, TARGET EQUITY PREMIUM, AND THRESHOLD
We consider now whether the results on the estimated coefficient of relative risk aversion, γ, are robust to uncertainty about the disaster probability, p, the target equity premium, r^e − r^f, and the threshold, z_0, for disaster sizes.

For p, the estimate came from all the sample data, not just the disasters: p equaled the ratio of the number of disasters (for C or GDP) to the number of nondisaster years in the full sample. Thus, a possible approach to assessing uncertainty about the estimate of p would be to use a model that incorporates all the data, along the lines of Nakamura et al. (2011). We could also consider a richer setting in which p varies over time, as in Gabaix (2010). We carry out here a more limited analysis that assesses how “reasonable” variations in p influence the estimates of γ.12

Figure 8 gives results for C, and analogous results apply for GDP (not shown). Recall that the baseline value for p of 0.038 led to an estimate for γ of 3.00, with a 95% confidence interval of (2.16, 4.15). Figure 8 shows that lowering p by a full percentage point (to 0.028) increases the point estimate of γ to 3.2, whereas raising p by a full percentage point (to 0.048) decreases the point estimate of γ to 2.8.
FIGURE 8.—Estimates of the coefficient of relative risk aversion, γ, for alternative disaster probabilities, given a target equity premium of 0.05, a threshold of z_0 = 1.105, and σ = 0.020. These results correspond to the estimated double power law for C in Table I.

12. For a given set of observed disaster sizes (for C or GDP), differences in p do not affect the maximum-likelihood estimates for the parameters of the power-law distributions. We can think of differences in p as arising from changes in the overall sample size while holding fixed the realizations of the number and sizes of disaster events.
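The Figure 8 exercise amounts to re-solving for γ over a grid of disaster probabilities while holding the fitted size distribution fixed. A sketch, reusing dpl_premium() from the double-power-law sketch above; the grid of p values is illustrative.

```python
# A sketch of the Figure 8 exercise: re-solve for gamma over alternative
# disaster probabilities, holding the C double-power-law fit fixed. Reuses
# dpl_premium() from the sketch above; the grid of p values is illustrative.
from scipy.optimize import brentq

for p in (0.028, 0.038, 0.048):
    g = brentq(lambda x: dpl_premium(x, p=p) - 0.05, 1e-6, 4.0)
    print(f"p = {p:.3f}  ->  gamma = {g:.2f}")   # roughly 3.2, 3.0, 2.8
```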
Thus, substantial variations in p have only a moderate effect on the estimated γ.

We assumed that the target equity premium was 0.05. More realistically, there is uncertainty about this premium, which can also vary over time and space (due, for example, to shifts in the disaster probability, p). As with our analysis of p, we consider how reasonable variations in the target premium influence the estimated γ. An allowance for a higher target equity premium is also a way to adjust the model to account for the partly temporary nature of macroeconomic disasters. That is, since equation (3) overstates the model’s equity premium when the typical disaster is partly temporary (as described before), an increase in the target premium is a way to account for this overstatement.

Equation (3) shows that variations in the equity premium, r^e − r^f, on the left side are essentially equivalent, but with the opposite sign, to variations in p on the right side. Therefore, diagrams for estimates of γ versus r^e − r^f look similar to Figure 8, except that the slope is positive. Quantitatively, for the C data, if r^e − r^f were 0.03 rather than 0.05, the point estimate of γ would be 2.6 rather than 3.0. On the other side, if r^e − r^f were 0.07, the point estimate of γ would be 3.2. Results with GDP are similar. Thus, substantial variations in the target equity premium have only a moderate influence on the estimated γ.

The results obtained thus far apply for a fixed threshold of z_0 = 1.105, corresponding to proportionate contractions, b, of size 0.095 or greater. This choice of threshold is arbitrary. In fact, our estimation of the cutoff value, δ, for the double power laws in Table I amounts to endogenizing the threshold that applies to the upper tail of the distribution. We were able to estimate δ by MLE because we included in the sample a group of observations that potentially lie below the cutoff. Similarly, to estimate the threshold, z_0, we would have to include observations that potentially lie below the threshold. As with estimates of p, this extension requires consideration of all (or at least more of) the sample, not just the disasters.

As in the analysis of disaster probability and target equity premium, we assess the impact of variations in the threshold on the estimated coefficient of relative risk aversion, γ. We consider a substantial increase in the threshold, z_0, to 1.170, corresponding to b = 0.145, the value used in Barro (2006). This rise in the threshold implies a corresponding fall in the disaster probability, p (gauged by the ratio of the number of disasters to the number of nondisaster years in the full sample). For the C data, the number of disasters declines from 99 to 62, and p decreases from 0.0380 to 0.0225. For the GDP data, the number of disasters falls from 157 to 91 and p declines from 0.0383 to 0.0209. That is, the probability of a disaster of size 0.145 or more is about 2% per year, corresponding to roughly two events per century.

The results in Table II, for which the threshold is z_0 = 1.170, can be compared with those in Table I, where z_0 = 1.105.
TABLE II
SINGLE AND DOUBLE POWER LAWS WITH HIGHER THRESHOLD, z_0 = 1.170^a

Parameter    Point Estimate    Standard Error    95% Confidence Interval

C Data (62 disasters)
Single power law
α            5.53              0.85              (4.16, 7.61)
γ            3.71              0.53              (2.83, 4.97)
Double power law
α            4.05              0.87              (2.81, 6.12)
β            11.36             8.27              (6.63, 39.78)
δ            1.37              0.15              (1.21, 1.86)
γ            3.00              0.54              (2.21, 4.29)

GDP Data (91 disasters)
Single power law
α            5.67              0.81              (4.39, 7.49)
γ            3.86              0.51              (3.03, 4.99)
Double power law
α            4.77              1.00              (2.42, 6.24)
β            59.22             22.13             (7.90, 76.73)
δ            1.20              0.17              (1.20, 1.75)
γ            3.41              0.60              (2.04, 4.34)

^a See the notes to Table I. Disasters are now all at least as large as the threshold z_0 = 1.170, corresponding to b ≥ 0.145. The disaster probability, p, is now 0.0225 for C and 0.0209 for GDP.
For the single power law, the rise in the threshold causes the estimated exponent, α, to adjust toward the value estimated before for the upper part of the double power law (Table I). Since the upper-tail exponents (α) were lower than the lower-tail exponents (β), the estimated α for the single power law falls when the threshold rises. For the C data, the estimated α decreases from 6.3 in Table I to 5.5 in Table II, and the confidence interval shifts downward accordingly. The reduction in α implies that the estimated γ declines from 4.0 in Table I to 3.7 in Table II, and the confidence interval shifts downward correspondingly. Results for the single power law for GDP are analogous.13

With a double power law, the change in the threshold has much less impact on the estimated upper-tail exponent, α, which is the key parameter for the estimated γ. For the C data, the rise in the threshold moves the estimated α from 4.16 in Table I to 4.05 in Table II, and the confidence interval changes correspondingly little.14

13. These results apply even though the higher threshold reduces the disaster probability, p. That is, disaster sizes in the range between 0.095 and 0.145 no longer count. As in Barro and Ursua (2008, Tables 10 and 11), the elimination of these comparatively small disasters has only a minor impact on the model’s equity premium and, hence, on the value of γ required to generate the target premium of 0.05. The more important force is the thickening of the upper tail implied by the reduction of the tail exponent, α.
These results imply that the results for γ also change little, going from a point estimate of 3.00 with a confidence interval of (2.16, 4.15) in Table I to 3.00 with an interval of (2.21, 4.29) in Table II. Results for GDP are analogous. We conclude that a substantial increase in the threshold has little effect on the estimated γ.

5. CAN THE EQUITY PREMIUM BE INFINITE?

Weitzman (2007), building on Geweke (2001), argued that the equity premium can be infinite (and the risk-free rate minus infinity) when the underlying shocks are log normally distributed with unknown variance. In this context, the frequency distribution for asset pricing is the t-distribution, for which the tails can be sufficiently fat to generate an infinite equity premium.

The potential for an infinite equity premium arises also—perhaps more transparently—in our setting based on power laws. For a single power law, the equity premium, r^e − r^f, in equation (6) rises with the coefficient of relative risk aversion, γ, and falls with the tail exponent, α, because a higher α implies a thinner tail. A finite equity premium requires α > γ, and this condition still applies with a double power law, with α representing the upper-tail exponent. Thus, it is easy to generate an infinite equity premium in the power-law setting. For a given γ, the tail has only to be sufficiently fat; that is, α has to satisfy α ≤ γ.

However, we assume that the equity premium, r^e − r^f, equals a known (finite) value, 0.05. The important assumption here is not that the premium equals a particular number, but rather that it lies in an interval of something like 0.03–0.07 and is surely not infinite. Our estimation, therefore, assigns no weight to combinations of parameters, particularly of α and γ, that generate a counterfactual premium, such as ∞. For given α (and the other parameters), we pick (i.e., estimate) γ to be such that the premium equals the target, 0.05. Estimates constructed this way always satisfy α > γ and, therefore, imply a finite equity premium.

The successful implementation of this procedure depends on having sufficient data so that there are enough realizations of disasters to pin down the upper-tail exponent, α, within a reasonably narrow range. Thus, it is important that the underlying data set is very large in a macroeconomic perspective: 2963 annual observations on consumer expenditure, C, and 4653 on GDP. Consequently, the numbers of disaster realizations—99 for C and 157 for GDP—are sufficient to generate reasonably tight confidence intervals for the estimates of α.
14. The rise in the threshold widens the confidence interval for the estimated lower-tail exponent, β. As the threshold rises toward the previously estimated cutoff, δ, the lower tail of the distribution becomes increasingly less relevant.
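The finiteness condition is easy to see numerically: under the single power law, E(1 − b)^{−γ} = E z^γ = α·z_0^γ/(α − γ), which blows up as α falls toward γ. A small illustrative check, with assumed values:

```python
# Numeric illustration of the finiteness condition alpha > gamma: under a
# single power law, E[z^gamma] = alpha * z0**gamma / (alpha - gamma), which
# diverges as alpha falls toward gamma. Values are illustrative.
z0, gamma = 1.105, 3.0
for alpha in (6.0, 4.0, 3.5, 3.1, 3.01):
    print(alpha, alpha * z0 ** gamma / (alpha - gamma))
```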
Although our underlying data set is much larger than those usually used to study macroeconomic disasters, even our data cannot rule out the existence of extremely low probability events of astronomical size. Our estimated disaster probabilities, p, were 3.8% per year for C and GDP, and the estimated upper-tail exponents, α, were close to 4 (Table I). Suppose that there were a far smaller probability p*, where 0 < p* ≪ p, of experiencing a super disaster; that is, one drawn from a size distribution with a much fatter tail, characterized by an exponent α*, where 0 < α* ≪ α. If p* is extremely low, say 0.01% per year, there is a good chance of seeing no realizations of super disasters even with 5000 observations. Thus, our data cannot rule out the potential for these events, and these far-out possibilities may matter. In particular, regardless of how low p* is, to fit the target equity premium of 0.05, the coefficient of relative risk aversion, γ, has to satisfy γ < α* to get a finite equity premium.15 If α* can be arbitrarily low (a possibility not ruled out by direct observation when p* is extremely low), the estimated γ can be arbitrarily close to zero. We, thus, get a reversal of the Mehra–Prescott (1985) puzzle, where the coefficient of relative risk aversion required to match the observed equity premium was excessive by a couple orders of magnitude.16

Any upper bound B < 1 on the potential disaster size, b, would eliminate the possibility of an infinite equity premium. In this sense, the extreme results depend on the possibility of an end-of-the-world event, where b = 1. To consider this outcome, suppose now that the very small probability p* refers only to b = 1. In this case, it is immediate from equation (3) that the equity premium, r^e − r^f, is infinite if γ > 0. Thus, with the assumed form of utility function,17 any positive probability of apocalypse (which cannot be ruled out by “data”), when combined with an equity premium around 0.05, is inconsistent with a positive degree of risk aversion.

The reference to an end-of-the-world event suggests a possible resolution of the puzzle. The formula for the equity premium in equation (2) involves a comparison of the return on equity, interpreted as a claim on aggregate consumption, with that on a risk-free asset, interpreted as a short-term government bill. However, no claim can deliver risk-free consumption (from whom and to whom?) once the world has ended. Therefore, at least in the limit, we have to allow for risk in the “risk-free” claim.

15. The assumption here, perhaps unreasonable, is that constant relative risk aversion applies arbitrarily far out into the tail of low consumption.

16. This reversal is the counterpart of the one described in Weitzman (2007, p. 1110): “Should we be trying to explain the puzzle pattern: why is the actually observed equity premium so embarrassingly high while the actually observed riskfree rate is so embarrassingly low? Or should we be trying to explain the opposite antipuzzle pattern: why is the actually observed equity premium so embarrassingly low while the actually observed riskfree rate is so embarrassingly high?”

17. The result does not depend on the constant relative risk aversion form, but only on the condition that the marginal utility of consumption approaches infinity as consumption tends toward zero.
Even if we restrict to b < 1, a disaster that destroys a large fraction, b, of consumption is likely to generate partial default on normally low-risk assets such as government bills. Empirically, this low return typically does not involve explicit default but rather high inflation and, thereby, low realized real returns on nominally denominated claims during wartime (see Barro (2006, Section I.c)). For the 99 C crises considered in the present analysis, we have data (mainly from Global Financial Data) on real bill returns for 58, of which 33 were during peacetime and 25 involved wars. The median realized real rates of return on bills (arithmetic) were 0.014 in the peacetime crises, similar to that for the full sample, and −0.062 in the wartime crises. Thus, the main evidence for partial default on bills comes from wars that involved macroeconomic depressions.

To generalize the model (without specifically considering war versus peace), suppose that the loss rate on government bills is Φ(b), where 0 ≤ Φ(b) ≤ 1. We assume Φ(0) = 0, so that bills are risk-free in normal times. The formula for the equity premium in equation (2) becomes

(13)    r^e − r^f = γσ² + p · E{[b − Φ(b)] · [(1 − b)^{−γ} − 1]}.

Thus, instead of the loss rate, b, on equity, the formula involves the difference in the loss rates during disasters on equity versus bills, b − Φ(b). We previously assumed Φ(b) = 0, but a more reasonable specification is Φ′(b) ≥ 0, with Φ(b) approaching 1 as b approaches 1. The equity premium in equation (13) will be finite if, as b approaches 1, b − Φ(b) approaches 0 faster than (1 − b)^{−γ} approaches infinity. In particular, the marginal utility of consumption (for a hypothetical survivor) may be infinite if the world ends (b = 1), but the contribution of this possibility to the equity premium can be nil because no asset can deliver consumption once the world has disappeared.

6. SUMMARY OF MAIN FINDINGS

The coefficient of relative risk aversion, γ, is a key parameter for analyses of behavior toward risk. We estimated γ by combining information on the probability and sizes of macroeconomic disasters with the observed long-term average equity premium. Specifically, we calculated what γ had to be to accord with a target unlevered equity premium of 5% per year within a representative-agent model that allows for rare disasters.

In our main calibration, based on the long-term global history of macroeconomic disasters, the probability, p, of disaster (defined as a contraction in per capita consumption or GDP by at least 10% over a short period) is 3.8% per year. The size distribution of disasters accords well with a double power law, with an upper-tail exponent, α, of about 4. The resulting estimate of γ is close to 3, with a 95% confidence interval of 2 to 4. This finding is robust to whether we consider consumer expenditure or GDP and to variations in the estimated disaster probability, p, the target equity premium, and the threshold for the size distribution. The results can also accommodate seemingly paradoxical situations in which the equity premium may appear to be infinite.
REFERENCES
BARRO, R. J. (2006): “Rare Disasters and Asset Markets in the Twentieth Century,” Quarterly Journal of Economics, 121, 823–866. [1567-1570,1583,1587]
——— (2009): “Rare Disasters, Asset Prices, and Welfare Costs,” American Economic Review, 99, 243–264. [1568,1569]
BARRO, R. J., AND T. JIN (2011): “Supplement to ‘On the Size Distribution of Macroeconomic Disasters’,” Econometrica Supplemental Materials, 79, http://www.econometricsociety.org/ecta/Supmat/8827_data and programs.zip. [1570]
BARRO, R. J., AND J. F. URSUA (2008): “Macroeconomic Crises Since 1870,” Brookings Papers on Economic Activity, Spring, 255–335. [1567,1569,1570,1572,1578,1581,1584]
BRUNNERMEIER, M. K., AND S. NAGEL (2008): “Do Wealth Fluctuations Generate Time-Varying Risk Aversion? Micro-Evidence on Individuals’ Asset Allocation,” American Economic Review, 98, 713–736. [1569]
CHAMPERNOWNE, D. G. (1953): “A Model of Income Distribution,” Economic Journal, 63, 318–351. [1573]
CHIAPPORI, P. A., AND M. PAIELLA (2008): “Relative Risk Aversion Is Constant: Evidence From Panel Data,” Unpublished Manuscript, Columbia University. [1569]
CONSTANTINIDES, G. M. (2008): “Comment on ‘Macroeconomic Crises Since 1870’,” by R. J. Barro and J. F. Ursua, Brookings Papers on Economic Activity, Spring, 341–350. [1570]
DONALDSON, J. B., AND R. MEHRA (2008): “Risk-Based Explanations of the Equity Premium,” in Handbook of the Equity Premium, ed. by J. B. Donaldson and R. Mehra. Amsterdam, The Netherlands: Elsevier, Chapter 2. [1570]
EFRON, B., AND R. J. TIBSHIRANI (1993): An Introduction to the Bootstrap. New York: Chapman & Hall/CRC. [1574]
EPSTEIN, L. G., AND S. E. ZIN (1989): “Substitution, Risk Aversion, and the Temporal Behavior of Consumption and Asset Returns: A Theoretical Framework,” Econometrica, 57, 937–969. [1569]
GABAIX, X. (2009): “Power Laws in Economics and Finance,” Annual Review of Economics, 1, 255–293. [1573]
——— (2010): “Variable Rare Disasters: An Exactly Solved Framework for Ten Puzzles in Macro-Finance,” Unpublished Manuscript, New York University. [1582]
GABAIX, X., AND R. IBRAGIMOV (2011): “Rank−1/2: A Simple Way to Improve the OLS Estimation of Tail Exponents,” Journal of Business & Economic Statistics, 29, 24–39. [1573,1575,1576]
GABAIX, X., AND Y. IOANNIDES (2004): “The Evolution of City Size Distributions,” in Handbook of Regional and Urban Economics, Vol. 4, ed. by V. Henderson and J. F. Thisse. Amsterdam, The Netherlands: North-Holland. [1573]
GABAIX, X., AND A. LANDIER (2008): “Why Has CEO Pay Increased so Much?” Quarterly Journal of Economics, 123, 49–100. [1573]
GABAIX, X., P. GOPIKRISHNAN, V. PLEROU, AND H. E. STANLEY (2003): “A Theory of Power Law Distributions in Financial Market Fluctuations,” Nature, 423, 267–270. [1573]
——— (2006): “Institutional Investors and Stock Market Volatility,” Quarterly Journal of Economics, 121, 461–504. [1573]
GEWEKE, J. (2001): “A Note on Some Limitations of CRRA Utility,” Economics Letters, 71, 341–345. [1585]
GOURIO, F. (2008): “Disasters and Recoveries,” American Economic Review, 98, 68–73. [1570]
GREENE, W. H. (2002): Econometric Analysis (Fifth Ed.). Englewood Cliffs, NJ: Prentice Hall. [1574]
HILL, B. M. (1975): “A Simple General Approach to Inference About the Tail of a Distribution,” The Annals of Statistics, 3, 1163–1174. [1573]
LUCAS, R. E., JR. (1978): “Asset Prices in an Exchange Economy,” Econometrica, 46, 1429–1445. [1568]
LUTTMER, E. G. J. (2007): “Selection, Growth, and the Size Distribution of Firms,” Quarterly Journal of Economics, 122, 1103–1144. [1573]
SIZE DISTRIBUTION OF MACROECONOMIC DISASTERS
1589
MANDELBROT, B. (1960): “The Pareto–Levy Law and the Distribution of Income,” International Economic Review, 1, 79–106. [1573] MEHRA, R., AND E. C. PRESCOTT (1985): “The Equity Premium: A Puzzle,” Journal of Monetary Economics, 15, 145–161. [1572,1586] MITZENMACHER, M. (2004a): “A Brief History of Generative Models for Power Law and Lognormal Distributions,” Internet Mathematics, 1, 226–251. [1573] (2004b): “Dynamic Models for File Sizes and Double Pareto Distributions,” Internet Mathematics, 1, 305–333. [1573] NAKAMURA, E., J. STEINSSON, R. J. BARRO, AND J. F. URSUA (2011): “Crises and Recoveries in an Empirical Model of Consumption Disasters,” Working Paper 15920, NBER. [1570,1571, 1582] PARETO, V. (1897): Cours d’Economie Politique, Vol. 2. Paris, France: F. Pichou. [1573] REED, W. J. (2003): “The Pareto Law of Incomes—An Explanation and an Extension,” Physica A, 319, 469–486. [1573] RIETZ, T. A. (1988): “The Equity Risk Premium: A Solution,” Journal of Monetary Economics, 22, 117–131. [1567,1568] URSUA, J. F. (2010): “Macroeconomic Archaeology: Unearthing Crises and Long-Term Trends,” Unpublished Manuscript, Harvard University. [1581] WEIL, P. (1990): “Nonexpected Utility in Macroeconomics,” Quarterly Journal of Economics, 105, 29–42. [1569] WEITZMAN, M. L. (2007): “Subjective Expectations and Asset-Return Puzzles,” American Economic Review, 97, 1102–1130. [1585,1586]
Econometrica, Vol. 79, No. 5 (September, 2011), 1591–1625
APPARENT OVERCONFIDENCE

BY JEAN-PIERRE BENOÎT AND JUAN DUBRA1

It is common for a majority of people to rank themselves as better than average on simple tasks and worse than average on difficult tasks. The literature takes for granted that this apparent misconfidence is problematic. We argue, however, that this behavior is consistent with purely rational Bayesian updaters. In fact, better-than-average data alone cannot be used to show overconfidence; we indicate which type of data can be used. Our theory is consistent with empirical patterns found in the literature.

KEYWORDS: Overconfidence, better than average, experimental economics, irrationality, signalling models.
FOR A WHILE, THERE WAS A CONSENSUS among researchers that overconfidence is rampant. Typical early comments include “Dozens of studies show that people are generally overconfident about their relative skills” (Camerer (1997)), “Perhaps the most robust finding in the psychology of judgment is that people are overconfident” (De Bondt and Thaler (1995)), and “The tendency to evaluate oneself more favorably than others is a staple finding in social psychology” (Alicke, Klotz, Breitenbecher, Yurak, and Vredenburg (1995)).2 Recent work has yielded a more nuanced consensus: When the skill under consideration is an easy one to master, populations display overconfidence in their relative judgements, but when the skill is difficult, they display underconfidence (see, for example, Kruger, Windschitl, Burrus, Fessel, and Chambers (2008) and Moore (2007)). In this paper, we argue that both the earlier and the later consensus are misleading: much of the supposed evidence for misconfidence reveals only an apparent, not a true, overconfidence or underconfidence. While we consider the evidence on overconfidence and underconfidence, for expository purposes we emphasize overconfidence, since this is the bias that is better known among economists.

Overconfidence has been reported in peoples’ beliefs in the precision of their estimates, in their views of their absolute abilities, and in their appraisal of their relative skills and virtues.

1 We thank Stefano Sacchetto and Gabriel Illanes for their research assistance. We also thank Raphael Corbi, Rafael Di Tella, John Duffy, Federico Echenique, Emilio Espino, P. J. Healy, Richard Lowery, Henry Moon, Don Moore, Nigel Nicholson, Madan Pillutla, Matt Rabin, Ariel Rubinstein, Luís Santos-Pinto, Jack Stecher, and Juan Xandri for their comments. Juan Dubra gratefully acknowledges the financial support of ANII.

2 Papers on overconfidence in economics include Camerer and Lovallo (1999), Fang and Moscarini (2005), Garcia, Sangiorgi, and Urosevic (2007), Hoelzl and Rustichini (2005), Kőszegi (2006), Menkhoff, Schmidt, and Brozynski (2006), Noth and Weber (2003), Sandroni and Squintani (2007), Van den Steen (2004), and Zábojník (2004). In finance, recent (published) papers include Barber and Odean (2001), Biais, Hilton, Mazurier, and Pouget (2005), Bernardo and Welch (2001), Chuang and Lee (2006), Daniel, Hirshleifer, and Subrahmanyam (2001), Kyle and Wang (1997), Malmendier and Tate (2005), Peng and Xiong (2006), and Wang (2001).
© 2011 The Econometric Society DOI: 10.3982/ECTA8583
In this paper, we analyze the last form of overconfidence (or underconfidence), which has been termed overplacement (underplacement) by Larrick, Burson, and Soll (2007). Our analysis has implications for overconfidence in absolute abilities as well, but it is not directly applicable to overconfidence in the precision of estimates.

Myers (1999, p. 57) cited research showing that most people perceive themselves as more intelligent than their average peer, most business managers rate their performance as better than that of the average manager, and most high school students rate themselves as more original than the average high-schooler. These findings, and others like them, are typically presented as evidence of overconfidence without further comment. Presumably, the reason for this lack of comment is that, since it is impossible for most people to be better than average, or, more accurately, better than the median, it is obvious that some people must have inflated self-appraisals. But the simple truism that most people cannot be better than the median does not imply that most people cannot rationally rate themselves above the median. Indeed, we show that median comparisons, like the ones cited above, can never demonstrate that people are overconfident. More detailed information, such as the percentage of people who believe they rank above each decile and the strengths of these beliefs, is needed.

As an illustration of our main point, consider a large population with three types of drivers—low-skilled, medium-skilled, and high-skilled—and suppose that the probabilities of any one of them causing an accident are f_l = 47/80, f_m = 9/16, and f_h = 1/20, respectively. In period 0, nature chooses a skill level for each person with equal probability, so that the mean probability of an accident is 2/5. Initially, no driver has information about his or her own particular skill level, and each person (rationally) evaluates himself as no better or worse than average. In period 1, everyone drives and learns something about their skill, based upon whether or not they have caused an accident. Each person is then asked how his driving skill compares to the rest of the population.

How does a driver who has not caused an accident reply? Using Bayes’ rule, he evaluates his own skill level as follows: p(low skill | no accident) = 11/48, p(medium skill | no accident) = 35/144, p(high skill | no accident) = 19/36. A driver who has not had an accident thinks there is over a 1/2 chance (in fact, 19/36) that his skill level is in the top third of all drivers, so that both the median and the mode of his beliefs are well above average. His mean probability of an accident is about 3/10, which is better than for 2/3 of the drivers and better than the population mean. Moreover, his beliefs about himself strictly first order stochastically dominate (f.o.s.d.) the population distribution. Any way he looks at it, a driver who has not had an accident should evaluate himself as better than average. Since 3/5 of drivers have not had an accident, 3/5 rank themselves as above average and the population of drivers seems overconfident on the whole. However, rather than being overconfident, which implies some error in judgement, the
drivers are simply using the information available to them in the best possible manner.3

Although in this example a driver who has not had an accident considers himself to be above average whether he ranks himself by the mean, mode, or median of his beliefs, such uniformity is not always the case; in general, it is important to consider exactly how a subject is placing himself. Note that the example can easily be flipped, by making accidents likely, to generate an apparent underconfidence instead of overconfidence.

The experimental literature uses two types of experiments, those in which subjects rank themselves relative to others and those in which subjects place themselves on a scale. We show that, in contrast to ranking experiments, certain scale experiments should not have even the appearance of misconfidence.

There is a vast literature on overconfidence, both testing for it and providing explanations for it. On the explanatory side, most of the literature takes for granted that there is something amiss when a majority of people rank themselves above the median and seeks to pinpoint the nature of the error. Mistakes are said to result from egocentrism (Kruger (1999)), incompetence (Kruger and Dunning (1999)), or self-serving biases (Greenwald (1980)), among other factors. Bénabou and Tirole (2002) introduced a behavioral bias that causes people to become overconfident.

A strand of the literature more closely related to ours involves purely rational Bayesian agents. In Zábojník (2004), agents who are uncertain of their abilities, which may be high or low, choose in each period to either consume or perform a test to learn about these abilities. Given technical assumptions on the agents’ utilities, the optimal stopping rule of agents leads them to halt their learning in a biased fashion, and a disproportionate number end up ranking themselves as high in ability. Brocas and Carillo (2007) also have an optimal stopping model, which can be interpreted as leading to apparent overconfidence. In Kőszegi (2006), agents with a taste for positive self-image sample in a way that leads to overplacement. Moore and Healy (2008) have a model in which people are uncertain about both their abilities and the difficulty of the task they are undertaking. In their model, people presented with a task that is easier than expected may simultaneously overplace their rankings and underestimate their absolute performances, while the opposite holds true for those presented with a task that is more difficult than expected.

3 For a suggestive calculation using real-world data, in 1990 there were 13,851,000 drivers in the United States aged 16–19, who were involved in 1,381,167 accidents (Massie and Campbell (1993)). Invoking the so-called Pareto principle, let us suppose that 80% of the accidents were caused by 20% of the drivers, and, for simplicity, that there were two types of drivers, good and bad. The above data then yield that bad drivers have a θ_b = 2/5 chance of having an accident in a single year, while good drivers have a θ_g = 1/40 chance. Using only accidents as a gauge, Bayes’ rule and some combinatorics yield that after 3 years, 79% of drivers will have beliefs about themselves that f.o.s.d. the population distribution.
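The Bayesian arithmetic in the driver example is easy to verify with exact rational numbers. The following short Python sketch is our own illustration (the variable names are ours, not the paper's); it reproduces the posteriors, the 3/5 share of accident-free drivers, and the mean accident probability of roughly 3/10:

```python
from fractions import Fraction as F

# Accident probabilities for the three skill levels; each type has prior 1/3.
acc = {"low": F(47, 80), "medium": F(9, 16), "high": F(1, 20)}
prior = F(1, 3)

# Bayes' rule for a driver who has NOT caused an accident.
joint = {t: prior * (1 - p) for t, p in acc.items()}
p_no_accident = sum(joint.values())                  # fraction with no accident
posterior = {t: joint[t] / p_no_accident for t in joint}

print(p_no_accident)                                 # 3/5
print(posterior)                                     # low 11/48, medium 35/144, high 19/36
print(sum(posterior[t] * acc[t] for t in acc))       # mean accident prob 343/1152, about 3/10
```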
1. RANKING EXPERIMENTS

In a ranking experiment, a researcher asks each member of a population to rank his “skill” relative to the other members of the group by placing himself, through word or deed, into one of k equally sized intervals, or k-ciles (defined formally below). Implicitly, the experimenter assumes that skills can be well ordered—say by a one dimensional type—and that the distribution of actual types has a fraction 1/k in each k-cile. The experimenter assembles population ranking data: a vector x ∈ Δ^k ≡ {x ∈ R_+^k : Σ_{i=1}^k x_i = 1}, where x_i, i = 1, ..., k, is the fraction of people who rank themselves in the ith k-cile.

Svenson’s (1981) work is a prototypical example of a ranking experiment and provides perhaps the most widely cited population ranking data. Svenson gathered subjects into a room and presented them with the following instructions (among others):
Each subject was then asked to place himself or herself into one of ten intervals, yielding population ranking data x ∈ Δ10 . Svenson found that a large majority of subjects ranked themselves above the median. To determine if Svenson’s data evinced bias, inconsistency, or irrationality in his subjects, we need a notion of what it means for data to be rational and consistent. We derive this notion using an approach based upon the Harsanyi common prior paradigm. We first define a rationalizing model (Θ p S {fθ }θ∈Θ ), where Θ ⊆ R is a type space, p is a prior probability distribution over Θ S is a set of signals, and {fθ }θ∈Θ is a collection of likelihood functions: each fθ is a probability distribution over S. We adopt the following interpretation of this model. There is a large population of individuals. In period 0, nature draws a skill level, or type, for each individual independently from p. Higher types correspond to higher skill levels. The prior p is common knowledge, but individuals are not directly informed of their own type. Rather, each agent receives information about himself from his personal experience. This information takes the form of a signal, with an individual of type θ ∈ Θ receiving signal s ∈ S with probability fθ (s). Draws of signals are conditionally independent. Given his signal and the prior p, an agent updates his beliefs about his type using Bayes’ rule whenever possible. Our basic idea is that data are unproblematic if they can arise from a population whose beliefs are generated within a rationalizing model. Implementing this idea, however, is not completely straightforward. Recall that in Svenson’s ranking experiment, his instructions state that he is asking “a difficult question
because (the subjects) do not know all the people gathered much less how safely they drive.” But even if this difficulty were not present—for instance, if the subjects assumed that as a group they formed a representative draw from a well known population—an issue would remain: Does a person know how safely she herself drives? Of course, a driver has more information about herself than about a stranger, but there is no reason to presume that she knows precisely how safe her driving is4 (even assuming that she knows exactly what it means to drive “safely”5). Thus, a person may consider herself to be quite a safe driver since she has never had an accident, rarely speeds, and generally maneuvers well in traffic, but at the same time realize that the limited range of her experience restricts her ability to make a precise self-appraisal.

In ranking herself, a subject must form beliefs about her own driving safety. Svenson ignores this issue and, in effect, asks each subject for a summary statistic of her beliefs without specifying what this statistic should be. There is no way to know if subjects responded using the medians of their beliefs, the means, the modes, or some other statistic. As a result, it is unclear what to make of Svenson’s data. Svenson’s experiment is hardly unique in this respect: much of the overconfidence literature, and other literatures as well, share this feature that the meaning of responses is not clear.6

Not all experiments share this ambiguity, however. For instance, the design of Hoelzl and Rustichini (2005) induces subjects to place themselves according to their median beliefs, while Moore and Healy (2008) calculated mean beliefs from subjects’ responses. In the interest of space, in this paper we analyze ranking data only under the assumption that subjects place themselves according to their median types—that is, that a subject places himself into a certain k-cile if he believes there is at least a probability 1/2 that his actual type is in that k-cile or better and a probability 1/2 that his type is in that k-cile or below—or, more generally, according to some specific quantile of their types. In Benoît and Dubra (2009), we established that the analysis is similar under different assumptions about subjects’ responses (see below also).

The next definition says that data can be median-rationalized when they correspond to the medians of the posteriors of a rationalizing model. We start with some preliminary notation.

4 Several strands of the psychology literature, including Festinger’s (1954) social comparison theory and Bem’s (1967) self-perception theory, stress that people are uncertain of their types. In the economics literature, a number of papers start from the premise that, as Bénabou and Tirole (2002) put it, “learning about oneself is an ongoing process.”

5 Dunning, Meyerowitz, and Holzberg (1989) argued that people may have different notions of what it means to drive safely, so that the data are not what they appear to be. Here, we give the best case for the data and assume that all subjects agree on the meaning of a safe driver.

6 Dominitz (1998), in critiquing a British survey of expected earnings, wrote “what feature of the subjective probability distribution determines the category selected by respondents? Is it the mean? Or perhaps it is the median or some other quantile. Or perhaps it is the category that contains the most probability mass.”
• Given Θ, p, and k, for each 1 ≤ i ≤ k, let Θ_i denote the ith k-cile: for i ≤ k − 1, Θ_i = {θ′ ∈ Θ | (i−1)/k ≤ p(θ < θ′) < i/k} and Θ_k = {θ′ ∈ Θ | (k−1)/k ≤ p(θ < θ′)}. Note that a k-cile is a set of types, not a cutoff type, and that higher k-ciles correspond to higher types. We do not include the dependence of Θ_i on p and k, since this does not cause confusion.

• Given k and a rationalizing model (Θ, p, S, {f_θ}_{θ∈Θ}), for each 1 ≤ i ≤ k, let S_i denote the set of signals that result in an updated median type in Θ_i:

S_i = {s ∈ S : p(⋃_{n=i}^k Θ_n | s) ≥ 1/2 and p(⋃_{n=1}^i Θ_n | s) ≥ 1/2}.
Let F denote the marginal of signals over S: for each (measurable) T ⊂ S, F(T) = ∫_Θ ∫_T df_θ(s) dp(θ). Thus, F(S_i) is the (expected) fraction of people who will place their median types in k-cile i when types are distributed according to p and signals are received according to f_θ.

DEFINITION 1: Given a type space Θ ⊆ R and a distribution p over Θ, the population ranking data x ∈ Δ^k can be median-rationalized for (Θ, p) if there is a rationalizing model (Θ, p, S, {f_θ}_{θ∈Θ}) with x_i = F(S_i) for i = 1, ..., k.

The example in the introduction shows that x = (0, 2/5, 3/5) can be median-rationalized for Θ = {l, m, h} and p(h) = p(m) = p(l) = 1/3. When ranking data are median-rationalizable, they can arise from a Bayes-rational population working from a common prior, and there is no prima facie case for calling them biased.

The following theorem indicates when data can be median-rationalized. In effect, a rational population can appear to be twice as confident as reality would suggest, but no more. For instance, suppose that people place themselves into ten intervals (k = 10). Then apparently overconfident data in which up to 2/10 of the people rank themselves in the top decile, up to 4/10 rank themselves in the top two deciles, and up to 2i/10 rank themselves in the top i deciles for i = 3, 4, 5 can be rationalized. However, data in which 1/2 of the population places itself in the top two deciles cannot be explained as rational.

THEOREM 1: Suppose that Θ ⊆ R and p is a distribution over Θ such that p(Θ_i) = 1/k for all i. Then the population ranking data x ∈ Δ^k can be median-rationalized for (Θ, p) if and only if, for i = 1, ..., k,

(1)   Σ_{j=i}^k x_j < (2/k)(k − i + 1)

and

(2)   Σ_{j=1}^i x_j < (2/k) i.
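Conditions (1) and (2) can be checked mechanically for any data vector. The sketch below is our own illustration (the function is hypothetical, not from the paper); it confirms the driver example and the top-two-deciles example discussed before the theorem:

```python
from fractions import Fraction as F

def median_rationalizable(x):
    """Conditions (1) and (2) of Theorem 1; x[i] is the fraction in k-cile i+1."""
    k = len(x)
    cond1 = all(sum(x[i:]) < F(2, k) * (k - i) for i in range(k))       # condition (1)
    cond2 = all(sum(x[:i + 1]) < F(2, k) * (i + 1) for i in range(k))   # condition (2)
    return cond1 and cond2

# The driver example: x = (0, 2/5, 3/5) with k = 3 passes both conditions.
print(median_rationalizable([F(0), F(2, 5), F(3, 5)]))            # True

# Half the population in the top two deciles violates condition (1).
print(median_rationalizable([F(1, 16)] * 8 + [F(1, 4)] * 2))      # False
```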
All proofs are provided in the Appendix. Population ranking data x that satisfy the necessary conditions (1) and (2) can be generated from a rational population with any distribution of types p, provided only that p is legitimate in the sense that it partitions the population uniformly into k-ciles.7 Conversely, if the necessary conditions are not satisfied,8 then no legitimate prior p can yield the data x.

The necessary part of Theorem 1 comes from the fact that Bayesian beliefs must average out to the population distribution. When people self-evaluate by their median type, (1/2) × Σ_{j=i}^k x_j is a lower bound on the weight their beliefs put into the top (k − i + 1) k-ciles. If (1) is violated for some i, then too much weight (i.e., more than (1/k)(k − i + 1)) is put into these k-ciles. Similarly for condition (2).

The sufficiency part of the theorem is more involved, although it is straightforward for the special case where each x_i < 2/k. For this case, set S = (s_1, ..., s_k) and let types θ ∈ Θ_i observe signal s_j with probability f_{θ∈Θ_i}(s_j) = k(1/k − x_i/2)x_j for i ≠ j and f_{θ∈Θ_i}(s_j) = k(1/2 + 1/k − x_i/2)x_j for i = j. Then (Θ, p, S, {f_θ}_{θ∈Θ}) median-rationalizes x for (Θ, p).

Theorem 1 is a corollary of a more general theorem, formally stated and proved in the Appendix, that covers the case in which a person places himself into a k-cile based on an arbitrary quantile q of his beliefs. In that case, data x can be rationalized if and only if for all i,

(3)   Σ_{j=i}^k x_j < (k − i + 1)/(qk)   and   Σ_{j=1}^i x_j < i/((1 − q)k).

As an application of this more general theorem to an experimental setting, if a group of subjects is offered the choice between a prize of M if their score on a quiz places them in the 8th decile or above and M with probability 0.7, at most 43% should bet on their quiz placement.

7 This restriction on p avoids uninteresting trivialities. If, for instance, the distribution p were to assign all the weight to just the first two k-ciles, then even (1/k, ..., 1/k) could not be median-rationalized for any k > 2.

8 In fact, conditions (1) only have bite for i > (k+1)/2, while conditions (2) only have bite for i < (k+1)/2. (Since (k+1)/2 may or may not be an integer, it is easiest to state the conditions as we do.)
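The special-case construction given above can also be verified directly. In the sketch below, ours, the data vector is a hypothetical example with each x_i < 2/k; the assertions check that the likelihoods are valid distributions, that a fraction x_j sees signal s_j, and that the median type after s_j lies in Θ_j:

```python
from fractions import Fraction as F

def special_case_likelihoods(x):
    # f_{θ∈Θ_i}(s_j) = k(1/k - x_i/2)x_j for j != i, and
    # f_{θ∈Θ_i}(s_j) = k(1/2 + 1/k - x_i/2)x_j for j == i.
    k = len(x)
    return [[k * ((F(1, 2) if i == j else F(0)) + F(1, k) - x[i] / 2) * x[j]
             for j in range(k)] for i in range(k)]

x = [F(1, 10), F(1, 10), F(2, 5), F(2, 5)]        # hypothetical data, each x_i < 2/k
k, f = len(x), special_case_likelihoods(x)

assert all(sum(row) == 1 for row in f)            # each f_{θ∈Θ_i} is a distribution
assert all(sum(f[i][j] for i in range(k)) * F(1, k) == x[j] for j in range(k))
assert all(F(1, k) * f[j][j] / x[j] > F(1, 2) for j in range(k))  # p(Θ_j | s_j) > 1/2
print("x is median-rationalized by the constructed model")
```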
Researchers often summarize population ranking data by the percentage of people who place themselves above the population median. However, such data cannot be used to show overconfidence. Theorem 1 shows that any fraction r < 1 of the population can rationally place itself in the top half when people self-evaluate using their median types. Benoît and Dubra (2009, 2011) established that the same holds true when people self-evaluate using their mean or modal types.9

In Svenson’s experiment, students in Sweden and the United States were questioned about their driving safety and driving skill relative to their respective groups. Swedish drivers placed themselves into 10% intervals in the following proportions when asked about their safety:

Interval      1    2    3    4     5    6     7     8     9     10
Reports (%)  0.0  5.7  0.0  14.3  2.9  11.4  14.3  28.6  17.1  5.7
Note first that, although a majority of drivers rank themselves above the median, these population ranking data do not have an unambiguously overconfident appearance, as fewer than 10% of the driver population place themselves in the top 10%. More importantly, Theorem 1 implies that these data can be median-rationalized, as can the Swedish responses on driving skill. On the other hand, on both safety and skill, Svenson’s American data cannot be median-rationalized. For instance, 82% of Americans placed themselves in the top 30% on safety and 46% placed themselves in the top 20% on skill. Thus, Svenson did find some evidence of overconfidence, if his subjects based their answers on their median types, but this evidence is not as strong as is commonly believed. Note also that when 46% of the population place themselves in the top 20%, this is only 6% too many, not 26%.

Some researchers summarize their results by their subjects’ mean k-cile placement, μ(x) = Σ_{i=1}^k i x_i, and infer overconfidence if μ > (k+1)/2. The following corollary to Theorem 1 shows that much of this overconfidence is only apparent.

COROLLARY 1: Suppose that Θ ⊆ R and p is a distribution over Θ such that p(Θ_i) = 1/k for all i. Then the mean k-cile placement μ can come from population ranking data that can be median-rationalized for (Θ, p) if and only if

|μ − (k + 1)/2| < k/4   for k even,
|μ − (k + 1)/2| < (k − 1/k)/4   for k odd.

Thus, when subjects are asked to place themselves into deciles, a mean placement of, say, 7.9 is not out of order.

9 Benoît and Dubra (2009) also showed how the ideas in this paper can be applied to game theoretic settings, such as the entry games in Camerer and Lovallo (1999).
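As a numerical companion to this discussion, the following sketch, ours, runs the Swedish safety data through conditions (1) and (2), checks the American summary statistics against the bounds, and records the Corollary 1 interval for k = 10:

```python
from fractions import Fraction as F

# Svenson's Swedish safety data, as fractions per decile (from the table above).
x = [F(n, 1000) for n in (0, 57, 0, 143, 29, 114, 143, 286, 171, 57)]
k = len(x)

ok = (all(sum(x[i:]) < F(2, k) * (k - i) for i in range(k)) and
      all(sum(x[:i + 1]) < F(2, k) * (i + 1) for i in range(k)))
print("Swedish safety data median-rationalizable:", ok)          # True

# The American summaries violate condition (1): 82% in the top 3 deciles exceeds
# the bound (2/10)*3 = 60%, and 46% in the top 2 exceeds (2/10)*2 = 40%.
print(F(82, 100) > F(2, 10) * 3, F(46, 100) > F(2, 10) * 2)      # True True

# Corollary 1 with k = 10: rationalizable mean placements lie in the open
# interval (11/2 - 10/4, 11/2 + 10/4) = (3, 8), so 7.9 is feasible but 8.0 is not.
```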
We have modelled individuals who know the distribution of types in the population. It is easy to generalize beyond this, although a bit of care must be taken, as the following example shows. Let the type space be Θ = [0, 1]. In period 0, nature chooses one of two distributions, p = U[0, 1] with probability q, where 4/5 < q < 1, or p′ = U[3/4, 1] with probability 1 − q. Nature then assigns each individual a type, using the chosen distribution. In period 1, everyone is informed exactly of his type (e.g., f_θ(θ) = 1) and then median-ranks himself. Although individuals know their own types, the median plays a role since individuals are not told which distribution nature used in assigning types. (Recall that a median placement in k-cile j means that a subject believes there is at least a 1/2 chance that his type lies in k-cile j or above (of the realized distribution) and a 1/2 chance that it lies in j or below.10)

Suppose that, as it happens, nature used the distribution p′ in assigning types, so that all types lie in the interval [3/4, 1]. After being informed of his type, any individual believes there is a q/(q + 4(1 − q)) > 1/2 chance that the distribution of types is p. Since the lowest type is 3/4, all individuals median-place themselves in the top quartile of the population. In contrast, Theorem 1 does not allow more than half the individuals to place themselves in the top quartile. Note, however, that we have analyzed the result of only one population distribution draw, namely p′, and one draw is necessarily biased. If we consider a large number of draws, so that a fraction q of the time the population distribution is p, we will find that overall only the fraction (q × 1/4) + ((1 − q) × 1) < 2/5 places themselves in the top quartile, in line with the theorem.11

In general, we model a population of individuals who may be uncertain of both their own types and the overall distribution of types using a quadruplet (Θ, π, S, {f_θ}_{θ∈Θ}), where π is a prior over a set of probability distributions over Θ, and making concomitant changes to the definitions of a k-cile, and so forth. With this modelling, conditions (1) and (2) in Theorem 1 remain necessary and sufficient for median-rationalization. Indeed, (1) and (2) remain necessary in any environment in which Bayesian updaters start from a common and correct prior, since these conditions simply reflect the fact that beliefs average out to the population distribution.

10 Information on median placements can be induced by offering subjects the choice between a prize with probability 1/2 and the prize if their type is at least in k-cile j.

11 Uncertainty about the distribution of types can be interpreted as uncertainty about the difficulty of the task, in the spirit of Moore and Healy (2008) (although our results differ from theirs).
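The claims in this example can be confirmed with a short computation; the sketch below, ours, fixes q = 9/10 as one arbitrary value in (4/5, 1):

```python
from fractions import Fraction as F

q = F(9, 10)                           # any q in (4/5, 1)

# A realized type in [3/4, 1] has density 1 under p = U[0,1] and 4 under p' = U[3/4,1].
belief_p = q / (q + 4 * (1 - q))       # posterior that nature drew p
print(belief_p, belief_p > F(1, 2))    # 9/13 True: all median-place in the top quartile

# Averaged over nature's choice of distribution, the fraction in the top quartile:
frac = q * F(1, 4) + (1 - q) * 1
print(frac, frac < F(2, 5))            # 13/40 True, in line with Theorem 1
```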
1.1. Monotone Signals

Theorem 1 describes when population ranking data can be rationalized, without regard as to whether the collection of likelihood functions {f_θ} used is, in some sense, reasonable. While it may not be possible to specify exactly what constitutes a reasonable collection of likelihood functions, it is possible to identify some candidate reasonable properties. One such property is that better types should be more likely to receive better signals (for instance, a safe driver should expect to experience few adverse driving incidents) and, conversely, better signals should be indicative of better types. More precisely, given Θ ⊆ R and S ⊆ R, we say that the collection of likelihood functions {f_θ}_{θ∈Θ} satisfies the monotone signal property (m.s.p.) if (i) f_{θ′} f.o.s.d. f_θ for θ′ > θ ∈ Θ and (ii) for all s′ > s ∈ S, the posterior after s′ f.o.s.d. the posterior after s, for all probability distributions p over Θ that assign probability 1/k to each k-cile. A rationalizing model (Θ, p, S, {f_θ}_{θ∈Θ}) satisfies m.s.p. if {f_θ}_{θ∈Θ} satisfies m.s.p.

A standard restriction found in the literature is that the collection of f_θ’s should satisfy the monotone likelihood ratio property (m.l.r.p.): for all θ′ > θ, f_{θ′}(s)/f_θ(s) is increasing in s. The m.l.r.p. is equivalent to insisting that f satisfy properties (i) and (ii) for all p, not only those that assign probability 1/k to each k-cile (see Whitt (1980) and Milgrom (1981)). In our framework, the only priors p that are relevant are those that divide the type space evenly into k-ciles, so the m.s.p. can be seen as the appropriate version of m.l.r.p. for our context.

The following theorem, which has been formulated with overconfident looking data in mind, shows that the m.s.p. imposes more stringent necessary conditions that population ranking data x must satisfy if they are to be rationalized. When k ≤ 4, these conditions are also sufficient, while for k > 4, they are approximately sufficient in the following sense. Define h = min{n ∈ N : n > k/2}. Say that vector y is comparable to x if y_i = x_i for i = h, ..., k and y_1 ≤ x_1. If x satisfies the necessary conditions, then there is a y comparable to x that can be rationalized. The comparable vector y matches x exactly in the components which are in the upper half, so that in this regard the overconfident aspect of the data is explained. Below the median, however, we may need to do some rearranging. However, we do not do this by creating a large group of unconfident people who rank themselves in the bottom k-cile.

THEOREM 2: Suppose that Θ ⊆ R and p is a distribution over Θ such that p(Θ_i) = 1/k for all i. The population ranking data x ∈ Δ^k can be median-rationalized for (Θ, p) by a rationalizing model that satisfies the monotone signal property only if

(4)   Σ_{j=i}^k x_j (2j − i − 1)/(j − 1) < (2/k)(k − i + 1)   for i = 2, ..., k

and

(5)   Σ_{j=1}^i x_j (k + i − 2j)/(k − j) < (2/k) i   for i = 1, ..., k − 1.

Suppose x ≫ 0. Then for k ≤ 4, the above inequalities are also sufficient. For k > 4, if x satisfies (4) and (5), then there exists a y comparable to x that can be median-rationalized for (Θ, p) with a rationalizing model that satisfies the monotone signal property.
For i = k, condition (4) yields x_k < 2/k, the same necessary condition as in Theorem 1. For 2 ≤ i < k, however, the restrictions on the data are more severe. Thus, for i = k − 1, we have x_{k−1} + x_k k/(k − 1) < 4/k rather than x_{k−1} + x_k < 4/k. To derive this tighter bound, suppose that data x are median-rationalized by a rationalizing model that satisfies the m.s.p. As is shown in the Appendix, x is then also median-rationalized by a model with k signals—S = (s_1, ..., s_k)—in which all agents within a k-cile j receive a signal s_i with the same probability f_{θ∈Θ_j}(s_i). For each i = 1, ..., k, we have (a) Σ_{j≥i} f_{θ∈Θ_j}(s_i) > (k/2)x_i, so that an individual who sees signal s_i has unique median type in Θ_i, and (b) Σ_{j=1}^k f_{θ∈Θ_j}(s_i) = kx_i, so that the fraction x_i see signal s_i. Since the m.s.p. is satisfied, f_{θ∈Θ_j}(s_k) is increasing in j. Therefore, Σ_{j=1}^{k−1} f_{θ∈Θ_j}(s_k) ≤ (k − 1)f_{θ∈Θ_{k−1}}(s_k) and, from (b), f_{θ∈Θ_{k−1}}(s_k) ≥ (kx_k − f_{θ∈Θ_k}(s_k))/(k − 1), so that f_{θ∈Θ_{k−1}}(s_{k−1}) ≤ 1 − (kx_k − f_{θ∈Θ_k}(s_k))/(k − 1). Since f_{θ∈Θ_k}(s_{k−1}) ≤ 1 − f_{θ∈Θ_k}(s_k), we have

1 − (kx_k − f_{θ∈Θ_k}(s_k))/(k − 1) + (1 − f_{θ∈Θ_k}(s_k)) = 2 − (f_{θ∈Θ_k}(s_k)(k − 2) + kx_k)/(k − 1) ≥ f_{θ∈Θ_{k−1}}(s_{k−1}) + f_{θ∈Θ_k}(s_{k−1}) > (k/2)x_{k−1},

where the last inequality follows from (a). Again from (a), we have

2 − ((k/2)x_k(k − 2) + kx_k)/(k − 1) > 2 − (f_{θ∈Θ_k}(s_k)(k − 2) + kx_k)/(k − 1) > (k/2)x_{k−1},

as was to be shown.

Conditions (4) and (5) are not sufficient since, for instance, the data (8/25, 1/75, 1/75, 1/15, 4/15, 8/25) cannot be rationalized with monotone signals, although the comparable vector (9/75, 9/75, 8/75, 1/15, 4/15, 8/25) can be. In the Appendix, we show by direct construction that such a counterexample cannot arise when k ≤ 4. The reason is that the m.s.p. places fewer demands on the likelihood functions when there are fewer signals, and fewer signals are needed when k is smaller.

While the monotone signal property imposes tighter bounds on population ranking data, plenty of scope for apparent overconfidence remains. In particular, the m.s.p. still allows any fraction r < 1 of the population to place itself above the median and vectors comparable to Svenson’s Swedish data.
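Conditions (4) and (5) can also be checked mechanically. The sketch below, ours, confirms that the counterexample data above satisfy both conditions, illustrating that the conditions are necessary but not sufficient:

```python
from fractions import Fraction as F

def msp_necessary(x):
    """Necessary conditions (4) and (5) of Theorem 2, with k-ciles indexed 1..k."""
    k = len(x)
    c4 = all(sum(x[j - 1] * F(2 * j - i - 1, j - 1) for j in range(i, k + 1))
             < F(2, k) * (k - i + 1) for i in range(2, k + 1))
    c5 = all(sum(x[j - 1] * F(k + i - 2 * j, k - j) for j in range(1, i + 1))
             < F(2, k) * i for i in range(1, k))
    return c4 and c5

x = [F(8, 25), F(1, 75), F(1, 75), F(1, 15), F(4, 15), F(8, 25)]
print(msp_necessary(x))   # True, yet x cannot be rationalized with monotone signals
```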
1.2. Some Empirical Considerations

Kruger (1999) found a “below-average effect in domains in which absolute skills tend to be low.” Moore (2007), surveying current research, wrote that “When the task is difficult or success is rare, people believe that they are below average,” while the opposite is true for easy tasks. As suggested by the phrase “success is rare,” some tasks are evaluated dichotomously: success or failure. Call an easy task one where more than half the people succeed and call a difficult task one where more than half the people fail. If people evaluate themselves primarily on the basis of their success or failure on the task—in the limit, if their only signal is whether or not they succeed—then rational updating
will lead to a better-than-average effect on easy tasks and a worse-than-average effect on difficult ones.12 Formally, this situation is described by a rationalizing model (Θ, p, S, {f_θ}_{θ∈Θ}) with S = {0, 1} and f_θ(1) increasing in θ, which yields the fraction F(1) = ∫ f_θ(1) dp(θ) of the population with median type above the population median. Comparing two dichotomously evaluated tasks F and G with rationalizing models (Θ, p, S, {f_θ}_{θ∈Θ}) and (Θ, p, S, {g_θ}_{θ∈Θ}), if g_θ f.o.s.d. f_θ for all θ, then G is an easier task on which to succeed and more people will rate themselves above the median on G than F.

While this observation is in keeping with the current wisdom on the effect of ease, this simple comparative static does not generalize to tasks that are not evaluated dichotomously. Suppose that F and G are two tasks in which competence is evaluated on the basis of three signals. On each task, 50% of the population is of type θ_L and 50% is of type θ_H > θ_L. The likelihood functions for the tasks are f_{θ_L}(1) = 2/3, f_{θ_L}(2) = 1/3, f_{θ_H}(2) = 1/2, and f_{θ_H}(3) = 1/2 on task F, and g_{θ_L}(1) = 1/2, g_{θ_L}(2) = 1/2, g_{θ_H}(2) = 1/3, and g_{θ_H}(3) = 2/3 on task G. Both {f_θ}_{θ∈Θ} and {g_θ}_{θ∈Θ} satisfy the monotone signal property, so that a higher signal can be interpreted as a better performance. Since g_θ f.o.s.d. f_θ for all θ, these better performances are easier to obtain on task G than on task F. Nevertheless, only 1/3 of the population will place itself in the top half on G, while 2/3 of the population will place itself in the top half of the population on F.

On the face of it, this example conflicts with the claim that easier tasks lead to more overconfidence; on reflection, this is not so clear. Given the way that low and high types perform on the two tasks, a case can be made that “success” on task F is a signal of 2 or above, while success on task G is a signal of 3. Then more people succeed on F than on G, and task F is the easier task. (In a more concrete vein, a judgement as to whether bowling is easier than skating depends on how one defines success in the two activities.) This ambiguity cannot arise when there are only two signals.

At a theoretical level, it is unclear exactly how to define the ease of a task, in general, and how to establish a clear link between ease and apparent overconfidence.13 This suggests that the current wisdom on the impact of ease needs to be refined and reexamined. In line with this suggestion, Grieco and Hogarth (2009) found no evidence of a hard/easy effect, and while Kruger (1999) did find such an effect, his data contain notable exceptions.14

12 Moore also noted that “people believe that they are more likely than others to experience common events—such as living past age 70—and less likely than others to experience rare events such as living past 100.” By interpreting experiencing the event as a “success” (for instance, having a parent live past 70 would be a success) and not experiencing it as “failure,” we obtain this prediction about people’s beliefs.

13 However, it is possible to establish such a link for specific cases beyond dichotomous tasks. In Benoît and Dubra (2011), we showed that the findings of Hoelzl and Rustichini (2005) and Moore and Cain (2007) on ease can be generated within our framework.

14 For instance, although Kruger categorized organizing for work as a difficult task, this task also displays a large better-than-average effect.
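The comparison of tasks F and G above can be reproduced numerically. The sketch below, ours, computes, for two equally likely types, the fraction of the population whose posterior puts more than 1/2 on the high type:

```python
from fractions import Fraction as F

def top_half_fraction(f_low, f_high):
    """Fraction median-placing in the top half, with types θ_L, θ_H equally likely:
    a signal puts the median type at θ_H iff its posterior on θ_H exceeds 1/2."""
    frac = F(0)
    for s in set(f_low) | set(f_high):
        j_low = F(1, 2) * f_low.get(s, F(0))     # joint prob of (θ_L, signal s)
        j_high = F(1, 2) * f_high.get(s, F(0))   # joint prob of (θ_H, signal s)
        if j_high > j_low:                       # posterior on θ_H above 1/2
            frac += j_low + j_high
    return frac

task_F = ({1: F(2, 3), 2: F(1, 3)}, {2: F(1, 2), 3: F(1, 2)})
task_G = ({1: F(1, 2), 2: F(1, 2)}, {2: F(1, 3), 3: F(2, 3)})
print(top_half_fraction(*task_F), top_half_fraction(*task_G))    # 2/3 and 1/3
```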
Moreover, even when our approach predicts a better-than-average effect on a task, it does not necessarily predict that too many people systematically place themselves in the upper k-ciles; that is, that the population ranking data f.o.s.d. the uniform distribution. As far as we know, the current literature makes no claims in this regard, and it is worth recalling that while Svenson found a better-than-average effect in his Swedish drivers, he also found that too few people rank themselves in the top 10% on safety (and the top 20% on skill).

We turn now to some empirical evidence on how experience affects the degree of apparent overconfidence. Generally speaking, as people gather more information about themselves, they derive tighter estimates of their types. A population with tight estimates can be captured in our framework by only allowing rationalizing models in which, after updating, individuals are at least c% sure of the k-cile in which their types lie, for some large c. As a corollary to conditions (3), as c increases, the fraction of people who can rationally place themselves above the median gets closer and closer to 1/2. This suggests that populations with considerable experience should exhibit little misconfidence. In keeping with this prediction, Walton (1999) interviewed professional truck drivers, who each drive approximately 100,000 kilometers a year, and found no bias in their self-assessments of their relative skills. He did find that a majority claim to be safer drivers than average; however, it is quite possible that most of the truckers had only had safe driving experiences, so that a majority could rationally rank themselves highly. Experience also comes with age, and the evidence on age and overconfidence is mixed. While some researchers have found that misplacement declines with age, others have found no relation.15

Accidents and moving violations are, presumably, negative signals about a driver’s safety. Despite this, Marotolli and Richardson (1998) found no difference between the confidence levels of drivers who have had adverse driving incidents and those who have not, which points against the hypothesis that they are making rational self-evaluations.16 On the other hand, Groeger and Grande (1996) found that although drivers’ self-assessments are uncorrelated to the number of accidents they have had, their self-assessments are positively correlated to the average number of accident-free miles they have driven. The number of accident-free miles seems to be the more relevant signal, as one would expect better drivers to drive more, raising their number of accidents.

15 For instance, Mathews and Moran (1986) and Holland (1993) found that drivers’ overplacement declines with age, while Marotolli and Richardson (1998) and Cooper (1990) found no such decline.

16 Note that interpreting the evidence can be a bit tricky. For instance, an accident may lead a driver to conclude that he used to be an unsafe driver, but that now, precisely because he has had an accident, he has become quite a safe driver.
2. SCALE EXPERIMENTS

In a scale experiment, a scale in the form of a real interval and a population average are specified (sometimes implicitly), and each subject is asked to place himself somewhere on the scale.17 Population scale data comprise a triplet (Θ, m, ā), where Θ ⊂ R is a real interval, m ∈ Θ is a population average, and ā ∈ Θ is the average of the placements.

The idea underlying scale experiments is that in a rational population, self-placements should average out to the population average. When the scale is subjective, this presumption is, at best, debatable, so let us restrict ourselves to experiments with an objective scale (for a brief discussion of the issues with a subjective scale, see Benoît and Dubra (2009)). As an example, Weinstein (1980) asked students how their chances of obtaining a good job offer before graduation compare to those of other students at their college, with choices ranging from 100% less than average to 5 times the average. Here there is no ambiguity in the meaning of the scale. However, two ambiguities remain; namely, what is meant by an average student and what a subject means by a point estimate of his or her own type.

To illustrate, suppose for the sake of discussion that all of Weinstein’s subjects agree that there are two types of students at their college—low and high—who have job offer probabilities p_L = 0.3 and p_H = 1, and that 80% of the population are low type. A reasonable interpretation of an average student is one whose chance of obtaining an offer is 0.3. Consider a respondent who thinks that there is a 50% chance that she is a low type. Her probability of obtaining a good job offer is (0.5 × 0.3) + (0.5 × 1) = 0.65. A perfectly reasonable response to Weinstein’s question is that her chances are 35% above average. Thus, one sensible way to answer the question uses the population median, or mode, to determine what an average student is, but uses the mean of own beliefs to self-evaluate.

Just considering medians and means, there are four ways to interpret answers to (unincentivized) scale questions. It is fairly obvious that in the three cases involving the median, apparent overconfidence will not imply overconfidence, since there is no particular reason for median calculations to average out. Theorem 3, which is a simple consequence of the fact that beliefs are a martingale, concerns the remaining case. It says that when a rational population reports their mean beliefs, these reports must average out to the actual population mean.

17 In some experiments, the scale Θ is not an interval of real numbers, but, say, a set of integers. This may force some subjects to round off their answers, leading to uninteresting complications, which we avoid.
DEFINITION 2: The population scale data (Θ, m, ā) can be rationalized if there is a rationalizing model (Θ, p, S, {f_θ}_{θ∈Θ}) such that m = E(θ) and ā = ∫_Θ θ dc, where c is the probability distribution defined by c(T) = F{s : E(θ | s) ∈ T} for T ⊂ Θ.

THEOREM 3: Population scale data (Θ, m, ā) can be rationalized if and only if ā = m.

Clark and Friesen (2009) reported a scale experiment in which subjects are incentivized to, in effect, report their mean beliefs relative to the population mean. In keeping with Theorem 3, the experiment found no apparent overconfidence or underconfidence.18 Moore and Healy (2008) ran a set of incentivized scale experiments which yield no misconfidence in some treatments and misconfidence in others.

3. CONCLUSION

Early researchers found a universal tendency toward overplacement. Psychologists and economists developed theories to explain this overplacement and explore its implications. Implicit in these theories was the presumption that a rational population should not overplace itself. We have shown, however, that there is no particular reason for 50% of the population to place itself in the top 50%. At an abstract level, our theory implies that rational populations should display both overplacement and underplacement, and this is what more recent work has uncovered.

Many of the overplacement studies to date have involved experiments that are, in fact, of limited use in testing for overconfidence. Our results point to the type of experimental design that can provide useful data in this regard. In particular, experiments should yield information about the strengths of subjects’ beliefs and information beyond rankings relative to the median.19 If, say, 65% of subjects believe there is at least an 0.7 chance that they rank in the top 40%, the population displays (true) overconfidence. Note, however, that this does not demonstrate that 25% of the subjects are overconfident. In the extreme, as much as 57% of the population could rationally hold such a belief. Thus, the overconfidence of a few can produce quite overconfident looking data and it may be misleading to broadly characterize a population as overconfident. At the same time, 65% of subjects could rationally hold that there is an 0.6 chance they are in the top 40%, so that a slight degree of overconfidence can also lead to quite overconfident looking data.

For the sake of discussion, let us suppose that Svenson’s subjects answered his questions using their median beliefs about themselves. Then we have shown that Svenson’s Swedish data can be rationalized but that his American data cannot. On one interpretation, we have explained his Swedish data but not his American data. We prefer a different interpretation; namely, that we have provided a proper framework with which to analyze Svenson’s data. This framework shows that his American data display overconfidence, but that his Swedish data do not.

Some psychologists and behavioral economists may be uneasy with our approach on the prior grounds that individuals do not use Bayes’ rule and, for that matter, may not even understand simple probability. Even for these researchers, however, the basic challenge of this paper remains: To indicate why, and in what sense, a finding that a majority of people rank themselves above the median is indicative of overconfidence. If such a finding does not show overconfidence in a Bayes’ rational population, there can be no presumption that it indicates overconfidence in a less rational population. It is, of course, possible that people are not rational, but not overconfident either.

18 In one variant of their experiment, Clark and Friesen found that subjects underestimate their absolute performance.

19 In line with these requirements, the recent experimental paper of Merkle and Weber (2011) asked for subjects’ belief distributions, while Burks, Carpenter, Goette, and Rustichini (2011) provided incentives designed to elicit modal beliefs, which they then combined with information on actual performance. Prior work by Moore and Healy (2008) used a quadratic scoring rule to elicit beliefs. Karni (2009) described a different procedure for eliciting detailed information about subjects’ beliefs.
APPENDIX

Theorem 1 is a special case of a theorem which we present after the following definitions. For each i, let S_i^q denote the set of signals that result in an updated qth percentile in Θ_i:

S_i^q = {s ∈ S : p(⋃_{n=i}^k Θ_n | s) ≥ q and p(⋃_{n=1}^i Θ_n | s) ≥ 1 − q}.
Given a type space Θ ⊆ R and a distribution p over Θ, the population ranking data x ∈ Δ^k can be q-rationalized for (Θ, p) if there is a rationalizing model (Θ, p, S, {f_θ}_{θ∈Θ}) with x_j = F(S_j^q) for j = 1, ..., k. Note that median-rationalizing is q-rationalizing for q = 1/2.

THEOREM 4: Suppose that Θ ⊆ R and p is a distribution over Θ such that p(Θ_i) = 1/k for all i. For q ∈ (0, 1), the population ranking data x ∈ Δ^k can be q-rationalized for (Θ, p) if and only if, for i = 1, ..., k,

(6)   Σ_{j=i}^k x_j < (k − i + 1)/(qk)

and

(7)   Σ_{j=1}^i x_j < i/((1 − q)k).
The proof of Theorem 4 proceeds as follows. Given a type space Θ and prior p, we construct likelihood functions such that every type in a given k-cile i observes signals with the same probability. This allows us to identify every θ ∈ Θ_i with one type in Θ_i, and without loss of generality (w.l.o.g.) work with a type space {θ_1, ..., θ_k}. The key to q-rationalizing a vector x is finding a nonnegative matrix A = (A_{ji})_{j,i=1}^k such that xA = (1/k, ..., 1/k), Σ_{i=1}^k A_{ji} = 1, and Σ_{i=1}^j A_{ji} > 1 − q and Σ_{i=j}^k A_{ji} > q for all j. The matrix A can then be interpreted as the rationalizing model that q-rationalizes x as follows. Nature picks (in an independent and identically distributed (i.i.d.) fashion) for each individual a type θ_i and a signal s_j with probability x_j A_{ji}. Each k-cile Θ_i then has probability 1/k since xA = (1/k, ..., 1/k). The likelihood functions are given by f_{θ_i}(s_j) = kx_j A_{ji}, and row j of A is then the posterior belief after signal s_j. Since Σ_{i=1}^j A_{ji} > 1 − q and Σ_{i=j}^k A_{ji} > q, and the number of people observing s_j is x_j, the rationalizing model q-rationalizes x.
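The role of the matrix A can be illustrated concretely. The sketch below, ours, encodes the introduction's driver example as such a matrix for q = 1/2: row 3 is the no-accident posterior from the introduction, row 2 is the corresponding accident posterior (our computation via Bayes' rule), and row 1 is a dummy because x_1 = 0:

```python
from fractions import Fraction as F

def justifies(x, A, q):
    """Does the row-stochastic matrix A q-rationalize x? Row j is the posterior
    after signal j; the x-weighted column sums must return the uniform prior."""
    k = len(x)
    prior_ok = all(sum(x[j] * A[j][i] for j in range(k)) == F(1, k) for i in range(k))
    rows_ok = all(sum(A[j]) == 1
                  and sum(A[j][:j + 1]) > 1 - q     # mass at or below k-cile j+1
                  and sum(A[j][j:]) > q             # mass at or above k-cile j+1
                  for j in range(k))
    return prior_ok and rows_ok

x = [F(0), F(2, 5), F(3, 5)]
A = [[F(1), F(0), F(0)],                    # dummy row: signal seen by no one
     [F(47, 96), F(15, 32), F(1, 24)],      # posterior after an accident
     [F(11, 48), F(35, 144), F(19, 36)]]    # posterior after no accident
print(justifies(x, A, F(1, 2)))             # True
```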
PROOF OF THEOREM 4: Sufficiency for q-rationalization.

Step 1. Suppose q ∈ (0, 1) and that x ∈ Δ^k is such that inequalities (6) and (7) hold. We show that there exists a nonnegative k × k matrix A = (A_{ji})_{j,i=1}^k such that xA = (1/k, ..., 1/k) and, for all j, Σ_{i=1}^k A_{ji} = 1, Σ_{i=1}^j A_{ji} > 1 − q, and Σ_{i=j}^k A_{ji} > q.

Pick d such that min{1/q, 1/(1 − q), (k + 1)/k} > d > 1 and, for all i,

(8)   Σ_{j=i}^k x_j ≤ (k − i + 1)/(qdk)   and   Σ_{j=1}^i x_j ≤ i/((1 − q)dk).
We say that r ∈ Δ^k can be justified if there exists a nonnegative k × k matrix R such that xR = r and, for all j, Σ_{i=1}^k R_{ji} = 1, Σ_{i=1}^j R_{ji} ≥ (1 − q)d, and Σ_{i=j}^k R_{ji} ≥ qd. Let 𝓡 denote the set of distributions that can be justified. Note that 𝓡 is nonempty, since x itself can be justified by the identity matrix. Furthermore, 𝓡 is closed and convex.

We now show that (1/k, ..., 1/k) ∈ 𝓡. Assume all inequalities in (6) and (7) hold, but that (1/k, ..., 1/k) ∉ 𝓡. Then, since f(t) = ‖t − (1/k, ..., 1/k)‖² is a strictly convex function, there is a unique r such that (1/k, ..., 1/k) ≠ r = arg min_{t∈𝓡} f(t). Let R be a matrix that justifies r.
Since r ≠ (1/k, ..., 1/k), there exists some r_i ≠ 1/k, and since r ∈ Δ^k, there must be some i for which r_i > 1/k and some i for which r_i < 1/k. Let i* = max{i : r_i ≠ 1/k} and i_* = min{i : r_i ≠ 1/k}.

Part A. We prove that r_{i*}, r_{i_*} < 1/k. Suppose instead that r_{i*} > 1/k (a similar argument establishes that r_{i_*} < 1/k). Then, for all i > i*, r_i = 1/k, and for some i < i*, r_i < 1/k. Let ĩ = max{i : r_i < 1/k}. We show that for all i > ĩ: (a) for any j such that j ≤ ĩ or j > i, either x_j = 0 or R_{ji} = 0; (b) either x_i = 0 or Σ_{g=i}^k R_{ig} = dq.

To see (a), fix an i′ > ĩ, and suppose x_{j′} > 0 and R_{j′i′} > 0 for some j′ ≤ ĩ or j′ > i′. Define the matrix R̃ by R̃_{j′ĩ} = R_{j′ĩ} + εR_{j′i′}, R̃_{j′i′} = (1 − ε)R_{j′i′}, and, for all (j, i) ∉ {(j′, i′), (j′, ĩ)}, R̃_{ji} = R_{ji}. We have

(i)   for j ≠ j′: Σ_{i=1}^j R̃_{ji} = Σ_{i=1}^j R_{ji} ≥ d(1 − q) and Σ_{i=j}^k R̃_{ji} = Σ_{i=j}^k R_{ji} ≥ dq;
      if j′ ≤ ĩ: Σ_{i=1}^{j′} R̃_{j′i} ≥ Σ_{i=1}^{j′} R_{j′i} ≥ d(1 − q) and Σ_{i=j′}^k R̃_{j′i} = Σ_{i=j′}^k R_{j′i} + εR_{j′i′} − εR_{j′i′} ≥ dq;
      if i′ < j′: Σ_{i=1}^{j′} R̃_{j′i} = Σ_{i=1}^{j′} R_{j′i} + εR_{j′i′} − εR_{j′i′} ≥ d(1 − q) and Σ_{i=j′}^k R̃_{j′i} = Σ_{i=j′}^k R_{j′i} ≥ dq.

For ε sufficiently small, define r̃ = xR̃. We have r̃_ĩ = r_ĩ + x_{j′}εR_{j′i′}, r̃_{i′} = r_{i′} − x_{j′}εR_{j′i′}, and, for i ∉ {i′, ĩ}, r̃_i = r_i. Therefore, Σ_{i=1}^k r̃_i = Σ_{i=1}^k r_i = 1. For small enough ε, 1 ≥ r̃_i ≥ 0 for all i, since x_{j′}R_{j′i′} > 0 implies that r_{i′} > 0. Hence r̃ ∈ Δ^k and, given (i), r̃ ∈ 𝓡.

We now show that f(r̃) < f(r):

(9)   f(r̃) − f(r) = (r_ĩ + x_{j′}εR_{j′i′} − 1/k)² − (r_ĩ − 1/k)² + (r_{i′} − x_{j′}εR_{j′i′} − 1/k)² − (r_{i′} − 1/k)²
      = (x_{j′}εR_{j′i′})² + 2x_{j′}εR_{j′i′}(r_ĩ − 1/k) + (x_{j′}εR_{j′i′})² − 2x_{j′}εR_{j′i′}(r_{i′} − 1/k)
      = 2(x_{j′}εR_{j′i′})(x_{j′}εR_{j′i′} + r_ĩ − 1/k − r_{i′} + 1/k)
      = 2(x_{j′}εR_{j′i′})[x_{j′}εR_{j′i′} + r_ĩ − r_{i′}].

Recall that r_ĩ < 1/k, and since i′ > ĩ, r_{i′} ≥ 1/k. Hence, for ε sufficiently small, [x_{j′}εR_{j′i′} + r_ĩ − r_{i′}] < 0. We have a contradiction, since, by definition, r = arg min_{t∈𝓡} f(t).

To see (b), suppose that for some j′ > ĩ, we have x_{j′} > 0 and Σ_{g=j′}^k R_{j′g} > dq. Pick some i′ ≥ j′ with R_{j′i′} > 0. For ε sufficiently small, define R̃ by R̃_{j′ĩ} = R_{j′ĩ} + εR_{j′i′}, R̃_{j′i′} = (1 − ε)R_{j′i′}, and, for all (j, i) ∉ {(j′, i′), (j′, ĩ)}, R̃_{ji} = R_{ji}. Define r̃ = xR̃. As before, r̃ ∈ 𝓡 and f(r̃) < f(r), a contradiction.

Given (a) and (b), and recalling the definition of ĩ, we have

(k − ĩ)/k < Σ_{t=ĩ+1}^k r_t = Σ_{t=ĩ+1}^k Σ_{j=1}^k x_j R_{jt}
= Σ_{t=ĩ+1}^k Σ_{j=ĩ+1}^k x_j R_{jt}   (by (a), j ≤ ĩ implies x_j = 0 or R_{jt} = 0)
= Σ_{j=ĩ+1}^k x_j Σ_{t=ĩ+1}^k R_{jt}
= Σ_{j=ĩ+1}^k x_j Σ_{t=j}^k R_{jt}   (by (a), j > t > ĩ ⇒ x_j = 0 or R_{jt} = 0)
= Σ_{j=ĩ+1}^k x_j dq   (by (b), either x_j = 0 or Σ_{t=j}^k R_{jt} = dq)
≤ (k − ĩ)/k   (by definition of d and assumption of the theorem).
Thus, we have a contradiction.

Part B. From Part A, there exists an i′, i_* < i′ < i*, such that r_{i′} > 1/k. Since r_{i′} = Σ_{j=1}^k x_j R_{ji′}, for some j* we must have x_{j*}R_{j*i′} > 0. We now show that this leads to a contradiction. Consider a small enough ε.

• Suppose first that for all j ≠ i′, x_j R_{ji′} = 0, so that j* = i′ and R_{i′i′} > 0. Then we know that R_{i′i′}x_{i′} = r_{i′} > 1/k ⇒ R_{i′i′} > 1/k. If Σ_{i=1}^{j*} R_{j*i} = (1 − q)d and Σ_{i=j*}^k R_{j*i} = qd, we get

d = Σ_{i=1}^{j*} R_{j*i} + Σ_{i=j*}^k R_{j*i} = 1 + R_{j*j*} = 1 + R_{i′i′} > 1 + 1/k > d,

which is a contradiction. Hence we must have Σ_{i=1}^{j*} R_{j*i} > (1 − q)d or Σ_{i=j*}^k R_{j*i} > qd. Suppose therefore that Σ_{i=1}^{j*} R_{j*i} > (1 − q)d (an analogous argument can be made if Σ_{i=j*}^k R_{j*i} > qd). Define R̃ by R̃_{j*i*} = R_{j*i*} + εR_{j*j*}, R̃_{j*j*} = (1 − ε)R_{j*j*}, and, for all (j, i) ∉ {(j*, j*), (j*, i*)}, R̃_{ji} = R_{ji}. We can then verify that for small enough ε, for all j, Σ_{i=1}^j R̃_{ji} ≥ (1 − q)d and Σ_{i=j}^k R̃_{ji} ≥ qd. Defining r̃ = xR̃, we obtain f(r̃) < f(r)—a contradiction.

• Suppose instead that j* ≠ i′. If j* < i′, define R̃ by R̃_{j*i′} = (1 − ε)R_{j*i′}, R̃_{j*i*} = R_{j*i*} + εR_{j*i′}, and R̃_{ji} = R_{ji} for all (j, i) ∉ {(j*, i′), (j*, i*)}. If j* > i′, define R̃ by R̃_{j*i_*} = R_{j*i_*} + εR_{j*i′}, R̃_{j*i′} = (1 − ε)R_{j*i′}, and R̃_{ji} = R_{ji} for all (j, i) ∉ {(j*, i′), (j*, i_*)}. In either case, for all j ≠ j*, R̃_{ji} = R_{ji}, so Σ_{i=1}^j R̃_{ji} ≥ d(1 − q) and Σ_{i=j}^k R̃_{ji} ≥ dq; for j = j*, if j* < i′, Σ_{i=1}^{j*} R̃_{j*i} = Σ_{i=1}^{j*} R_{j*i} ≥ (1 − q)d and Σ_{i=j*}^k R̃_{j*i} = Σ_{i=j*}^k R_{j*i} − εR_{j*i′} + εR_{j*i′} ≥ dq; if j* > i′, Σ_{i=1}^{j*} R̃_{j*i} = Σ_{i=1}^{j*} R_{j*i} − εR_{j*i′} + εR_{j*i′} ≥ (1 − q)d and Σ_{i=j*}^k R̃_{j*i} = Σ_{i=j*}^k R_{j*i} ≥ dq. For r̃ = xR̃, it is easy to show (as in (9)) that f(r̃) < f(r)—a contradiction.

Parts A and B show that (1/k, ..., 1/k) ∈ 𝓡. Let A be the matrix that justifies (1/k, ..., 1/k).

Step 2. Suppose that q, x, and A are as in Step 1. Given any Θ and p such that p(Θ_i) = 1/k for each i, let S = {1, 2, ..., k} and f_θ(j) = kx_j A_{ji} for θ ∈ Θ_i, i, j = 1, ..., k. To complete the proof of sufficiency, we show that (Θ, p, S, {f_θ}) q-rationalizes x for (Θ, p); that is, x_j = F(S_j^q).

(i) x_j = F(j), since F(j) = Σ_{i=1}^k (1/k)kx_j A_{ji} = x_j Σ_{i=1}^k A_{ji} = x_j.

(ii) j ∈ S_j^q, since p(Θ_i | j) = (1/k)kx_j A_{ji}/x_j = A_{ji}, so that Σ_{i=1}^j p(Θ_i | j) = Σ_{i=1}^j A_{ji} > 1 − q and Σ_{i=j}^k p(Θ_i | j) = Σ_{i=j}^k A_{ji} > q.
(iii) g ≠ j ⇒ g ∉ S_j^q. Suppose g > j. We have Σ_{i=g}^k A_{gi} > q ⇒ Σ_{i=1}^{g−1} A_{gi} < 1 − q ⇒ Σ_{i=1}^j A_{gi} < 1 − q, so that g ∉ S_j^q. Similarly, j > g implies g ∉ S_j^q.

(i), (ii), and (iii) establish that x_j = F(S_j^q).

Necessity. Suppose data x can be q-rationalized for some (Θ, p) and let (Θ, p, S, {f_θ}_{θ∈Θ}) be the rationalizing model. Fix an i and let S^i = ⋃_{g=i}^k S_g^q. For each signal s ∈ S^i, p(⋃_{g=i}^k Θ_g | s) ≥ q. If Σ_{j=i}^k x_j = 0, inequalities (6) (and (7)) hold trivially, so suppose Σ_{j=i}^k x_j > 0. Since x_j = F(S_j^q) for all j, we have F(S_j^q) > 0 for some i ≤ j ≤ k.

For any j, let T_j = {s ∈ S_j^q : p(⋃_{i=j}^k Θ_i | s) = q}. For any s ∈ T_1, we have 1 = p(Θ | s) = p(⋃_{i=1}^k Θ_i | s) = q < 1, so we must have T_1 = ∅. Assume now j ≥ 2. For any s ∈ T_j, p(⋃_{i=1}^{j−1} Θ_i | s) = 1 − q, so that s ∈ S_{j−1}^q. Thus, s ∈ T_j implies s ∈ S_j^q and s ∈ S_{j−1}^q. If F(T_j) > 0, then F(S_j^q ∪ S_{j−1}^q) < F(S_j^q) + F(S_{j−1}^q), so that

1 = F(S) ≤ F(⋃_{g≠j,j−1} S_g^q) + F(S_j^q ∪ S_{j−1}^q) < F(⋃_{g≠j,j−1} S_g^q) + F(S_j^q) + F(S_{j−1}^q) ≤ Σ_{g≠j,j−1} x_g + x_j + x_{j−1} = 1,

a contradiction. Thus, for all j, F(T_j) = 0, and for almost every s ∈ S^i, p(⋃_{g=i}^k Θ_g | s) > q. Hence

(k − i + 1)/k = p(⋃_{g=i}^k Θ_g) = ∫ p(⋃_{g=i}^k Θ_g | s) dF(s) ≥ ∫_{S^i} p(⋃_{g=i}^k Θ_g | s) dF(s) > q ∫_{S^i} dF(s) = q Σ_{j=i}^k x_j.

A similar argument applies for the inequalities in (7). Q.E.D.
PROOF OF COROLLARY 1: First note that for any x x ∈ Δk , if x f.o.s.d. x = x , then μ(x ) > μ(x). Let M be the set of median-rationalizable vectors. By Theorem 1, M is characterized by a set of linear inequalities, so M is a convex set. Suppose k is even. From Theorem 1, supx∈M μ(x) = k μ(0 0 k2 k2 ) = i=k/2+1 k2 i = 34 k + 12 and infx∈M μ(x) = μ( k2 k2 0 0) = 14 k + 2. Moreover, these bounds are not attained because neither (0 0 k2 k2 ) nor ( k2 k2 0 0) is in M. Since M is convex, for any
1612
J.-P. BENOÎT AND J. DUBRA
t ∈ ( 14 k + 2 34 k + 12 ), there exists an x ∈ M such that μ(x) = t. Similar reasoning applies to k odd. Q.E.D. The proof of Theorem 2 is constructive. For each x ∈ Δk x 0 that satisfies (4), we construct x = a1 x − 1−a ( k1 k1 ) for a arbitrarily close to 1. Then a k
x 0 and x satisfies the inequalities in (4). Then we find a z compax ∈ Δ rable to x and a nonnegative matrix P = (Pji )kji=1 such that for i j = 1 k, k j k k 1 1 1 1 1 j=1 Pji = k i=1 Pji = zj zj i=1 Pji ≥ 2 , and zj i=j Pji ≥ 2 . Moreover, we construct P so that it satisfies certain dominance relations. As in the proof of Theorem 1, the matrix P embodies the likelihood functions f through fθ (Sj ) = kPji for θ ∈ Θi and it yields a rationalizing model which almost rationalizes z; almost because for rationalization we would need strict inequalij k ties z1j i=1 Pji > 12 and z1j i=j Pji > 12 . Since z is comparable to x the vector y = az + (1 − a)( k1 k1 ) is comparable to x and is rationalized by the model j k yielded by Q = aP + (1 − a) k1 I (where y1j i=1 Qji > 12 and y1j i=j Qji > 12 ). From the dominance relations that P satisfies, the model has monotone signals. PROOF OF THEOREM 2: Sufficiency. For any matrix P, let P i denote the ith column and let Pj denote the jth row. x ∈ Δk x 0 that satisfies (4), there exists a comparable z CLAIM 1: For any k and a nonnegative matrix P = (Pji )kji=1 such that for i j = 1 k, j=1 Pji = j k k 1 i=1 Pji = zj z1j i=1 Pji ≥ 12 , and z1j i=j Pji ≥ 12 . Moreover, for all i j, kP i+1 k f.o.s.d. kP i and (10)
k k
Pjr Pj+1r ≤ zj zj+1 r=i+1 r=i+1
with strict inequality if
k r=i+1
Pj+1r > 0
We prove the claim for k even; the proof for k odd is similar. In Part A, we build rows h k of P; in Part B, we build rows 1 h − 1; in Part C, we verify that the matrix P satisfies the dominance relations. xj and define Pk by Pkk = zk /2 and Pki = Part A—k ≥ j ≥ h. Let zj = zk /2(k − 1) for i = 1 k − 1 We now build recursively Pk−1 through Ph . Whenever rows Pj+1 Pk have k j been defined, define the “slack” vector sj ∈ Rk by si = k1 − f =j+1 Pf i . Note that j j+1 si = si − Pji
APPARENT OVERCONFIDENCE
1613
k Suppose that for every r ≥ j + 1, (i) i=r Pri = z2r , (ii) for 1 ≤ i < r, Pri = zr , (iii) Pri ≥ 0 for all i and Pri is decreasing in i for i ≥ r, and (iv) sir−1 ≥ 0 2(r−1) r−1 for all i, sir−1 = sr−1 for i < r and sir−1 decreasing in i for i ≥ r − 1. Notice that for j = k − 1 (i)–(iv) are satisfied. We now build Pj in such a way that (i)–(iv) are satisfied for r = j. For i < j set Pji = zj /2(j − 1). We turn now to i ≥ j. Let ij = max{i ≥ k z j j j : 2j − f =i+1 sf ≤ si (i − j + 1)}. We first establish that ij is well defined. By the k zr induction hypothesis, for every r ≥ j + 1, i=j Pri = z2r + 2(r−1) (r − j) = z2r 2r−j−1 . r−1 We have (11)
k
i=j
k k
1 s = (k − j + 1) − Pri k i=j r=j+1 j i
k k
1 Pri = (k − j + 1) − k r=j+1 i=j k
zr 2r − j − 1 zj 1 (k − j + 1) − > k 2 r−1 2 r=j+1
=
k where the inequality holds since, by (4), k1 (k−j +1) > r=j z2r 2r−j−1 . Therefore, f −1 i = j satisfies the conditions in the definition of ij and ij is well defined. Define ⎧ k ⎪
zj ⎪ j ⎪ ⎪ − sf ⎪ ⎨ 2 f =ij +1 Pji = (12) for ij ≥ i ≥ j, ⎪ ⎪ ij − j + 1 ⎪ ⎪ ⎪ ⎩ j for i > ij . si Items (i) and (ii) are satisfied by construction. We now check (iii). Clearly Pji ≥ 0. Notice that Pji is constant in i for j ≤ i ≤ ij and decreasing in i for j i > ij since si is decreasing, so we only need to check that Pjij ≥ Pjij +1 But Pjij < Pjij +1 would imply k
zj j − sf 2 f =i +1 j
ij − j + 1
j
< sij +1
⇔
k
zj j j − s < sij +1 (ij − j + 2) 2 f =i +2 f j
which violates the definition of ij . j−1 j−1 To establish (iv), we first show si ≥ 0 for all i By definition of Pji , (a) si = j j−1 0 for all i > ij By definition of ij , Pjij ≤ sij so that (b) sij ≥ 0 Next, Pji = Pjij
1614
J.-P. BENOÎT AND J. DUBRA j
j−1
j−1
for ij ≥ i ≥ j and si decreasing in i establishes (c) si ≥ sij ≥ 0 for ij ≥ i ≥ j Consider now i < j. Since j ≥ h = 1 + k/2 we obtain Pjj ≥ zj /2(k − j + 1) ≥ j j j−1 zj /2(j − 1) = Pji for all i < j Then si = sj for all i < j, and sj ≥ 0 (established in (c)) imply (d) for all i < j: (13)
j−1
si
j−1
j−1
= sj−1 ≥ sj j
≥0
j−1
and si
j−1
j−1
= sj−1 > sj
j
≥0
if
j > h j−1
j−1
Next, notice that si = sj and Pji = Pjj−1 for all i < j establish si = sj−1 for j−1 i < j We finally show that si is decreasing in i for i ≥ j − 1 Equation (13) j−1 j−1 j−1 j−1 established that sj−1 ≥ sj so we only need to show that si ≥ si for any j i > i ≥ j. If Pji > Pji for some i > i ≥ j, then from (12), Pji = si and therefore j−1 j−1 j j si = 0 ≤ si . If Pji ≤ Pji then since si ≥ si j−1
si
j
j
j−1
= si − Pji ≥ si − Pji = si
This completes the proof of Part A. We now establish a property of P that will be used in Part B. Suppose that j for some j and i ≥ j we have Pji = 0. If Pji = si , (12) implies Pji = Pji = 0 for all j ≤ i ≤ ij Also, (iii) implies that for all i ≥ i Pji = 0 Then (i) implies j zj = 0, which is a contradiction. Therefore Pji = si = 0 From (iii) and (iv) we j j−1 j j−2 obtain Pji = si = 0 for all i ≥ i so that si = si − Pji = 0 Since si ≥ 0, we have Pj−1i = 0 Repeating the reasoning, we obtain (14)
Pji = 0
⇒
j
si = 0
⇒
j
P j i = si = 0
for all i ≥ i and all j ≤ j
xj For j < h, zj can be different from xj . Part B—j < h. For j ≥ h, zj = k We proceed by defining Pj and then setting zj = i=1 Pji Let Ph−1i = sih−1 for k 1 i ≥ h; for i ≤ h − 1, define Ph−1i = g=h sgh−1 /(h − 1). h−1 1 > 0 for i ≤ h − 1. We will establish sh−1 − We now show that sih−1 − Ph−1i h−1 h−1 1 1 1 Ph−1h−1 > 0 since Ph−1i = Ph−1h−1 and sh−1 = si for all i ≤ h − 1. Phh > Phh+1 h−1 h implies ih = h and Phh+1 = sh+1 , which ensures sh+1 = 0 In equation (11), we k j zj proved i=j si > 2 for all j, so from equation (12) and h = ih , we obtain k
i=h
sih >
zh 2
⇔
shh >
k
zh − sh = Phh 2 f =h+1 f
h−1 and therefore shh−1 = shh − Phh > 0 = sh+1 Hence Phh > Phh+1 implies shh−1 > h−1 h−1 h−1 sh+1 Also, Phh = Phh+1 implies sh > sh+1 because, by the strict inequality in
1615
APPARENT OVERCONFIDENCE
h−1 h . Since Phh ≥ Phh+1 we obtain shh−1 > sh+1 . Since sih−1 is decreas(13), shh > sh+1 ing in i, then
1 = Ph−1i
k
sgh−1 g=h
h−1
=
k
sgh−1
g=h
k−h+1
< shh−1
h−1 h−1 1 for all i ≤ h − 1 and therefore shh−1 > sh+1 ⇒ shh−1 − Ph−1i > sh+1 − shh−1 = 0 as was to be shown. h−1 2 1 2 2 = sh−1 − Ph−1h−1 > 0 and Ph−1j = Ph−1h−1 /(h − 2) for all j < Define Ph−1h−1 1 2 h − 1 and let Ph−1i = Ph−1i + Ph−1i for i ≤ h − 1 k k 2 Let zh−1 = i=1 Ph−1i = 2 i=h sih−1 + 2Ph−1h−1 . We have h−1
k
Ph−1i
i=1
j=h
=
zh−1
2 sjh−1 + 2Ph−1h−1
≥
k
2
2 sjh−1 + 2Ph−1h−1
1 2
j=h
and k
1 2 Ph−1h−1 + Ph−1h−1 +
Ph−1i
i=h−1
sih−1
i=h
=
zh−1
k
zh−1
1 ≥ 2
j
For j < h − 1 set Pji = 0 for i > j Pjj = sj and Pji = Pjj /(j − 1) for i < j, and k zj = i=1 Pji . Part C—Checking dominance relations. We first check inequality (10). Case 1. Pj and Pj+1 , j ≥ h (I) For i < j we have i
(15)
Pjr
r=1
zj
zj+1 zj i i i i 2(j − 1) 2j > = = = = zj 2(j − 1) 2j zj+1
(II) For i = j, since Pjj > Pjr = have i
(16)
j
Pjr
r=1
zj
=
Pjr
r=1
zj
zj 2(j−1)
i
j
r=1
zj+1
zj+1 2j
i
Pj+1r =
zj+1
and Pj+1j = Pj+1r =
zj j 1 2(j − 1) > > = zj 2
Pj+1r
r=1
for r < j we
Pj+1r
r=1
zj+1
1616
J.-P. BENOÎT AND J. DUBRA
(III.a) Pick i ≥ j + 1 and suppose Pj+1i+1 = 0. By (14), we have that for r ≥ i + 1 Pjr = 0 and therefore i
(17)
i
Pjr
r=1
zj
=1≥
Pj+1r
r=1
zj+1
(III.b) Pick i ≥ j + 1 and suppose Pj+1i+1 > 0 If i + 1 > ij+1 , then Pj+1i+1 = j+1 j si+1 , so that si+1 = 0. By (14), Pjr = 0 for all r ≥ i + 1, so that k k
Pjr Pj+1r = 0 < Pj+1i+1 ≤ z zj+1 j r=i+1 r=i+1
If i + 1 ≤ ij+1 , then since Pjr and Pj+1r are decreasing in r, we have the following ordering between distributions: the distribution 2(Pj+1j+1 Pj+1k )/zj+1 f.o.s.d. the uniform distribution on j + 1 to ij+1 ; the uniform distribution from j to ij+1 f.o.s.d. the distribution 2(Pjj Pjk )/zj Therefore, 2
k
Pj+1r
r=i+1
zj+1
ij+1 − i ij+1 − i > ≥ ≥ ij+1 − j ij+1 − j + 1
2
k
r=i+1
zj
Pjr
k Note that because Pj+1i is decreasing in i for i ≥ j + 1, then r=i+1 Pj+1r > 0 ⇔ Pj+1i+1 > 0. Therefore, (I), (II), (III.a), and (III.b) show that (10) holds and that the inequality is strict if Pj+1i+1 > 0. Case 2. Pj and Pj+1 , j = h − 1. (I) For i < j, recall from Part B that for all i < h − 1, k
1 2 + Ph−1i = Ph−1i = Ph−1i
srh−1
r=h
h−1
+
2 Ph−1h−1 h−2
k 2 Then letting a = r=h srh−1 and b = Ph−1h−1 , and recalling that zh−1 = 2a + 2b, we have that for i < h − 1,
(18)
a b + Ph−1i = h−1 h−2 zh−1 2a + 2b
1617
APPARENT OVERCONFIDENCE 2 > 0 we have Since, for i < h − 1, Phi = zh /2(h − 1) and b = Ph−1h−1 i
Ph−1r
r=1
zh−1
a a b b + + =ih−1 h−2 >ih−1 h−1 2a + 2b 2a + 2b i
i = = 2(h − 1)
Phr
r=1
zh
(II) For i = j, i
h−1
Ph−1r
r=1
zh−1
=
Ph−1r
r=1
zh−1
h−1
1 a + 2b > = = 2a + 2b 2
i
Phr
r=1
zh
=
Phr
f =1
zh
i P (III) Fix i > j If Pj+1i+1 = 0 repeat step (III.a) of Case 1 to show r=1 zjrj = i P 1 ≥ r=1 zj+1r If Pj+1i+1 > 0 repeat step (III.b) of Case 1 to show that j+1 k k P /zh−1 < f =i+1 Phr /zh for i > h − 1 h−1f f =i+1 Steps (I), (II), and (III) show that (10) holds, and that the inequality is strict k if r=i+1 Pj+1r > 0. Case 3. Pj and Pj+1 for j = h − 2. (I) For i < j recall equation (18) and that Ph−2r /zh−2 = 1/2(h − 3) for r ≤ i so that i
Ph−2r r=1
zh−2
a b i +
i Ph−1r i > >ih−1 h−2 = = 2(h − 3) 2(h − 2) 2a + 2b zh−1 r=1
(II) For i = j i
Ph−2r r=1
zh−2
= 1 > 1 − Ph−1h−1 ≥ 1 −
k i
Ph−1r Ph−1r = zh−1 zh−1 r=1 r=h−1
k Ph−2r (III) For i > j Ph−2i = 0 for all i ≥ h − 1 so r=i+1 zh−2 = 0 If k k k Ph−1r Ph−2r r=i+1 Ph−1r > 0, we have r=i+1 zh−1 > r=i+1 zh−2 Steps (I), (II), and (III) show that (10) holds, and that the inequality is strict k if r=i+1 Pj+1r > 0.
1618
J.-P. BENOÎT AND J. DUBRA z
g Case 4. Pj and Pj+1 for j < h − 2. These cases are trivial, since Pgi = 2(g−1) for zg i < g, Pgg = 2 , and Pgi = 0 for i > g for g = j j + 1 We now check that P i+1 f.o.s.d. P i . Suppose i ≥ h Since Pkk ≥ Pkc = Pkc for k k all c c < k, we have r=j Pri+1 ≥ r=j Pri for j = k. Fix then h − 1 ≤ j < k.
j
• If Pj i > Pj i+1 for any j ≤ j < k we know Pj i+1 = si+1 and therefore k j −1 si+1 = 0 and by (14), Pri+1 = 0 for all r ≤ j − 1 This implies r=j Pri+1 ≥ k k r=j Pri+1 = 1 ≥ r=j Pri . • If Pj i ≤ Pj i+1 for all j ≤ j < k, since Pkk ≥ Pkc = Pkc for all c c < k, k k we have r=j Pri+1 ≥ r=j Pri k k Fix j < h − 1. Since Pji = Pji+1 = 0 we have 1 = r=j Pri+1 ≥ r=j Pri . Suppose i = h − 1. For all j > h Pjh = Pjh−1 and for j = h, we have Phh ≥ k k Phh−1 so that for all j ≥ h, r=j Prh ≥ r=j Prh−1 Also, since Ph−1h = shh−1 and k k Pjh = 0 for j < h − 1, we have that for all j < h r=j Prh = k1 ≥ r=j Prh−1 Suppose i ≤ h − 2. For all j > i + 1 Pji+1 = Pji , and for j = i + 1, we have k k Pji+1 ≥ Pji , so that for all j ≥ i + 1, r=j Pri+1 ≥ r=j Pri Also, since Pi+1i+1 = k k i+1 si+1 , we have, for all j < i + 1 r=j Pri+1 = k1 ≥ r=j Pri This establishes Claim 1. Now take any vector x ∈ Δk x 0, that satisfies (4), and for the k × k identity matrix I, let J = k1 I and K = ( k1 k1 ) We find a y comparable to x that can be rationalized with monotone signals. Let x = a1 x − 1−a K for a arbitrarily close to 1. Then x ∈ Δk x 0, and x a satisfies the inequalities in (4). Find a z and P as in Claim 1. Let y = az + (1 − a)K and Q = aP + (1 − a)J Define the matrix A by Aj = k k k k Qj / i=1 Qji . Since i=1 Pji = zj and i=1 Jji = k1 we have that i=1 Qji = yj = azj + (1 − a) k1 and Aj = Qj /yj k Note that y is comparable to x, yA = ( k1 k1 ) and for all j, i=1 Aji = 1. For 1 ≤ j ≤ k, j
j
Aji =
i=1
j
(aPji + (1 − a)Jji )
Qji
i=1
yj
=
i=1
azj + (1 − a) j
=
azj 1 azj + (1 − a) k
Pji
1
zj
+
1 k j
1 (1 − a) k azj + (1 − a)
Jji
1
1 k
1 k
1619
APPARENT OVERCONFIDENCE j
azj
=
Pji
1
azj + (1 − a)
1 k
+
zj
(1 − a)
1 k
azj + (1 − a)
1 k
1 (1 − a) 1 1 k + ≥ > 12 1 2 azj + (1 − a) azj + (1 − a) k k azj
k Similarly, i=j Aji > yj /2. Given any Θ and p such that p(Θi ) = k1 for all i let S = {1 2 k} and fθ (j) = kAji yj for θ ∈ Θi , i j = 1 k. From Step 2 in the proof of Theorem 1, (Θ S f p) q-rationalizes y for q = 12 . We now verify that (Θ S f p) satisfies m.s.p. It is immediate that fθ f.o.s.d. fθ for θ > θ, since kP i+1 f.o.s.d. kP i for all i We need to show that for all i j < i i i k p( g=1 Θg | j) ≥ p( g=1 Θg | j + 1), which is true if and only if g=1 Ajg ≥ i r=1 Aj+1g k i i If r=i+1 Pj+1r = 0 we have r=1 Pj+1r = zj+1 and by (10), r=1 Pjr = zj . If k i r=i+1 Pj+1r = 0, we must also have i ≥ j + 1, since for i < j + 1 r=1 Pj+1r = zj+1 would imply k k
Pj+1r Pj+1r 1 0= ≥ ≥ z zj+1 2 j+1 r=i+1 r=j+1
We therefore have
i
Ajr =
r=1
i
Qjr r=1
=
=
yj
i
Pjr
azj
=
azj + (1 − a)
azj 1 azj + (1 − a) k
+
azj+1 1 azj+1 + (1 − a) k
1 k
r=1
(1 − a)
zj
1 k
1 azj + (1 − a) k 1 (1 − a) k +
+
azk + (1 − a)
=1
1 azj + (1 − a) k
i
1 (1 − a) k
=
i
r=1
Aj+1r
Jjr
r=1
1 k
1 k
1620
J.-P. BENOÎT AND J. DUBRA
i P i P Pj+1r > 0, then by (10), r=1 zjrj > r=1 zj+1r and for a sufficiently j+1 i i close to 1, r=1 Ajr > r=1 Aj+1r . This completes the proof of sufficiency for k > 4 For k = 4, suppose that x ∈ Δ4 x 0 satisfies (4). Then x3 + 43 x4 < 1. Assume w.l.o.g. that x3 + x4 ≥ x1 + x2 , and define the matrices P and P : ⎛ x1 ⎞ x1 +ε −ε 0 0 2 2 ⎜ ⎟ ⎜ x2 ⎟ x2 ⎜ ⎟ − ε + ε 0 0 ⎜ ⎟ 2 2 ⎜ ⎟ P =⎜ ⎟ x3 x3 1 − 2x1 − 2x2 1 − 2x4 ⎜ −ε + 2ε − ε⎟ ⎟ ⎜ 4 4 4 4 ⎠ ⎝ x4 − x1 − x2 x4 − x1 − x2 x1 + x2 x4 +ε − 2ε +ε 4 4 2 2 ⎞ ⎛ x1 x1 +ε −ε 0 0 2 2 ⎟ ⎜ x2 x2 ⎜ ⎟ −ε +ε 0 0 ⎜ ⎟ ⎜ ⎟ 2 2 P = ⎜ 1 − 2x − 2x ⎟ 1 − 2x1 − 2x2 1 − 2x4 1 − 2x4 1 2 ⎜ ⎟ −ε + 2ε − ε⎟ ⎜ ⎝ ⎠ 4 4 4 4 x4 x4 − 2ε +ε 0 ε 2 2 If
k
r=i+1
Given any Θ and p such that p(Θi ) = 14 , for all i let S = {1 2 4} fθ (j) = kPji , and fθ (j) = 4Pji for θ ∈ Θi , i j = 1 4. It is easily verified that for ε arbitrarily small, (θ S f p) median-rationalizes x if x4 > x1 + x2 , and (θ S f p) median-rationalizes x if x4 ≤ x1 + x2 . Furthermore, both (θ S f p) and (θ S f p) satisfy m.s.p. Necessity. Suppose that x can be median-rationalized with a model (Θ S f p) with monotone signals. We show that equation (4) holds; the argument for (5) is symmetric. Let S = {1 k} and fθ (j) = Sj d f θ (s). Then (Θ S f p) is a model with monotone signals which also median-rationalizes x. Let P be the k × k matrix defined by fθ (j) dp(θ) = F(j | Θi )p(Θi ) Pji = Θi
To show necessity, we first establish three facts about P and (Θ S f p). As in the proof of necessity of Theorem 1, for q = 12 we have that (i) for all j k such that xj > 0 p( i=j Θi | j) > 12 . Also, since (Θ S f p) rationalizes x, xj = F(Sj ) = F(j) and, therefore, (ii) for all i and all j such that xj > 0 p(Θi | j) = P F(j|Θi ) p(Θi ) = xjij F(j)
1621
APPARENT OVERCONFIDENCE
k
Since (Θ S f p) satisfies m.s.p., θ ∈ Θi , i ≤ i − 1. We have k
L ≡ inf
θ∈Θi−1
fθ (n) ≥ sup
n=j
k
θ ∈Θi n=j
n=j
fθ (n) ≥
k n=j
fθ (n) for θ ∈ Θi−1 and
fθ (n) ≡ U
so that for all j and all i i with i ≤ i − 1, k
Pni =
n=j
k
n=j
fθ (n) dp(θ )
Θi k
=
fθ (n) dp(θ ) ≤
Θi n=j
Θi
L dp(θ ) =
≤ Θi
L dp(θ) Θi−1
k
≤
U dp(θ )
fθ (n) dp(θ) =
Θi−1 n=j
k
Pni−1
n=j
k k Therefore, (iii) for all j and all i i with i ≤ i − 1 n=j Pni ≤ n=j Pni−1 . j, inequality (4) holds trivially. Let j be the largest j for which xj > 0. For i > Let (19)
Ci =
k k
Pjg
P j (i) =
g=i j=i
Since Ci ≤ (20)
k k g=i
j=1
k
Pjm
and
F i (j) =
m=i
Pjg =
k−i+1 k
k
m=j
, it suffices to show that
k
xj 2j − i − 1 Ci > 2 j−1 j=i
for i ≤ j. We proceed inductively. From (i) and (ii), Cj =
k
i= j
Pji =
k
p(Θi | j)xj >
i= j
which establishes (20) for i = j.
k xj xj 2j − j−1 = 2 2 j − 1 j=j
Pmi
1622
J.-P. BENOÎT AND J. DUBRA
Suppose that (20) holds for i = t ≤ j If xt−1 = 0, then Pt−1g = 0 for all g, and x P t−1 (t − 1) = 0 = t−1 . If x > 0 from (i) and (ii) we have t−1 2 k k
1
k
Pt−1m P t−1 (t − 1) = xt−1 xt−1 m=t−1 xt−1 2
From (iii), for all i ≤ t − 1 we have F t−1 (t) ≥ F i (t) t−1 F i (t) k k and therefore F t−1 (t) ≥ i =1 t−1 . Also, since i=1 Pji = i=1 p(Θi | j)xj = xj , we have Hence, P t−1 (t − 1) ≥
Ct−1 = F t−1 (t) + P t−1 (t − 1) + Ct k
≥
xi − Ct
i =t
t −1 k
+ P t−1 (t − 1) + Ct k
xi
xi xt−1 t −2 t −2 i =t + P t−1 (t − 1) + Ct ≥ + + Ct = t −1 t −1 t −1 2 t −1 i =t
k
k
xi 2i − t − 1 t − 2 xt−1 + + > t −1 2 2 i − 1 t −1 i =t xi
i =t
k k
xi 2i − t xt−1 xi 2i − t + = = 2 2 i − 1 i =t−1 2 i − 1 i =t
so that (20) holds for t − 1 as well. Hence, (20) holds for 1 j.
Q.E.D.
REFERENCES ALICKE, M. D., M. L. KLOTZ, D. L. BREITENBECHER, T. J. YURAK, AND D. S. VREDENBURG (1995): “Personal Contact, Individuation, and the Better-Than-Average Effect,” Journal of Personality and Social Psychology, 68, 804–825. [1591] BARBER, B., AND T. ODEAN (2001): “Boys Will Be Boys: Gender, Overconfidence, and Common Stock Investment,” Quarterly Journal of Economics, 116, 261–292. [1591] BEM, D. J. (1967): “Self-Perception Theory: An Alternative Interpretation of Cognitive Dissonance Phenomena,” Psychological Review, 74, 183–200. [1595] BÉNABOU, R., AND J. TIROLE (2002): “Self Confidence and Personal Motivation,” Quarterly Journal of Economics, 117, 871–915. [1593,1595]
APPARENT OVERCONFIDENCE
1623
BENOÎT, J. P., AND J. DUBRA (2009): “Overconfidence?” Paper 8879, Munich Personal RePEc Archive. Available at ssrn.com, abstract id 1088746. [1595,1598,1604] (2011): “Rationalizing Overconfident Data,” Unpublished Manuscript, Universidad de Montevideo. [1598,1602] BERNARDO, A., AND I. WELCH (2001): “On the Evolution of Overconfidence and Entrepreneurs,” Journal of Economics & Management Strategy, 10, 301–330. [1591] BIAIS, B., D. HILTON, K. MAZURIER, AND S. POUGET (2005): “Judgemental Overconfidence, Self-Monitoring, and Trading Performance in an Experimental Financial Market,” Review of Economic Studies, 72, 287–312. [1591] BROCAS, I., AND J. CARRILLO (2007): “Systematic Errors in Decision-Making,” Mimeo, University of Southern California. [1593] BURKS, S., J. P. CARPENTER, L. GOETTE, AND A. RUSTICHINI (2011): “Overconfidence Is a Social Signalling Bias,” Discussion Paper 4840, IZA. [1605] CAMERER, C. (1997): “Progress in Behavioral Game Theory,” Journal of Economic Perspectives, 11, 167–188. [1591] CAMERER, C., AND D. LOVALLO (1999): “Overconfidence and Excess Entry: An Experimental Approach’,” American Economic Review, 89, 306–318. [1591,1598] CHUANG, W., AND B. LEE (2006): “An Empirical Evaluation of the Overconfidence Hypothesis,” Journal of Banking & Finance, 30, 2489–2515. [1591] CLARK, J., AND L. FRIESEN (2009): “Overconfidence in Forecasts of Own Performance: An Experimental Study,” Economic Journal, 119 (534), 229–251. [1605] COOPER, P. (1990): “Elderly Drivers’ Views of Self and Driving in Relation to the Evidence of Accident Data,” Journal of Safety Research, 21, 103–113. [1603] DANIEL, K., D. HIRSHLEIFER, AND A. SUBRAHMANYAM (2001): “Overconfidence, Arbitrage, and Equilibrium Asset Pricing,” Journal of Finance, 56, 921–965. [1591] DE BONDT, W., AND R. H. THALER (1995): “Financial Decision-Making in Markets and Firms: A Behavioral Perspective’,” in Finance, Handbooks in Operations Research and Management Science, Vol. 9, ed. by R. A. Jarrow, V. Maksimovic, and W. T. Ziemba. Amsterdam: North Holland, 385–410. [1591] DOMINITZ, J. (1998): “Earnings Expectations, Revisions, and Realizations,” Review of Economics and Statistics, 80, 374–388. [1595] DUNNING, D., J. A. MEYEROWITZ, AND A. D. HOLZBERG (1989): “Ambiguity and SelfEvaluation: The Role of Idiosyncratic Trait Definitions in Self-Serving Assessments of Ability,” Journal of Personality and Social Psychology, 57, 1082–1090. [1595] FANG, H., AND G. MOSCARINI (2005): “Morale Hazard,” Journal of Monetary Economics, 52, 749–777. [1591] FESTINGER, L. (1954): “A Theory of Social Comparison Processes,” Human Relations, 7, 117–140. [1595] GARCIA, D., F. SANGIORGI, AND B. UROSEVIC (2007): “Overconfidence and Market Efficiency With Heterogeneous Agents,” Economic Theory, 30, 313–336. [1591] GREENWALD, A. (1980): “The Totalitarian Ego, Fabrication and Revision of Personal History,” American Psychologist, 35, 603–618. [1593] GRIECO, D., AND R. HOGARTH (2009): “Overconfidence in Absolute and Relative Performance: The Regression Hypothesis and Bayesian Updating,” Journal of Economic Psychology, 30, 756–771. [1602] GROEGER, J., AND G. E. GRANDE (1996): “Self-Preserving Assessments of Skill?” British Journal of Psychology, 87, 61–79. [1603] HOELZL, E., AND A. RUSTICHINI (2005): “Overconfident: Do You Put Your Money on It?” The Economic Journal, 115, 305–318. [1591,1595,1602] HOLLAND, C. (1993): “Self-Bias in Older Drivers’ Judgments of Accident Likelihood,” Accident Analysis and Prevention, 25, 431–441. [1603] KARNI, E. 
(2009): “A Mechanism for Eliciting Probabilities,” Econometrica, 77, 603–606. [1605]
1624
J.-P. BENOÎT AND J. DUBRA
˝ , B. (2006): “Ego Utility, Overconfidence, and Task Choice,” Journal of the European KOSZEGI Economic Association, 4, 673–707. [1591,1593] KRUGER, J. (1999): “Lake Wobegon Be Gone! The ‘Below-Average Effect’ and the Egocentric Nature of Comparative Ability Judgements,” Journal of Personality and Social Psychology, 77, 221–232. [1593,1601,1602] KRUGER, J., AND D. DUNNING (1999): “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments,” Journal of Personality and Social Psychology, 77, 1121–1134. [1593] KRUGER, J., P. WINDSCHITL, J. BURRUS, F. FESSEL, AND J. R. CHAMBERS (2008): “The Rational Side of Egocentrism in Social Comparisons,” Journal of Experimental Social Psychology, 44, 220–232. [1591] KYLE, A., AND F. A. WANG (1997): “Speculation Duopoly With Agreement to Disagree: Can Overconfidence Survive the Market Test?” Journal of Finance, 52, 2073–2090. [1591] LARRICK, R. P., K. A. BURSON, AND J. B. SOLL (2007): “Social Comparison and Confidence: When Thinking You’re Better Than Average Predicts Overconfidence (and When It Does Not),” Organizational Behavior and Human Decision Processes, 102, 76–94. [1592] MALMENDIER, U., AND G. TATE (2005): “CEO Overconfidence and Corporate Investment,” Journal of Finance, 60, 2661–2700. [1591] MAROTTOLI, R., AND E. RICHARDSON (1998): “Confidence in, and Self-Rating of, Driving Ability Among Older Drivers,” Accident Analysis and Prevention, 30, 331–336. [1603] MASSIE, D., AND K. CAMPBELL (1993): “Analysis of Accident Rates by Age, Gender, and Time of the Day, Based on the 1990 Nationwide Personal Transportation Survey,” Report UMTRI93-7, The University of Michigan Transportation Research Institute. [1593] MATHEWS, M., AND A. MORAN (1986): “Age Differences in Male Driver’s Perception of Accident Risk: The Role of Perceived Driving Ability,” Accident Analysis and Prevention, 18, 299–313. [1603] MENKHOFF, L., U. SCHMIDT, AND T. BROZYNSKI (2006): “The Impact of Experience on Risk Taking, Overconfidence, and Herding of Fund Managers: Complementary Survey Evidence,” European Economic Review, 50 (7), 1753–1766. [1591] MERKLE, C., AND M. WEBER (2011): “True Overconfidence: The Inability of Rational Information Processing to Account for Apparent Overconfidence,” Unpublished Manuscript, ssrn abstract id 1373675. [1605] MILGROM, P. R. (1981): “Good News and Bad News: Representation Theorems and Applications,” The Bell Journal of Economics, 12, 380–391. [1600] MOORE, D. (2007): “Not so Above Average After All: When People Believe They Are Worse Than Average and Its Impications for Theories of Bias in Social Comparison,” Organizational Behavior and Human Decision Processes, 102, 42–58. [1591,1601] MOORE, D. A., AND D. M. CAIN (2007): “Overconfidence and Underconfidence: When and Why People Underestimate (and Overestimate) the Competition,” Organizational Behavior and Human Decision Processes, 103, 197–213. [1602] MOORE, D. A., AND P. J. HEALY (2008): “The Trouble With Overconfidence,” Psychological Review, 115, 502–517. [1593,1595,1599,1605] MYERS, D. G. (1999): Social Psychology (Sixth Ed.). Boston: McGraw-Hill College. [1592] NOTH, M., AND M. WEBER (2003): “Information Aggregation With Random Ordering: Cascades and Overconfidence,” Economic Journal, 113, 166–189. [1591] PENG, L., AND W. XIONG (2006): “Investor Attention, Overconfidence and Category Learning,” Journal of Financial Economics, 80, 563–602. [1591] SANDRONI, A., AND F. 
SQUINTANI (2007): “Overconfidence, Insurance and Paternalism,” American Economic Review, 97 (5), 1994–2004. [1591] SVENSON, O. (1981): “Are We All Less Risky and More Skillful Than Our Fellow Drivers?” Acta Psychologica, 94, 143–148. [1594] VAN DEN STEEN, E. (2004): “Rational Overoptimism,” American Economic Review, 94, 1141–1151. [1591]
APPARENT OVERCONFIDENCE
1625
WALTON, D. (1999): “Examining the Self-Enhancement Bias: Professional Truck Drivers’ Perceptions of Speed, Safety, Skill and Consideration,” Transportation Research Part F, 2 (2), 91–113. [1603] WANG, A. (2001): “Overconfidence, Investor Sentiment, and Evolution,” Journal of Financial Intermediation, 10, 138–170. [1591] WEINSTEIN, N. (1980): “Unrealistic Optimism About Future Life Events,” Journal of Personality and Social Psychology, 39, 806–820. [1604] WHITT, W. (1980): “Uniform Conditional Stochastic Order,” Journal of Applied Probability, 17, 112–123. [1600] ZÁBOJNÍK, J. (2004): “A Model of Rational Bias in Self-Assessments,” Economic Theory, 23, 259–282. [1591,1593]
London Business School, Regent’s Park, London NW1 4SA, United Kingdom;
[email protected] and Dept. of Economics, Universidad de Montevideo, Prudencio de Pena 2440, Montevideo, Uruguay CP 11400;
[email protected]. Manuscript received May, 2009; final revision received February, 2011.
Econometrica, Vol. 79, No. 5 (September, 2011), 1627–1641
SIMPLE BOUNDS ON THE VALUE OF A REPUTATION BY OLIVIER GOSSNER1 We introduce entropy techniques to study the classical reputation model in which a long-run player faces a series of short-run players. The long-run player’s actions are possibly imperfectly observed. We derive explicit lower and upper bounds on the equilibrium payoffs to the long-run player. KEYWORDS: Reputation, repeated games, incomplete information, relative entropy.
1. INTRODUCTION THE BENEFITS OF BUILDING A REPUTATION can be measured by considering the equilibrium payoffs of repeated games with incomplete information on a player’s preferences. In the benchmark model of Fudenberg and Levine (1989, 1992), there is uncertainty on the type of a long-run player, who is facing a succession of short-run players. The long-run player can either be of a simple commitment type, whose preferences are to play a fixed strategy at every stage of the game, or of a normal type, whose preferences are fixed and who is acting strategically. We provide explicit lower and upper bounds on equilibrium payoffs to the long-run player of a normal type. For patient players, upper bounds converge to an optimistic measure of the long-run player’s Stackelberg payoff in the one-shot game, while lower bounds converge to a pessimistic measure of this Stackelberg payoff if commitment types have full support. The main technical contribution of this paper is to introduce a tool from information theory called relative entropy to measure the difference between the short-run player’s predictions on his next signal conditional on the longrun player’s type and unconditional on it. The major advantage of relative entropies is the chain rule property, which allows derivation of an explicit bound on the expected sum of these differences that depends only on the distribution of types. This bound is then used to derive bounds on the long-run player’s equilibrium payoffs. 2. MODEL AND MAIN RESULT 2.1. The Repeated Game We recall the classical reputation model (Fudenberg and Levine (1992), Mailath and Samuelson (2006)) in which a long-run player 1 interacts with a succession of short-lived players 2. In the stage interaction, the finite set of 1 The author is grateful to Drew Fudenberg for encouraging this project, and to Mehmet Ekmekci, Larry Samuelson, and Saturo Takahashi for insightful comments. Useful suggestions from an editor and two anonymous referees are gratefully acknowledged.
© 2011 The Econometric Society
DOI: 10.3982/ECTA9385
1628
OLIVIER GOSSNER
actions to player i is Ai . Given any finite set X, Δ(X) represents the set of probability distributions over X, so that the set of mixed strategies for player i is Si = Δ(Ai ). ˆ While ω The set of types to player 1 is Ω = {ω} ˜ ∪ Ω. ˜ is player 1’s normal ˆ type, Ω ⊆ S1 is a countable (possibly finite) set of commitment types. Each type ω ˆ ∈ Ωˆ is a simple type committed to playing the corresponding strategy ω ˆ ∈ S1 at each stage of the interaction. The initial distribution over player 1’s types is μ ∈ Δ(Ω) with full support. Actions in the stage game are imperfectly observed. Player i’s set of signals is a finite set Zi . When actions a = (a1 a2 ) are chosen in the stage game, the pair of private signals z = (z1 z2 ) ∈ Z := Z1 × Z2 is drawn according to the distribution q(·|a) ∈ Δ(Z). For α ∈ S1 × S2 , we let q(z|α) = Eα q(z|a), and q2 (·|α) denotes the marginal distribution of q(·|α) on Z2 . Each player i is privately informed of the component zi of z. Particular cases of the model include public monitoring (z1 = z2 almost surely for every a) and perfect monitoring (z1 = z2 = a almost surely for every a). Each instance of player 2 is informed of the sequence of signals received by previous players 2. We do not impose that player 2’s signal reveals player 2’s action, thus allowing for the interpretation that a signal to player 2 is a trace, such as feedback on a website, that player 2 leaves to all future instances of other players 2. The stage payoff function to player 1’s normal type is u1 , and player 2’s payoff function is u2 , where ui : A1 × A2 → R. The model includes the cases in which player i’s payoffs depend only on the chosen action and received private signal or on the private signal only. The set of private histories prior to stage t for player i is Hit−1 = Zit−1 , with the usual convention that Hi0 = {∅}. A strategy for player 1 is a map2 σ1 : Ω × H1t → S1 t≥0
ˆ since commitment types are rethat satisfies σ1 (ω ˆ h1t ) = ω ˆ for every ω ˆ ∈ Ω, quired to play the corresponding strategy in the stage game. The set of strategies for player 1 is denoted Σ1 . A strategy for player 2 at stage t is a map σ2t : H2t−1 → S2 We let Σ2t be the set of such strategies and let Σ2 = t≥1 Σ2t denote the set of sequences of such strategies. A history ht of length t is an element of Ω × (A1 × A2 × Z1 × Z2 )t describing player 1’s type and all actions and signals to the players up to stage t. A strategy 2
The model includes the case in which z1 reveals a1 , in which case player 1 has perfect recall.
BOUNDS ON THE VALUE OF REPUTATION
1629
profile σ = (σ1 σ2 ) ∈ Σ1 × Σ2 induces a probability distribution Pσ over the set of plays H∞ = Ω × (A1 × A2 × Z1 × Z2 )N endowed with the product σ-algebra. We let at = (a1t a2t ) represent the pair of actions chosen at stage t and let zt = (z1t z2t ) denote the pair of signals received at stage t. Given ω ∈ Ω, Pωσ (·) = Pσ (·|ω) represents the distribution over plays conditional on player 1 being of type ω. Player 1’s discount factor is 0 < δ < 1 and the expected discounted payoff to player 1 of a normal type is π1 (σ) = EPωσ (1 − δ) ˜
δt−1 u1 (at )
t≥1
2.2. ε-Entropy-Confirming Best Responses The relative entropy (see Cover and Thomas (1991) for more on information theory) between two probability distributions P and Q over a finite set X such that the support of Q contains that of P is d(P Q) = EP ln
P(x) P(x) = P(x) ln Q(x) x∈X Q(x)
The uniform norm distance between two probability measures over X is P − Q = max |P(x) − Q(x)| x∈X
Pinsker’s (1964) inequality implies that d(P Q) ≥ 2 P − Q 2 . Assume that player 1’s strategy in the stage game is α1 and player 2 plays a best response α2 to his belief α 1 on player 1’s strategy. Under (α1 α2 ), the distribution of signals to player 2 is q2 (·|α1 α2 ), while it is q2 (·|α 1 α2 ) under (α 1 α2 ). Player 2’s prediction error on his signals, measured by the relative entropy, is d(q2 (·|α1 α2 ) q2 (·|α 1 α2 )). We define ε-entropy-confirming best responses in a similar way as the εconfirming best responses of Fudenberg and Levine (1992). The main difference is that player 2’s prediction errors are measured using the relative entropy instead of the uniform norm. Another, albeit minor, difference is that we do not require player 2’s strategies to be non-weakly dominated. DEFINITION 1: α2 ∈ S2 is an ε-entropy-confirming best response to α1 ∈ S1 if there exists α 1 such that the following conditions hold: • α2 is a best response to α 1 . • d(q2 (·|α1 α2 ) q2 (·|α 1 α2 )) ≤ ε. d Bα1 (ε) denotes the set of ε-entropy-confirming best responses to α1 .
1630
OLIVIER GOSSNER
Pinsker’s inequality implies that every non-weakly dominated ε-entropyconfirming best response is a ε/2-confirming best response. The sets of 0entropy-confirming best responses and 0-confirming best responses coincide. When player 1’s strategy is α1 and player 2 plays an ε-entropy-confirming best response to it, the payoff to player 1 is bounded below by3 vα1 (ε) = min u1 (α1 α2 ) d (ε) α2 ∈Bα ¯ 1 The maximum payoff to player 1 if player 2 plays an ε-entropy-confirming best response to player 1’s strategy is ¯ v(ε) = max{u1 (α1 α2 ) α1 ∈ S1 α2 ∈ Bαd1 (ε)} The supremum of all convex functions below vα1 is denoted wα1 , and w¯ rep¯ ¯ Practical¯ computations resents the infimum of all concave functions above v. ¯ and w¯ are presented in Section 3. and estimations of the maps vα1 , wα1 , v, ¯ ¯ in Appendix A. Our main result below is proven THEOREM 1: If σ is a Nash equilibrium, then ¯ sup wωˆ (−(1 − δ) ln μ(ω)) ˆ ≤ π1 (σ) ≤ w(−(1 − δ) ln μ(ω)) ˜ ω ˆ ¯ Define the lower Stackelberg payoff as v∗ = supα1 ∈S1 vα1 (0). It is the least pay¯ player 2 plays ¯ a best response while off player 1 obtains when choosing α1 and predicting accurately the distribution of his own signals. ¯ The upper Stackelberg payoff is v¯ ∗ = v(0). It is the maximal payoff to player 1 when player 2 plays a best response while predicting accurately the distribution of his own signals. ¯ Let N(δ) and N(δ) be the infimum and the supremum, respectively, of Nash ¯ equilibrium payoffs to player 1 with discount factor δ. As a corollary to Theorem 1, we obtain the following version of Fudenberg and Levine’s (1992) results. The proof can be found in Appendix B. COROLLARY 1—Fudenberg and Levine (1992): ¯ ≤ v¯ ∗ lim sup N(δ) δ→1
If Ωˆ is dense in S1 , then lim inf N(δ) ≥ v∗ δ→1 ¯ ¯ 3
It is a consequence of Lemma 7 in Appendix B that the min and max below are well defined.
BOUNDS ON THE VALUE OF REPUTATION
1631
3. ESTIMATING THE VALUE OF REPUTATION FUNCTIONS The bounds provided by Theorem 1 rely on knowledge of the functions wωˆ ¯ ¯ We show how these functions can be estimated, and that the bounds and w. are sharper when monitoring is more accurate. 3.1. Lower Bounds We first consider the benchmark case of perfect monitoring; then we consider games with imperfect monitoring. 3.1.1. Games With Perfect Monitoring We assume here that player 2 has perfect monitoring in the stage game, that is, Z2 = A, and q2 (a|a) = 1 for every a. As in Fudenberg and Levine (1989, pp. 765–766), we deduce from the upper hemicontinuity of the best response correspondence of player 2 that there exists d ∗ such that Bωdˆ (d) = Bωdˆ (0) for every d < d ∗ . It follows that vωˆ (d) = vωˆ (0) for d < d ∗ and we bound vωˆ (d) by ¯ ¯ piecewise that vωˆ (d) is bounded u = mina u1 (a) for d ≥ d ∗ . It¯ is then verified ¯below by the linear map d u + (1 − d )vωˆ (0) and it follows that ¯ wωˆ (d) admits d∗ d∗ ¯ ¯ that ¯ the same lower bound. Theorem 1 shows ln μ(ω) ˆ ln μ(ω) ˆ N(δ) ≥ −(1 − δ) u + 1 + (1 − δ) (1) vωˆ (0) d∗ ¯ d∗ ¯ ¯ In case ω ˆ is a pure strategy, Fudenberg and Levine (1989) proved the result ˆ ln μ∗ ˆ ln μ∗ N(δ) ≥ 1 − δln μ(ω)/ (2) vωˆ (0) u + δln μ(ω)/ ¯ ¯ ¯ where μ∗ is such that for every α 1 , every best response to μ∗ ω ˆ + (1 − μ∗ )α 1 is also a best response to ω. ˆ To compare the bounds (1) and (2), we first relate the highest possible value ˆ is a pure strategy, of d ∗ with the smallest value for μ∗ . Observe that since ω ˆ < − ln(μ∗ ) if and only if α1 (ω) ˆ > μ∗ . Hence the relad(ω ˆ α1 ) = − ln α1 (ω) ˆ ln μ∗ ˆ < μ∗ , δln μ(ω)/ > tionship d ∗ = − ln(μ∗ ). In the interesting case where μ(ω) ∗ and we conclude, not too surprisingly, that (2) is tighter 1 + (1 − δ) ln μ(ω)/d ˆ than (1). The difference between the two bounds can be understood as coming from the fact that (2) is based on an exact study of player 2’s beliefs path per path, whereas (1) comes from a study of beliefs on average. Note, however, that both bounds converge to vωˆ (0) at the same rate when δ → 1, since ∗
ˆ ln μ δln μ(ω)/ −1 = 1 δ→1 ln μ(ω) ˆ (1 − δ) d∗
lim
Note that, unlike (1), (2) does not extend to the case in which ω ˆ is a mixed strategy. In that case, Fudenberg and Levine (1992, Theorem 3.1) (see also
1632
OLIVIER GOSSNER
Proposition 15.4.1 in Mailath and Samuelson (2006)) instead proved that for every ε, there exists K such that for every δ, N(δ) ≥ (1 − (1 − ε)δK )u + (1 − ε)δK vωˆ (0) ¯ ¯ ¯ To obtain the convergence of the lower bound to vωˆ (0) requires first taking ¯ ε small enough, thus fixing K, then taking δ close enough to 1. Since K gets larger and larger as ε → 0, (3) does not yield a rate of convergence of the lower bound to vωˆ (0). On the ¯other hand, the bound (1) is explicit and provides a rate of convergence of N(δ) as δ → 1 for mixed commitment types that is no slower than the ¯ by (2) for pure types. rate implied (3)
EXAMPLE 1: To illustrate the bound (1) when commitment types are in mixed strategies, consider a perfect monitoring version of the quality game of Fudenberg and Levine (1989, Example 4) (see also Mailath and Samuelson (2006, Example 15.4.2)) in which player 1 can produce a good of high (H) or low (L) quality, and player 2 can decide to buy the good (b) or not (n): H L
n 0 0 0 0
b 1 1 3 −1
We identify player 1’s strategies with the probability they assign to H and assume the existence of a type ω ˆ > 12 such that μ(ω) ˆ > 0. Let d ∗ = d(ω ˆ 12 ) = ln(2) + ω ˆ ln(ω) ˆ + (1 − ω) ˆ ln(1 − ω) ˆ > 0. The unique best response to α 1 > 12 d is b, so that Bωˆ (d) = {b} for every d < d ∗ . We have vωˆ (0) = 3 − 2ω ˆ and u = 0, ¯ ¯ and (1) becomes ln μ(ω) ˆ (3 − 2ω) ˆ N(δ) ≥ 1 + (1 − δ) d∗ ¯ Observe that the closer ω ˆ is to 12 , the closer 3 − 2ω ˆ is to the Stackelberg ω) ˆ is. This ilpayoff of 2, but the lower d ∗ is, hence the lower 1 + (1 − δ) ln μ( d∗ lustrates the trade-off according to which commitment types closer to 12 yield higher payoffs in the long run, but may require a longer time to establish the proper reputation. 3.1.2. Games With Identifiable Actions We say that player 2 identifies player 1’s actions whenever α1 = α 1 implies q (·|α1 α2 ) = q2 (·|α 1 α2 ) for every α2 . Let d0∗ be such that d(ω ˆ α1 ) < d0∗ imˆ By continuity plies that every best response to α1 is also a best response to ω. of the relative entropy and using the identifiability assumption, we can find d ∗ 2
BOUNDS ON THE VALUE OF REPUTATION
1633
ˆ α2 ) q2 (·|α1 α2 )) < d ∗ implies d(ω ˆ α1 ) < d0∗ . We now such that d(q2 (·|ω d d ∗ have Bωˆ (d) = Bωˆ (0) for every d < d and we deduce as in Section 3.1.1 that ln μ(ω) ˆ ln μ(ω) ˆ N(δ) ≥ −(1 − δ) u + 1 + (1 − δ) (4) vωˆ (0) d∗ ¯ d∗ ¯ ¯ Hence, similar lower bounds on player 1’s lowest equilibrium payoff, linear in (1 − δ)μ(ω), ˆ obtain under full monitoring and under the weaker assumption of identifiability. The formula remains the same, but the parameter d ∗ in (4) needs to be adjusted for the quality of monitoring. We show in Section 3.3 that poorer monitoring leads to weaker bounds. 3.1.3. Games Without Identifiable Actions In games with perfect monitoring, and more generally in games with identifiable actions, the map vωˆ is constant and equal to vωˆ (0) in a neighborhood ¯ 2 allow for statistical of 0. This is not the case¯when not all actions of player identification of player 1’s actions as in the following example. EXAMPLE 2: We take up the classical entry game from Kreps and Wilson (1982) and Milgrom and Roberts (1982). The stage game is an extensive-form game in which player 2 (the entrant) first decides whether to enter (E) a market or not (N), and following an entry of player 2, player 1 (the incumbent) decides whether to fight (F ) or accommodate (A). The payoff matrix is the entry game F A
E −1 b − 1 0 b
N a 0 a 0 ,
where a > 1 and 1 > b > 0. It is natural to assume that each player 2 observes the outcomes of the previous interactions, namely, whether each past player 2 decided to enter or not, and whether player 1 decided to fight or accommodate each entry of player 2. Thus, Z 2 = {N F A}. Consider the pure strategy commitment type ω ˆ = F and let q represent the probability that player 2’s mixed strategy assigns to E. Values q ∈ (0 1) are best responses only ˆ α2 ) = qF + (1 − q)N and to α 1 = bF + (1 − b)A. Then, for q ∈ (0 1), q2 (·|ω ˆ α2 ) q2 (·|α 1 α2 )) = q2 (·|α 1 α2 ) = qbF + q(1 − b)A + (1 − q)F , and d(q2 (·|ω ε d −q ln(b). It follows that BF (ε) = {q q ≤ − ln b } and ε ε = a + (a + 1) wF (ε) = vF (ε) = u1 F − ln b ln b ¯ ¯ Now using Theorem 1, we obtain the bound N(δ) ≥ a − ¯
(a + 1)(1 − δ) ln(μ(F)) ln b
1634
OLIVIER GOSSNER
In this example again, the rate of convergence of the lower bound on equilibrium payoffs is linear in (1 − δ)μ(ω). ˆ 3.2. Upper Bounds 3.2.1. Games With Perfect Monitoring Let U = maxa1 a 1 a2 |u1 (a1 a2 ) − u1 (a 1 a2 )|. The following lemma is proven in Appendix C. LEMMA 1: We have
∗
¯ ¯ v(ε) ≤ w(ε) ≤ v¯ + U
ε 2
The upper bound from Theorem 1 and Lemma 1 together imply (5)
U ¯ ˜ N(δ) ≤ v¯ ∗ + √ −(1 − δ) ln μ(ω) 2
Fudenberg and Levine (1992) showed that for every ε, there exists K such that for every δ, (6)
¯ N(δ) ≤ (1 − (1 − ε)δK ) max u1 (a) + (1 − ε)δK v¯ ωˆ (0) a
¯ Note that (6) provides an upper bound on N(δ) that converges to v¯ ∗ as δ → 1. However, unlike from (5), no explicit rate of convergence can be derived from (6). 3.2.2. Games With Identifiable Actions We take up games in which player 2 identifies player 1’s actions, as in Section 3.1.2. We rely on the following lemma, proven in Appendix C. LEMMA 2: If player 2 identifies player 1’s actions, then there exists a constant M such that for every α1 α 1 ∈ S1 and α2 ∈ S2 , d q2 (·|α1 α2 ) q2 (·|α 1 α2 ) ≥ M α1 − α 1 2 Following the same line of proof as Lemma 1, we obtain that for M defined by Lemma 2, √ ¯ ¯ v(ε) ≤ w(ε) ≤ v¯ ∗ + U Mε Hence, the upper bound from Theorem 1 implies ¯ ˜ N(δ) ≤ v¯ ∗ + U −M(1 − δ) ln μ(ω)
BOUNDS ON THE VALUE OF REPUTATION
1635
Thus, in games with identifiable actions as well as in games with full monitor¯ ing, we obtain an upper bound√on N(δ) that converges to v¯ ∗ when δ → 1 at a speed which is constant times 1 − δ. 3.3. The Effect of the Quality of Monitoring Consider an alternative monitoring structure q with set of signals S2 for player 2. Assume that monitoring of player 2 is poorer under q than under q in the sense that there exists Q : S2 → Δ(S2 ) such that q (s2 |a1 a2 ) =
s2 q (s2 |a1 a2 )Q(s2 )(s2 ) for every a1 , a2 . This is the case when player 2’s sig
nals under q can be reproduced, using Q, from player 2’s signals under q. Let (q ⊗ Q)2 (·|α1 α2 ) and (q ⊗ Q)2 (·|α 1 α2 ) be the probability distributions over S2 × S2 induced by q2 (·|α1 α2 ) and q2 (·|α 1 α2 ), respectively, and Q. The chain rule of relative entropies (see Appendix A.1) implies that for every α1 α 1 α2 , d q2 (·|α1 α2 ) q2 (·|α 1 α2 ) = d (q ⊗ Q)2 (·|α1 α2 ) (q ⊗ Q)2 (·|α 1 α2 ) ≤ d q 2 (·|α1 α2 ) q 2 (·|α 1 α2 ) where the inequality uses the fact that the marginals of (q ⊗ Q)2 (·|α1 α2 ) and (q⊗Q)2 (·|α 1 α2 ) over S2 are q 2 (·|α1 α2 ) and q 2 (·|α 1 α2 ). This implies that the sets of ε-entropy best responses under the monitoring structure q are subsets of their counterparts under the monitoring structure q . Let vα 1 (ε) and v¯ (ε) ¯ ¯ be the equivalents of vα1 (ε) and v(ε), defined under the monitoring structure ¯ the following theorem. q instead of q. We obtain THEOREM 2: If monitoring is poorer under q than under q, then for every α1 ¯ and ε, vα1 (ε) ≥ vα 1 (ε) and v(ε) ≤ v¯ (ε). ¯ ¯ The theorem shows that the bounds equilibrium payoffs following from Theorem 1 are tighter under more informative structures for player 2 than for less informative structures. APPENDIX A: PROOF OF THEOREM 1 A.1. The Chain Rule for Relative Entropies Our proof relies on the fundamental property of relative entropies called the chain rule. Let P and Q be two distributions over a finite product set X × Y with marginals PX and QX on X. Let PY (·|x) and QY (·|x) be P’s and Q’s conditional probabilities on Y given x ∈ X. According to the chain rule of relative entropies, d(P Q) = d(PX QX ) + EPX d PY (·|x) QY (·|x)
1636
OLIVIER GOSSNER
An implication of the chain rule is the following bound on the relative entropy between two distributions under some “grain of truth.” LEMMA 3: Assume Q = εP + (1 − ε)P for some ε > 0 and P . Then d(P Q) ≤ − ln ε PROOF: Consider the following experiment. (i) Draw a Bernoulli random variable X with probability of success ε. (ii) Conditional on X = 1, draw Y according to P; conditional on X = 0, draw X according to P . Let Q0 be the corresponding distribution of X × Y . Let P 0 be defined as Q0 , except that 0 the probability of success is 1. Denote by PX0 and PY0 (resp., QX and QY0 ) X 0 0 and Y ’s distributions under P (resp., Q ). Then according to the chain rule conditioning on X, 0 d(P 0 Q0 ) = d(PX0 QX ) = − ln(ε)
Using the chain rule conditioning on Y gives d(P 0 Q0 ) ≥ d(PY0 QY0 ) = d(P Q)
Q.E.D.
A.2. Bound on Player 2’s Prediction Errors To obtain bounds on equilibrium payoffs, we first bound the expected discounted distance between player 2’s prediction on his own signals and the true distribution of these signals given player 1’s type. For n ≥ 1, consider the marginal Pσ2n of Pσ over the sequences H2n of signals 2n be the marginal of Pωσ over H2n . of player 2, and for ω ∈ Ω, let Pωσ LEMMA 4: For ω ∈ Ω, 2n d(Pωσ Pσ2n ) ≤ − ln μ(ω) 2n + (1 − μ(ω))P , where P is Pσ2n PROOF: Decompose Pσ2n = μ(ω)Pωσ conditioned on player 1 not being of type ω. The result follows from Lemma 3. Q.E.D.
Under Pσ , player 2’s beliefs on the next stage’s signals following h2t are given by p2σ (h2t )(z2t+1 ) = Pσ (z2t+1 |h2t ) We compare p2σ (h2t ) with player 2’s belief if player 2 knew that player 1 is of type ω, given by p2ωσ (h2t )(z2t+1 ) = Pωσ (z2t+1 |h2t )
BOUNDS ON THE VALUE OF REPUTATION
1637
The expected discounted sum of relative entropies between these two predictions is δ dωσ = (1 − δ) δt−1 EPωσ d(p2ωσ (h2t ) p2σ (h2t )) t≥1
Lemma 5 provides an upper bound on the expected total discounted prediction error of player 2. LEMMA 5: For every ω ∈ Ω, δ dωσ ≤ −(1 − δ) ln μ(ω)
PROOF: From an iterated application of the chain rule, 2n d(Pωσ Pσ2n ) =
n
EPωσ d(p2ωσ (h2t ) p2σ (h2t ))
t=1
Using the decomposition of the discounted average as a convex combination of arithmetic averages (see, e.g., equation (1) in Lehrer and Sorin (1992)) and Lemma 4, we get δ dωσ = (1 − δ)2
n≥1
= (1 − δ)2
δn−1
n
EPωσ d(p2ωσ (h2t ) p2σ (h2t ))
t=1 2n δn−1 d(Pωσ Pσ2n )
n≥1
≤ (1 − δ)2
δn−1 (− ln μ(ω))
n≥1
= −(1 − δ) ln μ(ω)
Q.E.D.
A.3. Proof of the Lower Bounds on Equilibrium Payoffs Consider a commitment type ω ˆ ∈ Ωˆ and let σ1ωˆ ∈ Σ1 be player 1’s strategy in which the normal type follows ω, ˆ given by σ1ωˆ (ω ˜ h1t ) = ω ˆ for every h1t . The next lemma provides a lower bound on the payoff to player 1 of normal type playing σ1ωˆ ∈ Σ1 and facing an equilibrium strategy of player 2. It implies the lower bound of equilibrium payoffs of Theorem 1. ˆ LEMMA 6: If (σ1 σ2 ) is a Nash equilibrium, then for every ω ˆ ∈ Ω, π1 (σ1ωˆ σ2 ) ≥ wωˆ (−(1 − δ) ln μ(ω)) ˆ ¯
1638
OLIVIER GOSSNER
PROOF: Let σ = (σ1ωˆ σ2 ). Note that under σ , and conditional on h2t ∈ ˆ σ2 (h2t )). Since H2t , the expected payoff to player 1 at stage t + 1 is u1 (ω 2 ˆ σ2 (h2t ) is a d(p2ωσ ˆ (h2t ) pσ (h2t ))-entropy-confirming best response to ω, 2 ˆ σ2 (h2t )) ≥ vωˆ (d(p2ωσ (h ) p (h ))). then u1 (ω 2t 2t σ ˆ ¯ 1’s expected discounted payoff. Using Pωσ Now we bound player ˜ = Pωσ ˆ , then vωˆ ≥ wωˆ , and applying Jensen’s inequality to the convex map wωˆ , we ob¯ ¯ tain ¯ δt−1 EPωσ u (ω ˆ σ2 (h2t )) π1 (σ ) = (1 − δ) ˜ 1 t≥1
2 δt−1 EPωσ vωˆ d(p2ωσ ˆ (h2t ) pσ (h2t )) ˆ ¯ t≥1 2 ≥ (1 − δ) δt−1 EPωσ wωˆ d(p2ωσ ˆ (h2t ) pσ (h2t )) ˆ ¯ t≥1 2 2 ≥ wωˆ (1 − δ) δt−1 EPωσ d(p (h ) p (h )) 2t 2t ωσ ˆ σ ˆ ¯ t≥1
≥ (1 − δ)
δ = wωˆ (dωσ ˆ ) ¯ and the result follows from Lemma 5, since wωˆ is nonincreasing. ¯
Q.E.D.
A.4. Proof of the Upper Bounds on Equilibrium Payoffs Conditional on player 1 being of type ω ˜ and on history h2t of player 2, the average strategy of player 1 at stage t + 1 is τ1 (h2t ) = Pωσ ˜ h1t ) ˜ (h1t |h2t )σ1 (ω h1t 2 Since σ2 (h2t ) is a d(p2ωσ ˜ (h2t ) pσ (h2t ))-entropy-confirming best response to ˜ at stage t + 1 τ1 (h2t ), the payoff u1 (τ1 (h2t ) σ2 (h2t )) to player 1 of type ω 2 2 ¯ (h ) p (h )). conditional on h2t is no more than v(d(p 2t 2t ωσ ˜ σ δ ¯ ωσ As in the proof of Lemma 6, we deduce that π1 (σ) ≤ w(d ˜ ), and the result follows from Lemma 5.
APPENDIX B: PROOF OF COROLLARY 1 LEMMA 7: The map (α1 ε) → Bαd1 (ε) is upper hemicontinuous. PROOF: Consider sequences (α1n )n → α1 , (εn )n → ε, and (α2n )n → α2 with α2n ∈ Bαd1 (εn ). Let α 1n be such that α2n is a best response to it and d q2 (·|α1n α2n ) q2 (·|α 1n α2n ) ≤ εn
BOUNDS ON THE VALUE OF REPUTATION
1639
Extract a subsequence (α 1m(n) )n of (α 1n )n converging to some α 1 . From the lower semicontinuity of d(· ·), d q2 (·|α1 α2 ) q2 (·|α 1 α2 ) ≤ ε and the upper hemicontinuity of the best-response correspondence implies Q.E.D. that α2 is a best response to α 1 . Hence, α2 ∈ Bαd1 (ε). LEMMA 8: (i) vα1 and wα1 for every α1 and supα1 ∈Ωˆ wα1 are nonincreasing, ¯ lower semicontinuous, and ¯continuous at 0, and wα1 (0) =¯vα1 (0). ¯ ¯ w¯ are nondecreasing, upper semicontinuous, (ii) v, and¯ continuous at 0, and ¯ ¯ w(0) = v(0). PROOF: We prove (i). The correspondence Bαd1 (·) being monotone and upper hemicontinuous, vα1 is nonincreasing and lower semicontinuous. These properties carry from v¯ α1 to wα1 for every α1 and to supα1 ∈Ωˆ wα1 , and they imply ¯ ¯ continuity at 0 for all these maps. wα1 (0) = vα1 (0) is implied by the continuity ¯ Q.E.D. at 0 of vα1 . Point (ii) is obtained by ¯similar arguments. ¯ ¯ ¯ PROOF OF COROLLARY 1: From Theorem 1, lim supδ→1 N(δ) ≤ w(ε) for ev∗ ¯ ¯ ¯ = v(0) = v¯ . ery ε > 0, hence lim supδ→1 N(δ) ≤ w(0) From Theorem 1, lim infδ→1 N(δ) ≥ supα1 ∈Ωˆ wα1 (ε) for ε > 0, hence ¯ hemicontinuity of α → lim infδ→1 N(δ) ≥ supα1 ∈Ωˆ wα1 (0) =¯ supα1 ∈Ωˆ vα1 (0). By 1 ¯ ¯ ¯ d ˆ Bα1 (0), α1 → vα1 (0) is lower semicontinuous, and since Ω is dense in S1 , ∗ ¯ sup Q.E.D. supα1 ∈Ωˆ vα1 (0) = α1 ∈S1 vα1 (0) = v . ¯ ¯ ¯ APPENDIX C: PROOFS FROM SECTION 3 PROOF OF LEMMA 1: Since monitoring is perfect, α2 ∈ Bαd1 (ε) implies that α2 is a best response to α 1 ∈ S1 such that d(α1 α 1 ) ≤ ε. Hence ¯ v(ε) = ≤
max
π(α1 α2 )
max
π(α 1 α2 )
d (ε) α1 ∈S1 α2 ∈Bα 1
α 1 ∈S1 α2 ∈Bd (0) α
+
1
max
α2 ∈S2 d(α1 α 1 )≤ε
|π1 (α1 α2 ) − π1 (α 1 α2 )|
¯ Note that maxα ∈S1 α2 ∈Bd (0) π(α 1 α2 ) = v(0) = v¯ ∗ , while Pinsker’s inequality im1
α
plies max
α2 ∈S2 d(α1 α 1 )≤ε
≤
1
|π1 (α1 α2 ) − π1 (α 1 α2 )|
max √
α2 ∈S2 α1 −α 1 ≤
|π1 (α1 α2 ) − π1 (α 1 α2 )| ε/2
1640
OLIVIER GOSSNER
≤
max√
α1 −α 1 ≤
≤U
U α1 − α 1 ε/2
ε 2
Since the map ε → v¯ ∗ + U ¯ w.
ε 2
¯ it also lies above is concave and lies above v, Q.E.D.
PROOF OF LEMMA 2: From Pinsker’s inequality, 2 d q2 (·|α1 α2 ) q2 (·|α 1 α2 ) ≥ 2 q2 (·|α1 α2 ) − q2 (·|α 1 α2 ) To complete the proof, it is enough to show the existence of M such that for every α1 α 1 α2 , 2 q (·|α1 α2 ) − q2 (·|α α2 ) ≥ M α1 − α 1 1 which can be rewritten as
α2 (a2 )(α1 (a1 ) − α1 (a1 ))q(s2 |a1 a2 )
s2
a1 a2
≥ M
|α1 (a1 ) − α 1 (a1 )|
a1
Let B = β1 : S2 → R β(a1 ) = 0 |β(a1 )| = 1 a1
a1
The identifiability assumption shows that
α (a ) β (a )q(s |a a ) 2 2 1 1 2 1 2 >0
s2
a1 a2
a1
for β1 ∈ B, α2 ∈ S2 . Since B × A1 is compact, this implies the existence of M
such that
α2 (a2 ) β1 (a1 )q(s2 |a1 a2 )
> M
s2
a1 a2
a1
for every β2 ∈ B, α2 ∈ S2 , hence the result.
Q.E.D.
BOUNDS ON THE VALUE OF REPUTATION
1641
REFERENCES COVER, T. M., AND J. A. THOMAS (1991): Elements of Information Theory. Wiley Series in Telecommunications. New York: Wiley. [1629] FUDENBERG, D., AND D. K. LEVINE (1989): “Reputation and Equilibrium Selection in Games With a Patient Player,” Econometrica, 57, 759–778. [1627,1631,1632] (1992): “Maintaining a Reputation When Strategies Are Imperfectly Observed,” Review of Economic Studies, 59, 561–579. [1627,1629-1631,1634] KREPS, D., AND R. WILSON (1982): “Reputation and Imperfect Information,” Journal of Economic Theory, 27, 253–279. [1633] LEHRER, E., AND S. SORIN (1992): “A Uniform Tauberian Theorem in Dynamic Programming,” Mathematics of Operations Research, 17, 303–307. [1637] MAILATH, G., AND L. SAMUELSON (2006): Repeated Games and Reputations: Long-Run Relationships. New York: Oxford University Press. [1627,1632] MILGROM, P., AND J. ROBERTS (1982): “Predation, Reputation and Entry Deterrence,” Journal of Economic Theory, 27, 280–312. [1633] PINSKER, M. (1964): Information and Information Stability of Random Variables and Processes. Holden-Day Series in Time Series Analysis. San Francisco: Holden Day. [1629]
Paris School of Economics, 48 Boulevard Jourdan, 75014 Paris, France and London School of Economics, London, U.K.;
[email protected]. Manuscript received June, 2010; final revision received December, 2010.
Econometrica, Vol. 79, No. 5 (September, 2011), 1643–1664
GAMES WITH DISCONTINUOUS PAYOFFS: A STRENGTHENING OF RENY’S EXISTENCE THEOREM BY ANDREW MCLENNAN, PAULO K. MONTEIRO, AND RABEE TOURKY1 We provide a pure Nash equilibrium existence theorem for games with discontinuous payoffs whose hypotheses are in a number of ways weaker than those of the theorem of Reny (1999). In comparison with Reny’s argument, our proof is brief. Our result subsumes a prior existence result of Nishimura and Friedman (1981) that is not covered by his theorem. We use the main result to prove the existence of pure Nash equilibrium in a class of finite games in which agents’ pure strategies are subsets of a given set, and in turn use this to prove the existence of stable configurations for games, similar to those used by Schelling (1971, 1972) to study residential segregation, in which agents choose locations. KEYWORDS: Noncooperative games, discontinuous payoffs, pure Nash equilibrium, existence of equilibrium, better reply security, multiple restrictional security, diagnosable games, residential segregation.
1. INTRODUCTION MANY IMPORTANT AND FAMOUS GAMES in economics (e.g., the Hotelling location game, Bertrand competition, Cournot competition with fixed costs, and various auction models) have discontinuous payoffs, and consequently do not satisfy the hypotheses of Nash’s existence proof or its infinite dimensional generalizations, but nonetheless have at least one pure Nash equilibrium. Using an argument that is quite ingenious and involved, Reny (1999) established a result that subsumes earlier equilibrium existence results covering many such examples. His theorem’s hypotheses are simple and weak, and in many cases easy to verify. The result has been applied in novel settings many times since then. (See for example Monteiro and Page (2008).) Largely in response to his work, a number of papers on discontinuous games have appeared recently (Carmona (2005, 2009), Bagh and Jofre (2006), Bich (2006, 2009), Monteiro and Page (2007), Barelli and Soza (2009), Tian (2009), Carbonell-Nicolau (2010, 2011), Prokopovych (2010, 2011), de Castro (2011)). Of these, Barelli and Soza (2009) 1 We have benefitted from useful conversations with Paulo Barelli, Marcus Berliant, Philippe Bich, Priscilla Man, Claudio Mezzetti, Stuart McDonald, Rohan Pitchford, Pavlo Prokopovych, and Phil Reny. We are also grateful for useful comments and suggestions from Luciano de Castro, Nikolai Kukushkin, the editor, and anonymous referees. Comments of seminar audiences at the University of Chicago, the University of Illinois Urbana–Champagne, Kyoto University, the University of Minnesota, Université de Paris I, the University of New South Wales, the 2009 European Workshop on General Equilibrium Theory, the 2009 NSF/NBER/CEME Conference on General Equilibrium and Mathematical Economics, the Tenth SAET Conference, and the 2nd UECE—Lisbon Meetings are gratefully acknowledged. McLennan’s work was funded in part by Australian Research Council Grant DP0773324, Monteiro’s work was funded in part by CNPq Grant 301363/2010-2, and Tourky’s work was funded in part by Australian Research Council Grant DP1093105.
© 2011 The Econometric Society
DOI: 10.3982/ECTA8949
1644
A. MCLENNAN, P. K. MONTEIRO, AND R. TOURKY
deserves special mention because it adopts many techniques from an earlier version of this paper. Briefly, a key idea in Reny (1999) and here is “securing a payoff” at a strategy profile for the other players by playing a pure strategy that insures that payoff when the profile of other players’ strategies is near the given profile. Barelli and Soza showed that results similar to our existence theorem continue to hold if one allows the securing strategy to be a continuous function of the profile of other players’ strategies. As Reny explained, the main result of Nishimura and Friedman (1981) and results concerning the existence of Cournot equilibria (Szidarovzky and Yakowitz (1977) and the independent (see Roberts and Sonnenschein (1977)) work of McManus (1962, 1964) and Roberts and Sonnenschein (1976)) seemingly have a different character and are not obvious consequences of his result. Here we provide a generalization of Reny’s theorem that easily implies the Nishimura and Friedman result. The refinements of Reny’s theorem introduced in Section 3 are of two sorts: (a) we weaken the condition of better reply security at a point by allowing several securing strategies to be used; (b) we allow subcorrespondences of the better reply correspondence to be specified by restricting the agents to subsets of their sets of pure strategies. In Section 5 we present an application of (a) and (b). We prove the existence of pure Nash equilibria for diagnosable games, which are finite games in which each agent chooses from a collection of subsets of a given finite set, and of course payoffs are restricted in a certain way. Section 6 uses this result to establish the existence of pure Nash equilibria for a model in which agents choose locations that is similar in spirit to the games that Schelling (1971, 1972) used to study residential segregation. Whereas Schelling used simulation to study the qualitative properties of myopic (in the sense that future relocations are unanticipated) adjustment dynamics, our result demonstrates the existence of stable configurations, which may be viewed as the possible rest points for the sorts of processes he studied. The remainder of the paper has the following organization. The next section reviews Reny’s theorem and states a result whose hypotheses are less restrictive than Reny’s, but more restrictive than our main result. We also explain how this result implies the Nishimura–Friedman existence theorem. Section 3 states our main result. After a required combinatoric concept has been introduced in Section 4, Section 5 presents the results concerning diagnosable games. Section 6 presents the location model and uses a particular instance of it to demonstrate that other methods of proving the existence of pure Nash equilibria are inapplicable. Section 7 presents the proof of Theorem 3.4. Reny also introduced two concepts, payoff security and reciprocal upper semicontinuity, which together imply the hypotheses of his main result and which are often easy to verify in applications. Section 8 explains the extension of those concepts to our setting. Section 9 contains some concluding remarks and thoughts about possible extensions.
GAMES WITH DISCONTINUOUS PAYOFFS
1645
2. RENY AND NISHIMURA–FRIEDMAN Our system of notation is largely taken from Reny (1999). There is a fixed normal form game G = (X1 XN u1 uN ) where, for each i = 1 N, the ith player’s strategy set Xi is a nonempty compact convex subset of a topological vector space, and the ith player’s payoff N function ui is a function from the set of strategy profiles X = i=1 Xi to R. We adopt the usual notation for “all players other than i.” Let X−i = j=i Xj . If x ∈ X is given, x−i denotes the projection of x on X−i . For given x−i ∈ X−i (or x ∈ X) and yi ∈ Xi , we write (yi x−i ) for the strategy profile z ∈ X satisfying zi = yi and zj = xj for all j = i. We endow X and each X−i with their product topologies. These conventions will also apply to other normal form games introduced later. We now review Reny’s theorem. A Nash equilibrium of G is a point x∗ ∈ X satisfying ui (x∗ ) ≥ ui (xi x∗−i ) for all i and all xi ∈ Xi . (This would usually be described as a “pure Nash equilibrium,” but we never refer to mixed equilibria, so we omit the qualifier “pure.”) For each player i, let Bi : X × R → Xi and Ci : X × R → Xi be the set-valued mappings Bi (x αi ) = {yi ∈ Xi : ui (yi x−i ) ≥ αi }
and
Ci (x αi ) = con Bi (x αi ) (Here and below con Z is the convex hull of the set Z.) Then ui is quasiconcave if and only if Ci (x αi ) = Bi (x αi ) for all x ∈ X and αi ∈ R. We say that G is quasiconcave if each ui is quasiconcave. DEFINITION 2.1: A player i can secure a payoff αi ∈ R on Z ⊂ X if there is some yi ∈ Xi such that yi ∈ Bi (z αi ) for all z ∈ Z. We say that i can secure αi at x ∈ X if she can secure αi on some neighborhood of x. Throughout we assume that each ui is bounded. Let u = (u1 uN ) : X → RN . For x ∈ X, let A(x) be the set of α ∈ RN such that (x α) is in the closure of the graph of u. Since u is bounded, each A(x) is compact. DEFINITION 2.2: The game G is better reply secure at x ∈ X if, for any α ∈ A(x), there is some player i and ε > 0 such that player i can secure αi + ε at x. The game G is better reply secure if it is better reply secure at every strategy profile that is not a Nash equilibrium. THEOREM 2.3 —Reny (1999): If G is quasiconcave and better reply secure, then it has a Nash equilibrium.
1646
A. MCLENNAN, P. K. MONTEIRO, AND R. TOURKY
To better understand Reny’s result in the context of this paper, we reformulate better reply security. DEFINITION 2.4: The game is B-secure on Z ⊂ X if there is α ∈ RN and ε > 0 such that the following conditions hold: (i) Every player i can secure αi + ε on Z. (ii) For each z ∈ Z, there exists some player i with ui (z) < αi − ε, that is, zi ∈ / Bi (z αi − ε). The game is B-secure at x ∈ X if it is B-secure on some neighborhood of x. LEMMA 2.5: For each x ∈ X, the game is better reply secure at x if and only if it is B-secure at x. PROOF: First assume that the game is B-secure at x, so that it is B-secure on a neighborhood U of x. Let α and ε be as in the definition. Each α ∈ A(x) is the limit of values of u along some sequence or net converging to x, so (ii) implies that there is some i with αi ≤ αi − ε. By (i), i can secure αi + ε at x, which shows that the game is better reply secure at x. Now assume that the game is better reply secure at x. Let τ be a neighborhood base of x. For each i and αi , there is an ε > 0 such that αi + ε can be secured by i at x if and only if αi < βi , where βi = sup sup inf ui (yi z−i ) yi ∈Xi U∈τ z∈U
Since the game is better reply secure at x, for each α ∈ A(x) there is some player i such that βi > αi , which implies that the inequality βi > αi + ε holds for some ε > 0 and all α in some neighborhood of α . Since A(x) is compact, it is covered by finitely many such neighborhoods, so we may choose ε > 0 such that for any α ∈ A(x), there is some i with βi > αi + 2ε. Define α ∈ RN by setting αi = βi − ε In view of the definition of βi , for each i, player i can secure αi + ε at x, as per (i). Aiming at a contradiction, suppose (ii) is false, so for each U ∈ τ, there is some zU ∈ U such that ui (zU ) ≥ αi − ε for all i. Since τi is a directed set (ordered by reverse inclusion), the boundedness of the image of u implies that there is a convergent subnet, so there is α ∈ A(x) such that αi ≥ αi − ε = βi − 2ε for all i, contrary to what we showed above. Q.E.D. When G is quasiconcave, the following condition is weaker than B-security. DEFINITION 2.6: The game is C-secure on Z ⊂ X if there is an α ∈ RN such that the following conditions hold:
GAMES WITH DISCONTINUOUS PAYOFFS
1647
(i) Every player i can secure αi on Z. / Ci (z αi ). (ii) For any z ∈ Z, there exists some player i with zi ∈ The game is C-secure at x ∈ X if it is C-secure on some neighborhood of x. A simple maximization problem illustrates how getting rid of the ε may matter. Let N = 1 and X1 = [0 1] with ⎧ x1 = 0, ⎨ 0 2 1 u1 (x1 ) = ⎩ x1 − 0 < x1 ≤ 1. 2 Setting α1 = 1/4, we see that the problem is C-secure at each nonmaximizer x1 , that is, each x1 ∈ [0 1), but it can easily be checked that it is not B-secure at the point x1 = 0, which is not a maximizer, so it is not better reply secure. In comparison with Definition 2.4, part (ii) of Definition 2.6 replaces Bi (z αi ) with Ci (z αi ), which makes it harder to satisfy in general, but not when G is quasiconcave, so the net effect is to make the hypotheses of the next result weaker than those of Theorem 2.3. The hypotheses of Theorem 3.4 will be weaker still. PROPOSITION 2.7: If the game is C-secure at each x ∈ X that is not a Nash equilibrium, then G has a Nash equilibrium. Nishimura and Friedman (1981) proved the existence of a Nash equilibrium when each Xi is a nonempty, compact, convex subset of a Euclidean space, u is continuous (but not necessarily quasiconcave), and for any x that is not a Nash equilibrium, there is an agent i, a coordinate index k, and an open neighborhood U of x, such that 1 2 (yik − x1ik )(yik − x2ik ) > 0
whenever x1 x2 ∈ U and yi1 and yi2 are best responses for i to x1 and x2 , respectively. Using compactness, and the continuity of ui , it is not difficult to 1 2 − xik )(yik − xik ) > 0 for any two best reshow that this is equivalent to (yik 1 2 sponses yi and yi to x. A more general condition that does not depend on the coordinate system, or the assumption of finite dimensionality, is that there is a hyperplane that strictly separates xi from the set of i’s best responses to x. We now show that if G satisfies Nishimura and Friedman’s hypotheses, then it is C-secure and thus satisfies the hypotheses of Proposition 2.7. Consider an x ∈ X that is not a Nash equilibrium. For each i = 1 N, let βi be the utility for i when other agents play their components of x−i and i plays a best response to x. Since u is continuous, for any ε > 0, player i can secure βi − ε at x by playing such a best response. For any neighborhood V of Bi (x βi ), it is the case that Bi (x βi − ε) ⊂ V when ε is sufficiently small, and in turn it follows
1648
A. MCLENNAN, P. K. MONTEIRO, AND R. TOURKY
that there is a neighborhood U of x such that Bi (z βi − ε) ⊂ V for all z ∈ U. It follows that if there is a hyperplane strictly separating xi from Bi (x βi ), then for sufficiently small ε > 0 and a sufficiently small neighborhood U of x, this hyperplane also strictly separates zi and Bi (z βi − ε) for all z ∈ U, in which case zi ∈ / Ci (z βi ). Setting α = (β1 − ε βN − ε) gives the required property. 3. THE MAIN RESULT Our main result weakens the hypotheses of Proposition 2.7 in two ways. The first is to allow multiple securing strategies at a point, with each strategy responsible for securing the payoff in response to profiles in some component of a finite closed cover of a neighborhood of the point. The second weakening is that the analyst is allowed to specify restrictions on the strategies that agents can consider in response to a profile. We introduce these in the next definition. For each player i, fix a set-valued mapping Xi : X → Xi , and let X = (X1 XN ). We call X a restriction operator. For each i, define the set-valued mappings BiX : X × R → Xi and CiX : X × R → Xi by setting BiX (x αi ) = {yi ∈ Xi (x) : ui (yi x−i ) ≥ αi }
and
CiX (x αi ) = con BiX (x αi ) DEFINITION 3.1: Player i can multiply X -secure a payoff αi ∈ R on a set Z ⊂ X if Z is covered by finitely many closed (in Z) sets F 1 F J ⊂ Z such j j that for each j there is some yi ∈ Xi satisfying yi ∈ BiX (z αi ) for all z ∈ F j . Player i can multiply X -secure αi at x ∈ X if she can multiply X -secure αi on a neighborhood of x. DEFINITION 3.2: For α ∈ RN and Z ⊂ X, the game G is multiply (X α)secure on Z ⊂ X if the following conditions hold: (i) Each player i can multiply X -secure αi on Z. (ii) For any z ∈ Z, there is some player i for whom zi ∈ / CiX (z αi ). The game is multiply X -secure on Z if there is some α ∈ RN such that it is multiply (X α)-secure on Z, and it is multiply X -secure at x ∈ X if it is multiply X -secure on some neighborhood of x. Following Reny (1999), we do not require that the Xi are Hausdorff spaces. (As Reny acknowledges, this is a mathematical refinement without any known economic applications.) Thus, for x ∈ X, the set {x} need not be closed, and we let [x] denote the closure of {x}. DEFINITION 3.3: The game G is multiply X -secure if it is multiply X -secure at each x such that [x] does not contain a Nash equilibrium. We say that G is multiply restrictionally (MR) secure if there is a restriction operator X such that it is multiply X -secure.
GAMES WITH DISCONTINUOUS PAYOFFS
1649
Our main result is as follows. THEOREM 3.4: If G is MR secure, then it has a Nash equilibrium.2 The reader may proceed directly to the proof, which is in Section 7. 4. RECTIFIABLE PAIRS In preparation for the games considered in the next section, we introduce a concept related to combinatoric geometry. Let S be a nonempty finite set, let A be a nonempty set of nonempty subsets of S, and let B be a set of subsets of S that is closed under intersection. DEFINITION 4.1: A B -zellij is a pair (X {Yb }b∈B ) in which X is a nonempty compact subset of a topological vector space, each Yb is a convex subset of X, and Yb ⊂ Yb for all b b ∈ B such that b ⊂ b . DEFINITION 4.2: The pair (A B ) is rectifiable if there is a B -zellij (X {Yb }b∈B ) with X convex for which there is a continuous function f : X → RS such that the following conditions hold: (i) A = {arg maxs∈S fs (x) : x ∈ X}. (ii) For each b ∈ B , we have {a ∈ A : a ⊂ b} = {arg maxs∈S fs (x) : x ∈ Yb }. We illustrate this concept with a rich class of rectifiable pairs. Fix an integer d ≥ 1. A simplex σ in Rd is the convex hull of a finite (possibly empty) affinely independent set of points Vσ ⊂ Rd , the elements of which are called the vertices of the simplex. A face of σ is the convex hull of a (possibly empty or nonproper) subset of Vσ . A (finite) simplicial complex is a finite collection of simplices Σ such that the following conditions hold: (i) τ ∈ Σ whenever σ ∈ Σ and τ is a face of σ. (ii) Forany σ σ ∈ Σ, σ ∩ σ is a face of both σ and σ . Let |Σ| = σ∈Σ σ. If T is a second simplicial complex in Rd , then Σ is said to be a subdivision of T if |T | = |Σ| and each σ ∈ Σ is contained in some τ ∈ T . V . For each τ ∈ T , Fix such a Σ and T , and let X = |Σ| = |T | and S = σ σ∈Σ let Wτ = σ⊂τ Vσ = τ ∩ S, and let
A = {Vσ : ∅ = σ ∈ Σ} and B = {Wτ : τ ∈ T } Note that B is closed under finite intersection: Wτ ∩ Wτ = τ ∩ τ ∩ S = Wτ∩τ . For b = Wτ ∈ B , let Yb = τ. Clearly (X {Yb : b ∈ B }) is a B -zellij. PROPOSITION 4.3: If X is convex, then the pair (A B ) is rectifiable. 2 The converse holds as well, but for a trivial reason: if x∗ is a Nash equilibrium and Xi (x) = {x∗i } for all i and x, then G is multiply X -secure.
1650
A. MCLENNAN, P. K. MONTEIRO, AND R. TOURKY
PROOF: For x ∈ X, let σx be the smallest simplex in Σ containing x, and let of RS given by the conditions fv (x) = 0 if v ∈ / Vσ x , f (x) be the element fv (x) = 1, and fv (x)v = x. Note that f : X → RS is continuous. We need to verify (i) and (ii) of Definition 4.2. For each x ∈ X, we have arg maxv∈S fv (x) = Vσ for some nonempty face σ of σx . On the other hand, for each nonempty σ ∈ Σ, we have arg maxv∈S fv (xσ ) = Vσ , where xσ = |V1σ | v∈Vσ v. Therefore (i) holds. To verify (ii), observe that for b = Wτ ∈ B , we have
{a ∈ A : a ⊂ b} = {Vσ : σ ⊂ τ} = arg max fv (xσ ) : σ ⊂ τ v∈S
= arg max fv (x) : x ∈ σ ⊂ τ
v∈S
= arg max fv (x) : x ∈ Yb v∈S
Q.E.D.
For the location model of Section 6, the most important example of realizability is the following corollary. COROLLARY 4.4: If A is the set of all nonempty subsets of S and B = A ∪ {∅}, then (A B ) is rectifiable. PROOF: Let Σ = T be the set of convex hulls of subsets of the set of standard unit basis vectors (0 1 0) ∈ RS , and apply the last result. Q.E.D. 5. DIAGNOSABLE GAMES We now study a finite game Γ = (A1 AN v1 vN ) in which, for each i = 1 N, the set Ai of pure strategies for agent i is a nonempty set of nonempty subsets of a finite set Si , and each vi is a function N from A = i=1 Ai to R. DEFINITION 5.1: A diagnostic for Γ is a pair (B d) of N-tuples B = (B1 BN ) and d = (d1 dN ) such that for each i, (Ai Bi ) is a rectifiable pair and di is a function from A−i to the set of all subsets of Si , and the following conditions hold: (D1) If a ∈ A is not a Nash equilibrium, then there is an i such that di (a−i ) ∈ Bi and ai is not a subset of di (a−i ). (D2) If A is a subset of A that is completely ordered by componentwise set inclusion (that is, for all a a ∈ A , either ai ⊂ ai for all i or a i ⊂ ai for all i), then for each i there is some ai ∈ Ai such that ai ⊂ a ∈A di (a−i ).
1651
GAMES WITH DISCONTINUOUS PAYOFFS
We say that Γ is diagnosable if it has a diagnostic. THEOREM 5.2: If Γ is diagnosable, then it has a Nash equilibrium. PROOF: For each i, let Ei = RSi Let S be the disjoint union of the sets N S1 SN and set E = RS = i=1 Ei . Consider a strict complete ordering of S. For each i, let si be the -maximal element of Si . We say that β ∈ E conforms to if si ∈ arg maxsi ∈Si βisi for all i, and (∗)
βisi − βisi ≤ βjsj − βjsj
for all i and j and all si ∈ Si and sj ∈ Sj such that si sj . Let C be the set of all vectors in E that conform to . Since C is defined by weak inequalities, it is closed. For any β ∈ E, we can define a relation on S by choosing si ∈ arg maxsi ∈Si βisi for each i and specifying that for all i and j and all si ∈ Si and sj ∈ Sj , si sj if and only if (∗) holds. Then β ∈ C for any that refines in the sense that si sj whenever it is not the case that sj si . Therefore, the various C constitute a closed cover of E. Let (B d) be a diagnostic for Γ . For each i, let Xi , Ybi ⊂ Xi for bi ∈ Bi , and let fi : Xi → Ei be the sets and functions from the rectifiability assumption. Set N X = i=1 Xi and define f : X → E by setting f (x) = (f1 (x1 ) fN (xN )). For each strict complete ordering of S, let F = f −1 (C ). Since f is continuous and C is closed, F is closed. Let F be the collection of nonempty F ; this is a closed cover of X. For each i, let ai : Xi → Ai be the function ai (xi ) = arg max fisi (xi ) si ∈Si
and define a : X → A by setting a(x) = (a1 (x1 ) aN (xN )). Fix such that F is nonempty. Let a1 = ({s1 } {s2 } {sN }). For 1 ≤ k ≤ |S|, let ik be the agent such that the kth largest (according to ) element of S is contained in Sik , and let sikk be this element. For k > 1, define ak = (ak1 akN ) inductively by setting
ak−1 ∪ {sikk } if ik = i, i k ai = otherwise. ak−1 i If, for β ∈ C , k is the largest integer such that sikk ∈ arg maxsi ak = arg max β1s1 arg max βNsN s1 ∈S1
k ∈Sik
βik sik , then
sN ∈SN
If z ∈ F , then a(z) = ak for some k, so the a(z) for the various z ∈ F are completely ordered by coordinatewise set inclusion.
1652
A. MCLENNAN, P. K. MONTEIRO, AND R. TOURKY
We define a game G = (X1 XN u1 uN ) by setting ui (x) = vi (a(x)). For each i, the image of ai is Ai (by (i) of the definition of rectifiability), so x∗ ∈ X is a Nash equilibrium of G if and only if a(x∗ ) is a Nash equilibrium of Γ . We will show that if G had no Nash equilibria, then it would necessarily be MR secure, after which Theorem 3.4 implies that it has a Nash equilibrium after all, so that Γ also has a Nash equilibrium. For each x ∈ X and player i, let Ydi (a−i (x−i )) if di (a−i (x−i )) ∈ Bi , Xi (x) = otherwise. Xi Note that Xi (x) is always convex because Xi and each Ybi are convex. For each i, let αi be a lower bound on vi , so that BiX (x αi ) = Xi (x) = CiX (x αi ) for all x ∈ X. Let α = (α1 αN ). We need to show that if G has no Nash equilibria, then G is multiply (X α)-secure at each x ∈ X, and of course this will follow if we show that G is multiply (X α)-secure on X. This is our goal in the remainder of the proof. For any x ∈ X, (D1) tell us that there exists some i such that di (a−i (x−i )) ∈ Bi and ai (xi ) is not a subset of di (a−i (x−i )). This implies, by (ii) of Definition 4.2, / Xi (x) = CiX (x αi ), which is to say that Definition 3.2(ii) holds for that xi ∈ Z = X. It remains to show that Definition 3.2(i) also holds, that is, each i can multiply X -secure αi on X. Fixing
such that F is nonempty and fixing a particular i, it suffices to find a yi ∈ x∈F BiX (x αi ). / Bi for all x ∈ There are now two possibilities. The first is that di (a−i (x−i )) ∈ F . In this case, BiX (x αi ) = Xi (x) = Xi for all x ∈ F , so yi can be any element of Xi . Otherwise let b i = di (a−i (x−i )), where the intersection is over all x ∈ F such that di (a−i (x−i )) ∈ Bi . We have b i ∈ Bi because Bi is closed under intersection. Since the a−i (x−i ) for the various x ∈ F are completely ordered by coordinatewise set inclusion, (D2) gives an a i ∈ Ai such that a i ⊂
x∈F di (a−i (x−i )), so that ai ⊂ bi . Condition (ii) of Definition 4.2 now gives
a yi ∈ Yb i such that ai (yi ) = ai . For all x ∈ F such that di (a−i (x−i )) ∈ Bi , we have yi ∈ Yb i , and the definition of a zellij gives Yb i ⊂ Ydi (a−i (x−i )) . Therefore, yi ∈ x∈F BiX (x αi ). Q.E.D. We now show that Theorem 5.2 can be used to give a brief (in comparison with, for example, McLennan and Tourky (2008)) proof of Sperner’s lemma. This is significant because it shows that Theorem 5.2 is not simpler than the fixed point principle and, consequently, not likely to be provable by more elementary methods. Let E = {e1 ed } be the set of standard unit basis vectors of Rd . Let Σ be a simplicial complex such that |Σ| is the convex hull of E, and let V = σ∈Σ Vσ . For each F ⊂ E, let WF be the set of v ∈ V contained in the convex hull of F . A Sperner labelling is a function λ : V → E such that λ(WF ) ⊂ F for all F ⊂ E.
GAMES WITH DISCONTINUOUS PAYOFFS
1653
A simplex σ ∈ Σ is completely labelled if λ(Vσ ) = E. Sperner’s lemma asserts that any Sperner labelling has a completely labelled simplex. We now define a two player game Γ = (A1 A2 v1 v2 ) in which A1 is the set of all nonempty subsets of E and A2 = {Vσ : ∅ = σ ∈ Σ}. Note that A2 contains each singleton subset of E. Let r : E → E be the function given by r(ei ) = ei+1 if i < d and r(ed ) = e1 . The payoffs of the players are given by 1 if a1 ⊂ r(λ(a2 )), v1 (a1 a2 ) = 0 otherwise, and
v2 (a1 a2 ) =
1 if a2 ⊂ Wa1 , 0 otherwise.
Let B1 = A1 ∪ {∅} and B2 = {WF : F ⊂ E}. Proposition 4.3 implies that (A1 B1 ) and (A2 B2 ) are rectifiable. Let d1 (a2 ) = r(a2 ) and d2 (a1 ) = Wa1 . If (a1 a2 ) is not a Nash equilibrium, then either a1 ⊂ r(a2 ) = d1 (a2 ) ∈ B1 or a2 ⊂ Wa1 = d2 (a1 ) ∈ B2 . Therefore, (D1) is satisfied. For (D2), note that d1 (a2 ) and d2 (a1 ) are strictly monotonic (with respect to set inclusion) functions of a2 and a1 , and are never empty. Thus Γ is a game on subsets, so it has a Nash equilibrium a∗ . Each player can obtain utility 1 by responding appropriately to a given pure strategy of the other player: v1 (a1 a2 ) = 1 if a1 = r(λ(a2 )) and v2 (a1 a2 ) = 1 if a2 is a singleton subset of a1 . Therefore, v1 (a∗ ) = v2 (a∗ ) = 1, which is to say that a∗1 ⊂ r(λ(a∗2 )) and a∗2 ⊂ Wa∗1 . Since λ is a Sperner labelling, the latter condition implies that λ(a∗2 ) ⊂ a∗1 , so λ(a∗2 ) ⊂ r(λ(a∗2 )). If ∅ = F ⊂ E and F ⊂ r(F), then F = E, so λ(a∗2 ) = E. That is, the σ such that a∗2 = Vσ is completely labelled. 6. A LOCATION MODEL Schelling (1971, 1972) explored a number of dynamic processes involving location choices by individuals, emphasizing the relationship between individual preferences to be co-located with similar people and segregated outcomes. More generally, the idea of locational externalities goes back at least to Marshall, and there is now a large related literature that continues to be quite active. For the most part, Schelling did not work with configurations that are stable, in the sense that each agent prefers her current location to any available alternative, taking the locations of others as fixed, and did not prove that such configurations exist. Instead, he presented simulations in which agents repeatedly relocate to preferred points myopically, insofar as agents do not anticipate the future relocation decisions of others. This section presents finite games that are similar in spirit, but instead of studying them dynamically, we show that such a game has a nonempty set of
1654
A. MCLENNAN, P. K. MONTEIRO, AND R. TOURKY
Nash equilibria, which can be interpreted as stable configurations of location choices. This means that the associated dynamic model has configurations that are potential rest points and which in this sense can be regarded as potential ultimate outcomes. DEFINITION 6.1: A location game is a game Γ = (A1 AN v1 vN ) such that, for some nonempty finite set S of locations, for each i, it is the case that the following statements hold: (i) Ai is the set of all nonempty subsets of some nonempty Si ⊂ S. (ii) There are sets Ui Di ⊂ {1 N} \ {i}. For s ∈ Si and a−i ∈ A−i let ui (s a−i ) = |{j ∈ Ui : s ∈ aj }| and di (s a−i ) = {j ∈ Di : aj = {s}} (iii) The payoff of player i at strategy profile a ∈ A is vi (a) = mins∈ai yi (s a−i ), where two conditions hold: (a) If Ui = ∅, then yi : Si × A−i → {0 1} is given by 0 if ui (s a−i ) > di (s a−i ), yi (s a−i ) = 1 otherwise. (b) If Ui = ∅, then yi : Si × A−i → {0 1} is given by ⎧ ⎨ 0 if s ∈ / aj , yi (s a−i ) = j∈Di ⎩ 1 otherwise. We think of Si as the set of locations that are feasible for agent i. Perhaps these are bars close to where i lives. The strategy ai is the set of locations that i sometimes visits. We say that player i frequents each s ∈ ai and that i is a denizen of s if ai = {s}. The sets Ui and Di are the sets of other people that agent i regards as undesirable and desirable, respectively. (Note that it is not necessary to assume that Ui and Di are disjoint, although that would be consistent with the spirit of the model.) Thus ui (s a−i ) and di (s a−i ) are, respectively, the number of undesirables frequenting s and the number of desirable denizens of s from i’s point of view. If Ui is nonempty, then player i’s only concern is to avoid frequenting a location where the undesirables might outnumber the desirables because it is frequented by more undesirables than the number of desirable denizens. If i does not regard anyone as undesirable, then she only wishes to avoid frequenting a location that is not frequented by anyone she likes. THEOREM 6.2: A location game Γ = (A1 AN v1 vN ) is diagnosable and, consequently, has a Nash equilibrium.
GAMES WITH DISCONTINUOUS PAYOFFS
1655
PROOF: For each i, let Bi = Ai ∪ {∅}. The pair (Ai Bi ) is rectifiable by Corollary 4.4. Let B = (B1 BN ). For each i, define di by setting di (a−i ) = arg maxs∈Si yi (s a−i ). We need to verify (D1) and (D2). If a ∈ A is not a Nash equilibrium, then for some i, we have vi (a) = 0 and vi (ai a−i ) = 1 for some ai ∈ Ai , so that yi (s a−i ) = 0 for some s ∈ ai and yi (s a−i ) = 1 for all s ∈ ai . Thus s ∈ / di (a−i ), so ai is not subset of di (a−i ). Since any subset of Si is an element of Bi , it follows that (D1) holds. Let a1 a2 aK be a sequence in A that is decreasing with respect to componentwise set inclusion. Fix a player i and note that for any s, ui (s ak−i ) is a weakly decreasing function of k and di (s ak−i ) is a weakly increasing function of k. First suppose that Ui is not empty. If ui (s ak−i ) > di (s ak−i ) for all s ∈ Si and all k, then for any s∗ , we have s∗ ∈ di (a1−i ) ∩ · · · ∩ di (aK−i ). Otherwise, let k∗ ∗ ∗ be the first number such that ui (s∗ ak−i ) ≤ di (s∗ ak−i ) for some s∗ ∈ Si . Then ∗ k ∗ k ∗ k ui (s a−i ) ≤ di (s a−i ) and, consequently, yi ({s } a−i ) = 1 for all k ≥ k∗ . If k < ∗ ∗ k∗ , then ui (s ak−i ) > di (s ak−i ) and yi (s aki ) = 0 for all s ∈ Si , so that s ∈ di (ak−i ) for all s ∈ Si . In particular, for all k, we have s∗ ∈ di (ak−i ). Letting ai = {s∗ }, we conclude that (D2) is satisfied. Suppose now that Ui is empty. If a1j ∩ Si = ∅ for all j ∈ Di , then for any s∗ ∈ Si , we have s∗ ∈ di (ak−i ) for all k. Otherwise, let k∗ be the largest integer such that ∗ akj ∩ Si = ∅ for some j ∈ Di and let s∗ be an element of such an intersection. Again we have s∗ ∈ di (ak−i ) for all k. We conclude that (D2) is also satisfied in this case. Q.E.D. Insofar as we wish to convincingly illustrate the usefulness of our results, it is important to consider whether other methods could be used to show that a game on subsets has a Nash equilibrium. The following particular instance of the location model illustrates how various other methods are inapplicable. There are two players and three locations. Let S = {a b c}, S1 = S, and S2 = {a b}. Player 1 considers player 2 desirable while player 2 considers player 1 undesirable, so that U1 = ∅, D1 = {2}, U2 = {1}, and D2 = ∅. Therefore, the payoffs are
Player 1
{a} {b} {c} {a b} {a c} {b c} {a b c}
{a} (1 0) (0 1) (0 1) (0 0) (0 0) (0 1) (0 0)
Player 2 {b} (0 1) (1 0) (0 1) (0 0) (0 0) (0 0) (0 0)
{a b} (1 0) (1 0) (0 1) (1 0) (0 0) (0 0) (0 0)
1656
A. MCLENNAN, P. K. MONTEIRO, AND R. TOURKY
One can easily check that ({a b} {a b}) is the only pure strategy Nash equilibrium. There is a mixed Nash equilibrium ( 12 {a} + 12 {b} 12 {a} + 12 {b}) that is not a pure equilibrium. Therefore, one cannot prove the existence of a pure Nash equilibrium by combining the existence of a mixed equilibrium with an argument showing that all equilibria are pure. A strategic form game is a potential game (Rosenthal (1973), Monderer and Shapley (1996)) if there is a real-valued potential function on the set of pure strategy profiles such that for any pair of profiles that differ in only one agent’s component, the difference in that agent’s utility at the two profiles has the same sign as the difference in the potential function at the two profiles. Any maximizer of the potential function is a Nash equilibrium, so Nash equilibria exist when the pure strategy sets are finite. For any potential game, any game obtained by restricting each agent to a subset of her strategies is also a potential game. The game obtained by restricting each agent to play either {a} or {b} is an instance of matching pennies, which of course has no Nash equilibrium and is thus not a potential game. A strategic form game exhibits strategic complementarities or increasing differences (Bulow, Geanakoplos, and Klemperer (1985)) if the agents’ sets of pure strategies are partially ordered and increasing one agent’s strategy, relative to the given ordering, weakly increases the desirability for the other agents to increase their strategies. If various auxiliary conditions are satisfied, a game that exhibits strategic complementarities necessarily has Nash equilibria. In a game on subsets, the pure strategies are ordered by both inclusion and reverse inclusion. However, we have v1 ({a b} {a b}) − v1 ({a} {a b}) > v1 ({a b} {a}) − v1 ({a} {a}) and v1 ({a b c} {a b}) − v1 ({a b} {a b}) < v1 ({a b c} {a}) − v1 ({a b} {a}) so the location game does not exhibit strategic complementarities. This example can also be used to show that Reny’s theorem cannot be used to prove Theorem 6.2 by following the proof of Theorem 5.2 up to the construction of the derived game G, then showing that G is better reply secure when Γ is a location game. In this example, A1 is the set of all nonempty subsets of S1 , and B1 = A1 ∪ {∅}. As Corollary 4.4 points out, (A1 B1 ) is rectifiable because we can follow the proof of Proposition 4.3 with Σ1 = T1 being the set of all faces S of the simplex Δ1 = {x1 ∈ R+1 : x1a + x1b + x1c = 1}, setting X1 = |Σ1 | = |T1 | = Δ1 S1 and letting f1 : X1 → R be the identity. Similarly, Σ2 = T2 is the set of all S faces of the simplex Δ2 = {x2 ∈ R+2 : x2a + x2b = 1}, X2 = |Σ2 | = |T2 | = Δ2 , and
GAMES WITH DISCONTINUOUS PAYOFFS
1657
f2 : X2 → RS2 is the identity. The proof of Theorem 5.2 constructs the game G = (X1 X2 ; u1 u2 ) by setting ui (x) = vi arg max x1s1 arg max x2s2 s1 ∈S1
s2 ∈S2
for both i. The point ((0 0 1) (1/2 1/2)) is not a Nash equilibrium, and it is not hard to show that the game is not better reply secure at this point, so G is not better reply secure. It can happen that a game has an upper hemicontinuous best reply correspondence even though the payoff functions are discontinuous, but one can see that this is not the case for G by considering a sequence xn1 which correspond to player 1 playing {a} converging to x1 that corresponds to {a b}. Note that player 2’s best response to player 1 playing x1 does not include (1 0). Therefore, the best response correspondence is not upper hemicontinuous. 7. THE PROOF OF THEOREM 3.4 We say that a (not necessarily Hausdorff) topological space is regular if each point has a neighborhood base of closed sets. (This is the definition given by Kelley (1955), which differs from the usage of some other authors.) Topological vector spaces are regular topological spaces, even if they are not Hausdorff (e.g., Schaefer (1971, p. 16)). It is easy to see that any subspace of a regular space is regular and that finite cartesian products of regular spaces are regular, so each Xi , each X−i , and X are all regular. In preparation for the main body of the argument, we present two lemmas, the first of which is a variant of the fixed point principle that holds in topological vector spaces that are neither Hausdorff nor locally convex. Although it bears some resemblance to the Knaster–Kuratowski–Mazurkiewicz lemma, we have not managed to forge a direct connection between the two results. LEMMA 7.1: Let X be a nonempty convex subset of a topological vector space and let P : X → X be a set-valued mapping. If there is a finite closed cover H1 HL of X such that z∈Hj P(z) = ∅ for each j = 1 L, then there exists x∗ ∈ X such that x∗ ∈ con P(x∗ ). PROOF: For each j = 1 L, choose a yj ∈ z∈Hj P(z). Let e1 eL be L the standard unit basis vectors of R , let Δ be their convex hull, and let π : Δ → X be the map ω → j ωj yj ; this is continuous because addition and scalar multiplication are continuous operations in any topological vector space. Define a set-valued mapping Q : Δ → Δ by letting Q(ω) be the convex hull of {ej : π(ω) ∈ Hj }. This is nonempty-valued because the Hj cover X, it is upper hemicontinuous because each Hj is closed, and of course it is convex-valued,
1658
A. MCLENNAN, P. K. MONTEIRO, AND R. TOURKY
so Kakutani’s fixed point theorem implies that it has a fixed point ω∗ . Let x∗ = π(ω∗ ). Then ∗ ∗ x ∈ con{yj : x ∈ Hj } ⊂ con P(x) ⊂ con P(x∗ ) j : x∗ ∈Hj
x∈Hj
Q.E.D. For the remainder of the section we fix a game G = (X1 XN ; u1 uN ) and a restriction operator X . LEMMA 7.2: Suppose that α1 α ∈ RN , U1 U are subsets of X, and, for each h = 1 , G is multiply (X αh )-secure on Uh . Let α = max αh = max αh1 max αhN h=1
h
h
Then G is multiply (X α)-secure on any nonempty U ⊂
h=1
Uh .
J
PROOF: For each h, there is a cover Fh1 Fhh of Uh by relatively closed subsets such that for each i and j, we have z∈F j BiX (z αhi ) = ∅. Let G1 GJ j
h
j
be the nonempty intersections of the form F11 ∩ · · · ∩ F ∩ U; this is a cover of J U by nonempty relatively subsets. It refines each cover Fh1 Fhh , so closed X for all i and j, we have z∈Gj Bi (z αi ) = ∅ because there is some h such that / CiX (z αhi ). Since αi = αhi . For any h and z ∈ U, there is some i such that zi ∈ h X αi ≥ αi , this implies that zi ∈ / Ci (z αi ). Q.E.D. We now have the tools we need to complete the proof of Theorem 3.4. Aiming at a contradiction, suppose that G is multiply X -secure and has no Nash equilibrium. Because X is compact and regular, it is covered by closed sets F1 F such that for each h = 1 , there is αh ∈ RN and an open neighborhood Uh of Fh such that G is multiply (X αh )-secure on Uh . Define ψ : X → R by setting ψ(x) = max αh x∈Fh
For each x, let Ux = h:x∈Fh Uh . The last result implies that G is multiply (X ψ(x))-secure on any nonempty subset of Ux , whence it is multiply (X ψ(x))-secure at x. For each i and x ∈ X, let Pi (x) = BiX (x ψi (x)) and let P(x) = P1 (x) × · · · × PN (x). Below we will show that the hypotheses of Lemma 7.1 are satisfied, so some x∗ is an element of con P(x∗ ), which is to say that x∗i ∈ CiX (x∗ ψi (x∗ )) for all i. But G is multiply (X ψ(x∗ ))-secure at x∗ , so x∗i ∈ / CiX (x∗ ψi (x∗ )) for some i, which is the desired contradiction.
GAMES WITH DISCONTINUOUS PAYOFFS
1659
Consider a particular x ∈ X. The function ψ is upper semicontinuous and takes on finitely many values, so there is a closed neighborhood Zx ⊂ Ux of x such that ψi (z) ≤ ψi (x) for all i and z ∈ Zx . Since G is multiply (X ψ(x))secure on Zx , there is a cover Zx1 ZxJx of Zx by closed subsets such that for each j = 1 Jx , we have Pi (z) = BiX (z ψi (z)) ⊃ BiX (z ψi (x)) = ∅ j
j
z∈Zx
j
z∈Zx
z∈Zx
for all i, so that P(z) = P1 (z) × · · · × PN (z) j
j
z∈Zx
z∈Zx
=
j z∈Zx
P1 (z) × · · · ×
PN (z) = ∅
j z∈Zx
Since X is compact, it is covered by some finite collection Zx1 Zxm , and j the various Zxh constitute a cover of the sort required by Lemma 7.1. 8. PAYOFF SECURE GAMES Reny pointed out that a combination of two conditions, payoff security and reciprocal upper semicontinuity, imply the hypotheses of his existence result, and in applications it is typically relatively easy to verify them when they hold. We now define generalizations of these notions in our setting and establish that together they imply that the game is MR secure. Reny’s notion of payoff security requires that for each x and ε > 0, each player i can secure ui (x) − ε at x. DEFINITION 8.1: The game G is multiply X -payoff secure at x ∈ X if, for each ε > 0, each player i can multiply X -secure ui (x) − ε at x. The game G is multiply X -payoff secure if it is X -secure at each x ∈ X. Reny’s reciprocal upper semicontinuity (which was introduced by Simon (1987) under the name “complementary discontinuities” and which generalizes the requirement in Dasgupta and Maskin (1986) that the sum of payoffs be upper semicontinuous) requires that u(x) = α whenever (x α) is in the closure of the graph of u and α ≥ u(x). Equivalently, for each possible “jump up” δ > 0, there is a neighborhood U of x and a “jump down” ε > 0 such that for each z ∈ U, if ui (z) > ui (x) + δ for some i, then there must be a player j with uj (z) < uj (x) − ε. DEFINITION 8.2: The game G is X -reciprocally upper semicontinuous at x ∈ X if for every δ > 0, there exists ε > 0 and a neighborhood U of x such that
1660
A. MCLENNAN, P. K. MONTEIRO, AND R. TOURKY
for each z ∈ U, if there is a player i with zi ∈ CiX (z ui (x) + δ), then there is a / CjX (z uj (x) − ε). The game G is X -reciprocally upper player j such that zj ∈ semicontinuous if it is X -reciprocally upper semicontinuous at each x ∈ X. The next result is the analog in our setting of Reny’s Proposition 3.2, which asserts that payoff security and reciprocal upper semicontinuity imply better reply security. In conjunction with Theorem 3.4, it implies a generalization of Reny’s Corollary 3.3. We say that X is other-directed if, for each i, Xi (x) depends only on x−i , so that Xi (yi x−i ) = Xi (zi x−i ) for all x−i ∈ X−i and yi zi ∈ Xi . PROPOSITION 8.3: If X is other-directed and G is multiply X -payoff secure and X -reciprocally upper semicontinuous, then it is multiply X -secure. PROOF: Fixing an x ∈ X that is not a Nash equilibrium, our goal is to show that the game is multiply X -secure at x. Let I be the set of players i such that xi is not a best response to x−i . For each i ∈ I , choose yi such that ui (yi x−i ) > ui (x), and let δ > 0 be small enough that ui (yi x−i ) > ui (x) + δ for all i ∈ I . Since the game is multiply X -payoff secure, each i ∈ I can multiply X -secure αi = ui (x) + δ at (yi x−i ). Since X is other-directed, player i can also multiply X -secure αi at x. Since the game is X -reciprocally upper semicontinuous, there is an ε > 0 and a neighborhood U of x such that for all z ∈ U, if zi ∈ CiX (z ui (x) + δ) for / CjX (z uj (x) − ε) for some j. Since the game is multiply X some i, then zj ∈ payoff secure, each j ∈ / I can multiply X -secure αj = uj (x) − ε at x. It is now the case that for each z ∈ U, either zi ∈ / CiX (z αi ) for all i ∈ I or zj ∈ / CjX (z αj ) Q.E.D. for some j ∈ / I. Reny pointed out (Corollary 3.4) that if each ui is lower semicontinuous in the strategies of the other players, then the game is necessarily payoff secure because for each x, ε > 0, and i, player i can use xi to secure ui (x) − ε. DEFINITION 8.4: We say that ui is multiply lower semicontinuous in the strategies of the other players if, for every x ∈ X, there exists is a finite closed cover j F 1 F J of a neighborhood of x such that for each j, there is some yi ∈ Xi j j satisfying ui (yi x−i ) ≥ ui (x) and the function z−i → ui (yi z−i ) restricted to F j is lower semicontinuous. The universal restriction operator is given by Xi (x) = Xi for all i and x ∈ X. We drop the qualifier X for this particular restriction operator. Similarly, it makes sense to drop “multiply” if all the relevant closed covers have a single element. The next result follows from Proposition 8.3.
GAMES WITH DISCONTINUOUS PAYOFFS
1661
COROLLARY 8.5: If each ui is multiply lower semicontinuous in the strategies of other players, then it is multiply payoff secure. If in addition G is quasiconcave and reciprocally upper semicontinuous, then there exists a Nash equilibrium. Bagh and Jofre (2006) showed that better reply security is implied by payoff security and a condition that is weaker than reciprocal upper semicontinuity. We have not found a suitable analog of that concept. Carmona (2009) showed that existence of equilibrium is implied by a weakening of payoff security and a weak form of upper semicontinuity. We do not know of analogs of these conditions in our setting. 9. CONCLUSION We have weakened the hypotheses of Reny’s theorem in several directions, achieved a briefer3 proof, and also extended his notions of payoff security and reciprocal upper semicontinuity. The generalized theorem has been used to prove the existence of pure Nash equilibrium in a new class of finite games, the diagnosable games, which includes examples in the spirit of locations models studied by Schelling (1971, 1972). Our result implies the Nishimura and Friedman (1981) existence theorem, which was one of the two earlier results not encompassed by Reny’s theorem, the other being Cournot equilibrium with fixed costs. The concepts introduced in Section 3 were inspired in part by the hope of achieving a result that could be used to prove the existence result of Novshek (1985), which is the most refined result asserting the existence of Cournot equilibrium in the stream of literature following McManus (1962, 1964), Roberts and Sonnenschein (1976), and Szidarovzky and Yakowitz (1977). Specifically, it often happens that the set of quantities that allow a firm to obtain at least a certain level of profits will not be convex because it includes both increases and reductions to zero. Introducing a restriction operator allows the analyst to restrict attention to one possibility or the other at each point. A proof of Novshek’s result by applying our result can be accomplished for the case of two firms, and was presented in an earlier version of this paper, but extending the argument to an arbitrary number of firms proved to be quite difficult. This seems to be related to the fact that although the general case is a game of strategic substitutes, in the sense of Bulow, Geanakoplos, and Klemperer (1985), the case of two firms can be construed as a game of strategic complements if the ordering of one of the two firm’s sets of quantities is reversed. Subsequent literature related to Novshek’s result has gone in a different direction. Kukushkin (1994) presented a brief proof of the general case. As 3 Prokopovych (2011) gave a nice short proof for quasiconcave better reply secure games whose sets of pure strategies are metric spaces.
1662
A. MCLENNAN, P. K. MONTEIRO, AND R. TOURKY
with various proofs of topological fixed point theorems, Kukushkin’s argument combines a nontrivial combinatoric result with a straightforward limiting argument. Kukushkin (2004) gave a more general treatment of finite games in which there is an ordering of strategy profiles that is increased by best response dynamics. Huang (2002), Dubey, Haimanko, and Zapechelnyuk (2006), and Kukushkin (2007) developed an approach to games of strategic substitutes and complements that generalizes the notion of a potential game. Jensen (2010) introduced a class of games that allows the methods of all these authors to be applied more broadly. In all of this literature a key feature is the presence of useful univariate aggregators. There are many issues related to games on subsets that could be pursued. For example, we do not know of an algorithm for computing a Nash equilibrium of a location game other than exhaustive search. Since the size of the set of pure strategy profiles for the location model is an exponential function of the data required to define an instance, there is the question of whether there is a polynomial time algorithm for computing an equilibrium and, if not, what can be said about the computational complexity of this problem. The class of games on subsets is evidently quite rich and might have a number of applications that are not apparent at present. Even considering only the location model, an examination of our methods quickly reveals that the setup could be varied in many directions. REFERENCES BAGH, A., AND A. JOFRE (2006): “Reciprocal Upper Semicontinuity and Better Reply Secure Games: A Comment,” Econometrica, 74, 1715–1721. [1643,1661] BARELLI, P., AND I. SOZA (2009): “On the Existence of Nash Equilibria in Discontinuous and Qualitative Games,” Working Paper, University of Rochester. [1643] BICH, P. (2006): “A Constructive and Elementary Proof of Reny’s Theorem,” Working Paper, Centre d’Economie de la Sorbonne. [1643] (2009): “Existence of Pure Nash Equilibria in Discontinuous and Non Quasiconcave Games,” Internation Journal of Game Theory, 38, 395–410. [1643] BULOW, J., J. GEANAKOPLOS, AND P. KLEMPERER (1985): “Multimarket Oligopoly: Strategic Substitutes and Strategic Complements,” Journal of Political Economy, 93, 488–511. [1656,1661] CARBONELL -NICOLAU, O. (2010): “Perfect and Limit Admissible Perfect Equilibria in Discontinuous Games,” Working Paper, Rutgers University. [1643] (2011): “On the Existence of Pure-Strategy Perfect Equilibrium in Discontinuous Games,” Games and Economic Behavior, 71, 23–48. [1643] CARMONA, G. (2005): “On the Existence of Equilibria in Discontinuous Games: Three Counterexamples,” International Journal of Game Theory, 33, 181–187. [1643] (2009): “An Existence Result for Discontinuous Games,” Journal of Economic Theory, 144, 1333–1340. [1643,1661] DASGUPTA, P., AND E. MASKIN (1986): “The Existence of Equilibrium in Discontinuous Economic Games I: Theory,” Review of Economic Theory, 53, 1–26. [1659] DE CASTRO, L. (2011): “Equilibria Existence and Approximation of Regular Discontinuous Games,” Economic Theory (forthcoming). [1643] DUBEY, P., O. HAIMANKO, AND A. ZAPECHELNYUK (2006): “Strategic Complements and Substitutes, and Potential Games,” Games and Economic Behavior, 54, 77–94. [1662]
GAMES WITH DISCONTINUOUS PAYOFFS
1663
HUANG, Z. (2002): “Fictitious Play in Games With a Continuum of Strategies”, Ph.D. Thesis, State University of New York at Stony Brook. [1662] JENSEN, M. (2010): “Aggregative Games and Best-Reply Potentials,” Economic Theory, 43, 45–66. [1662] KELLEY, J. (1955): General Topology. New York: Springer-Verlag. [1657] KUKUSHKIN, N. (1994): “A Fixed-Point Theorem for Decreasing Mappings,” Economics Letters, 46, 23–26. [1661] (2004): “Best Response Dynamics in Finite Games With Additive Aggregation,” Games and Economic Behavior, 48, 94–110. [1662] (2007): Acyclicity and Aggregation in Strategic Games. Moscow: Russian Academy of Sciences, Dorodnicyn Computing Center. [1662] MCLENNAN, A., AND R. TOURKY (2008): “Using Volume to Prove Sperner’s Lemma,” Economic Theory, 35, 593–597. [1652] MCMANUS, M. (1962): “Numbers and Size in Cournot Oligopoly,” Yorkshire Bulletin of Social and Economic Research, 14, 14–22. [1644,1661] (1964): “Equilibrium, Numbers and Size in Cournot Oligopoly,” Yorkshire Bulletin of Social and Economic Research, 16, 68–75. [1644,1661] MONDERER, D., AND L. SHAPLEY (1996): “Potential Games,” Games and Economic Behavior, 14, 124–143. [1656] MONTEIRO, P., AND F. PAGE (2007): “Uniform Payoff Security and Nash Equilibrium in Compact Games,” Journal of Economic Theory, 134, 566–575. [1643] (2008): “Catalog Competition and Nash Equilibrium in Nonlinear Pricing Games,” Economic Theory, 34, 503–524. [1643] NISHIMURA, K., AND J. FRIEDMAN (1981): “Existence of Nash Equilibrium in n Person Games Without Quasiconcavity,” International Economic Review, 22, 637–648. [1643,1644,1647,1661] NOVSHEK, W. (1985): “On the Existence of Cournot Equilibrium,” Review of Economic Studies, 52, 85–98. [1661] PROKOPOVYCH, P. (2010): “Domain L-Majorization and Equilibrium Existence in Discontinuous Games,” Working Paper, Kyiv School of Economics. [1643] (2011): “On Equilibrium Existence in Payoff Secure Games,” Economic Theory (forthcoming). [1643,1661] RENY, P. (1999): “On the Existence of Pure and Mixed Strategy Nash Equilibria in Discontinuous Games,” Econometrica, 67, 1029–1056. [1643-1645,1648] ROBERTS, J., AND H. SONNENSCHEIN (1976): “On the Existence of Cournot Equilibrium Without Concave Profit Functions,” Journal of Economic Theory, 13, 112–117. [1644,1661] (1977): “On the Existence of Cournot Equilibrium Without Concave Profit Functions: Acknowledgement,” Journal of Economic Theory, 16, 521. [1644] ROSENTHAL, R. W. (1973): “A Class of Games Possessing Pure-Strategy Nash Equilibria,” International Journal of Game Theory, 2, 65–67. [1656] SCHAEFER, H. (1971): Topological Vector Spaces. Springer-Verlag. [1657] SCHELLING, T. C. (1971): “Dynamic Models of Segregation,” Journal of Mathematical Sociology, 1, 143–186. [1643,1644,1653,1661] (1972): “A Process of Residential Discrimination; Neighborhood Tipping,” in Discrimination in Economic Life, ed. by A. Pascal. New York: Lexington Books. [1643,1644,1653,1661] SIMON, L. (1987): “Games With Discontinuous Payoffs,” Review of Economic Studies, 54, 569–597. [1659] SZIDAROVZKY, F., AND S. YAKOWITZ (1977): “A New Proof of the Existence and Uniqueness of the Cournot Equilibrium,” International Economic Review, 18, 787–789. [1644,1661] TIAN, G. (2009): “Existence of Equilibria in Games With Arbitrary Spaces and Payoffs: A Full Characterization,” Working Paper, Texas A&M University. [1643]
School of Economics, University of Queensland, Level 6 Colin Clark Building, St. Lucia, QLD 4072 Australia;
[email protected],
1664
A. MCLENNAN, P. K. MONTEIRO, AND R. TOURKY
FGV-EPGE, Praia de Botafogo 190, sala 1103, 22250-900, RJ, Brazil;
[email protected], and School of Economics, University of Queensland, Level 6 Colin Clark Building, St. Lucia, QLD 4072 Australia;
[email protected]. Manuscript received November, 2009; final revision received March, 2011.
Econometrica, Vol. 79, No. 5 (September, 2011), 1665
ANNOUNCEMENTS 2011 LATIN AMERICAN MEETING
THE 2011 LATIN AMERICAN MEETINGS will be held jointly with the Latin American and Caribbean Economic Association in Santiago, Chile, from November 10 to 12, 2011. The Meetings will be hosted by Universidad Adolfo Ibaniez. The Annual Meetings of these two academic associations will be run in parallel, under a single local organization. By registering for LAMES 2011, participants will be welcome to attend to all sessions of both meetings. The Program Chair is Andrea Repetto. The Local Organizers are Andrea Repetto, Matias Braun, Fernando Larrain, and Claudio Soto. The deadline for submissions is June 1, 2011 and decisions will be sent July 10, 2011. Authors can submit only one paper to each meeting, but the same paper cannot be submitted to both meetings. Authors submitting papers must be a member of the respective association at the time of submission. Membership information can be found at http://www. econometricsociety.org and http://www.lacea.org. A limited number of papers will be invited to be presented in poster sessions that will be organized by topic. Further information can be found at the conference website at http://www. lacealames2011.cl or by email at
[email protected]. 2012 NORTH AMERICAN WINTER MEETING
THE 2012 NORTH AMERICAN WINTER MEETING of the Econometric Society will be held in Chicago, IL, on January 6–8, 2012, as part of the annual meeting of the Allied Social Science Associations. A preliminary program can be viewed on the ASSA website at http://www. aeaweb.org/aea/2012conference/program/preliminary.php. JONATHAN LEVIN Program Committee Chair
© 2011 The Econometric Society
DOI: 10.3982/ECTA795ANN
Econometrica, Vol. 79, No. 5 (September, 2011), 1667
FORTHCOMING PAPERS THE FOLLOWING MANUSCRIPTS, in addition to those listed in previous issues, have been accepted for publication in forthcoming issues of Econometrica. AHN, DAVID S., AND SANTIAGO OLIVEROS: “Combinatorial Voting.” ARCIDIACONO, PETER, AND ROBERT A. MILLER: “CCP Estimation of Dynamic Discrete Choice Models With Unobserved Heterogeneity.” ATTAR, ANDREA, THOMAS MARIOTTI, AND FRANÇOIS SALANIÉ: “Non-Exclusive Competition in the Market for Lemons.” BERESTEANU, ARIE, ILYA MOLCHANOV, AND FRANCESCA MOLINARI: “Sharp Identification Regions in Models With Convex Moment Predictions.” BOLLERSLEV, TIM, AND VIKTOR TODOROV: “Estimation of Jump Tails.” BOUTON, LAURENT, AND MICAEL CASTANHEIRA: “One Person, Many Votes: “Divided Majority and Information Aggregation.” CARROLL, GABRIEL: “When Are Local Incentive Constraints Sufficient?” CHEN, XIAOHONG, AND DEMIAN POUZO: “Estimation of Nonparametric Conditional Moment Models With Possibly Nonsmooth Generalized Residuals.” DE PAULA, ÁUREO, AND XUN TANG: “Inference of Signs of Interaction Effects in Simultaneous Games With Incomplete Information.” FUDENBERG, DREW, AND DAVID K. LEVINE: “Timing and Self Control.” JU, NENGJIU, AND JIANJUN MIAO: “Ambiguity, Learning, and Asset Returns.” KITAMURA, YUICHI, ANDRES SANTOS, AND AZEEM M. SHAIKH: “On the Asymptotic Optimality of Empirical Likelihood for Testing Moment Restrictions.” KOMUNJER, IVANA, AND SERENA NG: “Dynamic Identification of DSGE Models.” LILIENFELD -TOAL, ULF, DILIP MOOKHERJEE, AND SUJATA VISARIA: “The Distributive Impact of Reforms in Credit Enforcement: “Evidence From Indian Debt Recovery Tribunals.” MIKUSHEVA, ANNA: “One-Dimensional Inference in Autoregressive Models With the Potential Presence of a Unit Root.” PENTA, ANTONIO: “Higher Order Uncertainty and Information: Static and Dynamic Games.” PETERS, MICHAEL, AND BALÁZS SZENTES: “Definable and Contractible Contracts.” PHILLIPS, PETER C. B.: “Folklore Theorems, Implicit Maps and Indirect Inference.” PYCIA, MAREK: “Stability and Preference Alignment in Matching and Coalition Formation.” SANTOS, ANDRES: “Inference in Nonparametric Instrumental Variables With Partial Identification.” VIVES, XAVIER: “Strategic Supply Function Competition With Private Information.” © 2011 The Econometric Society
DOI: 10.3982/ECTA795FORTH
Econometrica, Vol. 79, No. 5 (September, 2011), 1669–1673
THE ECONOMETRIC SOCIETY 2010 ANNUAL REPORT OF THE PRESIDENT
EVERY FIVE YEARS the Econometric Society puts almost all of its regional meetings on hold to bring people together at a single event—our World Congress. The Society is the leading learned society for economists in the world, so this event could not be more central to our activities. It was my great honor to serve as President during one of these special years. Although I was thus reprieved from travelling as much as other Presidents, I was ultimately responsible for the 2010 World Congress, which brought a degree of anxiety. My worries were much lessened once I appointed a superb trio of Program Chairs and it became clear that the local arrangements were in excellent hands. It is appropriate that much of my report focuses on the Congress. Another critical challenge that arose in my year as President was the appointment of a new slate of editors of the Society’s flagship journal, Econometrica. But let me come to that matter after reporting on the World Congress.
1. WORLD CONGRESS IN SHANGHAI Back in 2007, the Executive Committee took the bold decision to hold the Society’s Tenth World Congress, 17–21 August 2010, at the magnificent International Convention Center on the Huangpu River opposite the famous Bund in the heart of Shanghai. This decision proved to be inspired. All records were broken. By way of advance preparation, to predict the number of papers submitted and hence the likely number of attendees, a highly sophisticated forecasting technique was employed, the subtlety of which can be fully appreciated only by members of the Econometric Society—namely, a regression of a linear trend plus Europe and Asia dummies. This proved quite hopeless. Instead of the predicted number around 2300, no less than 3031 submissions were received. As a consequence, the Program Committee had to be expanded from 95 to 118, eight extra lecture rooms had to be hurriedly found in a nearby building (not easy in downtown Shanghai), and even more hotel rooms had to be booked (this at the time of the World Expo in Shanghai). The 2010 World Congress was set to be the largest event in the history of the Society. Was there a danger that our Shanghai World Congress would eclipse the Beijing Olympic Games? The parallel with the Olympics was drawn by the Mayor of Shanghai, Han Zheng, in his address that opened the Congress. The Econometric Society is most grateful to him for attending and for providing the crucial support of the city authorities. It had been expected that a number of institutions in Shanghai might collaborate to run the Congress. As it turned out, essentially all the responsibility, and the financial underwriting, fell to Shanghai Jiao Tong University (SJTU), and it is to that institution that the success of the Congress must be owed. I am most grateful to SJTU President, Zhang Jie, for his hospitality and for giving the backing of his university from the start. The heavy burden of locally organizing the Congress was most ably carried by Lin Zhou, Dean of SJTU’s Antai College of Economics and Management, his colleagues, and his two principal assistants, Emily Gu and Larry Liu. Together with their large team of cheerful helpers, they made the Congress the especially happy and smoothly © 2011 The Econometric Society
Much gratitude must also be expressed to Anne Usher, who, drawing on her experience helping to run the 2005 World Congress in London, supported Lin Zhou and made a number of extended trips to Shanghai.

The intellectual program was terrific. Daron Acemoglu, Manuel Arellano, and Eddie Dekel formed an extraordinary triumvirate of Program Chairs. One could not dream of a team to better these three, who through their selection of program committee members, topics, and invited lecturers stamped their personalities on the Congress. They managed the complex process of putting together the program superbly. I am hugely indebted to them for agreeing to take on this onerous task. On behalf of all the members of the Society, may I warmly thank them. We are also grateful to the 118 members of the program committee, who selected the 1200 accepted papers from the 3031 submissions by authors in no fewer than 54 countries. Veritably, a World Congress.

To give an idea of the scale and scope of the program, let me list, and thank, the four plenary lecturers (Elhanan Helpman, Orazio Attanasio, Whitney Newey, and Drew Fudenberg), together with the invited lecturers and co-authors: Tayfun Sonmez, Atila Abdulkadiroglu, Jonathan Levin, Monika Piazzesi, Markus Brunnermeier, Patrick Bajari, Han Hong, Victor Aguirregabiria, Aviv Nevo, Yuliy Sannikov, Bruno Biais, Thomas Mariotti, Jean-Charles Rochet, James Robinson, David Stromberg, Jesus Fernandez-Villaverde, Juan Rubio-Ramirez, Frank Schorfheide, Massimo Marinacci, Itzhak Gilboa, Wolfgang Pesendorfer, Bart Lipman, Robert Shimer, Gianluca Violante, Jonathan Heathcote, Kjetil Storesletten, Victor Chernozhukov, Susanne Schennach, Joel Sobel, Andrea Prat, Luis Garicano, John Van Reenen, Ivan Fernandez-Val, Josh Angrist, Marc Melitz, Ariel Burstein, Samuel Kortum, Jonathan Eaton, Pierpaolo Battigalli, Dov Samet, Dean Foster, Rakesh Vohra, Fabrizio Zilibotti, Gino Gancia, Andreas Muller, Chad Jones, Jushan Bai, Xiaohong Chen, and Oliver Linton. In addition, I thank their discussants and the contributors to the invited policy sessions.

An important objective of this World Congress was to reach young economists working at Chinese universities and institutions who might not be able to afford to attend. With this in mind, I asked Lin Zhou to set up a competitive scheme for young faculty, postdocs, and promising graduate students. I thank him for running the scheme so well. Over a hundred participants were sponsored, who, if they chose, wore “Future Star” T-shirts in recognition of their success. Rumor has it that the T-shirts fetched a premium on the resale market.

In the years leading up to the World Congress, successive Executive Committees had set aside an annual amount of $80,000 so as to build up a $400,000 fund for granting financial support to participants. The task of distributing this money fell to the Program Chairs, who decided to offer funds only for accepted papers graded A or B, and to give more to those from farther regions. No money was given to senior faculty in the United States, Canada, or Europe (junior faculty were defined as those with PhDs from 2004 or later). This eventually led to 36 grants of $600 being awarded to Chinese participants, 14 grants of $1300 to Latin American participants, and 349 grants of $1000 to junior participants from the United States, Canada, and Europe. (Only a few invited lecturers chose to request funds.)
Whether this level of support can be offered at future World Congresses, in 2015 and beyond, remains to be seen. Among other things, the Society will have to cope with the decline in revenue from institutional subscriptions that all journals face.
2. JOURNALS

With the term of appointment of Stephen Morris as Editor of Econometrica ending on 30 June 2011, the process of finding his successor began in 2009 and continued through to the summer of 2010. A very considerable amount of energy and thought went into this process, with views sought widely. I am extremely pleased to report that Daron Acemoglu accepted the challenge, and on 1 July 2011 began work as Editor. Daron is a truly outstanding appointment, and I am immensely grateful to him, not least because this comes so soon after his work as a Program Chair for the World Congress. The Society owes him a great deal. Equally, we are deeply indebted to Stephen for the extraordinarily thorough and wise manner in which he edited Econometrica during his four-year term. The journal is thriving thanks to his care and commitment, and he will be a tough act to follow.

There were other exciting developments at Econometrica in 2010. Philippe Jehiel commenced his term as Co-Editor. Matt Jackson agreed to become a Co-Editor, starting on 1 July 2011. And, if I may encroach on the terrain of my successor, two further Co-Editors have since been appointed: Lars Hansen and Joel Sobel. The outstanding additions of Philippe, Matt, Lars, and Joel to the editorial team, together with Daron’s arrival, send out a powerful signal that Econometrica intends to maintain and extend its reputation at the forefront of the profession.

The carefully considered, yet courageous, decision to establish two sister journals, Quantitative Economics and Theoretical Economics, has been extensively discussed in previous Presidents’ Reports. The Society pledges its full support to this new venture, for which the early indications look highly promising under the expert editorships of Orazio Attanasio and Martin Osborne, respectively. I am extremely pleased that in 2010 Gadi Barlevy and Johannes Horner joined Theoretical Economics as Co-Editors. These, too, are exciting and outstanding appointments.

It is well understood that the success of the Econometric Society is closely tied to the success of the Society’s journals, which in turn depends on the vision and hard work of all our Editors and Co-Editors. I would like profusely to thank them, the few, on behalf of the many, the members of the Society.
3. WINTER MEETINGS

Two regional winter meetings were held in 2010:
North American Winter Meeting, Atlanta, January 3–5, 2010
European Winter Meeting, Rome, November 3–5, 2010

I thank the organizers and members of the Program Committees for these meetings, and, in particular, express my gratitude to their respective Chairs, Dirk Bergemann and Georg Kirchsteiger.
4. NOMINATING COMMITTEES

As required by the Constitution, the annual elections of Officers, Council, and Fellows are preceded by the work of three independent Nominating Committees. In 2010, I appointed the following committees:

OFFICERS: Torsten Persson, Chair; Richard Blundell; Lars Hansen; Bengt Holmstrom; Eric Maskin; John Moore; Roger Myerson; Jean-Charles Rochet.
COUNCIL: Roger Myerson, Chair; Manuel Arellano; Darrell Duffie; Hidehiko Ichimura; Michael Keane; Margaret Meyer; Juan Pablo Nicolini.
FELLOWS: Michele Boldrin, Chair; Trevor Breusch; Han Hong; Cesar Martinelli; Rosa Matzkin; Margaret Meyer; Dilip Mookherjee; Ilya Segal.

I am grateful to all the committee members for their careful work.
5. ELECTION OF NEW FELLOWS

In his President’s Report last year, Roger Myerson explained how efforts to increase the number of Fellows elected from underrepresented regions had borne fruit in the 2009 election round. I am pleased to say that this happened again in 2010. Of the 16 new Fellows elected, two were from Latin America and one each was from Australasia, the Far East, and South and Southeast Asia.
6. ECONOMETRICS AND THE ECONOMETRIC SOCIETY

In June 2010, Peter Phillips and David Hendry wrote to me, as President, an open letter expressing concerns about how econometrics and econometricians are treated within the Econometric Society. These concerns ranged widely: the election of Officers and Fellows; the publication of econometric papers in the Society’s journals; the representation of econometrics at meetings, and of econometricians’ interests generally. I replied that I took their concerns extremely seriously and that I thought the best way forward was to establish a working group, with my successor Bengt Holmstrom in the Chair, with a remit to consider econometrics and the Econometric Society broadly. Donald Andrews, Richard Blundell, Steven Durlauf, Robert Engle, Charles Manski, and Christopher Sims all kindly agreed to serve on this group. I also expressed to Peter Phillips and David Hendry the strong desire—shared by the Executive Committee and by the wider community of Fellows with a special interest in econometrics whom I consulted—to establish the warmest relations between the Econometric Society and their proposed new Econometric Foundation, with full cooperation and mutual understanding.
7. CLOSING REMARKS

It is a daunting privilege to serve as President. I would like to express my personal appreciation to the past and future Presidents, Torsten Persson, Roger Myerson, Bengt Holmstrom, and Jean-Charles Rochet; to the other members of the Executive Committee; to Claire Sashi, the Society’s General Manager; to Helmut Bester and Enrique Sentana in the European Region; and to my assistant at the University of Edinburgh, Christina Napier.
Most of all, I thank Executive Vice-President Rafael Repullo for his wonderful friendship and support, and for directing the affairs of the Society with such consummate skill and dedication.

John Moore
PRESIDENT IN 2010