ECONOMETRICA — VOL. 79, NO. 2 — March, 2011

CONTENTS

JOEL L. HOROWITZ: Applied Nonparametric Instrumental Variables Estimation ......... 347
ULRICH K. MÜLLER: Efficient Tests Under a Weak Convergence Assumption ......... 395
BRYAN S. GRAHAM: Efficiency Bounds for Missing Data Models With Semiparametric Restrictions ......... 437
PETER R. HANSEN, ASGER LUNDE, AND JAMES M. NASON: The Model Confidence Set ......... 453
PHILIP J. RENY: On the Existence of Monotone Pure-Strategy Equilibria in Bayesian Games ......... 499
DARON ACEMOGLU AND ALEXANDER WOLITZKY: The Economics of Labor Coercion ......... 555
JAWWAD NOOR: Temptation and Revealed Preference ......... 601

ANNOUNCEMENTS ......... 645
FORTHCOMING PAPERS ......... 649
An International Society for the Advancement of Economic Theory in its Relation to Statistics and Mathematics
Founded December 29, 1930
Website: www.econometricsociety.org

EDITOR
STEPHEN MORRIS, Dept. of Economics, Princeton University, Fisher Hall, Prospect Avenue, Princeton, NJ 08544-1021, U.S.A.; [email protected]

MANAGING EDITOR
GERI MATTSON, 2002 Holly Neck Road, Baltimore, MD 21221, U.S.A.; [email protected]

CO-EDITORS
DARON ACEMOGLU, Dept. of Economics, MIT, E52-380B, 50 Memorial Drive, Cambridge, MA 02142-1347, U.S.A.; [email protected]
PHILIPPE JEHIEL, Dept. of Economics, Paris School of Economics, 48 Bd Jourdan, 75014 Paris, France; University College London, U.K.; [email protected]
WOLFGANG PESENDORFER, Dept. of Economics, Princeton University, Fisher Hall, Prospect Avenue, Princeton, NJ 08544-1021, U.S.A.; [email protected]
JEAN-MARC ROBIN, Dept. of Economics, Sciences Po, 28 rue des Saints Pères, 75007 Paris, France and University College London, U.K.; [email protected]
JAMES H. STOCK, Dept. of Economics, Harvard University, Littauer M-26, 1830 Cambridge Street, Cambridge, MA 02138, U.S.A.; [email protected]
ASSOCIATE EDITORS
YACINE AÏT-SAHALIA, Princeton University; JOSEPH G. ALTONJI, Yale University; JAMES ANDREONI, University of California, San Diego; JUSHAN BAI, Columbia University; MARCO BATTAGLINI, Princeton University; PIERPAOLO BATTIGALLI, Università Bocconi; DIRK BERGEMANN, Yale University; YEON-KOO CHE, Columbia University; XIAOHONG CHEN, Yale University; VICTOR CHERNOZHUKOV, Massachusetts Institute of Technology; J. DARRELL DUFFIE, Stanford University; JEFFREY ELY, Northwestern University; HALUK ERGIN, Duke University; JIANQING FAN, Princeton University; MIKHAIL GOLOSOV, Yale University; FARUK GUL, Princeton University; JINYONG HAHN, University of California, Los Angeles; PHILIP A. HAILE, Yale University; JOHANNES HORNER, Yale University; MICHAEL JANSSON, University of California, Berkeley; PER KRUSELL, Stockholm University; FELIX KUBLER, University of Zurich; OLIVER LINTON, London School of Economics; BART LIPMAN, Boston University; THIERRY MAGNAC, Toulouse School of Economics (GREMAQ and IDEI); DAVID MARTIMORT, IDEI-GREMAQ, Université des Sciences Sociales de Toulouse, Paris School of Economics; STEVEN A. MATTHEWS, University of Pennsylvania; ROSA L. MATZKIN, University of California, Los Angeles; SUJOY MUKERJI, University of Oxford; LEE OHANIAN, University of California, Los Angeles; WOJCIECH OLSZEWSKI, Northwestern University; NICOLA PERSICO, New York University; JORIS PINKSE, Pennsylvania State University; BENJAMIN POLAK, Yale University; PHILIP J. RENY, University of Chicago; SUSANNE M. SCHENNACH, University of Chicago; ANDREW SCHOTTER, New York University; NEIL SHEPHARD, University of Oxford; MARCIANO SINISCALCHI, Northwestern University; JEROEN M. SWINKELS, Northwestern University; ELIE TAMER, Northwestern University; EDWARD J. VYTLACIL, Yale University; IVÁN WERNING, Massachusetts Institute of Technology; ASHER WOLINSKY, Northwestern University
EDITORIAL ASSISTANT: MARY BETH BELLANDO, Dept. of Economics, Princeton University, Fisher Hall, Princeton, NJ 08544-1021, U.S.A.; [email protected]

Information on MANUSCRIPT SUBMISSION is provided in the last two pages. Information on MEMBERSHIP, SUBSCRIPTIONS, AND CLAIMS is provided in the inside back cover.
SUBMISSION OF MANUSCRIPTS TO ECONOMETRICA

1. Members of the Econometric Society may submit papers to Econometrica electronically in pdf format according to the guidelines at the Society's website: http://www.econometricsociety.org/submissions.asp. Only electronic submissions will be accepted. In exceptional cases for those who are unable to submit electronic files in pdf format, one copy of a paper prepared according to the guidelines at the website above can be submitted, with a cover letter, by mail addressed to Professor Stephen Morris, Dept. of Economics, Princeton University, Fisher Hall, Prospect Avenue, Princeton, NJ 08544-1021, USA.
2. There is no charge for submission to Econometrica, but only members of the Econometric Society may submit papers for consideration. In the case of coauthored manuscripts, at least one author must be a member of the Econometric Society for the current calendar year. Note that Econometrica rejects a substantial number of submissions without consulting outside referees.
3. It is a condition of publication in Econometrica that copyright of any published article be transferred to the Econometric Society. Submission of a paper will be taken to imply that the author agrees that copyright of the material will be transferred to the Econometric Society if and when the article is accepted for publication, and that the contents of the paper represent original and unpublished work that has not been submitted for publication elsewhere. If the author has submitted related work elsewhere, or if he does so during the term in which Econometrica is considering the manuscript, then it is the author's responsibility to provide Econometrica with details. There is no page fee and no payment is made to the authors.
4. Econometrica has the policy that all results (empirical, experimental and computational) must be replicable.
5. Current information on turnaround times is published in the Editor's Annual Report in the January issue of the journal. It is reproduced on the journal's website at http://www.econometricsociety.org/editorsreports.asp.
6. Papers should be accompanied by an abstract of no more than 150 words that is full enough to convey the main results of the paper.
7. Additional information on submitting papers is available on the journal's website at http://www.econometricsociety.org/submissions.asp.

Typeset at VTEX, Akademijos Str. 4, 08412 Vilnius, Lithuania. Printed at The Sheridan Press, 450 Fame Avenue, Hanover, PA 17331, USA.

Copyright © 2011 by The Econometric Society (ISSN 0012-9682). Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation, including the name of the author. Copyrights for components of this work owned by others than the Econometric Society must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works, requires prior specific permission and/or a fee. Posting of an article on the author's own website is allowed subject to the inclusion of a copyright statement; the text of this statement can be downloaded from the copyright page on the website www.econometricsociety.org/permis.asp.
Any other permission requests or questions should be addressed to Claire Sashi, General Manager, The Econometric Society, Dept. of Economics, New York University, 19 West 4th Street, New York, NY 10012, USA. E-mail: [email protected].

Econometrica (ISSN 0012-9682) is published bi-monthly by the Econometric Society, Department of Economics, New York University, 19 West 4th Street, New York, NY 10012. Mailing agent: Sheridan Press, 450 Fame Avenue, Hanover, PA 17331. Periodicals postage paid at New York, NY and additional mailing offices. U.S. POSTMASTER: Send all address changes to Econometrica, Journals Department, John Wiley & Sons Inc., 350 Main Street, Malden, MA 02148, USA.
Membership

Joining the Econometric Society, and paying by credit card the corresponding membership rate, can be done online at www.econometricsociety.org. Memberships are accepted on a calendar year basis, but the Society welcomes new members at any time of the year, and in the case of print subscriptions will promptly send all issues published earlier in the same calendar year.

Membership Benefits
• Possibility to submit papers to Econometrica, Quantitative Economics, and Theoretical Economics
• Possibility to submit papers to Econometric Society Regional Meetings and World Congresses
• Full text online access to all published issues of Econometrica (Quantitative Economics and Theoretical Economics are open access)
• Full text online access to papers forthcoming in Econometrica (Quantitative Economics and Theoretical Economics are open access)
• Free online access to Econometric Society Monographs, including the volumes of World Congress invited lectures
• Possibility to apply for travel grants for Econometric Society World Congresses
• 40% discount on all Econometric Society Monographs
• 20% discount on all John Wiley & Sons publications
• For print subscribers, hard copies of Econometrica, Quantitative Economics, and Theoretical Economics for the corresponding calendar year

Membership Rates
Membership rates depend on the type of member (ordinary or student), the class of subscription (print and online or online only) and the country of residence (high income or middle and low income). The rates for 2011 are the following:
                                            High Income          Other Countries
Ordinary Members
  1 year (2011), Print and Online           $100 / €80 / £65     $60 / €48
  1 year (2011), Online only                $55 / €45 / £35      $15 / €12
  3 years (2011–2013), Print and Online     $240 / €192 / £156   $144 / €115
  3 years (2011–2013), Online only          $132 / €108 / £84    $36 / €30

Student Members
  1 year (2011), Print and Online           $60 / €48 / £40      $60 / €48
  1 year (2011), Online only                $15 / €12 / £10      $15 / €12
Euro rates are for members in Euro area countries only. Sterling rates are for members in the UK only. All other members pay the US dollar rate. Countries classified as high income by the World Bank are: Andorra, Aruba, Australia, Austria, The Bahamas, Bahrain, Barbados, Belgium, Bermuda, Brunei Darussalam, Canada, Cayman Islands, Channel Islands, Croatia, Cyprus, Czech Republic, Denmark, Equatorial Guinea, Estonia, Faeroe Islands, Finland, France, French Polynesia, Germany, Gibraltar, Greece, Greenland, Guam, Hong Kong (China), Hungary, Iceland, Ireland, Isle of Man, Israel, Italy, Japan, Rep. of Korea, Kuwait, Latvia, Liechtenstein, Luxembourg, Macao (China), Malta, Monaco, Netherlands, Netherlands Antilles, New Caledonia, New Zealand, Northern Mariana Islands, Norway, Oman, Poland, Portugal, Puerto Rico, Qatar, San Marino, Saudi Arabia, Singapore, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Taiwan (China), Trinidad and Tobago, Turks and Caicos Islands, United Arab Emirates, United Kingdom, United States, Virgin Islands (US).

Institutional Subscriptions
Information on Econometrica subscription rates for libraries and other institutions is available at www.econometricsociety.org. Subscription rates depend on the class of subscription (print and online or online only) and the country classification (high income, middle income, or low income).

Back Issues and Claims
For back issues and claims contact Wiley Blackwell at [email protected].
Administrative Office: Department of Economics, New York University, 19 West 4th Street, New York, NY 10012, USA; Tel. 212-998-3820; Fax 212-995-4487
General Manager: Claire Sashi ([email protected])

2011 OFFICERS
BENGT HOLMSTRÖM, Massachusetts Institute of Technology, PRESIDENT
JEAN-CHARLES ROCHET, University of Zurich and Toulouse School of Economics, FIRST VICE-PRESIDENT
JAMES HECKMAN, University of Chicago, SECOND VICE-PRESIDENT
JOHN MOORE, University of Edinburgh and London School of Economics, PAST PRESIDENT
RAFAEL REPULLO, CEMFI, EXECUTIVE VICE-PRESIDENT
2011 COUNCIL
DARON ACEMOGLU, Massachusetts Institute of Technology; (*)MANUEL ARELLANO, CEMFI; ORAZIO ATTANASIO, University College London; MARTIN BROWNING, University of Oxford; DAVID CARD, University of California, Berkeley; JACQUES CRÉMER, Toulouse School of Economics; MATHIAS DEWATRIPONT, Free University of Brussels; DARRELL DUFFIE, Stanford University; GLENN ELLISON, Massachusetts Institute of Technology; HIDEHIKO ICHIMURA, University of Tokyo; (*)MATTHEW O. JACKSON, Stanford University; MICHAEL P. KEANE, University of Technology Sydney; LAWRENCE J. LAU, Chinese University of Hong Kong; CHARLES MANSKI, Northwestern University; CESAR MARTINELLI, ITAM; ANDREU MAS-COLELL, Universitat Pompeu Fabra and Barcelona GSE; AKIHIKO MATSUI, University of Tokyo; HITOSHI MATSUSHIMA, University of Tokyo; ROSA MATZKIN, University of California, Los Angeles; ANDREW MCLENNAN, University of Queensland; COSTAS MEGHIR, University College London and Yale University; MARGARET MEYER, University of Oxford; STEPHEN MORRIS, Princeton University; JUAN PABLO NICOLINI, Universidad Torcuato di Tella; (*)ROBERT PORTER, Northwestern University; JEAN-MARC ROBIN, Sciences Po and University College London; LARRY SAMUELSON, Yale University; ARUNAVA SEN, Indian Statistical Institute; JÖRGEN W. WEIBULL, Stockholm School of Economics
The Executive Committee consists of the Officers, the Editors of Econometrica (Stephen Morris), Quantitative Economics (Orazio Attanasio), and Theoretical Economics (Martin J. Osborne), and the starred (*) members of the Council.
REGIONAL STANDING COMMITTEES Australasia: Andrew McLennan, University of Queensland, CHAIR; Maxwell L. King, Monash University, SECRETARY. Europe and Other Areas: Jean-Charles Rochet, University of Zurich and Toulouse School of Economics, CHAIR; Helmut Bester, Free University Berlin, SECRETARY; Enrique Sentana, CEMFI, TREASURER. Far East: Hidehiko Ichimura, University of Tokyo, CHAIR. Latin America: Juan Pablo Nicolini, Universidad Torcuato di Tella, CHAIR; Juan Dubra, University of Montevideo, SECRETARY. North America: Bengt Holmström, Massachusetts Institute of Technology, CHAIR; Claire Sashi, New York University, SECRETARY. South and Southeast Asia: Arunava Sen, Indian Statistical Institute, CHAIR.
Econometrica, Vol. 79, No. 2 (March, 2011), 347–394
APPLIED NONPARAMETRIC INSTRUMENTAL VARIABLES ESTIMATION¹

BY JOEL L. HOROWITZ

Instrumental variables are widely used in applied econometrics to achieve identification and carry out estimation and inference in models that contain endogenous explanatory variables. In most applications, the function of interest (e.g., an Engel curve or demand function) is assumed to be known up to finitely many parameters (e.g., a linear model), and instrumental variables are used to identify and estimate these parameters. However, linear and other finite-dimensional parametric models make strong assumptions about the population being modeled that are rarely if ever justified by economic theory or other a priori reasoning and can lead to seriously erroneous conclusions if they are incorrect. This paper explores what can be learned when the function of interest is identified through an instrumental variable but is not assumed to be known up to finitely many parameters. The paper explains the differences between parametric and nonparametric estimators that are important for applied research, describes an easily implemented nonparametric instrumental variables estimator, and presents empirical examples in which nonparametric methods lead to substantive conclusions that are quite different from those obtained using standard, parametric estimators.

KEYWORDS: Nonparametric estimation, instrumental variable, ill-posed inverse problem, endogenous variable, eigenvalues, linear operator.
1. INTRODUCTION

INSTRUMENTAL VARIABLES are widely used in applied econometrics to achieve identification and carry out estimation and inference in models that contain endogenous explanatory variables. In most applications, the function of interest (e.g., an Engel curve or demand function) is assumed to be known up to finitely many parameters (e.g., a linear model), and instrumental variables are used to identify and estimate these parameters. However, linear and other finite-dimensional parametric models make strong assumptions about the population being modeled that are rarely if ever justified by economic theory or other a priori reasoning and can lead to seriously erroneous conclusions if they are incorrect. This paper explores what can be learned when the function of interest is identified through an instrumental variable but is not assumed to be known up to finitely many parameters. Specifically, this paper is about estimating the unknown function g in the model
(1.1)  $Y = g(X) + U, \qquad E(U \mid W = w) = 0$
1 This article is based on the Fisher–Schultz Lecture that I presented at the 2008 Econometric Society European Meeting. I thank Richard Blundell for providing data from the Family Expenditure Survey, Xiaohong Chen and Charles F. Manski for comments and suggestions, and Brendan Kline for research assistance. This research was supported in part by NSF Grant SES-0817552.
© 2011 The Econometric Society
DOI: 10.3982/ECTA8662
for all w or, equivalently,

(1.2)  $E[Y - g(X) \mid W = w] = 0.$
In this model, g is a function that satisfies regularity conditions but is otherwise unknown, Y is a scalar dependent variable, X is an explanatory variable or vector that may be correlated with U (that is, X may be endogenous), W is an instrument for X, and U is an unobserved random variable. For example, if Y is a household's expenditure share on a good or service and X is the household's total expenditure, then g is an Engel curve. If income from wages and salaries is not influenced by household budgeting decisions, then the household head's total earnings from wages and salaries can be used as an instrument, W, for X (Blundell, Chen, and Kristensen (2007), Blundell and Horowitz (2007)). The data used to estimate g are an independent random sample of (Y, X, W). If some explanatory variables are exogenous, it is convenient to use notation that distinguishes between endogenous and exogenous explanatory variables. We write the model as
(1.3)  $Y = g(X, Z) + U, \qquad E(U \mid W = w, Z = z) = 0$
or

(1.4)  $E[Y - g(X, Z) \mid W = w, Z = z] = 0$
for all w and z. In this model, X denotes the explanatory variables that may be endogenous, Z denotes the exogenous explanatory variables, and W is an instrument for X. The data are an independent random sample of (Y, X, W, Z).

Methods for estimating g in (1.1)–(1.2) and, to a lesser extent, (1.3)–(1.4) have become available recently but have not yet been used much in applied research. This paper explores the usefulness of nonparametric instrumental variables (IV) estimators for applied econometric research. Among other things, the paper pursues the following points:
(i) It explains that nonparametric and parametric estimators differ in ways that are important for applied research. Nonparametric estimation is not just a flexible form of parametric estimation.
(ii) It presents an estimator of g in (1.1)–(1.2) that is as easy to compute as an IV estimator of a linear model. Thus, computational complexity is not a barrier to the use of nonparametric IV estimators in applications.
(iii) It presents empirical examples in which nonparametric methods lead to substantive conclusions that are quite different from those obtained using standard, parametric estimators.

Some characteristics of nonparametric IV methods may be unattractive to applied researchers. One of these is that nonparametric IV estimators can be very imprecise. This is not a defect of the estimators. Rather, it reflects the fact that the data often contain little information about g when it is identified
through instrumental variables. When this happens, applied researchers may prefer to add "information" in the form of a priori assumptions about the functional form of g so as to increase the precision of the estimates. For example, g may be assumed to be a linear or quadratic function. However, the improvement in apparent precision obtained from a parametric model carries the risk of misleading inference if the model is misspecified. There is no assurance that a parametric model that is chosen for analytic or computational convenience or because of frequent use in the literature contains the true g or even a good approximation to it. Moreover, neither economic theory nor econometric procedures can lead one reliably to a correct parametric specification. Depending on the substantive meaning of g (e.g., a demand function), economic theory may provide information about its shape (e.g., convex, concave, monotonic) or smoothness, but theory rarely if ever provides a parametric model. The risk of specification error cannot be eliminated through specification testing. Failure to reject a parametric model in a specification test does not necessarily imply that the model is correctly specified. In fact, a specification test may accept several parametric models that yield different substantive conclusions.

Nonparametric estimation reveals the information that is available from the data as opposed to functional form assumptions. It enables one to assess the importance of functional form assumptions in drawing substantive conclusions from a parametric model. Even if an applied researcher ultimately decides to use a parametric model, he or she should be aware of the conclusions that are justified under the weak assumptions of nonparametric estimation and of how these conclusions may differ from those obtained from the parametric model.

Another possible obstacle to the use of nonparametric IV in applications is that certain methodological problems are not yet solved. Some of these problems are outlined later in this paper. It is likely that the problems will be solved in the near future and will not present serious long-run obstacles to applied nonparametric IV estimation.

1.1. Summary of Recent Literature

Nonparametric estimation of g in (1.1)–(1.2) when X and W are continuously distributed has been the object of much recent research. Several estimators are now available, and much is known about the properties of some of them. The available estimators include kernel-based estimators (Darolles, Florens, and Renault (2006), Hall and Horowitz (2005)) and series or sieve estimators (Newey and Powell (2003), Blundell, Chen, and Kristensen (2007)). The estimator of Hall and Horowitz (2005) also applies to model (1.3)–(1.4). The estimators of Hall and Horowitz (2005) and Blundell, Chen, and Kristensen (2007) converge in probability at the fastest possible rates under their assumptions (Hall and Horowitz (2005), Chen and Reiss (2011)), so these estimators are the best possible in that sense. Horowitz (2007) gave conditions under which the Hall–Horowitz (2005) estimator is asymptotically normally distributed. Horowitz and Lee (2010) showed how to obtain uniform confidence
bands for g in (1.1)–(1.2). Horowitz (2006) showed how to test a parametric specification for g (e.g., the hypothesis that g is a linear function) against a nonparametric alternative, and Blundell and Horowitz (2007) showed how to test the hypothesis that X is exogenous. Horowitz (2011b) showed how to test the hypothesis that a function g satisfying (1.1)–(1.2) exists.

There are also estimators for a quantile version of (1.1)–(1.2) with continuously distributed X and W (Chen and Pouzo (2008), Chernozhukov, Imbens, and Newey (2007), Horowitz and Lee (2007)). In the quantile model, the conditional moment restriction $E(U \mid W = w) = 0$ is replaced by a conditional quantile restriction. The resulting model is

(1.5)  $Y = g(X) + U, \qquad P(U \le 0 \mid W = w) = q$
for some q such that 0 < q < 1. Horowitz and Lee (2007) showed that this model subsumes the nonseparable model

(1.6)  $Y = g(X, U),$
where U is independent of W and g is strictly increasing in its second argument. Chernozhukov and Hansen (2005) and Chernozhukov, Imbens, and Newey (2007) gave conditions under which g is identified in (1.5) or (1.6).

When X and W are discretely distributed, as happens in many applications, g is not identified except in special cases. However, informative bounds on g may be identified, even if g is not identified. Manski and Pepper (2000) and Chesher (2004, 2005) gave conditions under which informative identified bounds are available.

1.2. The Control Function Model

The control function model is an alternative formulation of the nonparametric IV estimation problem that is nonnested with the formulation of equations (1.2) and (1.3). In the control function model,
(1.7)  $Y = g(X) + U$

and

(1.8)  $X = h(W) + V,$

where g and h are unknown functions,

(1.9)  $E(V \mid W = w) = 0$

for all w, and

(1.10)  $E(U \mid X = x, V = v) = E(U \mid V = v)$
for all x and v. Assuming that the mean of X conditional on W exists, (1.8) and (1.9) can always be made to hold by setting $h(w) = E(X \mid W = w)$. Identification in the control function approach comes from (1.10). It follows from (1.7) and (1.10) that

(1.11)  $E(Y \mid X = x, V = v) = g(x) + k(v),$
where g and k are unknown functions. If V were observable, g could be estimated by using any of a variety of estimators for nonparametric additive models. See, for example, Horowitz (2009, Chap. 3). Although V is not observable, it can be estimated consistently by the residuals from nonparametric estimation of h in (1.8). The estimated V can be used in place of the true one for the purposes of estimating g from (1.11). Newey, Powell, and Vella (1999) presented an estimator and gave conditions under which it is consistent and achieves the optimal nonparametric rate of convergence. Further discussion of the control function approach is available in Pinkse (2000) and Blundell and Powell (2003).

Models (1.1)–(1.2) and (1.7)–(1.10) are nonnested. It is possible for (1.2) to be satisfied but not (1.10) and for (1.10) to be satisfied but not (1.2). Therefore, neither model is more general than the other. Blundell and Powell (2003) and Heckman and Vytlacil (2007) discussed the relative merits of the two models in various settings. At present, there is no statistical procedure for distinguishing empirically between the two models. This paper is concerned mainly with estimation of g in models (1.1)–(1.2) and (1.3)–(1.4). A version of the control function approach will be discussed in Section 6.1 in connection with models in which X and W are discrete. In other respects, the control function approach will not be discussed further.

The remainder of the paper is organized as follows. Section 2 deals with the question of whether there is any important difference between a nonparametric estimator of g and a sufficiently flexible parametric one. Section 3 summarizes the theory of nonparametric estimation of g when X and W are continuous random variables. Section 4 presents a nonparametric estimator that is easy to compute. Section 5 presents empirical examples that illustrate the methods and conclusions of Sections 2–4. Section 6 discusses identification and, when possible, estimation of g when X and W are discrete random variables. Section 7 presents concluding comments. The exposition in this paper is informal. The emphasis is on conveying ideas and important results, not on technical details. Proofs and other details of mathematical rigor are available in the cited reference material. Data and programs are provided in the Supplemental Material (Horowitz (2011a)).

2. THE DIFFERENCE BETWEEN PARAMETRIC AND NONPARAMETRIC METHODS

If g in (1.1) were known up to a finite-dimensional parameter θ (that is, g(x) = G(x, θ) for all x, some known function G, and some finite-dimensional θ), then $n^{-1/2}$-consistent, asymptotically normal estimators of θ and g could be obtained by using the generalized method of moments (GMM)
(Hansen (1982)). When g is unknown, one can consider approximating it by a finite-dimensional parametric model, G(x, θ), for some suitable G. It is easy to find functions G that yield good approximations. Engel curves, demand functions, and many other functions that are important in economics are likely to be smooth. They are not likely to be wiggly or discontinuous. A smooth function on a compact interval can be approximated arbitrarily well by a polynomial of sufficiently high degree. Thus, for example, if X is a scalar random variable with compact support, we can write
(2.1)  $g(x) \approx \theta_0 + \theta_1 x + \cdots + \theta_K x^K \equiv G_1(x, \theta),$
where K > 0 is an integer, $\theta_0, \ldots, \theta_K$ are constants, and $\theta = (\theta_0, \ldots, \theta_K)'$. The approximation error can be made arbitrarily small by making K sufficiently large. Alternatively, one can use a set of basis functions $\{\psi_j : j = 1, 2, \ldots\}$, such as trigonometric functions, orthogonal polynomials, or splines, in place of powers of x. In this case,
(2.2)  $g(x) \approx \theta_1 \psi_1(x) + \cdots + \theta_K \psi_K(x) = G_2(x, \theta).$
Again, the approximation error can be made arbitrarily small by making K sufficiently large. The parameter vector θ in either (2.1) or (2.2) can be estimated by GMM based on the approximate moment condition $E[Y - G(X, \theta) \mid W = w] = 0$. The parameter estimates are $n^{-1/2}$-consistent and asymptotically normal. As will be discussed further in Section 3, nonparametric series estimators of g are based on estimating θ in $G_2$ for some set of basis functions $\{\psi_j\}$. Therefore, it is possible for parametric and nonparametric estimates to be identical. This makes it reasonable to ask whether there is any practical difference between a nonparametric estimator and a sufficiently flexible parametric one.

The answer is that parametric and nonparametric estimators lead to different inference (confidence intervals and hypothesis tests). This is because inference based on a parametric model treats the model as if it were exact, whereas nonparametric estimation treats it as an approximation that depends on the size of the sample. Specifically, in nonparametric estimation, the "size" of the model (e.g., K in (2.2)) is larger with large samples than with small ones. Consequently, the approximation error is smaller with large samples than with small ones. In contrast, the size (or dimension) of a parametric model and a parametric model's approximation error are fixed and independent of the sample. Although it is possible to find a parametric model that coincides with a nonparametric model, a given parametric model coincides with a nonparametric model only for a narrow range of sample sizes. This makes inference based on parametric and nonparametric models different because the two models are different except when the sample size is in a small range that depends on the details of the estimation problem. As an analogy, it may be useful
to consider the difference between estimates based on random and arbitrary samples. One can always find an arbitrary sample that coincides with a random sample, but a given arbitrary sample is unlikely to coincide with a random one. Therefore, estimates obtained from a given arbitrary sample and a random sample are different except in the unlikely event that the two coincide.

Because parametric estimation assumes a fixed model that does not depend on the sample size, a parametric estimate tends to give a misleading indication of estimation precision unless the parametric model is really correct. Parametric methods typically indicate that the estimates are more precise than they really are. Often the assumptions of a highly restrictive parametric model are much more "informative" than the data are. Consequently, conclusions that are supported by the parametric model may not be supported by nonparametric methods. This is illustrated by empirical examples that are presented in Sections 5 and 6.
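The distinction between a fixed parametric model and a model whose size grows with the sample is easy to see numerically. The following sketch is illustrative only (it is my code, not from the paper's Supplemental Material; the true function, noise level, and growth rule for K are assumptions, and it abstracts from endogeneity). A fixed quadratic model keeps a bias that does not shrink as n grows, while a polynomial of slowly growing degree, as in (2.1), approaches the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
gtrue = lambda x: np.sin(np.pi * x)           # "true" g, unknown to the analyst
x0 = 0.9                                      # evaluation point

for n in (200, 2000, 20000):
    x = rng.uniform(size=n)
    y = gtrue(x) + 0.2 * rng.normal(size=n)
    # Fixed parametric model: a quadratic whose dimension never grows with n.
    Bq = np.vander(x, 3, increasing=True)
    bq, *_ = np.linalg.lstsq(Bq, y, rcond=None)
    # Series model: polynomial degree K grows slowly with n, as in (2.1).
    K = int(np.ceil(n ** 0.2)) + 1
    Bs = np.vander(x, K + 1, increasing=True)
    bs, *_ = np.linalg.lstsq(Bs, y, rcond=None)
    fq = np.vander(np.array([x0]), 3, increasing=True) @ bq
    fs = np.vander(np.array([x0]), K + 1, increasing=True) @ bs
    print(f"n={n:6d}  quadratic: {fq[0]:+.3f}  series (K={K}): {fs[0]:+.3f}  truth: {gtrue(x0):+.3f}")
```

The quadratic's error at x0 stabilizes at its approximation bias, so confidence intervals built around it understate the true uncertainty; the growing series does not have this problem.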
3. NONPARAMETRIC IV ESTIMATION WHEN X AND W ARE CONTINUOUSLY DISTRIBUTED

This section summarizes the theory of nonparametric IV estimation and explains why nonparametric IV estimation presents problems that are not present in parametric IV estimation. The discussion is concerned with estimating g in model (1.1)–(1.2) when X and W are continuously distributed scalars. Allowing X and W to be vectors complicates the notation but does not change the essential ideas or results, though it may reduce estimation precision owing to curse-of-dimensionality effects. It is assumed that the support of (X, W) is contained in [0, 1]². This assumption can always be satisfied by, if necessary, carrying out monotone increasing transformations of X and W. For example, one can replace X and W by Φ(X) and Φ(W), where Φ is the normal distribution function.

3.1. Identification

We begin by deriving a mapping from the population distribution of (Y, X, W) to g. This mapping identifies g and provides the starting point for estimation of g. Let $f_{X|W}$ denote the probability density function of X conditional on W. Let $f_{XW}$ and $f_W$, respectively, denote the probability density functions of (X, W) and W. Note that $f_{X|W} = f_{XW}/f_W$. Model (1.1)–(1.2) can be written
(3.1)  $E(Y \mid W = w) = E[g(X) \mid W = w] = \int_0^1 g(x) f_{X|W}(x, w)\,dx = \int_0^1 g(x)\,\frac{f_{XW}(x, w)}{f_W(w)}\,dx.$
Therefore,

(3.2)  $E(Y \mid W = w) f_W(w) = \int_0^1 g(x) f_{XW}(x, w)\,dx$
and

(3.3)  $E(Y \mid W = w) f_{XW}(z, w) f_W(w) = \int_0^1 g(x) f_{XW}(x, w) f_{XW}(z, w)\,dx$
for any $z \in [0, 1]$. Define

$t(x, z) = \int_0^1 f_{XW}(x, w) f_{XW}(z, w)\,dw.$
Then integrating with respect to w on both sides of (3.3) yields

(3.4)  $E[Y f_{XW}(z, W)] = \int_0^1 g(x) t(x, z)\,dx$
for any $z \in [0, 1]$, where the expectation on the left-hand side is over the distribution of (Y, W). Equation (3.4) shows that g is the solution to an integral equation. The integral equation is called a Fredholm equation of the first kind in honor of the Swedish mathematician Erik Ivar Fredholm. Now define the operator (that is, mapping from one set of functions to another) T by

$(Th)(z) = \int_0^1 h(x) t(x, z)\,dx.$
Define $r(z) = E[Y f_{XW}(z, W)]$. Then (3.4) is equivalent to the operator equation

(3.5)  $r(z) = (Tg)(z).$
It may be useful to think of T as the infinite-dimensional generalization of a matrix and of (3.5) as the infinite-dimensional generalization of a system of simultaneous equations. Assume that T is nonsingular or one-to-one.² That is, if Th = 0, then h = 0 almost everywhere. Then T has an inverse and the solution to (3.5) is

(3.6)  $g(x) = (T^{-1} r)(x).$
²Blundell, Chen, and Kristensen (2007) gave examples of distributions that satisfy the nonsingularity condition. There has been little research on what can be learned about g when X and W are continuously distributed and T is singular. Section 6 reviews research on what can be learned about g when X and W are discrete and the discrete analog of T is singular.
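It can help to see (3.5)–(3.6) in finite dimensions: discretized on a grid, T is a matrix and (3.5) is a linear system. The sketch below is illustrative only (the joint density and grid size are my assumptions, not from the paper); it shows that the matrix analog of T is numerically near-singular, which foreshadows the ill-posed inverse problem discussed in Section 3.3.

```python
import numpy as np

m_grid = 200
x = (np.arange(m_grid) + 0.5) / m_grid        # midpoint grid on [0, 1]
dx = 1.0 / m_grid

# An illustrative joint density f_XW on [0,1]^2 (my choice, not from the paper).
fXW = 1.0 + 0.5 * np.cos(np.pi * np.subtract.outer(x, x))

# Discretized kernel t(x, z) of the operator T defined above (3.4)-(3.5).
T = fXW @ fXW.T * dx

print("condition number of the discretized T:", np.linalg.cond(T))
# The enormous condition number means that inverting T as in (3.6) amplifies
# tiny perturbations of r into huge changes in g.
```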
Equation (3.6) is the desired mapping from the population distribution of (Y, X, W) to g. Equation (3.6) identifies g and can be used to form estimators of g.

3.2. Background From Functional Analysis

The properties of estimators of g depend on those of T.³ Stating the relevant properties of T requires the use of concepts and results from functional analysis. These are infinite-dimensional analogs of similar concepts and results for finite-dimensional vectors and matrices, and will be stated briefly here. Mathematical details can be found in textbooks on functional analysis, such as Conway (1990) and Liusternik and Sobolev (1961). Define the function space $L_2[0,1]$ as the set of functions that are square integrable on [0, 1]. That is,

$L_2[0,1] = \left\{ h : \int_0^1 h(x)^2\,dx < \infty \right\}.$
Define the norm $\|h\|$ of any function $h \in L_2[0,1]$ by

$\|h\| = \left[\int_0^1 h(x)^2\,dx\right]^{1/2}.$
For any functions $h_1, h_2 \in L_2[0,1]$, define the inner product

$\langle h_1, h_2 \rangle = \int_0^1 h_1(x) h_2(x)\,dx.$
Let $\{\lambda_j, \phi_j : j = 1, 2, \ldots\}$ denote the eigenvalues and eigenvectors of T. These are the solutions to the equation

$T\phi_j = \lambda_j \phi_j, \qquad j = 1, 2, \ldots,$
and are analogous to the eigenvalues and eigenvectors of a real, symmetric matrix. T is always positive semidefinite or definite and is assumed to be nonsingular, so $\lambda_j > 0$ for all $j = 1, 2, \ldots$. Sort the eigenvalues and eigenvectors so that $\lambda_1 \ge \lambda_2 \ge \cdots > 0$. Assume that

$\int_0^1 \int_0^1 f_{XW}(x, w)^2\,dx\,dw < \infty.$
³The investigation of properties of estimators of g can also be based on (3.1) or (3.2). The conclusions are the same as those obtained using (3.4)–(3.6), and the necessary mathematical tools are simpler with (3.4)–(3.6). If X is exogenous and W = X, then $f_{XW}(x, w) = f_W(w)\delta(x - w)$, where δ is the Dirac delta function. The delta function in $f_{XW}$ changes the properties of T, and the results of Sections 3 and 4 of this paper no longer apply.
Then the eigenvalues and eigenvectors of T have the following properties:
(i) Zero is a limit point of the eigenvalues. Therefore, there are infinitely many $\lambda_j$'s within any neighborhood of zero. Zero is the only limit point of the eigenvalues.
(ii) The eigenvectors are orthonormal. That is, $\langle \phi_j, \phi_k \rangle = 1$ if j = k and 0 otherwise.
(iii) The eigenvectors are a basis for $L_2[0,1]$. That is, any function $h \in L_2[0,1]$ has the series representation

$h(x) = \sum_{j=1}^{\infty} h_j \phi_j(x),$

where $h_j = \langle h, \phi_j \rangle$. Moreover,

$\|h\|^2 = \sum_{j=1}^{\infty} h_j^2.$
(iv) For any $h \in L_2[0,1]$,

$(Th)(x) = \sum_{j=1}^{\infty} \lambda_j h_j \phi_j(x).$

In addition, if

$\sum_{j=1}^{\infty} \left(\frac{h_j}{\lambda_j}\right)^2 < \infty,$

then

$(T^{-1}h)(x) = \sum_{j=1}^{\infty} \frac{h_j}{\lambda_j}\,\phi_j(x).$
∞
rj φj (z)
j=1
and g(x) =
∞ j=1
gj φj (x)
where $r_j = \langle r, \phi_j \rangle$ and $g_j = \langle g, \phi_j \rangle$ for each j. The coefficients $r_j$ and $g_j$ are called generalized Fourier coefficients of r and g, respectively. Because of property (iv),

(3.7)  $(T^{-1}r)(x) = \sum_{j=1}^{\infty} \frac{r_j}{\lambda_j}\,\phi_j(x)$

if

(3.8)  $\sum_{j=1}^{\infty} \left(\frac{r_j}{\lambda_j}\right)^2 < \infty.$

Combining (3.6) and (3.7) yields the result that

(3.9)  $g(x) = \sum_{j=1}^{\infty} \frac{r_j}{\lambda_j}\,\phi_j(x)$
if (3.8) holds. Equation (3.9) provides a representation of g that can be used to investigate the properties of estimators.

3.3. The Ill-Posed Inverse Problem

The key fact about (3.9) that makes nonparametric IV different from parametric IV is that, because $\lambda_j \to 0$ as $j \to \infty$, g is not a continuous functional of r. To see this, let $r_1$ and $r_2$ be functions in $L_2[0,1]$ with the representations

$r_1(x) = \sum_{j=1}^{\infty} r_{1j} \phi_j(x)$  and  $r_2(x) = \sum_{j=1}^{\infty} r_{2j} \phi_j(x).$

Define

$g_1(x) = \sum_{j=1}^{\infty} \frac{r_{1j}}{\lambda_j}\,\phi_j(x)$  and  $g_2(x) = \sum_{j=1}^{\infty} \frac{r_{2j}}{\lambda_j}\,\phi_j(x).$
Then

$\|r_2 - r_1\| = \left[\sum_{j=1}^{\infty} (r_{1j} - r_{2j})^2\right]^{1/2}$

and

$\|g_2 - g_1\| = \left[\sum_{j=1}^{\infty} \left(\frac{r_{1j} - r_{2j}}{\lambda_j}\right)^2\right]^{1/2}.$
Given any ε > 0, no matter how small, and any M > 0, no matter how large, it is possible to choose the $r_{1j}$'s and $r_{2j}$'s such that $\|r_1 - r_2\| < \varepsilon$ and $\|g_1 - g_2\| > M$. Therefore, an arbitrarily small change in r in (3.5) can produce an arbitrarily large change in g. This phenomenon is called the ill-posed inverse problem. The ill-posed inverse problem also arises in deconvolution and nonparametric density estimation (Härdle and Linton (1994), Horowitz (2009)).

The ill-posed inverse problem has important consequences for how much information the data contain about g and how accurately g can be estimated. To see why, denote the data by $\{Y_i, X_i, W_i : i = 1, \ldots, n\}$, where n is the sample size. Suppose that $f_{XW}$ and, therefore, T and the $\lambda_j$'s are known. Then the $r_j$'s are the only unknown quantities on the right-hand side of (3.9). It follows from (3.4) and $r_j = \langle r, \phi_j \rangle$ that

$r_j = E\left[ Y \int_0^1 f_{XW}(z, W)\,\phi_j(z)\,dz \right], \qquad j = 1, 2, \ldots.$
Therefore, $r_j$ is a population moment and can be estimated $n^{-1/2}$-consistently by the sample analog

$\hat{r}_j = n^{-1} \sum_{i=1}^{n} Y_i \int_0^1 f_{XW}(z, W_i)\,\phi_j(z)\,dz, \qquad j = 1, 2, \ldots.$
The generalized Fourier coefficients of g are estimated consistently and without bias by

$\hat{g}_j = \frac{\hat{r}_j}{\lambda_j}.$
Because $\lambda_j \to 0$ as $j \to \infty$, random sampling errors in $\hat{r}_j$ can have large effects on $\hat{g}_j$ when j is large. Indeed, $\mathrm{Var}(\hat{g}_j) = \mathrm{Var}(\hat{r}_j)/\lambda_j^2 \to \infty$ as $j \to \infty$, except in special cases. As a consequence, except in special cases, only low-order generalized Fourier coefficients of g can be estimated with useful precision with
samples of practical size. Thus, the ill-posed inverse problem limits what can be learned about g. The following example illustrates the problem.

EXAMPLE 3.1—The Ill-Posed Inverse Problem: Let g(x) = x. Let

(3.10)  $f_{XW}(x, w) = \sum_{j=1}^{\infty} \lambda_j^{1/2} \phi_j(x)\,\phi_j(w), \qquad 0 \le x, w \le 1,$
where $\phi_1(z) = 1$, $\phi_j(z) = \sqrt{2}\cos[(j-1)\pi z]$ for $j \ge 2$, $\lambda_1 = 1$, and $\lambda_j = 0.2(j-1)^{-4}$ for $j \ge 2$. With this $f_{XW}$, the marginal distributions of X and W are uniform on [0, 1], but X and W are not independent of one another. The generalized Fourier coefficients of g are $g_1 = 0.5$ and

$g_j = \sqrt{2}\,[(-1)^{j-1} - 1][\pi(j-1)]^{-2}, \qquad j \ge 2.$

The reduced-form model is

$Y = E[g(X) \mid W] + V = \sum_{j=1}^{\infty} g_j E[\phi_j(X) \mid W] + V,$
where V is a random variable satisfying $E(V \mid W = w) = 0$. Now

$E[\phi_j(X) \mid W] = \int_0^1 \phi_j(x)\,\frac{f_{XW}(x, W)}{f_W(W)}\,dx = \int_0^1 \phi_j(x) f_{XW}(x, W)\,dx,$
where the last line uses the fact that the marginal distribution of W is U[0, 1]. By (3.10),

$\int_0^1 \phi_j(x) f_{XW}(x, W)\,dx = \lambda_j^{1/2} \phi_j(W).$
Therefore, the reduced-form model can be written

$Y = \sum_{j=1}^{\infty} c_j \phi_j(W) + V,$

where $c_j = g_j \lambda_j^{1/2}$.
FIGURE 1.—Illustration of the ill-posed inverse problem. The solid line with circles shows the absolute values of the generalized Fourier coefficients; the dashed line with triangles shows the standard deviations of the maximum likelihood estimates of these coefficients.
Now let $V \sim N(0, 0.01)$ independently of W. With data $\{Y_i, X_i, W_i : i = 1, \ldots, n\}$, the maximum likelihood (and asymptotically efficient) estimator of the $c_j$'s can be obtained by applying ordinary least squares to

$Y_i = \sum_{j=1}^{\infty} c_j \phi_j(W_i) + V_i, \qquad i = 1, \ldots, n.$
Let $\hat{c}_j$ (j = 1, 2, …) denote the resulting estimates. The maximum likelihood estimator of $g_j$ is $\hat{c}_j/\lambda_j^{1/2}$. Figure 1 shows a graph of $|g_j|$ and the standard deviation of $\hat{g}_j$ for n = 10,000. Even with this large sample, only the first four generalized Fourier coefficients are estimated with useful precision. The standard deviation of $\hat{g}_j$ is much larger than $|g_j|$ when j > 4.
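The computation behind Figure 1 is easy to replicate. The sketch below is my own illustrative code, not one of the programs in the paper's Supplemental Material; the number of Monte Carlo repetitions and the series length are my choices. It simulates the reduced-form model with the eigenvalues and coefficients of Example 3.1, estimates the $c_j$'s by OLS, and prints the Monte Carlo standard deviation of $\hat{g}_j = \hat{c}_j/\lambda_j^{1/2}$ next to $|g_j|$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, J, n_rep = 10_000, 8, 200                  # sample size, series length, MC draws

lam = np.array([1.0] + [0.2 * (j - 1) ** -4 for j in range(2, J + 1)])
gcoef = np.array([0.5] + [np.sqrt(2.0) * ((-1) ** (j - 1) - 1) / (np.pi * (j - 1)) ** 2
                          for j in range(2, J + 1)])

def phi(w, J):
    """Cosine basis of Example 3.1: phi_1 = 1, phi_j = sqrt(2) cos((j-1) pi w)."""
    cols = [np.ones_like(w)]
    for j in range(2, J + 1):
        cols.append(np.sqrt(2.0) * np.cos((j - 1) * np.pi * w))
    return np.column_stack(cols)

ghat = np.empty((n_rep, J))
for r in range(n_rep):
    W = rng.uniform(size=n)
    Phi = phi(W, J)
    Y = Phi @ (gcoef * np.sqrt(lam)) + rng.normal(0.0, 0.1, size=n)  # sd(V) = 0.1
    chat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)                   # OLS for c_j
    ghat[r] = chat / np.sqrt(lam)                                    # ML estimator of g_j

for j in range(J):
    print(f"j={j+1}: |g_j| = {abs(gcoef[j]):.4f}   sd(ghat_j) = {ghat[:, j].std():.4f}")
```

For j beyond the first few coefficients, the standard deviation dwarfs $|g_j|$, reproducing the pattern in Figure 1.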
The result of Example 3.1 is very general. Except in special cases, only low-order generalized Fourier coefficients of g can be estimated with useful precision with samples of practical size. This is a consequence of the ill-posed inverse problem and is a characteristic of the estimation problem, not a defect of the estimation method. When identification is through the moment condition (1.2), the data contain little information about the higher-order generalized Fourier coefficients of g. Therefore, to obtain a useful estimator of g, one must find a way to avoid the need for estimating higher-order coefficients. Procedures for doing this are called regularization. They amount to modifying T in a suitable way. The amount of modification is controlled by a parameter (the regularization parameter) and decreases as n → ∞ to ensure consistent estimation. Several regularization methods are available. See Engl, Hanke, and Neubauer (1996), Kress (1999), and Carrasco, Florens, and Renault (2007). In this paper, regularization will be carried out by replacing T with a finite-dimensional approximation. The method for doing this is described in Section 4. Section 3.4 provides the mathematical rationale for the method.

3.4. Avoiding Estimation of Higher-Order Generalized Fourier Coefficients: The Role of Smoothness

One way to avoid the need to estimate higher-order generalized Fourier coefficients is to specify a low-dimensional parametric model for g. That is, g(x) = G(x, θ) for some known function G and low-dimensional θ. A parametric model, in effect, specifies high-order coefficients in terms of a few low-order ones, so only a few low-order ones have to be estimated. But the assumption that g has a known parametric form is strong and leads to incorrect inference unless the parametric model is exact or a good approximation to the true g. The parametric model provides no information about the accuracy of the approximation or the effect of approximation error on inference. Therefore, it is useful to ask whether we can make an assumption that is weaker than parametric modeling but provides asymptotically correct inference. The assumption that g is smooth in the sense of having one or more derivatives achieves this goal. Assuming smoothness is usually weaker than assuming that g belongs to a known parametric family, because most parametric families used in applied research are subsets of the class of smooth functions. The smoothness assumption is likely to be satisfied by many functions that are important in applied econometrics, including demand functions and Engel curves, so smoothness is not excessively restrictive in a wide variety of applications. Moreover, as will be explained, smoothness provides enough information about higher-order generalized Fourier coefficients to make consistent estimation of g and asymptotically correct inference possible.

We first provide a formal definition of the smoothness concept that will be used for estimating g. Let $D^k g(x) = d^k g(x)/dx^k$ for $k = 0, 1, 2, \ldots$, with $D^0 g(x) = g(x)$. Define g to have smoothness s if

$\|g\|_s^2 \equiv \sum_{j=0}^{s} \|D^j g\|^2 \le C_0^2$
for some finite, positive constant C0 . In other words, g has smoothness s if it has s square-integrable derivatives.
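The connection between smoothness and approximability, made precise in (3.11) below, is easy to see numerically. The following sketch is illustrative only (the test functions, basis, and truncation points are my assumptions): the truncated cosine-series error falls quickly for a smooth function and much more slowly for a function with a kink, which has fewer square-integrable derivatives.

```python
import numpy as np

def cosine_basis(x, J):
    """First J orthonormal cosine basis functions on [0, 1]."""
    cols = [np.ones_like(x)]
    for j in range(2, J + 1):
        cols.append(np.sqrt(2.0) * np.cos((j - 1) * np.pi * x))
    return np.column_stack(cols)

x = np.linspace(0.0, 1.0, 4001)
tests = {"smooth g(x) = sin(2*pi*x)": np.sin(2.0 * np.pi * x),
         "kinked g(x) = |x - 0.5|  ": np.abs(x - 0.5)}

for name, g in tests.items():
    errs = []
    for J in (4, 8, 16, 32):
        Psi = cosine_basis(x, J)
        coef, *_ = np.linalg.lstsq(Psi, g, rcond=None)   # L2 projection on the basis
        errs.append(np.sqrt(np.trapz((g - Psi @ coef) ** 2, x)))
    print(name, " ".join(f"{e:.1e}" for e in errs))
```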
To see why smoothness is useful for estimating g, let $\{\psi_j\}$ be a basis for $L_2[0,1]$. The $\psi_j$'s need not be eigenfunctions of T. If g has smoothness s > 0 and $\{\psi_j\}$ is any of a variety of bases that includes trigonometric functions, orthogonal polynomials, and splines (see, e.g., Chen (2007)), then there are coefficients $\{g_j\}$ and a constant C < ∞ not depending on g such that

(3.11)  $\left\| g - \sum_{j=1}^{J} g_j \psi_j \right\| \le C J^{-s}$
for each J = 1, 2, …. Therefore, smoothness provides an upper bound on the error of a truncated series approximation to g. This bound is sufficient to permit consistent estimation of g and asymptotically correct inference. In other words, smoothness makes nonparametric estimation and inference possible.

Although smoothness makes nonparametric estimation of g possible, it does not eliminate the need for judgment in estimation. Depending on the details of g and the basis functions, many generalized Fourier coefficients $g_j$ may be needed to achieve a good approximation to g. This is a concern because, due to the ill-posed inverse problem, it is possible to estimate only low-order $g_j$'s with useful precision. Therefore, it is desirable to choose basis functions that provide a good low-dimensional approximation to g. This is not the same as parametric modeling because we do not assume that the truncated series approximation is exact and, consequently, the length of the series approximation depends on the sample size. Theoretically justified methods for choosing basis functions in applications are not yet available.

4. NONPARAMETRIC ESTIMATION AND TESTING OF A SMOOTH FUNCTION

Section 4.1 presents an estimator of g in model (1.1)–(1.2). The estimator is extended to model (1.3)–(1.4) in Section 4.2. Section 4.3 describes two specification tests that will be used in the empirical illustrations of Section 5. It is assumed that X, W, and Z are scalar random variables. The extension to random vectors complicates the notation, but does not affect the main ideas and results. See Hall and Horowitz (2005), Horowitz (2006, 2011b), Blundell, Chen, and Kristensen (2007), and Blundell and Horowitz (2007).

4.1. Estimation of g in Model (1.1)–(1.2)

This section presents an estimator of g in model (1.1)–(1.2). The estimator is a simplified version of the estimator of Blundell, Chen, and Kristensen (2007). It is analogous to an IV estimator for a linear model and can be computed the same way. The estimator is also a version of the Petrov–Galerkin method for solving a Fredholm integral equation of the first kind (Kress (1999)). To begin the derivation of the estimator, define

$m(w) = E(Y \mid W = w) f_W(w).$
Define the operator A on $L_2[0,1]$ by

$(Ah)(w) = \int_0^1 h(x) f_{XW}(x, w)\,dx.$
Then (3.2) is equivalent to

(4.1)  $Ag = m.$
The estimator of this section is obtained by replacing A and m with series estimators and solving the resulting empirical version of (4.1).⁴ To obtain the estimators, let $\{\psi_j\}$ be an orthonormal basis for $L_2[0,1]$ that satisfies (3.11). Then we can write

$g(x) = \sum_{j=1}^{\infty} g_j \psi_j(x), \qquad m(w) = \sum_{j=1}^{\infty} m_j \psi_j(w),$

and

$f_{XW}(x, w) = \sum_{j=1}^{\infty} \sum_{k=1}^{\infty} a_{jk} \psi_j(x)\,\psi_k(w),$
where $g_j = \langle g, \psi_j \rangle$, $m_j = \langle m, \psi_j \rangle$, and

$a_{jk} = \int_0^1 \int_0^1 f_{XW}(x, w)\,\psi_j(x)\,\psi_k(w)\,dx\,dw.$
In addition,

$(Ag)(w) = \sum_{j=1}^{\infty} \sum_{k=1}^{\infty} g_j a_{jk} \psi_k(w).$
The $m_j$'s and $a_{jk}$'s are estimated $n^{-1/2}$-consistently by

$\hat{m}_j = n^{-1} \sum_{i=1}^{n} Y_i \psi_j(W_i)$

⁴Equation (3.5) and the results of Section 3 can be obtained from (4.1) by setting $T = A^*A$ and $r = A^*m$, where $A^*$ is the adjoint of A. The eigenvalues $\lambda_j$ are squares of the singular values of A. The formulation of Section 3 is useful for expository purposes because it does not require familiarity with the singular value decomposition of an operator. However, (4.1) yields an estimator that is easier to compute.
and

$\hat{a}_{jk} = n^{-1} \sum_{i=1}^{n} \psi_j(X_i)\,\psi_k(W_i).$
The function m and the operator A are estimated consistently by

$\hat{m}(w) = \sum_{j=1}^{J_n} \hat{m}_j \psi_j(w)$

and

$(\hat{A}h)(w) = \sum_{j=1}^{J_n} \sum_{k=1}^{J_n} h_j \hat{a}_{jk} \psi_k(w),$
ˆ Aˆ gˆ = m
The solution to (4.2) has the form of a conventional linear IV estimator. To obtain it, let Wn and Xn , respectively, denote the n × Jn matrices whose (i j) elements are ψj (Wi ) and ψj (Xi ). Define Yn = (Y1 Yn ) . Let {gˆ j : j = ˆ That is, 1 Jn } denote the generalized Fourier coefficients of g. (4.3)
ˆ g(x) =
Jn
gˆ j ψj (x)
j=1
ˆ = (gˆ 1 gˆ Jn ) . Then the solution to (4.2) is (4.3) with Define G (4.4)
ˆ = (W Xn )−1 W Yn G n n
ˆ has the form of an IV estimator for a linear model in which the matrix of G variables is Xn and the matrix of instruments is Wn . When n is small, gˆ in (4.3)–(4.4) can be highly variable. Blundell, Chen, and Kristensen (2007) proposed stabilizing gˆ by replacing (4.4) with the solution to a penalized least-squares problem. Blundell, Chen, and Kristensen (2007) 5 ˆ and Aˆ or in the x and w directions can use different basis More generally, the series for m functions and have different lengths. This extension is not carried out here. The effects on estimation efficiency of using different basis functions and series lengths for different functions or directions are unknown at present.
APPLIED NONPARAMETRIC IV ESTIMATION
365
provided an analytic, easily computed solution to this problem and presented the results of numerical experiments on the penalization method’s ability to stabilize gˆ in small samples. ˆ When Horowitz (2011b) derived the rate of convergence in probability of g. fXW has r < ∞ continuous derivatives with respect to any combination of its arguments and certain other regularity conditions hold, then
gˆ − g = Op Jn−s + Jnr (Jn /n)1/2 (4.5) If r = ∞, the rate of convergence is slower, as is discussed below. When r < ∞, the rate of convergence of gˆ − g is fastest when the terms Jn−s and Jnr (Jn /n)1/2 converge to zero at the same rate. This happens when Jn ∝ n1/(2r+2s+1) , which gives
gˆ − g = Op n−s/(2r+2s+1) (4.6) Chen and Reiss (2011) showed that n−s/(2r+2s+1) is the fastest possible rate of convergence in probability of gˆ − g that is achievable uniformly over functions g and fXW satisfying Horowitz’s (2011b) conditions. The rate of convergence in (4.6) is slower than the n−1/2 rate that is usually achieved by finitedimensional parametric models. It is also slower than the rate of convergence of a nonparametric estimator of a conditional mean or quantile function. For example, if E(Y |X = x) and the probability density function of X are twice continuously differentiable, then a nonparametric estimator of E(Y |X = x) can achieve the rate of convergence n−2/5 , whereas the rate in (4.6) with r = s = 2 is n−2/9 . A nonparametric IV estimator converges relatively slowly because the data contain little information about g in model (1.1)–(1.2), not because of any defect of the estimator. In (4.5), the term Jn−s arises from the bias of gˆ that is caused by truncating the series approximation (4.3). The truncation bias decreases as s increases and g becomes smoother (see (3.11)). Therefore, increased smoothness of ˆ The term Jnr (Jn /n)1/2 in (4.5) is g accelerates the rate of convergence of g. caused by random sampling errors in the estimates of the generalized Fourier coefficients gˆ j . Specifically, Jnr (Jn /n)1/2 is the rate of convergence in probabilJn ity of [ j=1 (gˆ j − gj )2 ]1/2 . Because gj is inversely proportional to λj (see the Jn discussion in Section 3), [ j=1 (gˆ j − gj )2 ]1/2 converges more slowly when the eigenvalues of T converge rapidly than when they converge slowly. When fXW has smoothness r, the eigenvalues decrease at a rate that is at least as fast as j −2r (Pietsch (1980)). Therefore, the fastest possible rates of convergence of Jn ˆ j − gj )2 and gˆ − g decrease as fXW becomes smoother. Smoothness j=1 (g of fXW increases the severity of the ill-posed inverse problem and reduces the accuracy with which g can be estimated. When fXW is the bivariate normal density, r = ∞ and the eigenvalues of T decrease at the rate e−cj , where c > 0 is a constant. The problem of estimating g is said to be severely ill posed, and the rate of convergence of gˆ − g
366
JOEL L. HOROWITZ
in (4.3) is $O_p[(\log n)^{-s}]$. This is the fastest possible rate. Therefore, when $f_{XW}$ is very smooth, the data contain very little information about g in (1.1)–(1.2). Unless g is restricted in other ways, such as assuming that it belongs to a low-dimensional parametric family of functions or is infinitely differentiable, a very large sample is needed to estimate g accurately in the severely ill-posed case.

Now let $x_1, x_2, \ldots, x_L$ be L points in [0, 1]. Horowitz and Lee (2010) gave conditions under which $[\hat{g}(x_1), \ldots, \hat{g}(x_L)]$ is asymptotically L-variate normally distributed and the bootstrap can be used to obtain simultaneous confidence intervals for $[g(x_1), \ldots, g(x_L)]$. Horowitz and Lee (2010) also showed how to interpolate the simultaneous confidence intervals to obtain a uniform confidence band for g. The bootstrap procedure of Horowitz and Lee (2010) estimates the joint distribution of the leading terms of the asymptotic expansions of $\hat{g}(x_\ell) - g(x_\ell)$ ($\ell = 1, \ldots, L$). To describe this procedure, let $s_n^2(x_\ell)$ denote the consistent estimator of the variance of the asymptotic distribution of $\hat{g}(x_\ell)$,
n
2 ˆ − δ¯ n (· g)](x) ˆ Aˆ −1 [δn (· Yi Xi Wi g)
i=1
where for any h ∈ L2 [0 1], δn (x Y X W h) = [Y − h(X)]
Jn
ψk (W )ψk (x)
k=1
and δ¯ n (x h) = n−1
n
δn (x Yi Xi Wi h)
i=1 ∗ i
∗ i
∗
Let $\{Y_i^*, X_i^*, W_i^* : i = 1, \ldots, n\}$ be a bootstrap sample that is obtained by sampling the estimation data $\{Y_i, X_i, W_i : i = 1, \ldots, n\}$ randomly with replacement. The bootstrap version of the asymptotic form of $\hat{g}(x) - g(x)$ is

$\Delta_n(x) = n^{-1} \sum_{i=1}^{n} \hat{A}^{-1}\left[\delta_n(\cdot, Y_i^*, X_i^*, W_i^*, \hat{g}) - \bar{\delta}_n(\cdot, \hat{g})\right](x).$
Let $A_n^*$ be the estimator of A that is obtained from the bootstrap sample. Define $[s_n^*(x)]^2$ as the bootstrap estimator of the variance of $\Delta_n(x)$,

$[s_n^*(x)]^2 = n^{-2} \sum_{i=1}^{n} \left\{ (A_n^*)^{-1}\left[\delta_n(\cdot, Y_i^*, X_i^*, W_i^*, g^*) - \bar{\delta}_n^*(\cdot, g^*)\right](x) \right\}^2,$
where $g^*$ is the estimate of g obtained from the bootstrap sample and $\bar{\delta}_n^*(\cdot, g^*) = n^{-1} \sum_{i=1}^{n} \delta_n(\cdot, Y_i^*, X_i^*, W_i^*, g^*)$. The bootstrap procedure is as follows.
Step 1. Draw a bootstrap sample $\{Y_i^*, X_i^*, W_i^* : i = 1, \ldots, n\}$ by sampling the estimation data $\{Y_i, X_i, W_i : i = 1, \ldots, n\}$ randomly with replacement. Use this sample to form bootstrap estimates $\Delta_n(x_1), \ldots, \Delta_n(x_L)$ and $s_n^*(x_1), \ldots, s_n^*(x_L)$. Compute the statistic

$t_n^* = \max_{1 \le \ell \le L} \frac{|\Delta_n(x_\ell)|}{s_n^*(x_\ell)}.$
Step 2. Repeat Step 1 many times. Let M be the number of repetitions and let $t_{nm}^*$ be the value of $t_n^*$ obtained on the mth repetition. Let $\zeta_{n\alpha}^* = \inf\{\zeta : F_M^*(\zeta) \ge 1 - \alpha\}$ for any $\alpha \in (0, 1)$, where

$F_M^*(\tau) = M^{-1} \sum_{m=1}^{M} I(t_{nm}^* \le \tau)$
m=1 ∗ and I is the indicator function. Then ζnα is a consistent estimator of the 1 − α quantile of the bootstrap distribution of tn∗ . ˆ L )] ˆ 1 ) g(x Step 3. The simultaneous 1 − α confidence intervals for [g(x are ∗ ∗ ˆ ) + ζnα ˆ ) − ζnα sn (x ) ≤ g(x ) ≤ g(x sn (x ) g(x
= 1 L
Implementation of the estimator (4.3) requires choosing the value of Jn . One possible choice is an estimator of the asymptotically optimal Jn . The asymptotically optimal Jn , denoted here by Jnopt , minimizes Qn (J) ≡ EA gˆ − g2 , where EA denotes the expectation with respect to the asymptotic distribuˆ Define Jˆnopt to be tion of gˆ − g. Note that Qn depends on J through g. an asymptotically optimal estimator of Jnopt if Qn (Jˆnopt )/Qn (Jnopt ) →p 1 as n → ∞. At present, it is unknown whether such a Jˆnopt exists. Horowitz (2010) gave an empirical method for obtaining a truncation point Jˆn that satisfies Qn (Jˆn )/Qn (Jnopt ) ≤ [2 + (4 + ε) log n] for any ε > 0. Horowitz (2010) presented Monte Carlo evidence indicating that this estimator performs well with samples of practical size in both mildly and severely ill-posed estimation problems. 4.2. Extension to Model (1.3)–(1.4) This section extends the estimator (4.3) to model (1.3)–(1.4), which contains the exogenous explanatory variable Z. Assume that Z is a scalar whose support is [0 1]. The data are the independent random sample {Yi Xi Wi Zi : i = 1 n}. If Z is discretely distributed with finitely many mass points, then g(x z), where z is a mass point, can be estimated by using (4.3) with only observations i with n replaced by the number for which Zi = z. The results of Section 4.1 hold n of observations for which Zi = z, which is nz = i=1 I(Zi = z).
368
JOEL L. HOROWITZ
If Z is continuously distributed, then g(x z) can be estimated by using (4.3) with observations i for which Zi is “close” to z. Kernel weights can be used to select the appropriate observations. To this end, let K be a kernel function in the sense of nonparametric density estimation or regression, and let {bn } be a positive sequence of bandwidths that converges to 0 as n → ∞. Define Kb (v) = K(v/b) for any real v and b. Also define ˆ jz = m
n 1 Yi ψj (Wi )Kbn (z − Zi ) nbn i=1
n 1 ψj (Xi )ψk (Wi )Kbn (z − Zi ) aˆ jkz = nbn i=1
ˆ z (w) = m
Jn
ˆ jz ψj (w) m
j=1
and fˆXW Z (x w z) =
Jn Jn
aˆ jkz ψj (x)ψk (w)
j=1 k=1
Define the operator Aˆ z by 1 (Aˆ z h)(w z) = h(x)fˆXW Z (x w z) dx 0
for any h ∈ L2 [0 1]. Let fXW Z and fW Z denote the probability density functions of (X W Z) and (W Z), respectively. Estimate g(x z) for any z ∈ (0 1) by solving (4.7)
ˆ z Aˆ z gˆ = m
This is a finite-dimensional matrix equation because Aˆ z is a Jn × Jn matrix and ˆ z is a Jn × 1 vector. Equation (4.7) is an empirical analog of the relation m (4.8)
E(Y |W = w Z = z)fW Z (w z) = (Az g)(w z)
where the operator Az is defined by 1 (Az h)(w z) = h(x z)fXW Z (x w z) dw 0
Equation (4.8) can be derived from (1.3)–(1.4) by using reasoning like that used to obtain (3.6).
APPLIED NONPARAMETRIC IV ESTIMATION
369
Under regularity conditions that are stated in the Section A.1 of the Appendix, (4.9)
ˆ z) − g(· z)2 = Op n−2sκ/(2r+2s+1) g(·
where κ = 2r/(2r + 1). The estimator can be extended to z = 0 and z = 1 by using a boundary kernel (Gasser and Müller (1979), Gasser, Müller, and ˆ jz and aˆ jkz . Boundary kernels are explained in the Mammitzsch (1985)) in m discussion of the second specification test in Section 4.3. 4.3. Two Specification Tests This section presents two specification tests that will be used in the empirical illustrations of Section 5. One test is of the hypothesis that g(x z) = G(x z θ) for all (x z) ∈ [0 1]2 , where G is a known function and θ is a finite-dimensional parameter whose value must be estimated from the data. Under this hypothesis, the parametric model G(x z θ) satisfies (1.3)–(1.4) for some θ. A similar test applies to (1.1)–(1.2). In this case, the hypothesis is g(x) = G(x θ). The second test presented in this section is of the hypothesis that g(x z) does not depend on x. The first test was developed by Horowitz (2006). The second test is new. Testing a Parametric Model Against a Nonparametric Alternative In this test, the null hypothesis, H0 , is that (4.10)
g(x z) = G(x z θ)
for a known function G, some finite-dimensional θ in a parameter set Θ, and almost every (x z) ≡ [0 1]2 . “Almost every (x z)” means every (x z) except, possibly, a set of (x z) values whose probability is 0. The alternative hypothesis, H1 , is that there is no θ ∈ Θ such that (4.10) holds for almost every (x z). The discussion here applies to model (1.3)–(1.4). A test of H0 : g(x) = G(x θ) for model (1.1)–(1.2) can be obtained by dropping z and setting (x z) = 1 in the discussion below. The test statistic is 1 1 Sn2 (x z) dx dz τn = 0
0
where Sn (x z) = n−1/2
n i=1
(−i) ˆ fˆXW [Yi − G(Xi Zi θ)] Z (x Wi Zi ) (Zi z)
370
JOEL L. HOROWITZ
(−i) θˆ is a GMM estimator of θ, and fˆXW Z is a leave-observation-i-out kernel estimator of fXW Z . That is n 1 (−i) ˆ fXW Z (x w z) = 3 Kb (x − Xj )Kbn (w − Wj )Kbn (z − Zj ) nbn j=1 n j=i
where K is a kernel function and bn is a bandwidth. In applications, the value of bn can be chosen by cross-validation. The function is any function on [0 1] with the property that
1
(x z)h(x) dx = 0 0
for almost every z ∈ [0 1] only if h(x) = 0 for almost every x ∈ [0 1]. H0 is rejected if τn is too large. Horowitz (2006) derived the asymptotic distribution of τn under H0 and H1 , and gave a method for computing its critical value. The τn test is consistent against any fixed alternative model and against a large class of alternative models whose distance from the null-hypothesis parametric model is O(n−1/2 ) or greater. The test can be understood intuitively by observing that as n → ∞, n−1/2 Sn (x z) converges in probability to S∞ (x z) = EXW Z [g(X Z) − G(X Z θ∞ )]fXW Z (x W Z) (Z z) where EXW Z denotes the expectation with respect to the distribution of (X W Z) and θ∞ is the probability limit of θn as n → ∞. If g is identified, then S∞ (x z) = 0 for almost every (x z) ∈ [0 1]2 only if g(x z) = G(x z θ∞ ) for almost every (x z). Therefore, 1
1
τ∞ =
S∞ (x z)2 dx dz 0
0
is a measure of the distance between g(x z) and G(x z θ∞ ). The test statistic τn is an empirical analog of τ∞ . Testing the Hypothesis That g(x z) Does Not Depend on x This test is a modification of the exogeneity test of Blundell and Horowitz (2007). The null hypothesis, H0 , is that (4.11)
g(x z) = G(z)
for almost every (x z) ∈ [0 1]2 and some unknown function G. The alternative hypothesis, H1 , is that there is no G such that (4.11) holds for almost every
APPLIED NONPARAMETRIC IV ESTIMATION
371
(x z) ∈ [0 1]2 . It follows from (1.3)–(1.4) that G(z) = E(Y |Z = z) if H0 is true. Accordingly, we set G(z) = E(Y |Z = z) for the rest of the discussion of the test of H0 . The test statistic is 1 1 τ˜ n = S˜n2 (x z) dx dz 0
0
where (4.12)
S˜ n (x z) = n−1/2
n
(−i) ˆ (−i) (Zi ) fˆXW Yi − G Z (x Wi Zi ) (Zi z)
i=1 (−i) ˆ (−i) and fˆXW In (4.12), is defined as in the test of a parametric model. G Z, respectively, are leave-observation-i-out “boundary kernel” estimators of the mean of Y conditional on Z and fXW Z . Boundary kernels are defined in the next paragraph. The estimators are (−i) fˆXW Z (x w z)
1 Kb (x − Xj x)Kb1 (w − Wj w)Kb1 (z − Zj z) nb31 j=1 1 n
=
j=i
and ˆ (−i) (z) = G
1 Yi Kb2 (z − Zj z) (−i) nb2 fˆZ (z) j=1 n
j=i
where b1 and b2 are bandwidths, and 1 Kb (z − Zj z) fˆZ(−i) (z) = nb2 j=1 2 n
j=i
In applications, b1 can be chosen by cross-validation. The value of b2 can be set at n−7/40 times the value obtained by cross-validation. The boundary kernel function Kb has the property that for all ξ ∈ [0 1], (4.13)
ξ+1
−(j+1)
u Kb (u ξ) du = j
b
ξ
1 if j = 0, 0 if j = 1.
372
JOEL L. HOROWITZ
If b is small and ξ is not close to 0 or 1, then we can set Kb (u ξ) = K(u/b), where K is an “ordinary” kernel. If ξ is close to 1, then we can set Kb (u ξ) = ¯ K(u/b), where K¯ is a bounded, compactly supported function that satisfies ∞ 1 if j = 0, ¯ uj K(u) du = 0 if j = 1. 0 ¯ Gasser and Müller (1979) If ξ is close to 0, we can set Kb (u ξ) = K(−u/b). and Gasser, Müller, and Mammitzsch (1985) gave examples of boundary kernels. A boundary kernel is used here instead of an ordinary kernel because, to prevent imprecise estimation of G, the probability density function of Z fZ , is assumed to be bounded away from 0. This causes fZ (z) and fXW Z (x w z) to be discontinuous at z = 0 and z = 1. The boundary kernel overcomes the resulting edge effects. The τ˜ n test rejects H0 if τ˜ n is too large. Section A.2 of the Appendix gives the asymptotic properties of the test, including the asymptotic distribution of τ˜ n under H0 , a method for computing the critical value of the test, and the test’s consistency. The τ˜ n test can be understood intuitively by observing that as n → ∞ n−1/2 S˜ n (x z) converges in probability to S˜ ∞ (x z) = EXW Z [g(X Z) − G∞ (Z)]fXW Z (x W Z) (Z z) where G∞ (z) = E(Y |Z = z). Therefore, τ˜ n is an empirical measure of the distance between g(x z) and E(Y |Z = z). 5. EMPIRICAL EXAMPLES This section presents two empirical examples that illustrate the usefulness of nonparametric IV estimation and how conclusions drawn from parametric and nonparametric IV estimators may differ. The first example is about estimation of an Engel curve. The second is about estimating the effects of class size on students’ performances on standardized tests. 5.1. Estimating an Engel Curve This section shows the result of using the method of Section 4.1 to estimate an Engel curve for food. The data are 1655 household-level observations from the British Family Expenditure Survey. The households consist of married couples with an employed head-of-household between the ages of 25 and 55 years. The model is (1.1)–(1.2). Y denotes a household’s expenditure share on food, X denotes the logarithm of the household’s total expenditures, and W denotes the logarithm of the household’s gross earnings. Blundell, Chen, and Kristensen (2007) used the Family Expenditure Survey for nonparametric
APPLIED NONPARAMETRIC IV ESTIMATION
373
FIGURE 2.——Estimated Engel curve for food.
IV estimation of Engel curves. Blundell, Chen, and Kristensen (2007) also reported the results of an investigation of the validity of the logarithm of gross earnings as an instrument for expenditures. The basis functions used here are B-splines with four knots. The estimation results are similar with five or six knots. The estimated Engel curve is shown in Figure 2. The curve is nonlinear and different from what would be obtained with a simple parametric model such as a quadratic or cubic model. The τn test of Horowitz (2006) that is described in Section 4.3 rejects the hypothesis that the Engel curve is a quadratic or cubic function (p < 005). Thus, in this example, nonparametric methods reveal an aspect of data (the shape of the Engel curve) that would be hard to detect using conventional parametric models. Of course, with sufficient effort, it may be possible to find a simple parametric model that gives a curve similar to the nonparametric one. Although such a parametric model may be a useful way to represent the curve, it could not be used for valid inference for the reasons explained in Section 2. 5.2. The Effect of Class Size on Students’ Performances on Standardized Tests Angrist and Lavy (1999) studied the effects of class size on test scores of fourth and fifth grade students in Israel. Here, I use one of their models for fourth grade reading comprehension and their data to illustrate differences between parametric and nonparametric IV estimation and the effects that parametric assumptions can have on the conclusions drawn from IV estimation.
374
JOEL L. HOROWITZ
The data are available at http://econ-www.mit.edu/faculty/angrist/data1/data/ anglavy99. Angrist and Lavy’s substantive conclusions are based on several different models and methods. The discussion in this section is about one model and is not an evaluation or critique of Angrist and Lavy’s substantive findings, which are more broadly based. One of the models that Angrist and Lavy (1999) used is (5.1)
YCS = β0 + β1 XCS + β2 DCS + νS + UCS
In this model, YCS is the average reading comprehension test score of fourth grade students in class C of school S, XCS is the number of students in class C of school S, DCS is the fraction of disadvantaged students in class C of school S, νS is a school-specific random effect, and UCS is an unobserved random variable that is independently distributed across schools and classes. XCS is a potentially endogenous explanatory variable. The instrument for XCS is ZCS = ES / int[1 + (ES − 1)/40] where ES is enrollment in school S. The data consist of observations of 2049 classes that were tested in 1991. The IV estimate of β1 in (5.1) is −0.110 with a standard error of 0.040 (Angrist and Lavy (1999, Table V)). Thus, according to model (5.1), increasing class size has a negative and statistically significant effect on reading comprehension test scores. The nonparametric version of (5.1) is (5.2)
YCS = g(XCS DCS ) + νS + UCS ;
E(νS + UCS |ZCS DCS ) = 0
Figure 3 shows the result of using the method of Section 4.2 to estimate g as a function of XCS for DCS = 15 percent. The basis functions are orthogonal (Legendre) polynomials, the series length is 3, and the bandwidth is bn = 15. The solid line in the figure is the estimate of g, and the dots show a bootstrapbased uniform 95% confidence band obtained using the method of Horowitz and Lee (2010). Unobserved school-specific effects, νS , were handled by using schools as the bootstrap sampling units. The nonparametrically estimated relation between test scores and class size is nonlinear and nonmonotonic, but the confidence band is very wide. Functions that are monotonically increasing and decreasing can fit easily in the band. Moreover, the τ˜ n test of Section 4.3 does not reject the hypothesis that test scores are independent of class size (p > 010). Thus, the data and the instrumental variable assumption, by themselves, are uninformative about the form of any dependence of test scores on class size. This does not necessarily imply that test scores and class sizes are independent. For example, the τ˜ n test may not be sufficiently powerful to detect any dependence, or the effects of class size might be obscured by heterogeneity that is not accounted for by DCS . However, the nonparametric model does not support the conclusion drawn from the linear model that increases in class sizes are associated with decreased test scores.
APPLIED NONPARAMETRIC IV ESTIMATION
375
FIGURE 3.——Estimate of test score as a function of class size. The solid line is the estimate; dashed lines indicate a uniform 95% confidence band.
Average derivatives can be estimated more precisely than functions can, so it is possible that an estimator of E ∂g(X D|D = 15)/∂X is more informative about the effects of class size on test scores than is the function g(x 15). The average here is over the distribution of X conditional on D = 15. Ai and Chen (2009) provided asymptotic distributional results for nonparametric IV estimators of unconditional average derivatives, but there is no existing theory on nonparametric IV estimation of conditional average derivatives such as E ∂g(X D|D = 15)/∂X. To get some insight into whether an estimate of the conditional derivative can clarify the relation between test scores and class size, E ∂g(X D|D = 15)/∂X was estimated by ∂g(X ˆ CS 15) (5.3)
ˆ ∂g(X D|D = 15) = Eˆ ∂X
CS
Kbn (DCS − 15) ∂X Kbn (DCS − 15) CS
The standard error of the estimate was obtained by applying the bootstrap to the leading term of the asymptotic expansion of the right-hand side of (5.3) with schools as the bootstrap sampling units. The resulting estimate of the conditional average derivative is 0.064 with a standard error of 0.14. Therefore, the nonparametric average derivative estimate does not support the conclusion from the linear model that increases in class size are associated with decreases in test scores.
376
JOEL L. HOROWITZ
The conclusions drawn from the linear model might be persuasive, nonetheless, if this model were consistent with the data. However, the τn test of Section 4.3 rejects the hypothesis that g is a linear function of XCS and DCS (p < 005). This does not necessarily imply that the linear model is a poor approximation g in (5.2), but the quality of the approximation is unknown. Therefore, one should be cautious in drawing conclusions from the linear model. In summary, the data are uninformative about the dependence, if any, of g in (5.2) on XCS . The conclusion from (5.1) that increases in class size decrease test scores is a consequence of the linearity assumption, not of information contained in the data per se. 6. DISCRETELY DISTRIBUTED EXPLANATORY VARIABLES AND INSTRUMENTS This section is concerned with identification and estimation of g when, as happens in many applications, X W , and Z are discretely distributed random variables with finitely many points of support. Because Z is exogenous and discrete, all of the analysis can be carried out conditional on Z being held fixed at one of its points of support. Accordingly, the discussion in this section is concerned with identifying and estimating g as a function of X at a fixed value of Z. The notation displays dependence only on X and W . Section 6.1 discusses identification and estimation of g. Section 6.2 presents empirical illustrations of the results of Section 6.1. 6.1. Identification and Estimation of g Let the supports of X and W , respectively, be {x1 xJ } and {w1 wK } for finite, positive integers J and K. For j = 1 J and k = 1 K, define gj = g(xj ) mk = E(Y |W = wk ), and πjk = P(X = xj |W = wk }. When X and W are discretely distributed, condition (1.2) is equivalent to (6.1)
mk =
J
πjk gj
k = 1 K
j=1
Let Π be the J ×K matrix whose (j k) element is πjk . If K ≥ J and Rank(Π) = J, then (6.1) can be solved to obtain (6.2)
g = (ΠΠ )−1 ΠM
where M = (m1 mK ) and g = (g1 gJ ) . An estimator of g that is n−1/2 -consistent and asymptotically normal can be obtained by replacing Π and M in (6.2) with estimators. With data
APPLIED NONPARAMETRIC IV ESTIMATION
377
{Yi Xi Wi : i = 1 n}, the mk ’s and πjk ’s are estimated n−1/2 consistently by ˆ k = n−1 m k
n
Yi I(Wi = wk )
i=1
and πˆ jk = n−1 k
n
I(Xi = xj )I(Wi = wk )
i=1
where nk =
n
I(Wi = wk )
i=1
The estimator of g is ˆ gˆ = (Πˆ Πˆ )−1 Πˆ M ˆ = (m ˆ 1 m ˆ K ) , where Πˆ is the J × K matrix whose (j k) element is πˆ jk M and gˆ = (gˆ 1 gˆ J ) . There is no ill-posed inverse problem and, under mild regularity conditions, there are no other complications. There are, however, many applications in which K < J. In some applications, W is binary, so K = 2. For example, Card (1995) estimated models of earnings as a function of years of schooling and other variables. Years of schooling is an endogenous explanatory variable. The instrument for it is a binary indicator of whether there is an accredited four-year college in an individual’s metropolitan area. When W is binary, g is not identified nonparametrically if J > 2, nor are there informative, nonparametrically identified bounds on g in the absence of further information or assumptions. A linear model for g, such as that used by Card (1995), is identified but not testable. Thus, in contrast to the case in which X and W are continuously distributed, when X and W are discretely distributed and W has too few points of support, the problem is identification, not estimation. The remainder of this section discusses what can be learned about g when it is not point identified. Chesher (2004) gave conditions under which there are informative, nonparametrically identified bounds on g. Write model (1.1)–(1.2) in the form (6.3)
Y = g(X) + U;
E(U|W = wk ) = 0
and (6.4)
X = H(W ε)
ε ∼ U[0 1] ε ⊥ W
k = 1 K
378
JOEL L. HOROWITZ
Equation (6.4) defines H to be the conditional quantile function of X and is a tautology. Order the points of support of X so that x1 < x2 < · · · < xJ . Assume that (6.5)
E(U|W = wk ε = e) = c(e)
for all k = 1 K and some monotonic function c. This is a version of assumption (1.10) of the control function model that is discussed in Section 1.2. Also assume that there are e¯ ∈ (0 1) and points wj−1 , wj in the support of W such that (6.6)
P(X ≤ xj |W = wj ) ≤ e¯ ≤ P(X ≤ xj−1 |W = wj−1 )
for some j = 1 J. Chesher (2004) showed that if (6.5) and (6.6) hold, then (6.7)
min[E(Y |X = xj W = wj ) E(Y |X = xj W = wj−1 )] ¯ ≤ gj + c(e) ≤ max[E(Y |X = xj W = wj ) E(Y |X = xj W = wj−1 )]
Inequality (6.7) makes it possible to obtain identified bounds on differences ¯ Specifically, gj − gk if (6.6) holds for j and k with the same value of e. (6.8)
gjmin − gkmax ≤ gj − gk ≤ gjmax − gkmin
where gjmin and gjmax , respectively, are the lower and upper bounds on gj in (6.7). The quantities gkmin and gkmax are the bounds obtained by replacing j with k in (6.7). The bounds on gj − gk can be estimated consistently by replacing the conditional expectations in (6.7) with sample averages. Specifically, E(Y |X = x W = w) for any (x w) in the support of (X W ) is estimated by ˆ |X = x W = w) = n−1 E(Y xw
n
Yi I(Xi = x Wi = w)
i=1
where nxw =
n
I(Xi = x Wi = w)
i=1
Manski and Pepper (2000) gave conditions under which there are identified upper and lower bounds on g and an identified upper bound on gj − gk . The conditions are specified as follows:
APPLIED NONPARAMETRIC IV ESTIMATION
379
MONOTONE TREATMENT RESPONSE (MTR): Let y (1) and y (2) denote the outcomes (e.g., earnings) that an individual would receive with treatment values (that is, values of x) x(1) and x(2) , respectively. Then x(2) ≥ x(1) implies y (2) ≥ y (1) . MONOTONE TREATMENT SELECTION (MTS): Let XS denote the treatment (e.g., years of schooling) that an individual selects. Let x denote any possible treatment level. Then x(2) ≥ x(1) implies E Y |XS = x(2) ≥ E Y |XS = x(1) Assumption MTR is analogous to Chesher’s (2004) monotonicity condition (6.5). Assumption MTS replaces the assumption that a conventional instrument is available. Manski and Pepper (2000) showed that under MTR and MTS, E(Y |X = x )P(X = x ) + E(Y |X = xj )P(X ≥ xj ) :x <xj
≤ gj ≤ E(Y |X = x )P(X = x ) + E(Y |X = xj )P(X ≤ xj ) :x >xj
and (6.9)
0 ≤ gj − gk ≤
k−1
[E(Y |X = xj ) − E(Y |X = x )]P(X = x )
=1
+ [E(Y |X = xj ) − E(Y |X = xk )]P(xk ≤ X ≤ xj ) +
J
[E(Y |X = x ) − E(Y |X = xk )]P(X = x )
=j+1
These bounds can be estimated consistently by replacing expectations with sample averages. Confidence intervals for these bounds and for those in (6.8) can be obtained by taking advantage of the asymptotic normality of sample averages. See, for example, Horowitz and Manski (2000), Imbens and Manski (2004), and Stoye (2009). 6.2. An Empirical Example This section applies the methods of Section 6.1 to nonparametric estimation of the return to a college education, which is defined here as the percentage change in earnings from increasing an individual’s years of education from 12 to 16. The data are those used by Card (1995). They are available at http://emlab.berkeley.edu/users/card/data_sets.html and consist of 3010
380
JOEL L. HOROWITZ
records taken from the National Longitudinal Survey of Young Men. Card (1995) treated years of education as endogenous. The instrument for years of education is a binary variable equal to 1 if there is an accredited four-year college in what Card (1995) calls an individual’s “local labor market” and 0 otherwise. A binary instrument point identifies returns to education in Card’s parametric models, but it does not provide nonparametric point identification. We investigate the possibility of obtaining bounds on returns to a college education by using the methods of Chesher (2004) and Manski and Pepper (2000). In the notation of Section 6.1, Y is the logarithm of earnings, X is the number of years of education, and W is the binary instrument. To use Chesher’s (2004) method for bounding returns to a college education, the monotonicity condition (6.6) must be satisfied. This requires either (6.10)
P(X ≤ J|W = 1) ≤ P(X ≤ J − 1|W = 0)
or (6.11)
P(X ≤ J|W = 0) ≤ P(X ≤ J − 1|W = 1)
for J = 12 and J = 16. Table I shows the relevant empirical probabilities obtained from Card’s (1995) data. Neither (6.10) nor (6.11) is satisfied. Therefore, Chesher’s (2004) method with Card’s (1995) data and instrument cannot be used to bound returns to a college education. Manski’s and Pepper’s (2000) approach does not require an instrument but depends on the MTR and MTS assumptions, which are not testable. If these assumptions hold for the population represented by Card’s data, then replacing population expectations in (6.9) with sample averages yields estimated upper bounds on returns to a college education. These are shown in Table II for TABLE I EMPIRICAL PROBABILITIES OF VARIOUS LEVELS OF EDUCATIONa Years of Education
With Nearby College
Without Nearby College
11
0.136 (0.022)
0.228 (0.028)
12
0.456 (0.016)
0.578 (0.021)
15
0.707 (0.012)
0.775 (0.015)
16
0.866 (0.008)
0.915 (0.009)
a Table entries are the empirical probabilities that years of education is less than or equal to 11, 12, 15, and 16 conditional on whether there is a four-year accredited college in an individual’s local labor market. Quantities in parentheses are standard errors.
APPLIED NONPARAMETRIC IV ESTIMATION
381
TABLE II MANSKI–PEPPER (2000) UPPER BOUNDS ON RETURNS TO A UNIVERSITY EDUCATION Years of Experience
6–7 8–10 11–23
Point Estimate of Upper Bound
Upper 95% Confidence Limit
0.38 0.40 0.52
0.44 0.47 0.62
several levels of labor-force experience. Card (1995) estimated returns from linear models with a variety of specifications. He obtained point estimates in the range of 36%–78%, depending on the specification, regardless of experience. The estimates of returns at the lower end of Card’s range are consistent with the Manski–Pepper bounds in Table II. 7. CONCLUSIONS Nonparametric IV estimation is a new econometric method that has much to offer applied research. • It minimizes the likelihood of specification errors. • It reveals the information that is available from the data and the assumption of validity of the instrument as opposed to functional form assumptions. • It enables one to assess the importance of functional form assumptions in drawing substantive conclusions from a parametric model. As this paper has illustrated with empirical examples, nonparametric estimates may yield results that are quite different from those reached with a parametric model. Even if one ultimately chooses to rely on a parametric model to draw conclusions, it is important to understand when the restrictions of the parametric model, as opposed to information in the data and the assumption of instrument validity, are driving the results. There are also unresolved issues in nonparametric IV estimation. These include choosing basis functions for series estimators and choosing instruments if the dimension of W exceeds that of X. APPENDIX Section A.1 outlines the proof of (4.9). Section A.2 presents the asymptotic distributional properties of the τ˜ n test of the hypothesis that g(x z) does not depend on x.
382
JOEL L. HOROWITZ
A.1. Outline of Proof of (4.9) Let (x1 w1 ) − (x2 w2 )E denote the Euclidean distance between (x1 w1 ) and (x2 w2 ). Let Dj fXW Z (x w z) denote any jth partial or mixed partial derivative of fXW Z (x w z) with respect to its first two arguments. Let D0 fXW Z (x w z) = fXW Z (x w z). For each z ∈ [0 1], define m(w z) = E(Y |W = w Z = z)fW Z (w z). Define the sequence of function spaces Jn Hns = h = hj ψj : hs ≤ C0 j=1
Let Hs be the function space obtained by replacing Jn with ∞ in Hns . Let A∗z denote the adjoint of Az . For z ∈ [0 1], define ρnz = sup
h∈Hns
h (A∗z Az )1/2 h
Blundell, Chen, and Kristensen (2007) called ρnz the sieve measure of illposedness and discussed its relation to the eigenvalues of A∗z Az . Define gnz (x) =
Jn
gjz ψj (x)
j=1
For z ∈ [0 1], define ajkz = fXW Z (x w z)ψj (x)ψk (w) dx dw Let Anz be the operator whose kernel is anz (x w) =
Jn Jn
ajkz ψj (x)ψk (w)
j=1 k=1
Also define mnz = Anz gnz . Make the following assumptions. ASSUMPTION 1: (i) The support of (X W Z) is contained in [0 1]3 . (ii) (X W Z) has a probability density function fXW Z with respect to Lebesgue measure. (iii) There are an integer r ≥ 2 and a constant Cf < ∞ such that |Dj fXW Z (x w z)| ≤ Cf for all (x w z) ∈ [0 1]3 and j = 0 1 r. (iv) |Dr fXW Z (x1 w1 z) − Dr fXW Z (x2 w2 z)| ≤ Cf (x1 w1 ) − (x2 w2 )E for any order r derivative, any (x1 w1 ) and (x2 w2 ) in [0 1]2 , and any z ∈ [0 1]. ASSUMPTION 2: E(Y 2 |W = w Z = z) ≤ CY for each (w z) ∈ [0 1]2 and some constant CY < ∞.
APPLIED NONPARAMETRIC IV ESTIMATION
383
ASSUMPTION 3: (i) For each z ∈ [0 1] (1.3) has a solution g(· z) with g(· z)s < C0 and s ≥ 2. (ii) The estimator gˆ is as defined in (4.7). (iii) The function m(w z) has r + s square-integrable derivatives with respect to w and r bounded derivatives with respect to z. ASSUMPTION 4: (i) The basis functions {ψj } are orthonormal, complete on L2 [0 1], and satisfy Cramér’s conditions. (ii) Anz − Az = O(Jn−r ) uniformly over z ∈ [0 1]. (iii) For any ν ∈ L2 [0 1] with square-integrable derivatives, there are coefficients νj (j = 1 2 ) and a constant C < ∞ that does not depend on ν such that J νj ψj ≤ CJ − ν − j=1
ASSUMPTION 5: (i) The operator Az is nonsingular for each z ∈ [0 1]. (ii) ρnz = O(Jnr ) uniformly over z ∈ [0 1]. (iii) As n → ∞, ρnz sup ν∈Hns
(Anz − Az )ν = O(Jn−s ) ν
uniformly over z ∈ [0 1]. ASSUMPTION 6: The kernel function K is a symmetrical, twice continuously differentiable function on [−1 1], and 1 1 if j = 0, j v K(v) dv = 0 if j ≤ r − 1. −1 ASSUMPTION 7: (i) The bandwidth, bn , satisfies bn = cb n−1/(2r+1) , where cb is a constant and 0 < cb < ∞. (ii) Jn = CJ nκ/(2r+2s+1) for some constant CJ < ∞. Assumptions 1 and 2 are smoothness and boundedness conditions. Assumption 3 defines the function being estimated and the estimator. The assumption requires g(· z)s < C0 (strict inequality) to avoid complications that arise when g is on the boundary of Hs . Assumption 3 also ensures that the function m is sufficiently smooth. This function has more derivatives with respect to w than z because m(w z) = [Az g(· z)](w z), and Az smooths g along its first argument but not its second. Assumption 4 is satisfied by trigonometric bases, orthogonal polynomials, and splines that have been orthogonalized by, say, the Gram–Schmidt procedure. Assumption 5(ii) is a simplified version of Assumption 6 of Blundell, Chen, and Kristensen (2007). Blundell, Chen, and Kristensen (2007), and Chen and Reiss (2011) gave conditions under which this assumption holds. Assumption 5(iii) ensures that Anz is a “sufficiently accurate” approximation to Az on Hns . This assumption complements Assumption 4(ii), which specifies the accuracy of Anz as an approximation to Az on the
384
JOEL L. HOROWITZ
larger set Hs . Assumption 5(iii) can be interpreted as a smoothness restriction on fXW Z . For example, Assumption 5(iii) is satisfied if Assumptions 4 and 5(ii) hold and Az maps Hs to Hr+s . Assumption 5(iii) also can be interpreted as a restriction on the sizes of the values of ajkz for j = k. Hall and Horowitz (2005) used a similar diagonality restriction. Assumption 6 requires K to be a higherorder kernel if fXW Z is sufficiently smooth. K can be replaced by a boundary kernel (Gasser and Müller (1979), Gasser, Müller, and Mammitzsch (1985)) if fXW Z does not approach 0 smoothly on the boundary of its support. ˆ z) = gˆ z (x), and PROOF OF (4.9): Use the notation g(x z) = gz (x) g(x m(w z) = mz (w). For each z ∈ (0 1), (A.1)
gˆ z − gz ≤ gˆ z − gnz + gnz − gz
Moreover, gnz − gz = O(J −s ) by Assumption 4(iii). Therefore, (A.2)
gˆ z − gz ≤ gˆ z − gnz + O(J −s )
Now consider gˆ z − gnz . By P(gˆ z ∈ Hns ) → 1 as n → ∞ and the definition of ρnz , (A.3)
gˆ z − gnz ≤ ρnz Az (gˆ z − gnz )
ˆ z and Az gz = with probability approaching 1 as n → ∞. In addition, Aˆ z gˆ z = m mz . Therefore, Az (gˆ z − gnz ) = (Az − Aˆ z )gˆ z + Aˆ z gˆ z − Az (gnz − gz ) − Az gz ˆ z − mz − Az (gnz − gz ) = (Az − Aˆ z )gˆ z + m The triangle inequality now gives ˆ z − mz + Az (gnz − gz ) Az (gˆ z − gnz ) ≤ (Aˆ z − Az )gˆ z + m Standard calculations for kernel estimators show that under Assumptions 3(iii), 6, and 7,
ˆ z = Op Jn1/2 n−r/(2r+1) ˆ z − Em m and
ˆ z − mz = O Jn1/2 n−r/(2r+1) + Jn−r−s E m
APPLIED NONPARAMETRIC IV ESTIMATION
385
Therefore,
ˆ z − mz = Op Jn1/2 n−r/(2r+1) + Jn−r−s m In addition, Anz (gnz − gz ) = 0, so Az (gnz − gz ) = (Anz − Az )(gnz − gz ). Therefore, Az (gnz − gz ) =
(Anz − Az )(gnz − gz ) gnz − gz gnz − gz
= O(Jn−r−s ) by Assumptions 4 and 5. Therefore, we have Az (gˆ z − gnz ) ≤ (Aˆ z − Az )gˆ z + Op Jn1/2 n−r/(2r+1) + Jn−r−s
(A.4)
Now consider (Aˆ z − Az )gˆ z . By the triangle inequality and Assumption 5, ˆ ≤ (Aˆ z − Anz )g ˆ + (Anz − Az )g ˆ (Aˆ z − Az )g ˆ + O(Jn−r−s ) = (Aˆ z − Anz )g For each z ∈ (0 1), (Aˆ z − Anz )gˆ z ≤ sup (Aˆ z − Az )ν ν∈Hns
Write ν in the form ν=
Jn
νj ψj
j=1
where
νj =
ν(x)ψj (x) dx
Then (Aˆ z − Anz )ν =
(A.5)
J Jn n k=1
But
Jn j=1
2 (aˆ jkz − ajkz )νj
j=1
|νj | is bounded uniformly over ν ∈ Hns and n. Moreover, Jn j=1
νj aˆ jkz =
Jn j=1
n 1 νj ψj (Xi )ψk (Wi )Kbn (z − Zi ) nbn i=1
386
JOEL L. HOROWITZ
Therefore, it follows from Bernstein’s inequality that Jn
νj (aˆ jkz − E aˆ jkz ) = Op (nbn )−1/2
j=1
uniformly over ν ∈ Hns . Therefore,
(A.6) (Aˆ z − E Aˆ nz )ν = O Jn1/2 /(nbn )1/2 uniformly over ν ∈ Hns . In addition, E aˆ jkz = ajkz + O(brn ). Therefore, boundJn edness of j=1 |νj | gives Jn (E aˆ jkz − ajkz )νj = O(brn ) j=1
and (A.7)
(E Aˆ z − Anz )ν = O Jn1/2 brn
uniformly over ν ∈ Hns . Combining (A.6) and (A.7) and using Assumption 7 gives
sup (Aˆ z − Anz )ν = Op Jn1/2 n−r/(2r+1) ν∈Hns
Therefore, (A.8)
sup (Aˆ z − Az )ν = Op Jn1/2 n−r/(2r+1) + Jn−r−s
ν∈Hns
Combining (A.4) and (A.8) gives
Az (gˆ z − gnz ) = Op Jn1/2 n−r/(2r+1) + Jn−r−s This result and Assumption 5(ii) imply that
(A.9) ρnz Az (gˆ z − gnz ) = Op Jnr+1/2 n−r/(2r+1) + Jn−s The theorem follows by combining (A.2), (A.3), and (A.9).
Q.E.D.
A.2. Asymptotic Properties of the τ˜ n Test Let (x1 w1 z1 ) − (x2 w2 z2 )E denote the Euclidean distance between the points (x1 w1 z1 ) and (x2 w2 z2 ). Let Dj fXW Z denote any jth partial or mixed partial derivative of fXW Z . Set D0 fXW Z (x w z) = fXW Z (x w z). Let s ≥ 2 be an integer. Define V = Y − G(Z) and let fZ denote the density of Z. Define Tz = A∗z Az . Make the following assumptions.
APPLIED NONPARAMETRIC IV ESTIMATION
387
ASSUMPTION A: (i) The support of (X W Z) is contained in [0 1]3 . (ii) (X W Z) has a probability density function fXW Z with respect to Lebesgue measure. (iii) There is a constant CZ > 0 such that fZ (z) ≥ CZ for all z ∈ supp(Z). (iv) There is a constant Cf < ∞ such that |Dj fXW Z (x w z)| ≤ Cf for all (x w z) ∈ [0 1]3 andj = 0 1 2, where derivatives at the boundary of supp(X W Z) are defined as one-sided. (v) |Ds fXW Z (x1 w1 z1 ) − Ds fXZW (x2 w2 z2 )| ≤ Cf (x1 w1 z1 ) − (x2 w2 z2 )E for any second derivative and any(x1 w1 z1 ) (x2 w2 z2 ) ∈ [0 1]3 . (vi) Tz is nonsingular for almost every z ∈ [0 1]. ASSUMPTION B: (i) E(U|Z = z W = w) = 0 and E(U 2 |Z = z W = w) ≤ CUV for each (z w) ∈ [0 1]2 and some constant CUV < ∞. (ii) |g(x z)| ≤ Cg for some constant Cg < ∞ and all (x z) ∈ [0 1]2 . ASSUMPTION C: (i) The function G satisfies |Dj G(z)| ≤ Cf for all z ∈ [0 1] and j = 0 1 2. (ii) |Ds G(z1 ) − Ds G(z2 )| ≤ Cf |z1 − z2 | for any second derivative and any (z1 z2 ) ∈ [0 1]2 . (iii) E(V 2 |Z = z) ≤ CUV for each z ∈ [0 1]. ASSUMPTION D: (i) Kb satisfies (4.13) and |Kb (u2 ξ) − Kb (u1 ξ)| ≤ CK |u2 − u1 |/b for all u2 u1 , all ξ ∈ [0 1], and some constant CK < ∞. For each ξ ∈ [0 1] Kh (b ξ) is supported on [(ξ − 1)/b ξ/b] ∩ K, where K is a compact interval not depending on ξ. Moreover, sup b>0ξ∈[01]u∈K
|Kb (bu ξ)| < ∞
(ii) The bandwidth b1 satisfies b1 = cb1 n−1/7 , where cb1 < ∞ is a constant. (iii) The bandwidth, b2 , satisfies b2 = cb2 n−α , where cb2 < ∞ is a constant and 1/4 < α < 1/2. Assumption A(iii) is used to avoid imprecise estimation of G in regions where fZ is close to 0. The assumption can be relaxed by replacing the fixed distribution of (X Z W ) by a sequence of distributions with densities {fnXZW } and {fnZ } (n = 1 2 ) that satisfy fnZ (z) ≥ Cn for all (z) ∈ [0 1] and a sequence {Cn } of strictly positive constants that converges to 0 sufficiently slowly. Assumption A(vi) combined with the moment condition E(U|X Z) = 0 implies that g is identified and the instruments W are valid in the sense of being suitably related to X. Assumption D(iii) implies that the estimator of G is ˆ (−i) from undersmoothed. Undersmoothing prevents the asymptotic bias of G dominating the asymptotic distribution of τ˜ n . The remaining assumptions are standard in nonparametric estimation. The τ˜ n test is a modification of the exogeneity test of Blundell and Horowitz (2007), and its properties can be derived by using the methods of that paper.
388
JOEL L. HOROWITZ
Accordingly, the properties of the τ˜ n test are stated here without proof. Define Vi = Yi − G(Zi ) (i = 1 n), Bn (x z) = n
−1/2
n Vi fXZW (x Zi Wi ) − i=1
1 fZ (Zi )
1
tZi (ξ x) dξ 0
× (Zi z) and R(x1 z1 ; x2 z2 ) = E[Bn (x1 z1 )Bn (x2 z2 )] Define the operator Ω on L2 [0 1]2 by
1
(Ωh)(x z) =
R(x z; ξ ζ)h(ξ ζ) dξ dζ 0
Let {ωj : j = 1 2 } denote the eigenvalues of Ω sorted so that ω1 ≥ ω2 ≥ · · · ≥ 0. Let {χ21j : j = 1 2 } denote independent random variables that are distributed as chi-square with 1 degree of freedom. Define the random variable τ˜ ∞ =
∞
ωj χ21j
j=1
For any α such that 0 < α < 1, let ξα denote the 1 − α quantile of the distribution of τ∞ . Then RESULT 1: Under H0 τ˜ n →d τ˜ ∞ . RESULT 2: Under H1 , lim P(τ˜ n > ξα ) = 1
n→∞
for any α such that 0 < α < 1. Thus, the τ˜ n test is consistent. Result 3 shows that for any ε > 0 and as n → ∞, the τ˜ n test rejects H0 with probability exceeding 1 − ε uniformly over a set of functions g whose distance from G is O(n−1/2 ). The practical consequence of this result is to define a large class of alternatives against which the τ˜ n test has high power in large samples. The following additional notation is used. Let L be the operator on L2 [0 1] that is defined by (Lh)(z) =
1
h(ζ) (ζ z) dζ 0
APPLIED NONPARAMETRIC IV ESTIMATION
389
Define q(x z) = g(x z) − G(z). Let fXZW be fixed. For each n = 1 2 and finite C > 0, define Fnc as a set of distributions of (Y X Z W ) such that (i) fXZW satisfies Assumption A; (ii) E[Y − g(X Z)|Z W ] = 0 for some function g that satisfies Assumption B with U = Y − g(X Z); (iii) E(Y |Z = z) = G(z) for some function G that satisfies Assumption C with V = Y − G(Z); (iv) LTz q ≥ n−1/2 C, where · denotes the L2 [0 1]2 norm; and (v) hs1 (log n)q/LTz q = o(1) as n → ∞. Fnc is a set of distributions of (Y X Z W ) for which the distance of g from G shrinks to zero at the rate n−1/2 in the sense that Fnc includes distributions for whichq = O(n−1/2 ). Condition (v) rules out distributions for which q depends on (x z) only through sequences of eigenvectors of Tz whose eigenvalues converge to 0 too rapidly. The practical significance of condition (v) is that the τ˜ n test has low power when g differs from G only through eigenvectors of Tz with very small eigenvalues. Such differences tend to oscillate rapidly (that is, to be very wiggly) and are unlikely to be important in most applications. The uniform consistency result is as follows. RESULT 3: Given any ε > 0, any α such that 0 < α < 1, and any sufficiently large (but finite) C, lim inf P(τ˜ n > ξα ) ≥ 1 − ε
n→∞ Fnc
The remainder of this section explains how to obtain an approximate asymptotic critical value for the τ˜ n test. The method is based on replacing the asymptotic distribution of τ˜ n with an approximate distribution. The difference between the true and approximate distributions can be made arbitrarily small under both the null hypothesis and alternatives. Moreover, the quantiles of the approximate distribution can be estimated consistently as n → ∞. The approximate 1 − α critical value of the τ˜ n test is a consistent estimator of the 1 − α quantile of the approximate distribution. We now describe the approximation to the asymptotic distribution of τ˜ n . Given any ε > 0, there is an integer Kε < ∞ such that K ε 0
uniformly over t. Define τ˜ ε =
Kε
ωj χ21j
j=1
Let zεα denote the 1 − α quantile of the distribution of τ˜ ε . Then 0 < P(τ˜ ∞ > zεα ) − α < ε. Thus, using zεα to approximate the asymptotic 1 − α critical value
390
JOEL L. HOROWITZ
of τ˜ n creates an arbitrarily small error in the probability that a correct null hypothesis is rejected. Similarly, use of the approximation creates an arbitrarily small change in the power of the τ˜ n test when the null hypothesis is false. The approximate 1 − α critical value for the τ˜ n test is a consistent estimator of the ˆ j (j = 1 2 Kε ) be 1 − α quantile of the distribution of τ˜ ε . Specifically, let ω a consistent estimator of ωj under H0 . Then the approximate critical value of τ˜ n is the 1 − α quantile of the distribution of τˆ nε =
Kε
ω ˆ j χ21j
j=1
This quantile can be estimated with arbitrary accuracy by simulation. In appliˆ j ’s in decreasing order cations, Kε can be chosen informally by sorting the ω and plotting them as a function of j. They typically plot as random noise near ω ˆ j = 0 when j is sufficiently large. One can choose Kε to be a value of j that is near the lower end of the “random noise” range. The rejection probability of the τ˜ n test is not highly sensitive to Kε , so it is not necessary to attempt precision in making the choice. We now explain how to obtain the estimated eigenvalues {ω ˆ j }. Let fˆXZW be a kernel estimator of fXZW . Define 1 fˆXZW (x1 z w)fˆXZW (x2 z w) dw tˆz (x1 x2 ) = 0
Estimate the Vi ’s by generating data from an estimated version of the model (A.10)
Y˜ = G(Z) + V˜
where Y˜ = Y − E[Y − G(Z)|Z W ] and V˜ = Y˜ − G(Z). Model (A.10) is identical to model (1.3)–(1.4) under H0 . Moreover, the moment condition E(V˜ |Z W ) = 0 holds regardless of whether H0 is true. Observe that V˜ = Y − E(Y |Z W ). Let Eˆ (−i) (Y |Z W ) denote the leave-observation-i-out nonparametric regression of Y on (Z W ). Estimate Vi by Vˆi = Yi − Eˆ (−i) (Zi Wi ) Now define rˆ(x Zi Wi ) = fˆXZW (x Zi Wi ) −
1 fˆZ (Zi )
1
tˆZi (ξ x) dξ
0
R(x1 z1 ; x2 z2 ) is estimated consistently by ˆ 1 z1 x2 z2 ) = n−1 R(x
n i=1
rˆ(x1 Zi )ˆr (x2 Zi ) (Zi z1 ) (Zi z2 )Vˆi 2
APPLIED NONPARAMETRIC IV ESTIMATION
391
Define the operator Ωˆ on L2 [0 1] by
ˆ (Ωψ)(x z) =
1
ˆ R(x z; ξ ζ)ψ(ξ ζ) dξ dζ
0
Denote the eigenvalues of Ωˆ by {ω ˆ j : j = 1 2 } and order them so that ω ˆ M1 ≥ ˆ j ’s are consistent estimators of the ωj ’s. ω ˆ M2 ≥ · · · ≥ 0. Then the ω ˆ To obtain an accurate numerical approximation to the ω ˆ j ’s, let F(x z) denote the n × 1 vector whose ith component is rˆ(x Zi Wi ) (Zi z) and let Υ denote the n × n diagonal matrix whose (i i) element is Vˆi 2 . Then ˆ 1 z1 ) Υ F(x ˆ 1 z1 ; x2 z2 ) = n−1 F(x ˆ 2 z2 ) R(x The computation of the eigenvalues can now be reduced to finding the eigenvalues of a finite-dimensional matrix. To this end, let {φj : j = 1 2 } be a complete, orthonormal basis for L2 [0 1]2 . Let {ψj } be a complete orthonormal basis for L2 [0 1]. Then fˆXZW (x Z W ) (Z z) =
∞ ∞
dˆjk φj (x z)φk (Z W )
j=1 k=1
where dˆjk =
1
1
dx
dz1
0
1
1
dz2
0
0
dw fˆXZW (x z2 w)
0
× (z2 z1 )φj (x z1 )φk (z2 w) and
1
(Z z)
tˆZ (ξ x) dξ =
∞ ∞
0
aˆ jk φj (x z)ψk (Z)
j=1 k=1
where
1
aˆ jk = 0
1
dx 0
1
dz1
1
dz2 0
dξ tˆz1 (ξ x) (z1 z2 )φj (x z2 )ψk (z1 )
0
Approximate fˆXZW (x Z W ) (Z z) and (Z z) the finite sums Πf (x z W Z) =
M M j=1 k=1
1 0
tˆZ (ξ x) dξ, respectively, by
dˆjk φj (x z)φk (Z W )
392
JOEL L. HOROWITZ
and Πt (x z Z) =
M M
aˆ jk φj (x z)ψk (Z)
j=1 k=1
1 for M < ∞. Since fˆXZW and 0 tˆZ dξ are known functions, M can be chosen to approximate them with any desired accuracy. Let Φ be the n × L matrix whose (i j) component is Φij = n
−1/2
L [dˆjk φk (Zi Wi ) − aˆ jk ψk (Zi )/fˆZ (Zi )] k=1
The eigenvalues of Ωˆ are approximated by those of the L × L matrix Φ Υ Φ. REFERENCES AI, C., AND X. CHEN (2009): “Semiparametric Efficiency Bound for Models of Sequential Moment Restrictions Containing Unknown Functions,” Working Paper, Department of Economics, Yale University, New Haven, CT. [375] ANGRIST, J. D., AND V. LAVY (1999): “Using Maimonides’ Rule to Estimate the Effect of Class Size on Scholastic Achievement,” Quarterly Journal of Economics, 114, 533–575. [373,374] BLUNDELL, R., AND J. L. HOROWITZ (2007): “A Nonparametric Test for Exogeneity,” Review of Economic Studies, 74, 1035–1058. [348,350,362,370,387] BLUNDELL, R., AND J. L. POWELL (2003): “Endogeneity in Nonparametric and Semiparametric Regression Models,” in Advances in Economics and Econometrics: Theory and Applications: Eighth World Congress, Vol. 2, ed. by Dewatripont, M., L. P. Hansen, and S. Turnovsky. Cambridge, U.K.: Cambridge University Press, 312–357. [351] BLUNDELL, R., X. CHEN, AND D. KRISTENSEN (2007): “Semi-Nonparametric IV Estimation of Shape-Invariant Engel Curves,” Econometrica, 75, 1613–1669. [348,349,354,362,364,372,373, 382,383] CARD, D. (1995): “Using Geographic Variation in College Proximity to Estimate Returns to Schooling,” in Aspects of Labor Market Behaviour: Essays in Honour of John Vanderkamp, ed. by L. N. Christofides, E. K. Grant, and R. Swidinsky. Toronto: University of Toronto Press. [377,379-381] CARRASCO, M., J.-P. FLORENS, AND E. RENAULT (2007): “Linear Inverse Problems in Structural Econometrics: Estimation Based on Spectral Decomposition and Regularization,” in Handbook of Econometrics, Vol. 6, ed. by E. E. Leamer and J. J. Heckman. Amsterdam: NorthHolland, 5634–5751. [361] CHEN, X. (2007): “Large Sample Sieve Estimation of Semi-Nonparametric Models,” in Handbook of Econometrics, Vol. 6b, ed. by J. J. Heckman and E. E. Leamer. Amsterdam: NorthHolland, 5549–5632. [362] CHEN, X., AND D. POUZO (2008): “Estimation of Nonparametric Conditional Moment Models With Possibly Nonsmooth Moments,” Working Paper, Department of Economics, Yale University. [350] CHEN, X., AND M. REISS (2011): “On Rate Optimality for Ill-Posed Inverse Problems in Econometrics,” Econometric Theory (forthcoming). [349,365,383] CHERNOZHUKOV, V., AND C. HANSEN (2005): “An IV Model of Quantile Treatment Effects,” Econometrica, 73, 245–261. [350]
APPLIED NONPARAMETRIC IV ESTIMATION
393
CHERNOZHUKOV, V., G. W. IMBENS, AND W. K. NEWEY (2007): “Instrumental Variable Identification and Estimation of Nonseparable Models via Quantile Conditions,” Journal of Econometrics, 139, 4–14. [350] CHESHER, A. (2004): “Identification in Additive Error Models With Discrete Endogenous Variables,” Working Paper CWP11/04, Centre for Microdata Methods and Practice, Department of Economics, University College London. [350,377-380] (2005): “Nonparametric Identification Under Discrete Variation,” Econometrica, 73, 1525–1550. [350] CONWAY, J. B. (1990): A Course in Functional Analysis (Second ed.). New York: Springer-Verlag. [355] DAROLLES, S., J.-P. FLORENS, AND E. RENAULT (2006): “Nonparametric Instrumental Regression,” Working Paper, University of Toulouse. [349] ENGL, H. W., M. HANKE, AND A. NEUBAUER (1996): Regularization of Inverse Problems. Dordrecht: Kluwer Academic Publishers. [361] GASSER, T., AND H. G. MÜLLER (1979): “Kernel Estimation of Regression Functions,” in Smoothing Techniques for Curve Estimation. Lecture Notes in Mathematics, Vol. 757. New York: Springer, 23–68. [369,372,384] GASSER, T., H. G. MÜLLER, AND V. MAMMITZSCH (1985): “Kernels and Nonparametric Curve Estimation,” Journal of the Royal Statistical Society, Ser. B, 47, 238–252. [369,372,384] HALL, P., AND J. L. HOROWITZ (2005): “Nonparametric Methods for Inference in the Presence of Instrumental Variables,” The Annals of Statistics, 33, 2904–2929. [349,362,384] HANSEN, L. P. (1982): “Large Sample Properties of Generalized Method of Moments Estimators,” Econometrica, 50, 1029–1054. [352] HÄRDLE, W., AND O. LINTON (1994): “Applied Nonparametric Methods,” in Handbook of Econometrics, Vol. 4, ed. by R. F. Engle and D. F. McFadden. Amsterdam: Elsevier, Chapter 38. [358] HECKMAN, J. J., AND E. J. VYLACIL (2007): “Econometric Evaluation of Social Programs, Part II: Using the Marginal Treatment Effect to Organize Alternative Econometric Estimators to Evaluate Social Programs and to Forecast Their Effects in New Environments,” in Handbook of Econometrics, Vol. 6B, ed. by J. J. Heckman and E. E. Leamer. Amsterdam: Elsevier, Chapter 71. [351] HOROWITZ, J. L. (2006): “Testing a Parametric Model Against a Nonparametric Alternative With Identification Through Instrumental Variables,” Econometrica, 74, 521–538. [350,362,369,370, 373] (2007): “Asymptotic Normality of a Nonparametric Instrumental Variables Estimator,” International Economic Review, 48, 1329–1349. [349] (2009): Semiparametric and Nonparametric Methods in Econometrics. New York: Springer-Verlag. [351,358] (2010): “Adaptive Nonparametric Instrumental Variables Estimation: Empirical Choice of the Regularization Parameter,” Working Paper, Department of Economics, Northwestern University, Evanston, IL. [367] (2011a): “Supplement to ‘Applied Nonparametric Instrumental Variables Estimation’,” Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/ 8662_data and programs.zip. [351] (2011b): “Specification Testing in Nonparametric Instrumental Variables Estimation,” Journal of Econometrics (forthcoming). [350,362,365] HOROWITZ, J. L., AND S. LEE (2007): “Nonparametric Instrumental Variables Estimation of a Quantile Regression Model,” Econometrica, 75, 1191–1208. [350] (2010): “Uniform Confidence Bands for Functions Estimated Nonparametrically With Instrumental Variables,” Working Paper CWP19/10, Centre for Microdata Methods and Practice, Department of Economics, University College London. [349,366,374] HOROWITZ, J. L., AND C. F. 
MANSKI (2000): “Nonparametric Analysis of Randomized Experiments With Missing Covariate and Outcome Data,” Journal of the American Statistical Association, 95, 77–84. [379]
394
JOEL L. HOROWITZ
IMBENS, G., AND C. F. MANSKI (2004): “Confidence Intervals for Partially Identified Parameters,” Econometrica, 72, 1845–1857. [379] KRESS, R. (1999): Linear Integral Equations (Second ed.). New York: Springer-Verlag. [361,362] LIUSTERNIK, L. A., AND V. J. SOBOLEV (1961): Elements of Functional Analysis. New York: Ungar Publishing Company. [355] MANSKI, C. F., AND J. V. PEPPER (2000): “Monotone Instrumental Variables: With an Application to Returns to Schooling,” Econometrica, 68, 997–1010. [350,378-381] NEWEY, W. K., AND J. L. POWELL (2003): “Instrumental Variables Estimation of Nonparametric Models,” Econometrica, 71, 1565–1578. [349] NEWEY, W. K., J. L. POWELL, AND F. VELLA (1999): “Nonparametric Estimation of Triangular Simultaneous Equations Models,” Econometrica, 67, 565–603. [351] PIETSCH, A. (1980): “Eigenvalues of Integral Operators. I,” Mathematische Annalen, 247, 169–178. [365] PINKSE, J. (2000): “Nonparametric Two-Step Regression Estimation When Regressor and Error Are Dependent,” Canadian Journal of Statistics, 28, 289–300. [351] STOYE, J. (2009): “More on Confidence Intervals for Partially Identified Parameters,” Econometrica, 77, 1299–1315. [379]
Dept. of Economics, Northwestern University, Evanston, IL 60208, U.S.A.;
[email protected]. Manuscript received June, 2009; final revision received March, 2010.
Econometrica, Vol. 79, No. 2 (March, 2011), 395–435
EFFICIENT TESTS UNDER A WEAK CONVERGENCE ASSUMPTION BY ULRICH K. MÜLLER1 The asymptotic validity of tests is usually established by making appropriate primitive assumptions, which imply the weak convergence of a specific function of the data, and an appeal to the continuous mapping theorem. This paper, instead, takes the weak convergence of some function of the data to a limiting random element as the starting point and studies efficiency in the class of tests that remain asymptotically valid for all models that induce the same weak limit. It is found that efficient tests in this class are simply given by efficient tests in the limiting problem—that is, with the limiting random element assumed observed—evaluated at sample analogues. Efficient tests in the limiting problem are usually straightforward to derive, even in nonstandard testing problems. What is more, their evaluation at sample analogues typically yields tests that coincide with suitably robustified versions of optimal tests in canonical parametric versions of the model. This paper thus establishes an alternative and broader sense of asymptotic efficiency for many previously derived tests in econometrics, such as tests for unit roots, parameter stability tests, and tests about regression coefficients under weak instruments. KEYWORDS: Robustness, unit root test, semiparametric efficiency.
1. INTRODUCTION A CONTINUED FOCUS of the recent econometrics literature has been the development of asymptotically optimal inference procedures for nonstandard problems. The usual derivation proceeds in two steps: first, derive the optimal test in some canonical version of the model, usually assuming independent and identically distributed (i.i.d.) Gaussian disturbances. Second, derive a corresponding robustified test that yields the same asymptotic rejection probability under the null hypothesis (and local alternatives) for a wide range of data generating processes. This second step is important, because in most applications, one would not want to assume that the canonical model is necessarily correct. Examples of this approach include the univariate unit root tests by Elliott, Rothenberg, and Stock (1996) (subsequently abbreviated ERS), Elliott (1999), Müller and Elliott (2003), and Müller (2009); the unit root test with stationary covariates by Elliott and Jansson (2003); the test of the null hypothesis of no cointegration with known cointegrating vector by Elliott, Jansson, and Pesavento (2005); the test of the null hypothesis of cointegration by Jansson (2005); the parameter stability tests by Nyblom (1989), Andrews and Ploberger (1994), and Elliott and Müller (2006); the tests about regression coefficients 1 A previous version of this paper was circulated under the title “An Alternative Sense of Asymptotic Efficiency.” The author would like to thank Whitney Newey, three anonymous referees, Mark Watson, seminar participants at Berkeley, Brown, Harvard/MIT, Penn State, and Stanford, and conference participants at the NBER Summer Institute 2008 and the ESEM Summer Meeting 2008 for helpful comments and discussions, as well as Jia Li for excellent research assistance. Financial support by the NSF through Grant SES-0751056 is gratefully acknowledged.
© 2011 The Econometric Society
DOI: 10.3982/ECTA7793
396
ULRICH K. MÜLLER
with nearly integrated regressors by Stock and Watson (1996) and Jansson and Moreira (2006); and the tests about regression coefficients with weak instruments by Andrews, Moreira, and Stock (2006, 2008). The robustness in the second step is established by making suitable primitive assumptions, so that the law of large numbers and (functional) central limit theorems can be invoked to a specific function of the data, and the large sample properties of the test then follow from the continuous mapping theorem. By construction, these tests are thus optimal in the canonical specification of the model and are asymptotically valid whenever the weak convergence implied by the limit theorems holds. This paper, instead, starts out with a specific function XT = hT (YT ) of the data YT . This function is chosen so that typical primitive assumptions imply the weak convergence of XT to a random element X, whose distribution is different under the null and local alternative. For instance, in the unit root testing problem, a suitable transformation of the data to the unit interval converges weakly to a Wiener process under the null hypothesis, and to an Ornstein– Uhlenbeck process under the local alternative. We then ask the question, “What is the efficient test in the class C of tests that remain asymptotically valid whenever XT = hT (YT ) converges weakly to its null limiting distribution?” It is clear that the performance of the robustified test derived from the canonical model provides an important benchmark, as it is a member of this class for an appropriately defined function hT . But it is not clear that one could not do better. After all, if one knew that the data were generated by some model other than the canonical model, then the best test would typically not coincide with the best test in the canonical model. For instance, Rothenberg and Stock (1997) and Jansson (2008) derived unit root tests with higher local asymptotic power than ERS’s test for the AR(1) model with nonnormal i.i.d. driving errors, which continue to induce the weak convergence to a Wiener and Ornstein–Uhlenbeck process. The main contribution of the paper is a simple characterization of the efficient test in the class C, that is the best test among all tests that remain asymptotically valid whenever XT converges weakly to its null limiting distribution. The characterization is in terms of the efficient test in the “limiting problem,” where the limiting random element X is directly observed. For instance, in the unit root testing problem, the Radon–Nikodym derivative of the distribution of an Ornstein–Uhlenbeck process (for some fixed mean reversion parameter) and the distribution of a Wiener process is, by the Neyman–Pearson lemma, the point-optimal test statistic in the limiting problem. It is shown that the efficient test in the limiting problem ϕ∗ (X), evaluated at the small sample analogue ϕ∗ (XT ) = ϕ∗ (hT (YT )), is the efficient test in C. In the unit root testing case, ϕ∗ (XT ) is asymptotically equivalent to the test statistic derived by ERS, so that this result establishes a sense of efficiency of their test also outside the canonical model. Whether or not a given test ϕT (YT ) is a member of the class C is determined by its asymptotic rejection probability under data generating processes that induce XT to converge to its null limiting distribution. One could alternatively
restrict attention to tests of a particular functional form, such as tests that can be written as continuous functions of XT, ϕT(YT) = ϕc(XT) with ϕc continuous. For instance, Stock (1999) considered tests of this form in the context of unit root testing. By the continuous mapping theorem, the asymptotic rejection probability of the test ϕc(XT) is determined by the distribution of X. Any test of the form ϕT(YT) = ϕc(XT) is, therefore, a member of C if (and only if) ϕc(X) is a valid test in the limiting problem. As a corollary, the main result thus implies that if ϕ∗ is continuous, ϕ∗(XT) is also best among all such tests. This corollary might be quite useful, since, as a practical matter, tests of the form ϕc(XT) are easy to relate to and thus a reasonable starting point. The efficiency of ϕ∗(XT) in the class C is stronger, though, as noncontinuous tests are not ruled out a priori. For instance, one might hope that a more powerful unit root test can be obtained by first carefully checking whether the data are drawn from an AR(1) with i.i.d. non-Gaussian disturbances: if they seem to be, apply a Jansson (2008) type test, which is more powerful than ERS's statistic; if not, revert to ERS's statistic. The efficiency of ϕ∗(XT) in C implies, however, that any such scheme cannot result in a test that is a member of C: Whenever a test yields higher asymptotic power for some model that induces convergence to an Ornstein–Uhlenbeck process, there also exists a data generating process for YT where XT converges to a Wiener process, and the test overrejects asymptotically.

The value of the efficiency claim about ϕ∗(XT) depends, in general, on the appeal of restricting attention to tests in C. The class obviously depends on the choice of the function hT. The strongest case can be made if the weak convergence assumption about hT(YT) embeds all relevant knowledge about the testing problem. In time series applications, primitive regularity assumptions (such as mixing and moment assumptions) are arguably sometimes invoked only because they imply a certain weak convergence, not because there are particularly good reasons for the true model to satisfy these constraints. In such instances, it seems quite natural to restrict attention to tests that remain valid whenever the implied weak convergences hold. On the other hand, if, say, an economic model implies that some observable data follow an AR(1) with i.i.d. shocks, then it does not necessarily make sense to restrict attention to unit root tests that remain valid whenever the suitably transformed data converge to a Wiener process. In general, knowledge of the i.i.d. nature of disturbances (or of observations in a cross sectional setting) is not easily reexpressed in terms of weak convergence for some hT(YT). In applications with such knowledge, the test ϕ∗(XT) might still be a reasonable starting point, but it only provides a fairly limited answer to the question of efficient inference.

One appeal of the framework of this paper is that the characterization of the efficient test is quite straightforward, even for nonstandard testing problems. The basic result that efficient tests in the limiting problem translate to efficient tests in C holds quite generally, also for cases where tests are restricted to satisfy some asymptotic unbiasedness or conditional similarity constraint, or to be
invariant. Standard arguments from testing theory can thus be applied to identify an efficient test in the limiting problem. The results of this paper may thus be seen as a formalization and generalization of Sowell (1996), who applied the Neyman–Pearson lemma to the limiting measure of partial sums of sample moment conditions to derive parameter stability tests in a generalized method of moments (GMM) framework. Additional potential applications include a broader sense of asymptotic efficiency of the test statistics derived in the 14 papers cited in the first paragraph of this Introduction. Finally, the results of this paper also imply efficiency of some recent nonstandard methods that take a weak convergence assumption as their starting point, such as those suggested in Müller and Watson (2008b, 2008a) and Ibragimov and Müller (2010).

The approach pursued in this paper is related to the literature on semiparametric efficiency and Le Cam's limits of experiments theory; see Le Cam (1986), Bickel, Klaassen, Ritov, and Wellner (1998), Newey (1990), van der Vaart (1998), and Shiryaev and Spokoiny (2000) for introductions. The starting point of the study of semiparametric efficient tests is a semiparametric model, which is a description of the distribution of YT for each sample of size T as a function of the parameter of interest and an infinite dimensional nuisance parameter. Asymptotic efficiency results can then be deduced by limit of experiments arguments from the limiting behavior of the log-likelihood process. In most applications, the log-likelihood is locally asymptotically normal (see, for instance, Choi, Hall, and Schick (1996) for general results on semiparametric efficient tests), but especially time series applications also give rise to locally asymptotically quadratic behavior (as defined in Jeganathan (1995)). For instance, Ling and McAleer (2003) derived semiparametric efficient estimators for an autoregressive moving average model with generalized autoregressive conditional heteroskedasticity errors of unknown distribution, and Jansson (2008) derived a semiparametric efficient unit root test in an AR(1) with i.i.d. disturbances of unknown distribution.

In this literature, as well as in this paper, the heuristic argument is the same: if for any specific sequence of distributions of YT under the local alternative, it is possible to identify a sequence of distributions under the null that also satisfies the constraints and for which the same test is the best test, then this test is the overall efficient test. The details of the construction are different, though, because in the traditional semiparametric setup, the constraints are on the small sample distribution of YT. In contrast, the restriction to tests in C amounts to a constraint on the limiting behavior of the distributions of YT. Also, the limit experiment of a semiparametric problem explicitly characterizes the effect of not knowing the nonparametric component,² in contrast to the limiting problem of this paper. We explicitly compare the two constructions in a simple locally asymptotically normal (LAN) model in Section 2.5 below. As a practical matter, the approach of this paper leads to

____________
² Or, at least, of not knowing the parameter of a suitably chosen parametric submodel as in Stein (1956).
a relatively more easily applicable characterization of the efficient tests, especially for nonstandard problems, although arguably at the cost of providing a less definite answer.

The remainder of the paper is organized as follows. Section 2 introduces the formal framework and contains the main result. Section 3 discusses extensions regarding consistently estimable nuisance parameters, invariance restrictions, and uniformity issues. A running example throughout Sections 2 and 3 is the problem of testing for an autoregressive unit root in a univariate time series. Section 4 discusses in detail three applications: Elliott and Jansson's (2003) point-optimal tests for unit roots with stationary covariates; Andrews, Moreira, and Stock's (2006) optimal tests for linear instrumental variable regressions; and Sowell's (1996) tests for GMM parameter stability. Section 5 concludes. All proofs are collected in the Appendix.

2. EFFICIENCY UNDER A WEAK CONVERGENCE ASSUMPTION

2.1. Setup

The following notation and conventions are used throughout the paper: All limits are taken as T → ∞. Suppose S1 and S2 are spaces with metrics dS1 and dS2. Then B(S1) denotes the Borel σ-algebra of S1. Furthermore, if μ is a probability measure on B(S1), then its image measure under the B(S1)\B(S2) measurable mapping f : S1 → S2 is denoted fμ. If no ambiguity arises, we suppress the dummy variable of integration, that is, we write ∫ f dμ for ∫ f(x) dμ(x). By default, the product space S1 × S2 is equipped with the metric dS1 + dS2. We write μT ⇒ μ0 or XT ⇒ X0 for the weak convergence of the random elements XT, with probability measures μT on B(S1), to the random element X0 with probability measure μ0, and write XT →p X for convergence in probability. The R → R function x ↦ ⌊x⌋ is the integer part of x.

In a sample of size T, suppose we observe data YT ∈ R^{nT}, which is the Tth row of a double array of T random vectors of fixed dimension n. For given T, a statistical model for YT with parameter θ in the metric space Θ is a family of probability measures ΛT = {λT(θ) : θ ∈ Θ} on B(R^{nT}), so that ∫_A dλT(θ) is a B(Θ)-measurable function of θ for each A ∈ B(R^{nT}). Let m be a model for the whole sequence of observations {YT}_{T=1}^∞ with the same parameter space Θ for all T, that is, m = {ΛT}_{T=1}^∞. We write FT(m, θ) for the distribution of YT in model m with parameter value θ. The hypotheses of interest are

(1)    H0 : θ ∈ Θ0    against    H1 : θ ∈ Θ1,

where Θ0 ∪ Θ1 = Θ. Let hT be a given sequence of measurable functions hT : R^{nT} → S, where S is a complete and separable metric space. Denote by PT(m, θ) the distribution of XT = hT(YT) in model m with parameter θ, that is, PT(m, θ) = hT FT(m, θ)
in the notation defined above. Suppose the typical model m induces the weak convergence

(2)    PT(m, θ) ⇒ P(θ)    pointwise for all θ ∈ Θ,
where {P(θ) : θ ∈ Θ} is a statistical model on B(S). We assume throughout that the probability measures P(θ1) and P(θ2) on B(S) are equivalent for all θ1, θ2 ∈ Θ, so that the family of measures P(θ) is dominated by μP = P(θ0) for some fixed θ0 ∈ Θ. The parameter θ should be thought of as describing local alternatives, such as the magnitude of the Pitman drift.

UNIT ROOT TEST EXAMPLE: Consider testing for a unit root in a model with no deterministics against the local-to-unity alternative: We observe data YT = (uT1, ..., uTT)′ from the model uTt = ρT uTt−1 + νTt and uT0 = 0 for all T, where ρT = 1 − c/T for some fixed c ≥ 0, and the hypotheses are H0 : c = 0 against H1 : c > 0 (so that n = 1, θ = c, Θ0 = {0}, and Θ1 = (0, ∞)). Let ω̂²T be a specific, "reasonable" long-run variance estimator. With S = D[0,1] being the space of cadlag functions on the unit interval, equipped with the Billingsley (1968) metric, a typical model m for the disturbances νTt leads to

hT(YT) = T^{−1/2} ω̂T^{−1} u_{T,⌊·T⌋} = ĴT(·) ⇒ Jc(·)

on D[0,1], where Jc is an Ornstein–Uhlenbeck process, Jc(s) = ∫_0^s e^{−c(s−r)} dW(r), with W a standard Wiener process. It is well known that the measure of Jc is absolutely continuous with respect to the measure of J0 = W.

2.2. Limiting Problem

In the typical model m, the observed random element XT satisfies XT ⇒ X ∼ P(θ). It will be useful in the sequel to first consider in detail the limiting problem, where X is assumed to be observed:

(3)    H0^lp : X ∼ P(θ), θ ∈ Θ0    against    H1^lp : X ∼ P(θ), θ ∈ Θ1.
Possibly randomized tests of H0^lp are measurable functions ϕS : S → [0, 1], where ϕS(x) indicates the probability of rejection conditional on observing X = x, so that the overall rejection probability of the test ϕS is ∫ ϕS dP(θ) when X ∼ P(θ). The test ϕS is thus of level α if sup_{θ∈Θ0} ∫ ϕS dP(θ) ≤ α.

In many nonstandard problems, no uniformly most powerful test exists, so consider tests that maximize a weighted average power criterion

(4)    WAP(ϕS) = ∫∫ ϕS dP(θ) dw(θ),

where w is a probability measure on Θ1. In general, the weighting function w describes the importance a researcher attaches to the ability of the test to reject for certain alternatives. A point-optimal test, as suggested by King (1988),
is a special case of a weighted average power (WAP) maximizing test for a degenerate weighting function w that puts all mass on one point. Also, if a uniformly most powerful test exists, then it maximizes WAP for all choices of w. The WAP criterion is statistically convenient, since by standard arguments, the WAP maximizing test equivalently maximizes power against the single alternative H1w^lp : X ∼ ∫ P(θ) dw(θ). With the WAP criterion as the efficiency measure, efficient level-α tests ϕ∗S in the limiting problem (3) maximize WAP subject to sup_{θ∈Θ0} ∫ ϕS dP(θ) ≤ α.

UNIT ROOT TEST EXAMPLE—CONTINUED: The weak convergence ĴT(·) ⇒ Jc(·) leads to the limiting problem where we observe the continuous time process X, and H0^lp : X ∼ J0(·) against H1^lp : X ∼ Jc(·) with c > 0. As a weighting function in the WAP criterion, consider a degenerate distribution with all mass at c1, so that we consider a point-optimal test, just as ERS. By Girsanov's theorem, the Radon–Nikodym derivative of the distribution of Jc1 with respect to the distribution of J0, evaluated at X, is given by L(X) = exp[−½c1(X(1)² − 1) − ½c1² ∫_0^1 X(s)² ds]. Thus, by the Neyman–Pearson lemma, the point-optimal test in the limiting problem is ϕ∗S(X) = 1[L(X) > cv], where the critical value cv solves P(L(J0(·)) > cv) = α.
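To make the example concrete, here is a small numerical sketch (mine, not part of the paper): it builds the sample analogue ĴT from the data with a Bartlett-type long-run variance estimator (one of many "reasonable" choices), evaluates L(ĴT), and simulates the critical value cv from the null distribution L(W). The bandwidth rule, the value c1 = 7, and all function names are illustrative assumptions.

```python
# Illustrative sketch only: L evaluated at J_T(.) = T^{-1/2} omega_T^{-1} u_{T,floor(.T)},
# with cv obtained by simulating L(J_0) = L(W) on a fine grid.
import numpy as np

def lrv_bartlett(d, bandwidth):
    """Bartlett-kernel long-run variance of the increments d (an ad hoc choice)."""
    d = d - d.mean()
    T = len(d)
    omega2 = d @ d / T
    for k in range(1, bandwidth + 1):
        omega2 += 2 * (1 - k / (bandwidth + 1)) * (d[k:] @ d[:-k]) / T
    return omega2

def L_of_path(J, c1):
    """L(X) = exp[-c1/2 (X(1)^2 - 1) - c1^2/2 * int_0^1 X(s)^2 ds]."""
    return np.exp(-0.5 * c1 * (J[-1] ** 2 - 1) - 0.5 * c1 ** 2 * np.mean(J ** 2))

def L_stat(u, c1=7.0):
    """Evaluate L at J_T built from the observed partial sums u_{T1}, ..., u_{TT}."""
    T = len(u)
    omega2 = lrv_bartlett(np.diff(u, prepend=0.0), bandwidth=int(4 * (T / 100) ** 0.25))
    return L_of_path(u / np.sqrt(T * omega2), c1)

rng = np.random.default_rng(0)
T, alpha, reps = 1000, 0.05, 5000
null_draws = [L_of_path(np.cumsum(rng.standard_normal(T)) / np.sqrt(T), 7.0)
              for _ in range(reps)]
cv = np.quantile(null_draws, 1 - alpha)
print("simulated 5% critical value:", cv)
```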
When Θ0 is not a singleton, that is, if H0^lp is composite, the derivation of a WAP maximizing test is often more involved. The weighted average power maximizing test under a composite null hypothesis is typically given by the Neyman–Pearson test of H0v^lp : X ∼ ∫ P(θ) dv(θ) against H1w^lp : X ∼ ∫ P(θ) dw(θ), where v is the least favorable distribution for θ; see Chapter 3.8 of Lehmann (1986) for a discussion. For many problems, however, it is difficult to identify the least favorable distribution v (see, for instance, King (1988), Sriananthakumar and King (2006), Müller and Watson (2008a), or Elliott and Müller (2009)). To make further progress, researchers often restrict the class of tests under consideration by additional constraints and derive the best test in the restricted class. Sometimes the WAP maximizing test in the restricted class turns out to be uniformly most powerful (that is, it maximizes WAP for all weighting functions), so that the issue of how to choose an appropriate weighting function is also avoided by imposing additional constraints. We discuss invariance as a restriction in Section 3.2 below and focus here on two other constraints on the tests ϕS.

First, consider

(5)    ∫ ϕS dP(θ) ≥ π0(θ)    for all θ ∈ Θ1

for some function π0 : Θ1 → R. The formulation (5) allows for a range of cases: with π0 = 0, (5) never binds; with π0 = α, (5) imposes unbiasedness; with π0 equal to the power of the locally best test for all θ close to Θ0, (5) effectively selects ϕS to be the locally best test.
Second, consider a (conditional) similarity constraint of the form

(6)    ∫ (ϕS − α) fS dP(θ) = 0    for all θ ∈ Θ̄0 and fS ∈ FS

for some Θ̄0 ⊂ Θ0, which would typically be the intersection of Θ0 with the closure of Θ1, and FS some set of μP almost everywhere continuous and bounded functions fS : S → R. With FS only containing the zero function, (6) never binds. With FS only containing the function that is equal to 1, (6) imposes similarity on Θ̄0. Finally, suppose ϑ : S → U is a μP almost everywhere continuous function and that FS contains all S → R functions of the form fU ∘ ϑ, where fU : U → R is continuous and bounded. Then (6) amounts to the restriction that the rejection probability of ϕS, conditional on ϑ(X), is equal to α for all θ ∈ Θ̄0, so that ϕS is a conditionally similar test.

To sum up, we will refer to level-α tests ϕ∗S in the limiting problem (3) as efficient when ϕ∗S maximizes weighted average power (4), subject to (5) and (6).

2.3. Class of Models and Tests

In the original hypothesis testing problem (1) with YT observed, tests are measurable functions ϕT : R^{nT} → [0, 1], where ϕT(yT) indicates the probability of rejection conditional on observing YT = yT. The overall rejection probability of the test ϕT in model m with parameter θ is thus given by ∫ ϕT dFT(m, θ). As in the discussion of the limiting problem, we consider weighted average power as the criterion to measure the efficiency of tests ϕT. In analogy to WAP(ϕS) in the limiting problem (4), define

WAPT(ϕT, m) = ∫∫ ϕT dFT(m, θ) dw(θ).

Also, define the asymptotic null rejection probability (ARP0) of the test ϕT in model m as

ARP0(ϕT, m) = sup_{θ∈Θ0} lim sup_{T→∞} ∫ ϕT dFT(m, θ).

With these definitions, an asymptotically powerful level-α test ϕT has large lim_{T→∞} WAPT(ϕT, m), while ARP0(ϕT, m) ≤ α. If the exact model m is unknown, it makes sense to impose that ARP0(ϕT, m) ≤ α for a large set of models m. Now the key premise of this paper is that an interesting set of such models is the set M of models satisfying (2), that is,

(7)    M = {m : PT(m, θ) = hT FT(m, θ) ⇒ P(θ) for all θ ∈ Θ}.
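As an aside (mine, not the paper's), membership in M is easy to probe numerically in the running example: any driving process whose scaled partial sums obey a functional central limit theorem gives ĴT the null limit W. A minimal Monte Carlo sketch, assuming ω = 1 known for simplicity and Student-t(3) shocks:

```python
# Illustrative sketch only: with i.i.d. t(3) shocks (normalized to unit variance),
# the null model should lie in M, so J_T(1) = T^{-1/2} u_{TT} should be roughly
# N(0,1); a Kolmogorov-Smirnov test checks this across replications.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, reps = 2000, 5000
end_points = np.empty(reps)
for r in range(reps):
    nu = rng.standard_t(df=3, size=T) / np.sqrt(3.0)  # Var of t(3) is 3
    end_points[r] = np.sum(nu) / np.sqrt(T)           # J_T(1) under c = 0
print(stats.kstest(end_points, "norm"))               # should not reject
```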
We thus restrict attention to the class C of tests ϕT with asymptotic null rejection probability no larger than the nominal level for all models m ∈ M, that is, ϕT ∈ C if and only if

(8)    sup_{θ∈Θ0} lim sup_{T→∞} ∫ ϕT dFT(m, θ) ≤ α    for all m ∈ M.

The constraint (8) only imposes pointwise validity: for a given m ∈ M and θ ∈ Θ0, the test ϕT has asymptotic rejection probability of at most α. A stronger, uniform restriction analogous to (8) is discussed in Section 3.3 below. Also, the constraint (8) technically requires asymptotic validity only for models, parametrized by θ ∈ Θ, that satisfy (2). But for any sequence of measures GT on B(R^{nT}) with hT GT ⇒ P(θG) for some θG ∈ Θ0, one can construct a model m ∈ M such that FT(m, θG) = GT.³ Members of C thus have no more than nominal asymptotic rejection probability for all data generating processes for YT that induce XT = hT(YT) to converge weakly to some null limiting distribution P(θ), θ ∈ Θ0.

The class C is never empty. Any test statistic of the form τ ∘ hT, where τ : S → R is a μP almost everywhere continuous function, has a limiting distribution equal to τP(θ) for any model m ∈ M and θ ∈ Θ by virtue of (2) and the continuous mapping theorem. The class C thus includes all tests ϕT of the form 1[τ ∘ hT > cv], where cv is chosen large enough to ensure that the event τ(X) > cv has probability of at most α for all X ∼ P(θ), θ ∈ Θ0.

The interest of considering only tests that satisfy (8), rather than the more standard constraint of correct asymptotic null rejection probability for a wide range of primitive assumptions about disturbances that all imply (2), might be motivated in two ways. On the one hand, one might genuinely worry that the true data generating process happens to be in the set of models that satisfy (2), even though the disturbances do not satisfy the primitive conditions. This line of argument then faces the question of whether such nonstandard data generating processes are plausible. Especially in a time series context, primitive conditions are often quite opaque (could it be that interest rates are not mixing?), so it is often not clear how and with what arguments one would discuss such a possibility. It is probably fair to say, however, that very general forms of sufficient primitive conditions for central limit theorems and the like were derived precisely because researchers felt uncomfortable assuming more restricted (but still quite general) conditions, so one might say that imposing (8) constitutes only one more step in this progression of generality.

____________
³ Pick m0 ∈ M with associated probability kernels Λ0T = {λ0T(θ) : θ ∈ Θ}. The family of probability measures ΛT = {λT(θ) : θ ∈ Θ} with λT(θ) = λ0T(θ) for θ ≠ θG and λT(θG) = GT then also forms a probability kernel and defines a model m ∈ M.
UNIT ROOT TEST EXAMPLE—CONTINUED: The literature has developed a large number of sufficient conditions on the disturbances νTt that imply ĴT(·) ⇒ Jc(·); see, for instance, McLeish (1974) for a martingale difference sequence framework, Wooldridge and White (1988) for mixing conditions, Phillips and Solo (1992) for linear process assumptions, and Davidson (2002) for near-epoch dependence. Arguably, when invoking such assumptions, researchers do not typically have a specific data generating process in mind that is known to satisfy the conditions; rather, there is great uncertainty about the true data generating process, and the hope is that by deriving tests that are valid for a large class of data generating processes, the true model is also covered. The primitive conditions are, therefore, quite possibly not a reflection of what researchers are sure is true about the data generating process, but rather an attempt to assume little so as to gain robustness. From that perspective, it seems quite natural to impose that the asymptotic rejection probability is no larger than the nominal level for all models that satisfy ĴT(·) ⇒ W(·). In fact, Stock (1994), White (2001, p. 179), Breitung (2002), Davidson (2002, 2009), and Müller (2008) defined the unit root null hypothesis in terms of the convergence T^{−1/2} u_{T,⌊·T⌋} ⇒ ωW(·), making the requirement (8) quite natural (although ĴT(·) ⇒ W(·) under the null hypothesis also requires consistency of the long-run variance estimator ω̂²T as an additional assumption; see Müller (2007) for discussion).

On the other hand, one might argue that the only purpose of an asymptotic analysis is to generate approximations for the small sample under study. From that perspective, it is irrelevant whether interest rates are indeed mixing or not, and the only interesting question becomes whether asymptotic properties derived under an assumption of mixing are useful approximations for the small sample under study. So even in an i.i.d. setting, one might be reluctant to fully exploit all restrictions of a given model—not because it would not be true that with a very large data set, a fully efficient test would be excellent, but because asymptotics might be a poor guide to the behavior of such a test in the sample under study. Focus on the class C is then motivated by a concern that additional asymptotic implications of the primitive conditions beyond (2) are potentially poor approximations for the sample under study, and attempts to exploit them may lead to nontrivial size distortions.

LOW-FREQUENCY UNIT ROOT TEST EXAMPLE: Müller and Watson (2008b) argued that in a macroeconomic context, it makes sense to take asymptotic implications of standard models of low frequency variability seriously only over frequencies below the business cycle. So in particular, when uTt is modelled as local to unity, the usual asymptotic implication is the functional convergence T^{−1/2} u_{T,⌊·T⌋} ⇒ ωJc(·). Müller and Watson (2008b) instead derived a
scale invariant (we discuss invariance in Section 3.2) point-optimal unit root test that only assumes a subset of this convergence, that is,

(9)    {T^{−3/2} Σ_{t=1}^T ψl(t/T) uTt}_{l=1}^q ⇒ {ω ∫_0^1 ψl(s) Jθ(s) ds}_{l=1}^q,

where ψl(s) = √2 cos(πls) and q is some fixed number. No additional (and potentially hard to interpret) assumption is made about the existence of a particular consistent estimator ω̂²T of ω². The number q is chosen so that the frequencies of the weight functions ψl, l = 1, ..., q, are below business cycle frequency for the span of the sample under study. The rationale is that picking q larger would implicitly imply a flat spectrum for ΔuTt = uTt − uTt−1 in the I(1) model over business cycle frequencies, which is not an attractive assumption for macroeconomic data. So even if one were certain that for a long enough span of observations, the functional convergence T^{−1/2} u_{T,⌊·T⌋} ⇒ ωJc(·) eventually becomes a good approximation, it does not seem well advised to exploit its implications beyond (9) for the sample under study.
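A small sketch (my illustration, not from the paper) of the left-hand side of (9), that is, of the q cosine-weighted averages that summarize the low-frequency variability of the series; the grid convention, the example series, and q = 12 are arbitrary choices:

```python
# Illustrative sketch only: X_{T,l} = T^{-3/2} sum_{t=1}^T psi_l(t/T) u_{Tt},
# with psi_l(s) = sqrt(2) cos(pi l s), l = 1, ..., q.
import numpy as np

def low_freq_transform(u, q):
    T = len(u)
    s = np.arange(1, T + 1) / T                                   # grid t/T
    psi = np.sqrt(2) * np.cos(np.pi * np.outer(np.arange(1, q + 1), s))
    return psi @ u / T ** 1.5                                     # length-q vector

rng = np.random.default_rng(2)
u = np.cumsum(rng.standard_normal(500))                           # random-walk example
print(low_freq_transform(u, q=12))
```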
2.4. Main Result

In addition to the constraint (8), we allow for the possibility that tests ϕT are restricted to possess further asymptotic properties. In particular, we consider

(10)    lim inf_{T→∞} ∫ ϕT dFT(m, θ) ≥ π0(θ)    for all m ∈ M, θ ∈ Θ1,

(11)    lim_{T→∞} ∫ (ϕT − α)(fS ∘ hT) dFT(m, θ) = 0    for all m ∈ M, θ ∈ Θ̄0, and fS ∈ FS.

The constraints (10) and (11) are asymptotic analogues of the constraints (5) and (6) in the limiting problem introduced above. So setting π0 = α, for instance, imposes asymptotic unbiasedness of the test ϕT in the sense that for all models m ∈ M, the asymptotic rejection probability of ϕT under the alternative is not smaller than the nominal level. The formulation (11) of "asymptotic conditional similarity" is convenient, as it avoids explicit limits of conditional distributions; see Jansson and Moreira (2006) for discussion and references. Also, without loss of generality, we can always impose (10) and (11), since with π0 = 0 and FS = {0}, they do not constrain the tests ϕT in any way.

The main result of this paper is that in the class of tests that satisfy (8), efficient tests in the limiting problem ϕ∗S, evaluated at sample analogues with X replaced by XT = hT(YT), are asymptotically efficient.

THEOREM 1: Let ϕ∗S : S → [0, 1] be a level-α test in the limiting problem (3) that maximizes weighted average power (4) subject to (5) and (6). Suppose ϕ∗S is
μP almost everywhere continuous and define ϕ̂∗T : R^{nT} → [0, 1] as ϕ̂∗T = ϕ∗S ∘ hT. Then the following statements hold:
(i) ϕ̂∗T satisfies (8), (10), and (11), and lim_{T→∞} WAPT(ϕ̂∗T, m) = WAP(ϕ∗S) for all m ∈ M.
(ii) For any test ϕT : R^{nT} → [0, 1] satisfying (8), (10), and (11),

lim sup_{T→∞} WAPT(ϕT, m) ≤ WAP(ϕ∗S)    for all m ∈ M.
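Theorem 1(i) is straightforward to visualize by simulation. The following sketch (mine; it assumes ω = 1 known, so that hT needs no long-run variance estimate, and uses an arbitrary c1 = 7) compares the rejection rate of ϕ̂∗T at c = c1 under Gaussian and under normalized t(3) shocks; both models lie in M, so the two rates should be close for large T:

```python
# Illustrative sketch only: approximately equal power of 1[L(J_T) > cv] across
# two models in M (Gaussian and heavy-tailed i.i.d. shocks), with omega = 1.
import numpy as np

rng = np.random.default_rng(3)
T, c1, alpha, reps = 1000, 7.0, 0.05, 2000

def L_of_path(J, c1):
    return np.exp(-0.5 * c1 * (J[-1] ** 2 - 1) - 0.5 * c1 ** 2 * np.mean(J ** 2))

def ar1_path(nu, c):
    """Scaled path T^{-1/2} u_{T,t} with u_{T,t} = (1 - c/T) u_{T,t-1} + nu_t."""
    n, rho = len(nu), 1 - c / len(nu)
    u, acc = np.empty(n), 0.0
    for t in range(n):
        acc = rho * acc + nu[t]
        u[t] = acc
    return u / np.sqrt(n)

null_draws = [L_of_path(ar1_path(rng.standard_normal(T), 0.0), c1) for _ in range(reps)]
cv = np.quantile(null_draws, 1 - alpha)

def power(shocks):
    return np.mean([L_of_path(ar1_path(shocks(T), c1), c1) > cv for _ in range(reps)])

gauss = lambda n: rng.standard_normal(n)
heavy = lambda n: rng.standard_t(df=3, size=n) / np.sqrt(3.0)
print(power(gauss), power(heavy))   # roughly equal, as Theorem 1(i) predicts
```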
UNIT ROOT TEST EXAMPLE—CONTINUED: The function ϕ∗S : D[0,1] → [0, 1] is continuous at almost all realizations of W, so that part (i) of Theorem 1 shows that the test ϕ̂∗T(YT) = ϕ∗S(ĴT(·)) = 1[exp[−½c1(ĴT(1)² − 1) − ½c1² ∫_0^1 ĴT(s)² ds] > cv] has asymptotic null rejection probability equal to the nominal level and asymptotic weighted average power equal to P(L(Jc1) > cv) for all models in M, that is, models that satisfy ĴT(·) = T^{−1/2} ω̂T^{−1} u_{T,⌊·T⌋} ⇒ W(·) under the null hypothesis and ĴT(·) ⇒ Jc(·) under the local alternative. Note that ϕ̂∗T(YT) is asymptotically equivalent to the efficient unit root test statistic derived by ERS, so the contribution of part (i) of Theorem 1 for the unit root testing example is only to point out that ϕ̂∗T has the same asymptotic properties under the null and alternative hypothesis for all models in M. The more interesting finding is part (ii) of Theorem 1: For any unit root test that has higher asymptotic power than ϕ̂∗T for some model m1 ∈ M, that is, some model for which ĴT(·) ⇒ Jc(·), there exists another model m0 ∈ M for which the test has asymptotic null rejection probability larger than the nominal level. In other words, ERS's test is point-optimal in the class of tests satisfying (8). There is thus a sense in which ERS's test is better than, say, a test based on the Dickey and Fuller (1979) t-test type statistic (ĴT(1)² − 1)/(∫_0^1 ĴT(s)² ds)^{1/2} (which is an element of the class of tests defined by (8)) also without an assumption of Gaussian disturbances.

The proof of part (i) of Theorem 1 follows from the definition of weak convergence, the continuous mapping theorem, and dominated convergence. To gain some intuition for part (ii), consider first the case where the hypotheses are simple, Θ0 = {θ0} and Θ1 = {θ1}, and (10) and (11) do not bind. Let L : S → R be the Radon–Nikodym derivative of P(θ1) with respect to P(θ0), so that by the Neyman–Pearson lemma, ϕ∗S rejects for large values of L. For simplicity, assume that L^i = 1/L is continuous and bounded. The central idea is to take, for any m1 ∈ M, the implied distribution FT(m1, θ1) of YT and to "tilt" the probabilities according to L^i ∘ hT to construct a corresponding distribution GT so that hT GT ⇒ P(θ0). This tilted probability distribution needs to integrate to 1, so let κT = ∫ (L^i ∘ hT) dFT(m1, θ1) = ∫ L^i dPT(m1, θ1) and define the measure GT on B(R^{nT}) via ∫_A dGT = κT^{−1} ∫_A (L^i ∘ hT) dFT(m1, θ1) for all A ∈ B(R^{nT}). By construction, under GT, the function hT induces the measure
QT on S, where QT satisfies ∫ ϑ dQT = κT^{−1} ∫ ϑ L^i dPT(m1, θ1) for any bounded and continuous function ϑ : S → R. Furthermore, the S → R functions ϑL^i and L^i are bounded and continuous, so that PT(m1, θ1) ⇒ P(θ1) implies κT → ∫ L^i dP(θ1) = ∫ L^i L dP(θ0) = ∫ dP(θ0) = 1 and ∫ ϑ L^i dPT(m1, θ1) → ∫ ϑ L^i dP(θ1) = ∫ ϑ dP(θ0), so that hT GT ⇒ P(θ0). Thus, by (8), lim sup_{T→∞} ∫ ϕT dGT ≤ α. Furthermore, by construction, the Radon–Nikodym derivative between GT and FT(m1, θ1) is given by κT(L ∘ hT). Therefore, by the Neyman–Pearson lemma, the best test of H̃0 : YT ∼ GT against H̃1 : YT ∼ FT(m1, θ1) rejects for large values of L ∘ hT, and no test can have a better asymptotic level and power trade-off than this sequence of optimal tests. But ϕ̂∗T also rejects for large values of L ∘ hT and has the same asymptotic null rejection probability, and the result follows.

In the more general case where Θ0 is composite, and (10) and (11) potentially bind, one can similarly construct, for any distribution FT(m1, θ1), m1 ∈ M, a corresponding model m0 ∈ M via a tilting by L^i(θ) ∘ hT for θ ∈ Θ, where L^i(θ) is the Radon–Nikodym derivative of P(θ) with respect to P(θ1). By construction, XT is a sufficient statistic for distinguishing H̃0 : YT ∼ FT(m0, θ), θ ∈ Θ0, against H̃1 : YT ∼ FT(m0, θ1) = FT(m1, θ1). Standard arguments thus imply that it suffices to consider tests that are functions of XT. But the testing problem involving XT is essentially the same as that involving X, since the likelihood ratio statistic of XT ∼ PT(m0, θ1) against XT ∼ PT(m0, θ) is proportional to L(θ), with a factor of proportionality that converges to unity as T → ∞. This again suggests that one cannot do better than the best test in the limiting problem under the constraints (5) and (6).

2.5. Discussion

Comment 1. Recall how the 14 papers mentioned in the Introduction derive an asymptotically efficient and robust test: Initially, restrict attention to the canonical parametric version of the model of interest, usually with Gaussian i.i.d. disturbances. Call this model m∗, so that YT ∼ FT(m∗, θ), and, for simplicity, consider the problem of testing the simple hypotheses H0 : θ = θ0 against H1 : θ = θ1. In this parametric model, FT(m∗, θ1) is absolutely continuous with respect to FT(m∗, θ0), and the small sample likelihood ratio statistic LRT can be derived. The small sample optimal test in model m∗ thus rejects for large values of LRT. Express LRT (up to asymptotically negligible terms) as a continuous function L : S → R of a random element X∗T = h∗T(YT) that converges weakly under the null and contiguous alternative: LRT = L(X∗T) + op(1), where X∗T ⇒ X with X ∼ P(θ). Thus, by the continuous mapping theorem, the likelihood ratio statistic also converges weakly under the null and alternative, LRT ⇒ LR ∼ LP(θ), and the asymptotic critical value is computed from the distribution LP(θ0). Furthermore, an asymptotically robust test statistic is given by L(XT), where XT = hT(YT) is a "robustified" version of X∗T such that
XT ⇒ X ∼ P(θ) in many models m of interest, including m∗ (whereas typically X∗T = h∗T(YT) ⇏ X for some plausible models).

UNIT ROOT TEST EXAMPLE—CONTINUED: With i.i.d. standard normal driving disturbances, the small sample efficient unit root test rejects for large values of LRT = exp[−½c1 T^{−1}(u²TT − Σ_{t=1}^T (ΔuTt)²) − ½c1² T^{−2} Σ_{t=1}^T u²Tt−1]; compare Dufour and King (1991). With h∗T(YT) = X∗T = T^{−1/2} u_{T,⌊·T⌋} ⇒ Jc(·), we thus have LRT = L(X∗T) + op(1) with L(x) = exp[−½c1(x(1)² − 1) − ½c1² ∫_0^1 x(s)² ds]. The asymptotically robustified test that allows for serially correlated and non-Gaussian disturbances is based on L(XT), where XT = ĴT(·) = T^{−1/2} ω̂T^{−1} u_{T,⌊·T⌋}.

The end product of this standard approach is a test based on the statistic L(XT), with critical value computed from the distribution LP(θ0). Now, generically, this test is identical to the test ϕ̂∗T of Theorem 1. This follows from a general version of Le Cam's third lemma (see, for instance, Lemma 27 of Pollard (2001)): If the measures FT(m∗, θ0) and FT(m∗, θ1) are contiguous with likelihood ratio statistic LRT, and under YT ∼ FT(m∗, θ0), (LRT, X∗T) ⇒ (L(X), X) with X ∼ P(θ0) and some function L : S → R, then under YT ∼ FT(m∗, θ1), X∗T ⇒ X ∼ Q, and the Radon–Nikodym derivative of Q with respect to P(θ0) is equal to L. So if it is known that under YT ∼ FT(m∗, θ1), X∗T ⇒ X ∼ P(θ1), then it must be that Q = P(θ1), and L(X) is recognized as the Neyman–Pearson test statistic of the limiting problem H0^lp : θ = θ0 against H1^lp : θ = θ1 with X ∼ P(θ) observed. The test that rejects for large values of L(XT) is thus simply the efficient test of this limiting problem, evaluated at sample analogues. This explains why in the unit root example the test ϕ̂∗T of Theorem 1 had to be asymptotically equivalent to ERS's statistic.

Comment 2. This standard construction of tests starting from the canonical parametric model m∗, that is, rejecting for large values of L(XT), is by construction asymptotically efficient in model m∗ and, by part (i) of Theorem 1, it has the same asymptotic rejection probabilities for all models m ∈ M. This does not, however, make the test L(XT) necessarily overall asymptotically efficient: It might be that there exists another test with the same asymptotic power in model m∗ and higher asymptotic power for at least some other model m1 ∈ M. The semiparametric efficient unit root test by Jansson (2008) is an example of a test with the same asymptotic power as ERS's test for Gaussian i.i.d. disturbances and higher asymptotic power for some other driving disturbances. Now part (ii) of Theorem 1 shows that whenever a test has higher asymptotic power than ϕ̂∗T for some model m1 ∈ M, then it cannot be a member of C. Any partial adaptation to models m1 ≠ m∗, if successful, necessarily implies the existence of a model m0 ∈ M for which the test has asymptotic rejection probability larger than the nominal level. In particular, Theorem 1 implies the existence
of a double array process (uT1, ..., uTT) satisfying T^{−1/2} ω̂T^{−1} u_{T,⌊·T⌋} ⇒ W(·) for which Jansson's (2008) test has asymptotic rejection probability greater than the nominal level. In other words, Theorem 1 shows ϕ̂∗T to be an asymptotically efficient test in the class C, because no test in C can exist with higher asymptotic (weighted average) power for any model m ∈ M.

Comment 3. In this sense, Theorem 1 implies a particular version of an asymptotic essentially complete class result for the hypothesis test (1): Set π0 in (10) equal to the power function of an admissible test in the limiting problem (3), so that (10) effectively determines ϕ∗S. The theorem then shows that no test ϕT in C can have higher asymptotic power than ϕ̂∗T uniformly over θ ∈ Θ1. As long as all admissible ϕ∗S are μP almost everywhere continuous, the resulting tests ϕ̂∗T thus form an essentially complete subset of asymptotically admissible tests of C. In particular, if a uniformly most powerful test exists in the limiting problem, then ϕ∗S is this test for any weighting function w, and repeated application of Theorem 1 with w having point mass at any θ ∈ Θ1 then shows that the test ϕ̂∗T is correspondingly asymptotically uniformly most powerful.

Comment 4. As discussed in the Introduction, an important subset of tests that are automatically members of C are those asymptotic level-α tests that can be written as a (sufficiently) continuous function of XT: Let ϕc : S → [0, 1] be a μP almost everywhere continuous function. Then ϕT = ϕc ∘ hT has asymptotic rejection probability equal to ∫ ϕc dP(θ) for all θ ∈ Θ by the continuous mapping theorem and thus satisfies (8) whenever it is of level α in the limiting problem. As a corollary to Theorem 1, ϕ̂∗T is thus asymptotic weighted average power maximizing among all such tests, and this also holds if ϕc (and ϕ∗S in ϕ̂∗T = ϕ∗S ∘ hT) is restricted by (5) and (6). This reasoning can be generalized further by considering the set of tests ϕT : R^{nT} → [0, 1] that are amenable to a continuous mapping theorem for a sequence of functions (cf. Theorem 1.11.1 in van der Vaart and Wellner (1996)), so that their limiting behavior is still governed by some μP almost everywhere continuous function ϕc. Reliance on ϕ̂∗T can therefore be motivated even if (8) is not an appealing restriction. In nonlinear and/or dynamic models, it might be easier to think about high level weak convergences, rather than small sample distributions, even under strong parametric assumptions. The problem of testing for parameter instability in a general GMM framework, as considered by Sowell (1996) and discussed in detail in Section 4.3 below, or the weak instrument problem in a general GMM framework, as considered by Stock and Wright (2000), arguably falls into this class. In these problems, one natural starting point is tests that are (sufficiently) continuous functions of the weakly converging statistics, and Theorem 1 identifies the best such test.
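For concreteness, a sketch (mine, not the paper's) of one such continuous-functional test in the running example: the Dickey–Fuller-type statistic mentioned above, computed from ĴT and calibrated by simulating its null distribution; rejection is for small (very negative) values:

```python
# Illustrative sketch only: a test of the form phi_c(J_T) with phi_c a
# continuous functional, here (J_T(1)^2 - 1) / (int_0^1 J_T(s)^2 ds)^{1/2};
# it is a member of C by the continuous mapping theorem, though less
# powerful than the point-optimal test.
import numpy as np

def df_type_stat(J):
    return (J[-1] ** 2 - 1) / np.sqrt(np.mean(J ** 2))

rng = np.random.default_rng(4)
T, alpha, reps = 1000, 0.05, 5000
null_draws = [df_type_stat(np.cumsum(rng.standard_normal(T)) / np.sqrt(T))
              for _ in range(reps)]
cv = np.quantile(null_draws, alpha)   # left-tail rejection region
print("simulated 5% critical value:", cv)
```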
UNIT ROOT TEST EXAMPLE—CONTINUED: Stock (1999) considered exactly tests of the form ϕc(ĴT(·)) in the notation developed here. Theorem 1 thus also identifies ϕ∗S(ĴT(·)) as the point-optimal test in his class.

Comment 5. When Θ is of finite dimension, one might call the set of models M "semiparametric," with the weak convergence assumption (2) constituting the nonparametric aspect. This raises the question of how Theorem 1 relates to the more traditional concept of semiparametric efficient inference. In the following discussion, we draw this comparison for a simple model with locally asymptotically normal (LAN) likelihood ratios.

For expositional ease, consider T scalar observations YT = (yT1, ..., yTT)′
with mean E[yTt] = μ and variance E[(yTt − μ)²] = 1, where the hypotheses are H0 : μ = 0 against H1 : μ > 0. Reparametrize θ = √T μ, so that in terms of θ, the problem becomes H0 : θ = 0 against H1 : θ > 0. It is natural to take the sample mean ȳT = T^{−1} Σ_{t=1}^T yTt as the basis for inference, so consider the monotone transformation XT = hT(YT) = √T ȳT ⇒ X ∼ N(θ, 1), where the convergence stems from a central limit theorem. The likelihood ratio statistic of X ∼ N(θ1, 1) against X ∼ N(0, 1) equals L(X) = exp[θ1X − ½θ1²], so that by the Neyman–Pearson lemma, ϕ∗S(X) = 1[X > 1.645] is the uniformly most powerful 5% level test in the limiting problem of observing X ∼ N(θ, 1). Thus, by Theorem 1, ϕ̂∗T(YT) = ϕ∗S(XT) is the best 5% level test among all tests whose asymptotic null rejection probability is no greater than 5% whenever XT ⇒ N(0, 1).

Ignoring the unboundedness of L^i = 1/L for simplicity, the basic argument in the proof of Theorem 1 starts from any alternative model FT(m, θ1) (so that under FT(m, θ1), XT ∼ N(θ1, 1)) and constructs a corresponding null model GT by tilting the probabilities according to L^{−1} ∘ hT = exp[−θ1XT + ½θ1²], that is, ∫_A dGT = κT^{−1} ∫_A (L^i ∘ hT) dFT(m, θ1) for all A ∈ B(R^T), where κT → 1 ensures that ∫_{R^T} dGT = 1. If under the alternative model FT(m, θ1), {yTt}_{t=1}^T is i.i.d. with log density y ↦ g(y − T^{−1/2}θ1), we obtain

(12)    ∫_A dGT = ∫_A Π_{t=1}^T exp[g(yTt − T^{−1/2}θ1) − T^{−1/2}θ1 yTt + ½T^{−1}θ1² − T^{−1} ln κT] dYT,

so that the tilted model GT is also i.i.d., with log density g(y − T^{−1/2}θ1) − T^{−1/2}θ1 y + ½T^{−1}θ1² − T^{−1} ln κT.

Now consider the analysis of inference about the mean in an i.i.d. scalar series using traditional semiparametric techniques. Write ε for a mean zero random variable with distribution yTt − μ, and embed the log density g of ε in a smooth parametric model gη with parameter η ∈ R. Under the local
parametrization (θ, υ) = √T(μ, η − η0) and some regularity conditions, we obtain that the log-likelihood process satisfies

(13)    Σ_{t=1}^T (g_{η0+T^{−1/2}υ}(yTt − T^{−1/2}θ) − g_{η0}(yTt)) = S′T (θ, υ)′ − ½ (θ, υ) I (θ, υ)′ + op(1),

where ST = T^{−1/2}(Σ_{t=1}^T ℓμ(yTt), Σ_{t=1}^T ℓη(yTt))′ ⇒ N(0, I), I = ((Iμμ, Iμη)′, (Iμη, Iηη)′), and ℓμ and ℓη are R → R functions with E[ℓμ(ε)] = E[ℓη(ε)] = 0, E[ℓμ(ε)²] = Iμμ, E[ℓη(ε)²] = Iηη, E[ℓμ(ε)ℓη(ε)] = Iμη, and E[ε(ℓμ(ε), ℓη(ε))] = (1, 0). It is useful to think of ℓμ and ℓη as appropriate generalizations of the usual score functions dg_{η0}(y − μ)/dμ|_{μ=0} and dg_η(y)/dη|_{η=η0}, respectively, so that under weak regularity conditions, E[ε(ℓμ(ε), ℓη(ε))] = (1, 0) follows from the identity ∫ y exp[gη(y − μ)] dy = μ. Furthermore, by Le Cam's third lemma, ST ⇒ N(I(θ, υ)′, I) when the true log density of yTt is given by g_{η0+T^{−1/2}υ}(y − T^{−1/2}θ).

For any θ1 > 0, consider the best test of the simple null hypothesis H0 : (θ, υ) = (0, υ0) with υ0 = Iηη^{−1} Iμη θ1 against the single alternative H1 : (θ, υ) = (θ1, 0). In large samples, the Neyman–Pearson test rejects for large values of ((θ1, −υ0) I (θ1, −υ0)′)^{−1/2} (θ1, −υ0) ST ⇒ N(a, 1), where a = 0 under the null hypothesis and a = θ1(Iμμ − I²μη/Iηη)^{1/2} under the alternative. Because a = 0 for any local parameter of the form (0, υ), the choice of υ0 = Iηη^{−1} Iμη θ1 is least favorable, so that by Theorem 7 of Lehmann (1986, p. 104), this test is the best test against H1 also under the composite null hypothesis that leaves υ unrestricted. Furthermore, this test does not depend on θ1 (since it is equivalent to rejecting for large values of (Iηη, −Iμη) ST) and is thus uniformly most powerful against alternatives of the form (θ, 0) with θ > 0.

Now Stein's (1956) key insight underlying semiparametric efficiency calculations is that inference with g unknown is at least as hard as inference in any of the smooth parametric submodels with parameter η. Because the power of the best test in any particular submodel increases in Iμμ − I²μη/Iηη, the least favorable submodel is the one that minimizes this quantity, subject to the constraint that E[ε(ℓμ(ε), ℓη(ε))] = (1, 0). This is solved by ℓη(ε) = ℓμ(ε) − ε, for which Iηη = Iμη = Iμμ − 1 and a = θ1 under the alternative. But XT = √T ȳT achieves this noncentrality parameter, so that ϕ̂∗T(YT) = 1[XT > 1.645] is also the semiparametric efficient 5% level test.

As just discussed, the rationale for this semiparametric efficiency claim is that for any alternative model with log density g(y − T^{−1/2}θ1), there exists (or, more precisely, one can arbitrarily well approximate) a least favorable smooth parametric submodel gη with g = g_{η0} and ℓη(ε) = ℓμ(ε) − ε, and a least favorable null value υ0 = Iηη^{−1} Iμη θ1 = θ1 of the parameter η = η0 + T^{−1/2}υ0, so that by (13), the likelihood ratio process satisfies
Σ_{t=1}^T (g_{η0}(yTt − T^{−1/2}θ1) − g_{η0+T^{−1/2}θ1}(yTt)) = θ1 T^{−1/2} Σ_{t=1}^T yTt − ½θ1² + op(1). Solving this for the least favorable null log density, we obtain

Σ_{t=1}^T g_{η0+T^{−1/2}θ1}(yTt) = Σ_{t=1}^T [g_{η0}(yTt − T^{−1/2}θ1) − θ1 T^{−1/2} yTt + ½T^{−1}θ1²] + op(1),
which is essentially the same as the tilted model (12). So not only do both approaches to the question of efficient tests lead to the same test ϕ̂∗T(YT) = 1[XT > 1.645], but also the least favorable null model that "defeats" attempts to learn from the data about μ through statistics other than XT is (at least approximately) the same in both approaches. The semiparametric efficiency result is arguably sharper, though, as it accommodates the moment condition and i.i.d. property of the observations, whereas Theorem 1 yields a less specific efficiency claim.

Comment 6. One advantage of Theorem 1 relative to the semiparametric efficiency approach is the general nature of its claim. The result requires only minimal regularity conditions beyond the basic weak convergence assumption and covers a wide range of nonstandard testing problems. In contrast, relatively little is known about traditional semiparametric efficiency in models outside the LAN class, and deriving such results is nontrivial and involves additional regularity assumptions (cf. Jansson (2008)). When Theorem 1 yields a less powerful test than what is obtained from a traditional semiparametric efficiency calculation, one might try to reconcile the results either by assuming additional weak convergences (which potentially increases the power of ϕ∗S and ϕ̂∗T) or by considering less constrained semiparametric models (which potentially decreases the power of the semiparametric efficient test). For the latter, inspection of the form of the tilted model GT in the proof of Theorem 1 may suggest an appropriate relaxation of the constraints.

UNIT ROOT TEST EXAMPLE—CONTINUED: Proceeding as in Rothenberg and Stock (1997) and Jansson (2008) yields that the log of the likelihood ratio statistic for testing H0 : c = 0 against H1 : c = c1 in an AR(1) model with coefficient 1 − c/T and i.i.d. driving errors with the same distribution as ε satisfies −c1 T^{−1} Σ_{t=1}^T ℓμ(ΔuTt) uTt−1 − ½c1² Iμμ T^{−2} Σ_{t=1}^T u²Tt−1 + op(1), with ε, ℓμ, and Iμμ as defined in Comment 5 above. Heuristically, the tilting by 1/L is approximately equal to a tilting by 1/LRT, and a model tilted by 1/LRT has joint log-density Σ_{t=1}^T [g(ΔuTt) − c1 T^{−1}(ℓμ(ΔuTt) − ΔuTt) uTt−1 − ½c1² T^{−2}(Iμμ − 1) u²Tt−1] + op(1). This is recognized as an expansion of Σ_{t=1}^T g^M_{η0−c1/T}(ΔuTt, uTt−1) with
exp(g^M_η(y, x)) = e^{g(y)}(1 + (η − η0)(ℓμ(y) − y)x) for η ∈ R. Note that ∫ exp(g^M_η(y, x)) dy = 1 and ∫ y exp(g^M_η(y, x)) dy = 0 for all x and η, suggesting that the Σ_{t=1}^T g^M_{η0−c1/T}(ΔuTt, uTt−1) are the log Markov kernels of a time homogeneous first order Markov martingale for {uTt}_{t=1}^T. As (ℓμ(y) − y)x is unbounded, though, g^M_η(y, x) is not well defined even in a T^{−1} neighborhood of η0. But one could presumably employ an appropriate approximation scheme, so that an analysis analogous to Jansson (2008) would yield ϕ̂∗T to be the semiparametrically point-optimal unit root test in the first order time homogeneous Markov model against AR(1) alternatives. A detailed analysis of this conjecture is beyond the scope of this paper.

Comment 7. The weak convergence assumption (2) can be viewed as a way to express the regularity one is willing to impose on some inference problem. Implicitly, this is standard practice: invoking standard normal asymptotics for the ordinary least squares (OLS) estimator of the largest autoregressive root ρ is formally justified for any value of |ρ| < 1, but effectively amounts to the assumption that the true parameter in the sample under study is not close to the local-to-unity region. Similarly, a choice of weak versus strong instrument asymptotics or local versus nonlocal time varying parameter asymptotics expresses knowledge of regularity in terms of weak convergences. In some instances, it might be natural to express all regularity that one is willing to impose in this form, and Theorem 1 then shows that ϕ̂∗T efficiently exploits this information. The i.i.d. nature of (standard) cross sectional data is not easily embedded in a weak convergence statement, so that such a starting point is much more convincing in a time series context. Also, interesting high level weak convergence assumptions are certainly not arbitrary, but derive their plausibility from the knowledge that there exists a range of underlying primitive conditions that would imply them.

If one expresses the regularity of a problem in terms of weak convergences, one faces a choice of what to assume. Of course, additional weak convergence assumptions cannot reduce the information about θ in the limiting problem. But not all weak convergences are relevant for deciding between H0 and H1: Whenever the conditional distribution of the additional limiting element does not depend on θ, the efficient test remains unaltered. This holds, for example, for any additional convergence in probability to a constant limiting element whose value does not depend on θ. In the context of unit root tests, for instance, ERS's test remains asymptotically efficient in the sense of Theorem 1 if, in addition to ĴT(·) ⇒ Jc(·), the average sample skewness of ΔuTt = uTt − uTt−1, T^{−1} Σ_{t=1}^T (ΔuTt)³, is assumed to converge to zero in probability, or if T^{−1/2} Σ_{t=1}^{⌊·T⌋} (ΔuTt)³ ⇒ W̃(·) for any c ≥ 0, where W̃(·) is a Wiener process independent of Jc.

At the same time, one might also be reluctant to impose the full extent of the "usual" weak convergence assumption; in general, this leads to less powerful
inference. The efficiency claim of Theorem 1 then shows that it is impossible to use data-dependent methods to improve inference for more regular data while still remaining robust in the sense of (8).

LOW-FREQUENCY UNIT ROOT TEST EXAMPLE—CONTINUED: Since (9) is strictly weaker than the standard assumption T^{−1/2} u_{T,⌊·T⌋} ⇒ ωJc(·), Müller and Watson's (2008b) low-frequency unit root test is less powerful than ERS's test. It is nevertheless point-optimal in the sense of efficiently extracting all regularity contained in the weaker statement (9): Theorem 1 implies that it is impossible to let the data decide whether (9) holds for q larger than assumed (that is, whether the local-to-unity model provides good approximations also over business cycle frequencies), and to conduct more powerful inference if it does, without inducing size distortions for some model satisfying (9).

3. EXTENSIONS

3.1. Consistently Estimable Nuisance Parameters

Suppose the testing problem (1) involves an additional nuisance parameter γ ∈ Γ, where Γ is a metric space, so that now YT ∼ FT(m, θ, γ), and the null and alternative hypotheses become

(14)    H0 : (θ, γ) ∈ Θ0 × Γ    against    H1 : (θ, γ) ∈ Θ1 × Γ.

Suppose γ can be consistently estimated by the estimator γ̂T under the null and alternative hypothesis. For a fixed value of γ = γ0, that is, when Γ is a singleton, this is a special case of what is covered by Theorem 1, as discussed in Comment 7. But when Γ is not a singleton, the analysis above is not immediately applicable, because the limiting measures of X were assumed to be mutually absolutely continuous for all parameter values. Thus denote by P^e_T(m, θ, γ) (e denotes extended) the distribution of (γ̂T, XT) = h^e_T(YT) when YT ∼ FT(m, θ, γ), so that the weak convergence assumption analogous to (2) now becomes

(15)    P^e_T(m, θ, γ) ⇒ P^e(θ, γ)    pointwise for all (θ, γ) ∈ Θ × Γ,

where P^e(θ, γ) is the product measure between the measure P(θ, γ) of X on B(S) (which might depend on γ, and where for all γ ∈ Γ, the measures P(θ, γ), θ ∈ Θ, are equivalent to some measure μP(γ)) and the degenerate probability measure on B(Γ) that puts all mass on the point γ. The limiting problem now becomes testing (14) with X ∼ P(θ, γ) observed and γ known, and optimal tests ϕ^{e∗}_S in the limiting problem are indexed by γ ∈ Γ, ϕ^{e∗}_S : Γ × S → [0, 1], so that for each γ0 ∈ Γ, ϕ^{e∗}_S(γ0, ·) is the weighted average power maximizing test of (14) for γ = γ0, possibly subject to constraints of the form (5) and (6).

Now as long as the test ϕ^{e∗}_S is μP(γ) almost everywhere continuous for all γ ∈ Γ, the same arguments as employed in the proof of Theorem 1(i) imply
that under (15) and for all γ ∈ Γ, the test ϕ̂^{e∗}_T = ϕ^{e∗}_S(γ̂T, XT) has the same asymptotic size and power as ϕ^{e∗}_S does. Furthermore, one can invoke Theorem 1(ii) to conclude that ϕ̂^{e∗}_T is efficient when restricting attention to models with γ = γ0 known. Since ϕ̂^{e∗}_T does not require knowledge of γ, this shows that ϕ̂^{e∗}_T is overall efficient. Problems that involve consistently estimable parameters γ with an impact on the efficient limiting tests are thus covered by the results of Theorem 1 under the additional assumption that the family of efficient limiting tests, indexed by γ, depends on γ sufficiently smoothly.
UNIT ROOT TEST EXAMPLE—CONTINUED: Instead of T^{−1/2} ω̂T^{−1} u_{T,⌊·T⌋} = ĴT(·) ⇒ Jc(·), consider the weak convergence h^e_T(YT) = (ω̂²T, T^{−1/2} u_{T,⌊·T⌋}) ⇒ (ω², ωJc(·)) as a starting point. In the limiting problem, X = ωJc(·) is observed with ω² known, and the point-optimal test is of the form ϕ^{e∗}_S(ω², X) = 1[exp[−½c1(ω^{−2}X(1)² − 1) − ½c1² ω^{−2} ∫_0^1 X(s)² ds] > cv]. A calculation shows this to be a continuous function (0, ∞) × D[0,1] → R for almost all realizations of (ω², J0), so the test ϕ^{e∗}_S(ω̂²T, T^{−1/2} u_{T,⌊·T⌋}) is asymptotically efficient among all unit root tests with correct asymptotic null rejection probability whenever (ω̂²T, T^{−1/2} u_{T,⌊·T⌋}) ⇒ (ω², ωJ0(·)), against all models satisfying (ω̂²T, T^{−1/2} u_{T,⌊·T⌋}) ⇒ (ω², ωJc1(·)).

3.2. Invariance

The majority of efficient tests for nonstandard problems cited in the Introduction rely on invariance considerations. In the framework here, invariance may be invoked at two levels: On one hand, one might consider as a starting point a weak convergence that is a function of a small sample maximal invariant; on the other hand, invariance might instead be employed in the limiting problem as a way to deal with nuisance parameters. This subsection discusses the link between these two notions, and the interaction of the concept of invariance with the efficiency statements of Theorem 1.

The first case is entirely straightforward: suppose φT(YT) with φT : R^{nT} → R^{nT} is a maximal invariant to some group of transformations. By Theorem 1 of Lehmann (1986, p. 285), all invariant tests can be written as functions of a maximal invariant. So if hT is of the form hT = h^φ_T ∘ φT, then Theorem 1 applies and yields an asymptotic efficiency statement among all invariant tests in C.

UNIT ROOT TEST EXAMPLE—CONTINUED: Consider the problem of testing for a unit root in a model with unknown mean, and suppose ω = 1 is known for simplicity. A maximal invariant is given by the demeaned data {ûTt}_{t=1}^T with ûTt = yTt − ȳT, where ȳT = T^{−1} Σ_{t=1}^T yTt and YT = (yT1, ..., yTT)′. The typical model satisfies T^{−1/2} û_{T,⌊·T⌋} ⇒ J^μ_c(·), where J^μ_c(s) = Jc(s) − ∫_0^1 Jc(l) dl. Theorem 1 now shows that rejecting for large values of L^μ(T^{−1/2} û_{T,⌊·T⌋}) is asymptotically point-optimal among all tests whose asymptotic null rejection probability
In the second case, one considers the typical weak convergence in a model with nuisance parameters, and applies invariance only in the limiting problem. Formally, let $\tilde g:R\times S\to S$ be such that the $S\to S$ functions $x\mapsto\tilde g(r,x)$, indexed by $r\in R$, form the group $\tilde{\mathcal G}$. Further suppose $\tilde\phi:S\to S$ is a maximal invariant to $\tilde{\mathcal G}$, so that the efficient invariant test in the limiting problem $\varphi_S^{\phi*}$ is the efficient test of $H_0^{\phi,\mathrm{lp}}:\tilde X\sim\tilde\phi P(\theta)$, $\theta\in\Theta_0$, against $H_1^{\phi,\mathrm{lp}}:\tilde X\sim\tilde\phi P(\theta)$, $\theta\in\Theta_1$, where $\tilde X=\tilde\phi(X)$. It is not clear whether or in which sense this test, evaluated at sample analogues, would be asymptotically efficient.

UNIT ROOT TEST EXAMPLE—CONTINUED: In the parametrization $y_{Tt}=u_{Tt}+T^{1/2}\alpha_y$, we obtain with $\theta=(c,\alpha_y)$ that $T^{-1/2}y_{T,\lceil\cdot T\rceil}\Rightarrow J_\theta^{y}(\cdot)$, where $J_\theta^{y}(s)=J_c(s)+\alpha_y$. The parameter $\alpha_y$ is a nuisance parameter in the limiting problem. Define the transformations $\tilde g:\mathbb{R}\times D_{[0,1]}\to D_{[0,1]}$ as $\tilde g(r,x)=x(\cdot)+r$, with $r\in R=\mathbb{R}$. The limiting problem is invariant to these transformations, and $\tilde\phi:D_{[0,1]}\to D_{[0,1]}$ with $\tilde\phi(x)=x(\cdot)-\int_0^1 x(l)\,dl$ is a maximal invariant. Since $\tilde\phi(J_\theta^{y})\sim J_c^{\mu}(\cdot)$, the point-optimal invariant test in the limiting problem rejects for large values of $L^{\mu}\circ\tilde\phi(J_\theta^{y})$. This test, evaluated at sample analogues, yields $L^{\mu}(T^{-1/2}\hat u_{T,\lceil\cdot T\rceil})$, just as above.

Even though in this example the efficient invariant test in the limiting problem, evaluated at sample analogues, is small sample invariant, one still cannot claim this test to be the asymptotically point-optimal test among all small sample invariant tests in the class $\mathcal C$ relative to the weak convergence $T^{-1/2}y_{T,\lceil\cdot T\rceil}\Rightarrow J_\theta^{y}(\cdot)$. The reason is that the set of models satisfying $T^{-1/2}y_{T,\lceil\cdot T\rceil}\Rightarrow J_\theta^{y}(\cdot)$ is a proper subset of the set of models satisfying $T^{-1/2}\hat u_{T,\lceil\cdot T\rceil}\Rightarrow J_c^{\mu}(\cdot)$. The efficiency of $L^{\mu}(T^{-1/2}\hat u_{T,\lceil\cdot T\rceil})$ noted above is thus relative to a potentially smaller class of tests, and it remains unclear whether an efficiency claim can also be made relative to the weaker constraint of asymptotic size control whenever $T^{-1/2}y_{T,\lceil\cdot T\rceil}\Rightarrow J_\theta^{y}(\cdot)$.

We now show how one can make an asymptotic efficiency claim when invariance is employed in the limiting problem by relating the limiting group of transformations $\tilde{\mathcal G}$ to a sequence of small sample groups. So for each $T$, suppose the measurable function $g_T:R\times\mathbb{R}^{n_T}\to\mathbb{R}^{n_T}$ is such that the $\mathbb{R}^{n_T}\to\mathbb{R}^{n_T}$ functions $y\mapsto g_T(r,y)$, indexed by $r\in R$, form the group $\mathcal G_T$. Let $\phi_T:\mathbb{R}^{n_T}\to\mathbb{R}^{n_T}$ be a maximal invariant of $\mathcal G_T$. Assume that the small sample and limiting invariance correspond in the sense that the small sample maximal invariant converges weakly to the maximal invariant of the limiting problem, that is,
(16) $(h_T\circ\phi_T)F_T(m,\theta)\Rightarrow\tilde\phi P(\theta)$ pointwise for $\theta\in\Theta$.
Let $M^{\phi}$ be the set of models $m$ satisfying (16). Since $\varphi_S^{\phi*}$ was assumed to be the efficient test of $H_0^{\phi,\mathrm{lp}}:\tilde X\sim\tilde\phi P(\theta)$, $\theta\in\Theta_0$, against $H_1^{\phi,\mathrm{lp}}:\tilde X\sim\tilde\phi P(\theta)$, $\theta\in\Theta_1$, one can apply Theorem 1 to conclude that $\varphi_S^{\phi*}\circ h_T\circ\phi_T$ is the asymptotically efficient small sample invariant (with respect to $\mathcal G_T$) test in the class of tests that are asymptotically valid for all models in $M^{\phi}$. As noted in the unit root example above, however, one cannot conclude that $\varphi_S^{\phi*}\circ h_T\circ\phi_T$ is also the asymptotically efficient small sample invariant test in the (potentially larger) class of tests that are asymptotically valid for all models in $M$, that is, relative to the weak convergence $h_TF_T(m,\theta)\Rightarrow P(\theta)$. What could go wrong is that for some small sample invariant test, (8), (10), and (11) hold for all $m\in M$, but there exists $m'\in M^{\phi}$ for which at least one of (8), (10), and (11) does not hold. But this cannot happen if for any $m'\in M^{\phi}$, there exists a corresponding $m\in M$ such that $\phi_TF_T(m',\theta)=\phi_TF_T(m,\theta)$. Given that the constraints (8), (10), and (11) are pointwise in $\theta$, the following theorem shows this to be the case under suitable conditions.

THEOREM 2: Pick any $m\in M^{\phi}$ and $\theta\in\Theta$. Suppose (i) $R$ is a separable and complete metric space; (ii) the mapping $x\mapsto\tilde g(r,x)$ is continuous for all $r\in R$; (iii) there exists a measurable function $\tilde\rho:S\to R$ such that $x=\tilde g(\tilde\rho(x),\tilde\phi(x))$ for all $x\in S$; (iv) for all $r\in R$, $d_S(h_T(g_T(r,\phi_T(Y_T))),\,\tilde g(r,h_T(\phi_T(Y_T))))\stackrel{p}{\to}0$ when $Y_T\sim F_T(m,\theta)$, where $d_S$ is the metric on $S$; (v) $\tilde\phi$ and $\phi_T$ select specific orbits, that is, for all $y\in\mathbb{R}^{n_T}$ and $x\in S$, there exist $r_y,r_x\in R$ so that $\phi_T(y)=g_T(r_y,y)$ and $\tilde\phi(x)=\tilde g(r_x,x)$. Then there exists a sequence of measures $G_T$ on $\mathcal B(\mathbb{R}^{n_T})$ such that $\phi_TF_T(m,\theta)=\phi_TG_T(\theta)$ and $h_TG_T(\theta)\Rightarrow P(\theta)$.

UNIT ROOT TEST EXAMPLE—CONTINUED: With $g_T(r,Y_T)=(y_{T1}+rT^{1/2},\ldots,y_{TT}+rT^{1/2})'$, $\phi_T(Y_T)=g_T(-T^{-1/2}\bar y_T,Y_T)$, and $\tilde\rho(x)=\int_0^1 x(s)\,ds$, we find $(h_T\circ\phi_T)(Y_T)=T^{-1/2}\hat u_{T,\lceil\cdot T\rceil}$, $J_c^{\mu}\sim\tilde\phi(J_c)$, $\sup_{y\in\mathbb{R}^T}d_{D_{[0,1]}}(T^{-1/2}(y_{\lceil sT\rceil}+T^{1/2}r),\,T^{-1/2}y_{\lceil sT\rceil}+r)=0$, and $x=\tilde\phi(x)+\tilde\rho(x)=\tilde g(\tilde\rho(x),\tilde\phi(x))$ for all $x\in D_{[0,1]}$, so that the assumptions of Theorem 2 hold. We can, therefore, conclude that rejecting for large values of $L^{\mu}(T^{-1/2}\hat u_{T,\lceil\cdot T\rceil})$ is also the asymptotically point-optimal test among all translation invariant tests with asymptotic null rejection probability of at most $\alpha$ whenever $T^{-1/2}y_{T,\lceil\cdot T\rceil}\Rightarrow J_0(\cdot)+\alpha_y$, $\alpha_y\in\mathbb{R}$.
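To make the group action of the example concrete, here is a small sketch (hypothetical names, assuming the driftless case with $\omega=1$) that verifies numerically that the demeaning map $\phi_T$ is invariant to the translations $g_T(r,\cdot)$, so that the statistic $L^{\mu}(T^{-1/2}\hat u_{T,\lceil\cdot T\rceil})$ is unchanged by them:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
y = np.cumsum(rng.standard_normal(T))       # a driftless unit root path

def phi_T(y):                               # maximal invariant: demeaned data
    return y - y.mean()

def g_T(r, y):                              # group action: translation by r * T^{1/2}
    return y + r * np.sqrt(len(y))

r = 3.7
# phi_T(g_T(r, y)) = phi_T(y), so any test based on the demeaned data is invariant
assert np.allclose(phi_T(g_T(r, y)), phi_T(y))
```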
For the proof of Theorem 2, note that with $x=\tilde g(\tilde\rho(x),\tilde\phi(x))$ for all $x\in S$, one can construct the distribution $P(\theta)$ by applying an appropriate random transformation $\tilde g$ to each $x$ drawn under $\tilde\phi P(\theta)$. Under the assumptions of Theorem 2, for each $F_T(m,\theta)$ with $m\in M^{\phi}$ and $\theta\in\Theta$, one can thus obtain a corresponding data generating process $G_T$ with $h_TG_T(\theta)\Rightarrow P(\theta)$ by applying an appropriate random transformation $g_T\in\mathcal G_T$ for each $T$.

While asymptotic efficiency statements about invariant tests based on Theorems 1 and 2 require a tight link between the limiting group $\tilde{\mathcal G}$ and the small sample groups $\mathcal G_T$, the link does not need to be perfect: even if the distribution of the limiting maximal invariant $\tilde\phi P(\theta)$ does not depend on a subset of the parameter $\theta$ (so that a nuisance parameter is eliminated by invariance), it is not assumed that the small sample counterpart $\phi_TF_T(m,\theta)$ shares this feature. Also, assumption (iv) does not require the small sample and limiting group actions to coincide exactly, which is useful for, say, arguing for the asymptotic efficiency of translation and trend invariant unit root tests with respect to the weak convergence $T^{-1/2}y_{T,\lceil\cdot T\rceil}\Rightarrow J_c(\cdot)+\alpha_y+\cdot\,\beta_y$.

3.3. Uniformity

The discussion so far concerned the pointwise asymptotic properties of tests $\varphi_T$, that is, the rejection probability as $T\to\infty$ for a fixed model $m$ and parameter value $\theta\in\Theta$. As argued by Dufour (1997) and Pötscher (2002), among others, one might instead want to focus on properties of tests that are uniform in $m$ and in $\theta\in\Theta$. It is clear, however, that the pointwise constraint (8) is not enough to generate any useful bound for the small sample rejection probabilities of tests, even pointwise in $\theta$. The reason is that for any fixed sample size $T$, there exist models in $M$ for which the distribution of $Y_T$ is arbitrary, since the convergence $P_T(m,\theta)=h_TF_T(m,\theta)\Rightarrow P(\theta)$ occurs "later." The interest in results derived from (2) must stem from an implicit assumption that these asymptotic properties are reasonable approximations for the small sample distribution of $X_T$ in the actual sample size $T$. A formalization of this implicit assumption is obtained by imposing a lower limit on the speed of convergence.

For two probability measures $\mu_1$ and $\mu_2$ on $\mathcal B(S)$, define $\varrho_{BL}$ as $\varrho_{BL}(\mu_1,\mu_2)=\sup_{\|f\|_{BL}\le1}|\int f\,d\mu_1-\int f\,d\mu_2|$, where $f:S\to\mathbb{R}$ are $\mathcal B(S)\backslash\mathcal B(\mathbb{R})$ measurable and $\|f\|_{BL}=\sup_{x\in S}|f(x)|+\sup_{x\ne y,\;x,y\in S}\frac{|f(x)-f(y)|}{d_S(x,y)}$. It is known that $\varrho_{BL}$ metrizes weak convergence on separable metric spaces (Dudley (2002, p. 395)). Also, let the real sequence $\delta=\{\delta_T\}_{T=1}^{\infty}$ be such that $\delta_T\to0$. Now define $M^{u}(\delta)$ ($u$ denotes uniform) as the set of models $m$ satisfying

(17) $\sup_{\theta\in\Theta}\varrho_{BL}(P_T(m,\theta),P(\theta))\le\delta_T$,
that is, $M^{u}(\delta)$ is the collection of models $m$ for which the distribution $P_T(m,\theta)=h_TF_T(m,\theta)$ of $h_T(Y_T)$ differs by at most $\delta_T$ from its limit $P(\theta)$ as measured by $\varrho_{BL}$, uniformly over $\Theta$. It then makes sense to ask whether the rejection probability of a test $\varphi_T$ converges to the nominal level uniformly over $\theta\in\Theta_0$ and $M^{u}(\delta)$, that is, whether

(18) $\limsup_{T\to\infty}\;\sup_{\theta\in\Theta_0,\,m\in M^{u}(\delta)}\int\varphi_T\,dF_T(m,\theta)\le\alpha.$
By the continuity of $\varphi_S^*$, (18) holds for the test $\varphi_T=\hat\varphi_T^*$ in Theorem 1; that is, for large enough $T$, the rejection probability of $\hat\varphi_T^*$ is close to $\alpha$ for all models in $M^{u}(\delta)$, uniformly over $\Theta_0$.⁵ Similar restrictions and arguments could be made regarding the constraints (10) and (11). It is not clear, however, whether all tests in the class that satisfy (8) also satisfy (18), or vice versa. Theorem 1, therefore, does not imply that $\hat\varphi_T^*$ also maximizes asymptotic weighted average power in the class of all tests that satisfy (18). A partial result in that regard is provided by the following theorem for the case of a single null hypothesis and without constraints (10) and (11).

THEOREM 3: Suppose $\Theta_0=\{\theta_0\}$, let $L:\Theta\times S\to\mathbb{R}$ be the Radon–Nikodym derivative of $P(\theta)$ with respect to $P(\theta_0)$, define $\bar L(x)=\int L(\theta,x)\,dw(\theta)$, let $\varphi_S^*:S\to[0,1]$ be the level $\alpha$ test that rejects for large values of $\bar L$, and define $\hat\varphi_T^*=\varphi_S^*\circ h_T$. Suppose that for all $\varepsilon>0$ there exists an open set $D_\varepsilon\in\mathcal B(S)$ with $\int_{D_\varepsilon}dP(\theta_0)>1-\varepsilon$ so that the $D_\varepsilon\to\mathbb{R}$ function $x\mapsto\bar L(x)$ is Lipschitz, suppose there exists a model $m_0$ with $\varrho_{BL}(P_T(m_0,\theta_0),P(\theta_0))/\delta_T\to0$, and assume that $\varrho_{BL}(\int P_T(m_1,\theta)\,dw(\theta),\int P(\theta)\,dw(\theta))/\delta_T\to0$. Then for any test $\varphi_T$ that satisfies (18), $\limsup_{T\to\infty}\mathrm{WAP}(\varphi_T,m_1)\le\lim_{T\to\infty}\mathrm{WAP}(\hat\varphi_T^*,m_1)=\mathrm{WAP}(\varphi_S^*)$.

Under a stronger continuity assumption on the limiting problem, Theorem 3 shows that no test can satisfy (18) and have higher asymptotic weighted average power than $\hat\varphi_T^*$ in models whose (average) weak convergence under the alternative is faster than the lower bound $\delta_T$. The proof closely follows the heuristic sketch of the proof of Theorem 1 outlined at the end of Section 2.4 and exploits the linearity in both the definition of $\varrho_{BL}$ and the probability assignments in the tilted model $G_T$. Dudley (2002, p. 411) showed that the Prohorov metric $\varrho_P$ (which also metrizes weak convergence) satisfies $\varrho_P\le2^{1/2}\varrho_{BL}^{1/2}$ and $\varrho_{BL}\le2\varrho_P$, so that Theorem 3 could equivalently be formulated in terms of $\varrho_P$.

⁵ Construct a sequence of functions $f_T:S\to\mathbb{R}$ with $f_T(x)\ge\varphi_S^*(x)$ for $x\in S$ and $\delta_T\|f_T\|_{BL}\to0$ that converge to $\varphi_S^*$ pointwise $\mu_P$ almost everywhere, as in Chapter 7.1 of Pollard (2002). Then $\sup_{\theta\in\Theta_0,\,m\in M^{u}(\delta)}\int\hat\varphi_T^*\,dF_T(m,\theta)-\alpha\le\sup_{\theta\in\Theta_0,\,m\in M^{u}(\delta)}\int f_T\,(dP_T(m,\theta)-dP(\theta))\le\|f_T\|_{BL}\sup_{\theta\in\Theta_0}\varrho_{BL}(P_T(m,\theta),P(\theta))\to0$, where the last inequality uses $\|f_1\cdot f_2\|_{BL}\le\|f_1\|_{BL}\cdot\|f_2\|_{BL}$. For this uniform validity result, it would suffice to assume (17) with $\Theta$ replaced by $\Theta_0$.
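Because $\varrho_{BL}$ is defined by a supremum over test functions, any finite dictionary of functions with $\|f\|_{BL}\le1$ delivers a computable lower bound. The following crude Monte Carlo sketch is purely illustrative (the sinusoidal probe family and all names are my own choices, not from the paper); it bounds $\varrho_{BL}$ between the empirical laws of two scalar samples from below:

```python
import numpy as np

def bl_lower_bound(x, y, n_probes=500, seed=0):
    """Lower bound on rho_BL between empirical laws of scalar samples x and y,
    maximizing |mean f(x) - mean f(y)| over probes with ||f||_BL <= 1."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_probes):
        a = rng.normal(scale=3.0)
        b = rng.uniform(0.0, 2.0 * np.pi)
        # sup|f| + Lip(f) <= 1/(1+|a|) + |a|/(1+|a|) = 1, so ||f||_BL <= 1
        f = lambda z: np.sin(a * z + b) / (1.0 + abs(a))
        best = max(best, abs(f(x).mean() - f(y).mean()))
    return best
```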
4. APPLICATIONS

4.1. Unit Root Tests With Stationary Covariates

Elliott and Jansson (2003) considered the model

(19) $\begin{pmatrix}y_{Tt}\\ x_{Tt}\end{pmatrix}=\begin{pmatrix}\alpha_y+\beta_y t\\ \alpha_x+\beta_x t\end{pmatrix}+\begin{pmatrix}u_{Tt}\\ \nu^{x}_{Tt}\end{pmatrix},$

where $Y_T=((y_{T1},x'_{T1}),\ldots,(y_{TT},x'_{TT}))'\in\mathbb{R}^{n_T}$ is observed; $\alpha_y$, $\beta_y$, and $u_{Tt}=\rho_Tu_{Tt-1}+\nu^{y}_{Tt}$ are scalars, $u_{T0}=O_p(1)$, $\rho_T=1-c/T$ for some fixed $c\ge0$, and $x_{Tt}$, $\alpha_x$, $\beta_x$, and $\nu^{x}_{Tt}$ are $n-1$ dimensional vectors. The objective is to efficiently exploit the stationary covariates $x_{Tt}$ in the construction of a test of the null hypothesis of a unit root in $y_{Tt}$, $H_0:c=0$, against the alternative $H_1:c>0$.

Consider first the case with $\alpha_y=\alpha_x=\beta_y=\beta_x=0$ known. The approach of Elliott and Jansson (2003) is first to apply the Neyman–Pearson lemma to determine, for each $T$, the point-optimal test against $c=c_1$ when $\nu_{Tt}=(\nu^{y}_{Tt},\nu^{x\prime}_{Tt})'\sim$ i.i.d. $\mathcal N(0,\Omega)$ for known $\Omega$. In a second step, they constructed a feasible test that (i) is asymptotically equivalent to the point-optimal test when $\nu_{Tt}\sim$ i.i.d. $\mathcal N(0,\Omega)$ and (ii) is robust to a range of autocorrelation structures and error distributions.

To apply the results in Sections 2 and 3 of this paper, we consider the typical weak convergence properties of model (19). Standard weak dependence assumptions on $\nu_{Tt}$ imply, for some suitable long-run covariance matrix estimator $\hat\Omega_T$, that

(20) $\hat\Omega_T\stackrel{p}{\to}\Omega$ and $G_T(\cdot)=\begin{pmatrix}T^{-1/2}u_{T,\lceil\cdot T\rceil}\\ T^{-1/2}\sum_{t=1}^{\lceil\cdot T\rceil}\nu^{x}_{Tt}\end{pmatrix}\Rightarrow G(\cdot),$

where $\Omega$ is positive definite, $G(s)=\int_0^s\mathrm{diag}(e^{-c(s-r)},1,\ldots,1)\Omega^{1/2}\,dW(r)$, and $W$ is an $n\times1$ standard Wiener process. By Girsanov's theorem, the Radon–Nikodym derivative of the distribution of $G$ with $c=c_1$ with respect to the distribution of $G$ with $c=0$, evaluated at $G=(G_y,G_x')'$, is given by
(21) $L(\Omega,G)=\exp\bigg[-c_1\int_0^1G(s)'S_1\Omega^{-1}\,dG(s)-\frac{c_1^2}{2}\int_0^1G(s)'S_1\Omega^{-1}S_1'G(s)\,ds\bigg]$

$\qquad\qquad=\exp\bigg[-\frac{1}{2}c_1\omega^{yy}\big(G_y(1)^2-\sigma_{yy}^2\big)-c_1\omega^{yx}\int_0^1G_y(s)\,dG_x(s)-\frac{1}{2}c_1^2\omega^{yy}\int_0^1G_y(s)^2\,ds\bigg],$
where $S_1$ is the $n\times n$ matrix $S_1=\mathrm{diag}(1,0,\ldots,0)$, $\sigma_{yy}^2$ is the (1,1) element of $\Omega$, and the first row of $\Omega^{-1}$ is $(\omega^{yy},\omega^{yx})$. By the Neyman–Pearson lemma, the point-optimal test in the limiting problem rejects for large values of $L(\Omega,G)$.

Since $\int_0^1G_y(s)\,dG_x(s)$ is not a continuous mapping, we cannot directly apply Theorem 1. However, typical weak dependence assumptions on $\nu_{Tt}$ also imply (see, for instance, Phillips (1988), Hansen (1990), and de Jong and Davidson (2000)) that

(22) $\Upsilon_T=T^{-1}\sum_{t=2}^{T}u_{Tt-1}\nu^{x}_{Tt}-\hat\Sigma_T\Rightarrow\Upsilon=\int_0^1G_y(s)\,dG_x(s)$
for a suitably defined $(n-1)\times1$ vector $\hat\Sigma_T\stackrel{p}{\to}\Sigma$ (which equals $\sum_{s=1}^{\infty}E[\nu^{y}_{Tt}\nu^{x}_{Tt+s}]$ when $\nu_{Tt}$ is covariance stationary), jointly with (20). Clearly, the Radon–Nikodym derivative of the measure of $(G,\Upsilon)$ for $c=c_1$ with respect to the measure of $(G,\Upsilon)$ with $c=0$, evaluated at $G$, is also given by $L(\Omega,G)$ in (21), and one can write $L(\Omega,G)=L^{\Upsilon}(\Omega,G,\Upsilon)$ for a continuous function $L^{\Upsilon}$. The discussion of Section 3.1 thus applies, and Theorem 1 shows that rejecting for large values of $L^{\Upsilon}(\hat\Omega_T,G_T,\Upsilon_T)$ is the point-optimal unit root test for the alternative $c=c_1$ in the class of tests that have correct asymptotic null rejection probabilities whenever (20) and (22) hold.

Since the model with $\nu_{Tt}\sim$ i.i.d. $\mathcal N(0,\Omega)$ satisfies (20) and (22), the test derived by Elliott and Jansson (2003) is by construction—as explained in Comment 1 in Section 2.5—asymptotically equivalent to a test that rejects for large values of $L^{\Upsilon}(\hat\Omega_T,G_T,\Upsilon_T)$. The derivation here, which starts with the Radon–Nikodym derivative directly, is arguably a more straightforward way to determine a test in this equivalence class. Furthermore, while Elliott and Jansson (2003) can only claim optimality for the model with i.i.d. Gaussian disturbances, Theorem 1 shows that the test is efficient against all alternatives satisfying (20) and (22) with $c=c_1$ if one imposes size control for all models satisfying (20) and (22) with $c=0$. In other words, under this constraint, no test exists with higher asymptotic power for any disturbance distribution or autocorrelation structure satisfying (20) and (22) with $c=c_1$.
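The convergences in (20) and (22) presuppose consistent estimators $\hat\Omega_T$ and $\hat\Sigma_T$ whose exact form the text leaves open. As one concrete possibility, a Bartlett-kernel construction in the spirit of Newey–West could be sketched as follows; this is purely illustrative (the bandwidth rule and all names are arbitrary choices, not taken from the paper):

```python
import numpy as np

def bartlett_long_run_cov(v, bandwidth=None):
    """Bartlett-kernel long-run covariance estimate for a (T x n) array v of
    weakly dependent, approximately mean-zero observations."""
    T, _ = v.shape
    B = bandwidth if bandwidth is not None else int(4 * (T / 100.0) ** (2.0 / 9.0))
    v = v - v.mean(axis=0)
    omega = v.T @ v / T                      # lag-0 term
    for j in range(1, B + 1):
        gamma_j = v[j:].T @ v[:-j] / T       # lag-j autocovariance
        omega += (1.0 - j / (B + 1.0)) * (gamma_j + gamma_j.T)
    return omega
```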
When the deterministic terms are not fully known, that is, when the parameters $\alpha_y$, $\alpha_x$, $\beta_y$, and/or $\beta_x$ are not known, it is natural to impose appropriate invariance requirements. Specifically, for the case where $\alpha_y$ and $\alpha_x$ are unconstrained and $\beta_y=\beta_x=0$, consider

(23) $\{(y_{Tt},x'_{Tt})'\}_{t=1}^{T}\to\{(y_{Tt}+a_y,\,x'_{Tt}+a'_x)'\}_{t=1}^{T},\qquad a_y\in\mathbb{R},\ a_x\in\mathbb{R}^{n-1}.$
A maximal invariant of this group of transformations is given by the demeaned data $\{(\hat y_{Tt},\hat x'_{Tt})'\}_{t=1}^{T}$, where $\hat y_{Tt}=y_{Tt}-\bar y_T$, $\hat x_{Tt}=x_{Tt}-\bar x_T$, $\bar y_T=T^{-1}\sum_{t=1}^{T}y_{Tt}$, and $\bar x_T=T^{-1}\sum_{t=1}^{T}x_{Tt}$. Elliott and Jansson (2003) derived the limiting behavior of the likelihood ratio statistic of this maximal invariant when $\nu_{Tt}\sim$
i.i.d. $\mathcal N(0,\Omega)$, and thus obtained the asymptotically point-optimal invariant unit root test under that assumption. Considering again the weak convergence properties of a typical model, we obtain

(24) $\begin{pmatrix}T^{-1/2}\hat y_{T,\lceil\cdot T\rceil}\\ T^{-1/2}\sum_{t=1}^{\lceil\cdot T\rceil}\hat x_{Tt}\\ T^{-1}\sum_{t=2}^{T}\hat y_{Tt-1}\hat x_{Tt}-\hat\Sigma_T\end{pmatrix}\Rightarrow\begin{pmatrix}\hat G_y(\cdot)\\ \hat G_x(\cdot)\\ \int_0^1\hat G_y(s)\,d\hat G_x(s)\end{pmatrix}$ and $\hat\Omega_T\stackrel{p}{\to}\Omega,$

where $\hat G_y(s)=G_y(s)-\int_0^1G_y(l)\,dl$ and $\hat G_x(s)=G_x(s)-sG_x(1)$, and $\hat\Omega_T$ and $\hat\Sigma_T$ are functions of $\{(\hat y_{Tt},\hat x'_{Tt})'\}_{t=1}^{T}$. For brevity, we omit an explicit expression
for the Radon–Nikodym derivative $L^{\hat G}$ of the measure of $(\hat G_y,\hat G_x)$ with $c=c_1$ with respect to the measure of $(\hat G_y,\hat G_x)$ when $c=0$. By the Neyman–Pearson lemma and Theorem 1, rejecting for large values of $L^{\hat G}$, evaluated at sample analogues, is the asymptotically point-optimal test in the class of tests that are asymptotically valid whenever (24) holds with $c=0$.

Furthermore, in the notation of Section 3.2, with $h_T(Y_T)=(T^{-1/2}y_{T,\lceil\cdot T\rceil},\,T^{-1/2}\sum_{t=1}^{\lceil\cdot T\rceil}x_{Tt},\,T^{-1}\sum_{t=2}^{T}y_{Tt-1}x_{Tt}-\hat\Sigma_T)\in D_{[0,1]}\times D^{n-1}_{[0,1]}\times\mathbb{R}^{n-1}$, $r=(r_y,r'_x)'\in\mathbb{R}\times\mathbb{R}^{n-1}$, $g_T(r,\{(y_{Tt},x'_{Tt})'\}_{t=1}^{T})=\{(y_{Tt}+T^{1/2}r_y,\,x'_{Tt}+T^{-1/2}r'_x)'\}_{t=1}^{T}$, $\tilde g(r,(y,x,z))=(y(\cdot)+r_y,\;x(\cdot)+r_x\cdot,\;z+r_y(x(1)-x(0))+r_x\int_0^1y(s)\,ds+r_yr_x)$, $\phi_T(\{(y_{Tt},x'_{Tt})'\}_{t=1}^{T})=\{(\hat y_{Tt},\hat x'_{Tt})'\}_{t=1}^{T}$, and $\tilde\phi(y,x,z)=(y(\cdot)-\int_0^1y(s)\,ds,\;x(\cdot)-\cdot\,x(1),\;z-(x(1)-x(0))\int_0^1y(s)\,ds)$, we find that Theorem 2 (and the discussion in Section 3.1) is applicable and that rejecting for large values of $L^{\hat G}$, evaluated at sample analogues, is also the small sample invariant, asymptotically point-optimal unit root test in the class of tests that are asymptotically valid whenever

$\begin{pmatrix}T^{-1/2}y_{T,\lceil\cdot T\rceil}\\ T^{-1/2}\sum_{t=1}^{\lceil\cdot T\rceil}x_{Tt}\\ T^{-1}\sum_{t=2}^{T}y_{Tt-1}x_{Tt}-\hat\Sigma_T\end{pmatrix}\Rightarrow\begin{pmatrix}G^{\alpha}_y(\cdot)\\ G^{\alpha}_x(\cdot)\\ \int_0^1G^{\alpha}_y(s)\,dG^{\alpha}_x(s)\end{pmatrix}$ and $\hat\Omega_T\stackrel{p}{\to}\Omega,$

where $G^{\alpha}_y(s)=G_y(s)+\alpha_y$ and $G^{\alpha}_x(s)=G_x(s)+s\alpha_x$ for some $\alpha_y\in\mathbb{R}$, $\alpha_x\in\mathbb{R}^{n-1}$.
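In sample, the processes entering (24) are simple transformations of the demeaned data. A minimal sketch of those analogues (hypothetical names; a scalar series $y$ and a $T\times(n-1)$ covariate array $x$ are assumed) is:

```python
import numpy as np

def invariant_sample_processes(y, x):
    """Sample analogues of G_y-hat and G_x-hat in (24):
    G_y-hat(t/T) ~ T^{-1/2} * (y_t - ybar),
    G_x-hat(t/T) ~ T^{-1/2} * sum_{s<=t} (x_s - xbar)."""
    T = len(y)
    y_hat = y - y.mean()                      # demeaned y
    x_hat = x - x.mean(axis=0)                # demeaned covariates
    Gy_hat = y_hat / np.sqrt(T)
    Gx_hat = np.cumsum(x_hat, axis=0) / np.sqrt(T)
    return Gy_hat, Gx_hat
```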
4.2. Linear Regression With Weak Instruments

As in Andrews, Moreira, and Stock (2006) (subsequently abbreviated AMS), consider the problem of inference about the coefficient of a scalar endogenous variable in the presence of weak instruments. The reduced form equations are given by (cf. equation (2.4) of AMS)

(25) $y_{1t}=z_t'\pi\beta+x_t'\zeta_1+v_{1t},\qquad y_{2t}=z_t'\pi+x_t'\zeta_2+v_{2t}$

for $t=1,\ldots,T$, where $y_{1t}$ and $y_{2t}$ are scalars, $z_t$ is $k\times1$, $x_t$ is $p\times1$, and the $z_t$ are the residuals of a linear regression of the original instruments $\tilde z_t$ on $x_t$. AMS initially considered small sample efficient tests of

(26) $H_0:\beta=\beta_0$

for nonstochastic $(x_t,z_t)$ and $v_t=(v_{1t},v_{2t})'\sim$ i.i.d. $\mathcal N(0,\Omega)$ with $\Omega$ known. A sufficiency argument shows that tests may be restricted to functions of the $2k\times1$ multivariate normal statistic

(27) $\begin{pmatrix}\sum_{t=1}^{T}z_ty_{1t}\\ \sum_{t=1}^{T}z_ty_{2t}\end{pmatrix}\sim\mathcal N\bigg(\begin{pmatrix}S_z\pi\beta\\ S_z\pi\end{pmatrix},\,\Omega\otimes S_z\bigg),$

where $S_z=\sum_{t=1}^{T}z_tz_t'$, and AMS derived weighted average power maximizing similar tests that are invariant to the group of transformations

(28) $\{z_t\}_{t=1}^{T}\to\{Oz_t\}_{t=1}^{T}$ for any orthogonal matrix $O$.
For their asymptotic analysis, AMS employed Staiger and Stock's (1997) weak instrument asymptotics, where $\pi=T^{-1/2}C$ for some fixed matrix $C$. AMS then exploited their small sample efficiency results to construct tests (i) that maximize weighted average asymptotic power among all asymptotically invariant and asymptotically similar tests when $v_t\sim$ i.i.d. $\mathcal N(0,\Omega)$ independent of $\{(x_t',z_t')\}_{t=1}^{T}$ and (ii) that yield correct asymptotic null rejection probability under much broader conditions (the working paper version of AMS contains the details of the construction of heteroskedasticity, and of heteroskedasticity and autocorrelation, robust tests).

To apply the results from Sections 2 and 3 to this problem, consider the set of weak convergences in the double array version of model (25),

(29) $\hat D_Z=T^{-1}\sum_{t=1}^{T}z_{Tt}z_{Tt}'\stackrel{p}{\to}D_z,\qquad\hat\Sigma\stackrel{p}{\to}\Sigma,$

$\qquad\ \ X_T=T^{-1/2}\sum_{t=1}^{T}\begin{pmatrix}z_{Tt}y_{1Tt}\\ z_{Tt}y_{2Tt}\end{pmatrix}\Rightarrow X\sim\mathcal N\bigg(\begin{pmatrix}D_zC\beta\\ D_zC\end{pmatrix},\,\Sigma\bigg),$
where $\hat\Sigma$ is some estimator of the long-run variance of $\mathrm{vec}(z_tv_t')$, and $D_z$ and $\Sigma$ have full rank. The limiting problem in the sense of Section 2.2 above is thus the test of (26) based on observing the random variable $X$ distributed as in (29), with $D_z$ and $\Sigma$ known.

For $k=1$, that is, in the just identified case, this problem is exactly equivalent to the small sample problem (27) considered by AMS. In fact, already Moreira (2001) has shown that for $k=1$, the Anderson and Rubin (1949) statistic $AR_0=(b_0'X)^2/b_0'\Sigma b_0$ with $b_0=(1,-\beta_0)'$ yields the uniformly most powerful unbiased test $\varphi_S^*$. The discussion of Section 3.1 and Theorem 1 thus imply that for $k=1$, rejecting for large values of $(b_0'X_T)^2/b_0'\hat\Sigma b_0$ maximizes asymptotic power uniformly in all models that satisfy (29) with $\beta\ne\beta_0$ among all tests that are asymptotically unbiased with at most nominal asymptotic null rejection probability for all models that satisfy (29) with $\beta=\beta_0$.
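For the just identified case, the feasible statistic is a one-line computation. The following sketch is illustrative only (names are hypothetical; `Sigma_hat` stands for any consistent $2\times2$ long-run variance estimator of $T^{-1/2}\sum_t z_{Tt}(y_{1Tt},y_{2Tt})'$):

```python
import numpy as np

def anderson_rubin_k1(y1, y2, z, beta0, Sigma_hat):
    """(b0' X_T)^2 / (b0' Sigma_hat b0) with b0 = (1, -beta0)' and
    X_T = T^{-1/2} (sum_t z_t y_1t, sum_t z_t y_2t)'; k = 1, so z is a vector."""
    T = len(z)
    XT = np.array([z @ y1, z @ y2]) / np.sqrt(T)
    b0 = np.array([1.0, -beta0])
    return float((b0 @ XT) ** 2 / (b0 @ Sigma_hat @ b0))
```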
The class of asymptotically valid tests for all models that satisfy (29) with $\beta=\beta_0$ is potentially quite small, since there are many ways $T^{-1/2}\sum_{t=1}^{T}z_{Tt}y_{Tt}$ can converge to a normal vector (for instance, the convergence is compatible with $z_{Tt}y_{Tt}=0$ for all $t<T/2$). To the extent that one would be prepared to rule out such models a priori, this decreases the appeal of the efficiency result. But as discussed in Comment 7 in Section 2.5, one can impose additional weak convergences without necessarily affecting $\varphi_S^*$: supplementing (29) by the functional central limit theorem type convergence

$T^{-1/2}\sum_{t=1}^{\lceil\cdot T\rceil}\begin{pmatrix}z_{Tt}y_{1Tt}\\ z_{Tt}y_{2Tt}\end{pmatrix}\Rightarrow G(\cdot)$ with $G(s)=s\begin{pmatrix}D_zC\beta\\ D_zC\end{pmatrix}+\Sigma^{1/2}W(s),$

where $W(s)$ is a $2k\times1$ standard Wiener process, for instance, rules out such pathological cases and yet still yields $\varphi_S^*$ to be the efficient test, since $G(1)$ is sufficient for the unknown parameters $C$ and $\beta$.

For $k>1$, Theorems 1 and 2 can again be invoked to yield analogous asymptotic efficiency statements for the statistics developed in AMS under the assumption that $\Sigma$ is of the Kronecker form $\Sigma=\Omega\otimes D_z$, as in (27). But this form naturally arises only in the context of a serially uncorrelated homoskedastic model, so the resulting efficiency statements are of limited appeal. The approach here thus points to a general solution of the limiting problem without the constraint $\Sigma=\Omega\otimes D_z$ as an interesting missing piece in the literature on efficient inference in linear regressions with weak instruments.
4.3. GMM Parameter Stability Tests

Following Sowell (1996), suppose we are interested in testing the null hypothesis that a parameter $\beta\in\mathbb{R}^{k}$ in a GMM framework is constant through time. Parametrizing $\beta_{Tt}=\beta_0+T^{-1/2}\theta(t/T)$, where $\theta\in D^{k}_{[0,1]}$ and $\theta(0)$ is normalized to zero, $\theta(0)=0$, this is equivalent to the hypothesis test

(30) $H_0:\theta=0$ against $H_1:\theta\ne0.$
Denote by $g_{Tt}(\beta)\in\mathbb{R}^{p}$ with $p\ge k$ the sample moment condition for $y_{Tt}$ evaluated at $\beta$, so that under the usual assumptions, the moment condition evaluated at the true parameter value satisfies a central limit theorem, that is, $T^{-1/2}\sum_{t=1}^{T}g_{Tt}(\beta_{Tt})\Rightarrow\mathcal N(0,V)$ for some positive definite $p\times p$ matrix $V$. Furthermore, with $\hat\beta_T$ the usual full sample GMM estimator of $\beta$ with optimal weighting matrix converging to $V^{-1}$, we obtain under typical assumptions that for some suitable estimators $\hat H_T$ and $\hat V_T$ (cf. Theorem 1 of Sowell (1996)),

(31) $G_T(\cdot)=T^{-1/2}\sum_{t=1}^{\lceil\cdot T\rceil}g_{Tt}(\hat\beta_T)\Rightarrow G(\cdot)$ and $\hat H_T\stackrel{p}{\to}H,\quad\hat V_T\stackrel{p}{\to}V,$
where the convergence to $G$ is on $D^{p}_{[0,1]}$, $G(s)=V^{1/2}W(s)-sH(H'V^{-1}H)^{-1}H'V^{-1/2}W(1)+H\big(\int_0^s\theta(l)\,dl-s\int_0^1\theta(l)\,dl\big)$, where $W$ is a $p\times1$ standard Wiener process and $H$ is some $p\times k$ full column rank matrix (which is the probability limit of the average of the partial derivatives of $g_{Tt}$). Andrews (1993), Sowell (1996), and Li and Müller (2009) discussed primitive conditions for these convergences. Sowell (1996) proceeded to derive weighted average power maximizing tests of (30) as a function of $G$ (that is, he computed $\varphi_S^*$ in the notation of Theorem 1), and he called the resulting test evaluated at $G_T(\cdot)$, $\hat H_T$, and $\hat V_T$ (that is, $\hat\varphi_T^*$ in the notation of Theorem 1) an "optimal" test for structural change.

Without further restrictions, however, such tests cannot claim to be efficient. As a simple example, consider the scalar model with $y_{Tt}=\beta+T^{-1/2}\theta(t/T)+\varepsilon_t$, where $\varepsilon_t$ is i.i.d. with $P(\varepsilon_t=-1)=P(\varepsilon_t=1)=1/2$. This model is a standard time varying parameter GMM model with $g_{Tt}(\beta)=y_{Tt}-\beta$ and $\hat\beta_T=T^{-1}\sum_{t=1}^{T}y_{Tt}$ satisfying (31), yet in this model, the test $\varphi_T^{**}$ that rejects whenever any one of the increments $\{y_{Tt}-y_{Tt-1}\}_{t=2}^{T}$ is not $-2$, $0$, or $2$ has level zero for any $T\ge2$ and has asymptotic power equal to 1 against any local alternative.

Theorem 1 provides a sense in which the tests derived by Sowell (1996) are asymptotically optimal: they maximize asymptotic weighted average power among all tests that have correct asymptotic null rejection probability whenever (31) holds with $\theta=0$. More powerful tests that exploit specificities of the error distribution, such as $\varphi_T^{**}$, do not lead to correct asymptotic null rejection probability for all stable models satisfying (31).
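The Rademacher counterexample can be reproduced in a few lines. The sketch below (hypothetical names; break location and size are arbitrary illustrative choices) confirms that $\varphi_T^{**}$ never rejects under the stable model but detects the local break exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
T, beta = 400, 1.0
eps = rng.choice([-1.0, 1.0], size=T)          # P(eps = -1) = P(eps = 1) = 1/2
theta = lambda s: 0.5 * (s > 0.5)              # one local break; theta(0) = 0
t_over_T = np.arange(1, T + 1) / T

y_null = beta + eps                            # stable model (theta = 0)
y_alt = beta + theta(t_over_T) / np.sqrt(T) + eps

def phi_star_star(y, tol=1e-9):
    """Reject iff some increment y_t - y_{t-1} is not in {-2, 0, 2}."""
    d = np.diff(y)
    dist = np.min(np.abs(d[:, None] - np.array([-2.0, 0.0, 2.0])), axis=1)
    return bool(np.any(dist > tol))

print(phi_star_star(y_null), phi_star_star(y_alt))   # False, True
```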
5. CONCLUSION

This paper analyzes an alternative notion of asymptotic efficiency of tests. The starting point is the idea that in many models, natural statistics of interest converge weakly under reasonable primitive assumptions. An observed random element thus converges weakly to a limiting random element, whose distribution is different under the null hypothesis and local alternative. It is shown that if one restricts attention to the class of tests that remain asymptotically valid for all models that induce the weak convergence, then efficient tests in the original problem are simply given by efficient tests in the limiting problem (that is, with the limiting random element observed), evaluated at sample analogues. These efficient tests generically coincide with robustified versions of efficient tests that are derived as the limit of small sample efficient tests in canonical parametric versions of the model. The results of this paper thus provide an alternative and broader sense of asymptotic efficiency for many previously derived tests in econometrics.

It is a strong restriction to force tests to have correct asymptotic null rejection probability for all models that satisfy the weak convergence, because there are typically many such models. Some of these models one might be willing to rule out a priori, which reduces the appeal of the efficiency claim. At the same time, the setup allows for a general and easily applied efficiency claim under minimal additional regularity conditions. The approach of the paper and the traditional semiparametric derivations are thus best seen as complements.

APPENDIX

PROOF OF THEOREM 1: (i) Since $\varphi_S^*$ is $\mu_P$ almost everywhere continuous, it is also $P(\theta)$ almost everywhere continuous for any $\theta\in\Theta$ by the assumption about the family of measures $P(\theta)$, and by the continuous mapping theorem (CMT), $P_T(m,\theta)\Rightarrow P(\theta)$ implies

(32) $\int\varphi_S^*\,dP_T(m,\theta)\to\int\varphi_S^*\,dP(\theta)$

for all $\theta\in\Theta$. Thus (8) and (10) hold for $\hat\varphi_T^*$. Similarly, the $S\to\mathbb{R}$ mapping $(\varphi_S^*-\alpha)f_S$ is also $\mu_P$ almost everywhere continuous, so (11) follows from the CMT, too. The last claim follows from (32) and dominated convergence, since $0\le\varphi_S^*\le1$.

(ii) Heuristically, the proof follows the basic logic of constructing a tilted probability measure from a given alternative model described in the main text, but it addresses the difficulties that (a) $L^{i}$ is not necessarily bounded and (b) the testing problem is more general than a simple null against a simple alternative, and tests are further constrained by similarity and unbiasedness conditions.
The proof consists of three steps. Consider first complication (a). Note that in the limit where $X\sim P(\theta)$, any change of measure from $P(\theta_1)$ to $P(\theta_0)$ using $L^{i}=dP(\theta_0)/dP(\theta_1)$, $\int_AdP(\theta_0)=\int_AL^{i}\,dP(\theta_1)=\int_AL^{i}L\,dP(\theta_0)$ for all $A\in\mathcal B(S)$, is well defined, even if $L^{i}$ is unbounded. The idea of the proof is thus to perform the tilting "in the limit" (Step 2) and to ensure that it has the desired weak convergence implications for $X_T$ by constructing a probability space in which $X_T=h_T(Y_T)\to X$ almost surely (Step 1). Such almost sure representations of weakly converging random elements are known to exist in a complete and separable space. Because $X_T$ is a function of $Y_T$ and the set of models $M$ is defined relative to the distribution of $Y_T$, we extend $h_T$ in a way that preserves the information about the distribution of $Y_T$. The tilting of the limiting measure in Step 2 then also implies the appropriate change of measure for $Y_T$, as demonstrated in Step 3.

Due to complication (b), we cannot directly apply the Neyman–Pearson lemma to learn about the efficiency of a candidate test $\varphi_T$ for distinguishing between the original alternative model and the tilted model. Rather, we compare $\varphi_T$ with $\varphi_S^*$ directly in the limit in Step 2 by making its (best) asymptotic performance under the alternative part of the almost surely converging construction in Step 1. This is possible because (randomized) tests take on values in the unit interval, so that any sequence $\varphi_T$ has a subsequence that converges weakly by Prohorov's theorem. Because the tilting in the limit is a function of $X$ only, $X$ is a sufficient statistic, so that $\varphi_S^*(X)$ is an overall best test for learning about the tilting; see Step 2. Finally, as $\varphi_T$ is known to satisfy (8), (10), and (11), its weak limit is a level $\alpha$ test and satisfies (5) and (6) under the tilted probability measures, so that it cannot be better than $\varphi_S^*$; see Step 3.

Step 1. Pick any $m\in M$. Define $\bar F_T=\int F_T(m,\theta)\,dw(\theta)$, $\bar P_T=\int P_T(m,\theta)\,dw(\theta)$, and $\bar P=\int P(\theta)\,dw(\theta)$.

For any bounded and continuous function $\vartheta:S\to\mathbb{R}$, $\int\vartheta\,dP_T(m,\theta)\to\int\vartheta\,dP(\theta)$ for all $\theta\in\Theta_1$ by the CMT, so that by dominated convergence, also $\int\vartheta\,d\bar P_T\to\int\vartheta\,d\bar P$. Thus, $h_T\bar F_T=\bar P_T\Rightarrow\bar P$. Let $D^{n}_{[0,1]}$ be the space of $\mathbb{R}^{n}$-valued cadlag functions on the unit interval, equipped with the Billingsley (1968) metric, and define the mapping $\chi_T:\mathbb{R}^{n_T}\to D^{n}_{[0,1]}$ as $\{y_t\}_{t=1}^{T}\mapsto T^{-1}\Phi_Z(y_{\lceil\cdot T\rceil})$, where $\Phi_Z$ is the cumulative distribution function of a standard normal applied element by element. Note that $\chi_T$ is injective, and denote by $\chi_T^{-1}$ a $D^{n}_{[0,1]}\to\mathbb{R}^{n_T}$ function such that $\chi_T^{-1}(\chi_T(y))=y$ for all $y\in\mathbb{R}^{n_T}$. Since $\sup_{s\in[0,1]}\|\chi_T(s)\|\le1/T\to0$, the probability measures $(h_T,\chi_T)\bar F_T$ on the complete and separable space $S\times D^{n}_{[0,1]}$ converge weakly to the product measure $\bar P\times\delta_0$, where
$\delta_0$ puts all mass at the zero function in $D^{n}_{[0,1]}$. Let $T_1\to\infty$ be any subsequence of $T$ such that $\lim_{T_1\to\infty}\int\varphi_{T_1}\,d\bar F_{T_1}=\limsup_{T\to\infty}\int\varphi_T\,d\bar F_T$. Since the probability measures $(h_{T_1},\chi_{T_1},\varphi_{T_1})\bar F_{T_1}$ on the complete and separable space $S\times D^{n}_{[0,1]}\times[0,1]$ are uniformly tight, by Prohorov's theorem (see, for instance, Theorem 36 of Pollard (2002, p. 185)), there exists a subsequence $T_2$ of $T_1$ such that $(h_{T_2},\chi_{T_2},\varphi_{T_2})\bar F_{T_2}\Rightarrow\bar\nu$ as $T_2\to\infty$, where $(\pi_X,\pi_Y)\bar\nu=\bar P\times\delta_0$, and $\pi_X$, $\pi_Y$, and $\pi_\varphi$ are the projections of $S\times D^{n}_{[0,1]}\times[0,1]$ on $S$, $D^{n}_{[0,1]}$, and $[0,1]$, respectively. For notational convenience, write $T$ for $T_2$ in the following. By Theorem 11.7.2 of Dudley (2002), there exists a probability space $(\Omega^*,\mathcal F^*,P^*)$ and functions $\eta_T:\Omega^*\to S\times D^{n}_{[0,1]}\times[0,1]$ such that $\eta_TP^*=(h_T,\chi_T,\varphi_T)\bar F_T$, $\eta_0P^*=\bar\nu$, and $\eta_T(\omega^*)\to\eta_0(\omega^*)$ for $P^*$ almost all $\omega^*\in\Omega^*$, and by Theorem 11.7.3 of Dudley (2002), we may assume $\Omega^*$ to be complete and separable. In this construction, note that for $P^*$ almost all $\omega^*$, $h_T\circ\chi_T^{-1}\circ\pi_Y\circ\eta_T(\omega^*)=\pi_X\circ\eta_T(\omega^*)$ and $\varphi_T\circ\chi_T^{-1}\circ\pi_Y\circ\eta_T(\omega^*)=\pi_\varphi\circ\eta_T(\omega^*)$. Furthermore, $(\pi_X\circ\eta_T)P^*=\bar P_T$, $(\chi_T^{-1}\circ\pi_Y\circ\eta_T)P^*=\bar F_T$, and $\int\varphi_T\,d\bar F_T=\int(\pi_\varphi\circ\eta_T)\,dP^*\to\int(\pi_\varphi\circ\eta_0)\,dP^*$, where the convergence follows from the dominated convergence theorem, since $\pi_\varphi\circ\eta_T(\omega^*)\in[0,1]$ for all $\omega^*\in\Omega^*$. We need to show that $\int(\pi_\varphi\circ\eta_0)\,dP^*\le\int\varphi_S^*\,d\bar P$.

Step 2. Note that $P(\theta)$ is absolutely continuous with respect to $\bar P$ for any $\theta\in\Theta$, and denote by $L(\theta)$ the Radon–Nikodym derivative of $P(\theta)$ with respect to $\bar P$, so that for all $A\in\mathcal B(S)$, $\int_AdP(\theta)=\int_AL(\theta)\,d\bar P$ (existence of $L$ is ensured by the Radon–Nikodym theorem; see, for instance, Pollard (2002, p. 56)). Define $Q^*(\theta)$ to be the probability measure on $\mathcal F^*$, indexed by $\theta\in\Theta$, via $\int_AdQ^*(\theta)=\int_A(L(\theta)\circ\pi_X\circ\eta_0)\,dP^*$ for all $A\in\mathcal F^*$. Note that $Q^*(\theta)$ is a probability kernel. By construction, $(\pi_X\circ\eta_0)Q^*(\theta)=P(\theta)$, since for all $A\in\mathcal B(S)$, $(\pi_X\circ\eta_0)Q^*(\theta)(A)=\int_AL(\theta)\,d\bar P=\int_AdP(\theta)$. Consider the hypothesis test

(33) $H_0^*:\omega^*\sim Q^*(\theta),\ \theta\in\Theta_0,$ against $H_1^*:\omega^*\sim Q^*(\theta),\ \theta\in\Theta_1.$
Because the Radon–Nikodym derivative between $Q^*(\theta_1)$ and $Q^*(\theta_2)$ is given by $(L(\theta_1)/L(\theta_2))\circ\pi_X\circ\eta_0$, the statistic $\pi_X\circ\eta_0:\Omega^*\to S$ is sufficient for $\theta$ by the factorization theorem (see, for instance, Theorem 2.21 of Schervish (1995)). Thus, for any test $\varphi_\Omega:\Omega^*\to[0,1]$, one can define a corresponding test $\varphi_S:S\to[0,1]$ via $\varphi_S(x)=E[\varphi_\Omega|(\pi_X\circ\eta_0)(\omega^*)=x]$, which satisfies $\int\varphi_\Omega\,dQ^*(\theta)=\int(\varphi_S\circ\pi_X\circ\eta_0)\,dQ^*(\theta)=\int\varphi_S\,dP(\theta)$ for all $\theta\in\Theta$ (cf. Theorem 3.18 of Schervish (1995)), and also $\int(f_S\circ\pi_X\circ\eta_0)(\varphi_\Omega-\alpha)\,dQ^*(\theta)=\int f_S(\varphi_S-\alpha)\,dP(\theta)$ for any $f_S\in F_S$. Since the level $\alpha$ test $\varphi_S^*:S\to[0,1]$ of $H_0:X\sim P(\theta)$, $\theta\in\Theta_0$, against $H_1:X\sim P(\theta)$, $\theta\in\Theta_1$, maximizes weighted average power subject to (5) and (6), the level $\alpha$ test $\varphi_S^*\circ\pi_X\circ\eta_0:\Omega^*\to[0,1]$ of (33) maximizes weighted average power $\int\int\varphi_\Omega\,dQ^*(\theta)\,dw(\theta)$ among all level
$\alpha$ tests $\varphi_\Omega:\Omega^*\to[0,1]$ of (33) that satisfy

(34) $\int\varphi_\Omega\,dQ^*(\theta)\ge\pi_0(\theta)$ for all $\theta\in\Theta_1$,

(35) $\int(f_S\circ\pi_X\circ\eta_0)(\varphi_\Omega-\alpha)\,dQ^*(\theta)=0$ for all $\theta\in\bar\Theta_0$ and $f_S\in F_S$,

and it achieves the same weighted average power $\int\int(\varphi_S^*\circ\pi_X\circ\eta_0)\,dQ^*(\theta)\,dw(\theta)=\int\int\varphi_S^*\,dP(\theta)\,dw(\theta)=\int\varphi_S^*\,d\bar P$.

Step 3. Now define the sequence of measures $G_T(\theta)$ on $\mathcal B(\mathbb{R}^{n_T})$, indexed by $\theta\in\Theta$, via $G_T(\theta)=(\chi_T^{-1}\circ\pi_Y\circ\eta_T)Q^*(\theta)$, which induce the measures $h_TG_T(\theta)=(h_T\circ\chi_T^{-1}\circ\pi_Y\circ\eta_T)Q^*(\theta)=(\pi_X\circ\eta_T)Q^*(\theta)$ on $\mathcal B(S)$. By construction of $\eta_T$ and absolute continuity of $Q^*(\theta)$ with respect to $P^*$, we have $\eta_T(\omega^*)\to\eta_0(\omega^*)$ for $Q^*(\theta)$ almost all $\omega^*$, and since almost sure convergence implies weak convergence, $h_TG_T(\theta)\Rightarrow(\pi_X\circ\eta_0)Q^*(\theta)=P(\theta)$ pointwise in $\theta\in\Theta$. $G_T(\theta)$ thus defines a model $m_0\in M$. Since $\varphi_T$ satisfies (8), (10), and (11), we have $\limsup_{T\to\infty}\int\varphi_T\,dG_T(\theta)\le\alpha$ for all $\theta\in\Theta_0$, $\liminf_{T\to\infty}\int\varphi_T\,dG_T(\theta)\ge\pi_0(\theta)$ for all $\theta\in\Theta_1$, and $\lim_{T\to\infty}\int(\varphi_T-\alpha)(f_S\circ h_T)\,dG_T(\theta)=0$ for all $\theta\in\bar\Theta_0$ and $f_S\in F_S$. By the dominated convergence theorem and the construction of $G_T$, $\int\varphi_T\,dG_T(\theta)=\int(\varphi_T\circ\chi_T^{-1}\circ\pi_Y\circ\eta_T)\,dQ^*(\theta)=\int(\pi_\varphi\circ\eta_T)\,dQ^*(\theta)\to\int(\pi_\varphi\circ\eta_0)\,dQ^*(\theta)$ for all $\theta\in\Theta$, and $\int(\varphi_T-\alpha)(f_S\circ h_T)\,dG_T(\theta)=\int((\pi_\varphi\circ\eta_T)-\alpha)(f_S\circ\pi_X\circ\eta_T)\,dQ^*(\theta)\to\int((\pi_\varphi\circ\eta_0)-\alpha)(f_S\circ\pi_X\circ\eta_0)\,dQ^*(\theta)$ for all $\theta\in\bar\Theta_0$ and $f_S\in F_S$. We conclude that the test $\pi_\varphi\circ\eta_0:\Omega^*\to[0,1]$ of (33) is of level $\alpha$ and satisfies (34) and (35). Its weighted average power $\int\int(\pi_\varphi\circ\eta_0)\,dQ^*(\theta)\,dw(\theta)$ is, therefore, smaller than or equal to the weighted average power $\int\int\varphi_S^*\,dP(\theta)\,dw(\theta)$ of the test $\varphi_S^*\circ\pi_X\circ\eta_0:\Omega^*\to[0,1]$. The result now follows from noting that $\int\int(\pi_\varphi\circ\eta_0)\,dQ^*(\theta)\,dw(\theta)=\int\int(L(\theta)\circ\pi_X\circ\eta_0)(\pi_\varphi\circ\eta_0)\,dP^*\,dw(\theta)=\int(\pi_\varphi\circ\eta_0)\,dP^*$, since $\int L(\theta)\,dw(\theta)=1$ $\bar P$ almost surely and the change of the order of integration is allowed by Fubini's theorem. Q.E.D.

PROOF OF THEOREM 2: For notational convenience, write $F_{0T}=F_T(m,\theta)$, $P_T=h_TF_{0T}$, $P_{0T}^{\phi}=(h_T\circ\phi_T)F_{0T}$, $P_0=P(\theta)$, and $P_0^{\phi}=\tilde\phi P_0$. Proceed in analogy to the proof of Theorem 1(ii) to argue for the existence of a probability space $(\Omega^*,\mathcal F^*,P^*)$ with complete and separable $\Omega^*$ and functions $\eta_T:\Omega^*\to S\times D^{n}_{[0,1]}$ such that $\eta_TP^*=(h_T\circ\phi_T,\chi_T)F_{0T}$, $\eta_0P^*=P_0^{\phi}\times\delta_0$, where the probability measure $\delta_0$ puts all mass on the zero function in $D^{n}_{[0,1]}$, and $\eta_T(\omega^*)\to\eta_0(\omega^*)$ for $P^*$ almost all $\omega^*\in\Omega^*$. In particular, $(\pi_X\circ\eta_T)P^*=P_{0T}^{\phi}$
and $(\chi_T^{-1}\circ\pi_Y\circ\eta_T)P^*=F_{0T}$, where $\pi_X$ and $\pi_Y$ are the projections of $S\times D^{n}_{[0,1]}$ on $S$ and $D^{n}_{[0,1]}$, respectively. Also, for $P^*$ almost all $\omega^*$, $h_T\circ\phi_T\circ\chi_T^{-1}\circ\pi_Y\circ\eta_T(\omega^*)=\pi_X\circ\eta_T(\omega^*)$.

Let $\nu$ be the probability measure on $\mathcal B(R\times S)$ induced by $(\tilde\rho,\tilde\phi):S\to R\times S$ under $P_0$, that is, $\nu=(\tilde\rho,\tilde\phi)P_0$. Since $x=\tilde g(\tilde\rho(x),\tilde\phi(x))$ for all $x\in S$, $P_0=\tilde g\nu$. By Proposition 10.2.8 of Dudley (2002), there exists a probability kernel $\nu_x$ from $(S,\mathcal B(S))$ to $(R,\mathcal B(R))$ such that for each $A\in\mathcal B(S)$ and $B\in\mathcal B(R)$, $\nu(A\times B)=\int_A\nu_x(B)\,dP_0^{\phi}(x)$. Note that the $\Omega^*\times\mathcal B(R)\to[0,1]$ mapping defined via $(\omega^*,B)\mapsto\nu_{\pi_X\circ\eta_0(\omega^*)}(B)$ is a probability kernel from $(\Omega^*,\mathcal F^*)$ to $(R,\mathcal B(R))$. We can thus construct the probability measure $\mu^*$ on $(\Omega^*\times R,\mathcal F^*\otimes\mathcal B(R))$ via $\mu^*(C\times B)=\int_C\nu_{\pi_X\circ\eta_0(\omega^*)}(B)\,dP^*(\omega^*)$, and by construction, the mapping $(\omega^*,r)\mapsto\tilde g(r,\pi_X\circ\eta_0(\omega^*))$ induces the measure $P_0$ under $\mu^*$.

Let $\xi_T(\omega^*,r)=h_T(g_T(r,\phi_T\circ\chi_T^{-1}\circ\pi_Y\circ\eta_T(\omega^*)))$ and $\tilde\xi_T(\omega^*,r)=\tilde g(r,\pi_X\circ\eta_T(\omega^*))$. Let $\tilde d_S\le1$ be a metric equivalent to $d_S$. By assumption (iv), $\tilde d_S(\xi_T(\omega^*,r),\tilde\xi_T(\omega^*,r))$ converges to zero in $P^*$ probability for any fixed $r\in R$. Thus, it also converges to zero in $\mu^*$ probability by dominated convergence. Furthermore, $\tilde\xi_T(\omega^*,r)\to\tilde g(r,\pi_X\circ\eta_0(\omega^*))$ for $\mu^*$ almost all $(\omega^*,r)$ by the continuity of $\tilde g$. Since convergence in probability, and also almost sure convergence, imply weak convergence, the measures $G_T$ on $\mathcal B(\mathbb{R}^{n_T})$ induced by the mapping $(\omega^*,r)\mapsto g_T(r,\phi_T\circ\chi_T^{-1}\circ\pi_Y\circ\eta_T(\omega^*))$ under $\mu^*$ satisfy $h_TG_T\Rightarrow P_0$. Finally, from $\phi_T\circ g_T(r,\phi_T(y))=\phi_T(y)$ for all $r\in R$ and $y\in\mathbb{R}^{n_T}$, it follows that $\phi_TG_T=(\phi_T\circ\chi_T^{-1}\circ\pi_Y\circ\eta_T)P^*=\phi_TF_{0T}$. Q.E.D.
PROOF OF THEOREM 3: The equality follows as in the proof of Theorem 1(i). For the inequality, write $P_0=P(\theta_0)$, $\bar P=\bar LP_0$, $F_{0T}=F_T(m_0,\theta_0)$, $P_{0T}=h_TF_{0T}$, $\bar F_T=\int F_T(m_1,\theta)\,dw(\theta)$, and $\bar P_T=h_T\bar F_T$, so that $P_{0T}\Rightarrow P_0$ and $\bar P_T\Rightarrow\bar P$ by assumption. Pick $1/2>\epsilon>0$ such that $P_0(\bar L=\epsilon)=0$ and define $B_\epsilon=\{x\in S:\bar L(x)>\epsilon\}$. Note that $\int_{S\backslash B_\epsilon}d\bar P=\int_{S\backslash B_\epsilon}\bar L\,dP_0\le\epsilon\int_{S\backslash B_\epsilon}dP_0\le\epsilon$, so that $\int_{B_\epsilon}d\bar P\ge1-\epsilon$. The assumption about $\bar L$ in the statement of the theorem also implies the existence of an open set $\bar B_\epsilon$ such that $\int_{\bar B_\epsilon}d\bar P>1-\epsilon$ and $\bar L:\bar B_\epsilon\to\mathbb{R}$ is Lipschitz, since $\bar P$ is absolutely continuous with respect to $P_0$ (cf. Example 3 of Pollard (2002, p. 55)). With $B=B_\epsilon\cap\bar B_\epsilon$, $\bar L^{i}:B\to\mathbb{R}$ with $\bar L^{i}=1/\bar L$ is thus bounded and Lipschitz. Furthermore, since $S\backslash B$ is closed, there exists a Lipschitz function $\ell:S\to[0,1]$ that is zero on $S\backslash B$ and for which $\int\ell\,d\bar P\ge1-3\epsilon$ (see Pollard (2002, pp. 172–173) for an explicit construction). For future reference, define $\mathbf{1}_B$ and $\mathbf{1}_{B_\epsilon}$ as the indicator functions of $B$ and $B_\epsilon$, respectively, and note that $\mathbf{1}_B\ell=\mathbf{1}_{B_\epsilon}\ell=\ell$.

Define the scalar sequence

$\kappa_T=\dfrac{\int(\ell\circ h_T)\,dF_{0T}}{\int(\ell\circ h_T)(\bar L^{i}\circ h_T)\,d\bar F_T}=\dfrac{\int\ell\,dP_{0T}}{\int\ell\bar L^{i}\,d\bar P_T},$

and note that $\kappa_T\to1$ because $\int\ell\,dP_{0T}\to\int\ell\,dP_0$ and $\int\ell\bar L^{i}\,d\bar P_T\to\int\ell\bar L^{i}\,d\bar P=\int_B\ell\bar L^{i}\bar L\,dP_0=\int\ell\,dP_0$ by the continuous mapping theorem. Further define the probability distribution $G_T$ on $\mathcal B(\mathbb{R}^{n_T})$ via

$\displaystyle\int_AdG_T=\kappa_T\int_A(\ell\circ h_T)(\bar L^{i}\circ h_T)\,d\bar F_T+\int_A((1-\ell)\circ h_T)\,dF_{0T}$

for any $A\in\mathcal B(\mathbb{R}^{n_T})$. Then by construction, $Q_T=h_TG_T$, where $Q_T$ satisfies

$\displaystyle\int_AdQ_T=\kappa_T\int_A\ell\bar L^{i}\,d\bar P_T+\int_A(1-\ell)\,dP_{0T}$

for any $A\in\mathcal B(S)$. Now

$\varrho_{BL}(Q_T,P_0)=\sup_{\|\vartheta\|_{BL}\le1}\Big|\int\vartheta(dQ_T-dP_0)\Big|$
$\qquad\le\sup_{\|\vartheta\|_{BL}\le1}\Big|\int\vartheta\ell\bar L^{i}(\kappa_T\,d\bar P_T-d\bar P)\Big|+\sup_{\|\vartheta\|_{BL}\le1}\Big|\int\vartheta(1-\ell)(dP_{0T}-dP_0)\Big|$
$\qquad\le\|\ell\bar L^{i}\|_{BL}\big(\varrho_{BL}(\bar P_T,\bar P)+|\kappa_T-1|\big)+\|1-\ell\|_{BL}\,\varrho_{BL}(P_{0T},P_0),$

where the manipulations after the first inequality use $\int\vartheta\ell\,dP_0=\int\vartheta\ell\bar L^{i}\,d\bar P$ and the second inequality exploits that $\|\cdot\|_{BL}$ is a submultiplicative norm on the set of bounded Lipschitz functions $S\to\mathbb{R}$ (cf. Proposition 11.2.1 of Dudley (2002)). Also,

$|\kappa_T-1|=\bigg|\dfrac{\int\ell\,dP_{0T}}{\int\ell\bar L^{i}\,d\bar P_T}-\dfrac{\int\ell\,dP_0}{\int\ell\bar L^{i}\,d\bar P}\bigg|\le\dfrac{\|\ell\bar L^{i}\|_{BL}\,\varrho_{BL}(\bar P_T,\bar P)+\|\ell\|_{BL}\,\varrho_{BL}(P_{0T},P_0)}{\int\ell\bar L^{i}\,d\bar P_T}.$

Thus, $\lim_{T\to\infty}\varrho_{BL}(Q_T,P_0)/\delta_T=0$, so that (18) implies

$\limsup_{T\to\infty}\int\varphi_T\,dG_T\le\alpha.$
Now define the probability measures $\tilde F_T$ on $\mathcal B(\mathbb{R}^{n_T})$ via

$\displaystyle\int_Ad\tilde F_T=\tilde\kappa_T\int_A(\mathbf{1}_{B_\epsilon}\circ h_T)(\bar L\circ h_T)\,dG_T=\tilde\kappa_T\kappa_T\int_A(\ell\circ h_T)\,d\bar F_T+\tilde\kappa_T\int_A(\mathbf{1}_{B_\epsilon}(1-\ell)\bar L\circ h_T)\,dF_{0T}$

for any $A\in\mathcal B(\mathbb{R}^{n_T})$, where $\tilde\kappa_T=1/(\kappa_T\int(\ell\circ h_T)\,d\bar F_T+\int(\mathbf{1}_{B_\epsilon}(1-\ell)\bar L\circ h_T)\,dF_{0T})\to\tilde\kappa=1/\int(\ell+\mathbf{1}_{B_\epsilon}-\mathbf{1}_{B_\epsilon}\ell)\,d\bar P=1/\int\mathbf{1}_{B_\epsilon}\,d\bar P$ and $1\le1/\int\mathbf{1}_{B_\epsilon}\,d\bar P\le1+2\epsilon$. By the Neyman–Pearson lemma, the best test of $\tilde H_0:Y_T\sim G_T$ against $\tilde H_1:Y_T\sim\tilde F_T$ thus rejects for large values of $(\mathbf{1}_{B_\epsilon}\bar L)\circ h_T$, that is, $\bar L\circ h_T$. For any $T$, denote by $\tilde\varphi_T^*:\mathbb{R}^{n_T}\to[0,1]$ the test that rejects for large values of $\bar L\circ h_T$ of level $\int\tilde\varphi_T^*\,dG_T=\max(\int\varphi_T\,dG_T,\alpha)$, so that $\int(\tilde\varphi_T^*-\varphi_T)\,d\tilde F_T\ge0$ for all $T$. By Le Cam's first lemma (Lemma 6.4 in van der Vaart (1998)), $\tilde F_T$ is contiguous to $G_T$, since under $G_T$, the Radon–Nikodym derivative $\tilde\kappa_T(\mathbf{1}_{B_\epsilon}\bar L)\circ h_T$ converges weakly to the distribution $\tilde\kappa\mathbf{1}_{B_\epsilon}\bar LP_0$ by the continuous mapping theorem, and $\int\tilde\kappa\mathbf{1}_{B_\epsilon}\bar L\,dP_0=1$. Since both $\tilde\varphi_T^*$ and $\hat\varphi_T^*$ reject for large values of $\bar L\circ h_T$ and are of asymptotic level $\alpha$, we have $\int|\tilde\varphi_T^*-\hat\varphi_T^*|\,dG_T\to0$, so that by contiguity, also $\int|\tilde\varphi_T^*-\hat\varphi_T^*|\,d\tilde F_T\to0$. Thus $\limsup_{T\to\infty}\int(\varphi_T-\hat\varphi_T^*)\,d\tilde F_T\le0$.

To complete the proof, note that the total variation distance between $\tilde F_T$ and $\bar F_T$ is bounded above by

$\displaystyle\int|1-\tilde\kappa_T\kappa_T(\ell\circ h_T)|\,d\bar F_T\le|1-\tilde\kappa_T\kappa_T|+\tilde\kappa_T\kappa_T\int(1-\ell)\,d\bar P_T\to\bigg|\frac{1}{\int\mathbf{1}_{B_\epsilon}\,d\bar P}-1\bigg|+\frac{\int(1-\ell)\,d\bar P}{\int\mathbf{1}_{B_\epsilon}\,d\bar P}\le8\epsilon,$

so that $\limsup_{T\to\infty}\int(\varphi_T-\hat\varphi_T^*)\,d\bar F_T\le8\epsilon$, and the result follows, since $1/2>\epsilon>0$ can be chosen arbitrarily small. Q.E.D.

REFERENCES

ANDERSON, T. W., AND H. RUBIN (1949): "Estimators of the Parameters of a Single Equation in a Complete Set of Stochastic Equations," The Annals of Mathematical Statistics, 21, 570–582. [424]
ANDREWS, D. W. K. (1993): "Tests for Parameter Instability and Structural Change With Unknown Change Point," Econometrica, 61, 821–856. [425]
ANDREWS, D. W. K., AND W. PLOBERGER (1994): "Optimal Tests When a Nuisance Parameter Is Present Only Under the Alternative," Econometrica, 62, 1383–1414. [395]
ANDREWS, D. W. K., M. J. MOREIRA, AND J. H. STOCK (2006): "Optimal Two-Sided Invariant Similar Tests for Instrumental Variables Regression," Econometrica, 74, 715–752. [396,399,423]
(2008): "Efficient Two-Sided Nonsimilar Invariant Tests in IV Regression With Weak Instruments," Journal of Econometrics, 146, 241–254. [396]
BICKEL, P. J., C. A. J. KLAASSEN, Y. RITOV, AND J. A. WELLNER (1998): Efficient and Adaptive Estimation for Semiparametric Models. New York: Springer-Verlag. [398]
BILLINGSLEY, P. (1968): Convergence of Probability Measures. New York: Wiley. [400,427]
BREITUNG, J. (2002): "Nonparametric Tests for Unit Roots and Cointegration," Journal of Econometrics, 108, 343–363. [404]
CHOI, A., W. J. HALL, AND A. SCHICK (1996): "Asymptotically Uniformly Most Powerful Tests in Parametric and Semiparametric Models," The Annals of Statistics, 24, 841–861. [398]
DAVIDSON, J. (2002): "Establishing Conditions for the Functional Central Limit Theorem in Nonlinear and Semiparametric Time Series Processes," Journal of Econometrics, 106, 243–269. [404]
(2009): "When Is a Time-Series I(0)?" in The Methodology and Practice of Econometrics: A Festschrift in Honour of David F. Hendry, ed. by J. Castle and N. Shephard. Oxford: Oxford University Press, 322–342. [404]
DE JONG, R. M., AND J. DAVIDSON (2000): "The Functional Central Limit Theorem and Weak Convergence to Stochastic Integrals I: Weakly Dependent Processes," Econometric Theory, 16, 621–642. [421]
DICKEY, D. A., AND W. A. FULLER (1979): "Distribution of the Estimators for Autoregressive Time Series With a Unit Root," Journal of the American Statistical Association, 74, 427–431. [406]
DUDLEY, R. M. (2002): Real Analysis and Probability. Cambridge, U.K.: Cambridge University Press. [418,419,428,430,431]
DUFOUR, J.-M. (1997): "Some Impossibility Theorems in Econometrics With Applications to Structural and Dynamic Models," Econometrica, 65, 1365–1387. [418]
DUFOUR, J.-M., AND M. L. KING (1991): "Optimal Invariant Tests for the Autocorrelation Coefficient in Linear Regressions With Stationary or Nonstationary AR(1) Errors," Journal of Econometrics, 47, 115–143. [408]
ELLIOTT, G. (1999): "Efficient Tests for a Unit Root When the Initial Observation Is Drawn From Its Unconditional Distribution," International Economic Review, 40, 767–783. [395]
ELLIOTT, G., AND M. JANSSON (2003): "Testing for Unit Roots With Stationary Covariates," Journal of Econometrics, 115, 75–89. [395,399,420,421]
ELLIOTT, G., AND U. K. MÜLLER (2006): "Efficient Tests for General Persistent Time Variation in Regression Coefficients," Review of Economic Studies, 73, 907–940. [395]
(2009): "Pre and Post Break Parameter Inference," Working Paper, Princeton University. [401]
ELLIOTT, G., M. JANSSON, AND E. PESAVENTO (2005): "Optimal Power for Testing Potential Cointegrating Vectors With Known Parameters for Nonstationarity," Journal of Business & Economic Statistics, 23, 34–48. [395]
ELLIOTT, G., T. J. ROTHENBERG, AND J. H. STOCK (1996): "Efficient Tests for an Autoregressive Unit Root," Econometrica, 64, 813–836. [395]
HANSEN, B. E. (1990): "Convergence to Stochastic Integrals for Dependent Heterogeneous Processes," Econometric Theory, 8, 489–500. [421]
IBRAGIMOV, R., AND U. K. MÜLLER (2010): "T-Statistic Based Correlation and Heterogeneity Robust Inference," Journal of Business & Economic Statistics, 28, 453–468. [398]
JANSSON, M. (2005): "Point Optimal Tests of the Null Hypothesis of Cointegration," Journal of Econometrics, 124, 187–201. [395]
(2008): "Semiparametric Power Envelopes for Tests of the Unit Root Hypothesis," Econometrica, 76, 1103–1142. [396-398,408,409,412,413]
JANSSON, M., AND M. J. MOREIRA (2006): "Optimal Inference in Regression Models With Nearly Integrated Regressors," Econometrica, 74, 681–714. [396,405]
JEGANATHAN, P. (1995): "Some Aspects of Asymptotic Theory With Applications to Time Series Models," Econometric Theory, 11, 818–887. [398]
KING, M. L. (1988): "Towards a Theory of Point Optimal Testing," Econometric Reviews, 6, 169–218. [400,401]
LE CAM, L. (1986): Asymptotic Methods in Statistical Decision Theory. New York: Springer-Verlag. [398]
LEHMANN, E. L. (1986): Testing Statistical Hypotheses (Second Ed.). New York: Wiley. [401,411,415]
LI, H., AND U. K. MÜLLER (2009): "Valid Inference in Partially Unstable General Method of Moment Models," Review of Economic Studies, 76, 343–365. [425]
LING, S., AND M. MCALEER (2003): "On Adaptive Estimation in Nonstationary ARMA Models With GARCH Errors," The Annals of Statistics, 31, 642–674. [398]
MCLEISH, D. L. (1974): "Dependent Central Limit Theorems and Invariance Principles," The Annals of Probability, 2, 620–628. [404]
MOREIRA, M. J. (2001): "Tests With Correct Size When Instruments Can Be Arbitrarily Weak," Working Paper, University of California, Berkeley. [424]
MÜLLER, U. K. (2007): "A Theory of Robust Long-Run Variance Estimation," Journal of Econometrics, 141, 1331–1352. [404]
(2008): "The Impossibility of Consistent Discrimination Between I(0) and I(1) Processes," Econometric Theory, 24, 616–630. [404]
(2009): "Comment on: Unit Root Testing in Practice Dealing With Uncertainty Over the Trend and Initial Condition," Econometric Theory, 25, 643–648. [395]
MÜLLER, U. K., AND G. ELLIOTT (2003): "Tests for Unit Roots and the Initial Condition," Econometrica, 71, 1269–1286. [395,416]
MÜLLER, U. K., AND M. W. WATSON (2008a): "Low-Frequency Robust Cointegration Testing," Working Paper, Princeton University. [398,401]
(2008b): "Testing Models of Low-Frequency Variability," Econometrica, 76, 979–1016. [398,404,414]
NEWEY, W. K. (1990): "Semiparametric Efficiency Bounds," Journal of Applied Econometrics, 5, 99–135. [398]
NYBLOM, J. (1989): "Testing for the Constancy of Parameters Over Time," Journal of the American Statistical Association, 84, 223–230. [395]
PHILLIPS, P. C. B. (1988): "Weak Convergence of Sample Covariance Matrices to Stochastic Integrals via Martingale Approximations," Econometric Theory, 4, 528–533. [421]
PHILLIPS, P. C. B., AND V. SOLO (1992): "Asymptotics for Linear Processes," The Annals of Statistics, 20, 971–1001. [404]
POLLARD, D. (2001): "Contiguity," Working Paper, Yale University. [408]
(2002): A User's Guide to Measure Theoretic Probability. Cambridge, U.K.: Cambridge University Press. [419,428,430]
PÖTSCHER, B. M. (2002): "Lower Risk Bounds and Properties of Confidence Sets for Ill-Posed Estimation Problems With Applications to Spectral Density and Persistence Estimation, Unit Roots, and Estimation of Long Memory Parameters," Econometrica, 70, 1035–1065. [418]
ROTHENBERG, T. J., AND J. H. STOCK (1997): "Inference in a Nearly Integrated Autoregressive Model With Nonnormal Innovations," Journal of Econometrics, 80, 269–286. [396,412]
SCHERVISH, M. J. (1995): Theory of Statistics. New York: Springer. [428]
SHIRYAEV, A. N., AND V. G. SPOKOINY (2000): Statistical Experiments and Decisions: Asymptotic Theory. Singapore: World Scientific Publishing. [398]
SOWELL, F. (1996): "Optimal Tests for Parameter Instability in the Generalized Method of Moments Framework," Econometrica, 64, 1085–1107. [398,399,409,425]
SRIANANTHAKUMAR, S., AND M. L. KING (2006): "A New Approximate Point Optimal Test of a Composite Null Hypothesis," Journal of Econometrics, 130, 101–122. [401]
STAIGER, D., AND J. H. STOCK (1997): "Instrumental Variables Regression With Weak Instruments," Econometrica, 65, 557–586. [423]
STEIN, C. (1956): "Efficient Nonparametric Estimation and Testing," in Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1. Berkeley: University of California Press, 187–196. [398,411]
STOCK, J. H. (1994): "Deciding Between I(1) and I(0)," Journal of Econometrics, 63, 105–131. [404]
(1999): "A Class of Tests for Integration and Cointegration," in Cointegration, Causality and Forecasting: A Festschrift in Honour of Clive W. J. Granger, ed. by R. F. Engle and H. White. Oxford: Oxford University Press, 135–167. [397,410]
STOCK, J. H., AND M. W. WATSON (1996): "Confidence Sets in Regressions With Highly Serially Correlated Regressors," Working Paper, Harvard University. [396]
STOCK, J. H., AND J. H. WRIGHT (2000): "GMM With Weak Identification," Econometrica, 68, 1055–1096. [409]
VAN DER VAART, A. W. (1998): Asymptotic Statistics. Cambridge, U.K.: Cambridge University Press. [398,432]
VAN DER VAART, A. W., AND J. A. WELLNER (1996): Weak Convergence and Empirical Processes With Applications to Statistics. New York: Springer. [409]
WHITE, H. (2001): Asymptotic Theory for Econometricians (Revised Ed.). San Diego: Academic Press. [404]
WOOLDRIDGE, J. M., AND H. WHITE (1988): "Some Invariance Principles and Central Limit Theorems for Dependent Heterogeneous Processes," Econometric Theory, 4, 210–230. [404]
Dept. of Economics, Princeton University, Princeton, NJ 08544, U.S.A.;
[email protected]. Manuscript received March, 2008; final revision received September, 2010.
Econometrica, Vol. 79, No. 2 (March, 2011), 437–452
EFFICIENCY BOUNDS FOR MISSING DATA MODELS WITH SEMIPARAMETRIC RESTRICTIONS BY BRYAN S. GRAHAM1 This paper shows that the semiparametric efficiency bound for a parameter identified by an unconditional moment restriction with data missing at random (MAR) coincides with that of a particular augmented moment condition problem. The augmented system consists of the inverse probability weighted (IPW) original moment restriction and an additional conditional moment restriction which exhausts all other implications of the MAR assumption. The paper also investigates the value of additional semiparametric restrictions on the conditional expectation function (CEF) of the original moment function given always observed covariates. In the program evaluation context, for example, such restrictions are implied by semiparametric models for the potential outcome CEFs given baseline covariates. The efficiency bound associated with this model is shown to also coincide with that of a particular moment condition problem. Some implications of these results for estimation are briefly discussed. KEYWORDS: Missing data, semiparametric efficiency, propensity score, (augmented) inverse probability weighting, double robustness, average treatment effects, causal inference.
1. INTRODUCTION

LET $Z=(Y_1,X')'$ BE A VECTOR of modelling variables, let $\{Z_i\}_{i=1}^{\infty}$ be an independent and identically distributed random sequence drawn from the unknown distribution $F_0$, let $\beta$ be a $K\times1$ unknown parameter vector, and let $\psi(Z,\beta)$ be a known vector-valued function of the same dimension.² The only prior restriction on $F_0$ is that for some $\beta_0\in\mathbb{B}\subset\mathbb{R}^{K}$,

(1) $E[\psi(Z,\beta_0)]=0.$
Chamberlain (1987) showed that the maximal asymptotic precision with which $\beta_0$ can be estimated under (1) (subject to identification and regularity conditions) is given by $\mathcal I_f(\beta_0)=\Gamma_0'\Omega_0^{-1}\Gamma_0$, with $\Gamma_0=E[\partial\psi(Z,\beta_0)/\partial\beta']$ and $\Omega_0=V(\psi(Z,\beta_0))$.³

¹ I would like to thank Gary Chamberlain, Jinyong Hahn, Guido Imbens, Michael Jansson, and Whitney Newey for comments on an earlier draft. Helpful discussions with Oliver Linton, Cristine Pinto, Jim Powell, and Geert Ridder as well as participants in the Berkeley Econometrics Reading Group and Seminars are gratefully acknowledged. This revision has benefited from Tom Rothenberg's skepticism, discussions with Michael Jansson, Justin McCrary, Jim Powell, and the comments of a co-editor and three especially meticulous/generous anonymous referees. All the usual disclaimers apply. This is a heavily revised version of material that previously circulated under the titles "A Note on Semiparametric Efficiency in Moment Condition Models With Missing Data," "GMM 'Equivalence' for Semiparametric Missing Data Models," and "Efficient Estimation of Missing Data Models Using Moment Conditions and Semiparametric Restrictions."
² Extending what follows to the overidentified case is straightforward.
Now consider the case where a random sequence from $F_0$ is unavailable; instead, only a selected sequence of samples is available. Let $D$ be a binary selection indicator. When $D=1$, we observe $Y_1$ and $X$; when $D=0$, we observe only $X$.⁴ This paper considers estimation of $\beta_0$ under restriction (1) and the following additional assumptions.
ASSUMPTION 1.1—Random Sampling: $\{Z_i,D_i\}_{i=1}^{\infty}$ is an independent and identically distributed random sequence from $F_0$.

ASSUMPTION 1.2—Observed Data: For each unit, we observe $D$, $X$, and $Y=DY_1$.

ASSUMPTION 1.3—Conditional Independence: $Y_1\perp D|X$.

ASSUMPTION 1.4—Overlap: $0<\kappa\le p_0(x)\le1$ for $p_0(x)=\Pr(D=1|X=x)$ and for all $x\in\mathcal X\subset\mathbb{R}^{\dim(X)}$.

Restriction (1) and Assumptions 1.1–1.4 constitute a semiparametric model for the data. Henceforth, I refer to this model as the semiparametric missing data model or the missing at random (MAR) setup. Robins, Rotnitzky, and Zhao (1994, Proposition 2.3, p. 850) derived the efficient influence function for this problem and proposed a locally efficient augmented inverse probability weighting (AIPW) estimator (cf. Scharfstein, Rotnitzky, and Robins (1999), Bang and Robins (2005), Tsiatis (2006)). Cheng (1994), Hahn (1998), Hirano, Imbens, and Ridder (2003), Imbens, Newey, and Ridder (2005), and Chen, Hong, and Tarozzi (2008) developed globally efficient estimators.

The MAR setup has been applied to a number of important econometric and statistical problems, including program evaluation as surveyed by Imbens (2004), nonclassical measurement error (e.g., Robins, Hsieh, and Newey (1995), Chen, Hong, and Tamer (2005)), missing regressors (e.g., Robins, Rotnitzky, and Zhao (1994)), attrition in panel data (e.g., Robins, Rotnitzky, and Zhao (1995), Robins and Rotnitzky (1995), Wooldridge (2002)), and M-estimation under variable probability sampling (e.g., Wooldridge (1999a, 2007)). Chen, Hong, and Tarozzi (2004), Wooldridge (2007), and Graham, Pinto, and Egel (2010) discussed several other applications.

³ Throughout, uppercase letters denote random variables, lowercase letters denote specific realizations of them, and calligraphic letters denote their support. I use the notation $E[A|c]=E[A|C=c]$, $V(A|c)=\mathrm{Var}(A|C=c)$, and $C(A,B|c)=\mathrm{Cov}(A,B|C=c)$.
⁴ An earlier version of this paper considered the slightly more general setup with $\psi(Z,\beta)=\psi_1(Y_1,X,\beta)-\psi_0(Y_0,X,\beta)$ with $(X,Y)$ observed, where $Y=DY_1+(1-D)Y_0$. Results for this extended model, which contains the standard causal inference model and the two-sample instrumental variables model as special cases (cf. Imbens (2004), Angrist and Krueger (1992)), follow directly and straightforwardly from those outlined below.
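To fix ideas, a minimal simulated data generating process satisfying Assumptions 1.1–1.4 might look as follows (the propensity score, outcome equation, and sample size are arbitrary illustrations, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
p0 = 0.1 + 0.8 / (1.0 + np.exp(-x))        # propensity score in [0.1, 0.9]: overlap
d = rng.binomial(1, p0)                    # selection indicator, depends on X only
y1 = 1.0 + 2.0 * x + rng.normal(size=n)    # outcome; Y1 _||_ D | X by construction
y = d * y1                                 # observed data: (D, X, Y = D * Y1)
```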
The maximal asymptotic precision with which $\beta_0$ can be estimated under the MAR setup has been characterized by Robins, Rotnitzky, and Zhao (1994) and is given by

(2) $\mathcal I_m(\beta_0)=\Gamma_0'\Lambda_0^{-1}\Gamma_0,$
with $\Lambda_0=E[\Sigma_0(X)/p_0(X)+q(X;\beta_0)q(X;\beta_0)']$, where $\Sigma_0(x)=V(\psi(Z,\beta_0)|x)$ and $q(x;\beta)=E[\psi(Z,\beta)|x]$. The associated efficient influence function, also due to Robins, Rotnitzky, and Zhao (1994), is given by

(3) $\phi(z,\theta_0)=\Gamma_0^{-1}\times\bigg[\dfrac{d}{p_0(x)}\psi(z,\beta_0)-(d-p_0(x))\dfrac{q(x;\beta_0)}{p_0(x)}\bigg]$

for $\theta=(p,q',\beta')'$. The calculation of (2) is now standard. Knowledge of (2) is useful because it quantifies the cost—in terms of asymptotic precision—of the missing data and because it can be used to verify whether a specific estimator for $\beta_0$ is efficient. To simplify what follows, I will explicitly assume that $\mathcal I_m(\beta_0)$ is well defined (i.e., that all its component expectations exist and are finite, and that all its component matrices are nonsingular).

This paper shows that the semiparametric efficiency bound for $\beta_0$ under the MAR setup coincides with the bound for a particular augmented moment condition problem. The augmented system consists of the inverse probability of observation weighted (IPW) original moment restriction (1) and an additional conditional moment restriction that exhausts all other implications of the MAR setup. This general equivalence result, while implicit in the form of the efficient influence function (3), is apparently new. It provides fresh intuitions for several "paradoxes" in the missing data literature, including the well known results that projection onto, or weighting by the inverse of, a known propensity score results in inefficient estimates (e.g., Hahn (1998), Hirano, Imbens, and Ridder (2003)), that smoothness and exclusion priors on the propensity score do not increase the precision with which $\beta_0$ can be estimated (Robins, Hsieh, and Newey (1995), Robins and Rotnitzky (1995), Hahn (1998, 2004)), and that weighting by a nonparametric estimate of the propensity score results in an efficient estimator (Hirano, Imbens, and Ridder (2003); cf. Hahn (1998), Wooldridge (2007), Prokhorov and Schmidt (2009), Hitomi, Nishiyama, and Okui (2008)).

This paper also analyzes the effect of imposing additional semiparametric restrictions on the conditional expectation function (CEF) $q(x;\beta)=E[\psi(Z,\beta)|x]$. If $\psi(Z,\beta)=Y_1-\beta$, as when the target parameter is $\beta_0=E[Y_1]$, then such restrictions may arise from prior information on the form of $E[Y_1|x]$. Such restrictions may arise in other settings as well.
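Specializing to $\psi(Z,\beta)=Y_1-\beta$ (so that $\Gamma_0=-1$), the bound (2) reduces to $\Lambda_0=E[V(Y_1|X)/p_0(X)+(E[Y_1|X]-\beta_0)^2]$, which can be evaluated by plug-in once the nuisance functions are known or estimated. A hedged sketch (hypothetical names; the inputs are the nuisance functions evaluated at the sample $X_i$):

```python
import numpy as np

def mar_variance_bound_mean(p, sigma2, mu, beta0):
    """Plug-in value of Lambda0 = E[sigma2(X)/p(X) + (mu(X) - beta0)^2] for
    psi(Z, beta) = Y1 - beta; with Gamma0 = -1, the asymptotic variance
    bound for estimating beta0 = E[Y1] equals Lambda0 itself."""
    return float(np.mean(sigma2 / p + (mu - beta0) ** 2))
```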
For example, if the goal is to estimate a vector of linear predictor coefficients in the presence of missing regressors, then a semiparametric model for the CEFs of the missing regressors given always observed variables generates restrictions on the form of $q(x;\beta)$ (cf. Robins, Rotnitzky, and Zhao (1994)).⁵ Formally, I consider the semiparametric model defined by restriction (1), Assumptions 1.1–1.4, and the following additional assumption.

ASSUMPTION 1.5—Functional Restriction: For $X=(X_1',X_2')'$, let $E[\psi(Z,\beta_0)|x]=q(x,\delta_0,h_0(x_2);\beta_0)$, where $q(x,\delta,h(x_2);\beta)$ is a known $K\times1$ function, $\delta$ is a $J\times1$ finite-dimensional unknown parameter, and $h(\cdot)$ is an unknown function mapping from a subset of $\mathcal X_2\subset\mathbb{R}^{\dim(X_2)}$ into $\mathcal H\subset\mathbb{R}^{P}$.

To the best of my knowledge, the variance bound for this problem—the MAR setup with "functional" restrictions—has not been previously calculated. In an innovative paper, Wang, Linton, and Härdle (2004) considered a special case of this model where $\psi(Z,\beta)=Y_1-\beta$. They imposed a partial linear structure, as in Engle, Granger, Rice, and Weiss (1986), on $E[Y_1|x]$ such that $q(x,\delta_0,h_0(x_2);\beta_0)=x_1'\delta_0+h_0(x_2)-\beta_0$. In making their variance bound calculation, they assumed that the conditional distribution of $Y_1$ given $X$ is normal with a variance that does not depend on $X$. They did not provide a bound for the general case, but conjectured that it is "very complicated" (Wang, Linton, and Härdle (2004, p. 338)). The result given below extends their work to moment condition models and general forms for $q(x,\delta,h(x_2);\beta)$ and, importantly, does not require that $\psi(Z,\beta)$ be conditionally normally distributed and/or homoscedastic.

Augmenting the MAR setup with Assumption 1.5 generates a middle ground between the fully parametric likelihood-based approaches to missing data described by Little and Rubin (2002) and those which leave $E[\psi(Z,\beta_0)|x]$ unrestricted (e.g., Cheng (1994), Hahn (1998), Hirano, Imbens, and Ridder (2003)). Likelihood-based approaches are very sensitive to misspecification (cf. Imbens (2004)), while approaches which utilize only the basic MAR setup require high dimensional smoothing, which may deleteriously affect small sample performance (cf. Wang, Linton, and Härdle (2004), Ichimura and Linton (2005)). Assumption 1.5 is generally weaker than a parametric specification for the conditional distribution of $\psi(Z,\beta_0)$ given $X$, but at the same time reduces the dimension of the nonparametric smoothing problem. Below I show how to efficiently exploit prior information on the form of $E[\psi(Z,\beta_0)|x]$. I also provide conditions under which consistent estimation of $\beta_0$ is possible even if the exploited information is incorrect.

⁵ The formation of predictive models of this type is the foundation of the imputation approach to missing data described by Little and Rubin (2002).
Section 2 reports the first result of the paper: an equivalence between the MAR setup and a particular method-of-moments problem. Equivalence, which is suggested by the form of the efficient influence function derived by Robins, Rotnitzky, and Zhao (1994), was previously noted for special cases by Newey (1994a) and Hirano, Imbens, and Ridder (2003). I discuss the connection between their results and the general result provided below. I also highlight some implications of the equivalence result for understanding various aspects of the MAR setup. Section 3 calculates the variance bound for β0 when the MAR setup is augmented by Assumption 1.5. I discuss when Assumption 1.5 is likely to be informative and also when consistent estimation is possible even if it is erroneously maintained.

2. EQUIVALENCE RESULT

Under the MAR setup, the inverse probability weighted (IPW) moment condition

$$(4)\qquad E\left[\frac{D}{p_0(X)}\,\psi(Z\,\beta_0)\right]=0$$

is valid (e.g., Hirano, Imbens, and Ridder (2003), Wooldridge (2007)). The conditional moment restriction

$$(5)\qquad E\left[\frac{D}{p_0(X)}-1\,\Big|\,X\right]=0\qquad\forall X\in\mathcal{X}$$

also holds and nonparametrically identifies p0(x). While the terminology is inexact, in what follows I call (4) the identifying moment and (5) the auxiliary moment.

Consider the case where p0(x) is known, such that (5) is truly an auxiliary moment. One efficient way to exploit the information (5) contains is, following Newey (1994a) and Brown and Newey (1998), to reduce the sampling variation in (4) by subtracting from it the fitted value associated with its regression onto the infinite-dimensional vector of unconditional moment functions implied by (5):6

$$s(Z\,\theta_0)=\frac{D}{p_0(X)}\,\psi(Z\,\beta_0)-E^*\left[\frac{D}{p_0(X)}\,\psi(Z\,\beta_0)\,\Big|\,\frac{D}{p_0(X)}-1;\,X\right]=\frac{D}{p_0(X)}\,\psi(Z\,\beta_0)-\frac{D-p_0(X)}{p_0(X)}\,q(X;\beta_0)$$
6 The notation E∗[Y|X; Z] denotes the (mean squared error minimizing) linear predictor of Y given X within a subpopulation homogeneous in Z:

$$E^*[Y|X;Z]=X'\pi(Z)\qquad \pi(Z)=E[XX'|Z]^{-1}E[XY|Z].$$

Wooldridge (1999b, Section 4) collected some useful results on conditional linear predictors. See also Newey (1990) and Brown and Newey (1998).
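The variance reduction achieved by this projection is easy to verify numerically. The following toy check, which assumes a logit propensity score and a linear CEF purely for illustration, compares the sampling variance of the raw IPW moment in (4) with that of the residual s(Z θ0) when ψ(Z β) = Y1 − β and p0 is known.

```python
# Toy check of the Newey/Brown-Newey projection: the residual s(Z, theta0)
# has smaller variance than the raw IPW moment D/p0(X) * psi(Z, beta0).
# The DGP (logit propensity, linear CEF) is assumed for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(size=n)
beta0 = 2.0                                  # target beta0 = E[Y1]
y1 = beta0 + x + rng.normal(size=n)          # q(x; beta0) = E[Y1|x] - beta0 = x
p0 = 1.0 / (1.0 + np.exp(-x))                # known propensity score
d = rng.binomial(1, p0)

psi = y1 - beta0                             # psi(Z, beta0)
q = x                                        # q(X; beta0)
ipw = d / p0 * psi                           # identifying moment (4)
s = ipw - (d - p0) / p0 * q                  # projected residual s(Z, theta0)

print(f"means of both moments are ~0: {ipw.mean():+.4f}, {s.mean():+.4f}")
print(f"var of raw IPW moment : {ipw.var():.3f}")
print(f"var of residual s     : {s.var():.3f}")  # strictly smaller
```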
That this population residual is equal to the efficient score function derived by Robins, Rotnitzky, and Zhao (1994) strongly suggests an equivalence between the generalized method-of-moments (GMM) problem defined by restrictions (4) and (5) and the MAR setup outlined above. One way to formally show this is to verify that the efficiency bounds for β0 in the two problems coincide.7 The bound for β0 under the MAR setup is given in (2) above, while under the moment problem, it is established by the following theorem.

THEOREM 2.1—GMM Equivalence: Suppose that (i) the distribution of Z has a known, finite support, (ii) there is some β0 ∈ B ⊂ R^K and ρ0 = (ρ1,...,ρL)', where ρl = p0(xl) ∈ [κ 1] for each l = 1,...,L and some 0 < κ < 1 (with X = {x1,...,xL} the known support of X), such that restrictions (4) and (5) hold, (iii) Λ0 and Im(β0) = Γ0'Λ0^{−1}Γ0 are nonsingular, and (iv) other regularity conditions hold (cf. Chamberlain (1992b, Section 2)); then Im(β0) is the Fisher information bound for β0.

All proofs are provided in the Supplemental Material (Graham (2011)). The proof of Theorem 2.1 involves only some tedious algebra and a straightforward application of Lemma 2 of Chamberlain (1987). Assuming that Z has known, finite support makes the problem fully parametric. The unknown parameters are the probabilities associated with each possible realization of Z, the values of the propensity score at each of the L mass points of the distribution of X, ρ0 = (ρ1,...,ρL)', and the parameter of interest, β0. The multinomial assumption is not apparent in the form of Im(β0), which involves only conditional expectations of certain functions of the data. This suggests that the bound holds in general, since any F0 which satisfies (4) and (5) can be arbitrarily well approximated by a multinomial distribution also satisfying the restrictions. Chamberlain (1992a, Theorem 1) demonstrated that this is indeed the case. Therefore, Im(β0)^{−1} is the maximal asymptotic precision, in the sense of Hájek’s (1972) local minimax approach to efficiency, with which β0 can be estimated when the only prior restrictions on F0 are (4) and (5). Since this variance bound coincides with (2), I conclude that (4) and (5) exhaust all of the useful prior restrictions implied by the MAR setup.8

7 An alternative approach to showing equivalency would involve verifying Newey’s (2004) moment spanning condition for efficiency.

8 A referee made the insightful observation that the moment condition model (4) and (5) and the MAR setup are equivalent in the stronger sense that they impose identical restrictions on the observed data. This, of course, also implies that they contain identical information on β0. The complete data vector is given by (D X' Y1)', with only (D X' Y)' = (D X' DY1)' observed. Since Y1 is not observed whenever D = 0, we are free to specify its conditional distribution given X and D = 0 as desired. Choosing Y1|X D = 0 ∼ Y1|X D = 1 ensures conditional independence (Assumption 1.3). Manipulating the identifying moment (4), we then have, writing ψ(Z β0) = ψ(X Y1 β0),

$$E\left[\frac{D}{p_0(X)}\,\psi(X\,Y\,\beta_0)\right]=E\left[p_0(X)\,E\left[\frac{\psi(X\,DY_1\,\beta_0)}{p_0(X)}\,\Big|\,X\,D=1\right]\right]=E\big[E[\psi(X\,DY_1\,\beta_0)|X\,D=1]\big]=E\big[E[\psi(X\,Y_1\,\beta_0)|X]\big]$$

which yields (1). Finally, the auxiliary restriction (5) ties down the conditional distribution of D given X and ensures Assumption 1.4 is satisfied. I thank Michael Jansson for several helpful discussions on this point.

The connection between semiparametrically efficient estimation of moment condition models with missing data and augmented systems of moment restrictions has been noted previously for the special case of data missing completely at random (MCAR).
In that case, Assumptions 1.1–1.4 hold with p0(X) equal to a (perhaps known) constant. Newey (1994a) showed that an efficient estimate of β0 can be based on the pair of moment restrictions

$$E[D\,\psi(Z\,\beta_0)]=0\qquad \mathbb{C}(D\,q(X;\beta_0))=0$$

with q(X; β) as defined above. Hirano, Imbens, and Ridder (2003) discussed a related example with X binary and the data also MCAR. In their example, efficient estimation is possible with only a finite number of unconditional moment restrictions. Theorem 2.1 provides a formal generalization of the Newey (1994a) and Hirano, Imbens, and Ridder (2003) examples to the missing at random (MAR) case.

The method-of-moments formulation of the MAR setup provides a useful framework for understanding several apparent paradoxes found in the missing data literature. As a simple example, consider Hahn’s (1998, pp. 324–325) result that projection onto a known propensity score may be harmful for estimation of β0 = E[Y1]. Formally, he showed that, for p0(x) = Q0 constant in x and known, the complete-case estimator, β̂cc = Σ_{i=1}^N DiY1i / Σ_{i=1}^N Di, while consistent, is inefficient. Observe that for the constant propensity score case, β̂cc is the sample analog of the population solution to (4). It consequently makes no use of any information contained in the auxiliary moment (5). However, that moment will be informative for β0 if q(x; β0) = E[Y1|x] − β0 varies with x, consistent with Hahn’s (1998) finding that the efficiency loss associated with β̂cc is proportional to V(q(X; β0)). Similar reasoning explains why weighting by the inverse of the known propensity score is generally inefficient (cf. Robins, Rotnitzky, and Zhao (1994), Hirano, Imbens, and Ridder (2003), Wooldridge (2007)): the known-weights estimator ignores the information contained in (5).

That smoothness and exclusion priors on the propensity score do not lower the variance bound also has a GMM interpretation. Consider the case where the propensity score belongs to a parametric family p(X; η0). If η0 is known, then an efficient GMM estimator based on (4) and (5) is given by the solution
to

$$\frac{1}{N}\sum_{i=1}^{N}s(\eta_0\,\hat q\,\hat\beta)=\frac{1}{N}\sum_{i=1}^{N}\left[\frac{D_i}{p(X_i;\eta_0)}\,\psi(Z_i\,\hat\beta)-\frac{D_i-p(X_i;\eta_0)}{p(X_i;\eta_0)}\,\hat q(X_i;\hat\beta)\right]=0$$

with q̂(x; β) a consistent nonparametric estimate of E[ψ(Z β0)|x]. Now consider the effect of replacing η0 with the consistent estimate η̂. From Newey and McFadden (1994, Theorem 6.2), this replacement does not change the first-order asymptotic sampling distribution of β̂ because E[∂s(η0 q0 β0)/∂η'] = 0. Furthermore, if the known propensity score is replaced by a consistent nonparametric estimate, p̂(x), then the sampling distribution of β̂ is also unaffected (Newey (1994b, Proposition 3, p. 1360)). Since the M-estimate of β0 based on its efficient score function has the same asymptotic sampling distribution whether the propensity score is set equal to the truth or, instead, to a noisy, but consistent, estimate, knowledge of its form cannot increase the precision with which β0 can be estimated.

Another intuition for the redundancy of knowledge of the propensity score can be found by inspecting the information bound for the multinomial problem. Under the conditions of Theorem 2.1, calculations provided in the Supplemental Material (Graham (2011)) imply that the GMM estimates of β0 and ρ0 (recall that ρ0 contains the values of the propensity score at each of the mass points of the distribution of X) have an asymptotic sampling distribution of

$$\sqrt{N}\begin{pmatrix}\hat\rho-\rho_0\\ \hat\beta-\beta_0\end{pmatrix}\xrightarrow{\,D\,}\mathcal{N}\left(0,\begin{pmatrix}I_m(\rho_0)^{-1}&0\\ 0&I_m(\beta_0)^{-1}\end{pmatrix}\right)$$

with Im(β0) as defined in (2) and Im(ρ0) as defined in the Supplemental Material. As is well known, under block diagonality, sampling error in ρ̂ does not affect, at least to first order, the asymptotic sampling properties of β̂. While block diagonality is formally only a feature of the multinomial problem, the result nonetheless provides another useful intuition for understanding why prior knowledge of the propensity score is not valuable asymptotically.

Finally, the redundancy of knowledge of the propensity score, combined with the structure of the equivalent GMM problem, suggests why the IPW estimator based on a nonparametric estimate of the propensity score is semiparametrically efficient (Hirano, Imbens, and Ridder (2003)): when a nonparametric estimate of the propensity score is used, the sample analogs of both (4) and (5) are satisfied. In contrast, the IPW estimator based on a parametric estimate of the propensity score will only satisfy a finite number of the moment conditions
implied by (5); hence, while it will be more efficient than the estimator that weights by the true propensity score (e.g., Wooldridge (2007)), it will be less efficient than the one proposed by Hirano, Imbens, and Ridder (2003).

3. SEMIPARAMETRIC FUNCTIONAL RESTRICTIONS

Consider the MAR setup augmented by Assumption 1.5. To the best of my knowledge, the maximal asymptotic precision with which β0 can be estimated in this model has not been previously characterized. To calculate the bound for this problem, I first consider the conditional moment problem defined by (4), (5), and
$$(6)\qquad E\big[\rho(Z\,\delta_0\,h_0(X_2);\beta_0)\,\big|\,X\big]=0$$

with ρ(Z δ0 h0(X2); β0) = ψ(Z β0) − q(x δ0 h0(x2); β0). I apply Chamberlain’s (1992a) approach to this problem to calculate a variance bound for β0. I then show that this bound coincides with the semiparametric efficiency bound for the problem defined by restriction (1) and Assumptions 1.1–1.5 using the methods of Bickel, Klaassen, Ritov, and Wellner (1993). The value of first considering the conditional moment problem is that it provides a conjecture for the form of the efficient influence function, thereby sidestepping the need to directly calculate what is evidently a complicated projection.

To present these results, I begin by letting

$$q_0(X)=q(X\,\delta_0\,h_0(X_2);\beta_0),\qquad \rho(Z;\beta_0)=\psi(Z\,\beta_0)-q_0(X),$$

$$\underbrace{\Upsilon_0^{h}(X_2)}_{P\times P}=E\left[D\left(\frac{\partial q_0(X)}{\partial h'}\right)'\Sigma_0(X)^{-1}\frac{\partial q_0(X)}{\partial h'}\,\Big|\,X_2\right],\qquad \underbrace{\Upsilon_0^{h\delta}(X_2)}_{P\times J}=E\left[D\left(\frac{\partial q_0(X)}{\partial h'}\right)'\Sigma_0(X)^{-1}\frac{\partial q_0(X)}{\partial\delta'}\,\Big|\,X_2\right],$$

$$\underbrace{G_0(X)}_{K\times J}=\frac{\partial q_0(X)}{\partial\delta'}-\frac{\partial q_0(X)}{\partial h'}\,\Upsilon_0^{h}(X_2)^{-1}\Upsilon_0^{h\delta}(X_2),\qquad \underbrace{H_0(X_2)}_{K\times P}=E\left[\frac{\partial q_0(X)}{\partial h'}\,\Big|\,X_2\right],$$

$$\underbrace{I_m^{f}(\delta_0)}_{J\times J}=E\big[D\,G_0(X)'\,\Sigma_0(X)^{-1}G_0(X)\big],$$

and

$$\underbrace{\Xi_0}_{K\times K}=E\big[H_0(X_2)\Upsilon_0^{h}(X_2)^{-1}H_0(X_2)'\big]+E[G_0(X)]\,I_m^{f}(\delta_0)^{-1}E[G_0(X)]'+E\big[q_0(X)q_0(X)'\big].$$
The variance bound for β0 in the conditional moment problem defined by (4), (5), and (6) is established by the following theorem.

THEOREM 3.1—Efficiency With Functional Restrictions, Part 1: Suppose that (i) the distribution of Z has a known, finite support, (ii) there is some β0 ∈ B ⊂ R^K, ρ0 = (ρ1,...,ρL)', where ρl = p0(xl) ∈ [κ 1] for each l = 1,...,L and some 0 < κ < 1 (with X = {x1,...,xL} the known support of X), δ0 ∈ D ⊂ R^J, and h0(x2m) = λ0m ∈ L ⊂ R^P for each m = 1,...,M (with X2 = {x21,...,x2M} the known support of X2), such that restrictions (4), (5), and (6) hold, (iii) Ξ0 and Imf(β0) = Γ0'Ξ0^{−1}Γ0 are nonsingular, and (iv) other regularity conditions hold (cf. Chamberlain (1992b, Section 2)); then Imf(β0) is the Fisher information bound for β0.

Note that if X1 = ∅ and X2 = X, such that E[ψ(Z β0)|x] is unrestricted, then Imf(β0) simplifies to Im(β0) above. Therefore, Theorem 2.1 may be viewed as a special case of Theorem 3.1. As with Theorem 2.1, the validity of the bound for the non-multinomial case follows from Theorem 1 of Chamberlain (1992a). The form of Ξ0 suggests a candidate efficient influence function of

$$(7)\qquad \phi_\beta^{f}(Z\,\eta_0\,\beta_0)=\Gamma_0^{-1}\left[D\,H_0(X_2)\Upsilon_0^{h}(X_2)^{-1}\left(\frac{\partial q_0(X)}{\partial h'}\right)'\Sigma_0(X)^{-1}\rho(Z;\beta_0)+D\,E[G_0(X)]\,I_m^{f}(\delta_0)^{-1}G_0(X)'\Sigma_0(X)^{-1}\rho(Z;\beta_0)+q(X;\beta_0)\right]$$

where η = (h δ H Υh Υhδ Σ G)' with G = E[G(X)]. Note that the three components of (7) are mutually uncorrelated. The next theorem verifies that (7) is the efficient influence function under the MAR setup with Assumption 1.5 also imposed.

THEOREM 3.2—Efficiency With Functional Restrictions, Part 2: The semiparametric efficiency bound for β0 in the problem defined by restriction (1) and Assumptions 1.1–1.5 is equal to Imf(β0), with an efficient influence function of φfβ(Z η0 β0).

Theorem 3.1 implies that restriction (6) can be exploited to more efficiently estimate β0. However, its use also carries risk: if false, yet nevertheless erroneously maintained by the data analyst, an inconsistent estimate of β0 may result. This tension between efficiency and robustness is formalized by the next two propositions, which together provide guidance as to whether prior information of the type given by Assumption 1.5 should be utilized in practice.
The first proposition characterizes the magnitude of the efficiency gain associated with correctly exploiting Assumption 1.5. Define

$$\underbrace{\xi_1(Z\,\eta_0\,\beta_0)}_{K\times 1}=D\left[\frac{I_K}{p_0(X)}-H_0(X_2)\,\Upsilon_0^{h}(X_2)^{-1}\left(\frac{\partial q_0(X)}{\partial h'}\right)'\Sigma_0(X)^{-1}\right]\rho(Z;\beta_0),$$

$$\underbrace{\xi_2(Z\,\eta_0\,\beta_0)}_{J\times 1}=D\,G_0(X)'\,\Sigma_0(X)^{-1}\,\rho(Z;\beta_0).$$
PROPOSITION 3.1: Under (1) and Assumptions 1.1–1.5,

$$(8)\qquad I_m(\beta_0)^{-1}-I_m^{f}(\beta_0)^{-1}=\Gamma_0^{-1}\big(\mathbb{V}(\xi_1)-\mathbb{C}(\xi_1\,\xi_2)\,\mathbb{V}(\xi_2)^{-1}\,\mathbb{C}(\xi_1\,\xi_2)'\big)\Gamma_0^{-1\prime}\ \ge\ 0.$$
Equation (8) has an intuitive interpretation. The first term in parentheses,

$$\mathbb{V}(\xi_1)=E\left[\frac{\Sigma_0(X)}{p_0(X)}-H_0(X_2)\,\Upsilon_0^{h}(X_2)^{-1}H_0(X_2)'\right]$$

equals the asymptotic variance reduction that would be available by additionally imposing restriction (6) if δ0 were known. The additional (asymptotic) sampling uncertainty induced by having to estimate δ0 is captured by the second term

$$\mathbb{C}(\xi_1\,\xi_2)\,\mathbb{V}(\xi_2)^{-1}\,\mathbb{C}(\xi_1\,\xi_2)'=E[G_0(X)]\,I_m^{f}(\delta_0)^{-1}\,E[G_0(X)]'$$

where Imf(δ0) is the information bound for δ0 in the semiparametric regression problem (cf. Chamberlain (1992a)):

$$D\,\psi(Z\,\beta_0)=D\,q(X\,\delta_0\,h_0(X_2);\beta_0)+D\,V\qquad E[V|X\,D=1]=E[V|X]=0.$$

The more precisely determined is δ0, the greater the efficiency gain from imposing Assumption 1.5. The size of E[G0(X)] also governs the magnitude of the efficiency gain. Conditional on X2, (∂q0(X)/∂h')Υ0h(X2)^{−1}Υ0hδ(X2) is a weighted
linear predictor of ∂q0(X)/∂δ' given ∂q0(X)/∂h' in the D = 1 subpopulation. That is,9
$$\frac{\partial q_0(X)}{\partial h'}\,\Upsilon_0^{h}(X_2)^{-1}\Upsilon_0^{h\delta}(X_2)=E^*_{\Sigma_0(X)}\left[\frac{\partial q_0(X)}{\partial\delta'}\,\Big|\,\frac{\partial q_0(X)}{\partial h'};\,X_2\,D=1\right]$$

and hence G0(X) is equal to the difference between ∂q0(X)/∂δ' and its predicted value based on a weighted least squares regression in the D = 1 subpopulation. The average of these differences, E[G0(X)], is taken across the entire population; it will be large in absolute value when the distribution of X1 conditional on X2 differs in the D = 1 versus D = 0 subpopulations. This will occur whenever X1 is highly predictive of missingness (conditional on X2). In such situations, the efficiency costs of sampling uncertainty in δ̂ are greater (relative to the known-δ0 case) because estimation of β0 requires greater levels of extrapolation.

An example clarifies the discussion given above. Assume that ψ(Z β0) = Y1 − β0 with
$$q(X\,\delta_0\,h_0(X_2);\beta_0)=X_1'\delta_0+h_0(X_2)-\beta_0.$$

This is the model considered by Wang, Linton, and Härdle (2004). In addition to being of importance in its own right, it provides insight into the program evaluation problem (where the means of two missing outcomes, as opposed to just one, need to be estimated). The Wang, Linton, and Härdle (2004) prior restriction includes the condition that V(Y1|X) = σ1² is constant in X. For clarity of exposition, I also assume homoscedasticity holds, but that this fact is not known by the econometrician. Let e0(X2) = E[p(X)|X2] = Pr(D = 1|X2); specializing the general results given above to this model and evaluating (8) gives

$$I_m(\beta_0)^{-1}-I_m^{f}(\beta_0)^{-1}=\sigma_1^2\left\{E\left[E\left[\frac{1}{p(X)}\,\Big|\,X_2\right]-\frac{1}{e_0(X_2)}\right]-E\big[E[X_1|X_2]-E[X_1|X_2\,D=1]\big]'\,E\big[e_0(X_2)\mathbb{V}(X_1|X_2\,D=1)\big]^{-1}E\big[E[X_1|X_2]-E[X_1|X_2\,D=1]\big]\right\}\ \ge\ 0,$$
9 The notation E∗ω(X)[Y|X; Z D = 1] denotes the weighted conditional linear predictor

$$E^*_{\omega(X)}[Y|X;Z\,D=1]=X'\,E\big[DX\omega(X)^{-1}X'\,\big|\,Z\big]^{-1}E\big[DX\omega(X)^{-1}Y\,\big|\,Z\big].$$

This is the population analog of the fitted value from a generalized least squares regression in a subpopulation homogeneous in Z and with D = 1.
which shows that the efficiency gain associated with correctly exploiting Assumption 1.5 reflects three forces. First, substantial convexity in p(X)^{−1}, which will occur when overlap is limited, increases the efficiency gain.10 This gain reflects the semiparametric restriction allowing for extrapolation in the presence of conditional covariate imbalance. The next two effects reflect the fact that the first source of efficiency gain is partially nullified by having to estimate δ0. If X1 varies strongly given X2 in the D = 1 subpopulation, then the information for δ0 is large, which, in turn, increases the precision with which β0 may be estimated. On the other hand, if there are large (average) differences in the conditional mean of X1 given X2 across the D = 1 and D = 0 subpopulations, then estimating β0 requires greater extrapolation, which, when δ0 is unknown, decreases the precision with which it may be estimated.

Proposition 3.1 provides insight into when correctly imposing Assumption 1.5 is likely to be informative. A related question concerns the consequences of misspecifying the form of q(X δ h(X2); β). Under such misspecification, the conditional moment restriction (6) will be invalid. Nevertheless, the efficient score function may continue to have an expectation of zero at β = β0. This suggests that an M-estimator based on an estimate of the efficient score function may be consistent even if Assumption 1.5 does not hold. The following proposition provides one set of conditions under which such a robustness property holds.

PROPOSITION 3.2—Double Robustness: Let q∗(X) = q(X δ∗ h∗(X2); β0) with δ∗ and h∗(X2) arbitrary, let ρ∗(Z; β0) = ψ(Z β0) − q∗(X), and redefine Σ0(X) = V(ρ∗(Z; β0)|X), H0(X2) = E[∂q∗(X)/∂h'|X2], and Υ0h(X2), Υ0hδ(X2), and G0 similarly. Under restriction (1) and Assumptions 1.1–1.4, φfβ(Z η β0) is mean zero if either (i) β = β0, η = η0, and Assumption 1.5 holds or (ii) β = β0, η = η∗ = (h∗ δ∗ H0 Υ0h Υ0hδ Σ0 G0), and (a) p0(x) = e0(x2) for all x ∈ X, (b) Σ0(x) = Θ0(x2) for all x ∈ X, and (c) at least one element of h∗(x2) enters linearly in each row of q∗(X).

Note that there is a tension between the robustness property of Proposition 3.2 and the efficiency gain associated with Assumption 1.5. Mean-zeroness of φfβ(Z η β0) under misspecification requires that those variables entering q(X δ h(X2); β0) parametrically do not affect either the probability of missingness or the conditional variance of the moment function (1). Under such conditions, an estimator based on φfβ(Z η β0) will perform no better, at least asymptotically, than one based on the efficient score function derived by Robins, Rotnitzky, and Zhao (1994). In particular, we have the following implication.

10 When some subpopulations have low propensity scores, E[1/p(X)|X2] − 1/E[p(X)|X2] will tend to be large (Jensen’s inequality).
COROLLARY 3.1: Under the conditions of part (ii) of Proposition 3.2,

$$I_m(\beta_0)^{-1}-I_m^{f}(\beta_0)^{-1}=0.$$

Collectively, Propositions 3.1 and 3.2 suggest that estimation while maintaining Assumption 1.5 will be most valuable when the econometrician is highly confident in the imposed semiparametric restriction.

REFERENCES

ANGRIST, J. D., AND A. B. KRUEGER (1992): “The Effect of Age at School Entry on Educational Attainment: An Application of Instrumental Variables With Moments From Two Samples,” Journal of the American Statistical Association, 87, 328–336. [438]
BANG, H., AND J. M. ROBINS (2005): “Doubly Robust Estimation in Missing Data and Causal Inference Models,” Biometrics, 61, 962–972. [438]
BICKEL, P. J., C. A. J. KLAASSEN, Y. RITOV, AND J. A. WELLNER (1993): Efficient and Adaptive Estimation for Semiparametric Models. New York: Springer-Verlag. [445]
BROWN, B. W., AND W. K. NEWEY (1998): “Efficient Semiparametric Estimation of Expectations,” Econometrica, 66, 453–464. [441]
CHAMBERLAIN, G. (1987): “Asymptotic Efficiency in Estimation With Conditional Moment Restrictions,” Journal of Econometrics, 34, 305–334. [437,442]
——— (1992a): “Efficiency Bounds for Semiparametric Regression,” Econometrica, 60, 567–596. [442,445-447]
——— (1992b): “Comment: Sequential Moment Restrictions in Panel Data,” Journal of Business & Economic Statistics, 10, 20–26. [442,446]
CHEN, X., H. HONG, AND E. T. TAMER (2005): “Measurement Error Models With Auxiliary Data,” Review of Economic Studies, 72, 343–366. [438]
CHEN, X., H. HONG, AND A. TAROZZI (2004): “Semiparametric Efficiency in GMM Models of Nonclassical Measurement Errors, Missing Data and Treatment Effects,” Discussion Paper 1644, Cowles Foundation. [439]
——— (2008): “Semiparametric Efficiency in GMM Models With Auxiliary Data,” The Annals of Statistics, 36, 808–843. [438]
CHENG, P. E. (1994): “Nonparametric Estimation of Mean Functionals With Data Missing at Random,” Journal of the American Statistical Association, 89, 81–87. [438,440]
ENGLE, R. F., C. W. J. GRANGER, J. RICE, AND A. WEISS (1986): “Semiparametric Estimates of the Relation Between Weather and Electricity Sales,” Journal of the American Statistical Association, 81, 310–320. [440]
GRAHAM, B. S. (2011): “Supplement to ‘Efficiency Bounds for Missing Data Models With Semiparametric Restrictions’,” Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/7379_Proofs.pdf. [442,444]
GRAHAM, B. S., C. PINTO, AND D. EGEL (2010): “Inverse Probability Tilting and Missing Data Problems,” Working Paper 13981, NBER. [439]
HAHN, J. (1998): “On the Role of the Propensity Score in Efficient Semiparametric Estimation of Average Treatment Effects,” Econometrica, 66, 315–331. [438-440,443]
——— (2004): “Functional Restriction and Efficiency in Causal Inference,” Review of Economics and Statistics, 86, 73–76. [439]
HÁJEK, J. (1972): “Local Asymptotic Minimax and Admissibility in Estimation,” in Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, ed. by L. M. Le Cam, J. Neyman, and E. L. Scott. Berkeley: University of California Press, 175–194. [442]
HIRANO, K., G. W. IMBENS, AND G. RIDDER (2003): “Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score,” Econometrica, 71, 1161–1189. [438-441,443-445]
HITOMI, K., Y. NISHIYAMA, AND R. OKUI (2008): “A Puzzling Phenomenon in Semiparametric Estimation Problems With Infinite-Dimensional Nuisance Parameters,” Econometric Theory, 24, 1717–1728. [439]
ICHIMURA, H., AND O. LINTON (2005): “Asymptotic Expansions for Some Semiparametric Program Evaluation Estimators,” in Identification and Inference for Econometric Models: Essays in Honor of Thomas Rothenberg, ed. by D. W. K. Andrews and J. H. Stock. Cambridge: Cambridge University Press, 149–170. [440]
IMBENS, G. W. (2004): “Nonparametric Estimation of Average Treatment Effects Under Exogeneity: A Review,” Review of Economics and Statistics, 86, 4–29. [438,440]
IMBENS, G. W., W. K. NEWEY, AND G. RIDDER (2005): “Mean-Square-Error Calculations for Average Treatment Effects,” Working Paper 05.34, IEPR. [438]
LITTLE, R. J. A., AND D. B. RUBIN (2002): Statistical Analysis With Missing Data. Hoboken, NJ: Wiley. [440]
NEWEY, W. K. (1990): “Semiparametric Efficiency Bounds,” Journal of Applied Econometrics, 5, 99–135. [441]
——— (1994a): “Series Estimation of Regression Functionals,” Econometric Theory, 10, 1–28. [441,443]
——— (1994b): “The Asymptotic Variance of Semiparametric Estimators,” Econometrica, 62, 1349–1382. [444]
——— (2004): “Efficient Semiparametric Estimation via Moment Restrictions,” Econometrica, 72, 1877–1897. [442]
NEWEY, W. K., AND D. MCFADDEN (1994): “Large Sample Estimation and Hypothesis Testing,” in Handbook of Econometrics, Vol. 4, ed. by R. F. Engle and D. L. McFadden. Amsterdam: North-Holland, 2111–2245. [444]
PROKHOROV, A., AND P. J. SCHMIDT (2009): “GMM Redundancy Results for General Missing Data Problems,” Journal of Econometrics, 151, 47–55. [439]
ROBINS, J. M., AND A. ROTNITZKY (1995): “Semiparametric Efficiency in Multivariate Regression Models,” Journal of the American Statistical Association, 90, 122–129. [438,439]
ROBINS, J. M., F. HSIEH, AND W. NEWEY (1995): “Semiparametric Efficient Estimation of a Conditional Density Function With Missing or Mismeasured Covariates,” Journal of the Royal Statistical Society, Ser. B, 57, 409–424. [438,439]
ROBINS, J. M., A. ROTNITZKY, AND L. P. ZHAO (1994): “Estimation of Regression Coefficients When Some Regressors Are Not Always Observed,” Journal of the American Statistical Association, 89, 846–866. [438-443,449]
——— (1995): “Analysis of Semiparametric Regression Models for Repeated Outcomes in the Presence of Missing Data,” Journal of the American Statistical Association, 90, 106–121. [438]
SCHARFSTEIN, D. O., A. ROTNITZKY, AND J. M. ROBINS (1999): “Rejoinder,” Journal of the American Statistical Association, 94, 1135–1146. [438]
TSIATIS, A. A. (2006): Semiparametric Theory and Missing Data. New York: Springer. [438]
WANG, Q., O. LINTON, AND W. HÄRDLE (2004): “Semiparametric Regression Analysis With Missing Response at Random,” Journal of the American Statistical Association, 99, 334–345. [440,448]
WOOLDRIDGE, J. M. (1999a): “Asymptotic Properties of Weighted M-Estimators for Variable Probability Samples,” Econometrica, 67, 1385–1406. [438]
——— (1999b): “Distribution-Free Estimation of Some Nonlinear Panel Data Models,” Journal of Econometrics, 90, 77–97. [441]
——— (2002): “Inverse Probability Weighted M-Estimators for Sample Selection, Attrition and Stratification,” Portuguese Economic Journal, 1, 117–139. [438]
——— (2007): “Inverse Probability Weighted Estimation for General Missing Data Problems,” Journal of Econometrics, 141, 1281–1301. [439,441,443,445]
Dept. of Economics, New York University, 19 West 4th Street 6FL, New York, NY 10012, U.S.A. and National Bureau of Economic Research; [email protected].

Manuscript received August, 2007; final revision received June, 2010.
Econometrica, Vol. 79, No. 2 (March, 2011), 453–497
THE MODEL CONFIDENCE SET

BY PETER R. HANSEN, ASGER LUNDE, AND JAMES M. NASON1

This paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS acknowledges the limitations of the data, such that uninformative data yield a MCS with many models, whereas informative data yield a MCS with only a few models. The MCS procedure does not assume that a particular model is the true model; in fact, the MCS procedure can be used to compare more general objects, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine the MCS of the best regression in terms of in-sample likelihood criteria.

KEYWORDS: Model confidence set, model selection, forecasting, multiple comparisons.

1 The authors thank Joe Romano, Barbara Rossi, Jim Stock, Michael Wolf, and seminar participants at several institutions and the NBER Summer Institute for valuable comments, and Thomas Trimbur for sharing his code for the Baxter–King filter. The Ox language of Doornik (2006) was used to perform the calculations reported here. The first two authors are grateful for financial support from the Danish Research Agency, Grant 24-00-0363, and thank the Federal Reserve Bank of Atlanta for its support and hospitality during several visits. The views in this paper should not be attributed to either the Federal Reserve Bank of Philadelphia or the Federal Reserve System, or any of its staff. The Center for Research in Econometric Analysis of Time Series (CREATES) is a research center at Aarhus University funded by the Danish National Research Foundation.
1. INTRODUCTION

ECONOMETRICIANS OFTEN FACE a situation where several models or methods are available for a particular empirical problem. A relevant question is, “Which is the best?” This question is onerous for most data to answer, especially when the set of competing alternatives is large. Many applications will not yield a single model that significantly dominates all competitors because the data are not sufficiently informative to give an unequivocal answer to this question. Nonetheless, it is possible to reduce the set of models to a smaller set of models—a model confidence set—that contains the best model with a given level of confidence.

The objective of the model confidence set (MCS) procedure is to determine the set of models, M∗, that consists of the best model(s) from a collection of models, M0, where “best” is defined in terms of a user-specified criterion. The MCS procedure yields a model confidence set, M̂∗, that is a collection of models built to contain the best models with a given level of confidence. The process of winnowing models out of M0 relies on sample information about
the relative performances of the models in M0. This sample information drives the MCS to create a random data-dependent set of models, M̂∗. The set M̂∗ includes the best model(s) with a certain probability in the same sense that a confidence interval covers a population parameter.

An attractive feature of the MCS approach is that it acknowledges the limitations of the data. Informative data will result in a MCS that contains only the best model. Less informative data make it difficult to distinguish between models and may result in a MCS that contains several (or possibly all) models. Thus, the MCS differs from extant model selection criteria that choose a single model without regard to the information content of the data. Another advantage is that the MCS procedure makes it possible to make statements about significance that are valid in the traditional sense—a property that is not satisfied by the commonly used approach of reporting p-values from multiple pairwise comparisons. Another attractive feature of the MCS procedure is that it allows for the possibility that more than one model can be the best, in which case M∗ contains more than a single model.

The contributions of this paper can be summarized as follows: First, we introduce a model confidence set procedure and establish its theoretical properties. Second, we propose a practical bootstrap implementation of the MCS procedure for a set of problems that includes comparisons of forecasting models evaluated out of sample and regression models evaluated in sample. This implementation is particularly useful when the number of objects to be compared is large. Third, the finite sample properties of the bootstrap MCS procedure are analyzed in simulation studies. Fourth, we apply the MCS procedure to two empirical applications. We revisit the out-of-sample prediction problem of Stock and Watson (1999) and construct MCSs for their inflation forecasts. We also build a MCS for Taylor rule regressions using three likelihood criteria that include the Akaike information criterion (AIC) and Bayesian information criterion (BIC).

1.1. Theory of Model Confidence Sets

We do not treat models as sacred objects; neither do we assume that a particular model represents the true data generating process. Models are evaluated in terms of a user-specified criterion function. Consequently, the “best” model is unlikely to be replicated for all criteria. Also, we use the term “models” loosely. It can refer to econometric models, competing forecasts, or alternatives that need not involve any modelling of data, such as trading rules. So the MCS procedure is not specific to comparisons of models. For example, one could construct a MCS for a set of different “treatments” by comparing sample estimates of the corresponding treatment effects or construct a MCS for trading rules with the best Sharpe ratio.

A MCS is constructed from a collection of competing objects, M0, and a criterion for evaluating these objects empirically. The MCS procedure is based
on an equivalence test, δM, and an elimination rule, eM. The equivalence test is applied to the set M = M0. If δM is rejected, there is evidence that the objects in M are not equally “good” and eM is used to eliminate from M an object with poor sample performance. This procedure is repeated until δM is “accepted” and the MCS is then defined by the set of “surviving” objects. By using the same significance level, α, in all tests, the procedure guarantees that lim_{n→∞} P(M∗ ⊂ M̂∗_{1−α}) ≥ 1 − α; in the case where M∗ consists of one object, we have the stronger result that lim_{n→∞} P(M∗ = M̂∗_{1−α}) = 1. The MCS procedure also yields p-values for each of the objects. For a given object, i ∈ M0, the MCS p-value, p̂i, is the threshold at which i ∈ M̂∗_{1−α} if and only if p̂i ≥ α. Thus, an object with a small MCS p-value makes it unlikely that it is one of the best alternatives in M0.

The idea behind the sequential testing procedure that we use to construct the MCS may be recognized by readers who are familiar with the trace-test procedure for selecting the rank of a matrix. This procedure involves a sequence of trace tests (see Anderson (1984)), and is commonly used to select the number of cointegration relations within a vector autoregressive model (see Johansen (1988)). The MCS procedure determines the number of superior models in the same way the trace test is used to select the number of cointegration relations. A key difference is that the trace-test procedure has a natural ordering in which the hypotheses are to be tested, whereas the MCS procedure requires a carefully chosen elimination rule to define the sequence of tests. We discuss this issue and related testing procedures in Section 4.

1.2. Bootstrap Implementation and Simulation Results

We propose a bootstrap implementation of the MCS procedure that is convenient when the number of models is large. The bootstrap implementation is simple to use in practice and avoids the need to estimate a high-dimensional covariance matrix. White (2000b) is the source of many of the ideas that underlie our bootstrap implementation. We study the properties of our bootstrap implementation of the MCS procedure through simulation experiments. The results are very encouraging because the best model does end up in the MCS at the appropriate frequency and the MCS procedure does have power to weed out all the poor models when the data contain sufficient information.

1.3. Empirical Analysis of Inflation Forecasts and Taylor Rules

We apply the MCS to two empirical problems. First, the MCS is used to study the inflation forecasting problem. The choice of an inflation forecasting model is an especially important issue for central banks, treasuries, and private sector agents. The 50-plus year tradition of the Phillips curve suggests it remains an effective vehicle for the task of inflation forecasting. Stock and
Watson (1999) made the case that “a reasonably specified Phillips curve is the best tool for forecasting inflation”; also see Gordon (1997), Staiger, Stock, and Watson (1997b), and Stock and Watson (2003). Atkeson and Ohanian (2001) concluded that this is not the case because they found it is difficult for any of the Phillips curves they studied to beat a simple no-change forecast in out-of-sample point prediction.

Our first empirical application is based on the Stock and Watson (1999) data set. Several interesting results come out of our analysis. We partition the evaluation period into the same two subsamples as did Stock and Watson (1999). The earlier subsample covers a period with persistent and volatile inflation: this sample is expected to be relatively informative about which models might be the best forecasting models. Indeed, the MCS consists of relatively few models, so the MCS proves to be effective at purging the inferior forecasts. The later subsample is a period in which inflation is relatively smooth and exhibits little volatility. This yields a sample that contains relatively little information about which of the models deliver the best forecasts. However, Stock and Watson (1999) reported that a no-change forecast, which uses last month’s inflation rate as the point forecast, is inferior in both subsamples. In spite of the relatively low degree of information in the more recent subsample, we are able to conclude that this no-change forecast is indeed inferior to other forecasts. We come to this conclusion because the Stock and Watson no-change forecast never ends up in the MCS.

Next, we add the no-change forecast employed by Atkeson and Ohanian (2001) to the comparison. Their forecast uses the past year’s inflation rate as the point prediction rather than month-over-month inflation. This turns out to matter for the second subsample, because the no-change (year) forecast has the smallest mean square prediction error (MSPE) of all forecasts. This enables us to reconcile Stock and Watson (1999) with Atkeson and Ohanian (2001) by showing that their different definitions of the benchmark forecast—no-change (month) and no-change (year), respectively—explain the different conclusions they reach about these forecasts.

Our second empirical example shows that the MCS approach is a useful tool for in-sample evaluation of regression models. This example applies the MCS to choosing from a set of competing (nominal) interest rate rule regressions on a quarterly U.S. sample that runs from 1979 through 2006. These regressions fall into the class of interest rate rules promoted by Taylor (1993). His (Taylor’s) rule forms the basis of a class of monetary policy rules that gauge the success of monetary policy at keeping inflation low and the real economy close to trend. The MCS does not reveal which Taylor rule regressions best describe actual U.S. monetary policy; neither does it identify the best policy rule. Rather, the MCS selects the Taylor rule regressions that have the best empirical fit of the U.S. federal funds rate in this sample period, where the “best fit” is defined by different likelihood criteria.

The MCS procedure begins with 25 regression models. We include a pure first-order autoregression, AR(1), of the federal funds rate in the initial MCS.
The remaining 24 models are Taylor rule regressions that contain different combinations of lagged inflation, lags of various definitions of real economic activity (i.e., the output gap, the unemployment rate gap, or real marginal cost), and in some cases the lagged federal funds rate.

It seems that there is limited information in our U.S. sample for the MCS procedure to narrow the set of Taylor rule regressions. The one exception is that the MCS only retains regressions that admit the lagged interest rate. This includes the pure AR(1). The reason is that the time-series properties of the federal funds rate are well explained by its own lag. Thus, the lagged federal funds rate appears to dominate lags of inflation and the real activity variables for explaining the current funds rate. There is some solace for advocates of interest rate rules, because under one likelihood criterion, the MCS often tosses out Taylor rule regressions lacking lags of inflation. Nonetheless, the MCS indicates that the data are consistent with either lags of the output gap, the unemployment rate gap, or real marginal cost playing the role of the real activity variables in the Taylor rule regression. This is not a surprising result. Measurement of gap and marginal cost variables remains an unresolved issue for macroeconometrics; for example, see Orphanides and Van Norden (2002) and Staiger, Stock, and Watson (1997a). It is also true that monetary policymakers rely on sophisticated information sets that cannot be spanned by a few aggregate variables (see Bernanke and Boivin (2003)). The upshot is that the sample used to calculate the MCS has difficulties extracting useful information to separate the pure AR(1) from Taylor rule regressions that include the lagged federal funds rate.

1.4. Outline of Paper

The paper is organized as follows. We present the theoretical framework of the MCS in Section 2. Section 3 outlines practical bootstrap methods to implement the MCS. Multiple model comparison methods related to the MCS are discussed in Section 4. Section 5 reports the results of simulation experiments. The MCS is applied to two empirical examples in Section 6. Section 7 concludes. The Supplemental Material (Hansen, Lunde, and Nason (2011)) provides a detailed description of our bootstrap implementation and some tables that substantiate the results presented in the simulation and empirical sections.

2. GENERAL THEORY FOR MODEL CONFIDENCE SET

In this section, we discuss the theory of model confidence sets for a general set of alternatives. Our leading example concerns the comparison of empirical models, such as forecasting models. Nevertheless, we do not make specific references to models in the first part of this section, in which we lay out the general theory.

We consider a set, M0, that contains a finite number of objects that are indexed by i = 1,...,m0. The objects are evaluated in terms of a loss function
and we denote the loss that is associated with object i in period t as Li,t, t = 1,...,n. For example, in the situation where a point forecast Ŷi,t of Yt is evaluated in terms of a loss function L, we define Li,t = L(Yt Ŷi,t). Define the relative performance variables

$$d_{ij,t}\equiv L_{i,t}-L_{j,t}\qquad\text{for all } i\,j\in\mathcal{M}^0.$$

This paper assumes that μij ≡ E(dij,t) is finite and does not depend on t for all i, j ∈ M0. We rank alternatives in terms of expected loss, so that alternative i is preferred to alternative j if μij < 0.

DEFINITION 1: The set of superior objects is defined by

$$\mathcal{M}^*\equiv\{i\in\mathcal{M}^0:\mu_{ij}\le 0\ \text{for all } j\in\mathcal{M}^0\}.$$

The objective of the MCS procedure is to determine M∗. This is done through a sequence of significance tests, where objects that are found to be significantly inferior to other elements of M0 are eliminated. The hypotheses that are being tested take the form

$$(1)\qquad H_{0,\mathcal{M}}:\mu_{ij}=0\qquad\text{for all } i\,j\in\mathcal{M},$$

where M ⊂ M0. We denote the alternative hypothesis, μij ≠ 0 for some i, j ∈ M, by HA,M. Note that H0,M∗ is true given our definition of M∗, whereas H0,M is false if M contains elements from M∗ and its complement, M0 \ M∗. Naturally, the MCS is specific to a set of candidate models, M0, and is therefore silent about the relative merits of objects that are not included in M0.

We define a model confidence set to be any subset of M0 that contains all of M∗ with a given probability (its coverage probability). The challenge is to design a procedure that produces a set with the proper coverage probability. The next subsection introduces a generic MCS procedure that meets this requirement. This MCS procedure is constructed from an equivalence test and an elimination rule that are assumed to have certain properties. Next, Section 3 presents feasible tests and elimination rules that can be used for specific problems, such as comparing out-of-sample forecasts and in-sample regression models.

2.1. The MCS Algorithm and Its Properties

As stated in the Introduction, the MCS procedure is based on an equivalence test, δM, and an elimination rule, eM. The equivalence test, δM, is used to test the hypothesis H0,M for any M ⊂ M0, and eM identifies the object of M that is to be removed from M in the event that H0,M is rejected. As a convention, we let δM = 0 and δM = 1 correspond to the cases where H0,M is accepted and rejected, respectively.
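As a concrete illustration of this setup, the sketch below computes the relative performance variables dij,t and their sample means from a matrix of squared-error losses; the three competing "forecasts" are invented for the example.

```python
# Illustration of the Section 2 setup: an n x m0 matrix of losses L[t, i]
# and the implied relative performance variables d_ij,t = L_i,t - L_j,t.
# The three "forecasts" below are invented for this example.
import numpy as np

rng = np.random.default_rng(2)
n = 500
y = rng.normal(size=n)                        # target series
forecasts = np.column_stack([                 # m0 = 3 competing objects
    np.zeros(n),                              # uninformative forecast
    0.5 * y + rng.normal(scale=1.0, size=n),  # noisy but informative
    y + rng.normal(scale=0.2, size=n),        # close to the target
])
loss = (forecasts - y[:, None]) ** 2          # L_i,t: squared-error loss

# d[t, i, j] = L_i,t - L_j,t; mu_hat[i, j] estimates mu_ij = E[d_ij,t]
d = loss[:, :, None] - loss[:, None, :]
mu_hat = d.mean(axis=0)
print(np.round(mu_hat, 3))   # object i preferred to j when mu_ij < 0
```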
DEFINITION 2—MCS Algorithm:
Step 0. Initially set M = M0.
Step 1. Test H0,M using δM at level α.
Step 2. If H0,M is accepted, define M̂∗_{1−α} = M; otherwise, use eM to eliminate an object from M and repeat the procedure from Step 1.

The set M̂∗_{1−α}, which consists of the set of surviving objects (those that survived all tests without being eliminated), is referred to as the model confidence set. Theorem 1, which is stated below, shows that the term “confidence set” is appropriate in this context, provided that the equivalence test and the elimination rule satisfy the following assumption.

ASSUMPTION 1: For any M ⊂ M0, we assume about (δM eM) that (a) lim sup_{n→∞} P(δM = 1|H0,M) ≤ α, (b) lim_{n→∞} P(δM = 1|HA,M) = 1, and (c) lim_{n→∞} P(eM ∈ M∗|HA,M) = 0.

The conditions that Assumption 1 states for δM are standard requirements for hypothesis tests. Assumption 1(a) requires that the asymptotic level not exceed α and Assumption 1(b) requires that the asymptotic power be 1, whereas Assumption 1(c) requires that a superior object i∗ ∈ M∗ not be eliminated (as n → ∞) as long as there are inferior models in M.

THEOREM 1—Properties of MCS: Given Assumption 1, it holds that (i) lim inf_{n→∞} P(M∗ ⊂ M̂∗_{1−α}) ≥ 1 − α and (ii) lim_{n→∞} P(i ∈ M̂∗_{1−α}) = 0 for all i ∉ M∗.

PROOF: Let i∗ ∈ M∗. To prove (i), we consider the event that i∗ is eliminated from M. From Assumption 1(c), it follows that P(δM = 1, eM = i∗|HA,M) ≤ P(eM = i∗|HA,M) → 0 as n → ∞. So the probability that a good model is eliminated when M contains poor models vanishes as n → ∞. Next, Assumption 1(a) shows that lim sup_{n→∞} P(δM = 1, eM = i∗|H0,M) ≤ lim sup_{n→∞} P(δM = 1|H0,M) ≤ α, such that the probability that i∗ is eliminated when all models in M are good models is asymptotically bounded by α. To prove (ii), we first note that lim_{n→∞} P(eM = i∗|HA,M) = 0, such that only poor models will be eliminated (asymptotically) as long as M ⊋ M∗. On the other hand, Assumption 1(b) ensures that models will be eliminated as long as the null hypothesis is false. Q.E.D.

Consider first the situation where the data contain little information, such that the equivalence test lacks power and the elimination rule may question a superior model prior to the elimination of all inferior models. The lack of power causes the procedure to terminate too early (on average), and the MCS will contain a large number of models, including several inferior models. We view this as a strength of the MCS procedure. Since lack of power is tied to
the lack of information in the data, the MCS should be large when there is insufficient information to distinguish good and bad models.

In the situation where the data are informative, the equivalence test is powerful and will reject all false hypotheses. Moreover, the elimination rule will not question any superior model until all inferior models have been eliminated. (This situation is guaranteed asymptotically.) The result is that the first time a superior model is questioned by the elimination rule is when the equivalence test is applied to M∗. Thus, the probability that one (or more) superior model is eliminated is bounded (asymptotically) by the size of the test! Note that additional superior models may be eliminated in subsequent tests, but these tests will only be performed if H0,M∗ is rejected. Thus, the asymptotic familywise error rate (FWE), which is the probability of making one or more false rejections, is bounded by the level that is used in all tests. Sequential testing is key for building a MCS. However, econometricians often worry about the properties of a sequential testing procedure, because it can “accumulate” Type I errors with unfortunate consequences (see, e.g., Leeb and Pötscher (2003)). The MCS procedure does not suffer from this problem because the sequential testing is halted when the first hypothesis is accepted.

When there is only a single model in M∗ (one best model), we obtain a stronger result.

COROLLARY 1: Suppose that Assumption 1 holds and that M∗ is a singleton. Then lim_{n→∞} P(M∗ = M̂∗_{1−α}) = 1.

PROOF: When M∗ is a singleton, M∗ = {i∗}, it follows from Theorem 1 that i∗ will be the last surviving element with probability approaching 1 as n → ∞. The result now follows, because the last surviving element is never eliminated. Q.E.D.

2.2. Coherency Between Test and Elimination Rule

The previous asymptotic results do not rely on any direct connection between the hypothesis test, δM, and the elimination rule, eM. Nonetheless, when the MCS is implemented in finite samples, there is an advantage to the hypothesis test and elimination rule being coherent. The next theorem establishes a finite sample version of the result in Theorem 1(i) when there is a certain coherency between the hypothesis test and the elimination rule.

THEOREM 2: Suppose that P(δM = 1, eM ∈ M∗) ≤ α. Then we have P(M∗ ⊂ M̂∗_{1−α}) ≥ 1 − α.

PROOF: We only need to consider the first instance that eM ∈ M∗, because all preceding tests will not eliminate elements that are in M∗. Regardless of
the null hypothesis being true or false, we have P(δM = 1, eM ∈ M∗) ≤ α. So it follows that α bounds the probability that an element from M∗ is eliminated. Additional elements from M∗ may be eliminated in subsequent tests, but these tests will only be undertaken if all preceding tests are rejected. So we conclude that P(M∗ ⊂ M̂∗_{1−α}) ≥ 1 − α. Q.E.D.

The property that P(δM = 1, eM ∈ M∗) ≤ α holds under both the null hypothesis and the alternative hypothesis is key for the result in Theorem 2. For a test with the correct size, we have P(δM = 1|H0,M) ≤ α, which implies P(δM = 1, eM ∈ M∗|H0,M) ≤ α. The additional condition, P(δM = 1, eM ∈ M∗|HA,M) ≤ α, ensures that a rejection, δM = 1, can be taken as significant evidence that eM is not in M∗.

In practice, hypothesis tests often rely on asymptotic results that cannot guarantee that P(δM = 1, eM ∈ M∗) ≤ α holds in finite samples. We provide a definition of coherency between a test and an elimination rule that is useful in situations where testing is grounded on asymptotic distributions. In what follows, we use P0 to denote the probability measure that arises by imposing the null hypothesis via the transformation dijt → dijt − μij. Thus P is the true probability measure, whereas P0 is a simple transformation of P that satisfies the null hypothesis.

DEFINITION 3: There is said to be coherency between test and elimination rule when P(δM = 1, eM ∈ M∗) ≤ P0(δM = 1).

The coherency, in conjunction with an asymptotic control of the Type I error, lim sup_{n→∞} P0(δM = 1) ≤ α, translates into an asymptotic version of the assumption we made in Theorem 2. Coherency places restrictions on the combinations of tests and elimination rules we can employ. These restrictions go beyond those imposed by the asymptotic conditions we formulated in Assumption 1. In fact, coherency serves to curb the reliance on asymptotic properties so as to avoid perverse outcomes in finite samples that could result from absurd combinations of test and elimination rule. Coherency prevents us from adopting the most powerful test of the hypothesis H0,M in some situations. The reason is that tests do not necessarily identify a single element as the cause for the rejection. A good analogy is found in the standard regression model, where an F-test may reject the joint hypothesis that all regression coefficients are zero, even though all t-statistics are insignificant.2 In our bootstrap implementations of the MCS procedure, we adopt the required coherency between the test and the elimination rule.

2 Another analogy is that it is easier to conclude that a murder has taken place than it is to determine who committed the murder.
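Definition 2, together with the coherency requirement just described, maps directly into code. The sketch below is generic scaffolding that assumes the user supplies the equivalence test (returning a p-value) and the elimination rule; the toy test and rule at the bottom are crude placeholders for illustration, not the bootstrap procedures of Section 3.

```python
# Generic sketch of the MCS algorithm (Definition 2): test H_{0,M}; if
# rejected, eliminate e_M and repeat; stop at the first acceptance.
import numpy as np
from scipy import stats

def mcs(loss, alpha, equiv_test, elim_rule):
    """loss: n x m0 loss matrix; returns surviving model indices."""
    survivors = list(range(loss.shape[1]))
    while len(survivors) > 1:
        if equiv_test(loss[:, survivors]) >= alpha:
            break                              # H_{0,M} accepted: M-hat* = M
        worst = elim_rule(loss[:, survivors])  # e_M picks the object to drop
        survivors.pop(worst)
    return survivors

def toy_test(L):
    # Placeholder delta_M: normal-approximation p-value for the largest
    # |t|-type deviation of a model's average loss from the overall mean.
    # It ignores multiplicity and dependence; illustration only.
    n = L.shape[0]
    dbar = L.mean(axis=0) - L.mean()
    se = L.std(axis=0, ddof=1) / np.sqrt(n)
    return 2.0 * stats.norm.sf(np.max(np.abs(dbar) / se))

def toy_rule(L):
    # Placeholder e_M: drop the survivor with the largest average loss.
    return int(np.argmax(L.mean(axis=0)))

# Example: five models, the last two distinctly worse.
rng = np.random.default_rng(3)
loss = rng.normal(size=(1000, 5)) ** 2
loss[:, 3:] += 0.3
print(mcs(loss, alpha=0.10, equiv_test=toy_test, elim_rule=toy_rule))
```

Because toy_rule always nominates the worst-performing survivor, a rejection by toy_test can be read as evidence against that particular model, which is the informal sense of the coherency that the bootstrap implementation enforces.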
2.3. MCS p-Values

In this section, we introduce the notion of MCS p-values. The elimination rule, eM, defines a sequence of (random) sets M0 = M1 ⊃ M2 ⊃ ··· ⊃ Mm0, where Mi = {eMi,...,eMm0} and m0 is the number of elements in M0. So eM1 (= eM0) is the first element to be eliminated in the event that H0,M1 is rejected, eM2 is the second element to be eliminated, and so forth.

DEFINITION 4—MCS p-Values: Let PH0,Mi denote the p-value associated with the null hypothesis H0,Mi, with the convention that PH0,Mm0 ≡ 1. The MCS p-value for model eMj ∈ M0 is defined by p̂eMj ≡ max_{i≤j} PH0,Mi.

The advantage of this definition of MCS p-values will be evident from Theorem 3, which is stated below. Since Mm0 consists of a single model, the null hypothesis, H0,Mm0, simply states that the last surviving model is as good as itself, making the convention PH0,Mm0 ≡ 1 logical.

Table I illustrates how MCS p-values are computed and how they relate to the p-values of the individual tests, PH0,Mi, i = 1,...,m0. The MCS p-values are convenient because they make it easy to determine whether a particular object is in M̂∗_{1−α} for any α. Thus, the MCS p-values are an effective way to convey the information in the data.

THEOREM 3: Let the elements of M0 be indexed by i = 1,...,m0. The MCS p-value, p̂i, is such that i ∈ M̂∗_{1−α} if and only if p̂i ≥ α, for any i ∈ M0.
TABLE I
COMPUTATION OF MCS p-VALUES^a

Elimination Rule    p-Value for H0,Mk    MCS p-Value
eM1                 PH0,M1 = 0.01        p̂eM1 = 0.01
eM2                 PH0,M2 = 0.04        p̂eM2 = 0.04
eM3                 PH0,M3 = 0.02        p̂eM3 = 0.04
eM4                 PH0,M4 = 0.03        p̂eM4 = 0.04
eM5                 PH0,M5 = 0.07        p̂eM5 = 0.07
eM6                 PH0,M6 = 0.04        p̂eM6 = 0.07
eM7                 PH0,M7 = 0.11        p̂eM7 = 0.11
eM8                 PH0,M8 = 0.25        p̂eM8 = 0.25
...                 ...                  ...
eMm0                PH0,Mm0 ≡ 1.00       p̂eMm0 = 1.00

^a Note that MCS p-values for some models do not coincide with the p-values for the corresponding null hypotheses. For example, the MCS p-value for eM3 (the third model to be eliminated) exceeds the p-value for H0,M3, because the p-value associated with H0,M2, a null hypothesis tested prior to H0,M3, is larger.
PROOF: Suppose that p̂i < α and determine the k for which i = eMk. Since p̂i = p̂eMk = max_{j≤k} PH0,Mj, it follows that H0,M1,...,H0,Mk are all rejected at significance level α. Hence, the first accepted hypothesis (if any) occurs after i = eMk has been eliminated. So p̂i < α implies i ∉ M̂∗_{1−α}. Suppose now that p̂i ≥ α. Then, for some j ≤ k, we have PH0,Mj ≥ α, in which case H0,Mj is accepted at significance level α, which terminates the MCS procedure before the elimination rule gets to eMk = i. So p̂i ≥ α implies i ∈ M̂∗_{1−α}. This completes the proof. Q.E.D.

The interpretation of a MCS p-value is analogous to that of a classical p-value. The analogy is to a (1 − α) confidence interval that contains the “true” parameter with a probability no less than 1 − α. The MCS p-value also cannot be interpreted as the probability that a particular model is the best model, exactly as a classical p-value is not the probability that the null hypothesis is true. Rather, the probability interpretation of a MCS p-value is tied to the random nature of the MCS, because the MCS is a random subset of models that contains M∗ with a certain probability.

3. BOOTSTRAP IMPLEMENTATION

3.1. Equivalence Tests and Elimination Rules

Now we consider specific equivalence tests and an elimination rule that satisfy Assumption 1. The following assumption is sufficiently strong to enable us to implement the MCS procedure with bootstrap methods.

ASSUMPTION 2: For some r > 2 and γ > 0, it holds that E|dijt|^{r+γ} < ∞ for all i, j ∈ M0 and that {dijt}_{i,j∈M0} is strictly stationary with var(dijt) > 0 and α-mixing of order −r/(r − 2).

Assumption 2 places restrictions on the relative performance variables, {dijt}, not directly on the loss variables, {Lit}. For example, a loss function need not be stationary as long as the loss differentials, {dijt}, i, j ∈ M0, satisfy Assumption 2. The assumption allows for some types of structural breaks and other features that can create nonstationary {Lit}, as long as all objects in M0 are affected in a similar way that preserves the stationarity of {dijt}.

3.1.1. Quadratic-Form Test

Let M be some subset of M0 and let m be the number of models in M = {i1,...,im}. We define the vector of loss variables Lt ≡ (Li1,t,...,Lim,t)', t = 1,...,n, and its sample average L̄ ≡ n^{−1}Σ_{t=1}^n Lt, and we let ι ≡ (1,...,1)' be the column vector where all m entries equal 1. The orthogonal complement to ι is an m × (m − 1) matrix, ι⊥, that has full column rank and satisfies ι'⊥ι = 0
464
P. R. HANSEN, A. LUNDE, AND J. M. NASON
(a vector of zeros). The m − 1-dimensional vector Xt ≡ ι ⊥ Lt can be viewed as m − 1 contrasts, because each element of Xt is a linear combination of dijt , i j ∈ M which has mean zero under the null hypothesis. LEMMA 1: Given Assumption 2, let Xt ≡ ι ⊥ Lt and define θ ≡ E(Xt ). The null d hypothesis H0M is equivalent to θ = 0 and it holds that n1/2 (X¯ − θ) → N(0 Σ) n ¯ where X¯ ≡ n−1 t=1 Xt and Σ ≡ limn→∞ var(n1/2 X) PROOF: Note that Xt = ι ⊥ Lt can be written as a linear combination of dijt , i j ∈ M0 because ι ⊥ ι = 0 Thus H0M is given by θ = 0 and the asymptotic normality follows by the central limit theorem for α-mixing processes (see, e.g., White (2000a)). Q.E.D. Lemma 1 shows that H0M can be tested using traditional quadratic-form ¯ where Σˆ is some consistent estimator statistics. An example is TQ ≡ nX¯ Σˆ # X, # ˆ 3 The rank q ≡ rank(Σ) ˆ ˆ of Σ and Σ denotes the Moore–Penrose inverse of Σ. represents the effective number of contrasts (the number of linearly indepenp dent comparisons) under H0M . Since Σˆ → Σ (by assumption), it follows that d TQ → χ2(q) , where χ2(q) denotes the χ2 distribution with q degrees of freedom. Under the alternative hypothesis, TQ diverge to infinity with probability 1. So the test δM will meet the requirements of Assumption 1 when constructed from TQ Although the matrix ι⊥ is not fully identified by the requirements ι ⊥ ι = 0 and det(ι ⊥ ι⊥ ) = 0 (but the subspace spanned by the columns of ι⊥ is), there is no problem because the statistic TQ is invariant to the choice for ι⊥ A rejection of the null hypothesis based on the quadratic-form test need not identify an inferior alternative because a large value of TQ can stem from several d¯ij being slightly different from zero. To achieve the required coherence between test and elimination rule, additional testing is needed. Specifically, one needs to test all subhypotheses of any rejected hypothesis, unless the subhypothesis is nested in an accepted hypothesis, before further elimination is justified. The underlying principle is known as the closed testing procedure (see Lehmann and Romano (2005, pp. 366–367)). When m is large relative to the sample size, n reliable estimates of Σ are difficult to obtain, because the number of elements of Σ to be estimated are of order m2 It is convenient to use a test statistic that does not require an explicit estimate of Σ in this case. We consider test statistics that resolve this issue in the next section. 3 Under the additional assumption that {dijt }ij∈M is uncorrelated (across t), we can use ˆ ¯ ¯ Σ = n−1 nt=1 (Xt − X)(X t − X) . Otherwise, we need a robust estimator along the lines of Newey and West (1987). In the context of comparing forecasts, West and Cho (1995) were the first investigators to use the test statistic TQ . They based their test on (asymptotic) critical values from χ2(m−1) .
465
THE MODEL CONFIDENCE SET
3.1.2. Tests Constructed From t-Statistics This section develops two tests that are based on multiple t-statistics. This approach has two advantages. First, it bypasses the need for an explicit estimate of Σ Second, the multiple t-statistic approach simplifies the construction of an elimination rule that satisfies the notion of coherency formulated in Definition 3. n Define the relative sample loss statistics d¯ij ≡ n−1 t=1 dijt and d¯i· ≡ m−1 j∈M d¯ij Here d¯ij measures the relative sample loss between the ith and jth models, while d¯i· is the sample loss of the ith model relative to the average across models inM The latter can be seen from the identity d¯i· = (L¯ i − L¯ · ) n −1 −1 ¯ ¯ ¯ where Li ≡ n t=1 Lit and L· ≡ m i∈M Li From these statistics, we construct the t-statistics tij =
d¯ij d¯ij ) var(
d¯i· and ti· = d¯i· ) var(
for i j ∈ M
d¯ij ) and var( d¯i· ) denote estimates of var(d¯ij ) and var(d¯i· ), respecwhere var( tively. The first statistic, tij is used in the well known test for comparing two forecasts; see Diebold and Mariano (1995) and West (1996). The t-statistics tij and ti· are associated with the null hypothesis that Hij : μij = 0 and Hi· : μi· = 0, where μi· = E(d¯i· ) These statistics form the basis of tests of the hypothesis H0M . We take advantages of the equivalence between H0M {Hij for all i j ∈ M}, and {Hi· for all i ∈ M}. With M = {i1 im } the equivalence follows from μi1 = · · · = μim
⇔
μij = 0
for all i j ∈ M
⇔
μi· = 0
for all i ∈ M
Moreover, the equivalence extends to {μi· ≤ 0 for all i ∈ M} as well as {|μij | ≤ 0 for all i j ∈ M} and these two formulations of the null hypothesis map naturally into the test statistics TmaxM = max ti· i∈M
and
TRM ≡ max |tij | ij∈M
which are available to test the hypothesis H0M .4 The asymptotic distributions of these test statistics are nonstandard because they depend on nuisance parameters (under the null and the alternative). However, the nuisance parameters pose few obstacles, as the relevant distributions can be estimated with bootstrap methods that implicitly deal with the nuisance parameter problem. 4
An earlier version of this paper has results for the test statistics TD =
n
2 j=1 ti·
and TQ
466
P. R. HANSEN, A. LUNDE, AND J. M. NASON
This feature of the bootstrap has previously been used in this context by Kilian (1999), White (2000b), Hansen (2003b, 2005), and Clark and McCracken (2005). Characterization of the MCS procedure needs an elimination rule, eM that meets the requirements of Assumption 1(c) and the coherency of Definition 3. For the test statistic TmaxM , the natural elimination rule is emaxM ≡ arg maxi∈M ti· because a rejection of the null hypothesis identifies the hypothesis μj· = 0 as false for j = emaxM In this case the elimination rule removes the model that contributes most to the test statistic. This model has the largest standardized excess loss relative to the average across all models in M With the other test statistic, TRM the natural elimination rule is eRM = arg maxi∈M supj∈M tij because this model is such that teRM j = TRM for some j ∈ M These combinations of test and elimination rule will satisfy the required coherency. PROPOSITION 1: Let δmaxM and δRM denote the tests based on the statistics TmaxM and TRM respectively. Then (δmaxM emaxM ) and (δRM eRM ) satisfy the coherency of Definition 3. PROOF: Let Ti denote either ti· or maxj∈M tij and note that the test statistics TmaxM and TRM are both of the form T = maxi∈M Ti Let P0 be as defined in Section 2.2. From the definitions of ti· and tij , we have for i ∈ M∗ the firstorder stochastic dominance result P0 (maxi∈M Ti > x) ≥ P(maxi∈M Ti > x) for any M ⊂ M∗ and all x ∈ R The coherency now follows from P(T > c eM = i for some i ∈ M∗ ) = P(T > c T = Ti for some i ∈ M∗ ) = P max ∗ Ti > c Ti ≥ Tj for all j ∈ M ≤ P max ∗ Ti > c i∈M∩M i∈M∩M ≤ P0 max ∗ Ti > c ≤ P0 max Ti > c = P0 (T > c) i∈M∩M
i∈M
This completes the proof.
Q.E.D.
Next, we establish two intermediate results that underpin the bootstrap implementation of the MCS. LEMMA 2: Suppose that Assumption 2 holds and define Z¯ = (d¯1· d¯m· ) Then (2)
d n1/2 (Z¯ − ψ) → Nm (0 Ω) as n → ∞
¯ and Ω ≡ limn→∞ var(n1/2 Z) ¯ and the null hypothesis H0M is where ψ ≡ E(Z) equivalent to: ψ = 0
THE MODEL CONFIDENCE SET
467
PROOF: From the identity d¯i· = L¯ i − L¯ · = L¯ i − m−1 j∈M L¯ j = m−1 × −1 ¯ ¯ ¯ ¯ j∈M (Li − Lj ) = m j∈M dij , we see that the elements of Z are linear transformations of X¯ from Lemma 1. Thus for some (m − 1) × m matrix G, we have Z¯ = G X¯ and the result now follows, where ψ = G θ and Ω = G ΣG (The m × m covariance matrix Ω has reduced rank, as rank(Ω) ≤ m − 1.) Q.E.D. In the following discussion, we let denote the m × m correlation matrix that is implied by the covariance matrix Ω of Lemma 2. Further, given the vector of random variables ξ ∼ Nm (0 ) we let F denote the distribution of maxi ξi . 1/2 d¯i· ) = ˆ 2i ≡ var(n THEOREM 4: Let Assumption 2 hold and suppose that ω p d¯i· ) → ω2i where ω2i , i = 1 m, are the diagonal elements of Ω Under nvar( d
H0M , we have TmaxM → F , and under the alternative hypothesis HAM we have TmaxM → ∞ in probability. Moreover, under the alternative hypothesis, we have / M∗ for n sufficiently large. TmaxM = tj· , where j = emaxM ∈ ˆ ≡ diag(ω PROOF: Let D ≡ diag(ω21 ω2m ) and D ˆ 21 ω ˆ 2m ) From d Lemma 2 it follows that ξn = (ξ1n ξmn ) ≡ D−1/2 n1/2 Z¯ → Nm (0 ), since d¯i· ) = n1/2 d¯i· /ω ˆ i = ξin ωωˆ ii , it now fol = D−1/2 ΩD−1/2 From ti· = d¯i· / var( d ˆ −1/2 n1/2 Z) ¯ i → lows that TmaxM = maxi ti· = maxi (D F Under the alternap / M∗ so that both tj· and tive hypothesis, we have d¯j· → μj· > 0 for any j ∈ TmaxM diverge to infinity at rate n1/2 in probability. Moreover, it follows that / M∗ for n sufficiently large. Q.E.D. emaxM ∈ Theorem 4 shows that the asymptotic distribution of TmaxM depends on the correlation matrix Nonetheless, as discussed earlier, bootstrap methods can be employed to deal with this nuisance parameter problem. Thus, we construct a test of H0M by comparing the test statistic TmaxM to an estimate of the 95% quantile, say, of its limit distribution under the null hypothesis. Although the quantile may depend on our bootstrap implementation leads to an asymptotically valid test because the bootstrap consistently estimates the desired quantile. A detailed description of our bootstrap implementation is available in a separate appendix (Hansen, Lunde, and Nason (2011)). Theorem 4 formulates results for the situation where the MCS is constructed with TmaxM and emaxM = arg maxi ti· Similar results hold for the MCS that is constructed from TRM and eRM The arguments are almost identical to those used for Theorem 4. 3.2. MCS for Regression Models This section shows how to construct the MCS for regression models using likelihood-based criteria. Information criteria, such as the AIC and BIC, are
468
P. R. HANSEN, A. LUNDE, AND J. M. NASON
special cases for building a MCS of regression models. The MCS approach departs from standard practice where the AIC and BIC select a single model, but are silent about the uncertainty associated with this selection.5 Thus, the MCS procedure yields valuable additional information about the uncertainty surrounding model selection. In Section 6.2, application of the MCS procedure in sample to Taylor rule regressions indicates this uncertainty can be substantial. Although we focus on regression models for simplicity, it will be evident that the MCS procedure laid out in this setting can be adapted to more complex models, such as the type of models analyzed in Sin and White (1996). 3.2.1. Framework and Assumptions Consider the family of regression models Yt = β j Xjt + εjt , t = 1 n, where Xjt is a subset of the variables in Xt for j = 1 m0 The set of regression models, M0 may consist of nested, nonnested, and overlapping specifications. Throughout we assume that the pair (Yt Xt ) is strictly stationary and satisfies Assumption 1 in Goncalves and White (2005). This justifies our use of the moving-block bootstrap to implement our resampling procedure. The framework of Goncalves and White (2005) permits weak serial dependence in (Yt Xt ) which is important for many applications. The population parameters for each of the models are defined by β0j =
2 )]−1 E(Xjt Yt ) and σ0j2 = E(εjt ) where εjt = Yt − β 0j Xjt t = [E(Xjt Xjt 1 n Furthermore, the Gaussian quasi-log-likelihood function is, apart from a constant, given by 1 n (Yt − β j Xjt )2 (βj σj2 ) = − log σj2 − σj−2 2 2 t=1 n
3.2.2. MCS by Kullback–Leibler Divergence One way to define the best regression model is in terms of the Kullback– Leibler information criterion (KLIC) (see, e.g., Sin and White (1996)). This is equivalent to ranking the models in terms of the expected value of the quasilog-likelihood function when evaluated at their respective population parameters, that is, E[(β0j σ0j2 )] It is convenient to define Q(Z θj ) = −2(βj σj2 ) = n log σj2 +
n (Yt − β j Xjt )2 t=1
σj2
5 The same point applies to the Autometrics procedure; see Doornik (2009) and references therein. Autometrics is constructed from a collection of tests and decision rules but does not control a familywise error rate, and the set of models that Autometrics seeks to identify is not defined from a single criterion, such as the Kullback–Leibler information criterion.
469
THE MODEL CONFIDENCE SET
where θj can be viewed as a high-dimensional vector that is restricted by the parameter space Θj ⊂ Θ that defines the jth regression model. The population parameters are here given by θ0j = arg minθ∈Θj E[Q(Z θ)], j = 1 m0 and the best model is defined by minj E[Q(Z θ0j )] In the notation of the MCS framework, the KLIC leads to M∗KLIC = j : E[Q(Z θ0j )] = min E[Q(Z θ0i )] i
which (as always) permits the existence of more than one best model.6 The extension to other criteria, such as the AIC and the BIC, is straightforward. For instance, the set of best models in terms of the AIC is given by M∗AIC = {j : E[Q(Z θ0j ) + 2kj ] = mini E[Q(Z θ0i ) + 2ki ]}, where kj is the degrees of freedom in the jth model. ∗ ∗ The likelihood framework enables us to construct either M KLIC or MAIC by drawing on the theory of quasi-maximum-likelihood estimation (see, e.g., White (1994)). Since the family of regression models is linear, the n
−1 ) × quasi-maximum-likelihood estimators are standard, βˆ j = ( t=1 Xjt Xjt n n 2 −1 2
ˆ ˆj = n ˆ jt where εˆ jt = Yt − βj Xjt We have t=1 Xjt Yt and σ t=1 ε Q(Z θˆ j ) − Q(Z θ0j )
= n (log σ − log σˆ ) + n 2 0j
2 j
−1
n
ε /σ − 1 2 jt
2 0j
t=1
which is the quasi-likelihood ratio (QLR) statistic for the null hypothesis, H0 : θ = θ0j . In the event that the jth model is correctly specified, it is well known that the limit distribution of Q(Z θˆ j ) − Q(Z θ0j ) is χ2(kj ) where the degrees of freedom, kj is given by the dimension of θ0j = (β 0j σ0j2 ) In the present multimodel setup, it is unlikely that all models are correctly specified. More generkj 2 λij Zij where ally, the limit distribution of the QLR statistic has the form, i=1 −1 λ1j λkj j are the eigenvalues of Ij Jj and Z1j Zkj j ∼ iid N(0 1). The information matrices Ij and Jj are those associated with the jth model,
6 In the present situation, we have E[Q(Zj θ0j )] ∝ σ0j2 The implication is that the error variance, σ0j2 induces the same ranking as KLIC, so that M∗KLIC = {j : σ0j2 = minj σ0j2 }
470
P. R. HANSEN, A. LUNDE, AND J. M. NASON
Ij = diag(σ0j−2 E(X jt Xjt ) 12 σ0j−4 ) and
⎛ −4 −1 0j
⎜σ n ⎜ Jj = E ⎜ ⎜ ⎝
n
Xjs εjs εjt X
jt
st=1
•
⎞ n 1 −6 −1 2 σ n Xjs εjs εjt ⎟ 2 0j ⎟ st=1 ⎟ n ⎟ 1 −8 −1 2 2 4 ⎠ σ n (εjs εjt −σ 0j ) 4 0j st=1
The effective degrees of freedom, kj is defined by the mean of the QLR limit distribution: kj = λ1j + · · · + λkj j = tr{Ij−1 Jj }
n
−1 −2 −1
= tr [E(Xjt Xjt )] σ0j n E(Xjs εjs Xjt εjt ) st=1
2 2 n εjs εjt −1 1 E −1 +n 2 st=1 σ0j4 The previous expression points to estimating kj with heteroskedasticity and autocorrelation consistent (HAC) type estimators that account for the auto2 correlation in {Xjt εjt } and {εjt } (e.g., Newey and West (1987) and Andrews (1991)). Below we use a simple bootstrap estimate of kj which is also employed in our simulations and our empirical Taylor rule regression application. The effective degrees of freedom in the context of misspecified models was first derived by Takeuchi (1976). He proposed a modified AIC, sometimes referred to as the Takeuchi information criterion (TIC), which computes the penalty with the effective degrees of freedom rather than the number of parameters as is used by the AIC; see also Sin and White (1996) and Hong and Preston (2008). We use the notation AIC and BIC to denote the information criteria that are defined by substituting the effective degrees of freedom kj for kj in the AIC and BIC, respectively. In this case, our AIC is identical to the TIC proposed by Takeuchi (1976). 3.2.3. The MCS Procedure The MCS procedure can be implemented by the moving-block bootstrap applied to the pair (Yt Xt ); see Goncalves and White (2005). We compute re∗ ∗ n samples Zb∗ = (Ybt Xbt )t=1 for b = 1 B which equates the original point ˆ estimate, θj , to the population parameter in the jth model under the bootstrap scheme. The literature has proposed several bootstrap estimators of the effective degrees of freedom, kj = E[Q(Z θ0j ) − Q(Z θˆ j )]; see, for example, Efron
THE MODEL CONFIDENCE SET
471
(1983, 1986) and Cavanaugh and Shumway (1997). These and additional estimators are analyzed and compared in Shibata (1997). We adopt the estimator for kj that is labelled B3 in Shibata (1997). In the regression context, this estimator takes the form kˆ j = B−1
B
∗ Q(Zb∗ θˆ j ) − Q(Zb∗ θˆ bj )
b=1
= B−1
B b=1
n log
σˆ j2 ∗2 σˆ bj
n ∗ (εbjt )2
+
t=1
σˆ j2
−n
n ∗ ∗ ∗ ∗ ∗ ∗ ∗2 ∗ where εbjt = Ybt − βˆ j Xbjt εˆ bjt = Ybt − βˆ ∗ bj Xbjt and σˆ bj = n−1 t=1 (εˆ bjt )2 This is an estimate of the expected overfit that results from maximization of the likelihood function. For a correctly specified model, we have kj = kj , so we would expect kˆ j ≈ kj when the jth model is correctly specified. This is indeed what we find in our simulations; see Section 5.2. Given an estimate of the effective degrees of freedom kˆ j compute the AIC statistic Q(Z θˆ j ) + kˆ j , which is centered about E{Q(Z θ0j )} The null hypothesis H0M states that E[Q(Z θ0i ) − Q(Z θ0j )] = 0 for all i j ∈ M This motivates the range statistic TRM = max [Q(Z θˆ i ) + kˆ i ] − [Q(Z θˆ j ) + kˆ j ] ij∈M
and the elimination rule eM = arg maxj∈M [Q(Z θˆ j ) + kˆ j ] This elimination rule removes the model with the largest bias adjusted residual variance. Our test statistic, TRM is a range statistic over recentered QLR statistics computed for all pairs of models in M In the special case with independent and identically distributed (i.i.d.) data and just two models in M we could simply adopt the QLR test of Vuong (1989) as our equivalence test. Next, we estimate the distribution of TRM under the null hypothesis. The estimate is calculated with methods similar to those used in White (2000b) and Hansen (2005). The joint distribution of
Q(Z θˆ 1 ) + k1 − E[Q(Z θ01 )] Q Z θˆ m0 + km0 − E Q Z θ0m0
is estimated by the empirical distribution of (3)
∗ ∗ ) + kˆ 1 − Q(Z θˆ 1 ) Q Zb∗ θˆ bm Q(Zb∗ θˆ b1 + kˆ m0 − Q Z θˆ m0 0
472
P. R. HANSEN, A. LUNDE, AND J. M. NASON
for b = 1 B because Q(Z θˆ j ) plays the role of E[Q(Z θ0j )] under the resampling scheme. These bootstrap statistics are relatively easy to compute because the structure of the likelihood function is ∗ ∗2 Q(Zb∗ θˆ bj ) − Q(Z θˆ j ) = n(log σˆ bj + 1) − n(log σˆ j2 + 1) = n log
∗2 σˆ bj
σˆ j2
n ∗2 ∗ ∗ where σˆ bj = n−1 t=1 (Ybt − βˆ ∗ bj Xbjt )2 For each of the bootstrap resamples, we compute the test statistic ∗ ∗ ˆ∗ ˆ ˆ TbR M = max {Q(Zb θbi ) + ki − Q(Z θi )} ij∈M
∗ − {Q(Zb∗ θˆ bj ) + kˆ j − Q(Z θˆ j )}
The p-value for the hypothesis test with which we are concerned is computed by pM = B−1
B
∗ 1{TbR ≥TRM } M
b=1 ∗ The empirical distribution of n−1/2 TbR M yields a conservative estimate of the −1/2 distribution of n TRM as n B → ∞ The conservative nature of this estimate refers to the p-value, pM being conservative in situations where the comparisons involve nested models. We discuss this issue at some length in the next subsection. It is also straightforward to construct the MCS using either the AIC, the BIC, the AIC , or the BIC . The relevant test statistic has the form TRM = max [Q(Z θˆ i ) + ci ] − [Q(Z θˆ j ) + cj ] ij∈M
where cj = 2kj for the AIC, cj = log(n)kj for the BIC, cj = 2kˆ j for the AIC , and cj = log(n)kˆ j for the BIC . The computation of the resampled test statis∗ tics, TbR M is identical for the three criteria. The reason is that the location shift cj has no effect on the bootstrap statistics once the null hypothesis is imposed. Under the null hypothesis, we recenter the bootstrap statistics about zero and this offsets the location shift ci − cj . 3.2.4. Issues Related to the Comparison of Nested Models When two models are nested, the null hypothesis used with KLIC, E[Q(Z θ0i )] = E[Q(Z θ0j )] has the strong implication that Q(Z θ0i ) = Q(Z θ0j ) a.e. (almost everywhere), and this causes the limit distribution of the quasilikelihood ratio statistic, Q(Z θˆ i )−Q(Z θˆ j ) to differ for nested or nonnested
THE MODEL CONFIDENCE SET
473
comparisons (see Vuong (1989)). This property of nested comparisons can be imposed on the bootstrap resamples by replacing Q(Z θˆ j ) with Q(Z ∗ θˆ j ) because the latter is the bootstrap variant of Q(Z θ0j ) The MCS procedure can be adapted so that different bootstrap schemes are used for nested and nonnested comparisons, and imposing the stronger null hypothesis Q(Z θ0i ) = Q(Z θ0j ) a.e. may improve the power of the procedure. The key difference is that the null hypothesis with KLIC has Q(Z θˆ i ) − Q(Z θˆ j ) = Op (1) for nested comparisons and Q(Z θˆ i ) − Q(Z θˆ j ) = Op (n1/2 ) for nonnested comparisons. Our bootstrap implementation is such that ∗ ∗ ) + kˆ i − Q(Z θˆ i )} − {Q(Zb∗ θˆ bj ) + kˆ j − Q(Z θˆ j )} is Op (n1/2 ), {Q(Zb∗ θˆ bi whether the comparison involves nested or nonnested models, which causes the bootstrap critical values to be conservative. Under the alternative, Q(Z θˆ i ) − Q(Z θˆ j ) diverges at rate n for nested and nonnested comparisons, so the bootstrap testing procedure is consistent in both cases. Since nested and nonnested comparisons result in different rates of convergence and different limit distributions, there are better ways to construct an adaptive procedure than through the test statistic TRM , for instance, by combining the p-values for the individual subhypotheses. We shall not pursue such an adaptive bootstrap implementation in this paper. It is, however, important to note that the issue with nested models is only relevant for KLIC because the underlying null hypotheses of other criteria, including AIC and BIC , do not imply Q(Z θ0i ) = Q(Z θ0j ) a.e. for nested models. 4. RELATION TO EXISTING MULTIPLE COMPARISONS METHODS The Introduction discussed the relationship between the MCS and the trace test used to select the number of cointegration relations (see Johansen (1988)). The MCS and the trace test share an underlying testing principle known as intersection–union testing (IUT). Berger (1982) was responsible for formalizing the IUT, while Pantula (1989) applied the IUT to the problem of selecting the lag length and order of integration in univariate autoregressive processes. Another way to cast the MCS problem is as a multiple comparisons problem. The multiple comparisons problem has a long history in the statistics literature; see Gupta and Panchapakesan (1979), Hsu (1996), Dudoit, Shaffer, and Boldrick (2003), and Lehmann and Romano (2005, Chap. 9) and references therein. Results from this literature have recently been adopted in the econometrics literature. One problem is that of multiple comparisons with best, where objects are compared to those with the best sample performance. Statistical procedures for multiple comparisons with best are discussed and applied to economic problems in Horrace and Schmidt (2000). Shimodaira (1998) used a variant of Gupta’s subset selection (see Gupta and Panchapakesan (1979)) to construct a set of models that he terms a model confidence set. His procedure is specific to a ranking of models in terms of E(AICj ) and his framework
474
P. R. HANSEN, A. LUNDE, AND J. M. NASON
is different from ours in a number of ways. For instance, his preferred set of models does not control the FWE. He also invoked a Gaussian approximation that rules out comparisons of nested models. Our MCS employs a sequential testing procedure that mimics step-down procedures for multiple hypothesis testing; see, for example, Dudoit, Shaffer, and Boldrick (2003), Lehmann and Romano (2005, Chap. 9), or Romano, Shaikh, and Wolf (2008). Our definition of MCS p-values implies the monotonicity, pˆ eM1 ≤ pˆ eM2 ≤ · · · ≤ pˆ eMm that is key for the result of Theorem 3. 0 This monotonicity is also a feature of the so-called step-down Holm adjusted p-values. 4.1. Relationship to Tests for Superior Predictive Ability Another related problem is the case where the benchmark, to which all objects are compared, is selected independently of the data used for the comparison. This problem is known as multiple comparisons with control. In the context of forecast comparisons, this is the problem that arises when testing for superior predictive ability (SPA); see White (2000b), Hansen (2005), and Romano and Wolf (2005). The MCS has several advantages over tests for superior predictive ability. The reality check for data snooping of White (2000b) and the SPA test of Hansen (2005) are designed to address whether a particular benchmark is significantly outperformed by any of the alternatives used in the comparison. Unlike these tests, the MCS procedure does not require a benchmark to be specified, which is very useful in applications without an obvious benchmark. In the situation where there is a natural benchmark, the MCS procedure can still address the same objective as the SPA tests. This is done by observing whether the designated benchmark is in the MCS, where the latter corresponds to a rejection of the null hypothesis that is relevant for a SPA test. The MCS procedure has the advantage that it can be employed for model selection, whereas a SPA test is ill-suited for this problem. A rejection of the SPA test only identifies one or more models as significantly better than the benchmark.7 Thus, the SPA test offers little guidance about which models reside in M∗ . We are also faced with a similar problem in the event that the null hypothesis is not rejected by the SPA test. In this case, the benchmark may be the best model, but this label may also be applied to other models. This issue can be resolved if all models serve as the benchmark in a series of comparisons. The result is a sequence of SPA tests that define the MCS to be the set of “benchmark” models that are found not to be significantly inferior to the alternatives. However, the level of individual SPA tests needs to be adjusted 7 Romano and Wolf (2005) improved on the reality check by identifying the entire set of alternatives that significantly dominate the benchmark. This set of models is specific to the choice of benchmark and has, therefore, no direct relation to the MCS.
THE MODEL CONFIDENCE SET
475
for the number of tests that are computed to control the FWE. For example, if the level in each of the SPA tests is α/m, the Bonferroni bound states that the resulting set of surviving benchmarks is a MCS with coverage (1−α). Nonetheless, there is a substantial loss of power associated with the small level applied to the individual tests. The loss of power highlights a major pitfall of sequential SPA tests. Another drawback of constructing a MCS from SPA-tests is that the null of a SPA test is a composite hypothesis. The null is defined by several inequality constraints which affect the asymptotic distribution of the SPA test statistic because it depends on the number of binding inequalities. The binding inequality constraints create a nuisance parameter problem. This makes it difficult to control the Type I error rate, inducing an additional loss of power; see Hansen (2003a). In comparison, the MCS procedure is based on a sequence of hypothesis tests that only involve equalities, which avoids composite hypothesis testing. 4.2. Related Sequential Testing Procedures for Model Selection This subsection considers some relevant aspects of out-of-sample evaluation of forecasting models and how the MCS procedure relates to these issues. Several papers have studied the problem of selecting the best forecasting model from a set of competing models. For example, Engle and Brown (1985) compared selection procedures that are based on six information criteria and two testing procedures (general-to-specific and specific-to-general), Sin and White (1996) analyzed information criteria for possibly misspecified models, and Inoue and Kilian (2006) compared selection procedures that are based on information criteria and out-of-sample evaluation. Granger, King, and White (1995) argued that the general-to-specific selection procedure is based on an incorrect use of hypothesis testing, because the model chosen to be the null hypothesis in a pairwise comparison is unfairly favored. This is problematic when the data set under investigation does not contain much information, which makes it difficult to distinguish between models. The MCS procedure does not assume that a particular model is the true model; neither is the null hypothesis defined by a single model. Instead, all models are treated equally in the comparison and only evaluated on out-of-sample predictive ability. 4.3. Aspects of Parameter Uncertainty and Forecasting Parameter estimation can play an important role in the evaluation and comparison of forecasting models. Specifically, when the comparison of nested models relies on parameters that are estimated using certain estimation schemes, the limit distribution of our test statistics need not be Gaussian; see West and McCracken (1998) and Clark and McCracken (2001). In the present context, there will be cases that do not fulfil Assumption 2. Some of these
476
P. R. HANSEN, A. LUNDE, AND J. M. NASON
problems can be avoided by using a rolling window for parameter estimation, known as the rolling scheme. This is the approach taken by Giacomini and White (2006). Alternatively one can estimate the parameters once (using data that are dated prior to the evaluation period) and then compare the forecasts conditional on these parameter estimates. However, the MCS should be applied with caution when forecasts are based on estimated parameters because our assumptions need not hold in this case. As a result, modifications are needed in the case with nested models; see Chong and Hendry (1986), Harvey and Newbold (2000), Chao, Corradi, and Swanson (2001), and Clark and McCracken (2001) among others. The key modification that is needed to accommodate the case with nested models is to adopt a test with a proper size. With proper choices for δM and eM , the general theory for the MCS procedure remains. However, in this paper we will not pursue this extension because it would obscure our main objective, which is to lay out the key ideas of the MCS. 4.4. Bayesian Interpretation The MCS procedure is based on frequentist principles, but resembles some aspects of Bayesian model selection techniques. By specifying a prior over the models in M0 , a Bayesian procedure would produce a posterior distribution for each model, conditional on the actual data. This approach to MCS construction includes those models with the largest posteriors that sum at least to 1 − α If the Bayesian were also to choose models by minimizing the “risk” associated with the loss attributed to each model, the MCS would be a Bayes decision procedure with respect to the model posteriors. Note that the Bayesian and frequentist MCSs rely on the metric under which loss is calculated and depend on sample information. We argue that our approach to the MCS and its bootstrap implementation compares favorably to Bayesian methods of model selection. One advantage of the frequentist approach is that it avoids having to place priors on the elements of M0 (and their parameters). Our probability statement is associated with the random data-dependent set of models that is the MCS. It therefore is meaningful to state that the best model can be found in the MCS with a certain probability. The MCS also places moderate computational demands on the researcher, unlike the synthetic data creation methods on which Bayesian Markov chain Monte Carlo methods rely. 5. SIMULATION RESULTS This section reports on Monte Carlo experiments that show the MCS to be properly sized and possess good power in various simulation designs.
THE MODEL CONFIDENCE SET
477
5.1. Simulation Experiment I We consider two designs √ that are based on the m-dimensional vector 1
θ = (0 m−1 m−2 1) λ/ n that defines the relative performances μij = m−1 E(dijt ) = θi − θj . The experimental design ensures that M∗ consists of a single element, unless λ = 0, in which case we have M∗ = M0 . The stochastic nature of the simulation is primarily driven by Xt ∼ iid Nm (0 Σ) where 1 for i = j, Σij = ρ for i = j, for some 0 ≤ ρ ≤ 1, where ρ controls the degree of correlation between alternatives. DESIGN I.A—Symmetric Distributed Loss: Define the (vector of) loss variables to be at Lt ≡ θ + Xt E(a2t ) at = exp(yt )
yt =
where
−ϕ √ + ϕyt−1 + ϕεt 2(1 + ϕ)
and εt ∼ iid N(0 1) This implies that E(yt ) = −ϕ/{2(1 − ϕ2 )} and var(yt ) = ϕ/(1−ϕ2 ) such that E(at ) = exp{E(yt )+var(yt )/2} = exp{0} = 1 and var(at ) = (exp{ϕ/(1 − ϕ2 )} − 1). Furthermore, E(a2t ) = var(at ) + 1 = exp{ϕ/(1 − ϕ2 )} such that var(Lt ) = 1. Note that ϕ = 0 corresponds to homoskedastic errors and ϕ > 0 corresponds to (generalized autoregressive conditional heteroskedasticity) (GARCH type) heteroskedastic errors. The simulations employ 2,500 repetitions, where λ = 0, 5, 10, 20, ρ = 000, 0.50, 0.75, 0.95, ϕ = 00, 0.5, 0.8, and m = 10, 40, 100. We use the block bootstrap, in which blocks have length l = 2, and results are based on B = 1000 resamples. The size of a synthetic sample is n = 250. This approximates sample sizes often available for model selection exercises in macroeconomics. We report two statistics from our simulation experiment based on α = 10%: ∗ contains M∗ ; the other is the average one is the frequency at which M 90% ∗ . The former shows the size properties of the MCS number of models in M 90% procedure; the latter is informative about the power of the procedure. Table II presents simulation results that show that the small sample properties of the MCS procedure closely match its theoretical predictions. The frequency that the best models are contained in the MCS is almost always greater than (1 − α), and the MCS becomes better at separating the inferior models from the superior model, as the μij s become more disperse (e.g., as λ increases). Note also that a larger correlation makes it easier to separate inferior models from superior model. This is not surprising because
478
P. R. HANSEN, A. LUNDE, AND J. M. NASON TABLE II SIMULATION DESIGN I.Aa m = 10
λ
ρ=
0
05
075
m = 40 095
0
05
075
m = 100 095
Panel A: ϕ = 0 ∗90% (size) Frequency at which M∗ ⊂ M 0 0.885 0.898 0.884 0.885 0882 0882 0877 0880 5 0.990 0.988 0.991 1.000 0980 0979 0976 0984 10 0.994 0.998 0.999 1.000 0978 0983 0985 0993 20 0.998 1.000 1.000 1.000 0988 0981 0991 1000 40 1.000 1.000 1.000 1.000 0992 0996 0998 1000 ∗90% (power) Average number of elements in M 0 9.614 9.658 9.646 9.632 3868 3878 3891 3882 5 6.498 4.693 3.239 1.544 2530 1879 1335 6382 10 3.346 2.390 1.732 1.027 1359 9829 7142 3266 20 1.702 1.307 1.062 1.000 7060 5010 3617 1674 40 1.072 1.005 1.000 1.000 3572 2597 1840 1052 Panel B: ϕ = 05 ∗90% (size) Frequency at which M∗ ⊂ M 0 0.908 0.897 0.905 0.894 0911 0907 0910 0916 5 0.985 0.990 0.995 1.000 0971 0976 0977 0987 10 0.992 0.999 1.000 1.000 0978 0985 0982 0995 20 0.999 1.000 1.000 1.000 0988 0989 0988 1000 40 1.000 1.000 1.000 1.000 0996 0996 1000 1000 ∗90% (power) Average number of elements in M 0 9.660 9.664 9.664 9.649 3897 3893 3903 3905 5 6.076 4.497 3.213 1.564 2433 1772 1313 6112 10 3.188 2.278 1.680 1.035 1295 9268 6791 3136 20 1.700 1.274 1.069 1.000 6819 4883 3563 1659 40 1.085 1.008 1.000 1.000 3506 2517 1811 1061 Panel C: ϕ = 08 ∗90% (size) Frequency at which M∗ ⊂ M 0 0.931 0.940 0.939 0.947 0963 0968 0958 0962 5 0.990 0.997 0.998 1.000 0977 0980 0989 0993 10 0.998 1.000 1.000 1.000 0984 0987 0992 0998 20 1.000 1.000 1.000 1.000 0990 0993 0996 1000 40 1.000 1.000 1.000 1.000 0999 1000 1000 1000 ∗90% (power) Average number of elements in M 0 9.739 9.814 9.794 9.799 3961 3961 3953 3955 5 4.301 3.318 2.386 1.322 1626 1231 9118 4401 10 2.424 1.864 1.419 1.062 9133 6643 4727 2349 20 1.455 1.220 1.092 1.010 4770 3520 2535 1454 40 1.098 1.037 1.011 1.003 2645 1967 1490 1081
0
0880 0975 0973 0975 0981 9702 5987 3232 1703 8778
0925 0974 0975 0979 0980 9835 5784 3054 1604 8339
0970 0970 0982 0982 0988
05
0870 0976 0975 0978 0984
075
095
0877 0975 0974 0986 0990
0875 0976 0980 0992 0998
9684 9711 9720 4392 3251 1504 2304 1697 7902 1240 8785 4049 6375 4521 2083
0918 0974 0969 0976 0982
0909 0973 0983 0981 0991
0913 0973 0984 0992 0999
9805 9794 9773 4160 3035 1454 2230 1656 7510 1156 8430 3894 6166 4360 2034
0975 0975 0976 0982 0994
0969 0976 0974 0992 0996
0972 0981 0991 0998 1000
9900 9944 9915 9943 3969 2813 2056 1012 2072 1477 1126 5470 1115 8014 5948 2840 5932 4356 3248 1645
a The two statistics are the frequency at which M ∗ contains M∗ and the other is the average number of models 90% ∗ . The former shows the ‘size’ properties of the MCS procedure and the latter is informative about the ‘power’ in M 90% of the procedure.
THE MODEL CONFIDENCE SET
479
var(dijt ) = var(Lit ) + var(Ljt ) − 2 cov(Lit Ljt ) = 2(1 − ρ) which is decreasing in ρ. Thus, a larger correlation (holding the individual variances fixed) is associated with more information that allows the MCS to separate good from bad models. Finally, the effects of heteroskedasticity are relatively small, but heteroskedasticity does appear to add power to the MCS procedure. The av∗ tends to fall as ϕ increases. erage number of models in M 90% Corollary 1 has a consistency result that applies when λ > 0. The implication is that only one model enters M∗ under this restriction. Table II shows that M∗ often contains only one model given λ > 0. The MCS matches this ∗ = M∗ in a large number of theoretical prediction in Table II because M 90% simulations. This equality holds especially when λ and ρ are large. These are also the simulation experiments that yield size and power statistics equal (or ∗ nearly equal) to 1. With size close to 1 or equal to 1, observe that M∗ ⊂ M 90% ∗ (in all the synthetic samples). On the other hand, M90% is reduced to a single model (in all the synthetic samples) when power is close to 1 or equal to 1.
DESIGN I.B—Dependent Loss: This design sets Lt ∼ iid N10 (θ Σ), where the covariance matrix has the structure Σij = ρ|i−j| for ρ = 0 05, and 075. The mean vector takes the form θ = (0 0 15 15 ) so that the number of zero elements in θ defines the number of elements in M∗ We report simulation results for the case where m0 = 10 and M∗ consists of either one, two, or five models. The simulation results are presented in Figure 1. The left panels display the ∗ contains M∗ (size) at various sample sizes. The right frequency at which M 90% ∗ (power). The two upper panels present the average number of models in M 90% ∗ panels contain the results for the case where M is a single model. The upperleft panel indicates that the best model is almost always contained in the MCS. p ∗ → This agrees with Corollary 1, which states that M M∗ as n → ∞ when1−α ∗ ever M consists of a single model. The upper-right panel illustrates the power of the procedure based on TmaxM = maxi∈M ti· . We note that it takes about 800 observations to weed out the 9 inferior models in this design. The MCS procedure is barely affected by the correlation parameter ρ but we note that a larger ρ results in a small loss in power. In the lower-left panel, we see that the ∗ is reasonably close to 90% except frequency at which M∗ is contained in M 90% for the very short sample sizes. From the middle-right and lower-right panels, we see that it takes about 500 observations to remove all the poor models. The middle-right and lower-right panels illustrate another aspect of the MCS procedure. For large sample sizes, we note that the average number of models ∗ falls below the number of models in M∗ The explanation is simin M 90% ple. After all poor models have been eliminated, as occurs with probability approaching 1 as n → ∞ there is a positive probability that H0M∗ is rejected,
480
P. R. HANSEN, A. LUNDE, AND J. M. NASON
FIGURE 1.—Simulation Design I.B with 10 alternatives and 1, 2, or 5 elements in M∗ . The left ∗90% (size properties) and the right panels report the frequency at which M∗ is contained in M ∗90% (power properties). panels report the average number of models in M
which causes the MCS procedure to eliminate a good model. Thus, the inferences we draw from the simulation results are quite encouraging for the TmaxM test.
THE MODEL CONFIDENCE SET
481
5.2. Simulation Experiment II: Regression Models Next we study the properties of the MCS procedure in the context of insample evaluation of regression models as we laid out in Section 3.2. We consider a setup with six potential regressors, Xt = (X1t X6t ) that are distributed as Xt ∼ iid N6 (0 Σ) where 1 for i = j Σij = ρ for i = j for some 0 ≤ ρ < 1, where ρ measures the degree of dependence between the regressors. We define the dependent variable by Yt = μ + βX1t + 1 − β2 εt , where εt ∼ iid N(0 1). In addition to the six variables in Xt we include a constant, X0t = 1 in all regression models. The set of regressions being estimated is given by the 12 regression models that are listed in each of the panels in Table III. We report simulation results based on 10,000 repetitions, using a design with an R2 = 50% (i.e., β2 = 05) and either ρ = 03 or ρ = 09.8 For the number of bootstrap resamples, we use B = 1,000. Since X0t = 1 is included in all regression models, the relevant MCS statistics are invariant to the actual value for μ, so we set μ = 0 in our simulations. The definition of M∗ will depend on the criterion. With KLIC, the set of best models is given by the set of regression models that includes X1 The reason is that KLIC does not favor parsimonious models, unlike the AIC and BIC . With these two criteria, M∗ is defined to be the most parsimonious regression model that includes X1 . The models in M∗ are identified by the shaded regions in Table III. Our simulation results are reported in Table III. The average value of Q(Zj θˆ j ) is given in the first pair of data columns, followed by the average estimate of the effective degrees of freedom, kˆ The Gaussian setup is such that all models are correctly specified, so the effective degrees of freedom is simply the number of free parameters, which is the number of regressors plus 1 for σj2 Table III shows that the average value of kˆ j is very close to the number of free parameters in the jth regression model. The last three pairs of columns ∗ We want large numreport the frequency that each of the models are in M 90% bers inside the shaded region and small numbers outside the shaded region. The results are intuitive. As the sample size increases from 50 to 100 and then to 500, the MCS procedure becomes better at eliminating the models that do not reside in M∗ With a sample size of n = 500 the consistent criterion, BIC , 8 Simulation results for β2 = 01 and 09 are available in a separate appendix; see Hansen, Lunde, and Nason (2011).
482
P. R. HANSEN, A. LUNDE, AND J. M. NASON
TABLE III SIMULATION EXPERIMENT IIa kˆ
Q(Zj θˆ j ) ρ=
03
09
AIC (TIC)
KLIC
BIC
03
09
03
09
03
09
03
09
Panel A: n = 50 X0 X0 X1 X0 X2 X0 X3 X0 X4 X0 X5 X0 X6 X0 X2 X0 X2 X3 X0 X2 X4 X0 X2 X5 X0 X2 X6
481 124 113 102 909 795 677 447 423 404 388 372
481 124 113 102 904 788 669 210 181 163 148 134
1.99 3.02 4.08 5.18 6.32 7.50 8.73 3.02 4.08 5.18 6.32 7.50
2.00 3.02 4.08 5.18 6.32 7.50 8.74 3.02 4.08 5.18 6.32 7.51
0.058 0.998 0.998 0.999 1.000 1.000 1.000 0.086 0.106 0.120 0.132 0.145
0.038 0.999 0.999 0.999 1.000 1.000 1.000 0.905 0.948 0.958 0.962 0.964
0.085 1.000 0.962 0.940 0.905 0.867 0.806 0.100 0.107 0.105 0.100 0.094
0.070 1.000 0.999 0.998 0.997 0.994 0.990 0.935 0.949 0.938 0.913 0.869
0.118 1.000 0.566 0.469 0.367 0.279 0.203 0.099 0.077 0.054 0.036 0.022
0.124 1.000 0.940 0.912 0.803 0.598 0.400 0.877 0.806 0.665 0.501 0.348
Panel B: n = 100 X0 X0 X1 X0 X2 X0 X3 X0 X4 X0 X5 X0 X6 X0 X2 X0 X2 X3 X0 X2 X4 X0 X2 X5 X0 X2 X6
980 276 266 255 244 234 223 924 888 861 839 820
981 278 267 257 246 236 225 451 404 381 363 348
1.99 3.00 4.03 5.07 6.12 7.19 8.28 3.00 4.03 5.07 6.12 7.19
1.99 3.00 4.03 5.06 6.12 7.18 8.27 3.01 4.03 5.07 6.12 7.19
0.000 0.998 0.999 0.999 1.000 1.000 1.000 0.000 0.000 0.000 0.000 0.001
0.000 1.000 1.000 1.000 1.000 1.000 1.000 0.548 0.691 0.736 0.759 0.772
0.000 1.000 0.959 0.939 0.908 0.864 0.800 0.000 0.000 0.000 0.000 0.000
0.000 1.000 0.982 0.975 0.960 0.942 0.920 0.585 0.666 0.675 0.655 0.631
0.000 1.000 0.402 0.276 0.174 0.101 0.059 0.000 0.000 0.000 0.000 0.000
0.000 1.000 0.675 0.619 0.545 0.390 0.238 0.490 0.443 0.338 0.236 0.143
2.00 3.00 4.00 5.01 6.02 7.03 8.04 3.00 4.00 5.01 6.02 7.03
2.00 3.00 4.00 5.01 6.01 7.02 8.03 3.00 4.00 5.01 6.01 7.02
0.000 0.999 0.999 0.999 1.000 1.000 1.000 0.000 0.000 0.000 0.000 0.000
0.000 0.999 0.999 1.000 1.000 1.000 1.000 0.000 0.002 0.004 0.006 0.008
0.000 1.000 0.958 0.938 0.907 0.858 0.790 0.000 0.000 0.000 0.000 0.000
0.000 1.000 0.960 0.938 0.901 0.852 0.792 0.000 0.002 0.004 0.006 0.007
0.000 1.000 0.207 0.100 0.044 0.020 0.006 0.000 0.000 0.000 0.000 0.000
0.000 1.000 0.206 0.099 0.042 0.017 0.008 0.000 0.002 0.001 0.001 0.000
Panel C: n = 500 X0 X0 X1 X0 X2 X0 X3 X0 X4 X0 X5 X0 X6 X0 X2 X0 X2 X3 X0 X2 X4 X0 X2 X5 X0 X2 X6
498 151 150 149 148 147 145 474 460 451 444 439
498 151 150 149 148 147 146 238 219 211 206 203
a The average value of the maximized log-likelihood function multiplied by −2 is reported in the first two data columns. The next pair of columns has the average of the effective degrees of freedom. The last three pairs of columns ∗ for each of the three criteria: KLIC, AIC , report the frequency that a particular regression model is in the M 90% and BIC .
THE MODEL CONFIDENCE SET
483
has reduced the MCS to the single best model in the majority of simulations This is not true for the AIC criterion. Although it tends to settle on more parsimonious models than the KLIC, the AIC has a penalty that makes it possible for an overparameterized model to have the best AIC The bootstrap testing procedure is conservative when the comparisons involve nested models under KLIC; see our discussion in the last paragraph of Section 3.2. This explains that both Type I and Type II errors are close to zero when n = 500 an ideal outcome that is not guaranteed when M∗KLIC includes nonnested models.9 6. EMPIRICAL APPLICATIONS 6.1. U.S. Inflation Forecasts: Stock and Watson (1999) Revisited This section revisits the Stock and Watson (1999) study of the best out-ofsample predictors of inflation. Their empirical application consists of pairwise comparisons of a large number of inflation forecasting models. The set of inflation forecasting models includes several that have a Phillips curve interpretation, along with autoregressive and a no-change (month-over-month) forecast. We extend their set of forecasts by adding a second no-change (12-monthsover-12-months) forecast that was used in Atkeson and Ohanian (2001). Stock and Watson (1999) measured inflation, πt , as either the CPI-U, all items (PUNEW), or the headline personal consumption expenditure implicit price deflator (GMDC).10 The relevant Phillips curve is (4)
πt+h − πt = φ + β(L)ut + γ(L)(1 − L)πt + et+h
where ut is the unemployment rate, L is the lag polynomial operator, and et+h is the long-horizon inflation forecast innovation. Note that the natural rate hypothesis is not imposed on the Phillips curve (4) and that inflation as a regressor is in its first difference. Stock and Watson also forecasted inflation with (4) where the unemployment rate ut is replaced with different macrovariables. The entire sample runs from 1959:M1 to 1997:M9. Following Stock and Watson, we study the properties of their forecasting models on the pre- and post1984 subsamples of 1970:M1–1983:M12 and 1984:M1–1996:M9.11 The former subsample contains the great inflation of the 1970s and the rapid disinflation of the early 1980s. Inflation does not exhibit this volatile behavior in the post1984 subsample. We follow Stock and Watson so as to replicate their inflation 9 In an unreported simulation study where M∗KLIC was designed to include nonnested models, ∗90% converges to 90% we found the frequency by which M∗KLIC ⊂ M 10 The data for this applications was downloaded from Mark Watson’s web page. We refer the interested reader to Stock and Watson (1999) for details about the data and model specifications. 11 Stock and Watson split their sample at the end of 1983 to account for structural change in inflation dynamics. This structural break is ignored when estimating the Phillips curve model (4) and the alternative inflation forecasting equations. This is justified by Stock and Watson because the impact of the 1984 structural break on their estimated Phillips curve coefficients is small.
484
P. R. HANSEN, A. LUNDE, AND J. M. NASON
forecasts. However, our MCS bootstrap implementation, which is described in Section 3, relies on an assumption that dijt is stationary. This is not plausible when the parameters are estimated with a recursive estimation scheme, as was used by Stock and Watson (1999). We avoid this problem by following Giacomini and White (2006) and present empirical results that are based on parameters estimated over a rolling window with a fixed number of observations.12 Regressions are estimated on data that begin no earlier than 1960:M2, although lagged regressors impinge on observations back to 1959:M1. We compute the MCS across all of the Stock and Watson inflation forecasting models. This includes the Phillips curve model (4), the inflation forecasting equation that runs through all of the macrovariables considered by Stock and Watson, a univariate autoregressive model, and two no-change forecasts. The first no-change forecast is the past month’s inflation rate; the second no-change forecast uses the past year’s inflation rate as its forecast. The former matches the no-change forecast in Stock and Watson (1999) and the latter matches the no-change forecast in Atkeson and Ohanian (2001). Stock and Watson also presented results for forecast combinations and forecasts based on principal component indicator variables.13 Tables IV and V report (the level of) the root mean square error (RMSE) and MCS p-values for each of the inflation forecasting models. The second column of Table IV also lists the transformation of the macrovariable employed by the forecasting equation. Our Table IV matches the results reported in Stock and Watson (1999, Table 2). The initial model space M0 is filled with a total of 19 models. The results for the two no-change forecasts and the AR(p) are the first three rows of Table IV. The RMSEs and the p-values for the Phillips curve forecasting model (4) appear in the bottom row of our Table IV. The rest of the rows of Table IV are the “gap” and “first difference” specifications of Stock and Watson’s aggregate activity variables that appear in place of ut in inflation forecasting equation (4). The gap variables are computed with a one-sided Hodrick and Prescott (1997) filter; see Stock and Watson (1999, p. 301) for details.14 A glance at Table IV reveals that the MCS of subsamples 1970:M1– 1983:M12 and 1984:M1–1996:M9 are strikingly different for both inflation series, PUNEW and GMDC. The MCS of the pre-1984 subsample places seven 12 The corresponding empirical results that are based on parameters that are estimated with the recursive scheme, as was used in Stock and Watson (1999), are available in a separate appendix; see Hansen, Lunde, and Nason (2011). Although our assumption does not justify the recursive estimation scheme, it produces pseudo-MCS results that are very similar to those obtained under the rolling window estimation scheme. 13 See Stock and Watson (1999) for details about their modelling strategy, forecasting procedures, and data set. 14 The MCS p-values are computed using a block size of l = 12 in the bootstrap implementation. The MCS p-values are qualitatively similar when computed with l = 6 and l = 9 These are reported in a separate appendix; see Hansen, Lunde, and Nason (2011).
485
THE MODEL CONFIDENCE SET TABLE IV MCS FOR SIMPLE REGRESSION-BASED INFLATION FORECASTSa PUNEW 1970–1983 Variable
Trans
RMSE
No change (month) No change (year) uniar
– –
3.290 2.798 2.802
Gap specifications dtip dtgmpyq dtmsmtq dtlpnag ipxmca hsbp lhmu25
DT DT DT DT LV LN LV
GMDC 1984–1996
1970–1983
1984–1996
RMSE
pMCS
RMSE
pMCS
RMSE
pMCS
0.001 0.006 0.004
2.140 1.207 1.330
0.122 ∗ 1.00∗∗ 0.736∗∗
2.208 2.100 2.026
0.042 0.109∗ 0.145∗
1.751 0.888 1.070
0.113∗ 1.00∗∗ 0.411∗∗
2.597 2.751 2.202 2.591 2.609 2.114 2.968
0.059 0.020 0.872∗∗ 0.068 0.034 1.00∗∗ 0.006
1.475 1.691 1.704 1.433 1.318 1.582 1.439
0.651∗∗ 0.299∗∗ 0.477∗∗ 0.694∗∗ 0.736∗∗ 0.579∗∗ 0.651∗∗
2.103 2.090 1.806 2.132 2.040 1.967 2.231
0.095 0.157∗ 0.464∗∗ 0.075 0.261∗∗ 0.364∗∗ 0.061
1.050 1.125 1.046 1.026 1.034 1.034 1.040
0.411∗∗ 0.317∗∗ 0.411∗∗ 0.411∗∗ 0.411∗∗ 0.411∗∗ 0.411∗∗
First difference specifications ip DLN 2.344 gmpyq DLN 2.306 msmtq DLN 2.158 lpnag DLN 2.408 dipxmca DLV 2.379 dhsbp DLN 2.850 dlhmu25 DLV 2.383 dlhur DLV 2.296
0.306∗∗ 0.842∗∗ 0.872∗∗ 0.430∗∗ 0.139∗ 0.003 0.169∗ 0.631∗∗
1.393 1.524 1.391 1.341 1.353 1.456 1.440 1.429
0.736∗∗ 0.421∗∗ 0.736∗∗ 0.736∗∗ 0.736∗∗ 0.665∗∗ 0.579∗∗ 0.691∗∗
1.946 1.709 1.857 1.940 1.903 2.076 2.035 1.904
0.298∗∗ 1.00∗∗ 0.464∗∗ 0.298∗∗ 0.446∗∗ 0.075 0.102∗ 0.330∗∗
1.058 1.158 1.066 1.027 1.041 1.070 1.065 1.067
0.411∗∗ 0.317∗∗ 0.411∗∗ 0.411∗∗ 0.411∗∗ 0.411∗∗ 0.411∗∗ 0.411∗∗
Phillips curve lhur
0.034
1.388
0.736∗∗
2.076
0.098
1.162
0.325∗∗
2.637
pMCS
a RMSEs and MCS p-values for the different forecasts. The forecasts in M ∗ ∗ and M are identified by one 90% 75% and two asterisks, respectively.
∗ and nine models in GMDC-M ∗ . For forecasting models in PUNEW-M 75% 75% ∗ for both PUNEW the post-1984 subsample, all but one model ends up in M 75% and GMDC. The only model that is consistently kicked out of these MCSs is the monthly no-change forecast, which uses last month’s inflation rate as its forecast. Another intriguing feature of Table IV is the inflation forecasting models that reside in the MCS when faced with the 1970:M1–1983:M12 subsample. ∗ are driven by macrovariables The seven models that are in PUNEW-M 75% related either to real economic activity (e.g., manufacturing and trade, and building permits) or to the labor market. The labor market variables are lpnag (employees on nonagricultural payrolls) and dlhur (first difference of the unemployment rate, all workers 16 years and older). Thus, there is labor mar-
486
P. R. HANSEN, A. LUNDE, AND J. M. NASON TABLE V MCS RESULTS FOR SHRINKAGE-TYPE INFLATION FORECASTSa PUNEW 1970–1983
Variable
RMSE
No change (month) No change (year) Univariate
3.290 2.798 2.802
GMDC 1984–1996
1970–1983
1984–1996
RMSE
pMCS
RMSE
pMCS
RMSE
pMCS
0.006 0.020 0.012
2.140 1.207 1.330
0.000 1.00∗∗ 0.718∗∗
2.208 2.100 2.026
0.006 0.120∗ 0.046
1.751 0.888 1.070
0.000 1.00∗∗ 0.378∗∗
0.266∗∗ 1.00∗∗ 0.093 0.030 0.975∗∗
1.407 1.351 1.269 1.294 1.318
0.069 0.186∗ 0.869∗∗ 0.869∗∗ 0.869∗∗
2.105 1.746 1.880 1.939 1.918
0.088 1.00∗∗ 0.585∗∗ 0.323∗∗ 0.518∗∗
1.013 1.038 1.030 1.055 1.013
0.570∗∗ 0.570∗∗ 0.570∗∗ 0.530∗∗ 0.570∗∗
Panel B. Real activity indicators Mul. factors 2.245 0.768∗∗ 1 factor 2.115 0.975∗∗ Comb. mean 2.284 0.615∗∗ Comb. median 2.329 0.495∗∗ Comb. ridge reg. 2.160 0.953∗∗
1.416 1.347 1.263 1.284 1.326
0.022 0.358∗∗ 0.869∗∗ 0.869∗∗ 0.855∗∗
1.959 1.774 1.827 1.854 1.888
0.323∗∗ 0.720∗∗ 0.698∗∗ 0.647∗∗ 0.518∗∗
0.990 1.041 1.012 1.038 1.013
0.570∗∗ 0.570∗∗ 0.570∗∗ 0.553∗∗ 0.570∗∗
Panel C. Interest rates Mul. factors 1 factor Comb. mean Comb. median Comb. ridge reg.
2.828 2.776 2.474 2.567 2.436
0.019 0.030 0.092 0.077 0.164∗
1.512 1.463 1.349 1.377 1.372
0.005 0.003 0.123∗ 0.034 0.069
2.215 2.111 1.935 1.974 1.962
0.008 0.007 0.323∗∗ 0.290∗∗ 0.216∗
1.294 1.102 1.060 1.066 1.052
0.008 0.161∗ 0.522∗∗ 0.418∗∗ 0.530∗∗
Panel D. Money Mul. factors 1 factor Comb. mean Comb. median Comb. ridge reg.
2.801 2.805 2.742 2.752 2.721
0.015 0.013 0.019 0.019 0.019
1.340 1.352 1.390 1.340 1.446
0.597∗∗ 0.186∗ 0.022 0.386∗∗ 0.007
2.028 2.027 2.033 2.032 2.013
0.020 0.031 0.012 0.008 0.088
1.075 1.104 1.088 1.077 1.088
0.057 0.026 0.015 0.095 0.010
Phillips curve LHUR
2.637
0.030
1.388
0.022
2.076
0.031
1.162
0.423∗∗
Panel A. All indicators Mul. factors 2.367 1 factor 2.106 Comb. mean 2.423 Comb. median 2.585 Comb. ridge reg. 2.121
pMCS
a RMSEs and MCS p-values for the different forecasts. The forecasts in M ∗ ∗ and M are identified by one 90% 75% and two asterisks, respectively.
ket information that is important for predicting inflation during the pre-1984 subsample. This result is consistent with traditional Keynesian measures of aggregate demand. Table IV also shows that there are two levels and five first difference specifi∗ using the cations of the forecasting equation that consistently appear in M 75% 1970:M1–1983:M12 subsample. On this subsample, only msmtq (total real man∗ ufacturing and trade) is consistently embraced by PUNEW- and GMDC-M 75%
whether in levels or first differences. In summary, we interpret these variables as signals about the anticipated path of either real aggregate demand or real aggregate supply that help to predict inflation out of sample in the pre-1984 subsample.

There are several more inferences to draw from Table IV. These concern the two types of no-change forecasts, whose predictive accuracy is strikingly different. The no-change (month) forecast fails to appear in M̂∗75% on either the pre-1984 or the post-1984 subsample, whereas the no-change (year) forecast finds its way into M̂∗75% for the post-1984 subsample, but not the 1970:M1–1983:M12 subsample. These results are of particular interest because the no-change (year) forecast yields the best inflation forecasts on the 1984:M1–1996:M9 subsample for both PUNEW and GMDC.

These empirical results for the no-change inflation forecasts are interesting because they reconcile the results of Stock and Watson (1999) with those of Atkeson and Ohanian (2001). Stock and Watson (1999, p. 327) found that "[T]he conventionally specified Phillips curve, based on the unemployment rate, was found to perform reasonably well. Its forecasts are better than univariate forecasting models (both autoregressions and random walk models)." In contrast, Atkeson and Ohanian (2001, p. 10) concluded that "economists have not produced a version of the Phillips curve that makes more accurate inflation forecasts than those from a naive model that presumes inflation over the next four quarters will be equal to inflation over the last four quarters." The source of the disagreement is that Stock and Watson and Atkeson and Ohanian studied different no-change inflation forecasts. The no-change forecast Stock and Watson (1999) deployed is last month's inflation rate, whereas the no-change forecast in Atkeson and Ohanian (2001) is the past year's inflation rate.

We agree with Stock and Watson that the Phillips curve is a device that yields better forecasts of inflation in the pre-1984 period. The relevant M̂∗75% do not include either of the no-change forecasts for PUNEW and GMDC. However, for the post-1984 sample, we observe that the no-change (year) forecast has the smallest sample loss of all forecasts, which supports the conclusion of Atkeson and Ohanian (2001).

Table V generates MCSs using factor models and forecast combination methods that replicate the set of forecasts in Stock and Watson (1999, Table 4). They combined a large set of inflation forecasts from an array of 168 models using sample means, sample medians, and ridge estimation to produce forecast weighting schemes. The other forecasting approach depends on principal components of the 168 macropredictors. The idea is that there exists an underlying factor or factors (e.g., real aggregate demand, financial conditions) that summarize the information in a large set of predictors. For example, Solow (1976) argued that a motivation for the Phillips curves of the 1960s and 1970s was that unemployment captured, albeit imperfectly, the true unobserved state of real aggregate demand.
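The two no-change benchmarks at the center of the reconciliation above differ only in the window used to measure "current" inflation. A minimal sketch of the distinction, with a hypothetical input series (the exact target transformation in Stock and Watson (1999) is not reproduced here):

```python
import numpy as np

def no_change_forecasts(log_p):
    """Two naive inflation benchmarks from a monthly log price index.

    The no-change (month) forecast is last month's inflation rate; the
    no-change (year) forecast is inflation over the past twelve months.
    Both are expressed as annualized percentage rates.  Illustrative only.
    """
    month = 1200.0 * (log_p[-1] - log_p[-2])   # last month's rate, annualized
    year = 100.0 * (log_p[-1] - log_p[-13])    # past year's rate
    return month, year

# Synthetic index drifting about 3% per year with monthly noise
rng = np.random.default_rng(0)
log_p = np.cumsum(0.0025 + 0.002 * rng.standard_normal(60))
print(no_change_forecasts(log_p))
```

The month-based forecast inherits all of the month-to-month noise in the index, which is consistent with the paper's explanation for why it never survives in the MCS.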
The factor models and forecast combination methods produce inflation forecasts that are, in general, better than those in Table IV. The forecasts constructed from "All indicators" and "Real activity indicators" in Panels A and B do particularly well across the board. Interestingly, the best forecast during the 1970:M1–1983:M12 subsample is the one-factor "All indicators" model, while the second best is the one-factor "Real activity indicators" model. Most of the forecasts constructed from the "Money" variables do not find their way into the MCSs. Despite the better predictive accuracy produced by factor models and forecast combinations, during the post-1984 period the best forecast is the no-change (year) forecast.

6.2. Likelihood-Based Comparison of Taylor-Rule Models

Monetary policy is often evaluated with the Taylor (1993) rule. A Taylor rule summarizes the objectives and constraints that define monetary policy by mapping (implicitly) from this decision problem to the path of the short-term nominal interest rate. A canonical monetary policy loss function penalizes the decision maker for inflation volatility around its target and output volatility around its trend. The mapping generates a Taylor rule in which the interest rate responds to deviations of inflation from target and of output from trend. Thus, Taylor rules measure ex post the success monetary policy has had at meeting the goals of keeping inflation close to target and output at trend. Articles by Taylor (1999), Clarida, Galí, and Gertler (2000), and Orphanides (2003) are leading examples of using Taylor rules to evaluate actual monetary policy, while McCallum (1999) provided an introduction for consumers of monetary policy rules.

This section shows how the MCS can be used to evaluate which Taylor rule regression best approximates the underlying data generating process. We posit the general Taylor rule regression

(5)    Rt = (1 − ρ)[γ0 + Σ_{j=1}^{pπ} γπj πt−j + Σ_{j=1}^{py} γyj yt−j] + ρRt−1 + vt,

where Rt denotes the short-term nominal interest rate, πt is inflation, yt equals deviations of output from trend (i.e., the output gap), and the error term vt is assumed to be a martingale difference process. The Taylor principle is satisfied if Σ_{j=1}^{pπ} γπj exceeds 1, because a 1% rise in the sum of the pπ lags of inflation indicates that Rt should rise by more than 100 basis points. The monetary policy response to real side fluctuations is given by Σ_{j=1}^{py} γyj on the py lags of the output gap. The intercept γ0 is the equilibrium steady state real rate plus the target inflation rate (weighted by 1 − Σ_{j=1}^{pπ} γπj). The Taylor rule regression (5) includes the lagged interest rate, Rt−1, which may be interpreted as interest rate smoothing by the central bank. Alternatively, the lagged interest rate could be interpreted as a proxy for other determinants of the interest rate that are not captured by the regression (5).
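Because equation (5) is linear in its reduced-form parameters, any one specification can be fit by ordinary least squares on Rt = c + Σj bπj πt−j + Σj byj yt−j + ρRt−1 + vt, recovering γ0 = c/(1 − ρ), γπj = bπj/(1 − ρ), and so on. A minimal sketch on synthetic data (function and variable names are hypothetical; the paper additionally reports Newey–West t-statistics, which are omitted here):

```python
import numpy as np

def fit_taylor_rule(R, pi, y, p_pi=1, p_y=1):
    """OLS on the reduced form of the Taylor rule regression (5);
    structural coefficients follow from gamma = b / (1 - rho)."""
    p = max(p_pi, p_y, 1)
    rows = [[1.0]
            + [pi[t - j] for j in range(1, p_pi + 1)]
            + [y[t - j] for j in range(1, p_y + 1)]
            + [R[t - 1]]
            for t in range(p, len(R))]
    b, *_ = np.linalg.lstsq(np.array(rows), R[p:], rcond=None)
    rho = b[-1]
    gamma0 = b[0] / (1.0 - rho)
    gamma_pi = b[1:1 + p_pi] / (1.0 - rho)
    gamma_y = b[1 + p_pi:1 + p_pi + p_y] / (1.0 - rho)
    return gamma0, gamma_pi, gamma_y, rho

# Synthetic rule with gamma0 = 1.0, gamma_pi = 1.5, gamma_y = 0.5, rho = 0.8
rng = np.random.default_rng(1)
T = 400
pi, y, R = rng.standard_normal(T), rng.standard_normal(T), np.zeros(T)
for t in range(1, T):
    R[t] = 0.2 * (1.0 + 1.5 * pi[t - 1] + 0.5 * y[t - 1]) \
           + 0.8 * R[t - 1] + 0.1 * rng.standard_normal()
print(fit_taylor_rule(R, pi, y))  # estimates should be near the true values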
TABLE VI
TAYLOR RULE REGRESSION DATA SETa

Observable — Definition — Construction

Dependent variable
Rt: Interest rate — Effective Fed Funds Rate (EFFR), Rfed fundst — Temporally aggregate the daily return (annual rate) to quarterly, Rt = 100 × ln[1 + Rfed fundst/100].

Independent variables
πt: Inflation — Implicit GDP deflator, Pt, seasonally adjusted (SA) — πt = 400 × ln[Pt/Pt−1].

yt: Output gap — ln Qt − trend Qt, i.e., the transitory component of output, where Qt is real GDP in billions of chained 2000 $, SA at annual rates — Apply the Hodrick–Prescott filter to ln Qt.

urt: Unemployment rate gap — URt − trend URt, i.e., the transitory component of URt, where URt is the civilian unemployment rate, SA — Temporally aggregate monthly to quarterly frequency to get URt; apply the Baxter–King filter to URt.

rulct: Real unit labor costs — The cointegrating residual of nominal ULCt (= LSt − LPt) and ln Pt, where LSt is labor share, i.e., the log of compensation per hour in the nonfarm business sector, and LPt is labor productivity, i.e., the log of output per hour of all persons in the nonfarm business sector — rulct = LSt − LPt − â0 − â1 t − â2 ln Pt.

a The effective federal funds rate is obtained from H.15 Selected Interest Rates in Federal Reserve Statistical Releases. The implicit price deflator, real GDP, the unemployment rate, compensation per hour, and output per hour of all persons are constructed by the Bureau of Economic Analysis and are available at the FRED Data Bank at the Federal Reserve Bank of St. Louis. The sample period is 1979:Q1–2006:Q4. The data are drawn from data available online from the Board of Governors and FRED at the Federal Reserve Bank of St. Louis.
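The first two constructions in Table VI are one-line transformations; the gap variables then require the Hodrick–Prescott and Baxter–King filters. A minimal sketch of the simple transformations (input arrays are hypothetical; the filtering steps are only indicated):

```python
import numpy as np

def quarterly_rate(daily_ff):
    """R_t = 100 * ln(1 + fedfunds_t / 100), per Table VI, applied here to
    a quarterly average of the daily effective federal funds rate."""
    return 100.0 * np.log(1.0 + np.mean(daily_ff) / 100.0)

def inflation(P):
    """pi_t = 400 * ln(P_t / P_{t-1}) for a quarterly deflator series P."""
    P = np.asarray(P, dtype=float)
    return 400.0 * np.log(P[1:] / P[:-1])

# The output gap would then be the cyclical (transitory) component of
# ln Q_t from the Hodrick-Prescott filter, and the unemployment gap the
# Baxter-King filtered UR_t; both filters are available in standard
# statistics packages (e.g., statsmodels).
print(inflation([100.0, 100.8, 101.5]))  # annualized quarterly inflation, %
```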
Note also that the Taylor rule regression (5) avoids issues that arise in the estimation of simultaneous equation systems because contemporaneous inflation, πt, and the output gap, yt, are not regressors; only lags of these variables are. Nonetheless, structural interpretations have to be applied to the Taylor rule regression (5) with care.

The Taylor rule regression (5) is estimated by ordinary least squares on a U.S. sample that runs from 1979:Q1 to 2006:Q4. Table VI provides details about the data used to estimate the Taylor rule regression.15 The (effective) federal funds rate defines the Taylor rule policy rate Rt. The growth rate of the implicit gross domestic product (GDP) deflator is our measure of inflation, πt.

15 We have generated results on a shorter post-1984 sample. Omitting the volatile 1979–1983 period from the analysis does not substantially change our results, beyond the loss of information that one would expect with a shorter sample. These results are available in a separate appendix (Hansen, Lunde, and Nason (2011)).
The cyclical component from applying the Hodrick and Prescott (1997) filter to real GDP provides our estimate of the output gap, yt. We also employ two real activity variables to fill out the model space and to act as alternatives to the output gap. These real activity variables are the Baxter and King (1999) filtered unemployment rate gap, urt, and the Nason and Smith (2008) measure of real unit labor costs, rulct. We compute the Baxter–King urt using the maximum likelihood–Kalman filter methods of Harvey and Trimbur (2003).

The model space consists of 25 specifications. It is built by setting ρ to zero or estimating it, by setting pπ = 1 or 2 and py = 1 or 2, and by equating yt with the output gap or replacing it with either the unemployment rate gap or real unit labor costs. We add to these 24 (= 2 × 2 × 3 × 2) regressions a pure AR(1) model of the effective federal funds rate.

TABLE VII
MCS FOR TAYLOR RULES: 1979:Q1–2006:Q4a

Model Specification                 Q(Zj, θ̂j)    k̂       KLIC             AIC              BIC

Rt−1                                  93.15    13.74   106.89 (0.30)∗∗  120.63 (0.47)∗∗  157.99 (0.63)∗∗
πt−1, yt−1                           284.82    11.44   296.25 (0.00)    307.69 (0.00)    338.79 (0.00)
πt−j, yt−j, j = 1, 2                 258.95    14.66   273.61 (0.00)    288.28 (0.01)    328.14 (0.01)
πt−1, urt−1                          289.65    10.20   299.84 (0.00)    310.04 (0.00)    337.75 (0.00)
πt−j, urt−j, j = 1, 2                268.90    12.82   281.72 (0.00)    294.53 (0.00)    329.37 (0.01)
πt−1, rulct−1                        289.99     9.89   299.88 (0.00)    309.77 (0.00)    336.67 (0.01)
πt−j, rulct−j, j = 1, 2              266.07    12.12   278.19 (0.00)    290.31 (0.01)    323.26 (0.01)
yt−1, urt−1                          387.45    17.04   404.49 (0.00)    421.54 (0.00)    467.86 (0.00)
yt−j, urt−j, j = 1, 2                385.86    23.42   409.28 (0.00)    432.69 (0.00)    496.35 (0.00)
yt−1, rulct−1                        386.47    14.92   401.39 (0.00)    416.32 (0.00)    456.89 (0.00)
yt−j, rulct−j, j = 1, 2              385.43    19.44   404.87 (0.00)    424.31 (0.00)    477.16 (0.00)
urt−1, rulct−1                       386.21    15.41   401.62 (0.00)    417.02 (0.00)    458.90 (0.00)
urt−j, rulct−j, j = 1, 2             384.82    19.86   404.68 (0.00)    424.54 (0.00)    478.52 (0.00)
Rt−1, πt−1, yt−1                      68.57    17.71    86.28 (0.86)∗∗  103.98 (1.00)∗∗  152.12 (0.64)∗∗
Rt−1, πt−j, yt−j, j = 1, 2            62.11    22.11    84.22 (1.00)∗∗  106.32 (0.93)∗∗  166.43 (0.41)∗∗
Rt−1, πt−1, urt−1                     77.57    16.32    93.89 (0.72)∗∗  110.22 (0.89)∗∗  154.60 (0.64)∗∗
Rt−1, πt−j, urt−j, j = 1, 2           73.27    18.79    92.07 (0.80)∗∗  110.86 (0.89)∗∗  161.95 (0.57)∗∗
Rt−1, πt−1, rulct−1                   72.80    16.06    88.86 (0.86)∗∗  104.92 (0.93)∗∗  148.58 (1.00)∗∗
Rt−1, πt−j, rulct−j, j = 1, 2         69.21    19.26    88.47 (0.86)∗∗  107.73 (0.92)∗∗  160.09 (0.58)∗∗
Rt−1, yt−1, urt−1                     86.16    19.16   105.33 (0.33)∗∗  124.49 (0.38)∗∗  176.59 (0.16)∗
Rt−1, yt−j, urt−j, j = 1, 2           85.51    24.32   109.83 (0.28)∗∗  134.16 (0.18)∗   200.28 (0.02)
Rt−1, yt−1, rulct−1                   89.42    18.92   108.35 (0.29)∗∗  127.27 (0.31)∗∗  178.72 (0.15)∗
Rt−1, yt−j, rulct−j, j = 1, 2         88.11    22.42   110.53 (0.28)∗∗  132.94 (0.20)∗   193.88 (0.03)
Rt−1, urt−1, rulct−1                  87.42    18.07   105.49 (0.33)∗∗  123.55 (0.38)∗∗  172.66 (0.21)∗
Rt−1, urt−j, rulct−j, j = 1, 2        85.93    21.32   107.25 (0.30)∗∗  128.56 (0.28)∗∗  186.51 (0.06)

a We report the maximized log-likelihood function (multiplied by −2), the effective degrees of freedom, and the three criteria KLIC, AIC, and BIC, along with the corresponding MCS p-values. The regression models in M̂∗90% and M̂∗75% are identified by one and two asterisks, respectively. See the text and Table VI for variable mnemonics and definitions.
TABLE VIII
REGRESSION MODELS IN M̂∗90%-KLICa

Rt−1:                            γ0 = 5.29 (2.50), ρ = 0.96 (30.1)
Rt−1, πt−1, yt−1:                γ0 = 0.12 (0.13), ρ = 0.84 (17.0), γπ1 = 1.87 (7.01), γy1 = 1.20 (2.17)
Rt−1, πt−j, yt−j, j = 1, 2:      γ0 = 0.00 (0.00), ρ = 0.80 (12.1), γπ1 = 0.77 (2.58), γπ2 = 1.14 (4.76), γy1 = 1.50 (1.25), γy2 = −0.39 (0.33)
Rt−1, πt−1, urt−1:               γ0 = 0.82 (0.67), ρ = 0.86 (16.8), γπ1 = 1.60 (4.85), γur1 = 1.58 (0.25)
Rt−1, πt−j, urt−j, j = 1, 2:     γ0 = 0.64 (0.56), ρ = 0.83 (12.9), γπ1 = 0.68 (1.77), γπ2 = 0.97 (2.85), γur1 = 5.90 (0.68), γur2 = −6.56 (1.16)
Rt−1, πt−1, rulct−1:             γ0 = 0.37 (0.30), ρ = 0.87 (17.0), γπ1 = 1.76 (5.38), γrulc1 = −0.81 (1.56)
Rt−1, πt−j, rulct−j, j = 1, 2:   γ0 = 0.39 (0.35), ρ = 0.84 (12.9), γπ1 = 0.76 (2.12), γπ2 = 0.99 (3.55), γrulc1 = −0.18 (0.23), γrulc2 = −0.55 (0.68)
Rt−1, yt−1, urt−1:               γ0 = 5.63 (2.20), ρ = 0.97 (37.3), γy1 = 4.89 (1.05), γur1 = 45.9 (0.79)
Rt−1, yt−j, urt−j, j = 1, 2:     γ0 = 5.56 (2.12), ρ = 0.97 (32.3), γy1 = 6.42 (0.58), γy2 = −1.71 (0.19), γur1 = 60.7 (0.66), γur2 = −22.9 (0.42)
Rt−1, yt−1, rulct−1:             γ0 = 5.33 (2.22), ρ = 0.97 (35.5), γy1 = 1.04 (0.32), γrulc1 = −2.47 (0.79)
Rt−1, yt−j, rulct−j, j = 1, 2:   γ0 = 5.42 (2.22), ρ = 0.97 (32.6), γy1 = 8.37 (0.64), γy2 = −8.05 (0.56), γrulc1 = 2.52 (0.75), γrulc2 = −5.43 (0.96)
Rt−1, urt−1, rulct−1:            γ0 = 5.35 (2.02), ρ = 0.97 (37.8), γur1 = 30.9 (0.63), γrulc1 = −3.62 (1.04)
Rt−1, urt−j, rulct−j, j = 1, 2:  γ0 = 5.43 (2.10), ρ = 0.97 (34.2), γur1 = 52.5 (0.64), γur2 = −25.6 (0.54), γrulc1 = −1.18 (0.30), γrulc2 = −2.74 (0.85)

a Parameter estimates with t-statistics (in absolute values) in parentheses. The shaded area identifies the models in M̂∗75%-BIC.
We present results of applying the MCS and likelihood-based criteria to the choice of the best Taylor rule regression (5) and AR(1) regressions in Tables VII and VIII. Table VII reports Q(Zj, θ̂j) (the log-likelihood function multiplied by −2), the bootstrap estimate of the effective degrees of freedom, k̂, and the realizations of the three empirical criteria, KLIC, AIC, and BIC. The numbers in parentheses in the columns headed KLIC, AIC, and BIC are the MCS p-values, and asterisks identify the specifications that enter M̂∗90%. Table VIII lists estimates of the regression models that are in M̂∗90%, along with their corresponding t-statistics in parentheses. The t-statistics are based on robust standard errors following Newey and West (1987).
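The criteria in Table VII are simple functions of Q(Zj, θ̂j) and k̂. Although the paper's formal definitions are not reproduced in this section, the table's entries are consistent with KLIC = Q + k̂, AIC = Q + 2k̂, and BIC = Q + k̂ ln n, with n = 112 quarterly observations over 1979:Q1–2006:Q4. A minimal sketch under that reading, checked against the AR(1) row:

```python
import math

def criteria(Q, k_hat, n=112):
    """Model-selection criteria as functions of -2 x log-likelihood (Q)
    and the bootstrap effective degrees of freedom (k_hat)."""
    return {"KLIC": Q + k_hat,
            "AIC": Q + 2.0 * k_hat,
            "BIC": Q + k_hat * math.log(n)}

# AR(1) row of Table VII: Q = 93.15, k_hat = 13.74
print(criteria(93.15, 13.74))
# KLIC ~ 106.89, AIC ~ 120.63, BIC ~ 157.98 (cf. 157.99 in the table)
```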
Table VII shows that the MCS procedure selects 10–13 of the 25 possible regressions, depending on the information criterion. The lagged nominal rate Rt−1 is the one regressor common to the regressions that enter M̂∗90% for the KLIC, AIC, and BIC. Besides the AR(1), M̂∗90% consists of the six Taylor rule specifications that nest the AR(1). Under the KLIC and AIC, the Taylor rule regressions include all one or two lag combinations of πt, yt, urt, and rulct. The BIC produces a smaller M̂∗90% because it ejects the two-lag Taylor rule specifications that exclude lagged πt. Thus, the Taylor rule regression–MCS example finds that the BIC tends to settle on more parsimonious models. This is to be expected, given its larger penalty on model complexity.

The AR(1) falls into M̂∗90% under the KLIC, AIC, and BIC. Although the first line of Table VII shows that the AR(1) has the largest Q(Zj, θ̂j) of the regressions covered by M̂∗90%, the MCS recruits the AR(1) because it has a relatively small estimate of the effective degrees of freedom, k̂. It is important to keep in mind that estimates of the effective degrees of freedom are larger than the number of free parameters in each of the models. This reflects the fact that the Gaussian model is misspecified. For example, the conventional AIC penalty (that doubles the number of free parameters) is misleading in the context of misspecified models; see Takeuchi (1976), Sin and White (1996), and Hong and Preston (2008).

It is somewhat disappointing that the MCS procedure yields as many as 13 models in M̂∗90%. The reason is that the data lack the information to resolve precisely which Taylor rule specification is best in terms of Kullback–Leibler discrepancy. The large set of models is also an outcome of the strict requirements that characterize the MCS. The MCS procedure is designed to control the familywise error rate (FWE), which is the probability of making one or more false rejections. We will be able to trim M̂∗ further if we relax the control of the FWE, but that will affect the interpretation of M̂∗1−α. For instance, if we control the probability of making k or more false rejections, the k-FWE (see, e.g., Romano, Shaikh, and Wolf (2008)), additional models can be eliminated. The drawback of the k-FWE and other alternative controls is that the MCS loses its key property, which is to contain the best models with probability 1 − α.

Table VIII provides information about the regressions in M̂∗90%-KLIC. The shaded area identifies the models in M̂∗75%-BIC. First, note that the estimated Taylor rules always satisfy the Taylor principle (i.e., γ̂π1 > 1 or γ̂π1 + γ̂π2 > 1). The coefficients associated with real activity variables have insignificant t-statistics in most cases. Only the first lag of the output gap produces a positive coefficient with a t-ratio above 2 in the first Taylor rule regression listed in Table VIII. Moreover, the statistically insignificant coefficients for the unemployment rate gap and real unit labor costs variables often have counterintuitive
signs. Finally, the estimates of ρ are between 0.83 and 0.87 in the Taylor rule regressions that include a lag of πt, which suggests interest rate smoothing.16

The fact that the MCS cannot settle on a single specification is not a surprising result. Monetary policymakers almost surely rely on a more complex information set than can be summarized by a simple model. Furthermore, any real activity variable is an imperfect measure of the underlying state of the economy, and there are important and unresolved issues regarding the measurement of gap and marginal cost variables that translate into uncertainty about the proper definitions of the real activity variables.

7. SUMMARY AND CONCLUDING REMARKS

This paper introduces the model confidence set (MCS) procedure, relates it to other approaches to model selection and multiple comparisons, and establishes the asymptotic theory of the MCS. The MCS is constructed from a hypothesis test, δM, and an elimination rule, eM. We defined coherency between the test and the elimination rule, and stressed the importance of this concept for the finite sample properties of the MCS. We also outlined simple and convenient bootstrap methods for the implementation of the MCS procedure. Monte Carlo experiments reveal that the MCS procedure has good small sample properties.

It is important to understand the principle of the MCS procedure in applications. The MCS is constructed such that inference about the "best" follows the conventional meaning of the word "significance." Although the MCS will contain only the best model(s) asymptotically, it may contain several poor models in finite samples. A key feature of the MCS procedure is that a model is discarded only if it is found to be significantly inferior to another model. Models remain in the MCS until proven inferior, which has the implication that not all models in the MCS need be judged good models.17 An important advantage of the MCS, compared to other selection procedures, is that the MCS acknowledges the limits to the informational content of the data. Rather than selecting a single model without regard to the degree of information, the MCS procedure yields a set of models that summarizes key sample information.

16 We have also estimated Taylor rule regressions with moving average (MA) errors, as an alternative to using Rt−1 as a regressor. The empirical fit of models with MA errors is, in all cases, inferior to the Taylor rule regressions that include Rt−1.

17 The proportion of models in M̂∗1−α that are members of M∗ can be related to the false discovery rate and the q-value theory of Storey (2002). See McCracken and Sapp (2005) for an application that compares forecasting models. See also Romano, Shaikh, and Wolf (2008).
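Schematically, the procedure summarized above iterates an equivalence test and an elimination rule. A stylized sketch, in which the two callables stand in for the paper's bootstrap test δM and elimination rule eM and are hypothetical:

```python
def mcs_p_values(models, test_equal_ability, worst_model):
    """Stylized MCS computation: eliminate models one at a time, recording
    at each stage the p-value of the test of equal predictive ability.
    A model's MCS p-value is the largest stage p-value up to and including
    its elimination; the last surviving model gets 1.0 by convention."""
    surviving, p_mcs, p_max = list(models), {}, 0.0
    while len(surviving) > 1:
        p_max = max(p_max, test_equal_ability(surviving))  # test delta_M
        worst = worst_model(surviving)                     # rule e_M
        p_mcs[worst] = p_max
        surviving.remove(worst)
    p_mcs[surviving[0]] = 1.0
    return p_mcs

def mcs(p_mcs, alpha):
    """The level (1 - alpha) MCS: models whose MCS p-value is >= alpha."""
    return {m for m, p in p_mcs.items() if p >= alpha}
```

Under this convention, M̂∗90% collects the models with MCS p-values of at least 0.10 and M̂∗75% those with at least 0.25, matching the one- and two-asterisk markings in Tables IV–VII.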
We applied the MCS procedure to the inflation forecasting problem of Stock and Watson (1999). The results show that the MCS procedure provides a powerful tool for evaluating competing inflation forecasts. We emphasize that the information content of the data matters for the inferences that can be drawn. The great inflation–disinflation subsample of 1970:M1–1983:M12 has movements in inflation and macrovariables that allow the MCS procedure to make relatively sharp choices across the relevant models. The information content of the less persistent, less volatile 1984:M1–1996:M9 subsample is limited in comparison, because the MCS procedure lets in almost any model that Stock and Watson considered. A key exception is the no-change (month) forecast that uses last month's inflation rate as a predictor of future inflation. This no-change forecast never resides in the MCS in either the earlier or the later period. A likely explanation is that month-to-month inflation is a noisy measure of core inflation. This view is supported by the fact that the second no-change (year) forecast, which employs a year-over-year inflation rate as the forecast, is a better forecast. This result enables us to reconcile the empirical results in Stock and Watson (1999) with those of Atkeson and Ohanian (2001). Nonetheless, the question of what constitutes the best inflation forecasting model for the last 35 years of U.S. data remains unanswered because the data provide insufficient information to distinguish between good and bad models.

This paper also constructs an MCS for Taylor rule regressions based on three likelihood criteria. Such interest rate rules are often used to evaluate the success of monetary policy, but this is not our intent for the MCS. Instead, we study the MCS that selects the best fitting Taylor rule regressions under either a quasi-likelihood criterion, the AIC, or the BIC using the effective degrees of freedom. The competing Taylor rule regressions consist of different combinations of lags of inflation, lags of three different real activity variables, and the lagged federal funds rate. Besides these Taylor rule regressions, the MCS must also contend with a first-order autoregression of the federal funds rate. The regressions are estimated on a 1979:Q1–2006:Q4 sample of U.S. data. Under the three likelihood criteria, the MCS settles on Taylor rule regressions that satisfy the Taylor principle, include all three competing real activity variables, and add the lagged federal funds rate. Furthermore, we find that the first-order autoregression also enters the MCS. Thus, the U.S. data lack the information to resolve precisely which Taylor rule specification best describes the data.

Given the large number of forecasting problems economists face at central banks and other parts of government, in financial markets, and in other settings, the MCS procedure faces a rich set of problems to study. Furthermore, the MCS has a wide variety of potential uses beyond forecast comparisons and regression models. We leave this work for future research.

REFERENCES

ANDERSON, T. W. (1984): An Introduction to Multivariate Statistical Analysis (Second Ed.). New York: Wiley. [455]
ANDREWS, D. W. K. (1991): "Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation," Econometrica, 59, 817–858. [470]
ATKESON, A., AND L. E. OHANIAN (2001): "Are Phillips Curves Useful for Forecasting Inflation?" Federal Reserve Bank of Minneapolis Quarterly Review, 25, 2–11. [456,483,484,487,494]
BAXTER, M., AND R. G. KING (1999): "Measuring Business Cycles: Approximate Bandpass Filters for Economic Time Series," Review of Economics and Statistics, 81, 575–593. [490]
BERGER, R. L. (1982): "Multiparameter Hypothesis Testing and Acceptance Sampling," Technometrics, 24, 295–300. [473]
BERNANKE, B. S., AND J. BOIVIN (2003): "Monetary Policy in a Data-Rich Environment," Journal of Monetary Economics, 50, 525–546. [457]
CAVANAUGH, J. E., AND R. H. SHUMWAY (1997): "A Bootstrap Variant of AIC for State-Space Model Selection," Statistica Sinica, 7, 473–496. [471]
CHAO, J. C., V. CORRADI, AND N. R. SWANSON (2001): "An Out of Sample Test for Granger Causality," Macroeconomic Dynamics, 5, 598–620. [476]
CHONG, Y. Y., AND D. F. HENDRY (1986): "Econometric Evaluation of Linear Macroeconomic Models," Review of Economic Studies, 53, 671–690. [476]
CLARIDA, R., J. GALÍ, AND M. GERTLER (2000): "Monetary Policy Rules and Macroeconomic Stability: Evidence and Some Theory," Quarterly Journal of Economics, 115, 147–180. [488]
CLARK, T. E., AND M. W. MCCRACKEN (2001): "Tests of Equal Forecast Accuracy and Encompassing for Nested Models," Journal of Econometrics, 105, 85–110. [475,476]
——— (2005): "Evaluating Direct Multi-Step Forecasts," Econometric Reviews, 24, 369–404. [466]
DIEBOLD, F. X., AND R. S. MARIANO (1995): "Comparing Predictive Accuracy," Journal of Business & Economic Statistics, 13, 253–263. [465]
DOORNIK, J. A. (2009): "Autometrics," in The Methodology and Practice of Econometrics: A Festschrift in Honour of David F. Hendry, ed. by N. Shephard and J. L. Castle. New York: Oxford University Press, 88–121. [468]
——— (2006): Ox: An Object-Orientated Matrix Programming Language (Fifth Ed.). London: Timberlake Consultants Ltd. [453]
DUDOIT, S., J. P. SHAFFER, AND J. C. BOLDRICK (2003): "Multiple Hypothesis Testing in Microarray Experiments," Statistical Science, 18, 71–103. [473,474]
EFRON, B. (1983): "Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation," Journal of the American Statistical Association, 78, 316–331. [470,471]
——— (1986): "How Biased Is the Apparent Error Rate of a Prediction Rule?" Journal of the American Statistical Association, 81, 461–470. [471]
ENGLE, R. F., AND S. J. BROWN (1985): "Model Selection for Forecasting," Journal of Computation in Statistics, 51, 341–365. [475]
GIACOMINI, R., AND H. WHITE (2006): "Tests of Conditional Predictive Ability," Econometrica, 74, 1545–1578. [476,484]
GONCALVES, S., AND H. WHITE (2005): "Bootstrap Standard Error Estimates for Linear Regression," Journal of the American Statistical Association, 100, 970–979. [468,470]
GORDON, R. J. (1997): "The Time-Varying NAIRU and Its Implications for Economic Policy," Journal of Economic Perspectives, 11, 11–32. [456]
GRANGER, C. W. J., M. L. KING, AND H. WHITE (1995): "Comments on Testing Economic Theories and the Use of Model Selection Criteria," Journal of Econometrics, 67, 173–187. [475]
GUPTA, S. S., AND S. PANCHAPAKESAN (1979): Multiple Decision Procedures. New York: Wiley. [473]
HANSEN, P. R. (2003a): "Asymptotic Tests of Composite Hypotheses," Working Paper 03-09, Brown University Economics. Available at http://ssrn.com/abstract=399761. [475]
——— (2003b): "Regression Analysis With Many Specifications: A Bootstrap Method to Robust Inference," Mimeo, Stanford University. [466]
——— (2005): "A Test for Superior Predictive Ability," Journal of Business & Economic Statistics, 23, 365–380. [466,471,474]
HANSEN, P. R., A. LUNDE, AND J. M. NASON (2011): "Supplement to 'The Model Confidence Set'," Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/5771_tables.pdf; http://www.econometricsociety.org/ecta/Supmat/5771_data and programs.zip. [457,467,481,484,489]
HARVEY, A. C., AND T. M. TRIMBUR (2003): "General Model-Based Filters for Extracting Cycles and Trends in Economic Time Series," Review of Economics and Statistics, 85, 244–255. [490]
HARVEY, D., AND P. NEWBOLD (2000): "Tests for Multiple Forecast Encompassing," Journal of Applied Econometrics, 15, 471–482. [476]
HODRICK, R. J., AND E. C. PRESCOTT (1997): "Postwar U.S. Business Cycles: An Empirical Investigation," Journal of Money, Credit, and Banking, 29, 1–16. [484,490]
HONG, H., AND B. PRESTON (2008): "Bayesian Averaging, Prediction and Nonnested Model Selection," Working Paper W14284, NBER. [470,492]
HORRACE, W. C., AND P. SCHMIDT (2000): "Multiple Comparisons With the Best, With Economic Applications," Journal of Applied Econometrics, 15, 1–26. [473]
HSU, J. C. (1996): Multiple Comparisons. Boca Raton, FL: Chapman & Hall/CRC. [473]
INOUE, A., AND L. KILIAN (2006): "On the Selection of Forecasting Models," Journal of Econometrics, 130, 273–306. [475]
JOHANSEN, S. (1988): "Statistical Analysis of Cointegration Vectors," Journal of Economic Dynamics and Control, 12, 231–254. [455,473]
KILIAN, L. (1999): "Exchange Rates and Monetary Fundamentals: What Do We Learn From Long Horizon Regressions?" Journal of Applied Econometrics, 14, 491–510. [466]
LEEB, H., AND B. PÖTSCHER (2003): "The Finite-Sample Distribution of Post-Model-Selection Estimators, and Uniform versus Non-Uniform Approximations," Econometric Theory, 19, 100–142. [460]
LEHMANN, E. L., AND J. P. ROMANO (2005): Testing Statistical Hypotheses (Third Ed.). New York: Wiley. [464,473,474]
MCCALLUM, B. T. (1999): "Issues in the Design of Monetary Policy Rules," in Handbook of Macroeconomics, Vol. 1C, ed. by J. B. Taylor and M. Woodford. Amsterdam: North-Holland, 1483–1530. [488]
MCCRACKEN, M. W., AND S. SAPP (2005): "Evaluating the Predictability of Exchange Rates Using Long Horizon Regressions: Mind Your p's and q's!" Journal of Money, Credit, and Banking, 37, 473–494. [493]
NASON, J. M., AND G. W. SMITH (2008): "Identifying the New Keynesian Phillips Curve," Journal of Applied Econometrics, 23, 525–551. [490]
NEWEY, W., AND K. WEST (1987): "A Simple Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix," Econometrica, 55, 703–708. [464,470,492]
ORPHANIDES, A. (2003): "Historical Monetary Policy Analysis and the Taylor Rule," Journal of Monetary Economics, 50, 983–1022. [488]
ORPHANIDES, A., AND S. VAN NORDEN (2002): "The Unreliability of Output-Gap Estimates in Real Time," Review of Economics and Statistics, 84, 569–583. [457]
PANTULA, S. G. (1989): "Testing for Unit Roots in Time Series Data," Econometric Theory, 5, 256–271. [473]
ROMANO, J. P., AND M. WOLF (2005): "Stepwise Multiple Testing as Formalized Data Snooping," Econometrica, 73, 1237–1282. [474]
ROMANO, J. P., A. M. SHAIKH, AND M. WOLF (2008): "Formalized Data Snooping Based on Generalized Error Rates," Econometric Theory, 24, 404–447. [474,492,493]
SHIBATA, R. (1997): "Bootstrap Estimate of Kullback–Leibler Information for Model Selection," Statistica Sinica, 7, 375–394. [471]
SHIMODAIRA, H. (1998): "An Application of Multiple Comparison Techniques to Model Selection," Annals of the Institute of Statistical Mathematics, 50, 1–13. [473]
SIN, C.-Y., AND H. WHITE (1996): "Information Criteria for Selecting Possibly Misspecified Parametric Models," Journal of Econometrics, 71, 207–225. [468,470,475,492]
SOLOW, R. M. (1976): "Down the Phillips Curve With Gun and Camera," in Inflation, Trade, and Taxes, ed. by D. A. Belsley, E. J. Kane, P. A. Samuelson, and R. M. Solow. Columbus, OH: Ohio State University Press. [487]
STAIGER, D., J. H. STOCK, AND M. W. WATSON (1997a): "How Precise Are Estimates of the Natural Rate of Unemployment?" in Reducing Inflation: Motivation and Strategy, ed. by C. Romer and D. Romer. Chicago: University of Chicago Press, 195–242. [457]
——— (1997b): "The NAIRU, Unemployment, and Monetary Policy," Journal of Economic Perspectives, 11, 33–49. [456]
STOCK, J. H., AND M. W. WATSON (1999): "Forecasting Inflation," Journal of Monetary Economics, 44, 293–335. [453-456,483,484,487,493,494]
——— (2003): "Forecasting Output and Inflation: The Role of Asset Prices," Journal of Economic Literature, 41, 788–829. [456]
STOREY, J. D. (2002): "A Direct Approach to False Discovery Rates," Journal of the Royal Statistical Society, Ser. B, 64, 479–498. [493]
TAKEUCHI, K. (1976): "Distribution of Informational Statistics and a Criterion of Model Fitting," Suri-Kagaku (Mathematical Sciences), 153, 12–18. (In Japanese.) [470,492]
TAYLOR, J. B. (1993): "Discretion versus Policy Rules in Practice," Carnegie–Rochester Conference Series on Public Policy, 39, 195–214. [456,488]
——— (1999): "A Historical Analysis of Monetary Policy Rules," in Monetary Policy Rules, ed. by J. B. Taylor. Chicago: University of Chicago Press, 319–341. [488]
VUONG, Q. H. (1989): "Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses," Econometrica, 57, 307–333. [471,473]
WEST, K. D. (1996): "Asymptotic Inference About Predictive Ability," Econometrica, 64, 1067–1084. [465]
WEST, K. D., AND D. CHO (1995): "The Predictive Ability of Several Models of Exchange Rate Volatility," Journal of Econometrics, 69, 367–391. [464]
WEST, K. D., AND M. W. MCCRACKEN (1998): "Regression Based Tests of Predictive Ability," International Economic Review, 39, 817–840. [475]
WHITE, H. (1994): Estimation, Inference and Specification Analysis. Cambridge: Cambridge University Press. [469]
——— (2000a): Asymptotic Theory for Econometricians (Revised Ed.). San Diego: Academic Press. [464]
——— (2000b): "A Reality Check for Data Snooping," Econometrica, 68, 1097–1126. [455,466,471,474]
Dept. of Economics, Stanford University, 579 Serra Mall, Stanford, CA 94305-6072, U.S.A. and CREATES; [email protected],
School of Economics and Management, Aarhus University, Bartholins Allé 10, Aarhus, Denmark and CREATES; [email protected],
and Federal Reserve Bank of Philadelphia, Ten Independence Mall, Philadelphia, PA 19106-1574, U.S.A.; [email protected].

Manuscript received March, 2005; final revision received March, 2010.
Econometrica, Vol. 79, No. 2 (March, 2011), 499–553
ON THE EXISTENCE OF MONOTONE PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES

BY PHILIP J. RENY1

We generalize Athey's (2001) and McAdams' (2003) results on the existence of monotone pure-strategy equilibria in Bayesian games. We allow action spaces to be compact locally complete metric semilattices and type spaces to be partially ordered probability spaces. Our proof is based on contractibility rather than convexity of best-reply sets. Several examples illustrate the scope of the result, including new applications to multi-unit auctions with risk-averse bidders.

KEYWORDS: Bayesian games, monotone pure strategies, equilibrium existence, multi-unit auctions, risk aversion.
1. INTRODUCTION

ATHEY (2001) ESTABLISHES the important result that a monotone pure-strategy equilibrium exists whenever a Bayesian game satisfies a Spence–Mirrlees single crossing property. Athey's result is now a central tool for establishing the existence of monotone pure-strategy equilibria in auction theory (see, e.g., Athey (2001), Reny and Zamir (2004)). Recently, McAdams (2003) shows that Athey's results, which exploit the assumed total ordering of the players' one-dimensional type and action spaces, can be extended to settings in which type and action spaces are multidimensional and only partially ordered. This permits new existence results in auctions with multidimensional types and multi-unit demands (see McAdams (2003, 2006)). The techniques employed by Athey and McAdams, while ingenious, have their limitations and do not appear to extend easily beyond the environments they consider. We therefore introduce a new approach.

The approach taken here exploits an important unrecognized property of a large class of Bayesian games. In these games, the players' pure-strategy best-reply sets, while possibly nonconvex, are always contractible.2 This observation permits us to generalize the results of Athey and McAdams in several directions. First, we permit infinite-dimensional type spaces and infinite-dimensional action spaces. Both can occur, for example, in share auctions, where a bidder's type is a function that expresses his marginal valuation at any quantity of the good and where a bidder's action is a downward-sloping

1 I wish to thank David McAdams, Roger Myerson, Max Stinchcombe, and Jeroen Swinkels for helpful conversations, and Sergiu Hart and Benjamin Weiss for providing an example of a compact metrizable semilattice that is not locally complete. I also thank three anonymous referees and the editor for a number of helpful remarks. Financial support from the National Science Foundation (SES-0214421, SES-0617884, SES-0922535) is gratefully acknowledged.

2 A set is contractible if it can be continuously deformed, within itself, to a single point. Convex sets are contractible, but contractible sets need not be convex (e.g., the symbol "+" viewed as a subset of R2).
© 2011 The Econometric Society
DOI: 10.3982/ECTA8934
demand schedule. Second, even when type and action spaces are subsets of Euclidean space, we permit more general joint distributions over types, allowing one player to have private information about the support of another's private information, as well as permitting positive probability on lower dimensional subsets, which can be useful when modeling random demand in auctions. Third, our approach allows general partial orders on both type spaces and action spaces. This can be especially helpful because, while single crossing may fail for one partial order, it might nonetheless hold for another, in which case our existence result can still be applied (see Section 5 for several such applications). Finally, while single crossing is helpful in establishing the hypotheses of our main theorem, it is not necessary; our hypotheses are satisfied even in instances where single crossing fails (e.g., as in Reny and Zamir (2004)).

The key to our approach is to employ a more powerful fixed-point theorem than those employed in Athey (2001) and McAdams (2003). Both papers consider the game's best-reply correspondence: Kakutani's theorem is used in Athey (2001); Glicksberg's theorem is used in McAdams (2003). In both cases, essentially all of the effort is geared toward proving that sets of monotone pure-strategy best replies are convex. Our central observation is that this impressive effort is unnecessary and, more importantly, that the additional structure imposed to achieve the desired convexity (i.e., Euclidean type spaces with the coordinatewise partial order, Euclidean sublattice action spaces, absolutely continuous type distributions) is unnecessary as well.

The fixed-point theorem on which our approach is based is due to Eilenberg and Montgomery (1946) and does not require the correspondence in question to be convex-valued. Rather, the correspondence need only be contractible-valued. Consequently, we need only demonstrate that monotone pure-strategy best-reply sets are contractible. While this task need not be straightforward in general, it turns out to be essentially trivial in the class of Bayesian games of interest here.

To gain a sense of this, note first that a pure strategy—a function from types to actions—is a best reply for a player if and only if it is a pointwise interim best reply for almost every type of that player. Consequently, any piecewise combination of two best replies—i.e., a strategy equal to one of the best replies on some subset of types and equal to the other best reply on the remainder of types—is also a best reply. Thus, by reducing the set of types on which the first best reply is employed and increasing the set of types on which the second is employed, it is possible to move from the first best reply to the second, all the while remaining within the set of best replies. With this simple observation, the set of best replies can be shown to be contractible.3

Because contractibility of best-reply sets follows almost immediately from the pointwise almost everywhere optimality of interim best replies, we are able

3 Because we are concerned with monotone pure-strategy best replies, some care must be taken to ensure that one maintains monotonicity throughout the contraction. Further, continuity of the contraction requires appropriate assumptions on the distribution over players' types. In particular, there can be no atoms.
to expand the domain of analysis well beyond Euclidean type and action spaces, and most of our additional effort is directed here. In particular, we require and prove two new results about the space of monotone functions from partially ordered probability spaces into compact metric semilattices. The first of these results (Lemma A.10) is a generalization of Helly's selection theorem, stating that, under suitable conditions, any sequence of monotone functions possesses a pointwise almost everywhere convergent subsequence. The second result (Lemma A.16) provides conditions under which the space of monotone functions is an absolute retract, a property that, like convexity, renders a space amenable to fixed-point analysis.

Our main result, Theorem 4.1, is as follows. Suppose that action spaces are compact convex locally convex semilattices or compact locally complete metric semilattices, that type spaces are partially ordered probability spaces, that payoffs are continuous in actions for each type vector, and that the joint distribution over types induces atomless marginals for each player assigning positive probability only to sets that can be order-separated by a fixed countable set of his types.4 If, whenever the others employ monotone pure strategies, each player's set of monotone pure-strategy best replies is nonempty and join-closed,5 then a monotone pure-strategy equilibrium exists.

We provide several applications that yield new existence results. First, we consider both uniform-price and discriminatory multi-unit auctions with independent private information. We depart from standard assumptions by permitting bidders to be risk averse. Under risk aversion, McAdams (2007) contains a uniform-price auction example having no monotone pure-strategy equilibrium, suggesting that a general existence result is simply unavailable. However, we show that this negative result stems from the use of the coordinatewise partial order over types. By employing a distinct (and more economically relevant) partial order over types—a technique novel to our methods—we are able to demonstrate the existence of a monotone pure-strategy equilibrium with respect to this alternative partial order in both uniform-price and discriminatory auctions. Another application considers a price-competition game between firms selling differentiated products. Firms have private information about their constant marginal cost as well as private information about market demand. While it is natural to assume that costs may be affiliated, in the context we consider, it is less natural to assume that information about market demand is affiliated because information that improves demand for some firms may worsen it for others. Nonetheless, and again through a judicious choice of a partial order over types, we are able to establish the existence of a pure-strategy equilibrium that is monotone in players' costs, but not necessarily

4 One set is order-separated by another if the one set contains two points between which lies a point in the other.

5 A subset of strategies is join-closed if the pointwise supremum of any pair of strategies in the set is also in the set.
monotone in their private information about demand. Our final application establishes the existence of monotone mixed-strategy equilibria when type spaces have atoms.6

If the actions of distinct players are strategic complements—an assumption we do not impose—even stronger results can be obtained. Indeed, in Van Zandt and Vives (2007), it is shown that monotone pure-strategy equilibria exist under somewhat more general distributional and action-space assumptions than we employ here, and that such an equilibrium can be obtained through iterative application of the best-reply map.7 The existence result in Van Zandt and Vives (2007) is perhaps the strongest possible for Bayesian games with strategic complementarities. Of course, while many interesting economic games exhibit strategic complements, many do not. Indeed, many auction games satisfy the hypotheses required to apply our result here, but fail to satisfy the strategic complements condition.8 The two approaches are therefore complementary.

The remainder of the paper is organized as follows. Section 2 presents the essential ideas as well as the corollary of Eilenberg and Montgomery's (1946) fixed-point theorem that is central to our approach. Section 3 describes the formal environment, including semilattices and related issues. Section 4 contains our main result, Section 6 contains its proof, and Section 5 provides several applications. Some readers interested in specific applications may find it sufficient to skip ahead to Corollary 4.2—a special case of our main result—which requires little in the way of preparation.

2. THE MAIN IDEA

As already mentioned, the proof of our main result is based on a fixed-point theorem that permits the correspondence for which a fixed point is sought—here, the product of the players' monotone pure best-reply correspondences—to have contractible rather than convex values. In this section, we introduce this fixed-point theorem and illustrate the ease with which contractibility can be established, focusing on the most basic case in which type spaces are [0, 1], action spaces are subsets of [0, 1], and the marginal distribution over each player's type space is atomless.
6 A player's mixed strategy is monotone if every action in the totally ordered support of one of his types is greater than or equal to every action in the totally ordered support of any lower type.

7 Related results can be found in Milgrom and Roberts (1990) and Vives (1990).

8 In a first-price independent private-value auction, for example, a bidder might increase his bid if his opponent increases her bid slightly when her private value is high. However, for sufficiently high increases in her bid at high private values, the bidder might be better off reducing his bid (and chance of winning) to obtain a higher surplus when he does win. Such strictly optimal nonmonotonic responses to increases in the opponent's strategy are not possible under strategic complements.
A subset X of a metric space is contractible if for some x0 ∈ X there is a continuous function h : [0, 1] × X → X such that for all x ∈ X, h(0, x) = x and h(1, x) = x0. We then say that h is a contraction for X. Note that every convex set is contractible since, choosing any point x0 in the set, the function h(τ, x) = (1 − τ)x + τx0 is a contraction. On the other hand, there are contractible sets that are not convex (e.g., the symbol "+"). Hence, contractibility is a strictly more permissive condition than convexity.

A subset X of a metric space Y is said to be a retract of Y if there is a continuous function mapping Y onto X leaving every point of X fixed. A metric space (X, d) is an absolute retract if for every metric space (Y, δ) containing X as a closed subset and preserving its topology, X is a retract of Y.9 Examples of absolute retracts include closed convex subsets of Euclidean space or of any metric space, and many nonconvex sets as well (e.g., any contractible polyhedron).10

The fixed-point theorem we make use of is the following corollary of an even more general result due to Eilenberg and Montgomery (1946).11

THEOREM 2.1: Suppose that a compact metric space (X, d) is an absolute retract and that F : X ⇉ X is an upper-hemicontinuous, nonempty-valued, contractible-valued correspondence.12 Then F has a fixed point.

For our purposes, the correspondence F is the product of the players' monotone pure-strategy best-reply correspondences and X is the product of their sets of monotone pure strategies. While we must eventually establish all of the properties necessary to apply Theorem 2.1, our modest objective for the remainder of this section is to show, with remarkably little effort, that in the simple environment considered here, F is contractible-valued, i.e., that monotone pure best-reply sets are contractible.

Suppose that player 1's type is drawn uniformly from the unit interval [0, 1] and that A ⊆ [0, 1] is player 1's compact action set. Fix monotone pure strategies for the other players and suppose that s̄ : [0, 1] → A is the largest monotone
9 It is not necessary to understand the concept of an absolute retract to apply any of our results: none of our hypotheses requires checking that a space is an absolute retract. However, to prove our main result using Theorem 2.1, we must (and do) demonstrate that under our hypotheses, each player's space of monotone pure strategies is an absolute retract (see Lemma A.16).

10 Indeed, a compact subset, X, of Euclidean space is an absolute retract if and only if it is contractible and locally contractible. The latter means that for every x0 ∈ X and every neighborhood U of x0 there is a neighborhood V of x0 and a continuous h : [0, 1] × V → U such that h(0, x) = x and h(1, x) = x0 for all x ∈ V.

11 Theorem 2.1 follows directly from Eilenberg and Montgomery (1946, Theorem 1), because every absolute retract is a contractible absolute neighborhood retract (Borsuk (1966, V, (2.3))) and every nonempty contractible set is acyclic (Borsuk (1966, II, (4.11))).

12 By upper hemicontinuous, we always mean that the correspondence in question has a closed graph.
best reply, then s̄(t) ≥ s(t) for every type t of player 1.13 We now provide a contraction that continuously shrinks player 1's entire set of monotone best replies, within itself, to the largest monotone best reply s̄. The simple, but key, observation is that a pure strategy is a best reply for player 1 if and only if it is a pointwise best reply for almost every type t ∈ [0, 1] of player 1.

Consider the following candidate contraction. For τ ∈ [0, 1] and any monotone best reply s for player 1, define h(τ, s) : [0, 1] → A by

    h(τ, s)(t) = s(t)    if t ≤ 1 − τ and τ < 1,
                 s̄(t)    otherwise.
Note that h(0, s) = s, h(1, s) = s̄, and h(τ, s)(t) is always either s̄(t) or s(t) and so is a best reply for almost every t. Hence, by the key observation in the previous paragraph, h(τ, s)(·) is a best reply. The pure strategy h(τ, s)(·) is monotone because it is the smaller of two monotone functions for low values of t and the larger of them for high values of t. Moreover, because the marginal distribution over player 1's type is atomless, the monotone pure strategy h(τ, s)(·) varies continuously in the arguments τ and s when the distance between two strategies of player 1 is defined to be the integral, with respect to his type distribution, of their absolute pointwise difference (see Section 6).14 Consequently, h is a contraction under this metric, and so player 1's set of monotone best replies is contractible. It is that simple.

Figure 2.1 shows how the contraction works when player 1's set of actions A happens to be finite, so that his set of monotone best replies cannot be convex in the usual sense unless it is a singleton. Three monotone functions are shown in each panel, where player 1's actions are on the vertical axis and his types are on the horizontal axis. The dotted step function is s, the solid step function is s̄, and the thick solid step function is the one determined by the contraction h. In panel (a), τ = 0 and h coincides with s. The position of the vertical line appearing in each panel represents the value of τ: the vertical line intersects the horizontal axis at the point 1 − τ. When τ = 0, the vertical line is at the far right-hand side, as shown in panel (a). As indicated by the arrow, the vertical line moves continuously toward the origin as τ moves from 0 to 1. The thick step function determined by the contraction h is s(t) for values of t to the left of the vertical line and is s̄(t) for values of t to the right; see panels (b) and (c). The step function h therefore changes continuously with τ because the areas between strategies change continuously. In panel (d),
13 Such a largest monotone best reply exists under the hypotheses of our main result.

14 This particular metric is important because it renders a player's payoff continuous in his strategy choice.
FIGURE 2.1.—The contraction.
τ = 1 and h coincides with s̄. So altogether, as τ moves continuously from 0 to 1, the image of the contraction moves continuously from s to s̄.

Two points are worth mentioning before moving on. First, single crossing plays no role in establishing the contractibility of sets of monotone best replies. As we shall see, ensuring the existence of monotone pure-strategy best replies is where single crossing can be helpful. Thus, the present approach clarifies the role of single crossing insofar as the existence of monotone pure-strategy equilibrium is concerned.15 Second, the action spaces employed in the above example are totally ordered, as in Athey (2001). Consequently, if two actions are optimal for some type of player 1, then the maximum of the two actions, being one or the other of them, is also optimal. The optimality of the maximum of two optimal actions is important for ensuring that a largest monotone best reply exists. When action spaces are only partially ordered (e.g., when actions are multidimensional with, say, the coordinatewise partial order), the maximum of two optimal actions need not even be well defined, let alone optimal. Therefore, to also cover partially ordered action spaces, we assume in the sequel (see Section 3.2) that action spaces are semilattices—i.e., that for every pair of actions there is a least upper bound—and that the least upper bound of two optimal actions is optimal. Stronger versions of both assumptions are employed in McAdams (2003).

15 In both Athey (2001) and McAdams (2003), single crossing is employed to help establish the existence of monotone best replies and to establish the convexity of the set of monotone best replies. The single crossing conditions in Athey (2001) and McAdams (2003) are therefore more restrictive than necessary. See Section 4.1.
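Because h acts pointwise on types, the contraction of this section is easy to simulate for step functions like those in Figure 2.1. A minimal sketch (strategies are represented on a grid of uniformly drawn types, the two step functions are hypothetical examples with s̄ ≥ s pointwise, and distance is the integral of the absolute pointwise difference mentioned in footnote 14):

```python
import numpy as np

def contract(tau, s, s_bar, types):
    """h(tau, s)(t) = s(t) if t <= 1 - tau and tau < 1, else s_bar(t)."""
    keep = (types <= 1.0 - tau) & (tau < 1.0)
    return np.where(keep, s, s_bar)

def distance(s1, s2, types):
    """Integral of the absolute pointwise difference (uniform types)."""
    return np.trapz(np.abs(s1 - s2), types)

types = np.linspace(0.0, 1.0, 1001)
s = np.floor(2.0 * types) / 4.0        # a monotone step-function best reply
s_bar = np.ceil(3.0 * types) / 3.0     # a larger monotone best reply s-bar

for tau in (0.0, 0.3, 0.7, 1.0):
    h = contract(tau, s, s_bar, types)
    assert np.all(np.diff(h) >= -1e-12)    # h(tau, s) remains monotone
    print(tau, distance(h, s_bar, types))  # distance to s-bar shrinks to 0
```

The printed distances confirm the geometry of Figure 2.1: as τ rises, the area between h(τ, s) and s̄ shrinks continuously to zero.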
3. THE ENVIRONMENT

In order to speak about monotone pure strategies, the players' type and action spaces must come equipped with partial orders. Moreover, as mentioned just above, action spaces require the additional structure of a semilattice. The following section provides the order-related concepts we need for both type spaces and action spaces.

3.1. Partial Orders, Lattices, and Semilattices

Let X be a nonempty set partially ordered by ≥.16 If x, y, and z are members of X, we say that y lies between x and z if x ≥ y ≥ z. If X is endowed with a sigma algebra of subsets A, then the partial order ≥ on X is called measurable if {(x, y) ∈ X × X : x ≥ y} is a member of A × A.17 If X is endowed with a topology, then the partial order ≥ on X is called closed if {(x, y) ∈ X × X : x ≥ y} is closed in the product topology. The partial order ≥ on X is called convex if X is a subset of a real vector space and {(x, y) ∈ X × X : x ≥ y} is convex. Note that if the partial order on X is convex, then X is convex because x ≥ x for every x ∈ X. Say that X is upper-bound-convex if it contains the convex combination of any two members whenever one of them, x̄ say, is an upper bound for X, i.e., x̄ ≥ x for every x ∈ X.18 Every convex set is upper-bound-convex.

For x, y ∈ X, if the set {x, y} has a least upper bound in X, then it is unique and will be denoted by x ∨ y, the join of x and y. In general, such a bound need not exist. However, if every pair of points in X has a least upper bound in X, then we shall say that X is a semilattice. It is straightforward to show that, in a semilattice, every finite set {x, y, ..., z} has a least upper bound, which we denote by ∨{x, y, ..., z} or x ∨ y ∨ · · · ∨ z. If the set {x, y} has a greatest lower bound in X, then it too is unique, and it will be denoted by x ∧ y, the meet of x and y. Once again, in general, such a bound need not exist. If every pair of points in X has both a least upper bound and a greatest lower bound in X, then we say that X is a lattice.19 A semilattice (lattice) in Rm endowed with the coordinatewise partial order will be called a Euclidean semilattice (lattice). Clearly, every lattice is a semilattice. However, the converse is not true. For example, under the coordinatewise partial order, the set of vectors in R2 whose sum is at least 1 is a semilattice, but not a lattice.

16 Hence, ≥ is transitive (x ≥ y and y ≥ z imply x ≥ z), reflexive (x ≥ x), and antisymmetric (x ≥ y and y ≥ x imply x = y).

17 Recall that A × A is the smallest sigma algebra containing all sets of the form A × B with A, B in A.

18 Sets without upper bounds are trivially upper-bound-convex.

19 Defining a semilattice in terms of the join operator, ∨, rather than the meet operator, ∧, is entirely a matter of convention.
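The semilattice-but-not-lattice example above is easy to verify directly: the coordinatewise maximum of two vectors whose coordinates sum to at least 1 again sums to at least 1, while the coordinatewise minimum can fail to. A minimal sketch:

```python
import numpy as np

def join(x, y):
    """Coordinatewise maximum: the join in a Euclidean semilattice."""
    return np.maximum(x, y)

def meet(x, y):
    """Coordinatewise minimum: the candidate meet."""
    return np.minimum(x, y)

in_set = lambda v: v.sum() >= 1.0   # the set {x in R^2 : x1 + x2 >= 1}

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(join(x, y), in_set(join(x, y)))  # [1. 1.] True  -> join stays in the set
print(meet(x, y), in_set(meet(x, y)))  # [0. 0.] False -> so it is not a lattice
```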
A metric semilattice is a semilattice X endowed with a metric under which the join operator, ∨, is continuous as a function from X × X into X. A metric semilattice in Rm endowed with the coordinatewise partial order and the Euclidean metric will be called a Euclidean metric semilattice. Because in a semilattice x ≥ y if and only if x ∨ y = x, the partial order in a metric semilattice is necessarily closed.20

A semilattice X is complete if every nonempty subset S of X has a least upper bound, ∨S, in X. A metric semilattice X is locally complete if for every x ∈ X and every neighborhood U of x, there is a neighborhood W of x contained in U such that every nonempty subset S of W has a least upper bound, ∨S, contained in U. Lemma A.18 establishes that a compact metric semilattice X is locally complete if and only if for every x ∈ X and every sequence xn → x, limm (∨_{n≥m} xn) = x.21 A distinct sufficient condition for local completeness is given in Lemma A.20.

Some examples of compact locally complete metric semilattices are as follows:
• Finite semilattices.
• Compact sublattices of the Euclidean lattice Rm—because in a Euclidean sublattice, the join of any two points is their coordinatewise maximum.
• Compact Euclidean metric semilattices (Lemma A.19).
• Compact upper-bound-convex Euclidean semilattices (Lemmas A.17 and A.19).
• The space of continuous functions f : [0, 1] → [0, 1] satisfying, for some λ > 0, the Lipschitz condition |f(x) − f(y)| ≤ λ|x − y|, endowed with the maximum norm ||f|| = maxx |f(x)| and partially ordered by f ≥ g if f(x) ≥ g(x) for all x ∈ [0, 1].

The last example in the above list is an infinite-dimensional, compact, locally complete metric semilattice. In general, and unlike compact Euclidean metric semilattices, infinite-dimensional metric semilattices need not be locally complete even if they are compact and convex.22

3.2. A Class of Bayesian Games

There are N players, i = 1, 2, ..., N. Player i's type space is Ti and his action space is Ai, and both are nonempty and partially ordered. In addition, Ai is

20 The converse does not hold. For example, the set X = {(x1, x2) ∈ R2+ : x1 + x2 = 1} ∪ {(1, 1)} is a semilattice with the coordinatewise partial order, and this order is closed under the Euclidean metric. But X is not a metric semilattice because whenever xn ≠ yn and xn, yn → x, we have (1, 1) = lim(xn ∨ yn) ≠ (lim xn) ∨ (lim yn) = x.

21 Hence, compactness and metrizability of a lattice under the order topology (see Birkhoff (1967, p. 244)) are sufficient, but not necessary, for local completeness of the corresponding semilattice.

22 No Lp space is locally complete when p < +∞ and endowed with the pointwise partial order. See Hart and Weiss (2005) for a compact metric semilattice that is not locally complete. Their example can be modified so that the space is, in addition, convex and locally convex.
endowed with a metric. Unless a notational distinction is helpful, all partial orders, although possibly distinct, will be denoted by ≥. Player i's payoff function is ui : A × T → R, where A = ×_{i=1}^N Ai and T = ×_{i=1}^N Ti. For each player i, 𝒯i is a sigma algebra of subsets of Ti, and members of 𝒯i will often be referred to simply as measurable sets. The common prior over the players' types is a countably additive probability measure μ defined on 𝒯1 × · · · × 𝒯N. Let G denote this Bayesian game.

For each player i, we let μi denote the marginal of μ on Ti, and hence the domain of μi is 𝒯i. As functions from types into actions, best replies for any player i are determined only up to μi measure zero sets. This leads us to the following definitions. A pure strategy for player i is a function, si : Ti → Ai, that is μi-a.e. (almost everywhere) equal to a measurable function, and is monotone if ti′ ≥ ti implies si(ti′) ≥ si(ti) for all ti′, ti ∈ Ti.23,24 Let Si denote player i's set of pure strategies and let S = ×_{i=1}^N Si. A vector of pure strategies, (ŝ1, ..., ŝN) ∈ S, is a Bayesian–Nash equilibrium, or simply an equilibrium, if for every player i and every pure strategy si for player i,

∫_T ui(ŝ(t), t) dμ(t) ≥ ∫_T ui(si(ti), ŝ−i(t−i), t) dμ(t),

where the left-hand side, henceforth denoted by Ui(ŝ), is player i's payoff given the joint strategy ŝ, and the right-hand side is his payoff when he employs si and the others employ ŝ−i.25

23 Recall that a property P(ti) holds μi-a.e. if the set of ti for which P(ti) holds contains a measurable subset having μi measure 1.
24 The measurable subsets of the metric space Ai are the Borel sets.
25 This definition of pure-strategy Bayesian–Nash equilibrium coincides, for example, with that implicit in Milgrom and Weber (1985).

It will sometimes be helpful to speak of the payoff to player i's type ti from the action ai given the strategies of the others, s−i. This payoff, which we refer to as i's interim payoff, is

Vi(ai, ti, s−i) ≡ ∫_{T−i} ui(ai, s−i(t−i), t) dμi(t−i|ti),

where μi(·|ti) is a version of the conditional probability on T−i given ti.26 A single such version is fixed for each player i once and for all. Consequently, (ŝ1, ..., ŝN) ∈ S is an equilibrium according to our definition above if and only if for each player i and μi-a.e. ti ∈ Ti,

Vi(ŝi(ti), ti, ŝ−i) ≥ Vi(ai, ti, ŝ−i) for every ai ∈ Ai,

26 The conditional, μi(·|ti), will not otherwise appear in the sequel and should not be confused with the marginal, μi, which will appear throughout.
that is, if and only if for each player i, ŝi(ti) is an interim best reply against ŝ−i for μi-a.e. ti ∈ Ti.

We make use of the following additional assumptions on the Bayesian game G. For every player i:
G.1. The partial order on Ti is measurable.
G.2. The probability measure μi on 𝒯i is atomless.27
G.3. There is a countable subset Ti0 of Ti such that every set in 𝒯i assigned positive probability by μi contains two points between which lies a point in Ti0.
G.4. Ai is a compact metric space and a semilattice with a closed partial order.28
G.5. Either (i) Ai is a convex subset of a locally convex topological vector space and the partial order on Ai is convex, or (ii) Ai is a locally complete metric semilattice.29
G.6. ui(a, t) is bounded, jointly measurable, and continuous in a ∈ A for every t ∈ T.

Conditions G.1–G.5 are very general, covering a wide variety of situations. To reassure more applied readers, we illustrate that G.1–G.5 hold in settings that are not uncommon. The proof of the following proposition can be found in Appendix A.6.

PROPOSITION 3.1: (i) Conditions G.1–G.3 are satisfied, in particular, when both of the following conditions (a) and (b) hold: (a) each player i's type space is the union, Ti = Ti1 ∪ Ti2 ∪ · · ·, of a finite or countably infinite number of nondegenerate Euclidean cubes, Tik = [τ̲ik, τ̄ik]^{nik}, of possibly different dimensions, where the partial order on Ti is the coordinatewise partial order; and (b) according to player i's marginal, μi, each one of the cubes Tik is chosen with probability pik and then ti ∈ Tik is chosen according to the probability density fik on Tik, which need not be everywhere positive.
(ii) Conditions G.4 and G.5 are satisfied, in particular, when each player's set of actions is a compact subset of Euclidean space endowed with the coordinatewise partial order, and the coordinatewise maximum of any two actions is itself a feasible action.

27 For every ti ∈ Ti, the singleton set {ti} is in 𝒯i by G.1. See Appendix A.1.
28 Note that G.4 does not require Ai to be a metric semilattice—its join operator need not be continuous.
29 It is permissible for (i) to hold for some players and (ii) to hold for others. A topological space is locally convex if for every open set U, every point in U has a convex open neighborhood contained in U.

In Athey (2001) and McAdams (2003) it is assumed that each Ai is a compact sublattice of Euclidean space, that each Ti is a Euclidean cube [τ̲i, τ̄i]^{ni} endowed with the coordinatewise partial order, and that μ is absolutely
continuous with respect to Lebesgue measure, a situation strictly covered by conditions (i) and (ii) of Proposition 3.1.30 Hence, their hypotheses, which also include action continuity of utility functions, are strictly more restrictive than G.1–G.6. The additional structure they impose is, in fact, necessary for their Kakutani–Glicksberg-based approach.31

30 In McAdams (2003) it is assumed further that the joint density over types is everywhere strictly positive, and in Athey (2001) it is assumed that each ni = 1.
31 Indeed, suppose a player's action set is the semilattice A = {(1, 0), (1/2, 1/2), (0, 1), (1, 1)} in R^2 with the coordinatewise partial order, and note that A is not a sublattice of R^2. It is not difficult to see that this player's set of monotone pure strategies from [0, 1] into A, endowed with the metric d(f, g) = ∫_0^1 |f(x) − g(x)| dx, is homeomorphic to three line segments joined at a common endpoint. Consequently, this strategy set is not homeomorphic to a convex set and so neither Kakutani's nor Glicksberg's theorems can be directly applied. On the other hand, this strategy set is an absolute retract (see Lemma A.16), which is sufficient for our approach.

In addition to permitting infinite-dimensional type spaces, assumption G.1 permits the partial order on player i's type space to be distinct from the usual coordinatewise partial order when Ti is Euclidean. As we shall see, this flexibility is very helpful in providing new equilibrium existence results for multi-unit auctions with risk-averse bidders.

Assumption G.2 is used to establish the contractibility of the players' sets of monotone best replies and, in particular, to construct an associated contraction that is continuous in a topology in which payoffs are continuous as well.

Assumption G.3 connects the partial order on a player's type space with his marginal distribution, and it implies, in particular, that no atomless subset of a player's type space having positive probability can be totally unordered. For example, if Ti = [0, 1]^2 is endowed with the Borel sigma algebra and the coordinatewise partial order, G.3 requires μi to assign probability 0 to any atomless negatively sloped line in Ti. In fact, whenever Ti happens to be a separable metric space and 𝒯i contains the open sets, G.3 holds if every atomless set having positive μi measure contains two "strictly ordered" points (Lemma A.21).32 Together with G.1 and G.4, G.3 ensures the compactness of the players' sets of monotone pure strategies (Lemma A.10) in a topology in which payoffs are continuous.33 Thus, although G.3 is logically unrelated to Milgrom and Weber's (1985) absolute-continuity assumption on the joint distribution over types, it plays the same compactness role for monotone pure strategies as the Milgrom–Weber assumption plays for distributional strategies.34,35

32 Two points in a partially ordered metric space are strictly ordered if they are contained in disjoint open sets such that every point in one set is greater than or equal to every point in the other.
33 Indeed, without G.3, a player's type space could be the negative diagonal in [0, 1]^2 endowed with the coordinatewise partial order. But then every measurable function from types to actions would be monotone, because no two distinct types are ordered. Compactness in a useful topology is then effectively precluded.
34 To see that even G.2 and G.3 together do not imply the Milgrom and Weber (1985) restriction that μ is absolutely continuous with respect to the product of its marginals μ1 × · · · × μN, note that G.2 and G.3 hold when there are two players, each with unit interval type space with the usual order, and where the players' types are drawn according to Lebesgue measure on the diagonal of the unit square.
35 One might wonder whether G.3 can be weakened by requiring, instead, merely that every atomless set in 𝒯i assigned positive probability by μi contains two distinct ordered points. The answer is "no," in the sense that this weakening permits examples in which every measurable function from [0, 1] into [0, 1] is monotone, precluding compactness of the set of monotone pure strategies in a useful topology.

Assumption G.5 is used to ensure that the set of monotone pure strategies is an absolute retract and therefore amenable to fixed-point analysis. Assumption G.6 is used to ensure that best replies are well defined and that best-reply correspondences are upper hemicontinuous. Assumption G.6 is trivially satisfied when action spaces are finite. Thus, for example, it is possible to consider auctions here by supposing that players' bid spaces are discrete. We do so in Section 5, where we also consider auctions with continuum bid spaces by considering limits of ever finer discretizations.

4. THE MAIN RESULT

Call a subset of player i's pure strategies join-closed if for any pair of strategies, si, si′, in the subset, the strategy taking the action si(ti) ∨ si′(ti) for each ti ∈ Ti is also in the subset.36 We can now state our main result, whose proof is provided in Section 6.

THEOREM 4.1: If G.1–G.6 hold, and each player's set of monotone pure best replies is nonempty and join-closed whenever the others employ monotone pure strategies, then G possesses a monotone pure-strategy equilibrium.

Once again, for readers interested in certain applications, it may be sufficient to have access to the following more basic—although substantially less powerful—corollary of Theorem 4.1.37 See Remark 3 for the proof.

36 Note that when the join operator is continuous, as it is in a metric semilattice, the resulting function is a.e.-measurable, being the composition of a.e.-measurable and continuous functions. But even when the join operator is not continuous, because the join of two monotone pure strategies is monotone, it is a.e.-measurable under the hypotheses of Lemma A.11 and hence under the hypotheses of Theorem 4.1.
37 Indeed, insofar as applications are concerned, Theorem 4.1, in particular, permits one to tailor the partial orders to the structure of the problem, a technique that can be very useful (see, e.g., Examples 5.1, 5.2, and 5.3). In contrast, the corollary insists on the coordinatewise partial order.

COROLLARY 4.2: Suppose that conditions (i) and (ii) of Proposition 3.1 hold, and that each player's payoff function is continuous in the joint vector of actions
for any joint vector of types. Suppose, in addition, that the coordinatewise minimum of any two feasible actions is itself a feasible action, that for each player i and for every monotone joint pure strategy, s−i, of the others, player i's interim payoff Vi(·, ·, s−i) is defined and twice continuously differentiable on an open ball, Ui, containing Ai × Ti, and that for every (ai, ti) ∈ Ui,38
(a) ∂²Vi(ai, ti, s−i)/∂aij ∂ail ≥ 0 for all j ≠ l,
(b) ∂²Vi(ai, ti, s−i)/∂aij ∂til ≥ 0 for all j, l.
Then G possesses a monotone pure-strategy equilibrium, ŝ. In particular, for every player i and every pair of types ti′, ti in Ti, if every coordinate of ti′ is weakly greater than the corresponding coordinate of ti, then every coordinate of i's equilibrium action ŝi(ti′) when his type is ti′ is weakly greater than the corresponding coordinate of his equilibrium action ŝi(ti) when his type is ti.

A strengthening of Theorem 4.1 can be helpful when one wishes to demonstrate not merely the existence of a monotone pure-strategy equilibrium, but the existence of a monotone pure-strategy equilibrium within a particular subset of strategies. For example, in a uniform-price auction for m units, a strategy mapping a player's nonincreasing m-vector of marginal values into a vector of m bids is undominated only if his bid for a kth unit is no greater than his marginal value for a kth unit. As formulated, Theorem 4.1 does not directly permit one to demonstrate the existence of an undominated equilibrium.39 The next result takes care of this. Its proof is a straightforward extension of the proof of Theorem 4.1 and is provided in Remark 7.

A subset of player i's pure strategies is called pointwise-limit-closed if whenever si1, si2, ... are each in the set and sin(ti) →n si(ti) for μi almost every ti ∈ Ti, then si is also in the set. A subset of player i's pure strategies is called piecewise-closed if whenever si and si′ are in the set, then so is any strategy si′′ such that for every ti ∈ Ti, either si′′(ti) = si(ti) or si′′(ti) = si′(ti).

THEOREM 4.3: Under the hypotheses of Theorem 4.1, if for each player i, Ci is a join-closed, piecewise-closed, and pointwise-limit-closed subset of pure strategies containing at least one monotone pure strategy, and the intersection of Ci with i's set of monotone pure best replies is nonempty whenever every other player j employs a monotone pure strategy in Cj, then G possesses a monotone pure-strategy equilibrium in which each player i's pure strategy is in Ci.

REMARK 1: When player i's action space is a semilattice with a closed partial order (as implied by G.4) and Ci is defined by any collection of weak inequalities, i.e., if Fi and Gi are arbitrary collections of measurable functions from Ti

38 This formulation permits Ai to be finite or, more generally, disconnected.
39 Note that it is not possible to restrict the action space alone to ensure that the player chooses an undominated strategy, since the bids that he must be permitted to choose will depend on his private type, i.e., his vector of marginal values.
into Ai, and

Ci = ∩_{f∈Fi} ∩_{g∈Gi} {si ∈ Si : g(ti) ≤ si(ti) ≤ f(ti) for μi-a.e. ti ∈ Ti},

then Ci is join-closed, piecewise-closed, and pointwise-limit-closed.

The next section provides conditions that are sufficient for the hypotheses of Theorem 4.1.

4.1. Sufficient Conditions for Nonempty and Join-Closed Sets of Monotone Best Replies

In both Athey (2001) and McAdams (2003), quasisupermodularity and single-crossing conditions are put to good use within the confines of a lattice. We now provide weaker versions of both of these conditions, as well as a single condition that is weaker than their combination.

Suppose that player i's action space, Ai, is a lattice. We say that player i's interim payoff function Vi is weakly quasisupermodular if for all monotone pure strategies s−i of the others, all ai, ai′ ∈ Ai, and every ti ∈ Ti,

Vi(ai, ti, s−i) ≥ Vi(ai ∧ ai′, ti, s−i) implies Vi(ai ∨ ai′, ti, s−i) ≥ Vi(ai′, ti, s−i).

In McAdams (2003), the stronger assumption of quasisupermodularity—introduced in Milgrom and Shannon (1994)—is imposed, requiring, in addition, that the second inequality must be strict if the first happens to be strict.40 It is well known that Vi is supermodular in actions—hence weakly quasisupermodular—when the coordinates of a player's own action vector are complementary, that is, when Ai = [0, 1]^K is endowed with the coordinatewise partial order and the second cross-partial derivatives of Vi(ai1, ..., aiK, ti, s−i) with respect to distinct action coordinates are nonnegative.41

We say that i's interim payoff function Vi satisfies weak single crossing if for all monotone pure strategies s−i of the others, for all player i action pairs ai′ ≥ ai, and for all player i type pairs ti′ ≥ ti,

Vi(ai′, ti, s−i) ≥ Vi(ai, ti, s−i) implies Vi(ai′, ti′, s−i) ≥ Vi(ai, ti′, s−i).

40 When actions are totally ordered, as in Athey (2001), interim payoffs are automatically supermodular, and hence both quasisupermodular and weakly quasisupermodular.
41 Complementarity between the actions of distinct players is not implied. This is useful because, for example, many auction games satisfy only own-action complementarity.

In Athey (2001) and McAdams (2003) it is assumed that Vi satisfies the slightly more stringent single crossing condition in which, in addition to the
above, the second inequality is strict whenever the first one is.42 We next present a condition that will be shown to be weaker than the combination of weak quasisupermodularity and weak single crossing.

Return now to the case in which Ai is merely a semilattice. For any joint pure strategy of the others, player i's interim best-reply correspondence is a mapping from his type into the set of optimal actions—or interim best replies—for that type. Say that player i's interim best-reply correspondence is monotone if for every monotone joint pure strategy of the others, whenever action ai is optimal for player i when his type is ti and ai′ is optimal when his type is ti′ ≥ ti, then ai ∨ ai′ is optimal when his type is ti′.43 The following result relates the above conditions to the hypotheses of Theorem 4.1.

PROPOSITION 4.4: The hypotheses of Theorem 4.1 are satisfied if G.1–G.6 hold, and if for each player i and for each monotone joint pure strategy of the other players, at least one of the following three conditions is satisfied.44
(i) Player i's action space is a lattice and i's interim payoff function is weakly quasisupermodular and satisfies weak single crossing.
(ii) Player i's interim best-reply correspondence is nonempty-valued and monotone.
(iii) Player i's set of monotone pure-strategy best replies is nonempty and join-closed.
Furthermore, the three conditions are listed in increasing order of generality, that is, (i) ⇒ (ii) ⇒ (iii).

42 For conditions on the joint distribution of types, μ, and the players' payoff functions, ui(a, t), that imply the more stringent condition, see Athey (2001, pp. 879–881), McAdams (2003, p. 1197), and Van Zandt and Vives (2007).
43 This is strictly weaker than requiring the interim best-reply correspondence to be increasing in the strong set order, which in any case requires the additional structure of a lattice (see Milgrom and Shannon (1994)).
44 Which of the three conditions is satisfied is permitted to depend both on the player, i, and on the joint pure strategy employed by the others.

PROOF: Because, under G.1–G.6, the hypotheses of Theorem 4.1 hold if condition (iii) holds for each player i, it suffices to show that (i) ⇒ (ii) ⇒ (iii). So, fix some player i and some monotone pure strategy for every player but i for the remainder of the proof.

(i) ⇒ (ii). Suppose i's action space is a lattice. By G.4 and G.6, for each of i's types, his interim payoff function is continuous on his compact action space. Player i therefore possesses an optimal action for each of his types, and so his interim best-reply correspondence is nonempty-valued. Suppose that action ai is optimal for i when his type is ti and ai′ is optimal when his type is ti′ ≥ ti. Then, because ai ∧ ai′ is no better than ai when i's type is ti, weak quasisupermodularity implies that ai ∨ ai′ is at least as good as ai′ when i's type is ti. Weak single
crossing then implies that ai ∨ ai′ is at least as good as ai′ when i's type is ti′. Since ai′ is optimal when i's type is ti′, so too must be ai ∨ ai′. Hence, i's interim best-reply correspondence is monotone.

(ii) ⇒ (iii). Let Bi : Ti ⇉ Ai denote i's interim best-reply correspondence. If ai and ai′ are in Bi(ti), then ai ∨ ai′ is also in Bi(ti) by the monotonicity of Bi(·) (set ti′ = ti in the definition of a monotone correspondence). Consequently, Bi(ti) is a subsemilattice of i's action space for each ti and, therefore, i's set of monotone pure-strategy best replies is join-closed (measurability of the pointwise join of two strategies follows as in footnote 36). It remains to show that i's set of monotone pure best replies is nonempty.

Let āi(ti) = ∨Bi(ti), which is well defined because G.4 and Lemma A.6 imply that Ai is a complete semilattice. Because i's interim payoff function is continuous in his action, Bi(ti) is compact. Hence Bi(ti) is a compact subsemilattice of Ai and so Bi(ti) is itself complete by Lemma A.6. Therefore, āi(ti) is a member of Bi(ti), implying that āi(ti) is optimal for every ti. It remains only to show that āi(·) is monotone (measurability in ti can be ensured by Lemma A.11). So, suppose that ti′ ≥ ti. Because āi(ti′) ∈ Bi(ti′) and āi(ti) ∈ Bi(ti), the monotonicity of Bi(·) implies that āi(ti) ∨ āi(ti′) ∈ Bi(ti′). Therefore, because āi(ti′) is the largest member of Bi(ti′), we have āi(ti′) = āi(ti) ∨ āi(ti′) ≥ āi(ti), as desired. Q.E.D.

REMARK 2: The environments considered in Athey (2001) and McAdams (2003) are strictly more restrictive than G.1–G.6 permit. Moreover, their conditions on interim payoffs are strictly more restrictive than condition (i) of Proposition 4.4. Theorem 4.1 is, therefore, a strict generalization of their main results.

REMARK 3: We can now prove Corollary 4.2. Conditions G.1–G.5 hold by Proposition 3.1, G.6 holds by assumption, and the coordinatewise minimum condition and (ii) imply that i's action space is a lattice. Furthermore, when others use monotone pure strategies, (a) implies that i's interim payoff function is weakly quasisupermodular and (b) implies that it satisfies weak single crossing. Hence, by Proposition 4.4, the hypotheses of Theorem 4.1 are satisfied and the result follows.

When G.1–G.6 hold, it is often possible to apply Theorem 4.1 by verifying condition (i) of Proposition 4.4. But there are important exceptions. For example, Reny and Zamir (2004) have shown in the context of asymmetric first-price auctions that when bidders have distinct and finite bid sets, monotone best replies exist even though weak single crossing fails. Furthermore, since action sets (i.e., real-valued bids) there are totally ordered, best-reply sets are necessarily join-closed and so the hypotheses of Theorem 4.1 are satisfied even though condition (i) of Proposition 4.4 is not. A similar situation arises in the context of multi-unit discriminatory auctions with risk-averse bidders (see Section 5.2 below). There, under constant absolute risk aversion (CARA), weak
quasisupermodularity fails, but sets of monotone best replies are nonetheless nonempty and join-closed because condition (ii) of Proposition 4.4 is satisfied.

4.2. Symmetric Games

We very briefly provide a companion result for symmetric Bayesian games. If x = (x1, ..., xN) is an N-vector and π is a permutation of 1, ..., N, let xπ denote the N-vector whose ith coordinate is xπ(i). Also, let u(a, t) denote the N-vector of the players' payoffs when the vector of actions and types is (a, t). The Bayesian game G defined above is symmetric if for every permutation, π, of 1, 2, ..., N, the following conditions hold:
(i) T1 = · · · = TN (hence, 𝒯1 = · · · = 𝒯N) and the partial orders on all the Ti are the same.
(ii) A1 = · · · = AN and the partial orders on all the Ai are the same.
(iii) μ(D) = μ({tπ ∈ T : t ∈ D}) for every D ∈ 𝒯.45
(iv) u(aπ, tπ) = uπ(a, t) for every (a, t) ∈ A × T.
A pure-strategy equilibrium is symmetric if each player employs the same pure strategy.

THEOREM 4.5: If G is symmetric, then it possesses a symmetric monotone pure-strategy equilibrium if G.1–G.6 hold, and each player's set of monotone pure best replies is nonempty and join-closed whenever the others employ the same monotone pure strategy.46

We now turn to several applications of our results.

5. APPLICATIONS

The first two of our four applications are to uniform-price and discriminatory auctions with risk-averse bidders who possess independent private information. The novelty is in permitting risk aversion. We consider separately the case in which bids are restricted to a finite grid and the case in which they are not. In the uniform-price auction, values are permitted to be interdependent when bid grids are finite, but are restricted to be private when bids can be any nonnegative number. In each of these cases it is currently not known whether a pure-strategy equilibrium exists.

45 Because T1 = · · · = TN, D ∈ 𝒯 implies {tπ ∈ T : t ∈ D} ∈ 𝒯.
46 To prove Theorem 4.5, let M1 denote player 1's (and hence each player's) set of monotone pure strategies and consider the correspondence B : M1 ⇉ M1, where B(s1) is the set of monotone pure-strategy best replies of player 1 when all other players employ the monotone pure strategy s1 ∈ M1. By following steps analogous to those in the proof of Theorem 4.1, one shows that the hypotheses of Theorem 2.1 are satisfied, so that B has a fixed point ŝ1 ∈ M1. The conditions defining a symmetric game ensure that (ŝ1, ..., ŝ1) is then a symmetric monotone pure-strategy equilibrium.
In the discriminatory auction, we restrict values to be private both when bid grids are finite and when they are not. In the finite-grid case, Theorem 4 of Milgrom and Weber (1985) implies the existence of a pure-strategy equilibrium. However, the existence of a monotone pure-strategy equilibrium remains an important open question. In particular, monotonicity in the finite-grid case is crucial for establishing the existence of a (monotone) pure-strategy equilibrium in the unrestricted bid case, where the existence of a pure-strategy equilibrium (monotone or otherwise) is an open question. Indeed, our technique for establishing existence with unrestricted bid sets is to consider limits of finite-grid equilibria as the grid becomes ever finer. Without monotonicity, one cannot ensure the existence of a convergent subsequence of pure strategies and the technique would fail.

For the uniform-price auction, McAdams (2007) contains a counterexample to the existence of a monotone pure-strategy equilibrium when bidders are risk averse. This, it turns out, is due to the use of the coordinatewise partial order over the bidders' types. However, the economics of the auction setting (both uniform-price and discriminatory) calls for a partial order over types that ensures, for each k, that when a bidder's type "increases," so too does his marginal utility of winning a kth unit of the good. Only then can one reasonably expect that a bidder will bid more for each unit when his type rises. The coordinatewise partial order enjoys this property only under risk neutrality, while the partial order we introduce—which reduces to the coordinatewise partial order under risk neutrality—always has this property. Using our methods, which permit flexibility in the partial orders employed, we establish the existence of pure-strategy equilibria that are monotone in a new, but economically meaningful, partial order over types in both the uniform-price and discriminatory multi-unit auctions, whether bids are restricted to a finite grid or not.

Our third application illustrates how the existence of a pure-strategy equilibrium can be established in a multidimensional type setting when the players' interim payoff functions exhibit strict single crossing in even a single coordinate of their type. The example is economically interesting because it yields a pure-strategy equilibrium in an oligopoly setting without substitute goods. It is technically interesting because one cannot easily obtain the existence of a pure-strategy equilibrium through alternative means. For example, one might first apply Theorem 1 of Milgrom and Weber (1985) to correctly conclude that the game possesses an equilibrium in distributional strategies. One might then hope to conclude that strict single crossing, even in just one coordinate, implies that all such equilibria must be pure. But this second step can fail because, in the example, strict single crossing is sure to hold only when the other players employ monotone pure strategies, and need not hold when, for example, they employ arbitrary distributional strategies.

The final application is to Bayesian games with type spaces containing atoms, where it is shown that our main result establishes the existence of what we call monotone mixed-strategy equilibria.
5.1. Uniform-Price Multi-Unit Auctions With Risk-Averse Bidders

Consider a uniform-price auction with n bidders and m homogeneous units of a single good for sale. Each bidder i simultaneously submits a bid, bi = (bi1, ..., bim), where bi1 ≥ · · · ≥ bim and each bik is taken from the set B ⊆ [0, 1], where B contains both 0 and 1. Call bik bidder i's kth unit bid. The uniform price, p, is the m + 1st highest of all nm unit bids. Each unit bid above p wins a unit at price p, and any remaining units are awarded to unit bids equal to p according to a random-bidder-order tie-breaking rule.47

We begin by considering the case in which the bid set B is finite. Bidder i's private type is a nonincreasing vector ti = (ti1, ..., tim) ∈ [0, 1]^m, so that his type space is Ti = {ti ∈ [0, 1]^m : ti1 ≥ · · · ≥ tim}. Bidder i is risk averse with utility function for money ui : [−m, m] → R, where ui′ > 0 and ui′′ ≤ 0. When the vector of bidder types is t = (t1, ..., tn), bidder i's marginal value for a kth unit is vi(tik, t−i), where vi : [0, 1]^{m(n−1)+1} → [0, 1] is continuous, and ∂vi(tik, t−i)/∂tik is continuous and strictly positive. Consequently, bidder i's ex post utility of winning k units at price p may depend on the types of others and is given by ui(Σ_{j=1}^k vi(tij, t−i) − kp). For notational simplicity, we specialize our arguments, but not our results, to the case in which values are private, i.e., where vi(tik, t−i) = tik.48 Types are chosen independently across bidders, and bidder i's type vector is chosen according to the density fi, which need not be positive on all of Ti.

Multi-unit uniform-price auctions always have trivial equilibria in weakly dominated strategies in which some player always bids very high on all units and all others always bid zero. We wish to establish the existence of monotone pure-strategy equilibria that are not trivial in this sense. But observe that, because the set of feasible bids is finite, bidding above one's marginal value on some unit need not be weakly dominated. Indeed, it might be a strict best reply for bidder i of type ti to bid bik > tik for a kth unit as long as there is no feasible bid in [tik, bik). Such a kth unit bid might permit bidder i to win a kth unit and earn a surplus with high probability rather than risk losing the unit by bidding below tik. On the other hand, in this instance there is never any gain, and there might be a loss, from bidding above bik on a kth unit.

Call a monotone pure-strategy equilibrium nontrivial if for each bidder i, for fi almost every ti, and for every k, bidder i's kth unit bid does not exceed the smallest feasible unit bid greater than or equal to tik.49

47 As in McAdams (2003), the tie-breaking rule is as follows. Bidders are ordered randomly and uniformly. Then, one bidder at a time according to this order, each bidder's total remaining demand (i.e., his number of bids equal to p), or as much as possible, is filled at price p per unit until supply is exhausted.
48 Interdependent values introduce no substantive complications.
49 Alternatively, in the case of interdependent values, the smallest feasible unit bid greater than or equal to sup_{t−i} vi(tik, t−i).
As shown by McAdams (2007), under the coordinatewise partial order on type and action spaces, nontrivial monotone pure-strategy equilibria need not exist when bidders are risk averse, as we permit here. Nonetheless, we will demonstrate that a nontrivial monotone pure-strategy equilibrium does exist under an economically meaningful partial order on type spaces that differs from the coordinatewise partial order; we maintain the coordinatewise partial order on the action space B^m of m-vectors of unit bids.

Before introducing the new partial order, it is instructive to see what goes wrong with the coordinatewise partial order on types. The heart of the matter is that single crossing fails. To see why, it is enough to consider the case of two units. Fix monotone pure strategies for the other bidders and consider two bids for bidder i, b̄i = (b̄i1, b̄i2) and b̲i = (b̲i1, b̲i2), where b̄ik > b̲ik for k = 1, 2. Suppose that when bidder i employs the high bid, b̄i, he is certain to win both units and pay p̄ for each, while he is certain to win only one unit when he employs the low bid, b̲i. Further suppose that the low bid yields a price for the one unit he wins that is either p̲ or p′ > p̲, each being equally likely. Thus, the expected difference in his payoff from employing the high bid versus the low one can be written as

(1/2)[ui(ti1 + ti2 − 2p̄) − ui(ti1 − p′)] + (1/2)[ui(ti1 + ti2 − 2p̄) − ui(ti1 − p̲)],

where we suppose that the first square-bracketed term is positive and the second is negative. Single crossing requires the above average of the bracketed terms, when nonnegative, to remain nonnegative when bidder i's type increases according to the coordinatewise partial order, i.e., when ti1 and ti2 increase. But this can fail when risk aversion is strict, because the first bracketed term, being positive, strictly falls when ti1 increases. Consequently, the average of the bracketed terms can become negative since, even though the negative second bracketed term increases with ti1, it may not increase by much.

The economic intuition for the failure of single crossing is straightforward. Under risk aversion, the marginal utility of winning a second unit falls when the dollar value of a first unit rises, giving the bidder an incentive to reduce his second unit bid so as to reduce the price paid on the first unit.

We now turn to the new partial order, which ensures that a higher type implies a higher marginal utility of winning each additional unit. Thus, this new partial order has economic content and is not merely a technical device used to establish the existence of a pure-strategy equilibrium.
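Before doing so, the failure just described is easy to exhibit numerically. In the following sketch all parameters are our own stylized choices; the strictly concave (DARA) utility w ↦ ln(w + 0.15) stands in for ui and is valid only for the surpluses evaluated here, and the winning events are simply posited as in the text above:

```python
import math

u = lambda w: math.log(w + 0.15)        # strictly concave (DARA) utility
p_bar, p_low, p_high = 0.5, 0.2, 0.6    # high bid pays p_bar per unit;
                                        # low bid pays p_low or p_high

def gain(t1, t2):
    # expected payoff difference: high bid (wins 2 units at p_bar each)
    # minus low bid (wins 1 unit at p_low or p_high, equally likely)
    first  = u(t1 + t2 - 2 * p_bar) - u(t1 - p_high)   # positive bracket
    second = u(t1 + t2 - 2 * p_bar) - u(t1 - p_low)    # negative bracket
    return 0.5 * first + 0.5 * second

print(gain(0.455, 0.45))   # ~ +0.20: the high bid is preferred ...
print(gain(0.465, 0.45))   # ~ -0.19: ... but not by the coordinatewise-higher
                           # type, so single crossing fails
```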
FIGURE 5.1.—Types that are ordered with ti0 are bounded between two lines through ti0, one line being vertical and the other having slope αi.

For each bidder i, let αi = ui′(−m)/ui′(m) − 1 ≥ 0 and consider the partial order, ≥i, on Ti defined as follows: ti′ ≥i ti if

(5.1)  ti1′ ≥ ti1 and
       tik′ − αi(ti1′ + · · · + tik−1′) ≥ tik − αi(ti1 + · · · + tik−1) for all k ∈ {2, ..., m}.50

50 Under interdependent values, this second condition becomes
vi(tik′, t−i) − αi(vi(ti1′, t−i) + · · · + vi(tik−1′, t−i)) ≥ vi(tik, t−i) − αi(vi(ti1, t−i) + · · · + vi(tik−1, t−i))
for all t−i and all k ∈ {2, ..., m}.
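The order in (5.1) is straightforward to operationalize. Here is a direct transcription (a sketch of ours; the function name is hypothetical), which also shows that coordinatewise dominance alone does not imply ti′ ≥i ti when αi > 0:

```python
import numpy as np

def geq_i(t_hi, t_lo, alpha):
    # t_hi >=_i t_lo per (5.1): first coordinates are ordered and, for each
    # k >= 2, t_k - alpha*(t_1 + ... + t_{k-1}) is weakly larger at t_hi.
    t_hi, t_lo = np.asarray(t_hi, float), np.asarray(t_lo, float)
    if t_hi[0] < t_lo[0]:
        return False
    return all(t_hi[k] - alpha * t_hi[:k].sum()
               >= t_lo[k] - alpha * t_lo[:k].sum()
               for k in range(1, len(t_hi)))

alpha = 0.5
print(geq_i([0.6, 0.5], [0.5, 0.4], alpha))  # True: 2nd coordinate rises enough
print(geq_i([0.9, 0.5], [0.5, 0.4], alpha))  # False, even though (0.9, 0.5)
                                             # dominates (0.5, 0.4) coordinatewise
```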
Figure 5.1 shows the types that are greater than and less than a typical type, ti0, when types are two-dimensional, i.e., when m = 2. In that case, one type is considered greater than another if the one type is coordinatewise greater and if, in addition, the increase in the second coordinate of the type vector is sufficiently high relative to the increase in the first coordinate. Only then will the bidder's marginal utilities of winning both a first and a second unit increase, and only then will he have an incentive to increase his first and second unit bids.

Under the Euclidean metric and Borel sigma algebra on the type space, the partial order ≥i defined by (5.1) is clearly closed, so that G.1 is satisfied. Because the marginal distribution of each player's type has a density, G.2 is satisfied as well. To see that G.3 is satisfied, let Ti0 be the set of points in Ti with rational coordinates and suppose that ∫_B fi(ti) dti > 0 for some Borel subset B of Ti. Then B must have positive Lebesgue measure in R^m. Consequently, by Fubini's theorem, there exists z ∈ R^m (indeed there is a positive Lebesgue measure of such z's) such that the line defined by z + R·((1 + αi), (1 + αi)^2, ..., (1 + αi)^m)
intersects B in a set of positive one-dimensional Lebesgue measure on the line. Therefore, we may choose two distinct points, ti and ti′, in B that are on this line. Hence, ti′ − ti = β·((1 + αi), (1 + αi)^2, ..., (1 + αi)^m) for some β > 0. But then ti1′ − ti1 = β(1 + αi) > 0 and, for k ∈ {2, ..., m},

tik′ − tik = β(1 + αi)^k
          = β(1 + αi)(1 + αi[1 + (1 + αi) + · · · + (1 + αi)^{k−2}])
          = β(1 + αi) + αi[β(1 + αi) + β(1 + αi)^2 + · · · + β(1 + αi)^{k−1}]
          = β(1 + αi) + αi[(ti1′ − ti1) + (ti2′ − ti2) + · · · + (tik−1′ − tik−1)]
          > αi[(ti1′ − ti1) + (ti2′ − ti2) + · · · + (tik−1′ − tik−1)].

Consequently, for any ti0 ∈ Ti0 close enough to (ti + ti′)/2, ti′ ≥i ti0 ≥i ti according to the partial order ≥i defined by (5.1). Hence, G.3 is satisfied.

As noted in Section 3.1, action spaces, being finite sublattices, are compact locally complete metric semilattices. Hence, G.4 and G.5(ii) hold. Also, G.6 holds because action spaces are finite. Thus, we have so far verified G.1–G.6.

In McAdams (2003) it is shown that for any fixed order of players for tie-breaking purposes, the pair of auction outcomes associated with any pair of joint bid vectors b and b′ is identical to the pair of outcomes associated with b ∨ b′ and b ∧ b′. This implies that each bidder's ex post (and hence interim) payoff function is modular and hence quasisupermodular, even under risk aversion.51 By condition (i) of Proposition 4.4, the hypotheses of Theorem 4.1 will, therefore, be satisfied if interim payoffs satisfy weak single crossing, which we now demonstrate. It is here where the new partial order ≥i in (5.1) is fruitfully employed.

To verify weak single crossing, it suffices to show that ex post payoffs satisfy increasing differences. So fix the strategies of the other bidders, a realization of their types, and an ordering of the players for the purposes of tie-breaking. With these fixed, suppose that the bid, b̄, chosen by bidder i of type ti wins k units at the price p̄ per unit, while the coordinatewise-lower bid, b̲, wins j ≤ k units at the price p̲ ≤ p̄ per unit. The difference in i's ex post utility from bidding b̄ versus b̲ is then

(5.2)  ui(ti1 + · · · + tik − kp̄) − ui(ti1 + · · · + tij − jp̲).

51 The particular tie-break rule used both here and in McAdams (2003) is important for this result.
Assuming that ti′ ≥i ti in the sense of (5.1), it suffices to show that (5.2) is weakly greater at ti′ than at ti. Noting that (5.1) implies that til′ ≥ til for every l, it can be seen that if j = k, then (5.2), being negative, is weakly greater at ti′ than at ti by the concavity of ui. It therefore remains only to consider the case in which j < k, where we have

ui(ti1′ + · · · + tik′ − kp̄) − ui(ti1 + · · · + tik − kp̄)
   ≥ ui′(m)[(ti1′ − ti1) + · · · + (tik′ − tik)]
   ≥ ui′(m)[(ti1′ − ti1) + · · · + (tij+1′ − tij+1)]
   ≥ ui′(−m)[(ti1′ − ti1) + · · · + (tij′ − tij)]
   ≥ ui(ti1′ + · · · + tij′ − jp̲) − ui(ti1 + · · · + tij − jp̲),

where the first and fourth inequalities follow from the concavity of ui and because a bidder's surplus lies between m and −m, and the third inequality follows because ti′ ≥i ti in the sense of (5.1). We conclude that weak single crossing holds and so the hypotheses of Theorem 4.1 are satisfied.

Finally, for each player i, let Ci denote the subset of his pure strategies such that for fi almost every ti and for every k, bidder i's kth unit bid does not exceed φ(tik), the smallest feasible unit bid greater than or equal to tik.52 By Remark 1, each Ci is join-closed, piecewise-closed, and pointwise-limit-closed. Further, because the hypotheses of Theorem 4.1 are satisfied, whenever the others employ monotone pure strategies, player i has a monotone best reply, bi′(·) say. Defining bi(ti) to be the coordinatewise minimum of bi′(ti) and (φ(ti1), ..., φ(tim)) for all ti ∈ Ti implies that bi(·) is a monotone best reply contained in Ci. This is because, ex post, any units won by employing bi′(·) that are also won by employing bi(·) are won at a weakly lower price with bi(·), and any units won by employing bi′(·) that are not won by employing bi(·) cannot be won at a positive surplus. Hence, the hypotheses of Theorem 4.3 are satisfied and we conclude that a nontrivial monotone pure-strategy equilibrium exists. We may therefore state the following proposition.

PROPOSITION 5.1: Consider an independent private information, interdependent-value, uniform-price multi-unit auction as above with the random-bidder-order tie-breaking rule. Suppose that bids are restricted to a finite grid, that each bidder i's nonincreasing type vector is chosen according to the density fi, and that each bidder is weakly risk averse. Then there is a pure-strategy equilibrium of the auction with the following properties for each bidder i:
(i) The equilibrium is monotone under the type-space partial order ≥i defined by (5.1) and under the usual coordinatewise partial order on bids.

52 In the case of interdependent values, φ(tik) is the smallest feasible unit bid greater than or equal to sup_{t−i} vi(tik, t−i).
(ii) The equilibrium is nontrivial in the sense that for fi almost all of his types ti and for every k, bidder i's kth unit bid does not exceed the smallest feasible unit bid greater than or equal to sup_{t−i} vi(tik, t−i).

REMARK 4: The partial order defined by (5.1) reduces to the usual coordinatewise partial order under risk neutrality (i.e., when αi = 0), but is distinct from the coordinatewise partial order under strict risk aversion (i.e., when αi > 0), in which case McAdams (2003) does not apply since the coordinatewise partial order is employed there.

REMARK 5: In the private values case, the partial order defined by (5.1) can instead be thought of as a change of variable from ti to, say, xi, where xi1 = ti1 and xik = tik − αi(ti1 + · · · + tik−1) for k > 1, and where the coordinatewise partial order is applied to the new type space. Our results apply equally well using this change-of-variable technique. In contrast, McAdams (2003) still does not apply because the resulting type space is not the product of intervals, an assumption maintained in McAdams (2003) together with a strictly positive joint density.53 See Figure 5.2 for the case in which m = 2. In the more general interdependent values case, there is no obvious change of variable that would render the coordinatewise partial order equivalent to the partial order we use here.

FIGURE 5.2.—After performing the change of variable from ti to xi as described in Remark 5, bidder i's new type space is triangle OAB and it is endowed with the coordinatewise partial order. The figure is drawn for the case in which αi ∈ (0, 1).

53 Indeed, starting with the partial order defined by (5.1), there is no change of variable that, when combined with the coordinatewise partial order, is order-preserving and maps to a product of intervals. This is because, in contrast to a product of intervals with the coordinatewise partial order, under the new partial order, there is never a smallest element of the type space and there is no largest element when αi > 1.
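In code, the change of variable of Remark 5 is immediate (a sketch of ours; the helper name is hypothetical): ti′ ≥i ti in the sense of (5.1) exactly when the transformed vectors are coordinatewise ordered.

```python
import numpy as np

def to_x(t, alpha):
    # x_1 = t_1 and x_k = t_k - alpha*(t_1 + ... + t_{k-1}) for k > 1
    t = np.asarray(t, float)
    return np.array([t[k] - alpha * t[:k].sum() for k in range(len(t))])

alpha = 0.5
t_lo, t_hi = [0.5, 0.4], [0.6, 0.5]
print(np.all(to_x(t_hi, alpha) >= to_x(t_lo, alpha)))  # True, matching (5.1)
```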
In the private-values case, by considering finer and finer finite grids of bids, one can permit unit bids to be any nonnegative real number. The proof of the following corollary of Proposition 5.1 is given in the Appendix.

COROLLARY 5.2: If all the conditions of Proposition 5.1 hold except that bidders' unit bids are permitted to be any nonnegative real number, and if, in addition, values are private (i.e., vi(tik, t−i) = tik), then for any tie-break rule,54 a weakly undominated pure-strategy equilibrium exists that is monotone in the sense described in Proposition 5.1. Moreover, ties occur with probability 0 in every such equilibrium.

54 A tie-break rule specifies, possibly randomly, how any units that remain after awarding a unit to each unit bid above the m + 1st highest are distributed among the unit bids equal to the m + 1st highest.

5.2. Discriminatory Multi-Unit Auctions With CARA Bidders

Consider the same finite bid set and private-values setup as in Section 5.1 with three exceptions. First, change the payment rule so that each bidder pays his kth unit bid for a kth unit won. Second, assume that each bidder's utility function, ui, exhibits constant absolute risk aversion. Third, assume that values are private, i.e., that vi(tik, t−i) = tik.

Despite these changes, single crossing still fails under the coordinatewise partial order on types, for the same underlying reason as in a uniform-price auction with risk-averse bidders. Nonetheless, the same methods as in the previous section demonstrate that assumptions G.1–G.6 hold here and that each bidder i's interim payoff function satisfies weak single crossing under the partial order, ≥i, defined in (5.1).55 For the remainder of this section, we therefore employ the type-space partial order ≥i defined in (5.1) and the coordinatewise partial order on the space of feasible bid vectors. Monotonicity of pure strategies is then defined in terms of these partial orders.

55 This statement remains true with any risk-averse utility function. The CARA utility assumption is required for a different purpose, which will be revealed shortly.

If it could be shown that interim payoffs are quasisupermodular, condition (i) of Proposition 4.4 would permit us to apply Theorem 4.1. However, quasisupermodularity does not hold in discriminatory auctions with strictly risk-averse bidders—even CARA bidders. The intuition for the failure of quasisupermodularity is as follows. Suppose there are two units and let bk denote a kth unit bid. Fixing b2, suppose that b1 is chosen to maximize a bidder's interim payoff when his type is (t1, t2), namely,

P1(b1)[u(t1 − b1) − u(0)] + P2(b2)[u((t1 − b1) + (t2 − b2)) − u(t1 − b1)],
where Pk(bk) is the probability of winning at least k units.56 There are two benefits from increasing b1. First, the probability, P1(b1), of winning at least one unit increases. Second, when risk aversion is strict, the marginal utility, u((t1 − b1) + (t2 − b2)) − u(t1 − b1), of winning a second unit increases. The cost of increasing b1 is that the marginal utility, u(t1 − b1) − u(0), of winning a first unit decreases. Optimizing over the choice of b1 balances this cost with the two benefits.

For simplicity, suppose that the optimal choice of b1 satisfies b1 > t2. Now suppose that b2 increases. Indeed, suppose that b2 increases to t2. Then the marginal utility of winning a second unit vanishes. Consequently, the second benefit from increasing b1 is no longer present and the optimal choice of b1 may fall—even with CARA utility. This illustrates that the change in utility from increasing one's first unit bid may be positive when one's second unit bid is low, but negative when one's second unit bid is high. Thus, the different coordinates of a bidder's bid are not necessarily complementary, and weak quasisupermodularity can fail. We therefore cannot appeal to condition (i) of Proposition 4.4.

Fortunately, we can instead appeal to condition (ii) of Proposition 4.4, owing to the following lemma, whose proof is given in the Appendix. It is here where we employ the assumption of CARA utility.

LEMMA 5.3: Fix any monotone pure strategies for the other bidders and suppose that the vector of bids bi is optimal for bidder i when his type vector is ti and that bi′ is optimal when his type is ti′ ≥i ti, where ≥i is the partial order defined in (5.1). Then the vector of bids bi ∨ bi′ is optimal when his type is ti′.

Because Lemma 5.3 establishes condition (ii) of Proposition 4.4, we may apply Theorem 4.1 to conclude that a monotone pure-strategy equilibrium exists. Thus, despite the failure—even with CARA utilities—of both single crossing with the coordinatewise partial order on types and of weak quasisupermodularity with the coordinatewise partial order on bids, we have established the following proposition.

PROPOSITION 5.4: Consider an independent private-value discriminatory multi-unit auction as above with the random-bidder-order tie-breaking rule and in which bids are restricted to a finite grid. Suppose that each bidder i's vector of marginal values is nonincreasing and chosen according to the density fi, and that each bidder is weakly risk averse and exhibits constant absolute risk aversion. Then there is a pure-strategy equilibrium that is monotone under the type-space partial order ≥i defined by (5.1) and under the usual coordinatewise partial order on bids.

56 Our tie-breaking rule ensures that, given the others' strategies, the probability of winning at least k units depends only on one's kth unit bid.
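A small grid search illustrates the non-complementarity described above (all ingredients here are our own stylized choices: CARA utility with γ = 5, type (t1, t2) = (0.8, 0.5), and reduced-form winning probabilities P1(b1) = b1, P2(b2) = b2 rather than probabilities derived from opponents' strategies):

```python
import numpy as np

gamma, t1, t2 = 5.0, 0.8, 0.5
u = lambda w: -np.exp(-gamma * w)     # CARA utility

def interim(b1, b2):
    P1, P2 = b1, b2                   # stylized winning probabilities
    return (P1 * (u(t1 - b1) - u(0.0))
            + P2 * (u((t1 - b1) + (t2 - b2)) - u(t1 - b1)))

grid = np.linspace(0.0, 0.8, 8001)
for b2 in (0.4, 0.5):                 # b2 = t2 kills the second-unit margin
    b1_star = grid[np.argmax(interim(grid, b2))]
    print(f"b2 = {b2}: optimal b1 = {b1_star:.3f}")
# The optimal b1 falls as b2 rises, so b1 and b2 are not complementary.
```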
The proof of the following corollary is provided in the Appendix.

COROLLARY 5.5: When the bidders' unit bids are permitted to be any nonnegative real number, the conclusions of Proposition 5.4 remain valid for any tie-break rule.57 Moreover, ties occur with probability 0 in every equilibrium.

The two applications provided so far demonstrate that it is useful to have flexibility in defining the partial order on the type space, since the mathematically natural partial order (in this case the coordinatewise partial order on the original type space) may not be the partial order that corresponds best to the economics of the problem. The next application shows that even when single crossing cannot be established for all coordinates of the type space jointly, it is enough for the existence of a pure-strategy equilibrium if single crossing holds strictly for even a single coordinate of the type space.

5.3. Price Competition With Nonsubstitutes

Consider an n-firm differentiated-product price-competition setting. Firm i chooses price pi ∈ [0, 1] and receives two pieces of private information: his constant marginal cost ci ∈ [0, 1] and information xi ∈ [0, 1] about the state of demand in each of the n markets. The demand for firm i's product is Di(p, x) when the vector of prices chosen by all firms is p ∈ [0, 1]^n and when their joint vector of private information about market demand is x ∈ [0, 1]^n. Demand functions are assumed to be twice continuously differentiable, strictly positive when own-price is less than 1, and strictly downward-sloping, by which we mean ∂Di(p, x)/∂pi < 0.

Some products may be substitutes, but others need not be. More precisely, the n firms are partitioned into two subsets, N1 and N2.58 Products produced by firms within each subset are substitutes, and so we assume that Di(p, x) and ∂Di(p, x)/∂pi are nondecreasing in pj whenever i and j are in the same Nk. In addition, marginal costs are affiliated among firms within each Nk and are independent across the two subsets of firms. The joint density of costs is given by the continuously differentiable density f(c) on [0, 1]^n. Information about market demand may be correlated across firms, but is independent of all marginal costs and has continuously differentiable joint density g(x) on [0, 1]^n. We do not assume that market demands are nondecreasing in x because we wish to permit the possibility that information that increases demand for some products might decrease it for others.

57 See footnote 54.
58 The extension to any finite number of subsets is straightforward.
Given pure strategies pj(cj, xj) for the others, firm i's interim profits are

(5.3)  vi(pi, ci, xi) = ∫ (pi − ci) Di(pi, p−i(c−i, x−i), x) gi(x−i|xi) fi(c−i|ci) dx−i dc−i,

so that

(5.4)  ∂²vi(pi, ci, xi)/∂ci ∂pi = −E(∂Di/∂pi | ci, xi) + (∂/∂ci) E(Di | ci, xi)
                                  + (pi − ci)(∂/∂ci) E(∂Di/∂pi | ci, xi).

Note that both partial derivatives with respect to ci on the right-hand side of (5.4) are nonnegative. For example, consider the expectation in the first partial derivative (the second is similar) and suppose that i ∈ N1. Then

E(Di | ci, xi) = E[ E( Di(pi, p−i(c−i, x−i), x) | ci, xi, (cj, xj)j∈N2 ) | ci, xi ].

The inner expectation is nondecreasing in ci because the marginal costs of firms in N1 are affiliated, their prices are nondecreasing in their costs, and their goods are substitutes. That the entire expectation is nondecreasing in ci follows from the independence of (ci, xi) and (cj, xj)j∈N2. Therefore, if pj(cj, xj) is nondecreasing in cj for each firm j ≠ i and every xj, then

(5.5)  ∂²vi(pi, ci, xi)/∂ci ∂pi ≥ −E(∂Di/∂pi | ci, xi) > 0

for all pi, ci, xi ∈ [0, 1] such that pi ≥ ci.

Thus, according to (5.5), when pi ≥ ci, single crossing holds strictly for the marginal cost coordinate of the type space. On the other hand, single crossing need not hold for the market-demand coordinate, xi, since we have made no assumptions about how xi affects demand.59 Nonetheless, we shall now define a partial order on firm i's type space Ti = [0, 1]^2 under which a monotone pure-strategy equilibrium exists.

Note that because −∂Di/∂pi is positive and continuous on its compact domain, it is bounded strictly above zero with a bound that is independent of the pure strategies, pj(cj, xj), employed by other firms. Hence, because our continuity assumptions imply that ∂²vi(pi, ci, xi)/∂xi ∂pi is bounded, there exists

59 We cannot simply restrict attention to strategies pi(ci, xi) that are monotone in ci and jointly measurable in (ci, xi), because this set of pure strategies is not compact in a topology that renders ex ante payoffs continuous.
αi > 0 such that for all β ∈ [0, αi] and all pure strategies pj(cj, xj) nondecreasing in cj,

(5.6)  ∂²vi(pi, ci, xi)/∂ci ∂pi + β ∂²vi(pi, ci, xi)/∂xi ∂pi > 0

for all pi, ci, xi ∈ [0, 1] such that pi ≥ ci. Inequality (5.6) implies that when pi ≥ ci, the marginal gain from increasing one's price, namely, ∂vi(pi, ci, xi)/∂pi, is strictly increasing along lines in (ci, xi) space with slope β ∈ [0, αi]. This provides a basis for defining a partial order under which players possess monotone best replies. For each player i, define the partial order ≥i on Ti = [0, 1]^2 as follows:

(ci′, xi′) ≥i (ci, xi) if αi ci′ − xi′ ≥ αi ci − xi and xi′ ≥ xi.

FIGURE 5.3.—Types that are greater than and less than ti0 are bounded between two lines through ti0, one line being horizontal, the other having slope αi.

Figure 5.3 shows those types greater than and less than a typical type ti0 = (ci0, xi0). Under the partial order ≥i, assumptions G.1–G.3 hold as in Example 5.1. The action-space assumption G.4 clearly holds, while G.5(ii) holds by Lemma A.19 given the usual partial order over the reals. Assumption G.6 holds by our continuity assumption on demand. Also, because the action space [0, 1] is totally ordered, the set of monotone best replies is join-closed because the join of two best replies is, at every ti, equal to one of them or to the other. Finally, as is shown in the Appendix (see Lemma A.22), under the type-space partial order ≥i, firm i possesses a monotone best reply when the others employ monotone pure strategies. Therefore, by Theorem 4.1, there exists a pure-strategy equilibrium in which each firm's price is monotone in (ci, xi) according to ≥i. In particular, there is a pure-strategy equilibrium in which each firm's price is nondecreasing in his marginal cost, the coordinate in which strict single crossing holds.
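As with the auction applications, this order is a one-liner to transcribe (a sketch of ours; the helper name is hypothetical). A type increase means that the demand signal xi rises, but by no more than αi times the rise in the cost ci:

```python
def geq_i(q_hi, q_lo, alpha):
    # (c', x') >=_i (c, x) iff alpha*c' - x' >= alpha*c - x and x' >= x
    (c_hi, x_hi), (c_lo, x_lo) = q_hi, q_lo
    return alpha * c_hi - x_hi >= alpha * c_lo - x_lo and x_hi >= x_lo

alpha = 0.5
print(geq_i((0.8, 0.6), (0.5, 0.5), alpha))  # True: cost rises enough
print(geq_i((0.6, 0.6), (0.5, 0.5), alpha))  # False: x outpaces alpha*(rise in c)
```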
5.4. Type Spaces With Atoms

When type spaces contain atoms, assumption G.2 fails and there may not exist a pure-strategy equilibrium, let alone a monotone pure-strategy equilibrium. Thus, one must permit mixing, and we show here how our results can be used to ensure the existence of a monotone mixed-strategy equilibrium.

We follow Aumann (1964) and define a mixed strategy for player i to be a measurable function mi : Ti × [0, 1] → Ai, where [0, 1] is endowed with the Borel sigma algebra ℬ and Ti × [0, 1] is endowed with the product sigma algebra 𝒯i × ℬ. Mixed strategies m1, ..., mN for the N players are implemented as follows. The players' types t1, ..., tN are drawn jointly according to μ and then, independently, each player i privately draws ωi from [0, 1] according to a uniform distribution. Player i, knowing ti and ωi, takes the action mi(ti, ωi). Player i's payoff given the mixed strategies m1, ..., mN is therefore

∫_T ∫_{[0,1]^N} ui(m(t, ω), t) dω dμ, where m(t, ω) = (m1(t1, ω1), ..., mN(tN, ωN)).

Call a mixed strategy mi : Ti × [0, 1] → Ai monotone if the image of mi(ti, ·), i.e., the set mi(ti, [0, 1]), is a totally ordered subset of Ai for every ti ∈ Ti, and if every member of the image of mi(ti′, ·) is greater than or equal to every member of the image of mi(ti, ·) whenever ti′ ≥ ti.60 Loosely, a mixed strategy is monotone if, whenever a player's type randomizes over actions, any two actions in the support of his mixture are ordered. Moreover, every action in the support of one type's mixture is greater than every action in the support of any lower type's mixture. The following result permits a player's marginal type distribution to contain atoms, even countably many.

THEOREM 5.6: If G.1 and G.3–G.6 hold, and each player's set of monotone pure best replies is nonempty and join-closed whenever the others employ monotone mixed strategies, then G possesses a monotone mixed-strategy equilibrium.

PROOF: For each player i, let Ti* denote the set of atoms of μi. Consider the following surrogate Bayesian game. Player i's type space is Qi = [(Ti \ Ti*) × {0}] ∪ (Ti* × [0, 1]), and the sigma algebra on Qi is generated by all sets of the form (B \ Ti*) × {0} and (B ∩ Ti*) × C, where B ∈ 𝒯i and C is a Borel subset of [0, 1]. The joint distribution on types, ν, is determined as follows. Nature first chooses t ∈ T according to the original type distribution μ. Then, for each i, Nature independently and uniformly chooses xi ∈ [0, 1] if ti ∈ Ti*, and chooses xi = 0 if ti ∈ Ti \ Ti*.61 Hence, νi, the marginal distribution on Qi, is atomless.

60 A subset of a partially ordered space is totally ordered if any two members of the subset are ordered. Such a subset is sometimes also called a chain.
61 In particular, if for each player i, Bi ∈ 𝒯i and Ci is a Borel subset of [0, 1], and D = ×_{i∈I}[(Bi \ Ti*) × {0}] × ×_{i∈I^c}[(Bi ∩ Ti*) × Ci], then ν(D) = μ([×_{i∈I}(Bi \ Ti*)] × [×_{i∈I^c}(Bi ∩ Ti*)]) · Π_{i∈I^c} λ(Ci), where λ is Lebesgue measure on [0, 1].
530
PHILIP J. RENY
Player i is informed of qi = (ti xi ) Action spaces are unchanged. The xi are payoff irrelevant and so payoff functions are as before. This completes the description of the surrogate game. The partial order on Qi is the lexicographic partial order. That is, qi =
(ti x i ) ≥ (ti xi ) = qi if either ti ≥ ti and ti = ti or ti = ti and x i ≥ xi The metrics and partial orders on the players’ action spaces are unchanged. It is straightforward to show that under the hypotheses above, all the hypotheses of Theorem 4.1 but perhaps G.3 hold in the surrogate game.62 We now show that G.3 too holds in the surrogate game. For each player i let Ti0 denote the countable subset of Ti that can be used to verify G.3 in the original game and define the countable set Qi0 = [Ti0 × {0}] ∪ [Ti∗ × R] where R denotes the set of rationals in [0 1]. Suppose that for some player i, νi (B) > 0 for some measurable subset B of Qi Then either νi (B ∩ [(Ti \ Ti∗ ) × {0}]) > 0 or νi (B ∩ ({ti∗ } × [0 1])) > 0 for some ti∗ ∈ Ti∗ In the former case, μi ({ti ∈ Ti \ Ti∗ : (ti 0) ∈ B}) > 0 and G.3 in the original game implies the existence of ti and ti
in {ti ∈ Ti \ Ti0 : (ti 0) ∈ B} and ti0 in Ti0 such that ti
≥ ti0 ≥ ti according to the partial order on Ti But then (ti
0) ≥ (ti0 0) ≥ (ti 0) according to the lexicographic partial order on Qi , and where (ti
0) and (ti 0) are in B and (ti0 0) is in Qi0 In the latter case, there exist x i xi in [0 1] with x i > xi > 0 such that (ti∗ xi ) and (ti∗ x i ) are in B But for any rational r between x i and xi , we have (ti∗ x i ) ≥ (ti∗ r) ≥ (ti∗ xi ) according to the lexicographic order on Qi and where (ti∗ r) is in Qi0 Thus, the surrogate game satisfies G.3 and we may conclude, by Theorem 4.1, that it possesses a monotone pure-strategy equilibrium. But any such equilibrium induces a monotone mixed-strategy equilibrium of the original game. Q.E.D. REMARK 6: The proof of Theorem 5.6, in fact, demonstrates that players need only randomize when their type is an atom. 6. PROOF OF THEOREM 4.1 Let Mi denote the nonempty set of monotone functions from Ti into Ai ,
×
N
and let M = i=1 Mi By Lemma A.11, every element of Mi is equal μi almost everywhere to a measurable monotone function, and so Mi coincides with player i’s set of monotone pure strategies. Let Bi : M−i Mi denote player i’s best-reply correspondence when all players must employ monotone pure strategies. Because, by hypothesis, each player possesses a monotone best reply (among all strategies) when the others employ monotone pure strategies, n any fixed point of i=1 Bi : M M is a monotone pure-strategy equilibrium. The following steps demonstrate that such a fixed point exists.
×
62 Observe that a monotone pure strategy in the surrogate game induces a monotone mixed strategy in the original game, and that a monotone pure strategy in the original game defines a monotone pure strategy in the surrogate game by viewing it to be constant in xi
531
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
Step I—M Is a Nonempty, Compact, Metric, Absolute Retract: Without loss, we may assume for each player i that the metric di on Ai is bounded.63 Given di define the metric δi on Mi by64 δi (si si ) = di (si (ti ) si (ti )) dμi (ti ) Ti
By Lemmas A.13 and A.16, each (Mi δi ) is a compact absolute retract.65 Consequently, under the product topology—metrized by the sum of the δi —M is a nonempty compact metric space and, by Borsuk (1966, IV, (7.1)), an absolute retract. n Step II— i=1 Bi Is Nonempty-Valued and Upper Hemicontinuous: We first demonstrate that, given the metric spaces (Mj δj ) each player i’s payoff function, Ui : M → R is continuous under the product topology. To see this, suppose that sn is a sequence of joint strategies in M and that sn → s ∈ M By Lemma A.12, for each player i sin (ti ) → si (ti ) for μi almost every ti ∈ Ti . Consequently, sn (t) → s(t) for μ almost every t ∈ T66 Hence, since ui is bounded, Lebesgue’s dominated convergence theorem yields Ui (sn ) = ui (sn (t) t) dμ(t) → ui (s(t) t) dμ(t) = Ui (s)
×
T
T
establishing the continuity of Ui Because each Mi is compact, Berge’s theorem of the maximum implies that n Bi : M−i Mi is nonempty-valued and upper hemicontinuous. Hence, i=1 Bi is nonempty-valued and upper hemicontinuous as well. n Step III— i=1 Bi Is Contractible-Valued: According to Lemma A.3, for each player i assumptions G.1–G.3 imply the existence of a monotone and measurable function Φi : Ti → [0 1] such that μi {ti ∈ Ti : Φi (ti ) = c} = 0 for every c ∈ [0 1]67 Fixing such a function Φi permits the construction of a contraction map as follows.
×
×
For any metric, d(· ·) a topologically equivalent bounded metric is min(1 d(· ·)). Formally, the resulting metric space (Mi δi ) is the space of equivalence classes of functions in Mi that are equal μi almost everywhere, i.e., two functions are in the same equivalence class if the set on which they coincide contains a measurable subset having μi measure 1. Nevertheless, analogous to the standard treatment of Lp spaces, in the interest of notational simplicity, we focus on the elements of the original space Mi rather than on the equivalence classes themselves. 65 One cannot improve on Lemma A.16 by proving, for example, that Mi metrized by δi is homeomorphic to a convex set. It need not be (e.g., see footnote 31). 66 This is because if Q1 Qn are such that μ(Qi × T−i ) = μi (Qi ) = 1 for all i then μ( i Qi ) = μ( i (Qi × T−i )) = 1 67 For example, if Ti = [0 1]2 and μi is absolutely continuous with respect to Lebesgue measure, we may take Φi (ti ) = (ti1 + ti2 )/2 63 64
×
532
PHILIP J. RENY
To construct the contraction, we require each player i to have pointwise everywhere largest best replies, not merely best replies that are pointwise μi almost everywhere largest. The existence of such best replies is established next. Fix some monotone pure strategy, s−i for players other than i, and consider player i’s set of monotone pure best replies, Bi (s−i ). We wish to show that there exists s¯i ∈ Bi (s−i ) such that s¯i (ti ) ≥ si (ti ) for every ti ∈ Ti and every si ∈ Bi (s−i ) A natural idea is to define s¯i (ti ) = ∨si (ti ) for each ti ∈ Ti where the join is taken over all si ∈ Bi (s−i ). However, because each si ∈ Bi (s−i ) is an interim best reply against s−i only for μi a.e. ti it is not at all clear that s¯i so defined, is a member of Bi (s−i ) Thus, we must proceed more carefully. Because Bi (·) is upper hemicontinuous, it is closed-valued and, therefore, Bi (s−i ) is compact, being a closed subset of the compact metric space Mi By hypothesis, Bi (s−i ) is nonempty and join-closed, and so Bi (s−i ) is a compact semilattice under the partial order defined by si ≥ si if si (ti ) ≥ si (ti ) for μi a.e. ti ∈ Ti . By Lemma A.12, this partial order is closed. Therefore, Lemma A.6 implies that Bi (s−i ) is a complete semilattice so that s˜i = ∨Bi (s−i ) is a well defined member of Bi (s−i ). Consequently for every si ∈ Bi (s−i ), s˜i (ti ) ≥ si (ti ) for μi a.e. ti ∈ Ti By Lemma A.14, there exists s¯i ∈ Mi such that s¯i (ti ) = s˜i (ti ) for μi a.e. ti (and hence s¯i ∈ Bi (s−i )) and such that s¯i (ti ) ≥ si (ti ) for every ti ∈ Ti and every si that is μi a.e. less than or equal to s˜i , and, therefore, in particular for every si ∈ Bi (s−i ) This yields the desired pointwise everywhere upper bound, s¯i for Bi (s−i ) Define h : [0 1] × Bi (s−i ) → Bi (s−i ) as follows: For every ti ∈ Ti s (t ) if Φi (ti ) ≤ 1 − τ and τ < 1, (6.1) h(τ si )(ti ) = i i s¯i (ti ) otherwise. Note that h(0 si ) = si h(1 si ) = s¯i and h(τ si )(ti ) is always either s¯i (ti ) or si (ti ), and so is an interim best reply for μi almost every ti . Moreover, h(τ si ) is monotone because Φi is monotone and s¯i (ti ) ≥ si (ti ) for every ti ∈ Ti Hence, h(τ si ) ∈ Bi (s−i ) Therefore, h will be a contraction for Bi (s−i ) and Bi (s−i ) will be contractible if h(τ si ) is continuous, which we establish next.68 Suppose τn ∈ [0 1] converges to τ and sin ∈ Bi (s−i ) converges to si both as n → ∞ By Lemma A.12, there is a measurable subset, D of i’s types such that μi (D) = 1 and for all ti ∈ D sin (ti ) → si (ti ) Consider any ti ∈ D There are three cases: (a) Φi (ti ) < 1 − τ (b) Φi (ti ) > 1 − τ and (c) Φi (ti ) = 1 − τ In case (a), τ < 1 and Φi (ti ) < 1 − τn for n large enough and so h(τn sin )(ti ) = sin (ti ) → si (ti ) = h(τ si ) In case (b), Φi (ti ) > 1 − τn for n large enough and so for such 68 With Φi defined as in footnote 67, Figure 6.1 provides snapshots of the resulting h(τ si ) as τ moves from 0 to 1. The axes are the two dimensions of the type vector (ti1 ti2 ) and the arrow within the figures depicts the direction in which the negatively sloped line, (ti1 + ti2 )/2 = 1 − τ moves as τ increases. For example, panel (a) shows that when τ = 0 h(τ si )(ti ) is equal to si (ti ) for all ti in the unit square. On the other hand, panel (c) shows that when τ = 3/4 h(τ si )(ti ) is equal to si (ti ) for ti below the negatively sloped line and equal to s¯i (ti ) for ti above it.
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
533
FIGURE 6.1.—h(τ si ) as τ varies from 0 (panel (a)) to 1 (panel (d)) and the domain is the unit square.
large enough n h(τn sin )(ti ) = s¯i (ti ) = h(τ si )(ti ) Because the remaining case (c) occurs only if ti is in a set of types having μi measure 0, we have shown that h(τn sin )(ti ) → h(τ si )(ti ) for μi a.e. ti which, by Lemma A.12 implies that h(τn sin ) → h(τ si ) establishing the continuity of h Thus, for each player i the correspondence Bi : M−i Mi is contractiblen valued. Under the product topology, i=1 Bi is therefore contractible-valued as well. n Steps I–III establish that i=1 Bi satisfies the hypotheses of Theorem 2.1 and, therefore, possesses a fixed point. Q.E.D.
×
×
REMARK 7: The proof of Theorem 4.3 mimics that of Theorem 4.1, but where each Mi is replaced with Mi ∩ Ci and where each correspondence Bi : M−i Mi is replaced with the correspondence B∗i : M−i ∩ C−i Mi ∩ Ci defined by B∗i (s−i ) = Bi (s−i ) ∩ Ci The proof goes through because the hypotheses of Theorem 4.3 imply that each Mi ∩ Ci is compact, nonempty, joinclosed, piecewise-closed, and pointwise-limit-closed (and hence the proof that each Mi ∩ Ci is an absolute retract mimics the proof of Lemma A.16), and that each correspondence B∗i is upper hemicontinuous, nonempty-valued, and contractible-valued (the contraction is once again defined by (6.1)). The result then follows from Theorem 2.1. APPENDIX To simplify the notation, we drop the subscript i from Ti μi and Ai throughout the Appendix. Thus, in this appendix, T μ and A should be thought of as the type space, marginal distribution, and action space, respectively, of any one of the players, not as the joint type spaces, joint distribution, and joint action
534
PHILIP J. RENY
spaces of all the players. For convenience, we rewrite here without subscripts the assumptions from Section 3.2 that will be used in this appendix. G.1. T is endowed with a sigma algebra of subsets, T a measurable partial order, and a countably additive probability measure μ G.2. The probability measure μ is atomless. G.3. There is a countable subset T 0 of T such that every set in T assigned positive probability by μ contains two points between which lies a point in T 0 G.4. A is a compact metric space and a semilattice with a closed partial order. G.5. Either (i) A is a convex subset of a locally convex linear topological space and the partial order on A is convex or (ii) A is a locally complete metric semilattice. A.1. Partially Ordered Probability Spaces Preliminaries. We say that Ψ = (T T μ ≥) is a partially ordered probability space if G.1 holds, i.e., if T is a sigma algebra of subsets of T ≥ is a measurable partial order on T , and μ is a countably additive probability measure with domain T . If, in addition, G.2 holds, we say that Ψ is a partially ordered atomless probability space. If Ψ = (T T μ ≥) is a partially ordered probability space, Lemma 5.1.1 of Cohn (1980) implies that the sets ≥(t) = {t ∈ T : t ≥ t) and ≤(t) = {t ∈ T : t ≥ t } are in T for each t ∈ T Hence, for all t t ∈ T the interval [t t ] = {t
∈ T : t ≥ t
≥ t} is a member of T , being the intersection of ≥(t) and ≤(t ). In particular, the singleton set {t} being a degenerate interval, is a member of T for every t ∈ T LEMMA A.1: Suppose that (T T μ ≥) is a partially ordered probability space satisfying G.3 and that D ∈ T has positive measure under μ Then there are se0
∞ quences {tn }∞ n=1 in T and {tn }n=1 in D such that μ assigns positive measure to the
intervals [tn tn ] and [tn tn+1 ] for every n PROOF: For each of the countably many t 0 in T 0 remove from D all members of ≥(t 0 ) if D ∩ ≥(t 0 ) has μ measure 0 and remove from D all members of ≤(t 0 ) if D ∩ ≤(t 0 ) has μ measure 0. Having removed from D countably many subsets each with μ measure 0, we are left with a set D with the same positive measure as D Applying G.3 to D there exist t t in D and t˜1 in T 0 such that t ≥ t˜1 ≥ t Hence, t is a member of both D and ≥(t˜1 ), implying that μ(D ∩ ≥(t˜1 )) > 0 and t is a member of both D and ≤(t˜1 ), implying that μ(D ∩ ≤(t˜1 )) > 0 Setting D0 = D we may inductively apply the same argument, for each k ≥ 1, to the positive μ measure set Dk = Dk−1 ∩ ≥(t˜k ), yielding t˜k+1 ∈ T 0 such that μ(Dk ∩ ≥(t˜k+1 )) > 0 and μ(Dk ∩ ≤(t˜k+1 )) > 0
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
535
0 ˜ Define the sequence {tn }∞ n=1 in T by setting tn = t3n−2 and define the sequence
˜ {t } in D by letting tn be any member of D∩[t3n−1 t˜3n ] The latter set is always nonempty because for every k ≥ 1 μ(D ∩ [t˜k t˜k+1 ]) ≥ μ [Dk−1 ∩ ≥(t˜k )] ∩ ≤(t˜k+1 )] (A.1)
∞ n n=1
= μ(Dk ∩ ≤(t˜k+1 )) > 0 where the first line follows because D contains Dk−1 and the second line follows from the definition of Dk Hence the two sequences, {tn } in T 0 and {tn } in D are well defined. Finally, for every n ≥ 1 (A.1) implies μ([tn tn ]) ≥ μ([t˜3n−2 t˜3n−1 ]) ≥ μ(D ∩ ˜ [t3n−2 t˜3n−1 ]) > 0 and μ([tn tn+1 ]) ≥ μ([t˜3n t˜3n+1 ]) ≥ μ(D ∩ [t˜3n t˜3n+1 ]) > 0 as desired. Q.E.D. COROLLARY A.2: Under the hypotheses of Lemma A.1, if μ([a b]) > 0, then μ([a t ∗ ]) > 0 and μ([t ∗ b]) > 0 for some t ∗ ∈ T 0 PROOF: Let D = [a b], and obtain sequences {tn } in T 0 and {tn } in [a b] satisfying the conclusion of Lemma A.1. Then letting t ∗ = t2 ∈ T 0 , for example, yields μ([a t ∗ ]) ≥ μ([t1 t2 ]) > 0, where the first inequality follows because t1 ∈ [a b] implies [a t ∗ ] contains [t1 t ∗ ] = [t1 t2 ] and yields μ([t ∗ b]) ≥ μ([t2 t2 ]) > 0, where the first inequality follows because t2 ∈ [a b] implies Q.E.D. [t ∗ b] contains [t ∗ t2 ] = [t2 t2 ] LEMMA A.3: If (T T μ ≥) is a partially ordered atomless probability space satisfying G.3, then there is a monotone and measurable function Φ : T → [0 1] such that μ(Φ−1 (α)) = 0 for every α ∈ [0 1] PROOF: Let T 0 = {t1 t2 } be the countable subset of T in G.3. Define Φ : T → [0 1] by (A.2)
Φ(t) =
∞
2−k 1≥(tk ) (t)
k=1
Clearly, Φ is monotone and measurable, being the pointwise convergent sum of monotone and measurable functions. It remains to show that μ(Φ−1 (α)) = 0 for every α ∈ [0 1] Suppose, by way of contradiction, that μ(Φ−1 (α)) > 0 Because μ is atomless, μ(Φ−1 (α) \ T 0 ) = μ(Φ−1 (α)) > 0, and so applying G.3 to Φ−1 (α) \ T 0 yields t t
in Φ−1 (α) \ T 0 and tk ∈ T 0 such that t
≥ tk ≥ t But then α = Φ(t
) ≥ Φ(t ) + 2−k > Φ(t ) = α a contradiction. Q.E.D.
536
PHILIP J. RENY
A.2. Semilattices The standard proofs of the next two lemmas are omitted. LEMMA A.4: If G.4 holds, and an bn cn are sequences in A such that an ≤ bn ≤ cn for every n and both an and cn converge to a then bn converges to a. LEMMA A.5: If G.4 holds, then every nondecreasing sequence and every nonincreasing sequence in A converges. LEMMA A.6: If G.4 holds, then A is a complete semilattice. PROOF: Let S be a nonempty subset of A Because A is a compact metric space, S has a countable dense subset, {a1 a2 } Let a∗ = limn a1 ∨ · · · ∨ an where the limit exists by Lemma A.5. Suppose that b ∈ A is an upper bound for S and let a be an arbitrary element of S Then some sequence, ank converges to a Moreover, ank ≤ a1 ∨ a2 ∨ · · · ∨ ank ≤ b for every k Taking the limit as k → ∞ yields a ≤ a∗ ≤ b Hence, a∗ = ∨S Q.E.D. A.3. The Space of Monotone Functions From T Into A In this section, we introduce a metric, δ under which the space M of monotone functions from T into A will be shown to be a compact metric space. Furthermore, it will be shown that under suitable conditions, the metric space (M δ) is an absolute retract. Some preliminary results are required. Recall that a property P(t) is said to hold for μ a.e. t ∈ T if the set of t ∈ T on which P(t) holds contains a measurable subset having μ measure 1. We next introduce an important definition. DEFINITION A.7: Given a partially ordered probability space Ψ = (T T μ ≥) and a partially ordered metric space A say that a monotone function f : T → A is Ψ approachable at t ∈ T if there are sequences {tn } and {tn } in T such that limn f (tn ) = limn f (tn ) = f (t) and the intervals [tn t] and [t tn ] have positive μ measure for every n REMARK 8: (i) The positive measure condition implies that the intervals are nonempty, i.e., that tn ≥ t ≥ tn for every n (ii) Because we have not endowed T with a topology, neither {tn } nor {tn } is required to converge. (iii) f is Ψ approachable at every atom t of μ because we can set tn = tn = t for all n LEMMA A.8: Suppose that Ψ = (T T μ ≥) is a partially ordered probability space satisfying G.3, that A satisfies G.4, and that f : T → A is measurable and monotone. Then the set of points at which f is Ψ approachable is measurable.
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
537
PROOF: Suppose that f is Ψ approachable at t ∈ T , and that the sequences {tn } and {tn } satisfy the conditions in Definition A.7. Then, by Corollary A.2, for each n there exist t˜n t˜n in T 0 such that the intervals [tn t˜n ], [t˜n t] [t t˜n ] and [t˜n tn ] each have positive μ measure. In particular, tn ≤ t˜n ≤ t implies f (tn ) ≤ f (t˜n ) ≤ f (t) and t ≤ t˜n ≤ tn implies f (t) ≤ f (t˜n ) ≤ f (tn ) Consequently, by Lemma A.4, limn f (t˜n ) = limn f (t˜n ) = f (t) We conclude that the definition of Ψ -approachability at any t ∈ T would be unchanged if the sequences {tn } and {tn } were required to be in T 0 Let d be the metric on A, and for every t1 t2 ∈ T and every n = 1 2 define n Tt1 t2 = t ∈ T : μ([t1 t]) > 0 μ([t t2 ]) > 0 1 1 d(f (t1 ) f (t)) < d(f (t2 ) f (t)) < n n Then according to the conclusion drawn in the preceding paragraph, the set of points at which f is Ψ approachable is Ttn1 t2 n≥1 t1 t2 ∈T 0
Consequently, it suffices to show that each Ttn1 t2 is measurable, and for this it suffices to show that, as functions of t the functions μ([t1 t]) μ([t t2 ]) d(f (t1 ) f (t)) and d(f (t2 ) f (t)) are measurable. The functions d(f (t1 ) f (t)) and d(f (t2 ) f (t)) are measurable in t because the metric d is continuous in its arguments and f is measurable. For the measurability of μ([t1 t]) let E = {(t t
) ∈ T × T : t ≥ t
} ∩ (T × ≥(t1 )) Then E is in T × T by the measurability of ≥ and [t1 t] = Et is the slice of E in which the first coordinate is t Proposition 5.1.2 of Cohn (1980) states that μ(Et ) is measurable in t. A similar argument shows that μ([t t2 ]) is measurable in t. Q.E.D. LEMMA A.9: Suppose that G.1, G.3, and G.4 hold, i.e., that Ψ = (T T μ ≥) is a partially ordered probability space satisfying G.3 and that A satisfies G.4. If f : T → A is measurable and monotone, then f is Ψ approachable at μ a.e. t ∈ T PROOF: Let D denote the set of points at which f is not Ψ approachable. By Lemma A.8, D is a member of T It suffices to show that μ(D) = 0 Define Ttn1 t2 as in the proof of Lemma A.8 so that c D= Ttn1 t2 n=1 t1 t2 ∈T 0
and suppose, by way of contradiction, that μ(D) > 0 Then, for some N ≥ 1 μ(DN ) > 0 where DN = t1 t2 ∈T 0 (TtN1 t2 )c
538
PHILIP J. RENY
Let d denote the metric on A Then for every t ∈ DN and every t1 t2 ∈ T 0 such that the intervals [t1 t] and [t t2 ] have positive μ measure, either (A.3)
d(f (t1 ) f (t)) ≥
1 N
or d(f (t2 ) f (t)) ≥
1 N
0
∞ By Lemma A.1, there are sequences {tn }∞ n=1 in T and {tn }n=1 in DN such
that μ assigns positive measure to the intervals [tn tn ] and [tn tn+1 ] for every n Consequently, for every n (A.3) implies that either
(A.4)
d(f (tn ) f (tn )) ≥
1 N
or d(f (tn+1 ) f (tn )) ≥
1 N
On the other hand, because for every n, the intervals [tn tn ] and [tn tn+1 ]— having positive μ measure—are nonempty, we have t1 ≤ t1 ≤ t2 ≤ t2 ≤ · · · Hence, the monotonicity of f implies that f (t1 ) ≤ f (t1 ) ≤ f (t2 ) ≤ f (t2 ) ≤ · · · is a monotone sequence of points in A and must, therefore, converge by Lemma A.5. But then both d(f (tn ) f (tn )) and d(f (tn+1 ) f (tn )) converge to zero, contradicting (A.4), and so we conclude that μ(D) = 0 Q.E.D. LEMMA A.10—A Generalized Helly Selection Theorem: Suppose that G.1, G.3, and G.4 hold, i.e., that Ψ = (T T μ ≥) is a partially ordered probability space satisfying G.3 and that A satisfies G.4. If fn : T → A is a sequence of monotone functions—not necessarily measurable—then there is a subsequence, fnk and a measurable monotone function, f : T → A such that fnk (t) →k f (t) for μ a.e. t ∈ T PROOF: Let T 0 = {t1 t2 } be the countable subset of T satisfying G.3. Choose a subsequence, fnk of fn such that, for every i limk fnk (ti ) exists. Define f (ti ) = limk fnk (ti ) for every i and extend f to all of T by defining f (t) = ∨{a ∈ A : a ≤ f (ti ) for all ti ≥ t}.69 By Lemma A.6, this is well defined because {a ∈ A : a ≤ f (ti ) for all ti ≥ t} is nonempty for each t since it contains any limit point of fnk (t) Indeed, if fnkj (t) →j a then a = limj fnkj (t) ≤ limj fnkj (ti ) = f (ti ) for every ti ≥ t Furthermore, as required, the extension to T is monotone and leaves the values of f on {t1 t2 } unchanged, where the latter follows because the monotonicity of f on {t1 t2 } implies that {a ∈ A : a ≤ f (ti ) for all ti ≥ tk } = {a ∈ A : a ≤ f (tk )} To see that f is measurable, note first that f (t) = limm gm (t) where gm (t) = ∨{a ∈ A : a ≤ f (ti ) for all i = 1 m such that ti ≥ t} and where the limit exists by Lemma A.5. Because 69
Hence, f (t) = ∨A if no ti ≥ t
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
539
the partial order on T is measurable, each gm is a measurable simple function. Hence, f is measurable, being the pointwise limit of measurable functions. Let f be Ψ approachable at t ∈ T By Lemma A.9, it suffices to show that fnk (t) → f (t) So suppose that fnkj (t) → a ∈ A for some subsequence nkj of nk By the compactness of A it suffices to show that a = f (t) Because f is Ψ approachable at t ∈ T the argument in the first paragraph of the proof of Lemma A.8 implies that there exist sequences {tin } and {ti n } in T 0 such that limn f (tin ) = limn f (ti n ) = f (t), and such that the intervals [tin t] and [t ti n ] have positive μ measure for every n In particular, the intervals [tin t] and [t ti n ] are always nonempty, and so tin ≤ t ≤ ti n , implying by the monotonicity of each fnk that fnk tin ≤ fnk (t) ≤ fnk ti n for every k and n Because the partial order on A is closed, taking the limit first in k yields f tin ≤ a ≤ f ti n and taking the limit next in n yields f (t) ≤ a ≤ f (t) from which we conclude that a = f (t) as desired.
Q.E.D.
By setting {fn } in Lemma A.10 equal to a constant sequence, we obtain the following lemma. LEMMA A.11: Under G.1, G.3, and G.4, every monotone function from T into A is μ almost everywhere equal to a measurable monotone function. We now introduce a metric on M the space of monotone functions from T into A. Denote the metric on A by d and assume without loss that d(a b) ≤ 1 for all a b ∈ A Define the metric, δ on M by δ(f g) = d(f (t) g(t)) dμ(t) T
which is well defined by Lemma A.11. Formally, the resulting metric space (M δ) is the space of equivalence classes of monotone functions that are equal μ almost everywhere, i.e., two functions are in the same equivalence class if there is a measurable subset of T having μ measure 1 on which they coincide. Nevertheless, and analogous to the standard treatment of Lp spaces, we focus on the elements of the original space M rather than on the equivalence classes themselves.
540
PHILIP J. RENY
LEMMA A.12: Under G.1, G.3, and G.4, δ(fk f ) → 0 if and only if d(fk (t) f (t)) → 0 for μ a.e. t ∈ T PROOF: Only If. Suppose that δ(fk f ) → 0 By Lemma A.9, it suffices to show that fk (t) → f (t) for all Ψ -approachability points, t of f Let t0 be a Ψ -approachability point of f Because A is compact, it suffices to show that an arbitrary convergent subsequence, fkj (t0 ) of fk (t0 ) converges to f (t0 ). So suppose that fkj (t0 ) converges to a ∈ A By Lemma A.10, there is a further subsequence, fk j of fkj , and a monotone measurable function, g : T → A, such that fk j (t) → g(t) for μ a.e. t in T Because d is bounded, the dominated convergence theorem implies that δ(fk j g) → 0 But δ(fk j f ) → 0 then implies that δ(f g) = 0 and so fk j (t) → f (t) for μ a.e. t in T Because t0 is a Ψ -approachability point of f there are sequences {tn }∞ n=1 and
∞ {tn }n=1 in T such that limn f (tn ) = limn f (tn ) = f (t0 ), and the intervals [tn t0 ] and [t0 tn ] have positive μ measure for every n ≥ 1 Consequently, because fk j (t) → f (t) for μ a.e. t in T , and because the intervals [tn t0 ] and [t0 tn ] have positive μ measure, for every n there exist t˜n and t˜n such that tn ≤ t˜n ≤ t0 ≤ t˜n ≤ tn fk j (t˜n ) →j f (t˜n ) and fk j (t˜n ) →j f (t˜n ). Consequently, fk j (t˜n ) ≤ fk j (t0 ) ≤ fk j (t˜n ) and taking the limit as j → ∞ yields f (t˜n ) ≤ a ≤ f (t˜n ) so that f (tn ) ≤ f (t˜n ) ≤ a ≤ f (t˜n ) ≤ f (tn ) and, therefore, f (tn ) ≤ a ≤ f (tn ) Taking the limit of the latter inequality as n → ∞ yields f (t0 ) ≤ a ≤ f (t0 ) so that a = f (t0 ) as desired. If. To complete the proof, suppose that fk (t) converges to f (t) for μ a.e. t ∈ T Then because d is bounded, the dominated convergence theorem implies that δ(fk f ) → 0 Q.E.D. Combining Lemmas A.10 and A.12, we obtain the following lemma. LEMMA A.13: Under G.1, G.3, and G.4, the metric space (M δ) is compact. LEMMA A.14: Suppose that G.1, G.3, and G.4 hold, and that f : T → A is monotone. If for every t ∈ T f¯(t) = ∨g(t) where the join is taken over all monotone g : T → A such that g(t) ≤ f (t) for μ a.e. t ∈ T then f¯ : T → A is monotone and f¯(t) = f (t) for μ a.e. t ∈ T70 PROOF: Note that f¯(t) is well defined for each t ∈ T by Lemma A.6, and f¯ is monotone, being the pointwise join of monotone functions. It remains only to show that f¯(t) = f (t) for μ a.e. t ∈ T 70 It can be further shown that for all t ∈ T f¯(t) = ∨{a ∈ A : a ≤ f (t ) for all t ≥ t such that t ∈ T is a Ψ -approachability point of f } But we will not need this result.
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
541
Suppose first that f is measurable. Let C denote the measurable (by Lemma A.8) set of Ψ -approachability points of f and let Lf denote the set of monotone g : T → A such that g(t) ≤ f (t) for μ a.e. t ∈ T By Lemma A.9, μ(C) = 1 We claim that f (t) ≥ g(t) for every t ∈ C and every g ∈ Lf To see this, fix g ∈ Lf and let D be a measurable set with μ measure 1 such that g(t) ≤ f (t) for every t ∈ D Consider t ∈ C Because t is a Ψ -approachability point of f there are sequences {tn } and {tn } in T such that limn f (tn ) = limn f (tn ) = f (t), and such that the intervals [tn t] and [t tn ] have positive μ measure for every n Therefore, in particular, the set D ∩ [t tn ] has positive μ measure for every n Consequently, for every n, we may choose t˜n ∈ D ∩ [t tn ] and, therefore, f (tn ) ≥ f (t˜n ) ≥ g(t˜n ) ≥ g(t) for all n In particular, f (tn ) ≥ g(t) for all n so that f (t) = limn f (tn ) ≥ g(t) proving the claim. Consequently, f (t) ≥ g∈Lf g(t) for every t ∈ C Hence, because f itself is a member of Lf f (t) = g∈Lf g(t) = f¯(t) for every t ∈ C and, therefore, for μ a.e. t ∈ T If f is not measurable, then by Lemma A.11, we can repeat the argument, replacing f with a measurable and monotone f˜ : T → A that is μ almost everywhere equal to f concluding that f˜(t) = g∈L ˜ g(t) for μ a.e. t ∈ T f But Lf = Lf˜ then implies that for μ a.e. t ∈ T f (t) = f˜(t) = g∈L ˜ g(t) = f ¯ Q.E.D. g∈Lf g(t) = f (t) LEMMA A.15: Assume G.1, G.3, and G.4. Suppose that the join operator on A is continuous and that Φ : T → [0 1] is a monotone and measurable function such that μ(Φ−1 (c)) = 0 for every c ∈ [0 1] Define h : [0 1] × M × M → M by defining for every t ∈ T ⎧ if Φ(t) ≤ |1 − 2τ| and τ < 1/2, ⎨ f (t) h(τ f g)(t) = g(t) (A.5) if Φ(t) ≤ |1 − 2τ| and τ ≥ 1/2, ⎩ f (t) ∨ g(t) if Φ(t) > |1 − 2τ|. Then h : [0 1] × M × M → M is continuous. PROOF: Suppose that (τk fk gk ) → (τ f g) ∈ [0 1] × M × M By Lemma A.12, there is a μ measure 1 subset, D of T such that fk (t) → f (t) and gk (t) → g(t) for every t ∈ D There are three cases: τ = 1/2, τ > 1/2, and τ < 1/2 Suppose that τ < 1/2 For each t ∈ D such that Φ(t) < |1 − 2τ| we have Φ(t) < |1 − 2τk | for all k large enough. Hence, h(τk fk gk )(t) = fk (t) for all k large enough, and so h(τk fk gk )(t) = fk (t) → f (t) = h(τ f g)(t) Similarly, for each t ∈ D such that Φ(t) > |1 − 2τ| h(τk fk gk )(t) = fk (t) ∨ gk (t) → f (t) ∨ g(t) = h(τ f g)(t) where the limit follows because ∨ is continuous. Because μ({t ∈ T : Φ(t) = |1 − 2τ|}) = 0 we have, therefore, shown that if
542
PHILIP J. RENY
τ < 1/2 then h(τk fk gk )(t) → h(τ f g)(t) for μ a.e. t ∈ T and so, by Lemma A.12, h(τk fk gk ) → h(τ f g) Because the case τ > 1/2 is similar to τ < 1/2 we consider only the remaining case in which τ = 1/2 In this case, |1 − 2τk | → 0 Consequently, for any t ∈ T such that Φ(t) > 0 we have h(τk fk gk )(t) = fk (t) ∨ gk (t) for k large enough and so h(τk fk gk )(t) = fk (t) ∨ gk (t) → f (t) ∨ g(t) = h(1/2 f g)(t) Hence, because μ({t ∈ T : Φ(t) = 0}) = 0, we have shown that h(τk fk gk )(t) → h(1/2 f g)(t) for μ a.e. t ∈ T , and so again by Lemma A.12, h(τk fk gk ) → h(τ f g) Q.E.D. LEMMA A.16: Under G.1–G.5, the metric space (M δ) is an absolute retract. PROOF: Define h : [0 1] × M × M → M by h(τ s s )(t) = τs(t) + (1 − τ)s (t) for all t ∈ T if G.5(i) holds, and by (A.5) if G.5(ii) holds, where the monotone function Φ(·) appearing in (A.5) is defined by (A.2). Note that h maps into M in case G.5(i) holds because A is convex (which itself follows because the partial order on A is convex). We claim that, in each case, h is continuous. Indeed, if G.5(ii) holds, the continuity of h follows from Lemmas A.3 and A.15. If G.5(i) holds and the sequence (τn sn sn ) ∈ [0 1] × M × M converges to (τ s s ) then by Lemma A.12, sn (t) → s(t) and sn (t) → s (t) for μ a.e. t ∈ T Hence, because A is a convex subset of a linear topological space, τn sn (t) + (1 − τn )sn (t) → τs(t) + (1 − τ)s (t) for μ a.e. t ∈ T But then Lemma A.12 implies τn sn + (1 − τn )sn → τs + (1 − τ)s as desired. One consequence of the continuity of h is that for any g ∈ M h(· · g) is a contraction for M so that (M δ) is contractible. Hence, by Borsuk (1966, IV, (9.1)) and Dugundji (1965), it suffices to show that for each f ∈ M, every neighborhood U of f contains a neighborhood V of f such that the sets V n n ≥ 1 defined inductively by V 1 = h([0 1] V V ) V n+1 = h([0 1] V V n ) are all contained in U We shall establish this by way of contradiction. Specifically, let us suppose to the contrary that for some neighborhood U of f ∈ M, there is no open set V containing f and contained in U such that all the V n as defined above are contained in U In particular, for each k = 1 2 taking V to be B1/k (f ) the 1/k ball around f , there exists nk such that some gk ∈ V nk is not in U We derive a contradiction separately for each of the two cases, G.5(i) and G.5(ii). Case I. Suppose G.5(i) holds. For each n V n+1 ⊂ co V so that for every k = 1 2 gk ∈ V nk ⊂ co B1/k (f ) Hence, for each k there exist f1k fnkk in B1/k (f ) and nonnegative weights λk1 λknk summing to one such that nk k k nk k k λj fj ∈ / U Hence, gk (t) = j=1 λj fj (t) for μ a.e. t ∈ T and so for gk = j=1 all t in some measurable set E having μ measure 1. Moreover, the sequence f11 fn11 f12 fn22 converges to f Consequently, by Lemma A.12 the sequence f11 (t) fn11 (t) f12 (t) fn22 (t) converges to f (t) for μ a.e. t ∈ T and so for all t in some measurable set D having μ measure 1. But then for each t ∈ D ∩ E and every convex neighborhood Wt of f (t) each
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
543
of f k (t) fnkk (t) is in Wt for all k large enough and, therefore, gk (t) = nk 1 k k j=1 λj fj (t) is in Wt for k large enough as well. But this implies, by the local convexity of A that gk (t) → f (t) for every t ∈ D ∩ E and hence for μ a.e. t ∈ T Lemma A.12 then implies that gk → f contradicting that no gk is in U. Case II. Suppose G.5(ii) holds. As a matter of notation, for f g ∈ M write f ≤ g if f (t) ≤ g(t) for μ a.e. t ∈ T . Also, for any sequence of monotone functions f1 f2 in M denote by f1 ∨ f2 ∨ · · · the monotone function taking the value limn [f1 (t) ∨ f2 (t) ∨ · · · ∨ fn (t)] for each t in T This is well defined by Lemma A.5. If g ∈ V 1 then g = h(τ f0 f1 ) for some τ ∈ [0 1] and some f0 f1 ∈ V Hence, by the definition of h we have g ≤ f0 ∨ f1 and either f0 ≤ g or f1 ≤ g We may choose the indices so that f0 ≤ g ≤ f0 ∨ f1 Inductively, it can similarly be seen that if g ∈ V n then there exist f0 f1 fn ∈ V such that (A.6)
f 0 ≤ g ≤ f 0 ∨ · · · ∨ fn
Hence, for each k = 1 2 gk ∈ V nk and (A.6) imply that there exist f fnkk ∈ V = B1/k (f ) such that k 0
(A.7)
f0k ≤ gk ≤ f0k ∨ · · · ∨ fnkk
Consider the sequence f01 fn11 f02 fn22 Because fjk is in B1/k (f ) this sequence converges to f Let us reindex this sequence as f1 f2 Hence, fj → f Because for every n, the set {fn fn+1 } contains the set {f0k fnkk } whenever k is large enough, we have fj f0k ∨ · · · ∨ fnkk ≤ j≥n
for every n and all large enough k. Combined with (A.7), this implies that (A.8) fj f0k ≤ gk ≤ j≥n
for every n and all large enough k. Now f0k → f as k → ∞ Hence, by Lemma A.12, f0k (t) → f (t) for μ a.e. t ∈ T . Consequently, if for μ a.e. t ∈ T j≥n fj (t) → f (t) as n → ∞ then (A.8) and Lemma A.4 imply that gk (t) → f (t) for μ a.e. t ∈ T . But then Lemma A.12 implies that gk → f , once again contradicting that no gk is in U. It, therefore, remains only to establish that for μ a.e. t ∈ T j≥n fj (t) → f (t) as n → ∞ But by Lemma A.18, because A is locally complete, this will follow if fj (t) →j f (t) for μ a.e. t which follows from Lemma A.12 because fj → f Q.E.D.
544
PHILIP J. RENY
A.4. Locally Complete Metric Semilattices We denote the partially ordered set by A in this section because the results to follow, while applicable to any partially ordered set, are applied in the main text to the players’ action sets. LEMMA A.17: If A is an upper-bound-convex Euclidean semilattice and compact in the Euclidean metric, then A is a Euclidean metric semilattice, i.e., ∨ is continuous. PROOF: Suppose that an → a, bn → b a ∨ b = c and an ∨ bn → d where all of these points are in A We must show that c = d Because an ≤ an ∨ bn taking limits implies a ≤ d Similarly, b ≤ d so that c = a ∨ b ≤ d Thus, it remains only to show that c ≥ d Let a¯ = ∨A denote the largest element of A which is well defined by Lemma A.6. By the upper-bound-convexity of A εa¯ + (1 − ε)c ∈ A for every ε ∈ [0 1] Because the coordinatewise partial order is closed, it suffices to show that εa¯ + (1 − ε)c ≥ d for every ε > 0 sufficiently small. So fix ε ∈ (0 1) and consider the kth coordinate, ck of c If for some n akn > ck then because a¯ k ≥ akn , we have a¯ k > ck and, therefore, εa¯ k + (1 − ε)ck > ck Consequently, because akn →n ak ≤ ck we have εa¯ k + (1 − ε)ck > akn for all n sufficiently large. On the other hand, suppose that akn ≤ ck for all n Then because a¯ k ≥ ck , we have εa¯ k + (1 − ε)ck ≥ akn for all n So in either case, εa¯ k + (1 − ε)ck ≥ akn for all n sufficiently large. Therefore, because k is arbitrary, εa¯ + (1 − ε)c ≥ an for all n sufficiently large. Similarly, εa¯ + (1 − ε)c ≥ bn for all n sufficiently large. Therefore, because εa¯ + (1 − ε)c ∈ A εa¯ + (1 − ε)c ≥ an ∨ bn for all n sufficiently large Taking limits in n gives εa¯ + (1 − ε)c ≥ d Q.E.D. LEMMA A.18: If G.4 holds, then A is locally complete if and only if for every a ∈ A and every sequence an converging to a limn ( k≥n ak ) = a PROOF: We first demonstrate the “only if” direction. Suppose that A is locally complete, that U is a neighborhood of a ∈ A and that an → a By local completeness, there is a neighborhood W of a contained in U such that every subset of W has a least upper bound in U In particular, because for n large enough, {an an+1 } is a subset of W the least upper bound of {an an+1 } namely k≥n ak is in U for n large enough. Since U was arbitrary, this implies limn ( k≥n ak ) = a We now turn to the “if” direction. Fix any a ∈ A and let B1/n (a) denote the open ball around a with radius 1/n For each n ∨B1/n (a) is well defined by Lemma A.6. Moreover, because ∨B1/n (a) is nonincreasing in n limn ∨B1/n (a) exists by Lemma A.5. We first argue that limn ∨B1/n (a) = a For each n construct as in the proof of Lemma A.6 a sequence {anm } of points in B1/n (a) such that limm (an1 ∨ · · · ∨ anm ) = ∨B1/n (a) We can, therefore, choose mn sufficiently large so that the distance between an1 ∨ · · · ∨ anmn and ∨B1/n (a) is less
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
545
than 1/n Consider now the sequence {a11 a1m1 a21 a2m2 a31 a3m3 } Because anm is in B1/n (a) this sequence converges to a Consequently, by hypothesis, lim an1 ∨ · · · ∨ anmn ∨ a(n+1)1 ∨ · · · ∨ a(n+1)m(n+1) ∨ · · · = a n
But because every akj in the join in parentheses on the left-hand side above (denote this join by bn ) is in B1/n (a) we have an1 ∨ · · · ∨ anmn ≤ bn ≤ ∨B1/n (a) Therefore, because for every n the distance between an1 ∨ · · · ∨ anmn and ∨B1/n (a) is less than 1/n Lemma A.4 implies that limn ∨B1/n (a) = limn bn But since limn bn = a we have limn ∨B1/n (a) = a. Next, for each n let Sn be an arbitrary nonempty subset of B1/n (a) and choose any sn ∈ Sn Then sn ≤ ∨Sn ≤ ∨B1/n (a) Because sn ∈ B1/n (a) Lemma A.4 implies that limn ∨Sn = a Consequently, for every neighborhood U of a there exists n large enough such that ∨S (well defined by Lemma A.6) is in U for every subset S of B1/n (a) Since a was arbitrary, A is locally complete. Q.E.D. LEMMA A.19: Every compact Euclidean metric semilattice is locally complete. PROOF: Suppose that an → a with every an and a in the semilattice, A.18, it suffices to show which we assume to be a subset of RK . By Lemma that limn ( k≥n ak ) = a By Lemma A.5, limn ( k≥n ak ) exists and is equal to limn limm (an ∨ · · · ∨ am ) since an ∨ · · · ∨ am is nondecreasing in m and limm (an ∨ · · · ∨ am ) is nonincreasing in n For each dimension k = 1 K let aknm denote the first among an an+1 am with the largest kth coordinate. Hence, an ∨ · · · ∨ am = a1nm ∨ · · · ∨ aKnm where the right-hand side consists of K terms. Because an → a, limm aknm exists for each k and n and limn limm aknm = a for each k Consequently, limn limm (an ∨ · · · ∨ am ) = limn limm (a1nm ∨ · · · ∨ aKnm ) = (limn limm a1nm ) ∨ · · · ∨ (limn limm aKnm ) = a ∨ · · · ∨ a = a, as desired. Q.E.D. LEMMA A.20: If G.4 holds and for all a ∈ A every neighborhood of a contains a such that b ≤ a for all b close enough to a then A is locally complete. PROOF : Suppose that an → a By Lemma A.18, it suffices to show that limn ( k≥n ak ) = a For every n and m am ≤ am ∨ am+1 ∨ · · · ∨ am+n , and so taking the limit first as n → ∞ and then as m → ∞ gives a ≤ limm k≥m ak where the limit in n exists by Lemma A.5 because the sequence is monotone. Hence, it suffices to show that limm k≥m ak ≤ a. Let U be a neighborhood of a and let a be chosen as in the statement of the lemma. Then because am → a am ≤ a for all m large enough. Consequently, for m large enough and for all n, am ∨ am+1 ∨ · · · ∨ am+n ≤ a Tak-
546
PHILIP J. RENY
ing the limit first in n and then in m yields limm k≥m ak ≤ a Because for
every neighborhood U of a this holds for some a in U limm k≥m ak ≤ a as desired. Q.E.D. A.5. Assumption G.3 Say that two points in a partially ordered metric space are strictly ordered if they are contained in disjoint open sets and every member of one set is greater or equal to every member of the other. The following lemma provides a sufficient condition for G.3 to hold when T happens to be a separable metric space. LEMMA A.21: Suppose that (T T μ ≥) is a partially ordered probability space, that T is a separable metric space, and that T contains the open sets. Then G.3 holds if every atomless set having positive μ measure contains two strictly ordered points. PROOF: Let T 0 be the union of a countable dense subset of T and the countable set of atoms of μ and suppose that D ∈ T has positive μ measure. We must show that t1 ≥ t0 ≥ t2 for some t1 t2 ∈ D and some t0 ∈ T 0 If D contains an atom, t0 of μ then we may set t1 = t2 = t0 and we are done. Hence, we may assume that D is atomless. Without loss, we may assume that μ(D ∩ U) > 0 for every open set U whose intersection with D is nonempty.71 Because μ(D) > 0 there exist t1 t2 ∈ D and open sets U1 containing t1 and U2 containing t2 such that every member of U1 is greater than or equal to every member of U2 which we write as U1 ≥ U2 Because D ∩ U1 is nonempty (e.g., it contains t1 ), μ(D ∩ U1 ) > 0. Consequently, there exist t1 t1
∈ D ∩ U1 and open sets U1 containing t1 and U1
containing t1
such that U1 ≥ U1
Hence, U1 ∩ U1 ≥ U1
∩ U1 ≥ U2 Therefore, because the open set U1
∩ U1 is nonempty (e.g., it contains t1
), it contains some t0 in the dense set T 0 Hence, t1 ≥ t0 ≥ t2 because t1 ∈ U1 ∩ U1 and t2 ∈ U2 Q.E.D. Noting that t1 and t2 are members of D completes the proof. A.6. Sufficient Conditions for G.1–G.5 PROOF OF PROPOSITION 3.1: Suppose that each player i’s type space and marginal distribution satisfy the hypotheses of the lemma. Then G.1 and G.2 are immediate. To see that G.3 holds, for each i and k let Tik0 be a countable dense subset of Tik . Consequently, if μi (B) > 0 then by Fubini’s theorem, there exist k and ti ∈ (τik τ¯ ik )nik such that B ∩ Lik (ti ) contains a continuum of ¯ members, any two of which define an interval of types containing a member of 71 Otherwise replace D with D ∩ V c where V is the largest open set whose intersection with D has μ-measure 0. To see that V is well defined, let {Ui } be a countable base of open sets. Then V is the union of all the Ui satisfying μ(Ui ∩ D) = 0
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
547
Tik0 where Lik (ti ) is the line joining the lowest point in Tik i.e., (τik τik ) ¯ ¯ with ti Hence, G.3 holds by setting Ti0 = Ti10 ∪ Ti20 ∪ · · · Suppose next that each player’s action space satisfies the hypotheses of the lemma. If the coordinatewise maximum of any two actions is a feasible action, the join of any two points is their coordinatewise maximum and, hence, the join operator is continuous in the Euclidean metric. Each player’s action space is then a compact Euclidean metric semilattice and, by Lemma A.19, locally complete. Conditions G.4 and G.5 are therefore satisfied. Q.E.D. A.7. Proofs From Section 5 PROOF OF COROLLARY 5.2: Consider the uniform-price auction but where unit bids can be any nonnegative real number. Because marginal values are between 0 and 1, without loss we may restrict attention to unit bids in [0 1] The resulting game is discontinuous. Remark 3.1 in Reny (1999) establishes that if this game is better-reply secure, then the limit of a convergent sequence of pure-strategy ε equilibria, as ε tends to zero, is a pure-strategy equilibrium. Hence, in view of Lemma A.13, it suffices to show that the auction game is better-reply secure (when players employ monotone pure strategies) and that it possesses, for every ε > 0 an ε equilibrium in monotone pure strategies. An argument analogous to that given in the first paragraph on page 1046 in Reny (1999) shows that, regardless of the tie-break rule, the uniform-price auction game with unit bid space [0 1] is better-reply secure when bidders employ weakly undominated monotone pure strategies and that ties occur with probability 0 in every such equilibrium. Fix ε > 0 By Proposition 5.1, for each k = 1 2 there is a nontrivial monotone pure-strategy equilibrium, bk of the uniform-price auction when unit bids are restricted to the finite set {0 1/k 2/k k/k} It suffices to show that for all k sufficiently large, bk is an ε equilibrium of the game in which unit bids can be chosen from [0 1] Fix player i Let D denote the set of nonincreasing bid vectors in [0 1]m It suffices to show that for all k sufficiently large and all monotone pure strategies b : Ti → D for player i there is a monotone pure strategy b : Ti → D ∩ {0 1/k 2/k k/k}m that yields player i utility within ε of b(·) uniformly in the others’ strategies. By weak dominance, it suffices to consider monotone pure strategies b : Ti → D for player i such that each unit bid, bj (ti ) is in [0 tij ] for every ti = (ti1 tim ) ∈ Ti So let b(·) be such a monotone pure strategy and let b : Ti → D ∩ {0 1/k 2/k k/k}m be such that for every ti ∈ Ti b j (ti ) is the smallest member of {0 1/k k/k} greater than or equal to bj (ti ) Hence, b (·) is monotone and b 1 (ti ) ≥ · · · ≥ b m (ti ) for every ti ∈ Ti so that b (·) is a feasible monotone pure strategy. If bidder i employs b (·) instead of b(·) then regardless of his type, and for any strategies the others might employ and for each j = 1 m bidder i will win a jth unit whenever b(·) would have won a jth unit although the price might be higher because his bid vector is higher, and he may win a jth unit when b(·) would not have. The increase in
548
PHILIP J. RENY
the price caused by the at most 1/k increase in each of his unit bids can be no greater than 1/k and because bj (ti ) ≤ tij for every ti ∈ Ti the ex post surplus lost on each additional unit won from employing b (·) instead of b(·) can be no greater than 1/k Hence, the total ex post loss in surplus as a result of the strategy change can be no greater than 2m/k which can be made arbitrarily small for k sufficiently large, regardless of the others’ strategies. Hence, i’s expected utility loss from employing b (·) instead of b(·) is, for k large enough, less than ε and this holds uniformly in the others’ strategies. Q.E.D. REMARK 9: An alternative proof method is to consider the limit of a sequence of finite-grid monotone pure-strategy equilibria (which exist by Proposition 5.1) as the grid becomes increasingly fine. Then techniques as in Jackson, Simon, Swinkels, and Zame (2002) can be used to show that any limit strategies (which, by Lemma A.13, exist along a subsequence, and are monotone and pure) form an equilibrium with an endogenous tie-break rule. Theorem 6 of Jackson and Swinkels (2005) then implies that ties occur with probability 0 and that the same strategies constitute an equilibrium for any tie-break rule. The proof of Corollary 5.5 is analogous to the proof of Corollary 5.2 above. PROOF OF LEMMA 5.3: Fix monotone pure strategies for all players but i For the remainder of this proof, we omit most subscripts i to keep the notation manageable. Let v(b t) denote bidder i’s expected payoff from employing the bid vector b = (b1 bm ) when his type vector is t = (t1 tm ) Then letting Pk (bk ) denote the probability that bidder i wins at least k units—which, owing to our tie-breaking rule, depends only on his kth unit bid bk —we have, where 1k is an m vector of k ones followed by m − k zeros, v(b t) = u(0) +
m
Pk (bk ) u((t − b) · 1k ) − u((t − b) · 1k−1 )
k=1
1 r(b1 +···+bk−1 ) e Pk (bk ) 1 − e−r(tk −bk ) e−r(t1 +···+tk−1 ) r k=1 m
=
−rx
where u(x) = 1−er is bidder i’s utility function with constant absolute risk aversion parameter r ≥ 0 where it is understood that u(x) = x when r = 0 Note that the dependence of r on i has been suppressed. From now on, we proceed as if r > 0, because all of the formulae employed here have well defined limits as r tends to 0 that correspond to the risk neutral case u(x) = x Letting wk (bk t) = 1r Pk (bk )(1 − e−r(tk −bk ) )e−r(t1 +···+tk−1 ) we can write v(b t) =
m k=1
er(b1 +···+bk−1 ) wk (bk t)
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
549
As shown in (5.2) (and setting p¯ = p = 0 there), for each k = 2 m ¯ 1 (A.9) u(t1 + · · · + tk ) − u(t1 + · · · + tk−1 ) = (1 − e−rtk )e−r(t1 +···+tk−1 ) r is nondecreasing in t according to the partial order ≥i defined in (5.1). Henceforth, we employ the partial order ≥i on i’s type space. We next demonstrate the following facts. (i) wk (bk t) is nondecreasing in t. (ii) wk (b¯ k t) − wk (bk t) is nondecreasing in t for all b¯ k ≥ bk . ¯ ¯ To see (i), write 1 wk (bk t) = Pk (bk ) 1 − e−r(tk −bk ) e−r(t1 +···+tk−1 ) r 1 = Pk (bk )(1 − e−rtk )e−r(t1 +···+tk−1 ) r 1 + Pk (bk )(erbk − 1) −e−r(t1 +···+tk ) r The first term in the sum is nondecreasing in t according to ≥i by (A.9) and the second term, being nondecreasing in the coordinatewise partial order, is a fortiori nondecreasing in t according to ≥i . Turning to (ii), if Pk (bk ) = 0, then wk (bk t) = 0 and (ii) follows from (i). So ¯ ¯ assume Pk (bk ) > 0 Then ¯ 1 ¯ wk (b¯ k t) − wk (bk t) = Pk (b¯ k ) 1 − e−r(tk −bk ) e−r(t1 +···+tk−1 ) r ¯ 1 − Pk (bk ) 1 − e−r(tk −b¯ k ) e−r(t1 +···+tk−1 ) r ¯ ¯ Pk (bk ) − 1 wk (bk t) = Pk (bk ) ¯ ¯ 1 ¯ + Pk (b¯ k )(er bk − er b¯ k ) −e−r(t1 +···+tk ) r The first term in the sum is nondecreasing in t according to ≥i by (i) and the second term, being nondecreasing in the coordinatewise partial order, is a fortiori nondecreasing in t according to ≥i . This proves (ii). Suppose now that the vector of bids b is optimal for bidder i when his type vector is t and that b is optimal when his type is t ≥i t. We must argue that b ∨ b is optimal when his type is t If bk ≤ b k for all k then b ∨ b = b and we are done. Hence, we may assume that there is a maximal set of consecutive
550
PHILIP J. RENY
coordinates of b that are strictly greater than those of b That is, there exist coordinates j and l with j ≤ l such that bk > b k for k = j l, bj−1 ≤ b j−1 , and bl+1 ≤ b l+1 where the first of the last two inequalities is ignored if j = 1 and the second is ignored if l = m. Let bˆ be the bid vector obtained from b by replacing its coordinates j through l with the coordinates j through l of b Because b is optimal at t, and bˆ is ˆ t) is nonnegative. Dividing nonincreasing and therefore feasible, v(b t) − v(b ˆ t) by er(b1 +···+bj ) implies v(b t) − v(b l
j
0 ≤ wj (bj t) − wj (b t) +
er(bj +···+bk−1 ) (wk (bk t) − wk (b k t))
k=j+1
+ er(bj +···+bl ) − er(bj +···+bl )
× wl+1 (bl+1 t) + erbl+1 wl+2 (bl+2 t) + · · · + er(bl+1 +···+bm−1 ) wm (bm t) Consequently, for t ≥i t (i) and (ii) imply (A.10)
j
0 ≤ wj (bj t ) − wj (b t ) +
l
er(bj +···+bk−1 ) (wk (bk t ) − wk (b k t ))
k=j+1
+ er(bj +···+bl ) − e
× wl+1 (bl+1 t ) + erbl+1 wl+2 (bl+2 t ) + · · · + er(bl+1 +···+bm−1 ) wm (bm t ) r(b j +···+b l )
Focusing on the second term in square brackets in (A.10), we claim that (A.11)
wl+1 (bl+1 t ) + erbl+1 wl+2 (bl+2 t ) + · · · + er(bl+1 +···+bm−1 ) wm (bm t )
≤ wl+1 (b l+1 t ) + erbl+1 wl+2 (b l+2 t ) + · · · + er(bl+1 +···+bm−1 ) wm (b m t ) To see this, note that because bl+1 ≤ b l+1 the bid vector b
obtained from b by replacing its coordinates l + 1 through m with the coordinates l + 1 through m of b is a feasible (i.e., nonincreasing) bid vector. Consequently, because b is optimal at t , we must have 0 ≤ v(b t ) − v(b
t ) But this difference in utilities is precisely the difference between the right-hand and left-hand sides of (A.11) multiplied by er(b1 +···+bl ) thereby establishing (A.11). Thus, we may conclude, after making use of (A.11) in (A.10), that
j
0 ≤ wj (bj t ) − wj (b t ) +
l k=j+1
er(bj +···+bk−1 ) (wk (bk t ) − wk (b k t ))
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
551
+ er(bj +···+bl ) − er(bj +···+bl )
× wl+1 (b l+1 t ) + erbl+1 wl+2 (b l+2 t ) + · · ·
+ er(bl+1 +···+bm−1 ) wm (b m t ) =
˜ t ) − v(b t ) v(b
er(b1 +···+bj−1 )
where b˜ is the nonincreasing and therefore feasible bid vector obtained from b by replacing its coordinates j through l with the coordinates j through l ˜ t ) ≥ v(b t ) and b is optimal at of b Hence, b˜ is optimal at t because v(b
t. Thus, we have shown that whenever j l is a maximal set of consecutive coordinates such that bk > b k for all k = j l replacing in b the unit bids b j b l with the coordinate-by-coordinate larger unit bids bj bl results in a bid vector that is optimal at t Applying this result finitely often leads to the conclusion that b ∨ b is optimal at t as desired. Q.E.D. LEMMA A.22: Consider the price competition game from Section 5.3. Under the partial orders on types ≥i defined there for each firm i, each firm possesses a monotone pure-strategy best reply when the other firms employ monotone pure strategies. PROOF: Suppose that all firms j = i employ monotone pure strategies according to ≥j defined in Section 5.3. Therefore, in particular, pj (cj xj ) is nondecreasing in cj for each xj and (5.6) applies. For the remainder of this proof, we omit most subscripts i to keep the notation manageable. Because firm i’s interim payoff function is continuous in his price for each of his types, and because his action space, [0 1] is totally ordered and comˆ x) for each of his types (c x) ∈ pact, firm i possesses a largest best reply, p(c ˆ is monotone according to ≥i . [0 1]2 . We will show that p(·) ¯ x) ¯ and t = (c x) in [0 1]2 be two types of firm i and suppose that Let t¯ = (c ¯ x¯¯−¯x = β(c¯ − c) for some β ∈ [0 α ] Let p¯ = p( ˆ c ¯ x) ¯ t¯ ≥i t Hence, c¯ ≥ c and i ¯ ¯ λ ¯ t + λt¯ for¯ λ ∈ [0 1] We wish to show that p¯ ≥ p ˆ c x) and t = (1 − λ) p = p( ¯ fundamental ¯ of calculus, ¯ ¯ By the ¯ theorem
vi (p t ) − vi (p t ) = ¯ λ
λ
so that ∂[vi (p t λ ) − vi (p t λ )] ¯ ∂λ
p ¯
p
∂vi (p t λ ) dp ∂p
552
PHILIP J. RENY
p ¯
∂2 vi (p t λ ) dp ∂λ ∂p p p 2 λ ∂2 vi (p t λ ) ¯ ∂ vi (p t ) = (c¯ − c) + (x¯ − x) dp ∂c ∂p ∂x ∂p ¯ ¯ p p 2 λ 2 λ ∂ vi (p t ) ∂ vi (p t ) +β dp = (c¯ − c) ¯ ∂c ∂p ∂x ∂p ¯ p =
≥ 0 ¯ Therefore, vi (p t¯) − where the inequality follows by (5.6) if p ≥ p ≥ c
¯
¯ ¯ vi (p t ) ≥ vi (p t) − vi (p t) ≥ 0 where the first inequality follows because ¯ ¯ ¯ the second follows because p is a best reply at t Therefore, t 0 = t t 1 = t¯ and ¯ ¯ ¯ then ¯ we have shown the following: If p ≥ c ¯ ¯ p] vi (p t¯) − vi (p t¯) ≥ 0 for all p ∈ [c ¯ ¯ ¯ ¯ then p( ˆ t ) = p¯ ≥ p = p( ˆ t) because p( ˆ t¯) is the largest best Hence, if p ≥ c ¯ ¯ ¯ ¯ x) ¯ is below c ¯ On the other reply at t¯ and because no best reply at t¯ = (c ¯ then p¯ = p( ˆ t¯) ≥ c¯ > p = p( ˆ t) where the first inequality again hand, if p < c ¯ ¯ ¯ We conclude that p¯ ≥ p as defollows because no best reply at t¯ is ¯below c. ¯ Q.E.D. sired. REFERENCES ATHEY (2001): “Single Crossing Properties and the Existence of Pure Strategy Equilibria in Games of Incomplete Information,” Econometrica, 69, 861–889. [499,500,505,509,510,513-515] AUMANN, R. J. (1964): “Mixed and Behavior Strategies in Infinite Extensive Games,” in Advances in Game Theory, ed. by M. Dresher, L. S. Shapley, and A. W. Tucker. Annals of Mathematics Study, Vol. 52. Princeton University Press, 627–650. [529] BIRKHOFF, G. (1967): Lattice Theory. Providence, RI: American Mathematical Society. [507] BORSUK, K. (1966): Theory of Retracts. Warsaw, Poland: Polish Scientific Publishers. [503,531,542] COHN, D. L. (1980): Measure Theory. Boston: Birkhauser. [534,537] DUGUNDJI, J. (1965): “Locally Equiconnected Spaces and Absolute Neighborhood Retracts,” Fundamenta Mathematicae, 52, 187–193. [542] EILENBERG, S., AND D. MONTGOMERY (1946): “Fixed Point Theorems for Multi-Valued Transformations,” American Journal of Mathematics, 68, 214–222. [500,502,503] HART, S., AND B. WEISS (2005): “Convergence in a Lattice: A Counterexample,” Mimeo, Institute of Mathematics, Department of Economics and Center for the Study of Rationality, The Hebrew University of Jerusalem. [507] JACKSON, M. O., AND J. M. SWINKELS (2005): “Existence of Equilibrium in Single and Double Private Value Auctions,” Econometrica, 73, 93–139. [548] JACKSON, M. O., L. K. SIMON, J. M. SWINKELS, AND W. R. ZAME (2002): “Communication and Equilibrium in Discontinuous Games of Incomplete Information,” Econometrica, 70, 1711–1740. [548] MCADAMS, D. (2003): “Isotone Equilibrium in Games of Incomplete Information,” Econometrica, 71, 1191–1214. [499,500,505,509,510,513-515,518,521,523]
PURE-STRATEGY EQUILIBRIA IN BAYESIAN GAMES
553
(2006): “Monotone Equilibrium in Multi-Unit Auctions,” Review of Economic Studies, 73, 1039–1056. [499] (2007): “On the Failure of Monotonicity in Uniform-Price Auctions,” Journal of Economic Theory, 137, 729–732. [501,517,519] MILGROM, P., AND J. ROBERTS (1990): “Rationalizability, Learning, and Equilibrium in Games With Strategic Complementarities,” Econometrica, 58, 1255–1277. [502] MILGROM, P., AND C. SHANNON (1994): “Monotone Comparative Statics,” Econometrica, 62, 157–180. [513,514] MILGROM, P., AND R. WEBER (1985): “Distributional Strategies for Games With Incomplete Information,” Mathematics of Operations Research, 10, 619–632. [508,510,511,517] RENY, P. J., AND S. ZAMIR (2004): “On the Existence of Pure Strategy Monotone Equilibria in Asymmetric First-Price Auctions,” Econometrica, 72, 1105–1126. [499,500,515] VAN ZANDT, T., AND X. VIVES (2007): “Monotone Equilibria in Bayesian Games of Strategic Complementarities,” Journal of Economic Theory, 134, 339–360. [502,514] VIVES, X. (1990): “Nash Equilibrium With Strategic Complementarities,” Journal of Mathematical Economics, 19, 305–321. [502]
Dept. of Economics, University of Chicago, 1126 East 59th Street, Chicago, IL 60637, U.S.A.;
[email protected]. Manuscript received November, 2009; final revision received September, 2010.
Econometrica, Vol. 79, No. 2 (March, 2011), 555–600
THE ECONOMICS OF LABOR COERCION

BY DARON ACEMOGLU AND ALEXANDER WOLITZKY1

The majority of labor transactions throughout much of history and a significant fraction of such transactions in many developing countries today are "coercive," in the sense that force or the threat of force plays a central role in convincing workers to accept employment or its terms. We propose a tractable principal–agent model of coercion, based on the idea that coercive activities by employers, or "guns," affect the participation constraint of workers. We show that coercion and effort are complements, so that coercion increases effort, but coercion always reduces utilitarian social welfare. Better outside options for workers reduce coercion because of the complementarity between coercion and effort: workers with a better outside option exert lower effort in equilibrium and thus are coerced less. Greater demand for labor increases coercion because it increases equilibrium effort. We investigate the interaction between outside options, market prices, and other economic variables by embedding the (coercive) principal–agent relationship in a general equilibrium setup, and studying when and how labor scarcity encourages coercion. General (market) equilibrium interactions working through the price of output lead to a positive relationship between labor scarcity and coercion along the lines of ideas suggested by Domar, while those working through the outside option lead to a negative relationship, similar to ideas advanced in neo-Malthusian historical analyses of the decline of feudalism. In net, a decline in available labor increases coercion in general equilibrium if and only if its direct (partial equilibrium) effect is to increase the price of output by more than it increases outside options. Our model also suggests that markets in slaves make slaves worse off, conditional on enslavement, and that coercion is more viable in industries that do not require relationship-specific investment by workers.

KEYWORDS: Coercion, feudalism, labor scarcity, principal–agent, slavery, supermodularity.
In the context of universal history, free labor, wage labor, is the peculiar institution. M. I. Finley (1976)
1. INTRODUCTION

STANDARD ECONOMIC MODELS of the labor market, regardless of whether they incorporate imperfections, assume that transactions in the labor market are "free." For most of human history, however, the bulk of labor transactions have been "coercive," meaning that the threat of force was essential in convincing workers to take part in the employment relationship, and thus in determining compensation. Slavery and forced labor were the most common forms of labor transactions in most ancient civilizations, including Greece, Egypt, Rome, several Islamic and Asian Empires, and most known pre-Columbian civilizations (e.g., Meltzer (1993), Patterson (1982), Lovejoy (2000), Davis (2006)).
1 We thank Stephen Morris, Andrew Postlewaite, four anonymous referees, and seminar participants at the Canadian Institute for Advanced Research, the Determinants of Social Conflict Conference, and MIT for useful comments and suggestions.
© 2011 The Econometric Society
DOI: 10.3982/ECTA8963
Slavery was also the basis of the plantation economies in the Caribbean (e.g., Curtin (1990), Klein and Vinson (2007)), in parts of Brazil and Colombia, and in the United States South (e.g., Patterson (1982), Fogel and Engerman (1974), Wright (1978)), while forced labor played a major role in Spanish Latin America with regard to mining and the encomiendas as well as in the subsequent hacienda system that developed in much of Latin America (e.g., Lockhart and Schwartz (1983), Lockhart (2000)). Although formal slavery has been rare in Europe since the Middle Ages, feudal labor relations, which include both forced labor services from serfs and various special dues and taxes to landowners, were the most important type of employment relationship until the 19th century except in cities (e.g., Bloom (1998)). Even today, the United Nations' International Labor Organization (ILO) estimates that there are over 12.3 million forced laborers worldwide (Andrees and Belser (2009)).2 The prevalence of slavery and forced labor in human history raises the question of when we should expect labor to be transacted in free markets rather than being largely or partly coerced. In a seminal paper, Domar (1970) provides one answer: slavery or serfdom should be more likely when labor is scarce so that (shadow) wages are high. This answer is both intuitive and potentially in line with the experience in the Caribbean where Europeans introduced slavery into islands that had their population decimated during the early phases of colonization. In contrast, the "neo-Malthusian" theory of feudal decline, exemplified by Habakkuk (1958), Postan (1973), Le Roy Ladurie (1977), and North and Thomas (1971), claims that coercive feudal labor relations started their decline when labor became scarce following the Black Death and other demographic shocks that reduced population and raised per capita agricultural income throughout Europe in the 16th century. Similarly, Acemoglu, Johnson, and Robinson (2002) show that Europeans were more likely to set up labor-coercive and extractive institutions when population density was high and labor was relatively abundant. The relationship between labor scarcity/abundance and coercion is also important in understanding the causes of continued prevalence of forced labor in many developing countries. In this paper, we develop a simple model of labor coercion. In partial equilibrium, our model is a version of the principal–agent framework, with two crucial differences. First, the agent (worker) has no wealth, so there is a limited liability constraint, and the principal can punish as well as reward the agent. Second, the principal chooses the amount of "guns" (coercion), which influences the reservation utility (outside option) of the agent.3 The first of these changes has been explored in several papers (e.g., Chwe (1990), Dow (1993),
2 The ILO estimates that of these 12.3 million, 20% are coerced by the state (largely into military service), 14% are forced sex workers, and the remaining 66% are coerced by private agents in other industries, such as agriculture, mining, ranching, and domestic service (Andrees and Belser (2009)). Our model applies most directly to the last category.
3 Throughout the paper, "guns" stand in for a variety of coercive tools that employers can use. These include the acquisition and use of actual guns by the employers as a threat against the workers or their families; the use of guards and enforcers to prevent workers from escaping or to force them to agree to employment terms favorable to employers; the confiscation of workers' identification documents; the setting up of a system of justice favorable to employers; investment in political ties to help employers in conflictual labor relations; and the use of paramilitaries, strikebreakers, and other nonstate armed groups to increase employer bargaining power in labor conflicts. In all instances of coercion mentioned here, for example, in the Caribbean plantation complex, African slave trade, the mita, the encomienda, the feudal system, and contemporary coercion in Latin America and South Asia, employers use several of these methods simultaneously.
Sherstyuk (2000)). The second is, to our knowledge, new, and is crucial for our perspective and our results; it captures the central notion that coercion is mainly about forcing workers to accept employment, or terms of employment, that they would otherwise reject.4 Our basic principal–agent model leads to several new insights about coercive labor relations. First, we show that coercion always increases the effort of the agent, which is consistent with Fogel and Engerman's (1974) view that Southern slavery was productive. Second, we show that coercion is always "socially inefficient," because it involves an (endogenously) costly way to transfer resources (utility) from workers to employers.5 Third, perhaps somewhat surprisingly, we find that workers who have a lower (ex ante) outside option are coerced more and provide higher levels of effort in equilibrium. The intuition for this result illustrates a central economic mechanism: in our model—and, we believe, most often in practice—effort and coercion are "complements." When the employer wishes to induce effort, he finds it optimal to pay wages following high output, so he must pay wages frequently when he induces high effort. Greater ex ante coercion enables him to avoid making these payments, which is more valuable when he must pay frequently, hence the complementarity between effort and coercion. This observation also implies that more "productive" employers will use more coercion, and thus a worker will be worse off when matched with a more productive firm. This contrasts with standard results in models of noncoercive labor markets where ex post rent sharing typically makes workers matched with more productive employers better off. It also implies that coerced workers may receive high expected monetary compensation, despite having low welfare, which is consistent with the finding of both Fogel and Engerman (1974) and ILO (2009) that coerced laborers often receive income close to that of comparable free laborers.
4 This view of coercion is consistent with the historical and contemporary examples given above as well as with the 1930 ILO Convention's definition of compulsory labor as "work or service which is exacted from any person under the menace of any penalty and for which the said person has not offered himself voluntarily" (quoted in Andrees and Belser (2009, p. 179)). Discussing contemporary coercion, Andrees and Belser (2009, p. 179) write that "a situation can qualify as forced labor when people are subjected to psychological or physical coercion . . . in order to perform some work or service that they would otherwise not have freely chosen."
5 It also offers a straightforward explanation for why gang labor disappeared after the Reconstruction, which was a puzzle for Fogel and Engerman. In this light, our model is much more consistent with Ransom and Sutch's (1975, 1977) evidence and interpretation of slavery and its aftermath in the South.
The above-mentioned partial equilibrium results do not directly address whether labor scarcity makes coercion more likely. To investigate this issue and the robustness of our partial equilibrium results to general equilibrium interactions, we embed our basic principal–agent model of coercion in a general (market) equilibrium setting, with two distinct equilibrium interactions. The first is a labor demand effect: the price an employer faces for his output is determined endogenously by the production—and thus coercion and effort—decisions of all employers, and affects the marginal product of labor and the return to coercion. The second is an outside option effect: agents' outside options affect employers' coercion and effort choices, and because agents who walk away from a coercive relationship (by paying a cost determined by the extent of coercion) may match with another coercive employer, effective outside options are determined by the overall level of coercion in the economy. Labor scarcity encourages coercion through the labor demand effect because labor scarcity increases the price of output, raising the value of effort and thus encouraging coercion; this is reminiscent of Domar's intuition.6 On the other hand, labor scarcity discourages coercion through the outside option effect, because labor scarcity increases the marginal product of labor in (unmodeled) competing sectors of the economy, which increases outside options and thus discourages coercion. This is similar in spirit to the neo-Malthusian theory of feudal decline, where labor scarcity increased the outside opportunities for serfs, particularly in cities, and led to the demise of feudal labor relations. We show that whether the labor demand or the outside option effect dominates in general equilibrium is determined by whether the overall direct (i.e., partial equilibrium) effect of labor scarcity on the difference between the price of output and the outside option is positive or negative. This finding provides a potential answer to the famous critique of the neo-Malthusian theory due to Brenner (1976) (i.e., that falling populations in Eastern Europe were associated with an increase in the prevalence of forced labor in the so-called second serfdom; see Ashton and Philpin (1987)), because, for reasons we discuss below, the fall in population in Eastern Europe was likely associated more with higher prices of agricultural goods than with better outside options for workers. The tractability of our principal–agent framework also enables us to investigate several extensions. First, we introduce ex ante investments and show that there is a type of "holdup" in this framework, where workers underinvest in skills that increase their productivity in their current coercive relationship
6
A recent paper by Naidu and Yuchtman (2009) exploits the effects of labor demand shocks under the Master and Servant Acts in 19th century Britain and finds evidence consistent with the labor demand effect.
(since workers who are more productive with their current employer are coerced more) and overinvest in skills that increase their outside option (since coercion is decreasing in their outside option). This extension provides a potential explanation for why coercion is particularly prevalent in effort-intensive and low-skill activities, and relatively rare in activities that require investment in relationship-specific skills or are “care intensive” (as argued by Fenoaltea (1984)). Second, we investigate the implications of coercion when it affects the interim outside option of the agent and also when ex post punishments are costly. Third, we show that when coercion choices are made before matching, our model generates an economies of scale effect in line with the idea suggested in Acemoglu, Johnson, and Robinson (2002) that greater labor abundance makes extractive institutions and coercion more profitable. Finally, we investigate the implications of “trading in slaves,” whereby employers can sell their coerced agents to other potential employers, and we show that such trade always reduces agent welfare and may reduce social welfare (because slave trade shifts the productivity distribution of active employers in the sense of first-order stochastic dominance, and with greater productivity comes greater coercion). Despite the historical importance of coercion, the literature on coercive labor markets is limited. Early work in this area includes Conrad and Meyer (1958), Domar (1970), Fogel and Engerman (1974), and Ransom and Sutch (1977). Bergstrom (1971) defines a “slavery equilibrium”—a modification of competitive equilibrium in which some individuals control the trading decisions of others—and shows existence and efficiency property of equilibria. Findlay (1975) and Canarella and Tomaske (1975) present models that view slaves as “machines” that produce output as a function of payments and force. Barzel (1977) performs a comparative analysis of slavery and free labor under the assumption that slaves, but not free workers, must be monitored and that slaves (exogenously) work harder. In more recent work, Basu (1986) and Naqvi and Wemhöner (1995) develop models in which landlords coerce their tenants by inducing other agents to ostracize them if they do not agree to favorable contract terms with the landlord. Genicot (2002) develops a model in which workers are collectively better off without the option of voluntarily entering into bonded labor agreements, because this stimulates the development of alternative forms of credit. Conning (2004) formalizes and extends Domar’s hypothesis in the context of a neoclassical trade model with a reduced-form model of slavery. Lagerlöf (2009) analyzes a dynamic model of agricultural development in which slavery is more likely at intermediate stages of development. The paper most closely related to ours is Chwe’s (1990) important work on slavery. Chwe analyzes a principal–agent model closely related to our partial equilibrium model. There are several differences between Chwe’s approach and ours. First, his model has no general equilibrium aspects and does not investigate the relationship between labor scarcity and coercion, which is one of
our central objectives. Second, and more importantly, in Chwe’s model, the principal cannot affect the agent’s outside option, whereas all of our main results follow from our fundamental modeling assumption that coercion is about affecting the outside option of the agent (i.e., coercing an individual to accept an employment contract that he or she would not have otherwise accepted).7 For example, this modeling assumption is important for our results on efficiency (in Chwe’s model, coercion is typically efficiency-enhancing). The rest of the paper is organized as follows. Section 2 introduces our model. Section 3 characterizes the solution to the principal–agent problem with coercion and presents several key comparative static results. Section 4 studies our general equilibrium model and investigates the relationship between labor scarcity and coercion. Section 5 presents several extensions. Section 6 concludes. Appendix A contains the proofs of Propositions 1 and 2, and relaxes some of the simplifying assumptions used in the text. Appendix B, which is available online as Supplemental Material (Acemoglu and Wolitzky (2011)), considers a generalization of our basic principal–agent model to include multiple levels of output. 2. MODEL In this section, we describe the environment. We start with the contracting problem between a coercive producer and an agent, and then describe how market prices and outside options are determined. We consider a simplified version of our model in the text and present the general version in Appendix A. 2.1. The Environment There is a population of mass 1 of identical (coercive) producers and a population of mass L < 1 of identical agents (forced laborers, slaves, or simply workers); L is an inverse measure of labor scarcity in the economy. All agents are risk-neutral. Throughout the text we focus on the case where producers are homogeneous and each producer has a project that yields x > 0 units of a consumption good if successful and zero units if unsuccessful.8 Each producer is initially randomly matched with a worker with probability L. We first describe the actions and timing of events after such a match has taken place. A producer with productivity x who is matched with an agent chooses a level of guns, g ≥ 0, at cost ηχ(g), and simultaneously offers (and commits to) a “contract” specifying an output-dependent wage–punishment 7 Interestingly, Chwe (1990, p. 1110) agrees with this perspective and writes “one forces another person to be her labourer if one does so by (even indirectly) changing her reservation utility,” but then assumes that the principal cannot change the agent’s reservation utility. In particular, the agent in Chwe’s model has exogenous reservation utility U¯ and she receives at least this payoff in expectation under any contract she accepts. 8 In Appendix A, we consider the case where there is a distribution of productivity among the producers.
pair (wy, py) for y ∈ {h, l}, corresponding to high (x) and low (0) output, respectively. Wages and punishments have to be nonnegative, that is, wy ≥ 0 and py ≥ 0, corresponding to the agent having no wealth, and we assume that inflicting punishment is costless for the producer.9 We also assume that χ(0) = 0 and χ(·) is twice differentiable, strictly increasing, and strictly convex, with derivative denoted by χ′ that also satisfies χ′(0) = 0 and lim_{g→∞} χ′(g) = ∞. The parameter η > 0 corresponds to the cost of guns or, more generally, to the cost of using coercion, which is mainly determined by institutions, regulations, and technology (e.g., whether slavery is legal). Following the contract offer of the producer, the agent either accepts or rejects the contract. If she rejects, she receives payoff equal to her (intrinsic) outside option, ū, minus the level of guns, g, that is, ū − g, and the producer receives payoff 0. The interpretation of the agent's payoff is that if she rejects the contract, the principal inflicts punishment g on her before she "escapes" and receives her outside option, ū.10 This formulation introduces our main assumption that coercion involves using force or the threat of force to convince an agent to accept an employment relationship that she might have rejected otherwise. In light of this, g will be our measure of coercion throughout the paper. We can also think of ū − g as the agent's "extrinsic" outside option, influenced by the coercion of the producer, as opposed to ū, which could be thought of as the "intrinsic" outside option, determined by factors outside the current coercive relationship. If the agent accepts the contract offer, then she chooses effort level a ∈ [0, 1] at private cost c(a). Here a is the probability with which the project succeeds, leading to output x. We assume that c(0) = 0 and that c(·) is strictly increasing, strictly convex, and twice differentiable, with derivative denoted by c′, and we also impose that lim_{a→1} c(a) = ∞ to ensure interior solutions. Suppose that the market price for the output of the producer is P. Thus when the agent accepts contract (wy, py) and chooses effort a, guns are equal to g, and output realization is y, the producer's payoff is
Py − wy − ηχ(g)
and the agent's payoff is
wy − py − c(a).
9 The constraint wy ≥ 0 is a standard "limited liability" constraint in principal–agent theory, although due to the possibility of coercion and punishment in the present context, this should not be interpreted as the agent having any kind of legal protection.
10 This formulation implies that g affects the agent's utility if she "escapes" rather than her utility when she accepts the employment contract. Thus it is the threat of force, not its actual exercise, that matters. Nevertheless, since g is chosen at the beginning, this threat is only credible (feasible) when the producer undertakes the investments in coercive capacity (guns).
An equilibrium contract (for given market price, P, and outside option, ū) is the subgame perfect equilibrium of the above-described game between the producer and the agent.11 Given the timing of events, this equilibrium contract is a solution to the maximization problem
(1)
max_{(a, g, wh, wl, ph, pl) ∈ [0,1] × R₊⁵}  a(Px − wh) + (1 − a)(−wl) − ηχ(g)
subject to (IR0 )
a(wh − ph) + (1 − a)(wl − pl) − c(a) ≥ ū − g
and (IC0 )
a ∈ arg max_{ã ∈ [0,1]} ã(wh − ph) + (1 − ã)(wl − pl) − c(ã).
Here (IR0) can be interpreted as the "individual rationality" or "participation constraint" of the agent. If this constraint is not satisfied, then the agent would reject the contract—run away from the match with the producer. (IC0) is the "incentive compatibility" constraint, ensuring that a is the agent's best response in the subgame following the contract offer and her acceptance of the contract. There is no loss of generality in letting the producer choose a from the set of maximizers, since if the producer expected an effort level choice from this set that did not maximize his payoff, then he would have a profitable deviation. Thus solutions to this program coincide with subgame perfect equilibria.12 In the text, we make the following additional assumption on c(·):

ASSUMPTION 1: c(·) is three times differentiable and satisfies
(1 − a)c‴(a) ≥ c″(a)  for all a.
Assumption 1 guarantees that the program that characterizes equilibrium contracts, (1) subject to (IR0) and (IC0), is strictly concave13 and is adopted to simplify the exposition. Appendix A shows that all of our substantive results hold without this assumption.
11 As our definition of general equilibrium, Definition 1 (below) makes clear, not every equilibrium contract may be part of a general equilibrium, since the levels of P and ū, which we take as given here, may not correspond to equilibrium values. One might thus alternatively refer to an equilibrium contract as an "optimal contract." We use the term equilibrium contract throughout for consistency.
12 Observe also that, given (wh, wl, ph, pl), the agent's maximization problem is strictly concave, which implies that the principal cannot induce the agent to randomize (and this remains true if the principal offers a lottery over (wh, wl, ph, pl)). Therefore, our implicit assumption that the principal offers a deterministic contract is without loss of generality.
13 Assumption 1 is a slight weakening of the sufficient condition for concavity of the producer's problem imposed by Chwe. Chwe provides a justification for this apparently ad hoc assumption: define f(·) by f(−log(1 − a)) ≡ c(a), so that f(ρ) is the cost to the worker of ensuring success with probability 1 − e^{−ρ}; then one can verify that (1 − a)c‴(a) ≥ 2c″(a) if f‴(·) ≥ 0. Our condition simply weakens this to (1 − a)c‴(a) ≥ c″(a).
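For concreteness, one parametric cost function satisfying all of the maintained conditions, including Assumption 1, is the following (an illustration of ours, not a functional form used in the paper):

```latex
c(a) = -a - \log(1-a), \qquad
c'(a) = \frac{a}{1-a}, \qquad
c''(a) = \frac{1}{(1-a)^{2}}, \qquad
c'''(a) = \frac{2}{(1-a)^{3}} .
```

Here c(0) = 0, c is strictly convex, c(a) → ∞ as a → 1, and (1 − a)c‴(a) = 2/(1 − a)² = 2c″(a) ≥ c″(a), so Assumption 1 holds (indeed, with equality in Chwe's stronger version).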
2.2. Market Interactions and General Equilibrium

To complete the description of the model, we next describe how the market price, P, and an agent's outside option, ū, are determined. Denote the unique values of a and g that maximize (1) subject to (IR0) and (IC0) by a∗ and g∗.14 Then average production among matched producers is
(2)
Q ≡ a∗ x
The aggregate level of production is thus QL, and we assume that market price is given by (3)
P = P(QL)
where P(·) is a strictly positive, decreasing, and continuously differentiable demand schedule. Equation (3) captures the idea that greater output will reduce price. Equation (2) makes the equilibrium price P a function of the distribution of efforts induced by producers. An agent's outside option, ū, is determined according to a reduced-form matching model. When an agent escapes, she either matches with another coercive producer or escapes matching with coercive producers altogether (e.g., running away to the city or to freedom in the noncoercive sector). We assume that the probability that an agent who exercises her outside option matches with a randomly drawn, previously unmatched, coercive producer is γ ∈ [0, 1), and the probability that she matches with an outside, noncoercive producer is 1 − γ. In this latter case, she obtains an outside wage ũ(L), which depends on the quantity of labor in the coercive sector. We interpret ũ(L) as the wage in the noncoercive sector, and assume that ũ(L) is continuously differentiable and strictly decreasing, consistent with ũ(L) being the marginal product of labor in the noncoercive sector when the noncoercive production technology exhibits diminishing returns to scale and the quantity of labor in the noncoercive sector is proportional to the quantity of labor in the coercive sector.15 In practice, both the parameter γ and the exogenous outside option ũ(L) measure the possibilities outside the coercive sector. For example, in the context of feudalism and forced agricultural labor relations, the existence of cities to which coerced workers may escape would correspond to a low γ and a high ũ(L).
14 Uniqueness is again guaranteed by Assumption 1. In Appendix A, we not only relax Assumption 1, but also allow for heterogeneous producers and mixed strategies.
15 For example, the total amount of labor in the economy may be L̃, of which some fraction γ̃ is initially matched to coercive producers. Then the quantity of labor in the coercive sector equals L ≡ γ̃L̃, and the quantity of labor in the noncoercive sector equals (1 − γ̃)L̃ = (1 − γ̃)L/γ̃, which justifies simply writing ũ(L) for the marginal product of labor in the noncoercive sector.
This formulation implies that the outside option of an agent in a coercive relationship, ū, satisfies
(4)
ū = γ(ū − g∗) + (1 − γ)ũ(L).
Let G be the average number of guns used by (matched) coercive producers. Since producers are homogeneous, this is simply given by (5)
G ≡ g∗
We refer to G as the aggregate level of coercion in the economy. Equation (4) can now be written as (6)
ū = ũ(L) − (γ/(1 − γ))G.
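The step from (4) to (6) is a one-line rearrangement; substituting G = g∗ from (5):

```latex
\bar{u} = \gamma(\bar{u} - g^{*}) + (1-\gamma)\tilde{u}(L)
\;\Longrightarrow\;
(1-\gamma)\bar{u} = (1-\gamma)\tilde{u}(L) - \gamma g^{*}
\;\Longrightarrow\;
\bar{u} = \tilde{u}(L) - \frac{\gamma}{1-\gamma}\,G .
```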
Intuitively, (6) states that an agent's outside option in the coercive sector equals her payoff from exiting the coercive sector minus the aggregate level of coercion, G, as given by (5), times a constant, γ/(1 − γ), which is increasing in the difficulty of exiting the coercive sector. Given this description, we now define a (general or market) equilibrium for this economy, referred to as an equilibrium for short.16 Henceforth, we use the terminology "general equilibrium" even though the demand curve P(·) is exogenous.

DEFINITION 1: An equilibrium is a pair (a∗, g∗) such that (a∗, g∗) is an equilibrium contract given market price P and outside option ū, where P and ū are given by (3) and (6) evaluated at (a∗, g∗).

Throughout, we impose the following joint restriction on P(·), L, x, ũ(·), and c(·):

ASSUMPTION 2: P(Lx)x > ũ(L) + c′(0).

Assumption 2 states that, even if all producers were to set a = 1 and g = 0, the marginal product of effort would be greater than the agent's outside option plus her cost of effort at a = 0. Our analysis below will show that Assumption 2 is a sufficient (though not necessary) condition for all matched producers to
16
Here, we restrict attention to pure-strategy equilibria without loss of generality, since the program that characterizes equilibrium contracts is strictly concave. A more general approach is developed in Appendix A.
induce their agents to exert positive effort (i.e., generate positive expected output) in equilibrium. Therefore, imposing this assumption allows us to focus on the economically interesting case and simplify the exposition considerably. In the next section, we take the market price, P, and outside option, ū, as given and characterize equilibrium contracts. We then turn to the characterization of (general) equilibrium in Section 4, which will enable us to discuss issues related to the effects of labor scarcity on coercion, as well as to verify the robustness of the partial equilibrium effects in the presence of general equilibrium interactions.

3. EQUILIBRIUM CONTRACTS AND COMPARATIVE STATICS

3.1. Equilibrium Contracts

Recall that an equilibrium contract is a solution to (1) subject to (IR0) and (IC0). Thus an equilibrium contract is simply a tuple (a∗, g∗, wh, wl, ph, pl) ∈ [0, 1] × R₊⁵. Our first result provides a more tractable characterization of equilibrium contracts when they involve positive effort (a∗ > 0). Throughout the paper, we use the notation [z]+ ≡ max{z, 0}.

PROPOSITION 1: Suppose Px > ū + c′(0). Then any equilibrium contract involves a∗ > 0 and g∗ > 0, and an equilibrium contract is given by (a∗, g∗, wh, wl, ph, pl) such that
(7)
(a∗, g∗) ∈ arg max_{(a, g) ∈ [0,1] × R₊}  Pxa − a[(1 − a)c′(a) + c(a) + ū − g]+ − (1 − a)[−ac′(a) + c(a) + ū − g]+ − ηχ(g),
with wl = ph = 0, wh = (1 − a∗)c′(a∗) + c(a∗) + ū − g∗ ≥ 0, and pl = a∗c′(a∗) − c(a∗) − ū + g∗ ≥ 0.

See Appendix A for the proof.

REMARK 1: The condition Px > ū + c′(0) is automatically satisfied when Assumption 2 holds, since P ≥ P(Lx) and ū ≤ ũ(L). Thus Proposition 1 always applies under our maintained assumption. The qualifier Px > ū + c′(0) is added for emphasis, since, as the proof illustrates, it ensures that a∗ > 0, which is in turn important for this result. Problem (7) is not equivalent to the maximization of (1) subject to (IR0) and (IC0) when the solution to the latter problem involves a∗ = 0.

REMARK 2: Proposition 1 states that (1 − a)c′(a) + c(a) + ū − g ≥ 0 and −ac′(a) + c(a) + ū − g ≤ 0 in any equilibrium contract; so in any equilibrium contract, the right-hand side of (7) equals
(8)
Pxa − a(1 − a)c′(a) − ac(a) − aū + ag − ηχ(g).
Straightforward differentiation shows that under Assumption 1, this expression is strictly concave; thus an equilibrium contract is characterized by a unique pair (a∗, g∗) (see Proposition 2). Hence, in what follows, we refer to (a∗, g∗) as an equilibrium contract.

REMARK 3: The maximization problem (7) is (weakly) supermodular in (a, g, x, P, −ū, −η) (see Proposition 11 in Appendix A). We also show in Lemma 3 in Appendix A that "generically" (more precisely, for all parameter values, except for possibly one value of each parameter) the expression −ac′(a) + c(a) + ū − g is strictly less than 0 in any equilibrium contract, and in this case, (7) is strictly supermodular in (a, g, x, P, −ū, −η) in the neighborhood of any equilibrium contract. This gives us strict rather than weak comparative statics everywhere. Supermodularity will be particularly useful when we relax Assumption 1 in Appendix A.

To obtain an intuition for Proposition 1, suppose that the solution to (1) (subject to (IR0) and (IC0)) indeed involves a > 0. Then recall that c is differentiable and that the first-order approach is valid in view of the fact that there are only two possible output realizations (y ∈ {0, x}), which implies that, given the contract offer (wy, py), the agent's maximization problem in (IC0) is concave. Moreover, since lim_{a→1} c(a) = ∞, the solution involves a < 1. This implies that (IC0) can be replaced by the corresponding first-order condition, where we write uh ≡ wh − ph and ul ≡ wl − pl for the agent's payoff (without effort costs) following the good and bad outcomes:
uh − ul = c′(a).
To punish and pay the agent simultaneously would waste money, so we have py = 0 if uy ≥ 0 and wy = 0 if uy ≤ 0. This implies that wh = [uh]+ and wl = [ul]+, so (1) can be written as
(9)
max_{(a, g, uh, ul) ∈ [0,1] × R₊ × R²}  a(Px − [uh]+) − (1 − a)[ul]+ − ηχ(g)
subject to
(IR1)
auh + (1 − a)ul − c(a) ≥ ū − g
and
(IC1)
uh − ul = c′(a).
Next, using (IC1) to substitute for ul in (IR1) shows that this problem is equivalent to maximizing (9) subject to
(IR2)
uh − (1 − a)c′(a) − c(a) ≥ ū − g.
Finally, using (IR2) to substitute uh out of (9), and using (IR2) and (IC1) to substitute ul out of (9) yields (7). Furthermore, it is intuitive that any solution to (7) will necessarily involve wl = 0 and wh ≥ 0 if a > 0, so that the contract does not punish the agent for a good outcome and does not reward her for a bad outcome. Note that (7) is strictly concave in g for given a. This, combined with the assumption that χ′(0) = 0 and lim_{g→∞} χ′(g) = ∞, implies that the first-order condition with respect to g, for a given equilibrium level of a,
(10)
χ′(g) = a/η
is necessary and sufficient whenever (7) is differentiable (and (7) is differentiable in a and g whenever ul < 0, which, as explained in Remark 3, holds almost everywhere; both of these claims are proved in Lemma 3 in Appendix A). This immediately implies that a producer who wishes to induce higher effort will use more guns. Put differently, as noted in Remark 3, (7) is weakly supermodular in a and g everywhere, and strictly so whenever ul < 0. Although mathematically simple, this result is both important for our analysis and economically somewhat subtle. One might have presumed that high effort could be associated with less or more coercion. Our model implies that it will always be associated with more coercion. The logic is as follows: (IR2) implies that coercion is valuable to the producer because, regardless of effort, it allows a one-for-one reduction in wages when the agent is successful (i.e., in wh, since uh = wh in an equilibrium contract). An agent who exerts high effort succeeds more often and, therefore, must be rewarded more often. This makes coercion more valuable to the producer. Next, recall from Remark 3 that (7) is also supermodular in (a, g, x, P, −ū, −η) (and thus it exhibits increasing differences in (a, g) and (x, P, −ū, −η)). This implies that changes in productivity, x, market price, P, outside option, ū, and cost of guns, η, will have unambiguous effects on the set of equilibrium contracts. This observation enables us to derive economically intuitive comparative static results from standard monotonicity theorems for supermodular optimization problems (e.g., Topkis (1998)).17

PROPOSITION 2: There exists a unique equilibrium contract (a∗, g∗). Moreover, (a∗, g∗) is increasing in x and P, and decreasing in ū and η.

See Appendix A for the proof.
17 Throughout, "increasing" stands for "strictly increasing," and "nondecreasing" stands for "weakly increasing."
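To make Proposition 2 concrete, the following minimal numerical sketch grid-maximizes (7) directly. The parametric forms c(a) = −a − log(1 − a) and χ(g) = g²/2, the function name solve_contract, and all parameter values are illustrative choices of ours (they satisfy the maintained assumptions, including Assumption 1), not objects from the paper.

```python
# Minimal sketch: solve the producer's problem (7) by brute-force grid search,
# assuming c(a) = -a - log(1 - a) and chi(g) = g^2 / 2 (illustrative choices).
import numpy as np

def c(a):       return -a - np.log1p(-a)     # effort cost, c(a) -> inf as a -> 1
def c_prime(a): return a / (1.0 - a)
def chi(g):     return g ** 2 / 2.0

def solve_contract(P, x, u_bar, eta, n=600, g_max=4.0):
    """Return the equilibrium contract (a*, g*) maximizing (7)."""
    a = np.linspace(0.0, 1.0 - 1e-6, n)[:, None]   # effort (rows)
    g = np.linspace(0.0, g_max, n)[None, :]        # guns (columns)
    uh_plus = np.maximum((1 - a) * c_prime(a) + c(a) + u_bar - g, 0.0)  # [uh]+
    ul_plus = np.maximum(-a * c_prime(a) + c(a) + u_bar - g, 0.0)       # [ul]+
    obj = P * x * a - a * uh_plus - (1 - a) * ul_plus - eta * chi(g)
    i, j = np.unravel_index(np.argmax(obj), obj.shape)
    return float(a[i, 0]), float(g[0, j])

# Proposition 2 in action: (a*, g*) rises with P and x, falls with u_bar and eta.
base = dict(P=1.0, x=2.0, u_bar=0.3, eta=1.0)
print("baseline    :", solve_contract(**base))
print("higher P    :", solve_contract(**{**base, "P": 1.3}))
print("higher x    :", solve_contract(**{**base, "x": 2.5}))
print("higher u_bar:", solve_contract(**{**base, "u_bar": 0.6}))
print("higher eta  :", solve_contract(**{**base, "eta": 2.0}))
```

With these forms, (χ′)⁻¹(a/η) = a/η, so the reported g∗ should track a∗/η, consistent with (10); the grid search itself does not rely on concavity or differentiability of (7).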
Proposition 2 is intuitive. Higher x and P both increase the value of a successful outcome for the producer and thus the value of effort. Since effort and coercion are complements, both a and g increase. The effect of P on coercion (g) captures the labor demand effect, which was suggested by Domar (1970) and will be further discussed in the next section. Similarly, a higher cost of guns, η, reduces the value of coercion. By complementarity between effort and coercion, this reduces both a and g. Proposition 2 also shows that (a∗, g∗) is decreasing in ū (= ũ(L) − γG/(1 − γ)), which is the essence of the outside option effect. This result is at first surprising; since higher g offsets the effect of higher ū (recall (IR0) or (IR2)), one might have expected g and ū to covary positively. This presumption would also follow from a possible, perhaps mistaken, reading of Domar (1970) on the hypothesis that labor scarcity corresponds to higher ū. However, Proposition 2 shows that the opposite is always the case.18 The intuition for this result is interesting: an individual with a worse outside option (lower ū) will be induced to work harder because her participation constraint, (IR2), is easier to satisfy, and agents working harder will be successful more often and will be paid more often. This increases the value of coercion to the producer and the level of equilibrium coercion.

3.2. Discussion of Assumptions

It is useful to briefly discuss the role of various assumptions in leading to the sharp characterization result in Proposition 1 and to (7), which will play a central role in the rest of our analysis. Seven assumptions deserve special mention. First, we assume that the coercive relationship starts with a match between the producer and the agent, and the only reason for the producer to offer an "attractive" contract to the agent is to prevent her from running away. This is important for our analysis, since it implies that producers do not compete with each other to attract agents. We believe that this is often a realistic assumption in the context of coercion. Serfs in Europe and forced laborers in Latin America were often tied to the land, and employers did not need to attract potential workers into serfdom. Slaves throughout the ages were often captured and coerced. According to Andrees and Belser (2009), even today many forced employment relationships originate when employers are able to lure workers into such relationships, for example, by promising good working conditions that do not materialize once workers arrive at a plantation or mine, at which point they are not allowed to leave. Second, we use a principal–agent model with moral hazard and a "limited liability" constraint, so that the worker cannot be paid a negative wage. We
18
The only previous analysis of the relationship between coercion and outside options is provided by Chwe (1990), who shows that better outside options lead to higher payoffs after both output realizations for the agent. Because higher payoffs are associated with less ex post punishment, this can be interpreted as better outside options leading to less coercion. In Chwe’s model, this result depends on the agent’s risk aversion (i.e., on “income effects”).
view both of these assumptions as central for a good approximation to actual coercive relationships. Inducing agents to exert effort is a crucial concern in coercive employment relationships, and clearly these agents cannot make (unlimited) payments to their employers, since they are trapped in the coercive relationship without other sources of income. From a theoretical point of view, both of these assumptions are important for our results (and we view this as a strength of our approach in clearly delineating results that depend on distinctive features of coercive relationships). Relaxing either of these two assumptions would imply that the employer could implement the "first-best" level of effort, aFB, given by Px = c′(aFB), either by dictating it or by choosing large enough negative payments after low output (given risk neutrality). In particular, in this case, the problem of a coercive producer, with productivity x, could be written as max_{g, wh} aFB(Px − wh) − ηχ(g) subject to aFB·wh − c(aFB) ≥ ū − g. Since the constraint will necessarily hold as equality, this problem can be written as max_{g≥0} aFB·Px − (ū − g + c(aFB)) − ηχ(g). This problem is no longer strictly supermodular, and coercion will always be independent of both ū and P. Therefore, all of our results depend on the principal–agent approach and the importance of effort and moral hazard (and limited liability).19 Third, we allow the principal to use punishment p ≥ 0. The presence of such punishments is another realistic aspect of coercive relationships. Moreover, they play an important role in our theoretical results by ensuring that the participation constraint, (IR0) or (IR2), holds as equality. In the absence of such punishments, the participation constraint can be slack, in which case there would be no role for using g to reduce the (extrinsic) outside option of the agent, and one could not talk of coercion making agents accept employment terms that they would otherwise reject. One could construct different versions of the principal–agent problem, where the participation constraint holds as equality even without punishments, and we conjecture that these models would generate similar insights. Our formulation is tractable and enables us to focus on the key role of coercion in inducing agents to accept employment terms that they would have rejected in the absence of force or threats of force. Fourth, we impose Assumption 2 throughout, which implies that productivity in the coercive sector is (sufficiently) greater than ũ and thus greater than agents' (intrinsic) outside option, ū. This makes coercive relationships viable and corresponds to situations in which coercive producers have access to valuable assets for production, such as land or capital. This type of unequal access to assets of production is a key feature supporting coercive relationships such as serfdom, forced labor, or slavery. Fifth, we assume that coercion is undertaken by each producer and thus corresponds to the producer's use of armed guards, enforcers, or threat of violence
19 Another justification for viewing moral hazard as central to coercion is the presence of ex post inefficient punishments in many coercive relationships. In our model, as well as in all standard principal–agent and repeated game models, no punishments would be observed in equilibrium under perfect monitoring.
against its laborers. In practice, much coercion is undertaken jointly by a group of producers (for example, via the use of local or national law enforcement, or the judiciary system, as was the case in the U.S. South both before and after the Civil War, e.g., Key (1949) or Ransom and Sutch (1977)). Moreover, even coercion by each individual producer presumes an institutional structure that permits the exercise of such coercion. A comprehensive study of coercion requires an analysis of the politics of coercion, which would clarify the conditions under which producers can use the state or other enforcement mechanisms to exercise coercion and pass laws reducing the outside option of their employees. Our analysis is a crucial step toward this bigger picture, since it clarifies the incentives of each producer to use coercion before incorporating details of how they will solve the collective action problem among themselves and cooperate in coercive activities. The working paper version shows how our results generalize to the case in which coercion is exercised collectively. Sixth, we assume risk neutrality. The effects we focus on in this paper do not disappear in the presence of risk aversion, although adding risk aversion complicates the analysis. Nevertheless, there is at least one important way in which making the agent risk-averse reinforces our central intuition that effort and coercion are complementary. Consider the case where ul < 0, so that the sole purpose of coercion is to reduce wh . By (IR2 ) and convexity of c(·), uh is increasing in a (for fixed g), and increasing g allows the principal to reduce uh one for one. When the agent is risk-averse, the wage that the producer must pay after high output to give the agent utility uh is convex in uh , since uh is a concave function of wh . Reducing uh is then more valuable to the principal when a is higher, which provides a second source of complementarity between a and g in the principal’s problem.20 Finally, for simplicity, we assume only two levels of output. Appendix B shows how our results can be generalized to an environment with multiple levels of output. 3.3. Further Comparative Statics In this subsection, we use Proposition 2 to examine the consequences of coercion for productivity, welfare, and wages. We first look at the implications of coercion on worker effort and productivity by changing the cost of coercion, η. Throughout the paper, when we make comparisons between a coercive equilibrium contract (coercion) and no coercion, the latter refers to a situation in 20 The reason that this argument is not completely general is that ul may equal 0 in an equilibrium contract. However, if the agent’s utility function for money, u(w), satisfies u (w) < 0, it can be shown that the producer’s problem, the analogue of (7), is strictly supermodular in (a g) regardless of ul (proof available from the authors upon request). In fact, the producer’s problem remains strictly supermodular in this case even in the absence of limited liability and/or punishments.
which either we exogenously impose g = 0 or, equivalently, η → ∞. Given this convention, the next corollary is an immediate implication of Proposition 2 (proof omitted):

COROLLARY 1: Coercion (or cheaper coercion, i.e., lower η) increases effort.

This result may explain Fogel and Engerman's (1974) finding that productivity was high among slaves in the U.S. South in the antebellum period. It is also intuitive. Coercion and effort are complements, so equilibrium contracts induce less effort when coercion becomes more difficult or is banned. The next corollary is immediate from the analysis in Section 3.1 and shows that coercion is unambiguously bad for the welfare of the agent.21

COROLLARY 2: Coercion (or cheaper coercion, i.e., lower η) reduces agent welfare.

PROOF: Since, as shown above, (IR0) holds as equality, the welfare of the agent is equal to ū − g. The result then follows from the fact that g is decreasing in η. Q.E.D.

Even though coercion reduces agent welfare, it may still increase some measures of "economic efficiency" or net output. In fact, Fogel and Engerman not only documented that slaves in the U.S. South had relatively high productivity, but argued that the slave system may have been economically efficient. While some aspects of the slave system may have been "efficiently" designed, the next two corollaries show that coercion in our model always reduces utilitarian social welfare and may even reduce net output (here utilitarian social welfare is the sum of the producer's and worker's utilities, i.e., Pxa − a(1 − a)c′(a) − ac(a) − aū + ag − ηχ(g) + ū − g; net output is output net of effort costs, i.e., Pxa − c(a)). First, we show that coercion can lead to effort above the first-best level that would prevail in the absence of information asymmetries and limited liability constraints (i.e., aFB given by c′(aFB) = Px); this implies that coercion can in fact reduce net output. The argument leading to this result is simple. Since lim_{a→1} c(a) = ∞, the first-best effort, aFB, is strictly less than 1. We show that as η → 0, any equilibrium contract involves an effort level a arbitrarily close to 1 (see the proof of Corollary 3), and since lim_{a→1} c(a) = ∞, in this case coercion necessarily reduces net output. The intuition for this is that coercion allows the producer to "steal utility from the agent," as shown by (IR0) or (IR2). Moreover, since the agent is subject to limited liability, the transfer of utility from the agent to the producer will take place inefficiently by inducing excessive effort. The next corollary formalizes this argument.
21 This result also contrasts with Chwe's (1990) framework, where the agent receives her outside option (reservation utility) regardless of coercion.
COROLLARY 3: There exists η∗∗ > 0 such that if η < η∗∗, then effort a is strictly greater than aFB.

PROOF: From the proof of Proposition 11 in Appendix A, there exists η∗ > 0 such that, for all η ≤ η∗, ul < 0 and a∗ solves
(11)
max_{a ∈ [0,1]}  Pxa − a[(1 − a)c′(a) + c(a) + ū − (χ′)⁻¹(a/η)] − ηχ((χ′)⁻¹(a/η)),
where we used the fact that when ul < 0, χ′(g∗) = a∗/η. From Proposition 1, a∗ > 0 for all η > 0. Since (11) is differentiable and lim_{a→1} c(a) = ∞, the first-order condition
(1 − a)(c′(a) + ac″(a)) + c(a) + ū = Px + (χ′)⁻¹(a/η)
is necessary. Now consider η → 0. For any a < 1, the left-hand side is finite, while, since χ is convex and satisfies lim_{g→∞} χ′(g) = ∞, and a∗ does not converge to 0 as η → 0 (because a∗ is decreasing in η by Proposition 2), the right-hand side converges to ∞. This implies that a∗ must also converge to 1 as η → 0. Since aFB < 1, this completes the proof of the corollary. Q.E.D.

The next corollary shows that utilitarian social welfare is always lower under coercion.

COROLLARY 4: Social welfare in any equilibrium contract under coercion (η < ∞) is strictly lower than social welfare in any equilibrium contract under no coercion.

PROOF: Let (a∗, g∗) be an equilibrium contract under coercion. Let SWC be social welfare under coercion given (a∗, g∗), and let SWN be social welfare under no coercion. Then
SWC = Pxa∗ − a∗(1 − a∗)c′(a∗) − a∗c(a∗) − a∗ū + a∗g∗ − ηχ(g∗) + ū − g∗
    < Pxa∗ − a∗(1 − a∗)c′(a∗) − a∗c(a∗) − a∗ū + ū
    ≤ max_{a ∈ [0,1]} Pxa − a(1 − a)c′(a) − ac(a) − aū + ū
    = SWN,
where the second and third lines are immediate since g∗ > 0 by (10) and a∗ ≤ 1, and the fourth line follows because the maximand in the third line is the same as the maximand in (7), that is, as (8), with g set to zero, which, by definition, characterizes the equilibrium contract under no coercion. Q.E.D.
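Corollary 4's welfare comparison can be checked numerically with the same illustrative functional forms and parameter values as in the earlier sketch (our choices, not the paper's), on the interior region where Remark 2's sign conditions hold:

```python
# Check Corollary 4: utilitarian social welfare is lower under coercion.
# Same illustrative forms as before: c(a) = -a - log(1-a), chi(g) = g^2/2.
import numpy as np

def c(a):       return -a - np.log1p(-a)
def c_prime(a): return a / (1.0 - a)

P, x, u_bar, eta = 1.0, 2.0, 0.3, 1.0
a = np.linspace(0.0, 1.0 - 1e-6, 200_000)

# Coercion: substitute g = a/eta from (10) into the producer's objective (8).
g = a / eta
profit_c = P*x*a - a*(1 - a)*c_prime(a) - a*c(a) - a*u_bar + a*g - eta*g**2/2
i = np.argmax(profit_c)
sw_coercion = profit_c[i] + u_bar - g[i]      # producer profit + agent welfare

# No coercion: same objective with g = 0; the agent keeps her outside option.
profit_n = P*x*a - a*(1 - a)*c_prime(a) - a*c(a) - a*u_bar
sw_free = profit_n.max() + u_bar

print(f"SWC = {sw_coercion:.4f} < SWN = {sw_free:.4f}")   # Corollary 4
```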
The intuition for Corollary 4 is simple: coercion is a costly means of transferring utility from the agent to the producer. Therefore, it is necessarily overused in equilibrium. Despite the simplicity of this intuition, the result contained in Corollary 4 has not appeared in the literature, to the best of our knowledge, because the central role of coercion in affecting the participation constraint has not been modeled. Another immediate implication of Proposition 2 follows (proof omitted):

COROLLARY 5: A coerced worker is better off when matched with a less productive producer (i.e., a producer with lower x).

The intuition (and the proof) is simply that producers with higher x use more coercion and thus give lower welfare, ū − g, to their agents. Once again, although straightforward, this corollary has interesting economic implications. One of these, discussed further in Section 5.4, is that trading in slaves makes agents worse off, even conditional on their being coerced. Finally, we consider the cross-sectional relationship between coercion and expected incentive pay, assuming that cross-sectional variation is generated by variation in x. Proposition 1 implies that an equilibrium contract always involves wl = 0 and wh = ū − (χ′)⁻¹(a/η) + (1 − a)c′(a) + c(a). An increase in x leads to an increase in a and affects wh only through its effect on a, so
∂wh/∂a = −1/(ηχ″(g)) + (1 − a)c″(a).
The sign of this derivative is ambiguous: the direct effect of an increase in a is to increase wh (as (IR2) binds). But an increase in a also increases g (through (10)), which reduces wh (again through (IR2)). If the first effect dominates, then an increase in x leads to higher g and wh; if the second effect dominates, then an increase in x leads to higher g and lower wh. The former case is particularly interesting because it provides an explanation for Fogel and Engerman's observation that workers who are subjected to more coercion are not necessarily less well paid. In contrast to their interpretation, our result also shows that this has nothing to do with the efficiency of slavery. We state this result in the next corollary (proof in the text).

COROLLARY 6: Cross-sectional variation in x leads to a positive correlation between g and wh if ∂wh/∂a > 0 for all a, and leads to a negative correlation between g and wh if ∂wh/∂a < 0 for all a.
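A quick way to see which case of Corollary 6 obtains for given primitives is to sweep x and trace out g∗ and wh. With the illustrative forms used in the earlier sketches, ∂wh/∂a = −1/η + 1/(1 − a), which is positive for all a > 0 when η = 1, so g and wh should co-move positively; this is a sketch under our assumed functional forms and parameter values, not a general computation from the paper.

```python
# Corollary 6: sweep productivity x and record (g*, wh) from the contract in
# Proposition 1, using c(a) = -a - log(1-a), chi(g) = g^2/2 (our choices).
import numpy as np

def c(a):       return -a - np.log1p(-a)
def c_prime(a): return a / (1.0 - a)

P, u_bar, eta = 1.0, 0.3, 1.0
a_grid = np.linspace(1e-6, 1.0 - 1e-6, 100_000)
g_grid = a_grid / eta                          # (10): g = (chi')^{-1}(a/eta)

for x in (1.5, 2.0, 2.5, 3.0):
    obj = (P*x*a_grid - a_grid*(1 - a_grid)*c_prime(a_grid) - a_grid*c(a_grid)
           - a_grid*u_bar + a_grid*g_grid - eta*g_grid**2/2)
    a = a_grid[np.argmax(obj)]
    g = a / eta
    wh = u_bar - g + (1 - a)*c_prime(a) + c(a)  # wage after high output
    print(f"x = {x:.1f}:  a* = {a:.3f}  g* = {g:.3f}  wh = {wh:.3f}")
```

Both g∗ and wh should rise with x here, illustrating the first case of Corollary 6: more coercion and higher pay going together.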
4. GENERAL EQUILIBRIUM In this section, we characterize and discuss the comparative statics of equilibria as defined in Definition 1. Our main objectives are twofold. The first is to understand the relationship between labor scarcity and coercion, which was one of our main motivating questions. The second is to investigate the robustness of the partial equilibrium insights derived in the previous section. ˜ We first recall that Assumption 2 ensures that P(QL)x > u¯ + c (0) = u(L) − γG/(1 − γ) + c (0). This implies that Proposition 1 applies and characterizes the set of equilibrium contracts given P and u¯ (or, alternatively, G). Then a (general) equilibrium is simply a pair (a∗ g∗ ) satisfying (7), where P and u¯ are given by (3) and (6) evaluated at (a∗ g∗ ). ¯ deEven though both endogenous (general equilibrium) objects, P and u, pend on the strategy profile of producers via two aggregates, Q and G, defined in (2) and (5), the structure of equilibria in this game is potentially complex, because the game can have multiple equilibria and exhibits neither strategic complements nor strategic substitutes. When a set of producers choose higher (a g), this increases both Q and G, but the increase in Q reduces P (since the function P(·) is decreasing) and discourages others from increasing their (a g), while the increase in G reduces u¯ and encourages further increases in (a g). These interactions raise the possibility that the set of equilibria may not be a lattice and make the characterization of equilibrium comparative statics difficult. Nonetheless, under Assumption 1 and our additional assumption that producers are homogeneous, the set of equilibria is a lattice and we can provide tight comparative statics results. In Appendix A, we show how this analysis can be extended to the general case where the set of equilibria may not be a lattice. If a is an optimal effort for the producer, then the strict concavity of (7) and (10) implies that (a (χ )−1 (a/η)) is the unique solution to (7). Homogeneity of producers then imply that all producers choose the same equilibrium contract (a g). This implies that Q = a∗ x and G = g∗ , so that from (10), we have G = ¯ x η) and g∗ (P u ¯ x η) for the (χ )−1 (a∗ /η) = (χ )−1 (Q/xη).22 Write a∗ (P u unique equilibrium contract levels of effort and guns given market price, P, ¯ productivity, x, and cost of coercion, η, and write outside option, u, γ Q ˜ φ(Q x γ L η) ≡ a∗ P(QL) u(L) − (χ )−1 x η x 1−γ xη so that φ(Q x γ L η) is the (unique) level of aggregate output consistent with each producer choosing an equilibrium contract given aggregate output Equation (10) holds only when ul < 0. Lemma 3 in Appendix A shows that ul < 0 everywhere except possibly for one vector of parameters and that comparative statics are exactly as if (10) holds everywhere. 22
QL, and the unique level of G consistent with aggregate output QL and equilibrium contracts. The main role of our assumptions here (homogeneous producers and Assumption 1, which will be relaxed in Appendix A) is to guarantee the existence of such a unique level of G. When the parameters can be omitted without confusion, we write φ(Q) for φ(Q, x, γ, L, η).

It is clear that if Q is an equilibrium level of aggregate output, then Q is a fixed point of φ(Q). The converse is also true, because if Q is a fixed point of φ(Q), then (a∗ = Q/x, g∗ = (χ′)⁻¹(Q/(xη))) is an equilibrium. This implies that the set of equilibrium (Q, G) pairs is a lattice, because (χ′)⁻¹(Q/(xη)) is increasing in Q. In what follows, we focus on the comparative statics of the extremal fixed points of φ(Q, x, γ, L, η). Even though φ(Q, x, γ, L, η) is not necessarily monotone in Q, we will show that it is monotone in its parameters, which is sufficient to establish global comparative static results.

PROPOSITION 3: (i) An equilibrium exists, the set of equilibria is a lattice, and the smallest and greatest equilibrium aggregates (Q, G) are increasing in γ and decreasing in η.
(ii) If ũ(L) = ũ₀ for all L, then the smallest and greatest equilibrium aggregates (Q, G) are decreasing in L.
(iii) If P(QL) = P₀ for all QL, then the smallest and greatest equilibrium aggregates (Q, G) are increasing in L.

PROOF: φ(0, x, γ, L, η) ≥ 0 and φ(x, x, γ, L, η) ≤ x, and φ(Q, x, γ, L, η) is continuous since, given Assumption 1, the program characterizing equilibrium contracts is strictly concave. Note also that φ(Q, x, γ, L, η) is increasing in γ and decreasing in η. The first part then follows from this observation combined with Corollary 1 from Milgrom and Roberts (1994) (which, in particular, states that if F(x, t) : [0, x̄] × Rᴹ₊ → [0, x̄] is continuous in x and increasing in t for all x, then the smallest and greatest fixed points of F(x, t) exist and are increasing in t). The second and third parts follow with the same argument, since when ũ(L) = ũ₀ for all L, φ(Q, x, γ, L, η) is decreasing in L, and when P(QL) = P₀ for all QL, φ(Q, x, γ, L, η) is increasing in L (recalling that ũ(L) is decreasing in L). Q.E.D.

Proposition 3 shows that a (general) equilibrium exists and its comparative statics are well behaved: the smallest and greatest equilibrium aggregates are increasing in the difficulty of leaving the coercive sector, and are decreasing in the cost of coercion. Part (ii) illustrates the labor demand effect. Because ũ(·) = ũ₀, labor scarcity has no impact on outside options and only increases price, encouraging coercion in line with Domar's thesis. Part (iii) illustrates the outside option effect. In this case, P(·) = P₀ and labor scarcity simply increases ũ (and thus ū), reducing coercion. Note, however, that comparative statics with respect to x are ambiguous, because for a fixed level of Q, the corresponding level of G is decreasing in x.²³

²³Another way to see this is to define φ as a function of a rather than Q, and to note that for fixed a, the corresponding price P(axL) is decreasing in x. Some comparative statics with respect to x are provided in Appendix A.
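The fixed-point logic behind Proposition 3 can be explored numerically. Everything parametric in the sketch below is an assumption of ours for illustration only: c(a) = −a − ln(1 − a), χ(g) = g²/2, P(Z) = 2/(1 + Z), and ũ(L) = 1/(1 + L). The sketch builds φ(Q) from the interior first-order condition for a (equation (A-6) in Appendix A) together with (10), locates the smallest and greatest fixed points by scanning for sign changes of φ(Q) − Q, and confirms that both increase with γ, as Proposition 3(i) states.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative primitives (our assumptions, not the paper's calibration):
c1  = lambda a: a / (1.0 - a)              # c'(a) for c(a) = -a - log(1 - a)
c2  = lambda a: 1.0 / (1.0 - a) ** 2       # c''(a)
c   = lambda a: -a - np.log(1.0 - a)
P   = lambda Z: 2.0 / (1.0 + Z)            # decreasing inverse demand
u_t = lambda L: 1.0 / (1.0 + L)            # decreasing outside option u_tilde(L)

def phi(Q, x=1.0, gamma=0.3, L=0.5, eta=1.0):
    """phi(Q): aggregate output implied by equilibrium contracts given Q."""
    G = Q / (x * eta)                                  # G = (chi')^{-1}(Q/(x eta))
    u_bar = u_t(L) - gamma / (1.0 - gamma) * G         # outside option, as in (6)
    p = P(Q * L)
    def foc(a):                                        # interior FOC, cf. (A-6)
        g = a / eta
        return p * x - ((1 - a) * c1(a) + c(a) + u_bar - g) - a * (1 - a) * c2(a)
    if foc(1e-9) <= 0.0:
        return 0.0
    return brentq(foc, 1e-9, 1.0 - 1e-9) * x

def extremal_fixed_points(gamma, n=2000):
    Qs = np.linspace(0.0, 1.0, n)
    diff = np.array([phi(Q, gamma=gamma) - Q for Q in Qs])
    roots = [Qs[i] for i in range(n - 1) if diff[i] * diff[i + 1] <= 0.0]
    return min(roots), max(roots)

# Smallest and greatest equilibrium output both rise with gamma.
for gamma in (0.2, 0.3, 0.4):
    lo, hi = extremal_fixed_points(gamma)
    print(f"gamma = {gamma:.1f}: smallest Q = {lo:.3f}, greatest Q = {hi:.3f}")
```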
Proposition 3 provides global comparative statics and addresses the labor scarcity issue when either the labor demand effect or the outside option effect is shut down. The next proposition characterizes the conditions under which the labor demand or the outside option effect dominates when both are present. Let us first observe that (8) can be alternatively rewritten as

(12) $$(P(QL)x - \tilde{u}(L))a - a(1-a)c'(a) - ac(a) + ag + a\frac{\gamma}{1-\gamma}G - \eta\chi(g).$$
This expression shows that the return to effort is increasing in L if an increase in L decreases market price by less than it decreases the outside option, that is, if

(13) $$QP'(QL) > \tilde{u}'(L).$$
An argument identical to the proof of Proposition 3 shows that an increase in L increases the smallest and greatest equilibrium aggregates (Q, G) if (13) holds for all Q. However, the converse statement that an increase in L decreases the smallest and greatest equilibrium aggregates (Q, G) if the opposite of (13) holds for all Q is not possible, since (13) always holds for sufficiently small Q. Nevertheless, equivalents of these results hold locally, as shown in the next proposition. Let (Q⁻(L), G⁻(L)) and (Q⁺(L), G⁺(L)) denote the smallest and greatest equilibrium aggregates given labor L, and let us use (Q•(L), G•(L)) to refer to either one of these two pairs.

PROPOSITION 4: Suppose that Q•(L₀)P′(Q•(L₀)L₀) > ũ′(L₀) (where Q•(L₀) is either Q⁺(L₀) or Q⁻(L₀)). Then there exists δ > 0 such that (Q•(L), G•(L)) > (Q•(L₀), G•(L₀)) for all L ∈ (L₀, L₀ + δ) (and (Q•(L), G•(L)) < (Q•(L₀), G•(L₀)) for all L ∈ (L₀ − δ, L₀)). Conversely, suppose that Q•(L₀)P′(Q•(L₀)L₀) < ũ′(L₀). Then there exists δ > 0 such that (Q•(L), G•(L)) < (Q•(L₀), G•(L₀)) for all L ∈ (L₀, L₀ + δ) (and (Q•(L), G•(L)) > (Q•(L₀), G•(L₀)) for all L ∈ (L₀ − δ, L₀)).

PROOF: We will only prove the first part of the proposition for (Q•(L), G•(L)) = (Q⁻(L), G⁻(L)). The proof for (Q•(L), G•(L)) = (Q⁺(L), G⁺(L)) and the proof of the second part are analogous.

Fix L₀ > 0 (and x, γ, η). First, note that φ(Q, x, γ, L, η) is continuous in Q and L (because (8) is strictly concave) and thus uniformly continuous over the
compact region [0, Q⁻(L₀)] × [0, 2L₀]. By hypothesis, Q⁻(L₀)P′(Q⁻(L₀)L₀) > ũ′(L₀). Since P(·), ũ(·), and (χ′)⁻¹(·) are continuously differentiable, (12) then implies that for any ε > 0, there exists δ̄ > 0 such that φ(Q, x, γ, L, η) is (strictly) increasing in L on {(L, Q) : |L − L₀| ≤ δ̄ and |Q − Q⁻(L₀)| ≤ ε}. Recall that Q⁻(L₀) is the smallest fixed point of φ(Q, x, γ, L₀, η). Let Ψ(L) ≡ min_{0 ≤ Q ≤ Q⁻(L₀) − δ̄} (φ(Q, x, γ, L, η) − Q), which is well defined and continuous in L by Berge's maximum theorem. Moreover, Ψ(L₀) ≥ ε̄ for some ε̄ > 0, since if it were equal to 0, then this would imply that there exists Q̃ ∈ [0, Q⁻(L₀) − δ̄] such that φ(Q̃, x, γ, L, η) − Q̃ = 0, contradicting the fact that Q⁻(L₀) is the smallest fixed point. As Ψ(L) is continuous, for any ε > 0 there exists δ such that for any L ∈ (L₀ − δ, L₀ + δ), we have |Ψ(L) − Ψ(L₀)| < ε. Choose ε = ε̄, denote the corresponding δ by δ̂, and let δ̃ = min{δ̂, δ̄}. Then for any L ∈ (L₀ − δ̃, L₀ + δ̃), φ(Q, x, γ, L, η) − Q > 0 for all Q ∈ [0, Q⁻(L₀) − δ̃], and thus Q⁻(L) > Q⁻(L₀) − δ̃.

We next show that for any L ∈ (L₀, L₀ + δ̃), Q⁻(L) > Q⁻(L₀). To obtain a contradiction, suppose this is not the case. Since we have already established that Q⁻(L) > Q⁻(L₀) − δ̃, there exists L̂ ∈ (L₀, L₀ + δ̃) such that Q⁻(L₀) − δ̃ ≤ Q⁻(L̂) ≤ Q⁻(L₀). Moreover, given our choice of δ̃, φ(Q, x, γ, L, η) is (strictly) increasing in L on {(L, Q) : |L − L₀| ≤ δ̃ and |Q − Q⁻(L₀)| ≤ δ̃}, and thus Q⁻(L̂) = φ(Q⁻(L̂), x, γ, L̂, η) > φ(Q⁻(L̂), x, γ, L₀, η). Since φ(Q, x, γ, L₀, η) is continuous in Q and φ(Q, x, γ, L₀, η) ≥ 0 for all Q, Brouwer's fixed point theorem now implies that φ(Q, x, γ, L₀, η) has a fixed point Q* in [0, Q⁻(L̂)]. Since Q⁻(L̂) > φ(Q⁻(L̂), x, γ, L₀, η), it must be the case that Q* < Q⁻(L̂) ≤ Q⁻(L₀), where the second inequality follows by hypothesis. But this contradicts the fact that Q⁻(L₀) is the smallest fixed point of φ(Q, x, γ, L₀, η) and completes the proof that Q⁻(L) > Q⁻(L₀). Since G⁻(L) = (χ′)⁻¹(Q⁻(L)/(xη)) and χ′ is strictly increasing, G⁻(L) > G⁻(L₀) follows immediately. Q.E.D.

In general, labor scarcity both increases price—encouraging coercion through the labor demand effect—and increases the marginal product of labor in the noncoercive sector, discouraging coercion through the outside option effect. Proposition 4 shows that the impact of a small change in labor scarcity is determined by whether its direct (partial equilibrium) effect on price or outside options is greater. This proposition thus helps us interpret different historical episodes in light of our model. For example, Brenner (1976) criticized neo-Malthusian theories that predict a negative relationship between labor scarcity and coercion because in many instances, most notably during the "second serfdom" in Eastern Europe, population declines were associated with more—not less—coercion. Proposition 4 suggests that during the periods of population declines in Western Europe, particularly during the Black Death, which were the focus of the neo-Malthusian theories, labor scarcity may have
significantly increased outside options, thus reducing P − ũ and discouraging coercion. In contrast, it is plausible that the effects of population declines in Eastern Europe were quite different and would involve higher (rather than lower) P − ũ for at least two reasons. First, the decline in population in this case coincided with high Western European demand for Eastern European agricultural goods. Second, the increase in the outside option of Eastern European workers is likely to have been muted due to the relative paucity and weakness of cities in this region. This discussion illustrates that the general model that allows for both labor demand and outside option effects leads to richer and more subtle comparative statics, and also highlights that the predictions depend in a simple way on whether labor scarcity has a larger effect on the market price of output or workers' outside options.

5. EXTENSIONS

5.1. Ex ante Investments and Effort- versus Skill-Intensive Labor

A natural conjecture would be that, in addition to the inefficiencies associated with coercion identified above, coercive relationships discourage ex ante investments by workers. For example, one could use the benchmark model of relationship-specific investments introduced by Grossman and Hart (1986), equate "coercion" with the ex post bargaining power of the producer, and thus conclude that coercion should discourage investments by the worker while potentially encouraging those by the producer. Our model highlights that the effect of coercion on investments is more complex. In particular, coercion will encourage agents to undertake investments that increase their outside options, while at the same time giving them incentives to reduce their productivity within the relationship. This is because, as shown in Section 3, greater outside options increase the agent's payoff more than one for one (by also reducing coercion), while greater productivity inside the coercive relationship reduces the agent's payoff (by increasing coercion). Conversely, the producer will undertake those investments that increase productivity more than the outside option of the agent. One implication of this result is that the presence of coercion may encourage agents to invest in their general human capital, although for reasons different from those the standard Becker approach would suggest. Coercion also discourages them from investing in relationship-specific human capital; in fact, it gives them incentives to sabotage their producer's productive assets.

We model these issues by adding an interim stage to our game between matching and the investment in guns, g. At this stage, matched agents and producers make investment decisions, denoted by i and I, respectively. For simplicity, we analyze the case in which such investment opportunities are available only to one party (either the agent or the producer)²⁴ and focus on partial
equilibrium.

²⁴This implies that we are abstracting from indirect effects resulting from the interaction of investments by agents and producers.

Investments potentially affect both productivity x within the relationship and the worker's outside option ū, which we now write as either x(i) and ū(i) or x(I) and ū(I), depending on which side makes the investment. Suppose that investment i costs the agent ζ(i), while investment I costs a producer ζ̃(I), and that both cost functions are increasing and convex. We also further simplify the discussion throughout by assuming that sign(Px′(i) − ū′(i)) and sign(Px′(I) − ū′(I)) do not depend on i and I, that is, each investment always has a larger effect on either Px or ū. This last assumption enables us to clearly separate two different cases, which have distinct implications.

Let us first analyze the situation in which only agents have investment opportunities. As a benchmark, note that if there is no coercion (i.e., η = ∞), a matched worker anticipates receiving expected utility ū(i) − ζ(i) after choosing investment i and, therefore, chooses i to solve
(14) $$\max_{i \in \mathbb{R}_+} \bar{u}(i) - \zeta(i).$$
Returning to the analysis in Section 3, it is clear that when there is the possibility of coercion (recall that guns, g, is chosen after the agent's investment, i), the agent will receive utility ū(i) − g(i) − ζ(i) and she, therefore, chooses i to solve
(15) $$\max_{i \in \mathbb{R}_+} \bar{u}(i) - g(i) - \zeta(i).$$
To characterize the solution to this program, we need to determine g(i), that is, how the choice of guns by the producer responds to changes in i. Equation (10) implies that g(i) = (χ′)⁻¹(a(i)/η), which we differentiate to obtain
(16) $$g'(i) = \frac{a'(i)}{\eta \chi''(g(i))}.$$
Next, note that the producer's expected profit in an equilibrium contract may be written as

(17) $$(Px(i) - \bar{u}(i))a - a(1-a)c'(a) - ac(a) + ag - \eta\chi(g).$$
Therefore, sign(a′(i)) = sign(Px′(i) − ū′(i)) and thus sign(g′(i)) = sign(Px′(i) − ū′(i)). Combining this with (14) and (15) then immediately yields the following result (proof in the text).

PROPOSITION 5: Equilibrium investment by the agent under coercion, iC (the solution to (15)), is smaller [greater] than the no-coercion investment, iN (the solution to (14)), if Px′(i) − ū′(i) > 0 [if Px′(i) − ū′(i) < 0].

Proposition 5 implies that under coercion, agents will underinvest (compared to no coercion) in tasks that increase their within-relationship productivity relative to their outside option. This is because when the difference between
the agent’s productivity and her outside option increases, the producer chooses a contract with higher effort and coercion, which reduces agent welfare. Proposition 5 relates to the argument by Fenoaltea (1984) that slavery is often observed in effort-intensive tasks, but not in care-intensive tasks. Fenoaltea attributed this association to the psychological difficulty of using punishments to motivate care. Our result provides an alternative explanation, under the assumption that care-intensive tasks are those where relationship-specific investments by the worker are more important—in this interpretation, a corresponds to effort, while i is associated with care, and we have in mind tasks where Px (·) − u¯ (·) > 0.25 Next, consider the situation where only producers undertake investments. Without coercion, a producer who chooses I and a receives expected payoff (18)
(18) $$(Px(I) - \bar{u}(I))a - a(1-a)c'(a) - ac(a) - \tilde{\zeta}(I),$$
while with coercion he receives expected payoff

(19) $$(Px(I) - \bar{u}(I))a - a(1-a)c'(a) - ac(a) + ag - \eta\chi(g) - \tilde{\zeta}(I).$$
Clearly, the producer will choose I = 0 if Px′(I) − ū′(I) ≤ 0, regardless of whether we allow coercion; this is a version of the standard result in human capital theory in which producers never provide general skills training. If, on the other hand, Px′(I) − ū′(I) > 0, then with the same arguments as in Section 3, it can be verified that (19) is supermodular in (I, a, g, −η). Now the comparison between producer investment under coercion and no coercion can be carried out by noting that (18) is equivalent to (19) as η → ∞. Supermodularity of (19) then immediately gives the following result (proof in the text).

PROPOSITION 6: Equilibrium investment by the producer under coercion, IC (the solution to (19)), is greater [smaller] than the no-coercion investment, IN (the solution to (18)), if Px′(I) − ū′(I) > 0 [if Px′(I) − ū′(I) < 0].

Proposition 6 has a similar interpretation to Proposition 5: investment incentives are determined by whether investment has a greater impact on productivity or the outside option. In contrast to the agent, the producer has greater incentives to invest when relationship-specific productivity increases more than the outside option. The general principle here is related to the result in Acemoglu and Pischke (1999) that employers will invest in general human capital when there is wage distortion, so that these investments increase worker productivity inside the relationship by more than their outside wage.
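Propositions 5 and 6 are easy to visualize numerically. In the sketch below, all functional forms (linear x(i) and ū(i), quadratic ζ(i), and the same c and χ as in the earlier sketches) are hypothetical choices of ours, selected so that Px′(i) − ū′(i) > 0; Proposition 5 then predicts iC < iN, which the computation confirms.

```python
import numpy as np

# Hypothetical forms: investment raises within-relationship productivity
# faster than the outside option, i.e., Px'(i) - u_bar'(i) > 0.
c1 = lambda a: a / (1.0 - a)
c2 = lambda a: 1.0 / (1.0 - a) ** 2
c  = lambda a: -a - np.log(1.0 - a)
P, eta = 1.0, 1.0
x_of = lambda i: 1.0 + 0.8 * i        # x(i)
u_of = lambda i: 0.3 + 0.2 * i        # u_bar(i)
zeta = lambda i: i ** 2               # agent's investment cost

def effort(x, u_bar, n=4000):
    """Interior a solving the producer's FOC, with g(i) = a(i)/eta as in (10)."""
    a = np.linspace(1e-6, 1 - 1e-6, n)
    foc = P * x - ((1 - a) * c1(a) + c(a) + u_bar - a / eta) - a * (1 - a) * c2(a)
    return a[np.argmin(np.abs(foc))]

grid = np.linspace(0.0, 1.0, 501)
u_no = np.array([u_of(i) - zeta(i) for i in grid])              # program (14)
u_co = np.array([u_of(i) - effort(x_of(i), u_of(i)) / eta - zeta(i)
                 for i in grid])                                # program (15)
iN, iC = grid[u_no.argmax()], grid[u_co.argmax()]
print(f"no-coercion investment iN = {iN:.3f}, coerced investment iC = {iC:.3f}")
```

Reversing the relative slopes, so that ū(i) rises faster than Px(i), flips the comparison, in line with the bracketed case of Proposition 5.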
5.2. Labor Scarcity and the Returns to Investment in Guns

In this subsection, we highlight another general equilibrium mechanism that links labor scarcity to coercion. The underlying idea is related to Acemoglu, Johnson, and Robinson's (2002) argument that labor coercion is more profitable when there is abundant labor to coerce (because when labor is scarce, coercion cannot exploit economies of scale). This channel is absent in our model so far, because each employer employs at most one worker and coercion decisions are made conditional on the match. An alternative timing of events is to assume that producers invest in guns before the matching stage. This minor change in timing introduces the above-mentioned economies of scale effect and implies that investment in guns will be less profitable when producers are relatively unlikely to match with workers, that is, when labor is scarce. To bring out this particular general equilibrium effect, we abstract from the labor demand and outside option effects by assuming that P(·) ≡ P₀ and ũ(·) ≡ ũ₀.

The only difference from our baseline analysis is that producers choose g before they learn whether they are matched with an agent. Thus they have a two-stage decision problem: they first choose g before matching and then, after matching, they propose a contract to the agent, which, as before, can be summarized by (a, g). Even though this is a two-stage decision problem, there is no loss of generality in formulating it mathematically as choosing a and g simultaneously, that is,

$$\max_{(a,g)\in[0,1]\times\mathbb{R}_+} L\left(aP_0x - a\left[(1-a)c'(a) + c(a) + \tilde{u}_0 - \frac{\gamma}{1-\gamma}G - g\right]_+ - (1-a)\left[-ac'(a) + c(a) + \tilde{u}_0 - \frac{\gamma}{1-\gamma}G - g\right]_+\right) - \eta\chi(g),$$

with the interpretation that a is the level of effort that will be chosen following a match with an agent (and we have again substituted out the incentive compatibility and participation constraints under Assumption 2). Rewriting this as

$$\max_{(a,g)\in[0,1]\times\mathbb{R}_+} aP_0x - a\left[(1-a)c'(a) + c(a) + \tilde{u}_0 - \frac{\gamma}{1-\gamma}G - g\right]_+ - (1-a)\left[-ac'(a) + c(a) + \tilde{u}_0 - \frac{\gamma}{1-\gamma}G - g\right]_+ - \frac{\eta}{L}\chi(g),$$

we see that changing the timing of the model by requiring producers to choose g before matching is formally identical to replacing the cost of guns η with η/L. This implies that the analysis of the set of equilibria is similar to that in Proposition 3 and yields the following proposition (proof omitted).

PROPOSITION 7: Consider the modified model presented in this subsection. Then an equilibrium exists and the set of equilibria is a lattice. Labor scarcity
reduces coercion, that is, a decline in L reduces the smallest and greatest equilibrium aggregates (Q, G). Moreover, the smallest and greatest equilibrium aggregates (Q, G) are increasing in P₀, γ, and x, and decreasing in ũ₀ and η.

This proposition thus formalizes another channel via which labor abundance can encourage coercion and thus complements Proposition 3 in Section 4. Naturally, if we relax the assumption that P(·) ≡ P₀ and ũ(·) = ũ₀, the implications of labor scarcity for coercion will be determined by competing effects as in Proposition 4.

5.3. Interim Participation Constraints

We have so far assumed that the agent cannot run away once she accepts a contract offer. In practice, coerced agents may attempt to run away not only ex ante, but also after the realization of output. This would amount to one more individual rationality (IR) or participation constraint for the agent, which we refer to as the interim participation constraint. The presence of such an interim participation constraint introduces a potential "useful" role of coercion. Intuitively, the agent may prefer ex ante to commit to not running away after output is realized; this is similar to the logic that operates in models such as Chwe's, where coercion does not affect the ex ante participation constraint but does enable punishment. Interestingly, however, we will see that under fairly weak assumptions, this effect is dominated by the negative impact of coercion on welfare, and comparative statics remain unchanged.

Formally, in this subsection we again focus on partial equilibrium and assume that the agent will run away after the realization of y if her (equilibrium-path) continuation utility is below ū − Φ(g), where ū is the outside option of the agent introduced above and Φ(g) ≥ 0 is an increasing function of guns; the interpretation of this is that the producer can inflict punishment Φ(g) if the agent runs away after the realization of y, just as she can inflict punishment g if the agent runs away before the realization of y.²⁶ This introduces an "interim IR constraint" in addition to the ex ante IR constraint, (IR₀), in Section 2. This implies that both wh − ph and wl − pl must be greater than ū − Φ(g), although naturally only the constraint wl − pl ≥ ū − Φ(g) can be binding in an equilibrium contract. Therefore, the considerations discussed in this subsection introduce the additional constraint
(IIR) $$w^l - p^l \geq \bar{u} - \Phi(g).$$
Now, by an argument similar to that in Section 3, an equilibrium contract is a solution to the maximization of (1) subject to (IC₀), (IR₀), and now (IIR).

²⁶For example, g may be the pain that the producer can inflict on a worker if she runs away on the first day on the job, while Φ(g) may be the pain that the producer can inflict on the worker once she has set up a home on the producer's plantation.
Suppose first that Φ(g) is sufficiently large for all g. It is then clear that (IIR) will always be slack and the producer's problem is identical to that in Section 3, and Proposition 2 applies. Thus, we assume in this subsection that Φ(g) is "not too large" and, in particular, assume that Φ(g) ≤ g, which implies that (IIR) is binding and in effect replaces (IR₀):

LEMMA 1: Consider the model with the interim participation constraint and suppose that Φ(g) ≤ g. Then (IR₀) is slack and (IIR) binds.

PROOF: The first-order approach still applies, so we can replace (IC₀) with (IC₁). By (IIR), wl − pl ≥ ū − Φ(g). Substituting this into (IC₁) gives wh − ph ≥ c′(a) + ū − Φ(g). Therefore,

$$a(w^h - p^h) + (1-a)(w^l - p^l) - c(a) \geq \bar{u} - \Phi(g) + ac'(a) - c(a) \geq \bar{u} - \Phi(g) \geq \bar{u} - g,$$

where the second inequality follows by convexity of c(a) and the fact that c(0) = 0, and the third inequality follows by the assumption that Φ(g) ≤ g. This chain of inequalities implies (IR₀). Given that (IR₀) is slack, the fact that increasing pl relaxes (IC₁) implies that (IIR) must bind. Q.E.D.

Provided that Φ(g) ≤ g, Lemma 1 then allows us to substitute (IC₁) into (IIR), which implies that equilibrium contracts are characterized by
(20) $$\max_{(a,g)\in[0,1]\times\mathbb{R}_+} aPx - a[c'(a) + \bar{u} - \Phi(g)]_+ - (1-a)[\bar{u} - \Phi(g)]_+ - \eta\chi(g).$$
Equation (20) is supermodular in (a, g, x, P, −ū, −η), so comparative statics go in the same direction as in our baseline model, although they may hold with weak rather than strict inequalities:

PROPOSITION 8: If Φ(g) ≤ g for all g, then (a∗, g∗) are nondecreasing in x and P, and nonincreasing in ū and η.

The intuition for this is that regardless of whether (IR₀) or (IIR) is binding, increasing g reduces the amount that the producer must pay the agent after high output is realized, leading to complementarity between effort and coercion as in Section 3.

While the possibility that (IIR) rather than (IR₀) may be binding does not affect our comparative static results, it does suggest that coercion may play a socially useful role. In particular, ex post punishments may be useful in providing incentives to an agent who is subject to limited liability, and (IIR) limits the use of such punishments; one may then conjecture that increasing g may increase social welfare if it relaxes (IIR). We next show that this conjecture is not correct. To see this, note that if (IIR) is binding and coercion is not allowed
¯ the producer’s problem becomes (i.e., η = ∞), then u¯ − Φ(g) = u¯ − Φ(0) = u, ¯ + , and (utilitarian) social welfare is maxa∈[01] aPx − ac (a) − au¯ − (1 − a)[u] ¯ + + u ¯ SWN = max aPx − ac (a) − au¯ − (1 − a)[u] a∈[01]
while with coercion the producer’s problem is given by (20) and social welfare corresponding to an equilibrium contract involving (a g) is SWC = aPx − ac (a) − au¯ + aΦ(g) − (1 − a)[u¯ − Φ(g)]+ − ηχ(g) + u¯ − g If Φ(g) ≤ g for all g, then it is clear that SWN ≥ SWC with strict inequality if g > 0. Thus, coercion reduces social welfare if Φ(g) ≤ g. Formally, we have the following result (proof in text). COROLLARY 7: Suppose that Φ(g) ≤ g for all g and let SWC be social welfare corresponding to an equilibrium contract with coercion. Then SWN ≥ SWC with strict inequality if g∗ > 0. The intuition for Corollary 7 follows by comparing (7) to (20). Both (7) and (20) are supermodular in (a g) because of the term ag in (7) and the term aΦ(g) in (20). Our result in Section 3 that coercion reduces social welfare exploits the fact that coercion enters into worker welfare through the term −g, which is always larger in absolute value than ag. If Φ(g) ≤ g, then g is greater than aΦ(g), so the same argument implies that coercion reduces social welfare. It is worth noting that the analysis here is also informative about the case where ex post punishments are costly. In particular, the model in this subsection corresponds to the case where the producer requires at least g guns to ¯ Thus, it also shows that our comparative statics inflict punishment Φ(g) − u. and welfare results continue to hold in this case.27 27
²⁷An alternative, perhaps more direct way to model costly punishments is to assume that inflicting punishment (utility) ul < 0 costs the producer ξ(ul), where ξ ≥ 0 and ξ′ ≥ 0. In this case, (8) becomes

$$Pxa - a((1-a)c'(a) + c(a) + \bar{u} - g) - (1-a)\xi(ac'(a) - c(a) - \bar{u} + g) - \eta\chi(g),$$

which is still supermodular in (a, g, x, P, −ū, −η) provided that ξ is concave. Hence, our baseline comparative statics hold in this alternative model if there are decreasing marginal costs of punishment.
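The welfare comparison in Corollary 7 can also be checked numerically. The sketch below uses Φ(g) = g/2 ≤ g and otherwise the same illustrative functional forms as the earlier sketches (again, assumptions of our own, not the paper's): it solves the producer's problem (20) by grid search, evaluates SW^C at the resulting contract, and compares it with SW^N.

```python
import numpy as np

c1  = lambda a: a / (1.0 - a)
c   = lambda a: -a - np.log(1.0 - a)
chi = lambda g: 0.5 * g ** 2
Phi = lambda g: 0.5 * g                    # interim punishment, Phi(g) <= g
P, x, u_bar, eta = 1.5, 1.0, 0.3, 1.0

a = np.linspace(1e-6, 1 - 1e-6, 600)[:, None]
g = np.linspace(0.0, 3.0, 600)[None, :]

# Producer's problem (20) under the interim participation constraint.
profit = (a * P * x - a * np.maximum(c1(a) + u_bar - Phi(g), 0.0)
          - (1 - a) * np.maximum(u_bar - Phi(g), 0.0) - eta * chi(g))
i, j = np.unravel_index(profit.argmax(), profit.shape)
aC, gC = a[i, 0], g[0, j]
SWC = (aC * P * x - aC * c1(aC) - aC * u_bar + aC * Phi(gC)
       - (1 - aC) * max(u_bar - Phi(gC), 0.0) - eta * chi(gC) + u_bar - gC)

# No-coercion benchmark (eta = infinity, hence g = 0).
aa = a[:, 0]
prof_N = aa * P * x - aa * c1(aa) - aa * u_bar - (1 - aa) * max(u_bar, 0.0)
aN = aa[prof_N.argmax()]
SWN = aN * P * x - aN * c1(aN) - aN * u_bar - (1 - aN) * max(u_bar, 0.0) + u_bar
print(f"SWN = {SWN:.3f} >= SWC = {SWC:.3f}  (g* = {gC:.3f} > 0)")
```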
5.4. Trade in Slaves

In this subsection, we briefly investigate the effects of trade in slaves, whereby agents subject to coercion are sold from one producer to another. We also investigate the related issue of comparative statics in the presence of fixed costs of coercion, for example, in the presence of a fixed cost (price) of obtaining a worker to coerce or of violating legal or ethical proscriptions against coercion.

To analyze trading in slaves, we assume, as in Appendix A, that producers differ in their productivities, which are drawn independently from a distribution F(x). Let us also assume that, initially, matched producers are a random selection from the population of producers and thus their productivity distribution is given by F(x). Since L < 1, some producers are left unmatched. Now suppose that, before the investment in guns, agents can be bought and sold among producers. Since more productive (higher x) producers have a greater willingness to pay for coerced agents, this trading will ensure that all of the agents will be employed by the most productive L producers. Consequently, the distribution of productivity among matched producers will be F̃(x) = 0 for all x < x₁₋L and F̃(x) = (F(x) − (1 − L))/L for all x ≥ x₁₋L, where x₁₋L is the productivity of the producer at the (1 − L)th percentile of the productivity distribution.

This implies that trade in slaves is equivalent to a first-order stochastically dominating shift in the distribution of productivity in the context of our model. We know from Proposition 2 that in partial equilibrium (i.e., with P and ū fixed), this increases effort and coercion, reducing worker welfare. There is a potential offsetting effect that results from the fact that the reallocation of agents across producers increases productivity and aggregate output, Q, and thus decreases price P(QL). In this subsection, we abstract from this effect by assuming that P(QL) = P₀. Under this assumption, we show that trade in slaves increases the amount of coercion and reduces agent welfare, and that it may also reduce (utilitarian) social welfare despite the fact that agents are now allocated to the most productive producers.²⁸

²⁸Note that our analysis of trade in slaves ignores the possibility of bringing additional slaves into the market, as the Atlantic slave trade did until its abolition in 1807. We show that slave trade reduces worker welfare even if one ignores its effect on the number of coerced workers (which is outside our model).

PROPOSITION 9: Assume that P(QL) = P₀ for all QL. Introducing slave trade in the baseline model increases coercion (G) and reduces agent welfare. More formally, the smallest and greatest equilibrium levels of coercion (resp., average agent welfare) under slave trade are greater (resp., smaller) than the smallest and greatest equilibrium levels of coercion (resp., average agent welfare) under no slave trade. In addition, social welfare may decline under slave trade.
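A small numerical illustration of this reallocation (with F uniform on [1, 2], an assumption made purely for concreteness) exhibits the first-order stochastic dominance shift from F to F̃ directly:

```python
import numpy as np

L = 0.4                                                  # measure of workers
F       = lambda x: np.clip(x - 1.0, 0.0, 1.0)           # uniform cdf on [1, 2]
x_cut   = 2.0 - L                                        # x_{1-L} for this F
F_tilde = lambda x: np.where(x < x_cut, 0.0, (F(x) - (1.0 - L)) / L)

# F_tilde(x) <= F(x) everywhere: after trade, matched producers are drawn
# from the top-L tail, so coercion rises and agent welfare falls (Prop. 2).
for x in (1.2, 1.6, 1.8, 2.0):
    print(f"x = {x:.1f}: F(x) = {F(x):.2f}, F_tilde(x) = {F_tilde(x):.2f}")
```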
The proof is omitted. Most of the proof follows from our analysis of comparative statics with heterogeneous producers provided in Proposition 14 in Appendix A. The result that social welfare may decline under coercion follows from a parametric example, which is contained in the working paper version (available upon request).

To see the intuition for why trade in slaves (or, equivalently, an exogenous first-order stochastic dominance increase in the distribution of productivities, F(·)) may reduce social welfare, note that increasing productivity x has two offsetting effects on welfare. First, fixing a, increasing x by Δ increases expected output by aΔ (a first-order positive effect). Second, increasing x increases a and thus g. The increase in g has a first-order negative effect on worker welfare, but no first-order effect on producer welfare by the envelope theorem. Thus, it is straightforward to construct an example in which increasing productivity reduces social welfare by choosing parameters that induce a low choice of a.

We can also incorporate the price paid for a coerced agent or other fixed costs of coercion in a straightforward manner. In particular, we simply need to change the cost of coercion to ηχ(g) + κ1{g > 0}, where κ > 0 is a constant and 1{g > 0} is the indicator function for the event g > 0. This introduces a natural nonconcavity into the producer's optimization problem, which leads the producer either to set a = aN and g = 0 (no coercion) or to set a = aC and g = (χ′)⁻¹(aC/η) (coercion), paying the fixed cost of coercion and then setting the level of coercion according to equation (10). As we have seen, aN = arg maxₐ Pxa − a(1 − a)c′(a) − ac(a) − aū. Comparing this with (8) and observing that aC > aN implies the following result.²⁹

PROPOSITION 10: Coercion is more likely when P or x is higher, or when ū, η, or κ is lower.

²⁹In this result, "coercion is more likely when z is higher" means that there is a larger set (in the set-inclusion sense) of parameters other than z for which coercion is optimal when z is higher.

Proposition 10 shows that comparative statics of the decision whether to use coercion are identical to comparative statics on the level of coercion.

6. CONCLUDING REMARKS

Standard economic models assume that transactions in the labor market are "free." For most of human history, however, the bulk of labor transactions have been coercive, in the sense that the threat of force was essential in convincing workers to take part in the employment relationship and accept the terms of employment. The small existing literature on slavery and forced labor does not model what we view as the fundamental effect of coercion on labor transactions—coercion makes workers accept employment relations (terms of employment) that they would otherwise reject. This paper provides a tractable
model that incorporates this feature and uses it to provide a range of comparative statics results useful for understanding when coercive labor relations are more likely to arise and what their welfare implications are.

At the heart of our model is the principal–agent relationship between a potentially coercive producer and an agent. Coercion, modeled as investment by the producer in guns, affects the participation constraint of the agent. We first analyzed this principal–agent problem and derived partial equilibrium comparative statics, and then embedded it in a general equilibrium framework to study the relationship between labor scarcity/abundance and labor coercion. Both our partial and general equilibrium analyses rely on the complementarity between effort and coercion. This complementarity (supermodularity) is not only mathematically convenient, but also economically central: greater effort implies that the principal will have to reward the agent more frequently because success is more likely. But this also implies that greater coercion, which reduces these rewards, becomes more valuable to the principal. As a consequence, agents with higher marginal productivity in the coercive sector will be coerced more (the labor demand effect) and agents with better outside options will be coerced less (the outside option effect). We show that, consistent with Fogel and Engerman (1974), coercion increases effort. However, our formulation also implies that, in contrast to Fogel and Engerman's interpretation, this does not imply that coercion is or may be efficient. On the contrary, coercion always reduces (utilitarian) social welfare, because the structure of the principal–agent model dictates that coercion hurts the agent more than the additional effort it induces helps the principal. Our model also shows that coercion changes both workers' and producers' incentives to make ex ante investments in their relationships, and points out a new channel via which trading coerced workers makes them worse off.

A major question in the economics of coercion, both from a historical perspective and for understanding the continued prevalence of forced labor today, is the effect of labor scarcity/abundance on coercion. Domar (1970) argues that labor scarcity encourages coercion by increasing the cost of hiring workers in the market. The neo-Malthusians, on the other hand, link the decline of feudalism and serfdom to falling population in Western Europe starting in the second half of the 15th century, which led to new opportunities for free agriculture. Relatedly, Acemoglu, Johnson, and Robinson (2002) suggest that Europeans were more likely to set up coercive institutions in colonies with abundant labor, because setting up such institutions was not profitable when labor was scarce. Our general equilibrium analysis shows why these diverse perspectives are not contradictory. Labor scarcity creates a labor demand effect: it increases the marginal product of workers in the coercive sector, and thus encourages employers to use greater coercion and extract higher effort from their workers. It also creates an outside option effect: it increases the outside option of the workers in the noncoercive sector, and reduces coercion because employers demand lower effort and use less coercion when workers have greater outside
options. Finally, it creates an economies of scale effect: it makes it more likely that up-front investments in coercive instruments will go to waste due to lack of labor. Whether the labor demand effect or the outside option effect dominates simply depends on whether the population change has a larger direct effect on the market price or the workers' outside options.

We view this paper as a first step toward a systematic analysis of coercion in the labor market and its implications for the organization of production and economic development. Despite the historical and current importance of forced labor and other coercive relations, many central questions in the economics of coercion have not previously been answered or even posed. Our framework provides a first set of answers to these questions and can serve as a starting point for different directions of study. Theoretical and empirical investigations of the dynamics of coercion, of why coercive relationships persist in many developing countries even today, of the effects of coercion on technology choices and organizational decisions, and of how coercive production impacts trade are important areas for future research. A particularly fruitful area of future research is a more in-depth analysis of the politics of coercion. We presumed the presence of an institutional environment that permitted coercion by producers. In many instances, coercion comes to an end or is significantly curtailed when political forces induce a change in the institutional environment. Combining our microeconomic model of coercion with a model of endogenous institutions would be one way to make progress in this direction.

APPENDIX A: GENERAL CASE

Equilibrium Contracts and Partial Equilibrium Comparative Statics

In this appendix, we generalize our model by allowing for heterogeneity among producers and relaxing the assumption that the program characterizing equilibrium contracts is concave. Formally, we now drop Assumption 1 and assume that each producer's productivity, x, is independently drawn from a distribution F(x) with support [x̲, x̄] with x̲ > 0. We present most proofs in this general model.

Without Assumption 1, the program characterizing equilibrium contracts may have multiple solutions. Nonetheless, Proposition 1 applies with the sole modification that the condition Px > ū + c′(0) is replaced by Px̲ > ū + c′(0). We therefore present the general proof here.

PROOF OF PROPOSITION 1: First, observe that it is suboptimal to set both wh > 0 and ph > 0, or wl > 0 and pl > 0, as reducing both wh (wl) and ph (pl) by ε > 0 would strictly increase profits without affecting (IR₀) or (IC₀), so wh = [uh]₊ and wl = [ul]₊. With two possible outcomes, the first-order approach is valid, so (IC₀) can be rewritten as (IC₁), which also exploits the fact that
the first-order condition must hold as equality, because lim_{a→1} c(a) = ∞ and, therefore, a∗ < 1. Then the producer's problem can be written as

(A-1) $$\max_{(a,g,u^h,u^l)\in[0,1]\times\mathbb{R}_+\times\mathbb{R}^2} a(Px - [u^h]_+) - (1-a)[u^l]_+ - \eta\chi(g)$$
subject to

(IR₁) $$au^h + (1-a)u^l - c(a) \geq \bar{u} - g$$
and

(IC₁) $$u^h - u^l = c'(a).$$
Substituting (IC₁) into (IR₁) yields

(IR₂) $$u^h - (1-a)c'(a) - c(a) \geq \bar{u} - g.$$
Equation (IR₂) must bind, as otherwise decreasing uh or increasing a would increase profit. Using (IC₁) and (IR₂) to substitute uh and ul out of (A-1) now yields (7). It remains only to show that g∗ > 0, ul ≤ 0, and uh ≥ 0, and finally that a∗ > 0 in any equilibrium contract (that wh = (1 − a)c′(a) + c(a) + ū − g > 0 and pl = −ac′(a) + c(a) + ū − g ≥ 0 then follows immediately from (IR₂) and (IC₁)). First, the result that g∗ > 0 in any equilibrium contract with a∗ > 0 follows from (7), since it must be that χ′(g∗) ∈ [a∗/η, 1/η] in any equilibrium contract. The result that ul ≤ 0 and uh ≥ 0 is established in the next lemma.

LEMMA 2: In any equilibrium contract with a > 0, we have ul ≤ 0 and uh ≥ 0.

PROOF: Note first that the Lagrangian for (A-1) subject to (IR₂) and (IC₁) is

(A-2)
$$a(Px - [u^h]_+) - (1-a)[u^l]_+ - \eta\chi(g) + \lambda(au^h + (1-a)u^l - c(a) - (\bar{u} - g)) + \mu(u^h - u^l - c'(a)).$$
To obtain a contradiction, now suppose that ul > 0. Then [ul]₊ = ul and is differentiable. Moreover, in this case uh > 0, as uh > ul (since a > 0), and thus [uh]₊ = uh is also differentiable. Clearly, d[uh]₊/duh = d[ul]₊/dul = 1. Then differentiating (A-2) with respect to uh and ul, we obtain

(FOCuh) $$1 = \lambda + \frac{\mu}{a}$$
and

(FOCul) $$1 = \lambda - \frac{\mu}{1-a}.$$
These first-order conditions always hold, as setting ul or uh to ∞ or −∞ cannot be optimal if a ∈ (0, 1). Combining (FOCuh) and (FOCul) then implies that μ = 0. Now differentiating (A-2) with respect to a and using (IC₁) yields

(FOCa) $$Px - ([u^h]_+ - [u^l]_+) = \mu c''(a).$$

Equation (FOCa) holds with equality by our assumptions that a > 0 and that lim_{a→1} c(a) = ∞. The fact that μ = 0 then implies that [uh]₊ − [ul]₊ = uh − ul = Px. Since [uh]₊ − [ul]₊ = uh − ul = Px and μ = 0, the Lagrangian (A-2) becomes

$$-u^l - \eta\chi(g) + \lambda(u^l + Px - c(a) - (\bar{u} - g)),$$

which is maximized at a = 0, contradicting our assumption that a > 0. This completes the proof of the claim that ul ≤ 0.

Finally, to show uh ≥ 0, suppose, to obtain a contradiction, that uh < 0. Then (A-2) is differentiable with respect to uh at the optimum, and its derivative with respect to uh is aλ + μ. Since a > 0, this can equal 0 only if λ = 0 and μ = 0. But then the maximum of (A-2) over a ∈ [0, 1] is attained at a = 1, which violates (IC₁) by our assumption that lim_{a→1} c(a) = ∞. Q.E.D.

It remains only to check that a solution to the producer's problem with a∗ > 0 exists if Px > ū + c′(0). Consider the producer's problem of first choosing a and then maximizing profit given a. The producer's optimal profit given fixed a is continuous in a. Therefore, it is sufficient to show that the producer's optimal profit given a is increasing in a for all sufficiently small a > 0. The producer's problem, given a > 0, is to maximize (7) over g ∈ R₊:
(A-3) $$\max_{g\in\mathbb{R}_+} a\big(Px - [(1-a)c'(a) + c(a) + \bar{u} - g]_+\big) - (1-a)[-ac'(a) + c(a) + \bar{u} - g]_+ - \eta\chi(g).$$

The right derivative of (A-3) with respect to a is

(A-4) $$Px - ([u^h]_+ - [u^l]_+) - \left(\frac{d[u^h]_+}{du^h} - \frac{d[u^l]_+}{du^l}\right)a(1-a)c''(a),$$

where d[uh]₊/duh and d[ul]₊/dul denote right derivatives. We have shown above that uh ≥ 0 and uh > ul at an optimum with a > 0, and d[uh]₊/duh − d[ul]₊/dul ≤ 1, so (A-4) is no less than Px − uh − a(1 − a)c″(a). By (IR₂),
as a converges to 0, uh converges to at most ū + c′(0). Therefore, provided that Px > ū + c′(0), (A-4), and thus the derivative of (A-3) with respect to a, is strictly positive for sufficiently small a. This establishes that a∗ > 0 and completes the proof. Q.E.D.

We now state and prove the generalization of Proposition 2, which allows for multiple equilibrium contracts:

PROPOSITION 11: The set of equilibrium contracts for a producer of type x forms a lattice, with smallest and greatest equilibrium contracts (a⁻(x), g⁻(x)) and (a⁺(x), g⁺(x)). Moreover, (a⁻(x), g⁻(x)) and (a⁺(x), g⁺(x)) are increasing in x and P and decreasing in ū and η.

PROOF: First note that the maximization problem (7) is weakly supermodular in (a, g, x, P, −ū, −η). This follows because c′(a) > 0 and, therefore, the first bracketed term in (7) ((1 − a)c′(a) + c(a) + ū − g) is greater than the second (−ac′(a) + c(a) + ū − g). Hence, (7) equals either Pxa − c(a) − ū + g − ηχ(g) (if both bracketed terms are positive), Pxa − a((1 − a)c′(a) + c(a) + ū − g) − ηχ(g) (if only the first term is positive), or Pxa − ηχ(g) (if neither term is positive). In each of these cases, (7) is weakly supermodular in (a, g, x, P, −ū, −η), so the fact that (7) is continuous in (a, g, x, P, −ū, −η) implies that it is weakly supermodular in (a, g, x, P, −ū, −η).

The supermodularity of (7) in (a, g, x, P, −ū, −η) implies that the set of equilibrium contracts for a producer of type x forms a lattice, and that the smallest and greatest equilibrium contracts, (a⁻(x), g⁻(x)) and (a⁺(x), g⁺(x)), are nondecreasing in x and P and nonincreasing in ū and η (see, e.g., Theorems 2.7.1 and 2.8.1 in Topkis (1998)). In the rest of the proof, we show that these comparative statics results are strict in the sense that whenever we have a change from (x, P, ū, η) to (x′, P′, ū′, η′), where x′ ≥ x, P′ ≥ P, ū′ ≤ ū, and η′ ≤ η, with at least one strict inequality, we have "increasing" instead of "nondecreasing" and "decreasing" instead of "nonincreasing." In the process of doing this, we will also establish the genericity result claimed in Remark 3. These results are stated in the following lemma, the proof of which completes the proof of the proposition. Q.E.D.

LEMMA 3: Let a(x, P, ū, η) and g(x, P, ū, η) denote the smallest [or greatest] solution to the maximization problem (7). Then a(x′, P′, ū′, η′) > a(x, P, ū, η) and g(x′, P′, ū′, η′) > g(x, P, ū, η) for any (x′, P′, ū′, η′), where x′ ≥ x, P′ ≥ P, ū′ ≤ ū, and η′ ≤ η, with at least one strict inequality. Moreover, let ul(x, P, ū, η) be a solution at the parameter vector (x, P, ū, η). If ul(x, P, ū, η) ≥ 0, then ul(x′, P′, ū′, η′) < 0.

PROOF: For brevity, we will prove this lemma for a change in x, holding P, ū, and η constant. The argument for the other cases is analogous.
Consider a change from x to x′ > x. The first part of Proposition 2, which has already been established, implies that a(x′, P, ū, η) ≥ a(x, P, ū, η) and g(x′, P, ū, η) ≥ g(x, P, ū, η), which we shorten to a(x′) ≥ a(x) and g(x′) ≥ g(x). First suppose that ul(x) < 0. Then (7) can be written as
(A-5) $$(a^*, g^*) \in \arg\max_{(a,g)\in[0,1]\times\mathbb{R}_+} Pxa - a[(1-a)c'(a) + c(a) + \bar{u} - g]_+ - \eta\chi(g).$$

Suppose, to obtain a contradiction, that uh = (1 − a)c′(a) + c(a) + ū − g ≤ 0. Then (A-5) implies a = 1. Since lim_{a→1} c(a) = ∞, uh ≤ 0 and a = 1 violate (IR₁). Therefore, we have (1 − a)c′(a) + c(a) + ū − g > 0, and (A-5) is strictly supermodular in (a, g, x) (or more generally in (a, g, x, P, −ū, −η)) in the neighborhood of x and is also differentiable in (a, g). Then since a > 0 and a < 1 (again since lim_{a→1} c(a) = ∞), a satisfies the first-order necessary condition
(A-6) $$Px - ((1-a)c'(a) + c(a) + \bar{u} - g) - a(1-a)c''(a) = 0.$$
Moreover, since (A-5) is differentiable, g satisfies (10) and thus g > 0. Next, note that an increase in x strictly raises the left-hand side of (A-6), and thus a and g cannot both remain constant following an increase in x. Since they cannot decline, it must be the case that an increase in x strictly increases the smallest and the greatest values of both a and g.

We have thus established that any change from x to x′ > x will give weak comparative statics results only if ul(x̃) = 0 for all x̃ ∈ [x, x′] (from Lemma 2, ul(x̃) > 0 is not possible). Suppose, to obtain a contradiction, that ul(x̃) = 0 for all x̃ ∈ [x, x′]. Then (IC₁) and (IR₂) imply that
(A-7) $$u^l = -a(\tilde{x})c'(a(\tilde{x})) + c(a(\tilde{x})) + \bar{u} - g(\tilde{x}) = 0.$$
Consider variations in a and g along (A-7) (i.e., holding ul = 0). Since a > 0 and c is differentiable, this implies dg/da = −ac″(a) < 0. Now a(x′) ≥ a(x), g(x′) ≥ g(x), and (A-7) holds for x̃ ∈ {x, x′}, so dg/da < 0 implies that a(x) = a(x′) and g(x) = g(x′). Next, since a(x) and g(x) are optimal, any such variation along (A-7) should not increase the value of (7). At x̃ = x, this is only possible if
(A-8) $$Px - a(x)(1-a(x))c''(a(x)) - (1-a(x))c'(a(x)) - c(a(x)) - \bar{u} + g(x) + \eta\chi'(g(x))a(x)c''(a(x)) = 0,$$
where we have used the fact that uh > 0 wherever ul = 0, by (IC₁) and the fact that a > 0. Repeating this argument for x̃ = x′ yields
(A-9) $$Px' - a(x')(1-a(x'))c''(a(x')) - (1-a(x'))c'(a(x')) - c(a(x')) - \bar{u} + g(x') + \eta\chi'(g(x'))a(x')c''(a(x')) = 0.$$
However, in view of the fact that x′ > x, (A-9) cannot be true at the same time as (A-8), yielding a contradiction. We conclude that (i) all comparative statics are strict (even if ul(x) = 0, we must have ul(x′) < 0, and thus a change from x to x′ will strictly increase a and g) and (ii) for any pair of x′ > x, if ul(x) = 0, we must have ul(x′) < 0, and thus ul = 0 is only possible at one level of x, that is, at the lowest level x = x̲. Q.E.D.

It can also be shown that modified versions of Corollaries 1–6 for smallest and greatest equilibrium contracts also hold in this general case. We omit the details to save space.

Existence of General Equilibrium

To establish existence of an equilibrium, we now allow all producers to use "mixed strategies" (so that we are looking for a mixed strategy equilibrium or Cournot–Nash equilibrium distribution in the language of Mas-Colell). We will then use Theorem 1 from Mas-Colell (1984). To do this in the simplest possible way, we rewrite producer profits slightly: Since lim_{g→∞} χ(g) = ∞ and a ≤ 1, there exists a positive number ḡ such that setting g > ḡ is dominated for any producer. We now specify that a producer with productivity x chooses (q, g) ∈ [0, x̄] × [0, ḡ] to maximize

(A-10) $$\pi(x) \equiv qP - \frac{q}{x}\left[\left(1-\frac{q}{x}\right)c'\!\left(\frac{q}{x}\right) + c\!\left(\frac{q}{x}\right) + \bar{u} - g\right]_+ - \left(1-\frac{q}{x}\right)\left[-\frac{q}{x}c'\!\left(\frac{q}{x}\right) + c\!\left(\frac{q}{x}\right) + \bar{u} - g\right]_+ - \eta\chi(g),$$

where we have rewritten a as q/x. By our assumption that lim_{a→1} c(a) = ∞, any solution to the problem of a producer with productivity x satisfies q < x. It is also clear that (A-10) is continuous in (q, g). This reformulation gives each producer the same action set, and our assumption that lim_{a→1} c(a) = ∞ ensures that every equilibrium contract in the reformulated game is feasible in the original game. Formally, a (mixed) strategy profile is a measure τ over [x̲, x̄] × ([0, x̄] × [0, ḡ]) such that the marginals τₓ and τ₍q,g₎ of τ on [x̲, x̄] and [0, x̄] × [0, ḡ] satisfy (i) τₓ = F and (ii) τ({x, q, g} : q ≤ x) = 1.

Given a strategy profile τ, the corresponding aggregates are

(A-11) $$Q(\tau) \equiv \int_0^{\bar{x}} q\,\tau_q(q)\,dq$$
and

(A-12) $$G(\tau) \equiv \int_0^{\bar{g}} g\,\tau_g(g)\,dg,$$
where τ_q and τ_g are the marginal densities over [0, x̄] and [0, ḡ], respectively. P and ū are defined as functions of Q(τ) and G(τ) by equations (3) and (6), as in the text. Let π(x, q, g, Q, G) denote the payoff to a producer with productivity x who chooses (q, g) when facing aggregates Q and G.

DEFINITION 2: An equilibrium is a (mixed) strategy profile τ that satisfies

$$\tau\big(\{x, q, g\} : \pi(x, q, g, Q, G) \geq \pi(x, \tilde{q}, \tilde{g}, Q, G) \text{ for all } (\tilde{q}, \tilde{g}) \in [0, \bar{x}] \times [0, \bar{g}]\big) = 1,$$

where Q and G are given by (A-11) and (A-12) evaluated at τ.

PROPOSITION 12: An equilibrium exists.

PROOF: The proof follows closely the proof of Mas-Colell's Theorem 1, with the addition that each distribution consistent with producers' playing best responses satisfies point (ii) above (since q < x in any equilibrium contract of a producer with productivity x, by our assumption that lim_{a→1} c(a) = ∞), so any fixed point must satisfy point (ii) as well. Q.E.D.

While Proposition 12 establishes the existence of an equilibrium in mixed strategies, our analysis in the next subsection will also show that in some special cases (for example, when P(·) = P₀ or when γ = 0), we can establish that the equilibrium is in pure strategies (as was the case in the text).

General Equilibrium Comparative Statics

As discussed in Section 4, the set of equilibria may not form a lattice, which makes it impossible to consider comparative statics on extremal equilibria in (Q, G). Instead, we show that the comparative statics of Proposition 3 apply separately to the extremal equilibrium values of Q and G. Our approach is based on the analysis of the function
$$\hat{\phi}(Q, \gamma, L, \eta) = \Big\{Q' : \text{there exists a density } \tau \text{ on } [\underline{x}, \bar{x}] \times ([0, \bar{x}] \times [0, \bar{g}]) \text{ such that}$$
$$\tau\big(\{x, q, g\} : \pi(x, q, g, Q, G(\tau)) \geq \pi(x, \tilde{q}, \tilde{g}, Q, G(\tau)) \text{ for all } (\tilde{q}, \tilde{g})\big) = 1 \text{ and } Q' = Q(\tau)\Big\}.$$

φ̂ maps Q and parameter values to those Q that are mixed-strategy equilibrium values of output in the modified game where price is fixed at P(QL). It is clear that the set of fixed points of φ̂ equals the set of mixed-strategy equilibrium values of Q. We first establish comparative statics on the extremal
elements of φ̂(Q, γ, L, η) with respect to the parameters and Q, and then use these results to establish comparative statics on the extremal fixed points of φ̂(Q, γ, L, η) in a standard way.

Toward establishing comparative statics on the extremal elements of φ̂(Q, γ, L, η), consider the modified game where price is fixed at P(QL) = P₀ for all QL. Let (q⁻, g⁻)(P, ū, x, η) and (q⁺, g⁺)(P, ū, x, η) denote the smallest and greatest equilibrium contract levels of (q, g) given price, P, outside option, ū, productivity, x, and cost of coercion, η. Let
(A-13) $$\tilde{\phi}(G, P_0, x, \gamma, L, \eta) = \left[\int_{\underline{x}}^{\bar{x}} g^-\!\left(P_0, \tilde{u}(L) - \frac{\gamma}{1-\gamma}G, x, \eta\right)dF(x),\ \int_{\underline{x}}^{\bar{x}} g^+\!\left(P_0, \tilde{u}(L) - \frac{\gamma}{1-\gamma}G, x, \eta\right)dF(x)\right].$$
We write φ̃(G) for φ̃(G, P₀, ũ, γ, η) and g⁻(x, G) (g⁺(x, G)) for g⁻(P₀, ũ(L) − (γ/(1−γ))G, x, η) (g⁺(P₀, ũ(L) − (γ/(1−γ))G, x, η)) when the parameters are understood. It is clear that if G is an equilibrium aggregate level of coercion (in the modified game), then G is a fixed point of φ̃(G). The converse is also true, because if G ∈ φ̃(G), then, by the intermediate value theorem, there exists x* ∈ [x̲, x̄] such that

$$G = \int_{\underline{x}}^{x^*} g^-\!\left(P_0, \tilde{u}(L) - \frac{\gamma}{1-\gamma}G, x, \eta\right)dF(x) + \int_{x^*}^{\bar{x}} g^+\!\left(P_0, \tilde{u}(L) - \frac{\gamma}{1-\gamma}G, x, \eta\right)dF(x),$$

and the strategy profile in which producers of type x ≤ x* choose (q⁻, g⁻)(P₀, ũ(L) − (γ/(1−γ))G, x, η) and producers of type x > x* choose (q⁺, g⁺)(P₀, ũ(L) − (γ/(1−γ))G, x, η) is an equilibrium. Thus, the fixed points of φ̃(G) are exactly the equilibrium values of G.

The following lemma shows that if G₀ is the smallest (greatest) fixed point of φ̃(G), then G₀ is the smallest (greatest) element of the set φ̃(G₀).

LEMMA 4: If G⁻ is the smallest fixed point of φ̃(G), then $G^- = \int_{\underline{x}}^{\bar{x}} g^-(x, G^-)\,dF(x)$. If G⁺ is the greatest fixed point of φ̃(G), then $G^+ = \int_{\underline{x}}^{\bar{x}} g^+(x, G^+)\,dF(x)$.
PROOF: Suppose G⁻ is the smallest fixed point of φ̃(G). Then $G^- \geq \int_{\underline{x}}^{\bar{x}} g^-(x, G^-)\,dF(x)$, since g⁻(x, G) is increasing in G and any other
solution g(x, G) to (7) satisfies g(x, G) ≥ g⁻(x, G). Thus, to obtain a contradiction, suppose that $G^- > \int_{\underline{x}}^{\bar{x}} g^-(x, G^-)\,dF(x)$. Since $\int_{\underline{x}}^{\bar{x}} g^-(x, G)\,dF(x)$ is increasing in G and $\int_{\underline{x}}^{\bar{x}} g^-(x, 0)\,dF(x) \geq 0$, by Tarski's fixed point theorem (e.g., Theorem 2.5.1 in Topkis (1998)), there exists G′ ∈ [0, G⁻) such that $G' = \int_{\underline{x}}^{\bar{x}} g^-(x, G')\,dF(x)$, yielding a contradiction.

Next, suppose G⁺ is the greatest fixed point of φ̃(G). Similarly, $G^+ \leq \int_{\underline{x}}^{\bar{x}} g^+(x, G^+)\,dF(x)$, so to obtain a contradiction, suppose that $G^+ < \int_{\underline{x}}^{\bar{x}} g^+(x, G^+)\,dF(x)$. Since lim_{g→∞} χ′(g) = ∞ and a ≤ 1, there exists Ḡ such that $\bar{G} > \int_{\underline{x}}^{\bar{x}} g^+(x, \bar{G})\,dF(x)$. So, since $\int_{\underline{x}}^{\bar{x}} g^+(x, G)\,dF(x)$ is increasing in G, again by Tarski's fixed point theorem, there exists G′ ∈ (G⁺, Ḡ) such that $G' = \int_{\underline{x}}^{\bar{x}} g^+(x, G')\,dF(x)$, yielding another contradiction and completing the proof of the lemma. Q.E.D.
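The monotone structure exploited here lends itself to simple computation: since the map G ↦ ∫ g⁻(x, G) dF(x) is increasing, iterating it from G = 0 climbs monotonically to the smallest fixed point G⁻. The sketch below does this under illustrative primitives of our own choosing (price fixed at P₀, χ(g) = g²/2, F uniform on [1, 2], and a strictly concave contracting program so that g⁻ = g⁺); it is a sketch of the Tarski logic, not the paper's computation.

```python
import numpy as np

c1 = lambda a: a / (1.0 - a)
c2 = lambda a: 1.0 / (1.0 - a) ** 2
c  = lambda a: -a - np.log(1.0 - a)
P0, eta, gamma, u0 = 1.5, 1.0, 0.3, 0.5
xs = np.linspace(1.0, 2.0, 200)          # productivity grid, F uniform on [1, 2]

def g_minus(x, u_bar, n=2000):
    """Equilibrium guns for type x: interior FOC for a, then g = (chi')^{-1}(a/eta).
    With a strictly concave program the solution is unique, so g^- = g^+."""
    a = np.linspace(1e-6, 1 - 1e-6, n)
    foc = P0 * x - ((1 - a) * c1(a) + c(a) + u_bar - a / eta) - a * (1 - a) * c2(a)
    return a[np.argmin(np.abs(foc))] / eta

G = 0.0
for _ in range(200):                     # G_{k+1} = integral of g^-(x, G_k) dF(x)
    u_bar = u0 - gamma / (1.0 - gamma) * G
    G_next = float(np.mean([g_minus(x, u_bar) for x in xs]))
    if abs(G_next - G) < 1e-6:
        break
    G = G_next
print(f"smallest equilibrium aggregate coercion G^- = {G:.4f}")
```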
We can now derive comparative statics on the extremal elements of φ̂(Q, γ, L, η). We say that "z is increasing in F(·)" if a first-order stochastic dominance increase in F(·) leads to an increase in z.

LEMMA 5: The smallest and greatest elements of φ̂(Q, γ, L, η) exist, and are increasing in γ and F(·) and decreasing in Q and η. If P(QL) = P₀ for all QL, then the smallest and greatest elements of φ̂(Q, γ, L, η) are increasing in L. If ũ(L) = ũ₀ for all L, then the smallest and greatest elements of φ̂(Q, γ, L, η) are decreasing in L.

PROOF: Recall that the smallest and greatest elements of φ̂(Q, γ, L, η) are the smallest and greatest equilibrium values of Q when price is fixed at P(QL). We claim that, in the modified game where price is fixed at an arbitrary P₀, the smallest and greatest equilibrium values of Q exist and are increasing in P₀, γ, L, and F(·), and decreasing in η. Given this claim, the results for γ, F(·), and η follow immediately, and the result for Q follows from the claim combined with the fact that P(QL) is decreasing. If P(QL) = P₀ for all QL, then the result for L also follows immediately from the claim. If ũ(L) = ũ₀ for all L, then the smallest and greatest equilibrium aggregates (Q, G) are constant in L when price is fixed at P₀, so the result for L follows from the fact that P(QL) is decreasing. It therefore remains only to prove the claim.

Thus, consider the modified game where P(·) ≡ P₀. From (7), for all G, φ̃(G) is increasing in P₀, x, γ, L, and F(·), and decreasing in η (in the strong set order). Since φ̃(G) is increasing, Theorem 2.5.2 in Topkis (1998) implies that the smallest and greatest fixed points of φ̃(G), and thus the smallest and greatest equilibrium values of G (in the modified game), exist and are increasing in P₀, γ, L, and F(·), and decreasing in η. Now Lemma 4 and the supermodularity of (7) in (q, g) imply that the smallest (greatest) equilibrium value of output, Q⁻ (Q⁺), corresponds to the smallest (greatest) fixed point of φ̃(G), G⁻ (G⁺); that is, the smallest and greatest equilibrium values of Q are given by

$$Q^{-} = \int_{\underline{x}}^{\bar{x}} \eta\,\chi'\bigl(g^{-}(x, G^{-})\bigr)\,x\,dF(x) \quad\text{and}\quad Q^{+} = \int_{\underline{x}}^{\bar{x}} \eta\,\chi'\bigl(g^{+}(x, G^{+})\bigr)\,x\,dF(x).$$

Therefore, the comparative statics on G⁻ and G⁺ together with the comparative statics on g⁻ and g⁺ described in Proposition 11 imply that the smallest and greatest equilibrium values of Q (in the modified game) exist and are increasing in P₀, γ, L, and F(·), and decreasing in η, which proves the claim. Q.E.D.

It is now straightforward to derive comparative statics on the extremal fixed points of φ̂(Q, γ, L, η), which equal the extremal equilibrium values of Q:

PROPOSITION 13: The smallest and greatest mixed-strategy equilibrium values of Q are increasing in γ and F(·), and decreasing in η. If P(QL) = P₀ for all QL, then the smallest and greatest mixed-strategy equilibrium values of Q are increasing in L. If ũ(L) = ũ₀ for all L, then the smallest and greatest mixed-strategy equilibrium values of Q are decreasing in L.

PROOF: From Lemma 5, greater γ or F(·) or lower η shifts φ̂(Q, γ, L, η) up for all Q, while greater L shifts φ̂(Q, γ, L, η) up for all Q if P(QL) = P₀ and shifts φ̂(Q, γ, L, η) down for all Q if ũ(L) = ũ₀. Since φ̂(Q, γ, L, η) is monotone in Q (by Lemma 5), Theorem 2.5.2 in Topkis (1998) implies that the smallest and greatest fixed points of φ̂(Q, γ, L, η) are increasing in γ and F(·), and decreasing in η, and are increasing in L if P(QL) = P₀ and decreasing in L if ũ(L) = ũ₀. Q.E.D.

Repeating the above argument and interchanging the roles of Q and G gives the following proposition.

PROPOSITION 14: The smallest and greatest mixed-strategy equilibrium values of G are increasing in γ and decreasing in η. If P(QL) = P₀ for all QL, then the smallest and greatest mixed-strategy equilibrium values of G are increasing in L and F(·). If ũ(L) = ũ₀ for all L, then the smallest and greatest mixed-strategy equilibrium values of G are decreasing in L.

PROOF: The only difference between this result and Proposition 13 is that the comparative statics with respect to F(·) now only applies when P(QL) = P₀ for all QL. To see why, observe that instead of assuming that P(·) is fixed and studying the map φ̃(G, P₀, x, γ, L, η) in (A-13), we now assume that γ = 0 and study the map

$$\bar{\phi}(Q, x, L, \eta) = \left[\int_{\underline{x}}^{\bar{x}} q^{-}\bigl(P(QL), \tilde{u}(L), x, \eta\bigr)\,dF(x),\ \int_{\underline{x}}^{\bar{x}} q^{+}\bigl(P(QL), \tilde{u}(L), x, \eta\bigr)\,dF(x)\right].$$
An argument identical to the proof of Lemma 4 shows that the smallest and greatest fixed points of φ̄ correspond to the smallest and greatest equilibrium levels of Q when γ = 0, and that the corresponding smallest and greatest equilibrium levels of G are

$$G^{-} = \int_{\underline{x}}^{\bar{x}} (\chi')^{-1}\bigl(q^{-}(P(Q^{-}L), \tilde{u}(L), x, \eta)/x\eta\bigr)\,dF(x)$$

and

$$G^{+} = \int_{\underline{x}}^{\bar{x}} (\chi')^{-1}\bigl(q^{+}(P(Q^{+}L), \tilde{u}(L), x, \eta)/x\eta\bigr)\,dF(x),$$

respectively. The map φ̄ is decreasing in Q and η and increasing in F(·); it is increasing in L if P(QL) = P₀ for all QL, and decreasing in L if ũ(L) = ũ₀ for all L. By the same argument as in the proof of the claim in Lemma 5, the smallest and greatest equilibrium levels of Q are decreasing in η and increasing in F(·), and are increasing in L if P(QL) = P₀ and decreasing in L if ũ(L) = ũ₀. It then follows from our characterization of G⁻ and G⁺ that they are decreasing in η, are increasing in L and F(·) if P(QL) = P₀, and are decreasing in L if ũ(L) = ũ₀. However, G⁻ and G⁺ need not be increasing in F(·) if P(QL) is decreasing, because the direct (positive) effect of increasing F(·) may be offset by the indirect (negative) effect that Q⁻ and Q⁺ are increasing in F(·); this accounts for the difference relative to Proposition 13. The remainder of the proof is analogous to the proofs of Lemma 5 and Proposition 13. Q.E.D.

REFERENCES

ACEMOGLU, D., AND J. S. PISCHKE (1999): "The Structure of Wages and Investment in General Training," Journal of Political Economy, 107, 539–572. [580]
ACEMOGLU, D., AND A. WOLITZKY (2011): "Supplement to 'The Economics of Labor Coercion'," Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/8963_extensions.pdf. [560]
ACEMOGLU, D., S. JOHNSON, AND J. A. ROBINSON (2002): "Reversal of Fortune: Geography and Institutions in the Making of the Modern World Income Distribution," Quarterly Journal of Economics, 117, 1231–1294. [556,559,581,587]
ANDREES, B., AND P. BELSER (EDS.) (2009): Forced Labor: Coercion and Exploitation in the Private Economy. Boulder: Lynne Rienner Publishers. [556,557,568]
ASTON, T. H., AND C. H. E. PHILPIN (1987): The Brenner Debate: Agrarian Class Structure and Economic Development in Pre-Industrial Europe. Cambridge: Cambridge University Press. [558]
BARZEL, Y. (1977): "An Economic Analysis of Slavery," Journal of Law and Economics, 20, 87–110. [559]
BASU, K. (1986): "One Kind of Power," Oxford Economic Papers, 38, 259–282. [559]
BERGSTROM, T. (1971): "On the Existence and Optimality of Competitive Equilibrium for a Slave Economy," Review of Economic Studies, 38, 23–36. [559]
BLOOM, J. (1998): The End of the Old Order in Rural Europe. Princeton: Princeton University Press. [556]
BRENNER, R. (1976): "Agrarian Class-Structure and Economic-Development in Pre-Industrial Europe," Past and Present, 70, 30–75. [558,577]
CANARELLA, G., AND J. TOMASKE (1975): "The Optimal Utilization of Slaves," Journal of Economic History, 36, 621–629. [559]
CHWE, M. (1990): "Why Were Workers Whipped? Pain in a Principal–Agent Model," Economic Journal, 100, 1109–1121. [556,559,560,568,571]
CONNING, J. (2004): "On the Causes of Slavery or Serfdom and the Roads to Agrarian Capitalism: Domar's Hypothesis Revisited," Manuscript, Hunter College. [559]
CONRAD, A. H., AND J. R. MEYER (1958): "The Economics of Slavery in the Antebellum South," Journal of Political Economy, 66, 95–122. [559]
CURTIN, P. D. (1990): The Rise and Fall of the Plantation Complex. Cambridge: Cambridge University Press. [556]
DAVIS, D. B. (2006): Inhuman Bondage: The Rise and Fall of Slavery in the New World. Oxford: Oxford University Press. [555]
DOMAR, E. D. (1970): "The Causes of Slavery or Serfdom: A Hypothesis," Journal of Economic History, 30, 18–32. [556,559,568,587]
DOW, G. K. (1993): "Why Capital Hires Labor: A Bargaining Perspective," American Economic Review, 83, 118–134. [556]
FENOALTEA, S. (1984): "Slavery in Comparative Perspective: A Model," Journal of Economic History, 44, 635–638. [559,580]
FINDLAY, R. (1975): "Slavery, Incentives, and Manumission: A Theoretical Model," Journal of Economic History, 44, 923–933. [559]
FINLEY, M. I. (1976): "A Peculiar Institution?" Times Literary Supplement, 3877, 819–821. [555]
FOGEL, R. W., AND S. L. ENGERMAN (1974): Time on the Cross: The Economics of American Negro Slavery, 2 Volumes. Boston: Little Brown. [556–559,571,587]
GENICOT, G. (2002): "Bonded Labor and Serfdom: A Paradox of Voluntary Choice," Journal of Development Economics, 67, 101–127. [559]
GROSSMAN, S. J., AND O. D. HART (1986): "The Costs and Benefits of Ownership: A Theory of Vertical and Lateral Integration," Journal of Political Economy, 94, 691–719. [578]
HABAKKUK, H. J. (1958): "The Economic History of Modern Britain," Journal of Economic History, 18, 486–501. [556]
INTERNATIONAL LABOR ORGANIZATION (2009): Report of the Director-General: The Cost of Coercion. Geneva, Switzerland: International Labour Office. [558]
KEY, V. O., JR. (1949): Southern Politics: In State and Nation. New York: Vintage Books. [570]
KLEIN, H. S., AND B. VINSON (2007): African Slavery in Latin America and the Caribbean. New York: Oxford University Press. [556]
LAGERLÖF, N. (2009): "Slavery and Other Property Rights," Review of Economic Studies, 76, 319–342. [559]
LE ROY LADURIE, E. (1977): The Peasants of Languedoc. Urbana, IL: University of Illinois Press. Translated by John Day. [556]
LOCKHART, J. (2000): Of Things of the Indies: Essays Old and New in Early Latin American History. Stanford: Stanford University Press. [556]
LOCKHART, J., AND S. B. SCHWARTZ (1983): Early Latin America: A History of Colonial Spanish Latin America and Brazil. Cambridge: Cambridge University Press. [556]
LOVEJOY, P. E. (2000): Transformations of Slavery: A History of Slavery in Africa (Second Ed.). Cambridge: University of Cambridge Press. [555]
MAS-COLELL, A. (1984): "On a Theorem of Schmeidler," Journal of Mathematical Economics, 13, 201–206. [593]
MELTZER, M. (1993): Slavery: A World History. New York: DeCapo Press. [555]
MILGROM, P., AND J. ROBERTS (1994): "Comparing Equilibria," American Economic Review, 84, 441–459. [575]
NAIDU, S., AND N. YUCHTMAN (2009): "How Green Was My Valley? Coercive Contract Enforcement in 19th Century Industrial Britain," Mimeo, Harvard University. [558]
NAQVI, N., AND F. WEMHÖNER (1995): "Power, Coercion, and the Games Landlords Play," Journal of Development Economics, 47, 191–205. [559]
NORTH, D. C., AND R. T. THOMAS (1971): "The Rise and Fall of the Manorial System: A Theoretical Model," Journal of Economic History, 31, 777–803. [556]
PATTERSON, O. (1982): Slavery and Social Death: A Comparative Study. Cambridge: Harvard University Press. [555,556]
POSTAN, M. M. (1973): Cambridge Economic History of Europe: Expanding Europe in the 16th and 17th Centuries. Cambridge: Cambridge University Press. [556]
RANSOM, R. L., AND R. SUTCH (1975): "The Impact of the Civil War and of Emancipation on Southern Agriculture," Explorations in Economic History, 12, 1–28. [557]
——— (1977): One Kind of Freedom: The Economic Consequences of Emancipation. Cambridge: Cambridge University Press. [557,559,570]
SHERSTYUK, K. (2000): "Performance Standards and Incentive Pay in Agency Contracts," Scandinavian Journal of Economics, 102, 725–736. [557]
TOPKIS, D. M. (1998): Supermodularity and Complementarity. Princeton, NJ: Princeton University Press. [567,591,596,597]
WRIGHT, G. (1978): The Political Economy of the Cotton South: Households, Markets, and Wealth in the Nineteenth Century. New York: Norton. [556]
Dept. of Economics, Massachusetts Institute of Technology, 50 Memorial Drive, E52-371, Cambridge, MA 02142-1347, U.S.A. and Canadian Institute for Advanced Research;
[email protected] and Dept. of Economics, Massachusetts Institute of Technology, 50 Memorial Drive, Cambridge, MA 02142-1347, U.S.A.;
[email protected]. Manuscript received November, 2009; final revision received September, 2010.
Econometrica, Vol. 79, No. 2 (March, 2011), 601–644
TEMPTATION AND REVEALED PREFERENCE¹

BY JAWWAD NOOR

Gul and Pesendorfer (2001) model the static behavior of an agent who ranks menus prior to the experience of temptation. This paper models the dynamic behavior of an agent whose ranking of menus itself is subject to temptation. The representation for the agent's dynamically inconsistent choice behavior views him as possessing a dynamically consistent view of what choices he "should" make (a normative preference) and being tempted by menus that contain tempting alternatives. Foundations for the model require a departure from Gul and Pesendorfer's idea that temptation creates a preference for commitment. Instead, it is hypothesized that distancing an agent from the consequences of his choices separates normative preference and temptation.

KEYWORDS: Temptation, dynamic inconsistency, preference for commitment, sophistication, menus.
1. INTRODUCTION

AN AGENT MAY BREAK HIS DIET or abuse drugs while simultaneously telling himself that he really should not. Such instances suggest that choice is determined not by one, but two preference orderings: a temptation preference that captures the agent's desires and a normative preference that captures his view of what choices he "should" make. Choice behavior is the outcome of an aggregation of temptation preference and normative preference. The agent is said to experience temptation when his desires conflict with his normative preference.

So as to write down a choice-theoretic model of temptation, a foundational question must be answered: What observable behavior identifies an agent who struggles with temptation and reveals his normative and temptation preferences?

Gul and Pesendorfer (2001, 2004) (henceforth GP) were the first to provide a choice-theoretic model of temptation. Their answer to the foundational question is based on the idea that temptation creates a preference for commitment: an agent who thinks he should choose a "good" option g but anticipates being tempted by a "bad" option b would avoid the latter. In particular, he would strictly prefer {g}, the menu (choice problem) that commits him to g, rather than the menu {g, b} that provides the flexibility of choosing b:

$$\{g\} \succ \{g, b\}.$$

¹This paper is based on Chapter 2 of my Ph.D. Thesis (Noor (2005)). I am greatly indebted to Larry Epstein for his guidance and encouragement throughout the project and to Bart Lipman for many useful discussions. I have also benefitted from comments from Paulo Barelli, Faruk Gul, Klaus Nehring, Wolfgang Pesendorfer, Gábor Virág, seminar participants at Rochester, Princeton, Yale, Iowa, and Boston University, participants at the Canadian Economic Theory Conference (May 2004), the Risk, Uncertainty and Decisions Conference (June 2004), and the Midwest Economic Theory Meeting (Nov. 2004), and in particular from the editor and referees. The usual disclaimer applies.
This preference for commitment reveals the existence of temptation and, moreover, reveals a normative preference for g and a temptation by b. Adopting an agent's preferences over menus as their primitive, GP used such ideas to construct a model of temptation.

This paper studies an agent who may be tempted not just by alternatives in a menu, but also by menus themselves. Specifically, opportunities that lead to tempting consumption may themselves be tempting. For instance, the agent in the above example may be tempted by the menu {g, b} because it offers b. Modelling such agents requires a substantial departure from GP's strategy for identifying temptation. Observe that if the temptation by the menu {g, b} is strong enough, the agent would exhibit

$$\{g\} \prec \{g, b\}.$$

That is, when the very act of choosing commitment requires self-control, it becomes possible that temptation may induce the agent to refuse commitment, contrary to GP's hypothesis. Indeed, applying their hypothesis here would lead the analyst to erroneously conclude that b does not tempt and may even be normatively superior to g. Evidently then, an alternative to GP is required so as to identify an agent's temptation and normative preference when menus tempt. One may consider extending GP's model by adopting a preference over menus of menus as the primitive. However, the logic of temptation by menus extends to this preference as well and, indeed, also to preferences over more complicated domains consisting of menus of menus up to all orders. The objective of this paper is to provide a choice-theoretic model that describes agents who are tempted by menus.

Distancing

Our answer to the foundational question is based on the idea that, at least in stationary environments, normative preference is revealed when the agent is distanced from the consequences of his choices. The idea is familiar to philosophers and psychologists, and is part of common wisdom. For instance, when trying to demonstrate to a friend that his smoking is against his better judgment, we try to get him to view the act of smoking from a distance by asking him how he would feel about his children smoking. The "veil of ignorance" (Rawls (1971)) in philosophy is a distancing tool. Psychologists have argued that reversals in choices induced by temporal distancing, such as the so-called preference reversals and dynamic inconsistency in the experimental literature on time preference (Frederick, Loewenstein, and O'Donoghue (2002)), reveal the existence of self-control problems.²

²Preference reversals and dynamic inconsistency reveal a loss in patience when rewards are brought closer to the present. For a large reward received at time t + d and a smaller reward received at time t, subjects in experiments on preference reversals exhibit a preference for the small reward when t = 0, but reverse preferences when t is large. In experiments on dynamic inconsistency, subjects prefer the large reward when t is large, but switch preferences after t periods elapse.
We formalize the idea of temporal distancing in the following way. Suppose we observe how the agent chooses between delayed rewards. Derive a set of preference relations {≽_t}_{t=0}^∞ where each ≽_t represents choices between consumption alternatives that are to be received t periods later. By the distancing hypothesis, as t grows, the influence of temptation on the agent's ranking ≽_t of alternatives diminishes. That is, as t grows, the temptation component underlying ≽_t becomes less significant and so each ≽_t provides an increasingly better approximation of the agent's underlying normative preference. We identify normative preference with the (appropriately defined) limit
$$\text{(1.1)}\qquad \succeq^{*} \;\equiv\; \lim_{t\to\infty} \succeq_{t}.$$
With the normative preference defined thus, temptation is naturally identified through normatively inferior choices. These ideas form the basis for building a choice-theoretic model of temptation.³

³Note that the appeal of (1.1) relies on a stationary setup: if the agent, say, anticipates preference shocks in the future, then those considerations will be reflected in ≽* and, therefore, this ordering ceases to fully capture normative preference over current consumption.

Our Model

The specific model we construct is a stationary infinite horizon dynamic model. In every period the agent faces a menu from which he chooses immediate consumption and a menu for the next period. Choice is determined by a struggle between normative and temptation preferences (over consumption-menu pairs). Temptation preferences have a rich structure. The impact of temptation on choice is stronger for immediate consumption than future consumption. The agent is tempted by immediate consumption alternatives, may be tempted to over- or underdiscount the future (relative to normative preference), and may be tempted by menus. A menu tempts to the extent that it offers an opportunity for future indulgence, but the model also permits normative considerations to affect the extent of temptation by a given menu. The discounting of menu temptation may be nonexponential, in which case the temptation ranking of menus can reverse with delay.

Our agent is dynamically inconsistent. For instance, at time t the agent may plan to commit at t + 2, but if given the opportunity at time t + 1, may deviate from this plan and postpone commitment until t + 3. Our agent is sophisticated in that he is fully aware of his future behavior. However, choice is not determined as the outcome of an intrapersonal game (Laibson (1997)). Instead,
choice in each period maximizes a recursively defined (though not recursive) utility function. Some of the components of our utility representation adopt the functional form of GP (2001). Nevertheless, our model is different from existing infinite horizon versions of GP's model (GP (2004), Krusell, Kurusçu, and Smith (2002), and Noor (2006)) in two fundamental respects, described as contributions (b) and (c) below.

Summary of Contributions

This paper makes three main contributions:
(a) Foundations for Temptation: While GP identify temptation by means of a preference for commitment, we introduce an alternative strategy that first derives a normative preference by looking at behavior from a distance, and then identifies temptation through gaps between normative preference and choice. This strategy permits us to study an agent whose behavior is contaminated by temptation in every period, and yet identify what does or does not constitute temptation. In particular, whereas GP's strategy identifies temptation experienced only in the next period(s), ours does so for the current period as well.
(b) Temptation by Menus: We axiomatize a dynamic model of tempting menus. The model is an extension of GP (2001) to an infinite horizon. Other extensions in the literature are by GP (2004), Krusell, Kurusçu, and Smith (2002), and Noor (2006). A key difference is that these models satisfy the so-called Stationarity axiom (see Section 5), which enables a relatively straightforward extension of GP to an infinite horizon, while in our model temptation by menus necessitates the violation of Stationarity. We exploit the distancing hypothesis to extend GP to an infinite horizon. The models in GP (2004), Krusell, Kurusçu, and Smith (2002), and Noor (2006) have counterparts in our model and as such, our model also unifies them.
(c) Foundations for Sophistication: The literature emanating from GP (2001) describes choices at one point in time—the ranking of menus in an ex ante period—and relies on an interpretation of the representation to describe subsequent dynamic choice behavior. In our model, we take dynamic choice behavior as our primitive. Thus, our model fully describes not only how the agent ranks menus over time, but also what he chooses from them. While the literature assumes that the agent correctly anticipates future behavior, such sophistication produces a restriction on choice behavior in our model, and thus becomes a refutable hypothesis. We also prove a general result (Section 6.1) that shows how the two-period model of GP (2001) can be enriched so that sophistication can be given foundations in that model.

The paper proceeds as follows. Section 2 introduces our model. Sections 3 and 4 present axioms and representation theorems, respectively. Section 5 relates this paper with the literature. Section 6 outlines the proof of our main representation theorem and Section 7 concludes. Proofs are relegated to the Appendices and the Supplemental Material (Noor (2011)).
2. THE MODEL

Given any compact metric space X, let Δ(X) denote the set of all probability measures on the Borel σ-algebra of X endowed with the weak convergence topology (Δ(X) is compact and metrizable (Aliprantis and Border (1994, Theorem 14.11))); K(X) denotes the set of all nonempty compact subsets of X endowed with the Hausdorff topology (K(X) is a compact metric space (Aliprantis and Border (1994, Theorem 3.71(3)))). Generic elements of K(X) are x, y, z and those of Δ(X) are μ, η, ν. For α ∈ [0, 1], αμ + (1 − α)η ∈ Δ(X) is the measure that assigns αμ(A) + (1 − α)η(A) to each A in the Borel σ-algebra of X. Similarly, αx + (1 − α)y ≡ {αμ + (1 − α)η : μ ∈ x, η ∈ y} ∈ K(X) is a mixture of x and y. Denote these mixtures more simply by μαν and xαy, respectively.

Given a compact metric space C of consumption alternatives, GP (2004) constructed a space Z of infinite horizon menus. Each menu z ∈ Z is a compact set of lotteries, where each lottery is a measure over current consumption and a continuation menu—Z is homeomorphic to K(Δ(C × Z)), a compact metric space. See GP (2004) for the formal definition of Z. Below we often write Δ(C × Z) as Δ.

The primitive of our model is a closed-valued choice correspondence C : Z ⇒ Δ, where ∅ ≠ C(x) ⊂ x for all x ∈ Z. This is a time-invariant choice correspondence that describes the choices at any time t = 1, 2, .... The time line is given by
$$\text{(2.1)}\qquad \underset{t=1}{\bullet}\;\xrightarrow{\;(c,\,y)\,\in\,x\;}\;\underset{t=2}{\bullet}\;\xrightarrow{\;(c',\,z)\,\in\,y\;}\;\underset{t=3}{\bullet}\;\xrightarrow{\;(c'',\,z')\,\in\,z\;}\;\cdots$$
For any menu x faced in period 1, the agent chooses (c, y) ∈ C(x), say. He receives immediate consumption c and a continuation menu y.⁴ The continuation menu y is faced in period 2 and a choice is made from it. The process continues ad infinitum. All choices are interpreted as possibly subject to temptation.

⁴More generally, if the alternative chosen from x is a nondegenerate lottery μ ∈ Δ, then the uncertainty plays out before the next period, yielding some (c, y). This leaves the agent with immediate consumption c and the menu y to face in period 2.

For any pair of continuous linear functions U, V : Δ → R, the GP representation is given by
$$\text{(2.2)}\qquad W(x) := \max_{\mu\in x}\,\bigl[U(\mu) + V(\mu)\bigr] - \max_{\eta\in x} V(\eta),$$
where the dependence of W on U, V is suppressed to ease notation. Our model takes the form of the following representation for C.
DEFINITION 2.1—U-V Representation: The choice correspondence C over Δ admits a U-V representation if there exist functions U, V : Δ → R such that C satisfies

$$\text{(2.3)}\qquad C(x) = \arg\max_{\mu\in x}\,\{U(\mu) + V(\mu)\},\qquad x \in Z,$$

and U, V satisfy

$$\text{(2.4)}\qquad U(\mu) = \int_{C\times Z}\bigl(u(c) + \delta W(x)\bigr)\,d\mu$$

and

$$\text{(2.5)}\qquad V(\mu) = \int_{C\times Z}\Bigl(v(c) + \beta W(x) + \gamma \max_{\eta\in x} V(\eta)\Bigr)\,d\mu$$

for all μ ∈ Δ, where W : Z → R is defined by (2.2), u, v : C → R are continuous functions, and δ, γ, β are scalars satisfying δ ∈ (0, 1), γ ∈ [0, δ], and β > γ − δ. The representation is identified with the tuple (u, v, δ, β, γ).

The model makes sense of the agent's (possibly temptation-ridden) choices C by asserting that the agent possesses normative and temptation preferences over Δ, reflected in U and V, respectively. The representation (2.3) states that choice in any period is a compromise between the two: it maximizes the sum of U and V.

Normative (expected) utility U evaluates lotteries with a utility index u(c) + δW(x) that comprises utility u for immediate consumption, a discount factor δ, and a utility W over continuation menus that has the familiar GP form (2.2). To remind the reader: the temptation opportunity cost |V(μ) − max_{η∈x} V(η)| is interpreted as the self-control cost of choosing μ and, thus, W is a value function suggesting that the agent maximizes normative utility U net of self-control costs. An important observation is that the maximizer—the anticipated choice from x—maximizes U + V. This is precisely what is described by (2.3). Thus our agent is sophisticated in that her anticipated choices coincide with her actual choices.

Temptation (expected) utility V evaluates lotteries with a utility index v(c) + βW(x) + γ max_{η∈x} V(η) that evaluates immediate consumption by v and the continuation menu x according to the discounted utility:
$$\text{(2.6)}\qquad \beta W(x) + \gamma \max_{\eta\in x} V(\eta).$$
There are two differences from how U evaluates continuation menus. First, the normative utility W of a continuation menu is discounted by β instead of δ.⁵ Second, consideration is given to max_{η∈x} V(η), the pure temptation value of a menu. This is discounted by γ.

⁵The restriction β > γ − δ permits β < 0. Section 4 characterizes the special case that rules this out.
The model requires γ ≤ δ, reflecting the intuitive idea that the temptation perspective is, in some sense, more myopic than the normative perspective.

The experience of temptation is suggested by a strict conflict between U and V. The experience of temptation by menus, or menu temptation for short, is similarly suggested by a conflict in the ranking of continuation menus, such as (c, x) versus (c, y). The nature of menu temptation is determined in the model by the parameters γ and β: When γ = 0 and β ≥ 0, there is no menu temptation:

$$U(c, x) \ge U(c, y)\quad\Longrightarrow\quad W(x) \ge W(y)\quad\Longrightarrow\quad V(c, x) \ge V(c, y).$$
When γ > 0 and β = 0, menu temptation is determined completely by the tempting alternatives contained in it, whereas if β > 0, then the latter consideration is dampened by the normative value. Moreover, when β > 0, the relatively steeper discounting of the pure temptation value of a menu (γ ≤ δ) implies that a menu may cease to tempt if it is pushed into the future. Some of these special cases have counterparts in the literature (GP (2004), Krusell, Kurusçu, and Smith (2002), Noor (2006)), and we give them related names:

DEFINITION 2.2—QSC, FT, DSC: A U-V representation (u, v, δ, β, γ) is a quasihyperbolic self-control (QSC) representation if γ = 0 and β ≥ 0, a future temptation (FT) representation if γ > 0 and β = 0, and a dynamic self-control (DSC) representation if β = γ = 0.

The description of and comparison with related literature is deferred to Section 5.

3. FOUNDATIONS: AXIOMS

The following notation will aid exposition.
• Fix c ∈ C throughout. For any x, define x^{+1} ≡ (c, x) and inductively for t > 1, x^{+t} = (c, x^{+(t−1)}). Then x^{+t} ∈ Δ is the alternative that yields menu x after t > 0 periods, and c in all periods between time 0 and t. We write {μ}^{+t} as μ^{+t} and identify μ^{+0} with μ. The reader should keep in mind that x^{+t} is not a menu, but a degenerate lottery that yields a menu t periods later.
• Let ≽ denote the revealed preference relation on Δ that is generated by choices from binary menus:
$$\text{(3.1)}\qquad \mu \succeq \eta \quad\Longleftrightarrow\quad \mu \in C(\{\mu, \eta\}).$$
The indifference relation ≈ and the strict preference relation ≻ are derived from ≽ in the usual way.

Consider the following axioms on C. The quantifiers "for all μ, η ∈ Δ, x, y ∈ Z, c, c′, c″ ∈ C, and α ∈ [0, 1]" are suppressed.
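Before turning to the axioms, a toy computation may help fix ideas. The Python sketch below evaluates the choice rule (2.3), the GP value (2.2), and the derived binary revealed preference (3.1) on a two-element menu; the alternatives and the utility numbers assigned by U and V are arbitrary placeholders standing in for the normative and temptation utilities of Definition 2.1, not objects constructed from the axioms.

```python
# Toy illustration of (2.2), (2.3), and (3.1) on a finite menu.
# U and V assign hypothetical normative and temptation utilities.
U = {"salad": 10.0, "burger": 4.0}
V = {"salad": 0.0, "burger": 5.0}

def choice(menu):
    """(2.3): the chosen alternatives maximize U + V over the menu."""
    best = max(U[a] + V[a] for a in menu)
    return {a for a in menu if U[a] + V[a] == best}

def W(menu):
    """(2.2): normative value of the choice, net of the self-control
    cost max V - V(chosen alternative)."""
    return max(U[a] + V[a] for a in menu) - max(V[a] for a in menu)

def weakly_preferred(mu, eta):
    """(3.1): mu is revealed weakly preferred to eta iff mu is chosen
    from the binary menu {mu, eta}."""
    return mu in choice({mu, eta})

print(choice({"salad", "burger"}))           # {'salad'}: 10 + 0 > 4 + 5
print(W({"salad"}), W({"salad", "burger"}))  # 10.0 vs 5.0
print(weakly_preferred("salad", "burger"))   # True
```

With these numbers the sketch reproduces the preference for commitment from the Introduction: committing to the salad is worth 10, while the flexible menu is worth only 5 because resisting the burger is costly.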
3.1. Standard Axioms

The first set of axioms reflects that the agent has some features possessed by "standard" agents: the agent behaves as if he maximizes a single, continuous, linear, additively separable utility function. Although such features might rule out certain forms of temptation, the representation theorems in the next section confirm that they are consistent with an interesting class of temptation models.

AXIOM 1—Weak Axiom of Revealed Preference (WARP): If μ, η ∈ x ∩ y, μ ∈ C(x), and η ∈ C(y), then μ ∈ C(y).

This familiar axiom is a minimal consistency requirement on choices. However, though WARP is a standard axiom in standard choice theory, it is not clear that it is appropriate for a theory of choice under temptation. While WARP is an expression of the agent using a menu-independent preference to guide his choices, intuition suggests that the degree of self-control an agent has may well depend on what is available in the menu.⁶ The upshot is that the current model should be thought of as one where self-control, or the relative weight between temptation and normative preference in the agent's choices, is menu-independent. This intuition assures us that temptation need not rule out WARP.

⁶This is explored in Noor and Takeoka (2008). To illustrate, let s denote salad, b denote a burger, and B denote a large burger. If b is not so tempting, the agent may apply self-control and choose s out of {s, b}. But when faced with {s, b, B}, the presence of a large burger B may whet his appetite for a burger, so to compromise between his craving for B and his normative preference for s, he may settle for b, thereby violating WARP.

AXIOM 2—Continuity: C(·) is upper hemicontinuous.

Upper hemicontinuity of C(·) is implied by choices being determined by the maximization of a continuous preference. We impose upper hemicontinuity as an axiom, with the intention of establishing that choices are determined in such a way. Formally, upper hemicontinuity is equivalent to the statement that if {xₙ} is a sequence of menus converging to x and if μₙ ∈ C(xₙ) for each n, then the sequence {μₙ} has a limit point in C(x).

AXIOM 3—Independence: μ ≻ η ⇒ μαν ≻ ηαν.

This is the familiar Independence axiom. The next axiom is an explicit statement of the "indifference to the timing of resolution of uncertainty" property that is implicitly assumed in GP (2001).

AXIOM 4—Indifference to Timing: x^{+t}αy^{+t} ≈ (xαy)^{+t}.
Under both rewards x^{+t}αy^{+t} and (xαy)^{+t}, the agent faces x after t periods with probability α and y after t periods with probability (1 − α). However, under x^{+t}αy^{+t}, the uncertainty will be resolved today, whereas under (xαy)^{+t}, the uncertainty will be resolved after t periods. That is, the two rewards differ only in the timing of resolution of uncertainty. Indifference between the rewards corresponds to indifference to the timing of resolution of uncertainty. The axiom rules out temptation that may be associated with the timing of resolution of uncertainty, such as anxiety.⁷

⁷Indifference to Timing and Independence together imply the Set-Independence axiom of Dekel, Lipman, and Rustichini (2001) and GP (2001): in our context it can be stated as x^{+t} ≻ y^{+t} ⇒ (xαz)^{+t} ≻ (yαz)^{+t}. Noor and Takeoka (2008) demonstrated that this axiom needs to be relaxed so as to accommodate stories about anticipated choice from menus that violate WARP. This was also observed by Fudenberg and Levine (2006). Dekel, Lipman, and Rustichini (2009) note that Set-Independence is also related with a stochastic version of WARP.

AXIOM 5—Separability: (½(c, x) + ½(c′, x′))^{+t} ≈ (½(c, x′) + ½(c′, x))^{+t}.

Separability states that when comparing two lotteries (delayed by t ≥ 0 periods), the agent only cares about the marginal distributions on C and Z induced by the lotteries. That is, only marginals matter, and correlations between consumption and continuation menus do not affect the agent's choices. Separability is not consistent with addiction, where the value of a menu may well depend on what is consumed today.

3.2. Main Axioms

There are four main axioms.

AXIOM 6—Set-Betweenness: x^{+t} ≽ y^{+t} ⇒ x^{+t} ≽ (x ∪ y)^{+t} ≽ y^{+t}.

This adapts the Set-Betweenness axiom in GP (2001) into our setting. GP (2001) used the axiom to describe menu choice prior to the experience of temptation, and that interpretation is valid here when the ranking of x^{+t} and y^{+t} is not swayed by menu temptation. But it is interesting that the axiom may be satisfied even when the agent is swayed by menu temptation. To illustrate, suppose that {μ}^{+t} ≻ {η}^{+t} is due to overwhelming temptation by the menu {μ}. We assert that the agent would exhibit {μ}^{+t} ≈ {μ, η}^{+t} ≻ {η}^{+t}, consistent with Set-Betweenness.⁸ The ranking {μ, η}^{+t} ≻ {η}^{+t} would arise because the choice between {μ, η}^{+t} and {η}^{+t} must also be overwhelmed by temptation—{μ, η}^{+t} is a tempting menu for the same reason that {μ}^{+t} is, and since {μ}^{+t} is chosen over {η}^{+t}, so is {μ, η}^{+t}. The indifference between {μ}^{+t} and {μ, η}^{+t} reflects that the agent foresees being overwhelmed by μ in either menu.

⁸The possibility that {μ}^{+t} ≻ {μ, η}^{+t} when {μ} is overwhelmingly tempting is ruled out by the Reversal axioms presented shortly. Note that overwhelming temptation is associated with the existence of a reversal below.
AXIOM 7—Sophistication: If {μ}^{+t} ≻ {η}^{+t}, then {μ, η}^{+t} ≻ {η}^{+t} ⟺ μ ≻ η.

As the name suggests, this axiom connects the agent's expectation of his future choices with his actual choices. Suppose that μ is preferred to η from a distance of t periods, {μ}^{+t} ≻ {η}^{+t}. Owing to this, if the anticipated choice from {μ, η} is μ, then he would exhibit {μ, η}^{+t} ≻ {η}^{+t}. The axiom states that the agent anticipates choosing μ from {μ, η} after t periods if and only if he actually does so (recall that ≽ reflects time-invariant choice, and so it describes also choice after t periods). That is, he is sophisticated in that he correctly anticipates future choices. It should be noted that this axiom is dynamic in that it relates choice across different times. From the perspective of one point in time, Sophistication also relates the ranking of different menus at different delays: if μ is ranked higher than η at delays t, t′ (that is, {μ}^{+i} ≻ {η}^{+i}, i = t, t′), then {μ, η}^{+t} ≻ {η}^{+t} ⟺ {μ, η}^{+t′} ≻ {η}^{+t′}.

Two final axioms further establish such connections by describing how the agent's ranking between two alternatives might change, if at all, when the alternatives are pushed into the future.
AXIOM 8—Reversal: If μ^{+t} ≺ η^{+t} (resp. μ^{+t} ≼ η^{+t}) and μ^{+t′} ≽ η^{+t′} (resp. μ^{+t′} ≻ η^{+t′}) for some t′ > t, then μ^{+t″} ≽ η^{+t″} (resp. μ^{+t″} ≻ η^{+t″}) for all t″ > t′.

The axiom states that pushing a pair of rewards into the future may lead the agent to reverse the way he ranks them, and if this happens, then the reversed ranking is maintained for all subsequent delays in the rewards. The axiom expresses the basic structure of "preference reversals," a robust finding in the experimental psychology literature on time preference; see Frederick, Loewenstein, and O'Donoghue (2002) for a survey of the evidence.⁹ An explanation given for preference reversals in the literature is that it is caused by a desire for immediate gratification. As in GP (2004), we specifically view it as arising due to temptation: when two alternatives are pushed into the future, temptation is weaker and eventually resistible, and this induces a reversal. Observe that given the time invariance of the primitive C, Reversal implies that revealed preferences in our model are dynamically inconsistent in the sense that the agent may exhibit (c, {μ}) ≻ (c, {η}) at t but μ ≺ η at t + 1.

⁹Let s (resp. l) denote a consumption stream that gives a small (resp. large) reward immediately and c in all other periods. Typical preference reversals are expressed by the Reversal axiom when μ and η are of the form μ = s^{+0} and η = l^{+d}.

As a simple consequence of Reversal, we obtain a function τ : Δ × Δ → R that defines the "switching point" of preference reversals; that is, τ(μ, η) is the minimum number of periods that μ and η need to be delayed before a preference
reversal is observed; if no reversal is observed, then τ(μ, η) = 0. For instance, if μ^{+t} ≽ η^{+t} for all t < T and μ^{+t} ≺ η^{+t} for all t ≥ T, then τ(μ, η) = T. See Appendix C for a precise definition of the function τ.

The final axiom makes specific statements about which menu rankings are not subject to reversals. Note that part (ii) presumes that A is a neighborhood of ({μ}^{+t}, {μ, η}^{+t}) ∈ Δ × Δ with respect to the product topology on Δ × Δ.

AXIOM 9—Menu Reversal: (i) If τ(μ, η) = 0, then τ({μ}^{+1}, {μ, η}^{+1}) = 0. (ii) If {μ}^{+t} ≻ {μ, η}^{+t}, then τ(A) = 0 for some neighborhood A of ({μ}^{+t}, {μ, η}^{+t}).

Part (i) of the axiom states that if the ranking of μ and η does not reverse with delay, then neither must the ranking of {μ}^{+1} and {μ, η}^{+1}. Intuitively, the lack of reversal indicates that the ranking of μ and η is either not subject to any temptation or subject to resistible temptation. In either case, the ranking of the menus (which are necessarily one period away) is either not subject to temptation or subject to resistible temptation. Therefore, delaying the menus will not give rise to a reversal.¹⁰

¹⁰The axiom may be weakened to hold for μ, η such that μ ≻ η. The case μ ≈ η is implied by Set-Betweenness and the case η ≻ μ is implied by Sophistication.

Part (ii) of the axiom makes two statements. First, if {μ}^{+t} ≻ {μ, η}^{+t}, then
{μ}^{+t′} ≻ {μ, η}^{+t′} for all t′ ≥ t. That is, there is no reversal after a preference for commitment. Intuitively, the choice to commit is driven by the agent's normative considerations and, thus, is not subject to a reversal. Second, the axiom says that there is also no reversal for any neighboring pair of alternatives. Since the ranking {μ}^{+t} ≻ {μ, η}^{+t} is driven by normative considerations, it is associated either with no temptation or with resistible temptation by {μ, η}^{+t}. In either of these cases, continuity of underlying temptation and normative preferences implies there is either no temptation or resistible temptation in any neighboring pairs of rewards. Hence no reversals will be observed when these neighboring pairs of rewards are pushed into the future.
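The switching point τ is easy to visualize numerically. In the stylized Python sketch below, alternatives delayed by t periods are ranked by U + γᵗV, so that the temptation component fades with distance as the distancing hypothesis suggests; this scalar decay is a deliberate simplification of the representation, and all utility numbers are hypothetical.

```python
# Stylized switching point tau of a preference reversal. Purely for
# illustration, rank alternatives delayed by t periods by U + GAMMA**t * V.
GAMMA = 0.5

def prefers(a, b, t, U, V):
    """True if a is ranked strictly above b at delay t."""
    return U[a] + GAMMA**t * V[a] > U[b] + GAMMA**t * V[b]

def tau(a, b, U, V, horizon=100):
    """Last delay at which the ranking of a and b changes (0 if the
    ranking never reverses within the horizon)."""
    signs = [prefers(a, b, t, U, V) for t in range(horizon)]
    return max((t for t in range(1, horizon) if signs[t] != signs[t - 1]),
               default=0)

# A tempting small-immediate reward versus a normatively better large one.
U = {"small": 1.0, "large": 2.0}
V = {"small": 4.5, "large": 0.0}
print(tau("small", "large", U, V))  # 3
```

With these numbers the small reward wins at delays 0 through 2 and loses from delay 3 onward, so τ = 3; under the axioms such a reversal, once it occurs, is never undone (Axiom 8).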
4. FOUNDATIONS: REPRESENTATION RESULTS

4.1. Main Theorem

Say that C is nondegenerate if (i) there exist μ, η, T such that μ ≺ η and μ^{+T} ≻ η^{+T}, and, moreover, (ii) there exist μ, η such that τ(A) = 0 for a neighborhood A of (μ, η) and {μ}^{+1} ≈ {μ, η}^{+1} ≻ {η}^{+1}. Part (i) of this definition asserts the existence of a preference reversal and part (ii) asserts the existence of μ, η that neither exhibit a preference reversal (and neither do neighboring rewards) nor give rise to a preference for commitment. This corresponds to the case in the model where U and V are nonconstant and affinely independent, and, in particular, where temptation is nontrivial.

The main result in this paper is the axiomatization of the U-V model (Definition 2.1).

THEOREM 4.1: If a nondegenerate choice correspondence C satisfies Axioms 1–9, then it admits a U-V representation. Conversely, a choice correspondence C that admits a U-V representation also satisfies Axioms 1–9.

The theorem states that an agent's choices satisfy Axioms 1–9 if and only if it is as if they are the result of an aggregation of the functions U and V in Definition 2.1. The order ≽ defined by (3.1) is represented by U + V. It is worth noting that Axioms 1–9 do not explicitly restrict the nature of temptation by menus—none of the axioms makes the statement that, for instance, a menu is tempting only if it contains tempting items. Yet they produce the very special structure (2.6) on menu temptation in the U-V representation that captures this property. The proof of the theorem is discussed in detail in Section 6.

Next is a uniqueness result that assures us that all the U-V representations of C deliver the same normative and temptation preferences.

THEOREM 4.2: If a nondegenerate choice correspondence C admits two U-V representations (u, v, δ, β, γ) and (u′, v′, δ′, β′, γ′) with respective normative and temptation utilities (U, V) and (U′, V′), then there exist constants a > 0 and b_u, b_v such that U′ = aU + b_u and V′ = aV + b_v. Moreover, δ = δ′, β = β′, γ = γ′, u′ = au + (1 − δ)b_u, and v′ = av − βb_u + (1 − γ)b_v.

See the Supplemental Material (Noor (2011)) for the proof.

For the question of when there exist functions (U, V) that satisfy the equations in the U-V representation, the relevant proofs in GP (2004) and Noor (2006) can be adapted to show that the equations in Definition 2.1 admit a unique continuous solution (U, V) when β = 0, that is, when the representation is either FT or DSC. More generally, however, a contraction mapping argument cannot be used because the U-V representation is not monotone in the appropriate sense.¹¹ Nevertheless, we can ensure the following statement:

¹¹Krusell, Kurusçu, and Smith (2002) were the first to note that such an issue arises for generalizations of GP (2004). The question of existence was left open.

THEOREM 4.3: For any continuous functions u, v and scalars δ, γ, β as in Definition 2.1, the equations (2.4)–(2.5) admit a continuous linear solution (U, V).

The proof of the theorem considers the space F × F of pairs of continuous linear functions on Δ endowed with the weak topology (induced by the norm dual of F × F). We identify a compact subset of this space and establish that the mapping defined by the equations in Definition 2.1 is continuous in the weak topology and is a self-map on the compact subset. We then invoke the Brouwer–Schauder–Tychonoff fixed point theorem, which states that a continuous self-map on a compact convex subset of a locally convex linear topological space has a nonempty set of fixed points.
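For the β = 0 cases (FT and DSC), where a unique continuous solution exists, the system (2.2), (2.4), (2.5) can also be solved by straightforward successive approximation on a small domain. The Python sketch below does this on a hypothetical finite, self-referential family of menus, in which each alternative is a degenerate lottery naming a consumption level and a continuation menu from the same family; this is far cruder than the space Z but enough to watch the recursion converge. All parameters and menus are illustrative assumptions.

```python
# Successive approximation for W of (2.2) with U, V as in (2.4)-(2.5),
# on a toy self-referential family of menus (an FT-style case, beta = 0).
u = {0: 0.0, 1: 1.0}                 # normative consumption utility
v = {0: 0.0, 1: 3.0}                 # temptation consumption utility
delta, beta, gamma = 0.9, 0.0, 0.4

menus = {
    "commit":   [(0, "commit")],                    # consume 0 forever
    "flexible": [(0, "flexible"), (1, "flexible")],
}

W = {m: 0.0 for m in menus}          # current guess for W(x)
M = {m: 0.0 for m in menus}          # current guess for max_{eta in x} V(eta)

def U_of(c, x):                      # (2.4) for a degenerate lottery (c, x)
    return u[c] + delta * W[x]

def V_of(c, x):                      # (2.5) for a degenerate lottery (c, x)
    return v[c] + beta * W[x] + gamma * M[x]

for _ in range(500):                 # iterate; a contraction when beta = 0
    W_new = {m: max(U_of(c, x) + V_of(c, x) for c, x in alts)
                - max(V_of(c, x) for c, x in alts)
             for m, alts in menus.items()}
    M_new = {m: max(V_of(c, x) for c, x in alts) for m, alts in menus.items()}
    W, M = W_new, M_new

print(W["commit"], W["flexible"])    # ~0.0 and ~10.0 (= u(1)/(1 - delta))
```

In the "flexible" menu the agent consumes the tempting good in every period but bears no self-control cost, since the chosen alternative is also the most tempting one, so W coincides with the normative value of that path; in "commit" he consumes the untempting good forever.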
4.2. Special Cases

We characterize some subclasses that are of interest. The proofs for Theorems 4.4–4.6 are provided in the Supplemental Material.

THEOREM 4.4: A nondegenerate choice correspondence C admits a U-V representation (u, v, δ, β, γ) with β ≥ 0 if and only if it satisfies Axioms 1–9 and

Weak Menu-Temptation Stationarity: If x^{+t} ≻ y^{+t} for all large t, then {x^{+2}}^{+t} ≻ {x^{+2}, y^{+2}}^{+t} for all large t ⇒ {x^{+1}}^{+t} ≻ {x^{+1}, y^{+1}}^{+t} for all large t.

The existence of a preference for commitment suggests the existence of temptation within a menu. Thus, the Weak Menu-Temptation Stationarity axiom states that, at any t, if y tempts the agent when it is available at t + 2, then it tempts him also when it is available at t + 1. That is, bringing a tempting menu closer to the present does not turn it into an untempting menu. This property is the behavioral meaning of the restriction β ≥ 0 in the representation.

Observe that the axiom allows for the possibility that pushing a tempting menu into the future may not just make it easier to resist, but may make it altogether untempting. Consider the implication of a strong version of the axiom that rules this out, that is, that requires that a menu that tempts when it is t periods away also tempts when it is t′ periods away, for any t, t′ > 0.

THEOREM 4.5: A nondegenerate choice correspondence C admits either an FT representation or a QSC representation if and only if it satisfies Axioms 1–9 and

Strong Menu-Temptation Stationarity: If x^{+t} ≻ y^{+t} for all large t, then {x^{+2}}^{+t} ≻ {x^{+2}, y^{+2}}^{+t} for all large t ⟺ {x^{+1}}^{+t} ≻ {x^{+1}, y^{+1}}^{+t} for all large t.

This result characterizes the union of the QSC and FT classes of models, thereby identifying the behavior that is common to them. The last result determines precisely what is different between them.¹² Note that in Theorem 4.6, when C exhibits τ ≤ 1, Axiom 9 can be dropped from the hypothesis and Axioms 5–7 can be weakened by restricting their statements to hold only for t = 1.

¹²It is easily shown by invoking GP (2004, Theorem 1) that if C admits a QSC representation, then it admits a DSC representation if and only if C satisfies the Temptation by Immediate Consumption axiom in GP (2004). In this setting, the axiom states that if η and υ both induce the same marginal distribution on C, and if {μ}^{+1} ≻ {μ, η}^{+1} ≻ {η}^{+1} and {μ}^{+1} ≻ {μ, υ}^{+1} ≻ {υ}, then {μ, η}^{+1} ≈ {μ, υ}^{+1}.
THEOREM 4.6: The following statements are equivalent for a nondegenerate choice correspondence C that satisfies Axioms 1–9:
(i) C admits a QSC representation.
(ii) C satisfies Menus Do Not Tempt: For all t > 0, {x^{+1}, y^{+1}}^{+t} ≽ {x^{+1}}^{+t}.
(iii) τ ≤ 1.

Thus, the key behaviors associated with QSC agents are that they exhibit no preference for delayed commitment (as captured by the Menus Do Not Tempt axiom) and that the switching point of preference reversals is at most one period away. The latter is intuitive for QSC agents: if μ is overwhelmingly tempting and μ ≻ η, and if only immediate consumption tempts, then a single period delay in both rewards removes temptation and leads to a reversal, μ^{+1} ≺ η^{+1}.

The result tells us that in the presence of Axioms 1–9 and Strong Menu-Temptation Stationarity, observing one instance of a preference for delayed commitment or one instance of a preference reversal with a switching point at two or more periods delay is equivalent to the existence of an FT representation. Note that this axiomatization of the FT model differs significantly from the axiomatization in Noor (2006), even after accounting for the different primitives: most notably, unlike any of the axioms we impose here, the key axiom in Noor (2006) (called Temptation Stationarity) explicitly identifies the source of menu temptation as being temptation within a menu. Our Strong Menu-Temptation Stationarity corresponds to a substantial weakening of that key axiom: it imposes stationarity of menu temptation without identifying the source of menu temptation.

4.3. Foundations for Normative Preference

According to our interpretation of the representation, the U-V agent behaves as if he struggles with two preferences, represented by the normative utility U and temptation utility V. Theorem 4.2 assures us that these functions represent unique preferences, and thus there is a unique normative and temptation preference associated with the model. In this subsection we identify the behavioral underpinnings of the normative preference.

Derive a sequence of preference relations {≽_t}_{t=0}^∞ over Δ, where for each t ≥ 0 and μ, η ∈ Δ,

$$\mu \succeq_{t} \eta \quad\Longleftrightarrow\quad \mu^{+t} \in C(\{\mu^{+t}, \eta^{+t}\}).$$
Thus, the preference ≽_t ranks μ and η when both rewards are to be received t periods later. Define the preference ≽* over Δ by¹³

$$\text{(4.1)}\qquad \succeq^{*} \;\equiv\; \lim_{t\to\infty} \succeq_{t}.$$
¹³To be formal, say that a binary relation B on Δ is nonempty if μBη for some μ, η ∈ Δ. Following Hildenbrand (1974), identify any nonempty continuous binary relation on Δ with its graph, a nonempty compact subset of Δ × Δ. Thus, the space of nonempty continuous preferences on Δ can be identified with P = K(Δ × Δ), the space of nonempty compact subsets of Δ × Δ endowed with the Hausdorff metric topology. See Appendix B for details. Hence {≽_t}_{t=0}^∞ is a sequence in P and its limit in the Hausdorff metric topology is ≽* ≡ lim_{t→∞} ≽_t.
Refer to ≽* as the normative preference derived from C. It captures the agent's ranking of alternatives as the alternatives are distanced from him. The next theorem tells us that ≽* is the preference represented by the normative utility U.

THEOREM 4.7: If C admits a U-V representation with normative utility U and if ≽* is the normative preference derived from C, then U represents ≽*.

Thus, ≽* constitutes the empirical foundations for the ranking underlying U. The justification for referring to ≽* (resp. U) as normative preference (resp. normative utility) lies in the intuitive idea that the influence of temptation on choice is reduced when the agent is separated from the consequences of his choices, and consequently such choices are guided by the agent's view of what he should do.

Recall that for the U-V agent, choice maximizes U + V. In a sense, V fills the gap between choice C and normative preference ≽*. In our model, choice is determined by the normative perspective and visceral influences, and thus the gap between C and ≽* is naturally attributed to temptation. This is the justification for referring to V as temptation utility.

5. PERSPECTIVE AND RELATED LITERATURE

Our model is an extension of GP (2001) to an infinite horizon. In this section, we compare it to existing infinite horizon extensions, namely GP (2004), Krusell, Kurusçu, and Smith (2002), and Noor (2006). These models are special cases of the following class of models:

DEFINITION 5.1—W Representation: A preference ≽ over Z admits a W representation if it admits a GP representation

$$W(x) = \max_{\mu\in x}\,\bigl[U(\mu) + V(\mu)\bigr] - \max_{\eta\in x} V(\eta)$$

such that the functions U, V : Δ → R take the form

$$U(\mu) = \int_{C\times Z}\bigl(u(c) + \delta W(x)\bigr)\,d\mu(c, x)$$

and

$$V(\mu) = \int_{C\times Z}\bigl(v(c) + \mathcal{V}(x)\bigr)\,d\mu(c, x),$$
where δ ∈ (0, 1), u, v : C → R are continuous, and 𝒱 : Z → R is continuous and linear.

Unlike the U-V model, where the primitive is a revealed preference over Δ, the primitive of a W model is a preference over the set Z of menus. The functional forms for U and V are similar in the two models, except that in the W model, the temptation utility 𝒱 from continuation menus lacks structure. The Dynamic Self-Control (DSC) model of GP (2004) takes 𝒱 = 0, the (nonaxiomatic) generalization of DSC by Krusell, Kurusçu, and Smith (2002) takes 𝒱 = βW for β ≥ 0, and the Future Temptation model of Noor (2006) takes 𝒱(x) = γ max_{η∈x} V(η) for 0 < γ < δ. These functional forms for 𝒱 were discussed at the end of Section 2.

A benchmark comparison between the U-V and W models can be provided in terms of the following time line:
$$\underset{t=0}{\bullet}\;\xrightarrow{\;x \,\succeq\, y\;}\;\underset{t=1}{\bullet}\;\xrightarrow{\;(c,\,z)\,\in\,x\;}\;\underset{t=2}{\bullet}\;\xrightarrow{\;(c',\,z')\,\in\,z\;}\;\cdots$$
The W model describes choice in a period 0, where the preference ≽ is used to guide choice of a menu x. The representation implicitly tells a story about subsequent choice: the selected menu x is faced in period 1 and a choice (c, z) ∈ x is made by maximizing U + V; immediate consumption c and a continuation menu z is obtained, and a choice (c′, z′) ∈ z is made in period 2 by maximizing U + V, and so forth. While these period t > 0 choices are derived from the interpretation of the representation W, the U-V model can be understood to be explicitly of period t > 0 choices: it describes an agent who maximizes U + V in any menu at any t > 0. In a sense, our model describes the revealed preference implications for period t > 0 choice of some W model. Indeed, observe that a W appears as a component in the U-V representation. This can be interpreted as a representation of preferences in a hypothetical period 0 where the agent ranks menus prior to the experience of temptation.

Comparison of Dynamic Behavior

A peculiar feature of the W model is a generic asymmetry between the agent's ex ante and ex post ranking of menus. Observe that period t > 0 choice from {(c, x), (c, y)} maximizes U + V and, in particular,

$$(U + V)(c, x) \ge (U + V)(c, y) \quad\Longleftrightarrow\quad W(x) + \tfrac{1}{\delta}\,\mathcal{V}(x) \;\ge\; W(y) + \tfrac{1}{\delta}\,\mathcal{V}(y).$$

That is, menus are ranked in period t > 0 by W + (1/δ)𝒱. On the other hand, in period 0, the preference ≽ ranks menus by W.
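For completeness, here is the algebra behind the display above, a routine verification using the functional forms of Definition 5.1 (not spelled out in the original text):

```latex
\begin{align*}
(U+V)(c,x) \ge (U+V)(c,y)
  &\iff u(c) + \delta W(x) + v(c) + \mathcal{V}(x)
        \ge u(c) + \delta W(y) + v(c) + \mathcal{V}(y) \\
  &\iff \delta W(x) + \mathcal{V}(x) \ge \delta W(y) + \mathcal{V}(y) \\
  &\iff W(x) + \tfrac{1}{\delta}\,\mathcal{V}(x)
        \ge W(y) + \tfrac{1}{\delta}\,\mathcal{V}(y).
\end{align*}
```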
Evidently, this asymmetry arises due to the menu temptation 𝒱 experienced in periods t > 0, which is seemingly absent in period 0. That is, it appears that ≽ is not subject to the same menu temptation that it identifies for subsequent periods. Indeed, the primitive of the W model appears to describe behavior in a period 0 that is special relative to all subsequent periods t > 0.¹⁴

¹⁴The asymmetry does not exist when 𝒱 is constant or a positive affine transformation of W, that is, 𝒱 = βW for β ≥ 0. Observe that this corresponds to GP (2004) and Krusell, Kurusçu, and Smith (2002). Thus, period 0 is not special only when menus do not tempt.

Our motivation for not pursuing a W model of menu temptation is our view that this "special period 0" feature is a problem from the perspective of foundations. If we take the common interpretation in the literature that period 0 behavior reflects how the agent ranks menus if he were not tempted by menus,¹⁵ then it is not obvious how to answer questions such as: How is such a ranking identified? Is it even observable? More generally, the special period 0 feature is problematic because it is not clear how to justify any particular interpretation of it. Yet the interpretation of the representation depends heavily on how the special period 0 is interpreted. If period 0 reflects behavior in the absence of temptation, then the representation suggests what the agent's normative and temptation preferences and anticipated choices are, but if menu temptation contaminates period 0 behavior, then it is less clear what the components of the representation reflect.

¹⁵The literature typically regards period 0 as reflecting the agent's preferences in a "cold" state.

The special period 0 is a consequence of relying on a preference for commitment to identify temptation although the agent being studied is subject to menu temptation (recall the discussion in the Introduction). By relying on an alternative method of identifying temptation, we avoid the asymmetry between behavior in period 0 and all subsequent periods. Indeed, our agent's behavior is time-invariant.

Comparison of Static Behavior

A distinguishing property of the ranking of menus in the W model is the Stationarity axiom: for all x, y ∈ Z,
$$\text{(5.1)}\qquad x \succeq y \quad\Longleftrightarrow\quad \{(c, x)\} \succeq \{(c, y)\}.$$
This adapts the standard stationarity condition (Koopmans (1960)) to a preference-over-menus setting and describes a property of choices from the perspective of one point in time. This is a key property of the W model, and thus also of GP (2004), Krusell, Kurusçu, and Smith (2002), and Noor (2006).¹⁶

¹⁶To connect with our observation about the special period 0, it should be noted that while Stationarity is satisfied by period 0 behavior, it may be violated in all subsequent periods t > 0. Intuitively, the manner in which the menu-temptation component 𝒱(x^{+d}) changes with delay d for any given menu x determines whether Stationarity holds in periods t > 0, but this component is altogether absent in period 0.
In contrast, the U-V model violates Stationarity: given the distancing hypothesis, menu temptation may cause the ranking of tomorrow's menus to differ from the ranking of more distant menus. Moreover, there may never exist a minimum distance after which the rankings always agree for all pairs of menus. In the absence of Stationarity, the job of relating the ranking of menus at different delays is done by the Reversal and Menu-Reversal axioms, and indirectly also by the Sophistication axiom.

Stationarity enables a relatively straightforward extension of GP (2001) to an infinite horizon. Roughly, if W₁ is a linear function representing ≽ that can be written as W₁({(c, x)}) = u(c) + W₂(x), where W₂ is also linear, then Stationarity implies that W₁ and W₂ are ordinally equivalent, and thus by linearity, they are in fact cardinally equivalent. Ignoring constants, we can thus write W₁({(c, x)}) = u(c) + δW₁(x) for some δ > 0, a key step in establishing an infinite horizon extension of GP's representation. Since our model violates Stationarity, we need to find an alternative means of establishing our representation. Our idea is to make use of the distancing hypothesis; see the next section.

Comparison of Primitives

Our primitive consists of a choice correspondence over menus that satisfies WARP. Thus, our primitive is effectively a complete and transitive revealed preference relation ≽ over Δ = Δ(C × Z). Observe that Z can be identified with a subdomain of Δ, and thus the restriction ≽|_Z is a preference over menus Z. The induced representation for ≽|_Z is
$$\text{(5.2)}\qquad x \;\longmapsto\; W(x) + \frac{\gamma}{\delta + \beta}\,\max_{\eta\in x} V(\eta),$$
where the components have the form in Definition 2.1. The representation suggests that the agent's ranking of menus is a compromise between the normative evaluation (reflected in W(x)) and the menu-temptation evaluation (reflected in max_{η∈x} V(η)). Instead of taking dynamic choice as a primitive, we could have axiomatized a single preference over menus that admits this representation (5.2).¹⁷

¹⁷We thank an anonymous referee for pointing this out.

The resulting model would lie squarely within the literature on menu choice (Dekel, Lipman, and Rustichini (2009), GP (2001)) to which the W model also belongs. While this observation reveals that dynamic choice is, strictly speaking, not necessary to write down or axiomatize a model of tempting menus, we argue that such a primitive should be adopted:
• As is, (5.2) represents static choice, and, therefore, any dynamic (or ex post) behavior derived from the model is a mere interpretation of the
TEMPTATION AND REVEALED PREFERENCE
619
sentation, based on an assumption that the agent is sophisticated and his accurately anticipated ex post choices affect his ranking of menus in a particular way. Dynamic choice and sophistication are notions of central interest, but they lack empirical foundations when the primitive consists of a preference over menus—this problem is common to the literature on menu choice, including Dekel, Lipman, and Rustichini (2009) and GP (2001). In our model, dynamic choice behavior is the primitive data and is not derived from the representation, and sophistication is a refutable hypothesis rather than an assumption. Thus, our use of dynamic choice as a primitive permits us to fill the gaps that remain when a preference over menus is taken as the primitive. See Section 6.1 below for a general result on this. • More pertinent to menu temptation is the observation that without dynamic choice, our use of the distancing hypothesis would not be defensible. The agent exhibits reversals, but for all we know, these may be driven by anticipated preference shocks. Due to possibly time-variant choices, the distancing hypothesis would lose its appeal: When behavior is time-varying, we have to allow that the agent’s normative evaluations may be time-varying, and as a result it is harder to justify identifying normative preference over current alternatives from the ranking of distant alternatives.18 Being able to do so is important because the distancing hypothesis is the backbone of our model: it is needed to identify temptation and, in particular, since our agent may experience temptation even if he does not exhibit a preference for commitment, the hypothesis is required to identify menu temptation. • Finally, dynamic choice is a very useful technical tool. Since our primitive C simultaneously describes choice of (continuation) menus and choice from any menu, restrictions on C have implications for both. As a result, our model is characterized with relatively simple restrictions despite the very rich structure on the representation. If we were to axiomatize (5.2), restrictions could only be imposed on period 0 preference over menus. Indeed, the restrictions would need to be strong enough to get the rich structure on (implied) choice from menus. 6. PROOF OUTLINE FOR THEOREM 4.1 Before presenting the proof of Theorem 4.1, we first describe a lemma. While this simplifies the exposition of one step in the proof, the lemma is of independent interest as it shows how foundations can be provided for the interpretation of the GP (2001) representation. 18 The issue is not resolved if we suppose that the primitive for (5.2) is a dynamic but timeinvariant preference over menus. This is because ex post choice involves not just menus, but rather (c x) pairs, and even if the ranking of menus (obtained by fixing c and varying x) is timeinvariant, ex post choice of (c x) pairs can still be time-varying.
620
JAWWAD NOOR
6.1. A Result: Foundations for Sophistication Consider the two-period model of GP (2001). As noted earlier, the interpretation of GP’s representation (2.2) assumes that the agent is sophisticated in the sense of correctly anticipating ex post choices. However, there is nothing in the primitives that can justify this “sophistication assumption.”19 A lemma used in the proof of Theorem 4.1 provides the missing foundations for this assumption in GP’s model. We state it below as Theorem 6.1. Extend GP’s model to include ex post choice. Let C be some compact metric space and let Z2 = K(Δ(C)) be the set of nonempty compact menus. Take a preference over Z2 and a closed-valued choice correspondence C : Z2 Δ(C), where, for all x ∈ Z, C (x) = φ, and C (x) ⊂ x. The preference captures period 0 preference over menus, and the choice correspondence C captures period 1 choice from any menu. Say that is nontrivial if there exist μ η such that {μ η} {η}. The following is a representation theorem for our “extended GP model.” THEOREM 6.1: Consider a nontrivial preference and a choice correspondence C over Z2 such that admits a GP representation (U V ) and C is rationalized by a von Neumann–Morgenstern (vNM) preference. Then C admits the representation
C (x) = arg max{U(μ) + V (μ)} μ∈x
x ∈ Z2
if and only if, for any μ η such that {μ} {η}, {μ η} {η}
⇐⇒
C ({μ η}) = {μ}
Thus, the exhaustive testable implications of GP’s model for choice in both periods 0 and 1 are given by GP’s axioms for , rationalizability by a vNM preference for C , and the noted joint restriction (the counterpart of the Sophistication axiom) on and C . Indeed, these behaviors imply that the agent behaves as in the interpretation of GP’s representation: period 1 choice maximizes U + V . We thus obtain as a theorem what GP proposed as an interpretation. The result can be used directly in conjunction with the axiomatizations of GP (2004), Krusell, Kurusçu, and Smith (2002), and Noor (2006) to provide dynamic extensions of these models. However, while this poses no issue for the extension of models without menu temptation (namely GP (2004) and Krusell, Kurusçu, and Smith (2002)), extensions of models with menu temptation will still possess the problematic “special period 0” property noted in the previous 19 Indeed, as recently demonstrated by Dekel and Lipman (2007), GP’s axioms for a preference over menus also characterize another model with very different implications for ex post choice.
TEMPTATION AND REVEALED PREFERENCE
621
section. Given the question of what period 0 behavior reflects and whether it corresponds to any observed behavior, the question of what constitutes the empirical foundations for such models would therefore remain. 6.2. Proof Outline for Main Theorem The proof of Theorem 4.1 shows the following three broad steps: (i) There is a preference over menus Z with a W representation (as defined in Section 5) that can be derived from C . (ii) Temptation utility over menus V in this representation has the desired form (2.6). (iii) The ex post choice suggested by the W representation is exactly the original choice correspondence C . In the proof, steps (ii) and (iii) are completed simultaneously in the final lemma. The sequence of preferences {t }∞ t=0 defined in Section 4.3 is derived from C and it produces a normative preference ∗ over Δ via (4.1). Existence of ∗ is ensured by WARP, Continuity, and Reversal alone. Roughly speaking, the single-reversal property underlying Reversal implies increased agreement between t and t+1 as t grows, and gives rise to convergence in the limit. The limit preference ∗ is complete, transitive, and continuous. A connection with C that is exploited below is given by (6.1)
μ ∗ η
⇒
μ >t η for all large t
The candidate preference over menus Z is defined as the induced normative preference over menus: for all x y ∈ Z, xy
⇐⇒
x+1 ∗ y +1
Axioms 1–4 and Set-Betweenness imply that satisfies the GP (2001) axioms and thus admits a GP representation (2.2) with some U V . This together with Axiom 5 implies additive separability of U V . Reversal implies that satisfies the Stationarity axiom (5.1); roughly speaking, Reversal implies that there are no reversals in the limit. Under these conditions, Lemma D.5 shows that is a W model (Definition 5.1).20 20
The proof in an earlier version of this paper was relatively simpler and proceeded from this point as follows: additional axioms imposed further desired structure on W , and one axiom connected C with in a way that Theorem 6.1 could be invoked directly to complete the proof. The axiom connecting C with was unattractive since is a limit ranking. The current proof exploits the observation discussed next in the text: specifically, an intimate connection between C and already exists due to (6.1) and Sophistication. This allows us to axiomatize the representation with fewer and simpler axioms. See also the discussion after Theorem 4.1.
622
JAWWAD NOOR
At this point, we also obtain a key “partial rationalizability” result: Lemma D.6 shows that there is ξ ≥ 1 such that for all x
C (x) = arg max{ξU(μ) + V (μ)} μ∈x
The argument makes use of (6.1). This implies that if {μ η} {η} (in which case {μ} {η} also holds), then {μ η}+t > {η}+t and {μ}+t > {η}+t for all large t. By Sophistication, μ > η follows. That is, whenever the limit W agent normatively prefers μ over η and anticipates choosing μ over η, then our agent chooses μ over η. After exploiting Continuity, this generates the statement U(μ) ≥ U(η)
⇒
and
U(μ) + V (μ) ≥ U(η) + V (η)
μ η
Using Harsanyi’s aggregation theorem, we obtain the result. Full rationalizability (ξ = 1) is obtained only at the end of the proof. The partial rationalizability result is a crucial step because it enables the axioms on C to directly translate into restrictions on U and V . The remainder of the proof relies heavily on this. The first step (Lemma D.7) toward obtaining the functional form for V exploits the fact that the ranking of alternatives of the form x+1 has two different cardinally equivalent representations, specifically (Rep. A) and (Rep. B) below. To derive (Rep. A), begin by noting that Set-Betweenness implies that the ranking admits a GP representation:
x → max{U(μ) + V (μ)} − max V (η) μ∈x
η∈x
By Theorem 6.1 (presented in the previous subsection), Sophistication implies
+ V represents and thus by the partial rationalizability result, it is that U cardinally equivalent to ξU + V . Moreover, by Menu Reversal (ii), whenever the agent exhibits a preference for commitment {μ}+1 > {μ η}+1 , then so does the limit preference {μ} {μ η}. This fact is used to show that V = αU + α V . Thus, the ranking of period 1 menus is represented by (Rep. A)
x → max{ξU(μ) + V (μ)} − max{αU(η) + α V (η)} μ∈x
η∈x
The derivation of (Rep. B) is based on the fact that the partial rationalizability result implies that this ranking is also represented by x → ξU(x+1 ) + V (x+1 ). In fact, given the functional form of U V in the W representation, ξU(x+1 ) + V (x+1 ) is ordinally equivalent to ξδW (x) + V (x). Therefore, (Rep. A) is ordinally equivalent to (Rep. B)
x → ξδW (x) + V (x)
623
TEMPTATION AND REVEALED PREFERENCE
But linearity of (Rep. A) and (Rep. B) implies cardinal equivalence, and so, by writing down the affine transformation and rearranging terms, we find that V must take the following form with some λ > 0 (ignoring constants): V (x) = λ max{ξU(μ) + V (μ)} − ξδ max{U(μ) + V (μ)} μ∈x
μ∈x
+ ξδ max V (η) − max{αU(μ) + α V (μ)} η∈x
μ∈x
x ∈ Z
This yields a functional form for V . The remainder of the proof shows that by the partial rationalizability result, Reversal, and Set-Betweenness, it must be that V reduces to the desired form (for some appropriately defined β γ). Menu Reversal (i) is used in the process to rule out one additional form for V that is consistent with the other axioms but nonintuitive. Reversal helps to establish 0 ≤ γ ≤ δ. Finally, ξ = 1 gives rise to a violation of Set-Betweenness and, therefore, we must have ξ = 1. At this point, full rationalizability obtains and the proof is complete. 6.3. Necessity and Summarizing Comments Reversal This ensures the existence of a limit of a sequence of preferences. To see that Reversal is implied, note that in the model, for all μ η, and t, μ+t η+t
U(μ) + Dt V (μ) ≥ U(η) + Dt V (η) −1 t where Dt = 1+β/δ[(γ/δ) , adopting the convention that i=0 ( γδ )i = 0. This can t−1 (γ/δ)i ] (6.2)
⇐⇒
i=0
be established with a proof by induction. The model requires 0 ≤ γ ≤ δ and β > γ − δ, and these imply that Dt ≥ 0, D0 = 1, and Dt 0. Thus, whenever U and V strictly disagree on the ranking of μ η, delaying the two alternatives leads to at most one reversal in the ranking. When there is no strict disagreement, then there is no reversal. Set-Betweenness
As one would expect, this axiom is responsible for the GP form for the normative preference over menus. However, this feature could be achieved by requiring merely that Set-Betweenness hold for sufficiently distant menus. Imposing Set-Betweenness also for the ranking of more immediate menus places a lot of structure on choices and is largely responsible for a functional form for V that is not more general. The necessity of Set-Betweenness is established by using (6.2) to see that for any x y and t > 0, (6.3)
x+t y +t
⇐⇒
W (x) +
Dt Dt V (y) ≥ W (y) + V (η) δ δ
624
JAWWAD NOOR
Inserting the functional form for W and V , and defining at := (1 + > 0, we obtain and bt := 1 + Dt (β−γ) δ x+t y +t ⇐⇒
Dt β δ
)>0
max Ut + Vt − max Vt ≥ max Ut + Vt − max Vt μ∈x
η∈x
μ∈y
η∈y
where Ut (μ) = U(μ) + (1 − batt )V (μ) and Vt (μ) = batt V (μ). That is, the ranking of period t menus admits a GP (2001) representation and, hence, SetBetweenness is implied. Sophistication This is our only dynamic axiom. It plays a part in establishing the partial rationalizability result (Lemma D.6). Theorem 6.1 (which makes use of Sophistication) is used to get a connection between and the ranking of menus, and this eventually plays a part in getting the functional form for V . The proof for necessity uses the observation derived above for Set-Betweenness and invokes Theorem 6.1. Menu Reversal If Menu Reversal (i) is dropped, then Theorem 4.1 generalizes to permit V to also take an additional possible form. This additional form is nonintuitive, as a violation of the very intuitive Menu Reversal (i) axiom would suggest. Menu Reversal (ii) ensures that an immediate preference for commitment {μ}+1 > {μ η}+1 implies a preference for commitment in the limit {μ} {μ η}. This ensures that the functional form of V includes only functions that are linear combinations of U and V (Set-Betweenness then does the rest of the job.) One may ask why Menu Reversal (ii) requires a statement about the neighborhood of a pair of menus. The answer is that, in general, for any μ η, a strict preference μ+t > η+t for all t does not imply strict preference in the limit. The nature of the distant preference between neighboring pairs of rewards plays a role in how μ and η are ranked in the limit (Lemma C.3(b)). Thus, a statement about neighborhood pairs of rewards is required so as to directly make a statement about the limit preferences. The necessity of Menu Reversal is verified in the Supplemental Material (Noor (2011)). 7. CONCLUDING REMARKS We conclude with a comment on welfare. The standard revealed preference criterion suggests that welfare policy should be guided by . The concept of normative preference lends itself as an alternative welfare criterion. The agent’s view of what he should or should not choose is his own definition of his welfare. Thus, his normative preference is a subjective welfare criterion. If an
TEMPTATION AND REVEALED PREFERENCE
625
analyst believes that this is the appropriate criterion for welfare policy and if he takes the position that distancing is an appropriate tool (that serves as a veil of ignorance of sorts (Rawls (1971))) for determining normative preference, then the ranking ∗ defined in (4.1) would guide welfare analysis in our model while the revealed preference criterion would be viewed as contaminated with temptation (Noor (2008b)). APPENDIX A: PROOF OF THEOREM 6.1 Prove sufficiency of the axioms. By GP, is represented by (2.2) for some continuous linear functions U V : Δ → R. The first lemma collects some simple facts about the representation (Noor (2006)) and the second lemma establishes the result. LEMMA A.1: For all x y the following statements hold: (a) x x ∪ y ⇐⇒ maxy V > maxx V and W (x) > W (y) (b) x ∪ y y ⇐⇒ maxx U + V > maxy U + V and W (x) > W (y) (c) x x ∪ y y ⇐⇒ maxx U + V > maxy U + V and maxy V > maxx V LEMMA A.2: μ η ⇐⇒ U(μ) + V (μ) ≥ U(η) + V (η) PROOF: By hypothesis there exists ρ ν such that {ρ ν} {ν}. By SetBetweenness, {ρ} {ν}, and by Lemma A.1(b) and Sophistication, ρ > ν and U(ρ) + V (ρ) > U(ν) + V (ν). These observations will be used to prove the result.
⇒: Suppose that μ η. If {η} {μ}, then by Sophistication, {η μ} {μ} and it follows by Lemma A.1(b) that U(μ) + V (μ) ≥ U(η) + V (η). If, on the other hand, {μ} {η}, then by Set-Independence and Independence, {μαρ} {ηαν}
and
μαρ > ηαν
for all α ∈ (0 1). By Sophistication and Lemma A.1(b), U(μαρ) + V (μαρ) > U(ηαν) + V (ηαν) for all α ∈ (0 1). By continuity of U + V , it follows that U(μ) + V (μ) ≥ U(η) + V (η), as desired. ⇐ : Suppose that U(μ) + V (μ) ≥ U(η) + V (η). If {η} {μ}, then by Lemma A.1(b), {η μ} {μ} and by Sophistication, μ η. If, on the other hand, {μ} {η}, then for all α ∈ (0 1), {μαρ} {ηαν}
and Ut (μαρ) + Vt (μαρ) > Ut (ηαν) + Vt (ηαν)
By Lemma A.1(b) and Sophistication, μαρ > ηαν for all α ∈ (0 1). Thus, continuity of implies μ η. Q.E.D.
626
JAWWAD NOOR
APPENDIX B: TOPOLOGY ON P Since Δ is compact and metrizable, then Δ × Δ is compact and metrizable under the product topology. Let d be a metric that generates the topology on Δ × Δ Denote the space of nonempty compact subsets of Δ × Δ by P . For any A B ∈ P , let d(a B) = infb∈B d(a b) and d(b A) = infa∈A d(b a). The Hausdorff metric hd induced by d is defined by hd (A B) = max{sup d(a B) sup d(b A)}, for all A B ∈ P . An ε-ball centered at A is defined by B(A ε) = {B : hd (A B) < ε} The Hausdorff metric topology on P is the topology for which the collection of balls {B(A ε)}A∈P ε∈(0∞) is a base. View the set P as the space of nonempty and continuous binary relations on Δ by identifying any such binary relation B on Δ with Γ (B), the graph of B: Γ (B) = {(μ η) ∈ Δ × Δ : μBη} If B is a weak order (complete and transitive binary) relation, then Γ (B) is nonempty. Given that Δ is a connected separable space, if B is also continuous, then Γ (B) is closed and hence compact. Thus, the set of continuous weak orders on Δ is a subset of P . By Aliprantis and Border (1994, Theorem 3.71(3)), compactness of Δ × Δ implies that P is compact. Also, under compactness of Δ × Δ, Γ (B) is the Hausdorff metric limit of a sequence {Γ (Bn )} ⊂ P if and only if Γ (B) is the “closed limit” of {Γ (Bn )} (Aliprantis and Border (1994, Theorem 3.79)). To define the closed limit of a sequence {Γ (Bn )}, first define the topological limit superior Ls Γ (Bn ) := {a ∈ Δ × Δ : for every neighborhood V of a V ∩ Γ (Bn ) = φ for infinitely many n} and topological limit inferior Li Γ (Bn ) := {a ∈ Δ × Δ : for every neighborhood V of a V ∩ Γ (Bn ) = φ for all but a finite number of n}. The sequence {Γ (Bn )} converges to a closed limit Γ (B) if Γ (B) = Ls Γ (Bn ) = Li Γ (Bn ). APPENDIX C: NORMATIVE PREFERENCE This appendix collects some results from Noor (2008a). Take as given a set of preference relations {t }∞ t=0 on some set Δ of lotteries, defined over some compact metric space, that is endowed with the weak convergence topology. For each μ η, the preference t captures how the agent ranks the rewards μ η when they are to be received t periods later. Normative preference ∗ over Δ is defined by ∗ = limt→∞ t as in Section 4.3. Consider the following axioms on {t }. AXIOM A1—Order∗ : t is complete and transitive for all t. AXIOM A2—Continuity∗ : {η : μ t η} and {η : η t μ} are closed for all t. AXIOM A3—Reversal∗ : If μ
t η) for some t > t, then μ t
η (resp. μ >t
η) for all t
> t .
TEMPTATION AND REVEALED PREFERENCE
627
AXIOM A4—Independence∗ : μ >t η ⇒ μαν >t ηαν for all t. Define the function τ : Δ × Δ → R, which captures the time at which a reversal takes place for each (μ η), in the following way: First take any (μ η) ∈ Δ × Δ such that μ 0 η If μ ≈t η for all t or μ >t η for all t, then define τ(μ η) = 0. If there exists T such that μ 0 η and there exists T such that μ ≈t η for all t ≥ T , then define τ(μ η) = min{t : μ ≈t η}. Finally, to cover all the remaining cases, let τ(μ η) = τ(η μ) for all μ η. The following results are proved in Noor (2008a) (except for Lemma C.3(d)) and will be used later. LEMMA C.1: Suppose {t }∞ t=0 satisfies Axioms A1 and A3, and take any μ η such that μ 0 η. If τ(μ η) = 0, then μ ≈t η for all t or μ >t η for all t. If τ(μ η) > 0, then only one of the following statements holds: (a) μ >t η for t < τ(μ η) and μ t η for t < τ(μ η) and μ ≈t η for all t ≥ τ(μ η). (c) There is 0 ≤ T < τ(μ η) such that μ >t η for all t < T , μ ≈t η for all T ≤ t < τ(μ η), and μ
⇒ lim sup τ(μn ηn ) ≤ τ(μ η) n→∞
LEMMA C.3: (a) μ ∗ η ⇐⇒ [μ >τ(μη) η and (μ η) ∈ Ω]. (b) μ ∗ η ⇐⇒ there exists a sequence {(μn ηn )} that converges to (μ η) and μn τ(μn ηn ) ηn for all n (c) μ τ(μη) η ⇒ μ ∗ η (d) μ <0 η and μ >t η for some t ⇒ μ ∗ η PROOF: We prove part (d). Letting T = τ(μ η), we have μ >T η. Take any sequence {(μn ηn )} that converges to (μ η). By Continuity∗ , for sufficiently large n we have μn <0 ηn and μn >T ηn . It follows that τ(μn ηn ) ≤ T = τ(μ η) and thus (μ η) ∈ Ω. Invoke part (a) to get the result. Q.E.D.
628
JAWWAD NOOR
APPENDIX D: PROOF OF THEOREM 4.1 Necessity is readily or already verified for most of the axioms. The necessity of Menu Reversal is established in the Supplemental Material. The proof of sufficiency is divided into subsections. We start with a simple lemma. Define the choice correspondence C ∗ (· ) by C ∗ (x ) ≡ {μ ∈ x : μ η for all η ∈ x}. Say that a preference over Δ generates C (·) if C (x) = C ∗ (x ) for all x. LEMMA D.1: is the unique preference relation that generates C (·). Furthermore, satisfies the vNM axioms and there exists μ η such that {μ}+t > {μ η}+t for all large t > 0. PROOF: The first two assertions are standard. The last follows from the fact that by nondegeneracy of C , there exists μ η such that μ < η and μ+t > η+t for large t, and by Sophistication, Set-Betweenness, and transitivity, it is implied Q.E.D. that μ+t > {μ η}+t ≈ η+t for all large t. D.1. Properties of Normative Preference ∗ Define t over Δ for each t ≥ 0 and μ η ∈ Δ by μ t η
⇐⇒
μ+t η+t
Since C (·) satisfies WARP, Continuity, and Reversal, {t } satisfies the conditions in Lemma C.2. Thus, there is a well defined normative preference ∗ ≡ limt→∞ t over Δ and a well defined function τ : Δ × Δ → R as in Lemma C.1. LEMMA D.2: ∗ satisfies (i) the vNM axioms, (ii) a separability property— for all c c x x [(c x) 21 (c x ) ∼∗ (c x ) 12 (c x)]—and (iii) an indifference to timing property—for all μ η and α, [μ+1 αη+1 ∼∗ (μαη)+1 ]. PROOF: Lemma C.2 establishes that ∗ is complete, transitive, and continuous. Independence and Indifference to Timing imply that each t satisfies vNM independence. Thus by Lemma C.2, the limit ∗ satisfies vNM independence. By the Separability axiom, ( 12 (c x) + 12 (c x ))+t ≈ ( 12 (c x ) + 12 (c x))+t for all t, and hence by Lemma C.3(c), (c x) 12 (c x ) ∼∗ (c x ) 12 (c x). The Indifference to Timing property follows similarly from the Strong Indifference to Timing property of t proved in the Supplemental Material. Q.E.D. LEMMA D.3: ∗ satisfies a stationarity property: for any c μ η μ ∗ η
⇐⇒
(c μ) ∗ (c η)
PROOF: We prove this assertion in a series of steps.
TEMPTATION AND REVEALED PREFERENCE
629
Step 1. Show that for any c c μ η [(c μ) ∗ (c η) ⇐⇒ (c μ) ∗ (c η)] Suppose by way of contradiction that (c μ) ∗ (c η) and (c μ) ≺∗ (c η). Since ∗ satisfies the vNM axioms, (c μ) 21 (c η) ∗ (c η) 12 (c μ). But this contradicts the separability property in Lemma D.2. Step 2. Show that for any c c μ η [μ ∗ η ⇒ (c μ) ∗ (c η)] If μ ∗ η, then by Lemma C.3(b), there exists a sequence {(μn ηn )} such that (μn ηn ) → +1 (μ η) and μn τ(μn ηn ) ηn for all n. It follows by definitions21 that {(μ+1 n ηn )} +1 +1 +1 +1 +1 +1 is a sequence such that (μn ηn ) → (μ η ) and μn τ(μ+1 +1 η n for all n ηn ) n. But then by Lemma C.3(b), μ+1 ∗ η+1 Apply Step 1. Step 3. The result. By Step 1, it suffices to show that μ ∗ η ⇐⇒ (c μ) ∗ (c η). Define a binary relation ∗∗ over Δ by μ ∗∗ η ⇐⇒ (c μ) ∗ (c η). We need to show μ ∗ μ ⇐⇒ μ ∗∗ μ This follows from the facts that (i) μ ∗ η ⇒ μ ∗∗ η (Step 1), (ii) that ∗∗ satisfies the vNM axioms (by the first and last assertion in Lemma D.2), and (iii) that ∗∗ is nontrivial, that is, there exist ρ ν ∈ Δ such that ρ ∗∗ ν. To see nontriviality, recall that by Lemma D.1 there is x y such that x+t > (x ∪ y)+t for t > 1. Menu-Reversal (ii) implies τ(x+t (x ∪ y)+t ) = 0 and τ is continuous at (x+t (x ∪ y)+t ), and so (x+t (x ∪ y)+t ) ∈ Ω. By Lemma C.3(a), x+t ∗ (x ∪ y)+t . It follows that x+(t−1) ∗∗ (x ∪ y)+(t−1) , that is, Q.E.D. ∗∗ is nontrivial. D.2. Properties of Normative Menu Preference Consider the preference over Z defined by (D.1)
xy
⇐⇒
(c x) ∗ (c y)
for some c ∈ C. By Step 1 in the proof of Lemma D.3, the preference is invariant to the choice of c. This subsection shows that has a W representation (as defined in Section 5) and highlights a connection with C . LEMMA D.4: (i) {μ}+t > {μ η}+t ⇒ {μ} {μ η} (ii) There exists μ μ η η such that {μ} {μ η} and {μ } ∼ {μ η } {η }. PROOF: (i) If {μ}+t > {μ η}+t , then Menu Reversal (ii) implies τ({μ}+t {μ η}+t ) = 0 and τ is continuous at ({μ}+t {μ η}+t ), and so ({μ}+t {μ η}+t ) ∈ Ω. Hence, by Lemma C.3(a), {μ}+t ∗ {μ η}+t . Repeated application of Lemma D.3 yields {μ}+1 ∗ {μ η}+1 , and the result follows. (ii) By Lemma D.1 there is μ η such that {μ }+t > {μ η }+t for t ≥ 1. The above result implies {μ } {μ η }, thus establishing the first part of the 21 Specifically, it follows from the fact that μ τ(μη) η ⇐⇒ μ+1 τ(μ+1 η+1 ) η+1 . This holds by definition of τ if τ(μ η) = 0 (in which case τ(μ+1 η+1 ) = 0 as well); when τ(μ η) > 0, then note that μ+τ(μη) = (c μ)+(τ(μη)−1) and η+τ(μη) = (c η)+(τ(μη)−1) , in which case the assertion follows from the easily proven fact that τ(μ+1 η+1 ) = τ(μ η) − 1 whenever τ(μ η) > 0.
630
JAWWAD NOOR
statement. To show the second part, recall that by nondegeneracy of C there is μ η such that τ(A) = 0 for a neighborhood A of (μ η) and {μ}+1 ≈ {μ η}+1 > {η}+1 . By Sophistication, μ > η and since τ(A) = 0 for a neighborhood A of (μ η), Lemmas C.3(a) and D.3 imply {μ} {η}. By Menu Reversal (i), τ(μ η) = 0 and {μ}+1 ≈ {μ η}+1 implies {μ}+t ≈ {μ η}+t for all t. Lemma C.3(c) implies {μ}+1 ∼∗ {μ η}+1 and thus {μ} ∼ {μ η}. However, since {μ} {η}, transitivity implies {μ} ∼ {μ η} {η} as desired. Q.E.D. LEMMA D.5: admits the representation W (x) = max U(μ) + V (μ) − max V (η) μ∈x
where for μ ∈ Δ,
η∈x
(u(c) + δW (y)) dμ
U(μ) = C×Z
and
V (μ) =
(v(c) + V (y)) dμ
C×Z
and where δ ∈ (0 1), the functions u v are continuous, W V are continuous and linear, and U and V are nonconstant and affinely independent. PROOF: The result is obtained by confirming that satisfies Axioms 1–6 in Noor (2006). The analogs of the three vNM axioms for follow from Lemmas D.2 and D.3. The Stationarity property [z z ⇐⇒ {(c z)} {(c z )}] follows from Lemma D.3. The Supplemental Material confirms that (i) the Set-Betweenness property [x y ⇒ x x ∪ y y], (ii) a Strong Indifference to Timing condition, and (iii) a nondegeneracy property (there exist μ η such that {μ} {μ η} {η}) are also satisfied. Therefore, we can argue as in the proof of GP (2004, Theorem 1) to establish the existence of the desired representation (alternatively, see the proof of Noor (2006, Theorem 3.1)). The noted nondegeneracy property implies that U and V are nonconstant and affinely independent (apply Lemma A.1). Q.E.D. The next lemma establishes an important connection between U, V , and C . LEMMA D.6: There is ξ ≥ 1 such that C (x) = arg maxμ∈x {ξU(μ) + V (μ)} for all x PROOF: We know that C is rationalized by and that admits a nonconstant continuous linear representation—denote this by w : Δ → R. If w is a positive affine transformation of U + V , then the result holds trivially with ξ = 1.
631
TEMPTATION AND REVEALED PREFERENCE
Consider next the case where w is not a positive affine transformation of U +V . To ease notation, write UV (·) instead of U(·) + V (·). Since UV is nonconstant and linear, there exist μ η such that UV (μ ) > UV (η ) and w(η ) ≥ w(μ ).22 Show that it must be that U(η ) ≥ U(μ ). If U(μ ) > U(η ) holds by way of contradiction, then we have that U(μ ) > U(η ) and
UV (μ ) > UV (η )
⇒
{μ η } {η }
⇒
{μ η }+t > {η }+t
⇒
μ > η
for large t by Lemma C.3(c)
by Sophistication, contradicting w(η ) ≥ w(μ )
Therefore, for every μ η, −UV (η) > −UV (μ) and
w(η) ≥ w(μ)
⇒
U(η) ≥ U(μ)
If −UV (η) = −UV (μ) and w(η) ≥ w(μ), then linearity, continuity of the functions, and the fact that there exists μ η such that UV (μ ) > UV (η ) and w(η ) ≥ w(μ ) can be used to show U(η) ≥ U(μ). Therefore, the weak Pareto condition holds: −UV (η) ≥ −UV (μ) and
w(η) ≥ w(μ)
⇒
U(η) ≥ U(μ)
By Harsanyi’s aggregation theorem (Border (1985)), there is α β ≥ 0 such that U = −αUV + βw. Since U and V are affinely independent, β > 0. Moreover, by nondegeneracy, we know there is μ η such that μ < η and μ+t > η+t for all large t. Thus U(μ) > U(η) and w(μ) < w(η), which implies α > 0. Taking an affine transformation of w if necessary, we can write w = ξU + V for some ξ > 1. Q.E.D. D.3. Structure on V and Rationalizability LEMMA D.7: Under Axioms 1–9, there exist λ > 0 and β γ such that ξ(λ − δ) − β ≥ 0, λ − γ ≥ 0, and for all x ∈ Z, V (x) = λ max{ξU(μ) + V (μ)} μ∈x
− ξδ max{U(μ) + V (μ)} + ξδ max V (η) μ∈x
η∈x
− max{(ξ(λ − δ) − β)U(μ) + (λ − γ)V (μ)} − θ μ∈x
22 Since UV is not a positive affine transformation of w, there is μ η such that either UV (μ) > UV (η) and w(μ) ≤ w(η) or UV (μ) ≥ UV (η) and w(μ) < w(η). If the second case holds with UV (μ) = UV (η), then the nonconstancy and linearity of UV imply the existence of μ η that takes us into the first case (with strict inequalities). An aside: This kind of reasoning will be applied below to show the existence of μ η on whose ranking two nonconstant linear functions strictly disagree.
632
JAWWAD NOOR
PROOF: Consider the revealed preference over next-period menus, that is, the ranking 1 over Z defined by x 1 y ⇐⇒ x+1 y +1 . Since the ranking is complete, transitive, and continuous, and satisfies Independence and SetBetweenness, it must admit a GP representation (GP (2001))
(x) = max{U(μ) + V (μ)} − max V (η) W μ∈x
η∈x
+ V is cardinally equivaBy Sophistication, Lemma D.6, and Theorem 6.1, U lent to ξU + V . We claim also that V = αU + α V for α α ≥ 0. Use Harsanyi’s theorem to establish this. First suppose that U(μ) > U(η) and V (μ) ≥ V (η).
Then ξU(μ) + V (μ) > ξU(η) + V (η) and also U(μ) + V (μ) > U(η) + V (η).
Suppose by way of contradiction that V (η) > V (μ). Then the representation
implies {μ} >1 {μ η}. By Lemma D.4, this implies {μ} {μ η}, which in W turn implies V (μ) ≥ V (η), a contradiction. This establishes that U(μ) > U(η)
and
V (μ) ≥ V (η)
⇒
V (μ) ≥ V (η)
This holds also when U(μ) = U(η), as can be established by a limit argument that exploits linearity and the fact that, by Lemma D.4(ii), there exists μ η satisfying {μ } ∼ {μ η } {η }. Thus the weak Pareto property holds and Harsanyi’s theorem implies V = αU + α V + k for some α α ≥ 0, and k. The preceding shows that 1 admits the representation
(x) = max{ξU + V } − max{αU + α V } + k W x
x
Given Lemma D.6, we also know that the function ξδW + V is a representa are cardinally equivalent, and so redefining tion for 1 . Thus ξδW + V and W α α if necessary, there is λ > 0 and θ such that for all x ξδW (x) + V (x) = λ max{ξU(μ) + V (μ)} − max{αU(μ) + α V (μ)} − θ μ∈x
μ∈x
Rearranging yields V (x) = λ max{ξU(μ) + V (μ)} − ξδ max{U(μ) + V (μ)} μ∈x
μ∈x
+ ξδ max V (η) − max{αU(μ) + α V (μ)} − θ η∈x
μ∈x
Define β and γ such that ξ(λ − δ) − β = α ≥ 0 and λ − γ = α ≥ 0, and we get the desired representation. Q.E.D. Below we adopt the convention that
−1 i=0
= 0.
TEMPTATION AND REVEALED PREFERENCE
633
LEMMA D.8: For all x y t (D.2)
x+t y +t
⇐⇒
W (x) + Bt−1 V (x) ≥ W (y) + Bt−1 V (y)
where Bt :=
t γ δ t−1 i β γ
γt 1 t−1 i = ξδ
γ ξδt+1 + βδt 1+ δ ξδ i=0
i=0
δ
Moreover, if V is nonconstant and affinely independent of W , then (i) Bt ≥ 0 for all t and (ii) Bt is nonconstant and nonincreasing. In particular, Bt < ξδ1 for all large t. PROOF: In the previous lemma, the constants β and γ were chosen so that the temptation utility of a delayed menu V (x+1 ) is cardinally equivalent to βδW (x) + γ V (x). An induction argument establishes that for all t ≥ 1, V (x+t ) is cardinally equivalent to23 t−1 i
γ W (x) + γ t V (x) βδt δ i=0 This can be used to get a representation (D.2) for the ranking of future menus by noting that for any t ≥ 0, x+t+1 y +t+1 ⇐⇒
ξU(x+t+1 ) + V (x+t+1 ) ≥ ξU(y +t+1 ) + V (y +t+1 ) ξδt+1 W (x) + V (x+t ) ≥ ξδt+1 W (y) + V (y +t )
⇐⇒
W (x) + Bt V (x) ≥ W (y) + Bt V (y)
⇐⇒
PROOF: Lemma D.7 implies that V (x+1 ) is cardinally equivalent to βδW (x) + γ V (x). As γ i suming the induction hypothesis that V (x+t ) is cardinally equivalent to βδt [ t−1 i=0 ( δ ) ]W (x) + t +t+1 +t +t ) is equivalent to βδW (x ) + γ V (x ), which itself is equivalent to γ V (x), we see that V (x t−1
γ i βδt+1 W + γβδt W (x) + γ t+1 V (μ) δ i=0 23
t−1 i γ γ W (x) + γ t+1 V (μ) δ i=0 δ t
γ i = βδt+1 W (x) + γ t+1 V (μ) δ i=0
= βδt+1 1 +
completing the induction argument.
Q.E.D.
634
JAWWAD NOOR
where Bt is as in the statement of the lemma. (i) Since V is nonconstant and affinely independent of W , there exist ∗ x y ∗ such that W (x∗ ) = W (y ∗ ) and V (x∗ ) < V (y ∗ ).24 By (D.2), x∗+1 < y ∗+1 . Lemma C.3(d) rules out x∗+t > y ∗+t for any t. Thus x∗+t y ∗+t for all t. It fol(x∗ )−W (y ∗ ) lows from (D.2) that 0 = WV (y ∗ )−V (x∗ ) ≤ Bt for all t. (ii) Since V is nonconstant and affinely independent of W , there exist x∗ y ∗ x y such that W (x∗ ) = W (y ∗ ), V (x∗ ) < V (y ∗ ), V (x ) = V (y ), and W (x ) > W (y ) (see footnote 4). Note that W (x∗ αx ) − W (y ∗ αy ) (1 − α) W (x ) − W (y ) = ≥ 0 α V (y ∗ αy ) − V (x∗ αx ) V (y ∗ ) − V (x∗ ) Now suppose by way of contradiction that Bt is not nonincreasing: BT > BT for (x∗ αx )−W (y ∗ αy ) some T > T ≥ 0. There is some α ∈ (0 1) for which BT > WV (y ∗ αy )−V (x∗ αx ) > BT
∗
+T +1 ∗
+T +1 and thus there is a reversal: (x αx ) > (y αy ) and (x∗ αx )+T +1 < ∗
+T +1 ∗
. However, α ∈ (0 1) implies W (x αx ) > W (y ∗ αy ), while by (y αy ) Lemma C.3(d), the reversal implies W (x∗ αx ) < W (y ∗ αy ), a contradiction. Thus Bt is nonincreasing. To complete the step, we need to show that Bt is nonconstant. Suppose that it is constant, Bt = B0 = ξδ1 := B > 0 for all t. Since W V are nonconstant and affinely independent, B > 0 implies that W + BV and W are nonconstant and ordinally distinct. Thus there is μ η where W (μ) + BV (μ) < W (η) + BV (η) and W (μ) > W (η) (argue as in footnote 22, for instance). But then by (D.2) we have μ+t < η+t for all large t but W (μ) > W (η), violating Lemma C.3(c). Conclude that Bt is nonconstant and, therefore, that Q.E.D. Bt < B0 = ξδ1 for all large t. LEMMA D.9: If V is nonconstant and affinely independent of W , then 0 ≤ γ ≤ δ. PROOF: Step 1. γ ≥ 0. Suppose by contradiction that γ < 0 and consider three cases. We use the identities ⎧ (t−1)/2
γ 2i ⎪ γ ⎪ ⎪ ⎪ 1+ for odd t > 0, ⎪ t i ⎨ δ δ
γ i=0 = 2i t (t−2)/2 ⎪ δ
⎪ γ γ γ i=0 ⎪ ⎪ + for even t > 0. ⎪ ⎩ 1+ δ δ δ i=0 24 Note that if [W (x) = W (y) ⇒ V (x) = V (y)] for all x y, then W and V are affinely dependent.
TEMPTATION AND REVEALED PREFERENCE
Case (i): (D.3)
|γ| δ
635
= 1. The last statement of Lemma D.8(ii) requires
t−1 i t γ β γ <1+ δ ξδ i=0 δ
for all large t
This cannot be satisfied for large even t if |γ| = 1 and β ≤ 0. If β > 0, t γδ i t γ i ( ) = 0 for all odd t and ( ) = 1 for all even t. Thus then i=0 δ i=0 δ (γ/δ)t 1 1 1 Bt = ξδ 1+β/(ξδ)[t−1 (γ/δ)i ] fluctuates between ξδ and ξδ1 1+β/(ξδ) , violating Lemma i=0 D.8(ii). β t−1 γ i Case (ii): |γ| = 1 and β ≥ 0. If |γ| > 1, then the term ξδ [ i=0 ( δ ) ] is negative δ δ for all even t and thus for all large even t, while the left-hand side of (D.3) is strictly greater than 1, the right-hand side is strictly less than 1, a contradiction. t−1 If |γ| < 1, then [ i=0 ( γδ )i ] is positive and increasing for all t. Consequently, δ since γ < 0, Bt is negative for some odd t, contradicting Lemma D.8(i). t−1 Case (iii): |γ| = 1 and β < 0. If |γ| < 1, then [ i=0 ( γδ )i ] is positive for all t, δ δ increasing for all odd t, and increasing for all even t. Moreover, the limit is the same regardless of whether we consider the subsequence corresponding to all β t−1 γ i odd t or that corresponding to all even t. If the limit of 1 + ξδ [ i=0 ( δ ) ] is nonβ t−1 γ i negative (resp. negative), then 1 + ξδ [ i=0 ( δ ) ] is positive for some odd t (resp. negative for some even t) and Bt is negative, contradicting Lemma D.8(i). Suppose |γ| > 1. Then δ t γ 1 δ Bt = t−1 ξδ γ 1− β δ 1+ ξδ 1 − γ δ 1 1 = ξδ
−1 γ β δ 1 1 β 1 − t + t ξδ 1 − γ γ γ ξδ 1 − γ δ δ δ δ γ γ 1 1 = − 1 → −1 ξδ δβ δ γ −
β δ ξδ 1 − γ δ
636
JAWWAD NOOR
Denote the limit by B > 0. Since W , V are nonconstant and affinely independent, B implies that W + BV and W are nonconstant and ordinally distinct. Thus there is μ η where W (μ) + BV (μ) < W (η) + BV (η) and W (μ) > W (η). But then by (D.2), we have μ+t < η+t for all large t but W (μ) > W (η), violating Lemma C.3(c). Step 2. γδ ≤ 1. If β ≤ 0, then γδ ≥ 1 contradicts (D.3). If β > 0, then suppose by way of contradiction that γδ > 1. As at the end of Case (iii) above, γ γ ( δ − 1) := B > 0 and argue similarly to establish a contrashow that Bt → δβ diction. Q.E.D. LEMMA D.10: Under Axioms 1–9, for all x ∈ Z, V (x) = βW (x) + γ max V (η) − θ η∈x
Moreover, ξ = 1 and thus C (x) = arg maxμ∈x {U(μ) + V (μ)} for all x PROOF: If V is constant or an affine transformation of W , the result holds with γ = 0. Henceforth suppose that V is nonconstant and not an affine transformation of W . To ease notation, let α := ξ(λ − δ) − β ≥ 0 and α := (λ − γ) ≥ 0. Also denote the ranking of t-period menus by the binary relation t over Z, that is, x t y ⇐⇒ x+t y +t for t > 0. As in Lemma D.8, for any t > 0, the preference t is represented by the function x → W (x) + Bt−1 V (x) = (1 − Bt−1 ξδ) max{U(μ) + V (μ)} − (1 − Bt−1 ξδ) max V (η) μ∈x
η∈x
+ Bt−1 λ max{ξU(μ) + V (μ)} μ∈x
− Bt−1 max{αU(μ) + α V (μ)} − θ μ∈x
Consider the following cases: Case A: γ = 0. In this case Bt = 0 for all t > 0, and so t+1 is represented by W for all t > 0 (note that B0 = ξδ1 ). By Sophistication and Theorem 6.1, ξ = 1. If α = 0, then β = (λ − δ) and from the structure in Lemma D.7, we get V (x) = (λ − δ) max{U + V } + (δ − λ) max V (η) − θ = βW − θ x
η∈x
We show next that α > 0 is impossible. Note that 1 and 2 are, respectively, represented by α x → max{U + V } − max U + V x x λ
TEMPTATION AND REVEALED PREFERENCE
637
and x → max{U + V } − max V x
x
Since αλ > 0 and U V are affinely independent, there exist μ η such that U(μ) + V (μ) > U(η) + V (η), αλ U(μ) + V (μ) > αλ U(η) + V (η), and V (μ) < V (η)25 It follows from the representations that {μ}+1 ≈ {μ η}+1 and {μ}+2 > {μ η}+2 , and hence τ({μ}+1 {μ η}+1 ) = 0. We show that τ(μ η) = 0, thereby establishing a contradiction to Menu Reversal (i): Note that U(μ) + V (μ) > U(η) + V (η) and ξ ≥ 1 implies by Lemma D.6 that μ > η. Moreover, since U(μ) > U(η), Lemma C.3(c) then implies τ(μ η) = 0, as desired. Case B: γ > 0. In this case, Bt > 0 for all t and from Lemma D.8(ii), we know that 1 − Bt ξδ > 0 for all large t. Consider two possibilities: B(i): αU + α V is a positive affine transformation of ξU + V . First suppose ξ = 1. Then (λ − δ) − β = (λ − γ) and, therefore, β = −δ + γ. Moreover, V (x) = λ max{U + V } − δ max{U + V } x
x
+ δ max V − (λ − γ) max{U + V } − θ x
x
= βW (x) + γ max V − θ x
as desired. Next we show that ξ > 1 is impossible. First note that 0 Bt λ max{ξU(μ) + V (μ)} − Bt max{αU(μ) + α V (μ)} = μ∈x
μ∈x
If equality holds, then αU + α V = λξU + λV , and the fact that U and V are affinely independent then implies α = λ, and thus (λ − γ) = λ, which is not possible given the hypothesis γ > 0. Thus, it must be that for some k = 0, Bt λ max{ξU(μ) + V (μ)} − Bt max{αU(μ) + α V (μ)} μ∈x
μ∈x
= k max{ξU(μ) + V (μ)} μ∈x
Note that ξU + V is neither an affine transformation of U + V nor V , as U and V are affinely independent. We show that Set-Betweenness must be violated. Since the set of simple lotteries (lotteries with finite support) is a dense subset of Δ, we can find a finite set of simple lotteries on which U + V , V , and Argue as in the proof of Lemma D.8(ii) to show that there exists μ η μ
η
such that U(μ ) = U(η ), V (μ ) < V (η ), U(μ
) > U(η
) and V (μ
) = V (η
), and that for any θ U(μ
)−U(η
) U(μ θμ
)−U(η θη
) = (1−θ) := f (θ). Then choosing θ such that f (θ) > max{1 αλ } gives rise V (η θη
)−V (μ θμ
) θ V (η )−V (μ )
to the desired μ = μ θμ
and η = η θη
. 25
638
JAWWAD NOOR
ξU + V are nonconstant and mutually affinely independent (for each pair of functions, take two lotteries that the functions, rank equivalently and two that they rank differently). Denote the (finite) union of the (finite) supports of these simple lotteries by A ⊂ C × Z. Viewing A as just some finite set, one can then restrict attention to the subdomain of menus that consist of nonempty compact subsets of Δ(A) and apply the argument in Dekel, Lipman, and Rustichini (2009, Lemma 1) to establish a violation of Set-Betweenness. B(ii): αU + α V is not a positive affine transformation of ξU + V . First we show that ξ > 1 is impossible. The argument is similar to that in the previous case. Depending on whether αU + α V is a positive affine transformation of V or U + V or neither, we have either two distinct positive states or two distinct negative states. In either case, since the distinct states are mutually affinely independent, the argument in the previous case yields a contradiction to SetBetweenness. Next consider the case ξ = 1. If αU + α V is a positive affine transformation of V , then the fact that U and V are affinely independent implies α = 0 and thus (λ − δ) − β = 0, and therefore β = λ − δ. Moreover, V (x) = λ max{U + V } − δ max{U + V } x
x
+ δ max V − max(λ − γ)V − θ x
x
= βW (x) + γ max V − θ x
as desired. On the other hand if αU + α V is not a positive affine transformation of V , then we have a case with one positive state and two distinct negative states. Arguing as at the end of case B(i) yields a contradiction to Set-Betweenness. Q.E.D. LEMMA D.11: β > γ − δ. PROOF: From the representation, we see that the induced representation for the ranking 1 of next-period menus is δW (x) + V (x) = (δ + β)W (x) + γ max V x
= (δ + β) max{U + V } − (δ + β − γ) max V x
x
Set-Betweenness, nondegeneracy, and γ ≥ 0 imply (δ + β) > 0 and (δ + β − γ) > 0. Q.E.D. We have shown 0 ≤ γ ≤ δ and β > γ − δ; is represented by U + V ; for all c x, V (c x) = v(c) + V (x) = v(c) + [βW (x) + γ max V (η)] − θ η∈x
TEMPTATION AND REVEALED PREFERENCE
639
We can take θ = 0 without loss of generality (w.l.o.g.) since temptation utility appears in both a negative and positive term in the function W . This completes the proof. APPENDIX E: PROOF OF THEOREM 4.3 Consider the set F of continuous linear functions on Δ. The sup norm · makes F a Banach space. Endow F × F with the norm defined by (F G) = F+G, for all (F G) ∈ F × F . Then F × F is a Banach space. For a = u v, let ca and ca denote the a-best and a-worst consumption in C, and define U=
∞
δt u(ca )
t=0
V =
∞
t=0
γ t v(cv ) + β
∞
D(t)u(cu )
t=0
t 0 where D(t + 1) = δt i=0 ( γδ )i , adopting the convention that i=0 = 1 and −1 i=0 = 0. Let FU = {U ∈ F : U ≤ U ≤ U} and, similarly, FV = {V ∈ F : V ≤ V ≤ V }. Define X = FU × FV The first lemma establishes compactness of X when F × F has the weak topology (induced by the norm dual of F × F ): For any subset V of a normed vector space, denote by V ∗ the norm dual of V —the set of all norm-continuous and linear functionals on V . The weak topology on V is the weakest topology for which all the functionals in V ∗ are continuous. A net vα in V converges weakly (i.e., with respect to the weak topology) to v if and only if f (vα ) → f (v) for all f ∈ V ∗ . LEMMA E.1: X is a nonempty, bounded, convex, compact subset of F × F in the weak topology. PROOF: Step 1. Show that the weak topology of a product is the product of weak topologies. We need to show that the weak topology on F × F is identical to the product topology on F × F with the weak topology on F . The key observation is that any norm-continuous linear functional L ∈ (F × F )∗ can be written as a sum of norm-continuous linear functionals l l ∈ F ∗ , that is, L(f g) = l(f ) + l (g). This follows from linearity and the fact that for any fixed f g , (f g) + (f g ) = (f g ) + (f g). Take a net (fα gα ) in F × F . If (fα gα ) → (f g) in the weak topology, then L(fα gα ) → L(f g) for every norm-continuous linear function L ∈ (F × F )∗ . By taking all L that are constant either in the first or second argument, this implies that fα → f and gα → g in the weak topology on F . Thus, (fα gα ) → (f g) in the product topology. To prove the converse, suppose (fα gα ) → (f g)
640
JAWWAD NOOR
in the product topology. Then l(fα ) → l(f ) and l (gα ) → l (g) for all l l ∈ F ∗ . By our observation above, it follows that L(fα gα ) → L(f g) for all L ∈ (F × F )∗ and thus (fα gα ) → (f g) weakly. Step 2. Prove the lemma. Nonemptiness and convexity are evident. (Norm) boundedness follows from the Banach–Steinhaus theorem. To establish weak compactness, by Step 1 it suffices to show that FU and FV are weakly compact subsets of F . We prove this for FU below and an anologous argument implies it for FV . Convexity and boundedness, and the fact that Δ is compact metric and that a convex subset of a locally convex linear topological space is weakly closed if and only if it is closed (Dunford and Schwartz (1988, Theorem V.3.13, p. 422)) together imply the following statement: FU is compact in the weak topology on F if FU is a compact set of continuous linear functions when F has the topology of pointwise convergence (Dunford and Schwartz (1988, Theorem IV.6.14, p. 269)).26 To this end, begin by isometrically embedding Δ in the linear space ca(C) of finite Borel signed measures on C, normed by the total variation norm.27 Let F e denote the set of all linear and continuous functions on ca(C) and endow it with the topology of pointwise convergence. Define FUe = {U ∈ F e : U ≤ U ≤ U}. Given Aliprantis and Border (1994, Corollary 6.23, p. 248), the pointwise limit of a sequence fn in FUe lies in FUe . Thus FUe is closed in topology of pointwise convergence on F e . But FUe ⊂ [U U]ca(C) , that is, FUe is a closed subset of a compact set and thus is itself compact. Finally, show that FU is compact when F has the topology of pointwise convergence. Define the function Θ : F e → F by Θ(f ) = f |Δ , the restriction of f to Δ. It is obvious that Θ is continuous. Therefore, compactness of FUe in F e implies compactness of FU in F . This completes the proof. Q.E.D. Define the function Γ : F × F → F × F by Γ (U V )(μ) = u(c) + δ max{U + V } − max V dμ(c x)
C×Z
C×Z
ν∈x
υ∈x
v(c) + β max{U + V } + (γ − β) max V (η) dμ(c x) ν∈x
υ∈x
26 Theorem IV.6.14 in Dunford and Schwartz (1988, p. 269) states that for a set F of continuous functions on a compact Hausdorff space, the weak closure of F is weakly compact if and only if F is norm-bounded and its closure in the pointwise convergence topology is a compact set of continous functions in this topology. 27 Since C is compact, ca(C) is isometrically isomorphic to the topological dual B(C)∗ of the space B(C) of continuous functions on C (normed by sup-norm). Use this duality and endow ca(C) with weak∗ topology σ(B(C)∗ B(C)). This topology induces the topology of weak convergence on Δ.
641
TEMPTATION AND REVEALED PREFERENCE
LEMMA E.2: Γ |X is a self-map on X. PROOF: Take any (U V ) ∈ X. Write Γ (U V ) = (Γ U Γ V ). The Maximum theorem implies that (Γ U Γ V ) is a pair of continuous functions. Linearity is evident. The fact that U ≤ Γ (U) ≤ U is readily determined, given the GP functional form used. To see that V ≤ Γ (V ) ≤ V , observe that β maxν∈x {U + V } + (γ − β) maxυ∈x V (η) = βW (x) + γ maxυ∈x V (η) and thus Γ (V ) ≤ v(cv ) + βU + γV = v(cv ) + β
∞
+γ
∞
δt u(ca )
t=0 ∞
γ v(cv ) + β t
t=0
= v(cv ) +
∞
+ βu(ca )
∞
δt + γ
t=0
=
∞
δ
t−1
t−1 i
γ δ
u(cu )
t=0
i=0
∞
t−1 i
γ
γ t v(cv )
t=1
γ t v(cv ) + u(ca )β
δt−1
t=0
i=0
∞
δt + γδt−1
δ t−1 i
γ
δ ∞ t i
γ t t t δ +δ = γ v(cv ) + u(ca )β δ t=0 t=0 i=1 ∞ ∞ t i
γ t t δ = γ v(cv ) + u(ca )β δ t=0 t=0 i=0 t=0
=
∞
t=0
t=0
∞
γ v(cv ) + u(cu )β t
∞
i=0
D(t + 1)
t=0
= ∗V
∞ ∞ where =∗ follows from the fact that t=0 D(t + 1) = t=0 D(t) since D(0) = −1 δ−1 i=0 ( γδ )i = 0. An analogous argument yields V ≤ Γ (V ). Thus Γ (U V ) ∈ X. Q.E.D. LEMMA E.3: Γ |X is continuous with respect to the weak topology. PROOF: By Aliprantis and Border (1994, Theorem 6.21, p. 247), norm-tonorm continuity of a function between two normed spaces is equivalent to
642
JAWWAD NOOR
weak-to-weak continuity. Below we establish that Γ is sup-norm continuous.28 It then follows that Γ , and in turn Γ |X , is weakly continuous. Begin witha preliminary observation. Take any f ∈ F and consider the problem supμ∈Δ | (maxυ∈x f ) dμ(c x)|. Note that the objective function is insensitive to any c yielded by μ. Since x → maxυ∈x f is continuous, the sup is achieved as a max for some μ. Because of the expected utility form, μ is degenerate on some (c x) w.l.o.g. Denoting the maximizer of f in x∗ by ηx , we see, therefore, that sup (max f ) dμ(c x) = sup max f = sup |f (ηx )| μ∈Δ
υ∈x
(cx)∈Δ
υ∈x
(cx)∈Δ
= sup |f (η)| = f η∈Δ
Thus the sup is, in fact the, norm of f . To prove the lemma, suppose Un − U → 0 and Vn − V → 0. Take U = V = 0 w.l.o.g. Write Γ (Un Vn ) as (Γ Un Γ Vn ). Observe that Γ (Un Vn ) − Γ (0 0) = Γ Un − Γ 0 + Γ Vn − Γ 0 δ max{Un + Vn } − max Vn dμ(c x) = x x C×Z + β max{Un + Vn } + (γ − β) max Vn (η) dμ(c x) C×Z
ν∈x
υ∈x
Using the triangle inequality and the observations above, we see that Γ (Un Vn ) − Γ (0 0) → 0. Thus Γ is sup-norm continuous. Q.E.D. To complete the proof, we invoke the Brouwer–Schauder–Tychonoff fixed point theorem (Dunford and Schwartz (1988, Theorem V.10.5, p. 456)), which states that a continuous self-map on a compact convex subset of a locally convex linear topological space has a nonempty set of fixed points. APPENDIX F: PROOF OF THEOREM 4.7 Let be the preference relation that is represented by ϕ : Δ → R defined by ϕ(μ) = U(μ) + V (μ) for all μ ∈ Δ. Given Theorem 4.2, we can assume U V ≥ 0. For each t > 0, define t on Δ by μ t η ⇐⇒ μ+t η+t . We saw in (6.2) that t is represented by the function ϕt : Δ → R defined by ϕt (μ) = U(μ) + Dt V (μ) for all μ ∈ Δ where Dt 0. 28 By Aliprantis and Border (1994, Theorem 6.30, p. 252), the weak and norm topologies on a finite dimensional space coincide. Since the range of Γ is R, we need to be concerned only with the topology on its domain.
TEMPTATION AND REVEALED PREFERENCE
643
LEMMA F.1: The sequence {ϕt } uniformly converges to U. PROOF: The sequence {ϕt } is a sequence of continuous real functions defined on a compact space Δ. Since Dt 0, the sequence is monotone (decreasing) and ϕt converges pointwise to the continuous function U. Therefore, by Dini’s theorem (Alipranits and Border (1994, Theorem 2.62)), the convergence is uniform. Q.E.D. Since U is nonconstant, there exist ρ ν ∈ Δ such that U(ρ) > U(ν). By linearity of U, (F.1)
U(μ) ≥ U(η)
⇒
U(μαρ) > U(ηαν)
for all α ∈ (0 1)
This observation will be used in the next lemma. Let U be the preference relation represented by U. As in Appendix B, identify any binary relation B on Δ with its graph Γ (B) ⊂ Δ × Δ. LEMMA F.2: U = ∗ . PROOF: As limt→∞ Γ (t ) ≡ Γ (∗ ), it suffices to show that Γ (U ) = limt→∞ Γ (t ). First establish Ls Γ (t ) ⊂ Γ (U ). If (μ η) ∈ Ls Γ (t ), then there is a subsequence {Γ (t(n) )} and a sequence {(μn ηn )} that converge to (μ η) such that (μn ηn ) ∈ Γ (t(n) ) for each n. Therefore, for each n, ϕt(n) (μn ) ≥ ϕt(n) (ηn ). Since ϕt(n) converges to U uniformly, it follows that U(μ) ≥ U(η). Hence (μ η) ∈ Γ (U ), as desired. Next establish Γ (U ) ⊂ Li Γ (t ). Let (μ η) ∈ Γ (U ) and take any neighborhood V of (μ η). By (F.1), there exists α ∈ (0 1] such that (μαρ ηαν) ∈ V and U(μαρ) > U(ηαν). By Lemma F.1, there exists T < ∞ such that ϕt (μαρ) > ϕt (ηαν) for all t ≥ T , that is, (μαρ ηαν) ∈ Γ (t ) for all t ≥ T . Hence, V ∩Γ (t ) = φ for all but a finite number of t, that is, (μ η) ∈ Li Γ (t ). This completes the proof. Q.E.D. REFERENCES ALIPRANTIS, C., AND K. BORDER (1994): Infinite Dimensional Analysis: A Hitchhiker’s Guide (Second Ed.). Berlin: Springer-Verlag. [605,626,640-643] BORDER, K. (1985): “More on Harsanyi’s Utilitarian Cardinal Welfare Theorem,” Social Choice and Welfare, 1, 279–281. [631] DEKEL, E., AND B. LIPMAN (2007): “Self-Control and Random Strotz Representations,” Report, Boston University. [620] DEKEL, E., B. LIPMAN, AND A. RUSTICHINI (2001): “Representing Preferences With a Unique Subjective State Space,” Econometrica, 69, 891–934. [609] (2009): “Temptation-Driven Preference,” Review of Economic Studies, 76, 937–971. [609, 618,619,638] DUNFORD, N., AND J. SCHWARTZ (1988): Linear Operators: Part 1, General Theory. New York: Wiley-Interscience. [640,642]
644
JAWWAD NOOR
FREDERICK, S., G. LOEWENSTEIN, AND T. O’DONOGHUE (2002): “Time Discounting and Time Preference: A Critical Review,” Journal of Economic Literature, 40, 351–401. [602,610] FUDENBERG, D., AND D. LEVINE (2006): “A Dual Self Model of Impulse Control,” American Economic Review, 96, 1449–1476. [609] GUL, F., AND W. PESENDORFER (2001): “Temptation and Self-Control,” Econometrica, 69, 1403–1435. [601,604,608,609,615,618-621,624,632] (2004): “Self-Control and the Theory of Consumption,” Econometrica, 72, 119–158. [601,604,605,607,610,612,613,615-617,620,630] HILDENBRAND, W. (1974): Core and Equilibria of a Large Economy. Princeton, NJ: Princeton University Press. [614] KOOPMANS, T. (1960): “Stationary Ordinal Utility and Impatience,” Econometrica, 28, 287–309. [617] KRUSELL, P., B. KURUSÇU, AND A. SMITH, JR. (2002): “Time Orientation and Asset Prices,” Journal of Monetary Economics, 49, 107–135. [604,607,612,615-617,620] LAIBSON, D. (1997): “Golden Eggs and Hyperbolic Discounting,” Quarterly Journal of Economics, 112, 443–477. [603] NOOR, J. (2005): “Temptation and Choice,” Ph.D. Thesis, University of Rochester. [601] (2006): “Commitment and Self-Control,” Journal of Economic Theory, 135, 1–34. [604, 607,612,614-617,620,625,630] (2008a): “Removed Preference,” Report, Boston University. [626,627] (2008b): “Subjective Welfare,” Report, Boston University. [625] (2011): “Supplement to ‘Temptation and Revealed Preference’,” Econometrica Supplemental Material, 79, http://www.econometricsociety.org/ecta/Supmat/5800_proofs.pdf. [604, 612,624] NOOR, J., AND N. TAKEOKA (2008): “Menu-Dependent Self-Control,” Report, Boston University. [608,609] RAWLS, J. (1971): A Theory of Justice. Cambridge, MA: Harvard University Press. [602,625]
Dept. of Economics, Boston University, 270 Bay State Road, Boston, MA 02215, U.S.A.; [email protected]. Manuscript received April, 2005; final revision received August, 2010.
Econometrica, Vol. 79, No. 2 (March, 2011), 645–647
ANNOUNCEMENTS 2011 NORTH AMERICAN SUMMER MEETING
THE 2011 NORTH AMERICAN SUMMER MEETING of the Econometric Society will be held June 9–12, 2011, at Washington University in St. Louis, MO. The program will include submitted papers as well as the Presidential Address by Bengt Holmstrom (Massachusetts Institute of Technology), the Walras-Bowley Lecture by Manuel Arellano (CEMFI), the Cowles Lecture by Michael Keane (University of New South Wales and Arizona State University), and the following semi-plenary sessions: Behavioral economics Jim Andreoni, University of California, San Diego Ernst Fehr, University of Zurich Decision theory Bart Lipman, Boston University Wolfgang Pesendorfer, Princeton University Development economics Abhijit Bannerjee, Massachusetts Institute of Technology Edward Miguel, University of California, Berkeley Financial and informational frictions in macroeconomics George-Marios Angeletos, Massachusetts Institute of Technology Nobu Kiyotaki, Princeton University Game theory John Duggan, University of Rochester Ehud Kalai, Northwestern University Microeconometrics Guido Imbens, Harvard University Costas Meghir, University College London Networks Steven Durlauf, University of Wisconsin–Madison Brian Rogers, Northwestern University Time Series Econometrics Bruce E. Hansen, University of Wisconsin–Madison Ulrich Müller, Princeton University Urban Enrico Moretti, University of California, Berkeley Esteban Rossi-Hansberg, Princeton University © 2011 The Econometric Society
DOI: 10.3982/ECTA792ANN
646
ANNOUNCEMENTS
Information on local arrangements will be available at http://artsci.wustl. edu/~econconf/EconometricSociety/. Meeting Organizers: Marcus Berliant, Washington University in St. Louis (Chair) David K. Levine, Washington University in St. Louis John Nachbar, Washington University in St. Louis Program Committee: Donald Andrews, Yale University Marcus Berliant, Washington University in St. Louis Steven Berry, Yale University Ken Chay, University of California, Berkeley Sid Chib, Washington University in St. Louis John Conley, Vanderbilt University Charles Engel, University of Wisconsin Amy Finkelstein, Massachusetts Institute of Technology Sebastian Galiani, Washington University in St. Louis Donna Ginther, University of Kansas Bart Hamilton, Washington University in St. Louis Paul J. Healy, Ohio State University Gary Hoover, University of Alabama Tasos Kalandrakis, University of Rochester David Levine, Washington University in St. Louis Rody Manuelli, Washington University in St. Louis John Nachbar, Washington University in St. Louis Ray Riezman, University of Iowa Aldo Rustichini, University of Minnesota Suzanne Scotchmer, University of California, Berkeley William Thomson, University of Rochester Chris Waller, Federal Reserve Bank of St. Louis Ping Wang, Washington University in St. Louis 2011 AUSTRALASIA MEETING
THE 2011 AUSTRALASIA MEETING of the Econometric Society in 2011 (ESAM11) will be held in Adelaide, Australia, from July 5 to July 8, 2011. ESAM11 will be hosted by the School of Economics at the University of Adelaide. The program committee will be co-chaired by Christopher Findlay and
ANNOUNCEMENTS
647
Jiti Gao. The program will include plenary, invited and contributed sessions in all fields of economics. 2011 ASIAN MEETING
THE 2011 ASIAN MEETING of the Econometric Society will be held in Seoul, Korea, in the campus of Korea University in Seoul from August 11 to August 13, 2011. The program will consist of invited and contributed papers. Authors are encouraged to submit papers across the broad spectrum of theoretical and applied research in economics and in econometrics. The meeting is open to all economists including those who are not currently members of the Econometric Society. The preliminary program is scheduled to be announced on March 31, 2011. Although the deadline for general registration is June 30, 2011, the authors of the papers in the preliminary program will be required to register early by April 30, 2011. Otherwise, their submissions will be understood to be withdrawn. We plan to announce the final program by the end of May. Please refer to the conference website http://www.ames2011.org for more information. IN-KOO CHO AND JINYONG HAHN Co-Chairs of Program Committee 2011 EUROPEAN MEETING
THE 2011 EUROPEAN MEETING of the Econometric Society (ESEM) will take place in Oslo, Norway, from 25 to 29 August, 2011. The Meeting is jointly organized by the University of Oslo and it will run in parallel with the Congress of the European Economic Association (EEA). Participants will be able to attend all sessions of both events. The Program Committee Chairs are John van Reenen (London School of Economics) for Econometrics and Empirical Economics, and Ernst-Ludwig von Thadden (University of Mannheim) for Theoretical and Applied Economics. The Local Arrangements Chair is Asbjørn Rødseth (University of Oslo). Each author may submit only one paper to the ESEM and only one paper to the EEA Congress. The same paper cannot be submitted to both ESEM and the EEA Congress. At least one co-author must be a member of the Econometric Society or join at the time of submission. Decisions will be notified by 15 April, 2011. Paper presenters must register by 1 May, 2011.
Econometrica, Vol. 79, No. 2 (March, 2011), 649
FORTHCOMING PAPERS THE FOLLOWING MANUSCRIPTS, in addition to those listed in previous issues, have been accepted for publication in forthcoming issues of Econometrica. DE LOECKER, JAN: “Product Differentiation, Multi-Product Firms and Estimating the Impact of Trade Liberalization on Productivity.” HÖRNER, JOHANNES, TAKUO SUGAYA, SATORU TAKAHASHI, AND NICOLAS VIEILLE: “Recursive Methods in Discounted Stochastic Games: An Algorithm for δ → 1 and a Folk Theorem.” WILLIAMS, NOAH: “Persistent Private Information.”
© 2011 The Econometric Society
DOI: 10.3982/ECTA792FORTH