CONTENTS ROSA L. MATZKIN: Identification in Nonparametric Simultaneous Equations Models . . . ULRICH K. MÜLLER AND MARK W. WATSON: Testing Models of Low-Frequency Variability
945 979
MICHELLE SOVINSKY GOEREE: Limited Information and Advertising in the U.S. Personal
Computer Industry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017 ROBERT MARQUEZ AND BILGE YILMAZ: Information and Efficiency in Tender Offers . . . . 1075 MICHAEL JANSSON: Semiparametric Power Envelopes for Tests of the Unit Root Hypothesis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1103 NOTES AND COMMENTS: ZVI SAFRA AND UZI SEGAL: Calibration Results for Non-Expected Utility Theories
1143
LUCA RIGOTTI, CHRIS SHANNON, AND TOMASZ STRZALECKI: Subjective Beliefs and ex
ante Trade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1167 J. P. FLORENS, J. J. HECKMAN, C. MEGHIR, AND E. VYTLACIL: Identification of Treatment Effects Using Control Functions in Models With Continuous, Endogenous Treatment and Heterogeneous Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191 JEAN-YVES PITARAKIS: Comment on: Threshold Autoregressions With a Unit Root 1207 ANNOUNCEMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219 FORTHCOMING PAPERS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1223 REPORT OF THE PRESIDENT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1225
VOL. 76, NO. 5 — September, 2008
An International Society for the Advancement of Economic Theory in its Relation to Statistics and Mathematics Founded December 29, 1930 Website: www.econometricsociety.org EDITOR STEPHEN MORRIS, Dept. of Economics, Princeton University, Fisher Hall, Prospect Avenue, Princeton, NJ 08544-1021, U.S.A.;
[email protected] MANAGING EDITOR GERI MATTSON, 2002 Holly Neck Road, Baltimore, MD 21221, U.S.A.; mattsonpublishingservices@ comcast.net CO-EDITORS DARON ACEMOGLU, Dept. of Economics, MIT, E52-380B, 50 Memorial Drive, Cambridge, MA 021421347, U.S.A.;
[email protected] STEVEN BERRY, Dept. of Economics, Yale University, 37 Hillhouse Avenue/P.O. Box 8264, New Haven, CT 06520-8264, U.S.A.;
[email protected] WHITNEY K. NEWEY, Dept. of Economics, MIT, E52-262D, 50 Memorial Drive, Cambridge, MA 021421347, U.S.A.;
[email protected] WOLFGANG PESENDORFER, Dept. of Economics, Princeton University, Fisher Hall, Prospect Avenue, Princeton, NJ 08544-1021, U.S.A.;
[email protected] LARRY SAMUELSON, Dept. of Economics, Yale University, New Haven, CT 06520-8281, U.S.A.;
[email protected] HARALD UHLIG, Dept. of Economics, University of Chicago, 1126 East 59th Street, Chicago, IL 60637, U.S.A.;
[email protected] ASSOCIATE EDITORS YACINE AÏT-SAHALIA, Princeton University JOSEPH G. ALTONJI, Yale University JAMES ANDREONI, University of California, San Diego DONALD W. K. ANDREWS, Yale University JUSHAN BAI, New York University MARCO BATTAGLINI, Princeton University PIERPAOLO BATTIGALLI, Università Bocconi DIRK BERGEMANN, Yale University MICHELE BOLDRIN, Washington University in St. Louis VICTOR CHERNOZHUKOV, Massachusetts Institute of Technology J. DARRELL DUFFIE, Stanford University JEFFREY ELY, Northwestern University LARRY G. EPSTEIN, Boston University HALUK ERGIN, Washington University in St. Louis FARUK GUL, Princeton University JINYONG HAHN, University of California, Los Angeles PHILIP A. HAILE, Yale University PHILIPPE JEHIEL, Paris School of Economics YUICHI KITAMURA, Yale University PER KRUSELL, Princeton University and Stockholm University OLIVER LINTON, London School of Economics
BART LIPMAN, Boston University THIERRY MAGNAC, Toulouse School of Economics GEORGE J. MAILATH, University of Pennsylvania DAVID MARTIMORT, IDEI-GREMAQ, Université des Sciences Sociales de Toulouse STEVEN A. MATTHEWS, University of Pennsylvania ROSA L. MATZKIN, University of California, Los Angeles LEE OHANIAN, University of California, Los Angeles WOJCIECH OLSZEWSKI, Northwestern University ERIC RENAULT, University of North Carolina PHILIP J. RENY, University of Chicago JEAN-MARC ROBIN, Université de Paris 1 and University College London SUSANNE M. SCHENNACH, University of Chicago UZI SEGAL, Boston College CHRIS SHANNON, University of California, Berkeley NEIL SHEPHARD, Oxford University MARCIANO SINISCALCHI, Northwestern University JEROEN M. SWINKELS, Washington University in St. Louis ELIE TAMER, Northwestern University IVÁN WERNING, Massachusetts Institute of Technology ASHER WOLINSKY, Northwestern University
EDITORIAL ASSISTANT: MARY BETH BELLANDO, Dept. of Economics, Princeton University, Fisher Hall, Princeton, NJ 08544-1021, U.S.A.;
[email protected] Information on MANUSCRIPT SUBMISSION is provided in the last two pages. Information on MEMBERSHIP, SUBSCRIPTIONS, AND CLAIMS is provided in the inside back cover.
SUBMISSION OF MANUSCRIPTS TO ECONOMETRICA 1. Members of the Econometric Society may submit papers to Econometrica electronically in pdf format according to the guidelines at the Society’s website: http://www.econometricsociety.org/submissions.asp Only electronic submissions will be accepted. In exceptional cases for those who are unable to submit electronic files in pdf format, one copy of a paper prepared according to the guidelines at the website above can be submitted, with a cover letter, by mail addressed to Professor Stephen Morris, Dept. of Economics, Princeton University, Fisher Hall, Prospect Avenue, Princeton, NJ 08544-1021, U.S.A. 2. There is no charge for submission to Econometrica, but only members of the Econometric Society may submit papers for consideration. In the case of coauthored manuscripts, at least one author must be a member of the Econometric Society. Nonmembers wishing to submit a paper may join the Society immediately via Blackwell Publishing’s website. Note that Econometrica rejects a substantial number of submissions without consulting outside referees. 3. It is a condition of publication in Econometrica that copyright of any published article be transferred to the Econometric Society. Submission of a paper will be taken to imply that the author agrees that copyright of the material will be transferred to the Econometric Society if and when the article is accepted for publication, and that the contents of the paper represent original and unpublished work that has not been submitted for publication elsewhere. If the author has submitted related work elsewhere, or if he does so during the term in which Econometrica is considering the manuscript, then it is the author’s responsibility to provide Econometrica with details. There is no page fee; nor is any payment made to the authors. 4. Econometrica has the policy that all empirical and experimental results as well as simulation experiments must be replicable. For this purpose the Journal editors require that all authors submit datasets, programs, and information on experiments that are needed for replication and some limited sensitivity analysis. (Authors of experimental papers can consult the posted list of what is required.) This material for replication will be made available through the Econometrica supplementary material website. The format is described in the posted information for authors. Submitting this material indicates that you license users to download, copy, and modify it; when doing so such users must acknowledge all authors as the original creators and Econometrica as the original publishers. If you have compelling reason we may post restrictions regarding such usage. At the same time the Journal understands that there may be some practical difficulties, such as in the case of proprietary datasets with limited access as well as public use datasets that require consent forms to be signed before use. In these cases the editors require that detailed data description and the programs used to generate the estimation datasets are deposited, as well as information of the source of the data so that researchers who do obtain access may be able to replicate the results. This exemption is offered on the understanding that the authors made reasonable effort to obtain permission to make available the final data used in estimation, but were not granted permission. 
We also understand that in some particularly complicated cases the estimation programs may have value in themselves and the authors may not make them public. This, together with any other difficulties relating to depositing data or restricting usage should be stated clearly when the paper is first submitted for review. In each case it will be at the editors’ discretion whether the paper can be reviewed. 5. Papers may be rejected, returned for specified revision, or accepted. Approximately 10% of submitted papers are eventually accepted. Currently, a paper will appear approximately six months from the date of acceptance. In 2002, 90% of new submissions were reviewed in six months or less. 6. Submitted manuscripts should be formatted for paper of standard size with margins of at least 1.25 inches on all sides, 1.5 or double spaced with text in 12 point font (i.e., under about 2,000 characters, 380 words, or 30 lines per page). Material should be organized to maximize readability; for instance footnotes, figures, etc., should not be placed at the end of the manuscript. We strongly encourage authors to submit manuscripts that are under 45 pages (17,000 words) including everything (except appendices containing extensive and detailed data and experimental instructions).
While we understand some papers must be longer, if the main body of a manuscript (excluding appendices) is more than the aforementioned length, it will typically be rejected without review. 7. Additional information that may be of use to authors is contained in the “Manual for Econometrica Authors, Revised” written by Drew Fudenberg and Dorothy Hodges, and published in the July, 1997 issue of Econometrica. It explains editorial policy regarding style and standards of craftmanship. One change from the procedures discussed in this document is that authors are not immediately told which coeditor is handling a manuscript. The manual also describes how footnotes, diagrams, tables, etc. need to be formatted once papers are accepted. It is not necessary to follow the formatting guidelines when first submitting a paper. Initial submissions need only be 1.5 or double-spaced and clearly organized. 8. Papers should be accompanied by an abstract of no more than 150 words that is full enough to convey the main results of the paper. On the same sheet as the abstract should appear the title of the paper, the name(s) and full address(es) of the author(s), and a list of keywords. 9. If you plan to submit a comment on an article which has appeared in Econometrica, we recommend corresponding with the author, but require this only if the comment indicates an error in the original paper. When you submit your comment, please include any correspondence with the author. Regarding comments pointing out errors, if an author does not respond to you after a reasonable amount of time, then indicate this when submitting. Authors will be invited to submit for consideration a reply to any accepted comment. 10. Manuscripts on experimental economics should adhere to the “Guidelines for Manuscripts on Experimental Economics” written by Thomas Palfrey and Robert Porter, and published in the July, 1991 issue of Econometrica. Typeset at VTEX, Akademijos Str. 4, 08412 Vilnius, Lithuania. Printed at The Sheridan Press, 450 Fame Avenue, Hanover, PA 17331, USA. Copyright © 2008 by The Econometric Society (ISSN 0012-9682). Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation, including the name of the author. Copyrights for components of this work owned by others than the Econometric Society must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works, requires prior specific permission and/or a fee. Posting of an article on the author’s own website is allowed subject to the inclusion of a copyright statement; the text of this statement can be downloaded from the copyright page on the website www.econometricsociety.org/permis.asp. Any other permission requests or questions should be addressed to Claire Sashi, General Manager, The Econometric Society, Dept. of Economics, New York University, 19 West 4th Street, New York, NY 10012, USA. Email:
[email protected]. Econometrica (ISSN 0012-9682) is published bi-monthly by the Econometric Society, Department of Economics, New York University, 19 West 4th Street, New York, NY 10012. Mailing agent: Sheridan Press, 450 Fame Avenue, Hanover, PA 17331. Periodicals postage paid at New York, NY and additional mailing offices. U.S. POSTMASTER: Send all address changes to Econometrica, Blackwell Publishing Inc., Journals Dept., 350 Main St., Malden, MA 02148, USA.
An International Society for the Advancement of Economic Theory in its Relation to Statistics and Mathematics Founded December 29, 1930 Website: www.econometricsociety.org Membership, Subscriptions, and Claims Membership, subscriptions, and claims are handled by Blackwell Publishing, P.O. Box 1269, 9600 Garsington Rd., Oxford, OX4 2ZE, U.K.; Tel. (+44) 1865-778171; Fax (+44) 1865-471776; Email
[email protected]. North American members and subscribers may write to Blackwell Publishing, Journals Department, 350 Main St., Malden, MA 02148, USA; Tel. 781-3888200; Fax 781-3888232. Credit card payments can be made at www.econometricsociety.org. Please make checks/money orders payable to Blackwell Publishing. Memberships and subscriptions are accepted on a calendar year basis only; however, the Society welcomes new members and subscribers at any time of the year and will promptly send any missed issues published earlier in the same calendar year. Individual Membership Rates Ordinary Member 2008 Print + Online 1933 to date Ordinary Member 2008 Online only 1933 to date Student Member 2008 Print + Online 1933 to date Student Member 2008 Online only 1933 to date Ordinary Member—3 years (2008–2010) Print + Online 1933 to date Ordinary Member—3 years (2008–2010) Online only 1933 to date Subscription Rates for Libraries and Other Institutions Premium 2008 Print + Online 1999 to date Online 2008 Online only 1999 to date
$a $55
€b €40
£c £28
Concessionaryd $40
$25
€18
£12
$10
$40
€30
£20
$40
$10
€8
£5
$10
$160
€115
£80
$70
€50
£35
$a
£c
Concessionaryd
$520
£302
$40
$480
£278
Free
a All
countries, excluding U.K., Euro area, and countries not classified as high income economies by the World Bank (http://www.worldbank.org/data/countryclass/classgroups.htm), pay the US$ rate. High income economies are: Andorra, Antigua and Barbuda, Aruba, Australia, Austria, The Bahamas, Bahrain, Barbados, Belgium, Bermuda, Brunei, Canada, Cayman Islands, Channel Islands, Cyprus, Czech Republic, Denmark, Estonia, Faeroe Islands, Finland, France, French Polynesia, Germany, Greece, Greenland, Guam, Hong Kong (China), Iceland, Ireland, Isle of Man, Israel, Italy, Japan, Rep. of Korea, Kuwait, Liechtenstein, Luxembourg, Macao (China), Malta, Monaco, Netherlands, Netherlands Antilles, New Caledonia, New Zealand, Norway, Portugal, Puerto Rico, Qatar, San Marino, Saudi Arabia, Singapore, Slovenia, Spain, Sweden, Switzerland, Taiwan (China), Trinidad and Tobago, United Arab Emirates, United Kingdom, United States, Virgin Islands (US). Canadian customers will have 6% GST added to the prices above. b Euro area countries only. c UK only. d Countries not classified as high income economies by the World Bank only. Back Issues Single issues from the current and previous two volumes are available from Blackwell Publishing; see address above. Earlier issues from 1986 (Vol. 54) onward may be obtained from Periodicals Service Co., 11 Main St., Germantown, NY 12526, USA; Tel. 518-5374700; Fax 518-5375899; Email
[email protected].
An International Society for the Advancement of Economic Theory in its Relation to Statistics and Mathematics Founded December 29, 1930 Website: www.econometricsociety.org Administrative Office: Department of Economics, New York University, 19 West 4th Street, New York, NY 10012, USA; Tel. 212-9983820; Fax 212-9954487 General Manager: Claire Sashi (
[email protected]) 2008 OFFICERS TORSTEN PERSSON, Stockholm University, PRESIDENT ROGER B. MYERSON, University of Chicago, FIRST VICE-PRESIDENT JOHN MOORE, University of Edinburgh and London School of Economics, SECOND VICE-PRESIDENT LARS PETER HANSEN, University of Chicago, PAST PRESIDENT RAFAEL REPULLO, CEMFI, EXECUTIVE VICE-PRESIDENT
2008 COUNCIL DILIP ABREU, Princeton University (*)DARON ACEMOGLU, Massachusetts Institute of Technology GEORGE AKERLOF, University of California, Berkeley ALOISIO ARAUJO, IMPA and FGV MANUEL ARELLANO, CEMFI SUSAN ATHEY, Harvard University (*)TIMOTHY J. BESLEY, London School of Economics KENNETH BINMORE, University College London TREVOR S. BREUSCH, Australian National University DAVID CARD, University of California, Berkeley (*)EDDIE DEKEL, Tel Aviv University and Northwestern University MATHIAS DEWATRIPONT, Free University of Brussels TAKATOSHI ITO, University of Tokyo MATTHEW O. JACKSON, Stanford University
LAWRENCE J. LAU, The Chinese University of Hong Kong HITOSHI MATSUSHIMA, University of Tokyo PAUL R. MILGROM, Stanford University STEPHEN MORRIS, Princeton University ADRIAN R. PAGAN, Queensland University of Technology JOON Y. PARK, Texas A&M University and Sungkyunkwan University CHRISTOPHER A. PISSARIDES, London School of Economics ROBERT PORTER, Northwestern University ALVIN E. ROTH, Harvard University ARUNAVA SEN, Indian Statistical Institute MARILDA SOTOMAYOR, University of São Paulo GUIDO TABELLINI, Bocconi University HARALD UHLIG, University of Chicago XAVIER VIVES, IESE Business School and UPF JÖRGEN W. WEIBULL, Stockholm School of Economics
The Executive Committee consists of the Officers, the Editor, and the starred (*) members of the Council.
REGIONAL STANDING COMMITTEES Australasia: Trevor S. Breusch, Australian National University, CHAIR; Maxwell L. King, Monash University, SECRETARY. Europe and Other Areas: Torsten Persson, University of Stockholm, CHAIR; Helmut Bester, Free University Berlin, SECRETARY; Enrique Sentana, CEMFI, TREASURER. Far East: Joon Y. Park, Texas A&M University and Sungkyunkwan University, CHAIR. Latin America: Pablo Andres Neumeyer, Universidad Torcuato Di Tella, CHAIR; Klaus SchmidtHebbel, Banco Central de Chile, SECRETARY. North America: Roger B. Myerson, University of Chicago, CHAIR; Claire Sashi, New York University, SECRETARY. South and Southeast Asia: Arunava Sen, Indian Statistical Institute, CHAIR.
Econometrica, Vol. 76, No. 5 (September, 2008), 945–978
IDENTIFICATION IN NONPARAMETRIC SIMULTANEOUS EQUATIONS MODELS BY ROSA L. MATZKIN1 This paper provides conditions for identification of functionals in nonparametric simultaneous equations models with nonadditive unobservable random terms. The conditions are derived from a characterization of observational equivalence between models. We show that, in the models considered, observational equivalence can be characterized by a restriction on the rank of a matrix. The use of the new results is exemplified by deriving previously known results about identification in parametric and nonparametric models as well as new results. A stylized method for analyzing identification, which is useful in some situations, is also presented. KEYWORDS: Nonparametric methods, nonadditive models, nonseparable models, identification, simultaneous equations, endogeneity.
1. INTRODUCTION THE INTERPLAY BETWEEN ECONOMETRICS AND ECONOMIC THEORY comes to its full force when analyzing the identification of underlying functions and distributions in structural models. Identification in structural models that are linear in variables and parameters and have additive unobservable variables has been studied for a long time. On the other hand, identification in structural models that do not impose parametric assumptions in the functions and distributions in the model, or do not impose additivity in the unobservable variables, has not yet been completely understood. The objective of this paper is to provide insight into these latter cases. Starting from a characterization of observational equivalence, this paper provides new conditions that can be used to determine the identification of the underlying functions and distributions in simultaneous equations models. The study of identification is a key element in the econometric analysis of many structural models. Such study allows one to determine conditions under which, from the distribution of observable variables, one can recover fea1 The support of NSF through grants SES 0551272, BCS 0433990, and SES 0241858 is gratefully acknowledged. I am very grateful to Whitney Newey for his invaluable input in this paper and to three anonymous referees for their insightful and useful reports. I have benefited from the comments and suggestions of participants at the 2005 Berkeley Conference in Honor of Daniel McFadden, the 2005 BIRS workshop in the series “Mathematical Structures in Economic Theory and Econometrics,” the 2006 Winter Meeting of the Econometric Society, seminar participants at Caltech, Columbia University, Harvard/MIT, Northwestern University, NYU, Stanford University, UC Berkeley, UC Irvine, UCLA, UCSD, Universidad Torcuato Di Tella, University College London, University of Chicago, University of Maryland, University of Wisconsin at Madison, USC, and Yale University, research assistants David M. Kang and Yong Hyeon Yang, and from conversations with Richard Blundell, Donald Brown, Steven Berry, Lanier Benkard, Andrew Chesher, Jinyong Hahn, Jerry Hausman, James Heckman, Joel Horowitz, Guido Imbens, Arthur Lewbel, Daniel McFadden, Lars Nesheim, Elie Tamer, and Chris Sims.
© 2008 The Econometric Society
DOI: 10.3982/ECTA5940
946
ROSA L. MATZKIN
tures of the primitive functions and distributions in the model. These features are needed, for example, for the analysis of counterfactuals, where one wants to calculate the outcomes that would result when some of the elements in the model change. The analysis of identification dates back to the works of H. Working (1925), E. J. Working (1927), Tinbergen (1930), Frisch (1934, 1938), Haavelmo (1943, 1944), Hurwicz (1950a), Koopmans and Reiersol (1950), Koopmans, Rubin, and Leipnik (1950), Wald (1950), Fisher (1959, 1961, 1965, 1966), Wegge (1965), Rothenberg (1971), and Bowden (1973). While the importance of identification in models with nonparametric functions and with nonadditive unobservable random terms has been recognized since the early years (see Hurwicz (1950a, 1950b)), most works at the time concentrated on providing conditions for linear models with additive unobservable random terms or for nonlinear parametric models. More recently, nonparametric models with nonadditive unobservable variables have received increasing attention, with new theoretical developments and application possibilities frequently appearing. This has motivated researchers to revisit older studies armed with new tools. In the context of identification in simultaneous equations models, Benkard and Berry (2006) recently revisited the path-breaking results by Brown (1983), and their extension by Roehrig (1988), on the identification of nonlinear and nonparametric simultaneous equations, and found some arguments to be controversial. A contribution of this paper is to provide a different set of conditions for the identification of such models. The current literature on identification of nonparametric models with endogenous regressors is very large. Within this literature, Ng and Pinske (1995), Newey, Powell, and Vella (1999), and Pinske (2000) considered nonparametric triangular systems with additive unobservable random terms. Altonji and Ichimura (2000) considered models with latent variables. Altonji and Matzkin (2001, 2005) provided estimators for average derivatives in nonseparable models, using conditional independence, and for nonparametric nonseparable functions, using exchangeability. Altonji and Matzkin (2003) extended their 2001 results to discrete endogenous regressors. Chesher (2003) considered local identification of derivatives in triangular systems with nonadditive random terms. Imbens and Newey (2003) studied global identification and estimation of derivatives and average derivatives in triangular systems with nonadditive random terms. Matzkin (2003, 2004) considered estimation under conditional independence, with normalizations and restrictions on nonadditive functions. Vytlacil and Yildiz (2007) studied estimation of average effects in models with weak separability and dummy endogenous variables. For nontriangular systems, Newey and Powell (1989, 2003), Darolles, Florens, and Renault (2003), and Hall and Horowitz (2005) considered estimation using conditional moment conditions between additive unobservables and instruments. Brown and Matzkin (1998), Ai and Chen (2003), Chernozhukov and Hansen (2005), and Chernozhukov, Imbens, and Newey (2007) allowed for the unobservable variables to be nonadditive. The latter two articles exploited an independence as-
NONPARAMETRIC SIMULTANEOUS EQUATIONS
947
sumption between the unobservable variables and an instrument, to study identification. Matzkin (2004) considered identification using instruments and an independence condition across the unobservable variables. Blundell and Powell (2003) analyzed several nonparametric and semiparametric models, and provided many references. Matzkin (2007a) provided a partial survey of recent results on nonparametric identification. A parallel approach has considered partial identification in structural triangular and nontriangular models (see Chesher (2005, 2007)). The outline of the paper is as follows. In the next section, we describe the model and its main assumptions. In Section 3, we derive several characterizations of observational equivalence. We demonstrate how these characterizations can be used to determine identification in a linear and an additively separable model, in Section 4, and how they can be used to obtain the already known results for single and triangular nonadditive models, in Section 5. A more stylized method for analyzing identification is presented in Section 6. Section 7 presents the main conclusions of the paper. 2. THE MODEL We consider a system of structural equations, described as (2.1)
U = r(Y X)
where r : RG+K → RG is an unknown, twice continuously differentiable function, Y is a vector of G observable endogenous variables, X is a vector of K observable exogenous variables, and U is a vector of G unobservable variables, which is assumed to be distributed independently of X Let fU denote the density of U, assumed to be continuously differentiable. Our objective is to determine conditions under which the function r and the density fU are identified within a set of functions and densities to which r and fU belong. We will assume that the vector X has a continuous, known density fX that has support RK Assuming that fX is known does not generate a loss of generality, for the purpose of the analysis of identification, because X is observable. A typical example of a system (2.1) is a demand and supply model, (2.2)
Q = D(P I U1 ) P = S(Q W U2 )
where Q and P denote the quantity and price of a commodity, I denotes consumers’ income, W denotes producers’ input prices, U1 denotes an unobservable demand shock, and U2 denotes an unobservable supply shock. If the demand function, D, is strictly increasing in U1 and the supply function, S, is strictly increasing in U2 , one can invert these functions and write this system as in (2.1), with Y = (P Q), X = (I W ), and U = (U1 U2 ), r1 denoting the
948
ROSA L. MATZKIN
inverse of D with respect to U1 , and r2 denoting the inverse of S with respect to U2 : U1 = r1 (Q P I) U2 = r2 (Q P W ) We will assume that the system of structural equations (2.1) possesses a unique reduced form system (2.3)
Y = h(X U)
where h : RK+G → RG is twice continuously differentiable. In particular, conditional on X, r is one-to-one in Y In the supply and demand example, this reduced form system is expressed as (2.4)
Q = h1 (I W U1 U2 ) P = h2 (I W U1 U2 )
where the values of Q and P are the unique values satisfying (2.2). To determine conditions for identification, we start out from a characterization of observational equivalence, within a set of functions and distributions to which r and fU are, respectively, assumed to belong. We will let Γ denote the set of functions to which r belongs and let Φ denote the set of densities to which fU belongs. The functions r : RG+K → RG , in the set Γ , satisfy the following properties. (i) r is twice continuously differentiable on RG+K . (ii) For each x ∈ RK , r(· x) : RG → RG is one-to-one and onto RG . (iii) For each (y x) ∈ RG+K , the Jacobian determinant |∂ r(y x)/∂y| is strictly positive. Note that to each such r there corresponds a function h : RK+G → RG , which K+G assigns to each value (x u) ∈ R the unique value y satisfying u = r(y x). The function h is twice continuously differentiable on RK+G . For each x ∈ RK , h(x ·) : RG → RG is one-to-one and onto RG . The set Φ will be defined to be the set of densities fU : RG → R such that (i) fU is continuously differentiable on RG and (ii) the support of fU is RG . The differentiability of r and fU will allow us to express conditions in terms of derivatives. The support conditions on fU , the density of X, and on the density of Y conditional on X will allow us to guarantee that all densities converge to 0 as the value of one of their arguments tends to infinity. The condition on the sign of the Jacobian determinant |∂ r(y x)/∂y| is a normalization. Given fX , we can derive, for each ( r fU ) ∈ (Γ × Φ), a unique distribution function, FYX (·; ( r fU )) for the vector of observable variables (Y X). Under
NONPARAMETRIC SIMULTANEOUS EQUATIONS
949
our conditions, if fX is differentiable, FYX (·; ( r fU )) is characterized by a differentiable density fYX (·; ( r fU )), which is defined, for all y x, by fYX (y x; ( r fU )) = fY |X=x (y; ( r fU ))fX (x) ∂ r(y x) fX (x) = fU ( r(y x)) ∂y 3. OBSERVATIONAL EQUIVALENCE Following the standard definition, we will say that two elements of (Γ × Φ) are observationally equivalent if they generate the same distribution of the observable variables. Formally, this is stated as follows. DEFINITION 3.1: ( r fU ) (r fU ) ∈ (Γ × Φ) are observationally equivalent, if for all (y x) ∈ RG+K , FYX (y x; ( r fU )) = FYX (y x; (r fU )) The standard, closely related, definition of identification is given by the next statement: r fU ) ∈ DEFINITION 3.2: (r fU ) ∈ (Γ × Φ) is identified in (Γ × Φ), if for all ( r fU ), ( r fU ) is not observationally equivalent to (Γ × Φ) such that (r fU ) = ( (r fU ). More generally, we might be interested in the identification of the value of r fU )|( r fU ) ∈ (Γ × Φ)} denote the set some functional, μ(r fU ). Let Ω = {μ( of all possible values that μ can attain over pairs ( r fU ) in (Γ × Φ), given fX . Then we have the following definition. DEFINITION 3.3: The value ω ∈ Ω is observationally equivalent to ω ∈ Ω if = μ( r fU ), ω = μ(r fU ), and there exist ( r fU ) (r fU ) ∈ (Γ × Φ) such that ω ( r fU ) is observationally equivalent to (r fU ). The value ω of any functional μ at (r fU ) is identified if all pairs ( r fU ) ∈ (Γ × Φ) that are observationally equivalent to (r fU ) are assigned, by μ, the same value, ω; that is, μ( r fU ) = ω. The formal statement follows. DEFINITION 3.4: The value ω = μ(r fU ) ∈ Ω is identified within Ω, with ∈ Ω such that ω = respect to (Γ × Φ), if for any ( r fU ) ∈ (Γ × Φ) and ω μ( r fU ) = μ(r fU ) = ω, ( r fU ) is not observationally equivalent to (r fU ); that is, r fU )) = FYX (·; (r fU )) FYX (·; ( Since, in our model, the continuous marginal density fX , whose support is RK , does not depend on ( r fU ) or (r fU ), we can state that ( r fU ), (r fU ) ∈
950
ROSA L. MATZKIN
(Γ × Φ) are observationally equivalent if for all y x, ∂r(y x) ∂ r(y x) = fU ( (3.1) r(y x)) fU (r(y x)) ∂y ∂y Note that, under our conditions, if r is identified, so is fU . This is easy to see, since for any u, ∂h(x u) fU (u) = fY |X=x (h(x u)) ∂u and if r is identified, so is h. We will analyze the identification of functionals, μ, of (r fU ). The approach used will be to first determine conditions for observational equivalence ber fU ) ∈ (Γ ×Φ), and then verify that for any such ( r fU ) tween (r fU ) and any ( that is observationally equivalent to (r fU ), μ( r fU ) = μ(r fU ) Our starting point is equation (3.1). Given the true (r fU ) and an alternative function r ∈ Γ , this equation can be used to derive a density, fU , such that ( r fU ) is observationally equivalent to (r fU ). For this, we study the re lationship between the mapping which assigns to any (y x), the value u of U, satisfying (3.2)
u = r(y x)
and the mapping which assigns to that same (y x), the value u of U, satisfying (3.3)
u = r(y x)
Since r ∈ Γ , (3.2) implies that y = h(x u) where h is the reduced form function corresponding to r. Substituting in (33), we get that2 u = r( h(x u) x) Hence, we can write (3.1) as ∂ ∂r( r(h(x u) x) h(x u) x) u) u) x) fU ( = fU r(h(x ∂y ∂y 2 Brown (1983) and Roehrig (1988) used a mapping like this. They analyzed the restrictions that independence between the observable and unobservable explanatory variables imposes on this mapping, deriving different results than the ones we derive in this paper.
NONPARAMETRIC SIMULTANEOUS EQUATIONS
951
or, after dividing both sides by the first determinant, as (3.4)
−1 ∂r( r(h(x u) x) h(x u) x) ∂ u) = fU r(h(x fU ( u) x) ∂y ∂y
Two important implications can be derived from expression (3.4). First, (3.4) implies that fU ( u) is completely determined by r, r, and fU . That is, once we = know r, r, and fU , we can, when it exists, determine the distribution of U r(Y X) such that ( r fU ) is observationally equivalent to (r fU ). Second, since the left-hand side of (3.4) does not depend on x, the right-hand side should not depend on x either. As we next show, the latter is a condition for independence and X. between U 3.1. Independence = of U r(Y X) Consider deriving, for each x, the conditional density, fU|X=x given X = x. Under our assumptions, this conditional density always exists and X), and belongs to Φ. Since U = r( h(X U) h and r are one-to-one and onto, conditional on X = x, it follows by the standard formula for transformation of variables that ∂r( h(x u) x) (3.5) h(x u) x) ( u) = fU r( fU|X=x ∂ u Differentiating with respect to u the identity u = r( h(x u) x) one gets that −1 ∂ h(x u) ∂ r(h(x u) x) = ∂ u ∂y conditional on X = x is given by Hence, the density of U ∂r( h(x u) x) fU|X=x ( u) = fU r(h(x u) x) ∂ u ∂r( h(x u) x) ∂ h(x u) = fU r( h(x u) x) ∂y ∂ u −1 ∂r( r(h(x u) x) h(x u) x) ∂ = fU r(h(x u) x) ∂y ∂y
952
ROSA L. MATZKIN
is independent of X if and Under our assumptions, the random variable U only if for all x, fU|X=x ( u) = fU ( u) Note that this is exactly the same condition as in (3.4). Hence, requiring observational equivalence between ( r fU ) and (r fU ) is equivalent to requiring Making use of in (3.5) equals, for all x, the marginal density of U. that fU|X=x our support and differentiability assumptions, the condition for independence and X can be expressed as the condition that for all x between U u, ∂fU|X=x ( u) = 0 ∂x 3.2. Characterization of Independence To obtain a more practical characterization of the independence condition, we proceed to express it in terms of the derivatives of the functions r and r. Let ∂ log fU (r(y x))/∂u denote the G × 1 gradient of log(fU (u)) with respect to u, evaluated at u = r(y x). Since fU|X=x ( u) > 0, the condition that for all x u, ∂fU|X=x ( u )/∂x = 0 is equivalent to the condition that for all x u, ∂ log fU|X=x ( u) = 0 ∂x Since −1 ∂r( r(h(x u) x) h(x u) x) ∂ fU|X=x r( h(x u ) x) ( u ) = f U ∂y ∂y the above is equivalent to the condition that h(x u) x))) ∂ log(fU (r( 0= ∂u h(x u) ∂r( h(x u) x) ∂r(h(x u) x) ∂ + × ∂y ∂x ∂x ∂r(h(x ∂ u) x) r(h(x u) x) ∂ log + − log ∂x ∂y ∂y ∂r(h(x ∂ u) x) r(h(x u) x) ∂h(x u) ∂ log − log + ∂y ∂y ∂y ∂x
NONPARAMETRIC SIMULTANEOUS EQUATIONS
953
or, substituting h(x u) by y, to (3.6)
h(x u) ∂r(y x) ∂r(y x) ∂ ∂ log fU (r(y x)) + ∂u ∂x ∂y ∂x ∂r(y x) ∂r(y x) ∂ h(x u) ∂ ∂ log log + + ∂x ∂y ∂y ∂y ∂x ∂ ∂ r(y x) r(y x) ∂h(x u) ∂ ∂ log log + = ∂x ∂y ∂y ∂y ∂x
This may be interpreted as stating that the proportional change in the conditional density of Y given X, when the value of X changes and Y responds to that change according to h, has to equal the proportional change in the value of the determinant determined by r when X changes and Y responds to that change according to h. To obtain an equivalent expression for (3.6) in terms of only the structural functions, r and r, and the density fU , we note that differentiating the identity y = h(x r(y x)) with respect to x gives h(x r(y x)) ∂ r(y x) ∂ h(x u) ∂ + 0= ∂x ∂ u ∂x Using the relationship, derived above, that −1 ∂ h(x u) ∂ r(h(x u) x) = ∂ u ∂y and substituting y for h(x u) gives an expression for the derivative with respect to x of the reduced form function h, in terms of derivatives with respect to y and x, of the structural function r: −1 ∂ r(y x) ∂ r(y x) ∂ h(x u) =− ∂x ∂y ∂x where h(x u) is the reduced form function of the alternative model evaluated at u = r(y x). Hence, a different way of writing condition (3.6), in terms of the structural functions r and r of the observable variables, and the density fU , is (3.7)
∂ log fU (r(y x)) ∂u
−1 r(y x) ∂ r(y x) ∂r(y x) ∂r(y x) ∂ − ∂x ∂y ∂y ∂x
954
ROSA L. MATZKIN
∂r(y x) ∂ ∂ r(y x) ∂ − log log + ∂x ∂y ∂x ∂y ∂r(y x) ∂ ∂ r(y x) ∂ − log log − ∂y ∂y ∂y ∂y −1 ∂ r(y x) ∂ r(y x) × ∂y ∂x = 0 We can express condition (3.7) in a more succinct way. Define, for any y x, the G × K matrix A(y x; ∂r ∂ r), the K × 1 vector b(y x; ∂r ∂ r ∂2 r ∂2 r), and the G × 1 vector s(y x; fU r) by
−1 ∂r(y x) ∂r(y x) ∂ r(y x) ∂ r(y x) A(y x; ∂r ∂ r) = − ∂x ∂y ∂y ∂x r) b(y x; ∂r ∂ r ∂2 r ∂2 ∂r(y x) ∂ ∂ r(y x) ∂ − log log =− ∂x ∂y ∂x ∂y ∂r(y x) ∂ ∂ r(y x) ∂ − log log + ∂y ∂y ∂y ∂y −1 ∂ r(y x) ∂ r(y x) × ∂y ∂x and s(y x; fU r) =
∂ log(fU (r(y x))) ∂u
Condition (3.7) can then be expressed as stating that for all y x, s(y x; fU r) A(y x; ∂r ∂ r) r) = b(y x; ∂r ∂ r ∂2 r ∂2 We index the G × K matrix A(y x) by (∂r ∂ r), the K × 1 vector b(y x) by 2 2 (∂r ∂ r ∂ r ∂ r), and the G × 1 vector s(y x) by (fU r) to emphasize that the value of A depends on the first order derivatives of the functions r and r, the value of b depends on the first and second order derivatives of the functions r and r, and the value of s depends on the function fU and the value of the function r. Our arguments above lead to the following result:
NONPARAMETRIC SIMULTANEOUS EQUATIONS
955
r ∈ Γ . Define the denTHEOREM 3.1: Suppose that (r fU ) ∈ (Γ × Φ) and that conditional on X = x as in (3.5). Then sity of U ∂fU|X=x ( u) = 0 for all x u ∂x if and only if for all y x, (3.8)
s(y x; fU r) A(y x; ∂r ∂ r) r) = b(y x; ∂r ∂ r ∂2 r ∂2 3.3. Characterization of Observational Equivalence as an Independence Condition
Making use of the connection between independence and observational equivalence, we can use (3.8) to provide a characterization of observational equivalence. This is established in the next theorem. THEOREM 3.2: Suppose that (r fU ) ∈ Γ × Φ and r ∈ Γ . There exists fU ∈ Φ such that ( r fU ) is observationally equivalent to (r fU ) if and only if for all y x, (3.8) is satisfied. The proof of Theorem 3.2 follows, again, by the previous arguments. Observational equivalence between ( r fU ) and (r fU ), as in (3.1) and (3.4), implies that fU|X=x defined by (3.5) satisfies ∂fU|X=x ( u)/∂x = 0 for all u x. By The orem 3.1, this implies (3.8). Conversely, given (r fU ) and r, define fU|X=x by (3.5). The condition in (3.8) implies, by Theorem 3.1, that ∂fU|X=x ( u)/∂x = 0 is independent of X. This together with (3.4) and (3.5) for all u x. Hence, U implies that ( r fU ) and (r fU ) are observational equivalent. We next provide some intuition about condition (3.8) by means of a particular example. Note that (3.8) is a set of K restrictions on the density fU , the function r, and the alternative function r. These restrictions highlight the power of the density fU to restrict the set of observationally equivalent values of functionals. Suppose, for example, that the model has the form U = m(Y Z) + BX where Y is the vector of observable endogenous variables and (Z X) ∈ RK1 +K2 is a vector of observable exogenous variables (K2 ≥ G; K1 may be 0), and where B is a G × K2 matrix of constants. Let an alternative model be =m (Y Z) + BX U Consider determining the implications of observational equivalence for the relationship between ∂m(y z)/∂(y z) and ∂ m(y z)/∂(y z) at some specified value (y z) of (Y Z). Assume that the range of the function
956
ROSA L. MATZKIN
∂ log(fU (r(y z ·)))/∂u : RK2 → RG contains an open neighborhood. Note that as y z stay fixed and x varies, the matrix A(y z x; ∂r ∂ r) and the vector do r) stay constant, since the derivatives of m and m b(y z x; ∂r ∂ r ∂2 r ∂2 with respect to X are constant. not depend on x, and the derivatives B and B On the other hand, the value of ∂ log(fU (r(y z ·)))/∂u will, by assumption, vary. When multiplied by nonzero elements of A(y x; ∂r ∂ r), these different values of ∂ log(fU (r(y z ·)))/∂u should cause the equality in (3.8) not to be satisfied. Hence, observational equivalence will force elements of the matrix A(y z x; ∂r ∂ r) to be zero. Let aij denote the element in the ith row and r). It is possible to show (see, e.g., Brown (K1 + j)th column of A(y z x; ∂r ∂ (1983), Roehrig (1988), or Matzkin (2005)) that aij = 0 if and only if the rank of the matrix ⎛ ∂r i (y z x) ∂r i (y x) ⎞ ⎜ ∂(y z) ⎜ ⎝ ∂ r(y z x) ∂(y z)
∂xj ⎟ ⎟ ∂ r(y x) ⎠ ∂xj
is G + 1, where r = (r 1 r G ). Hence, observational equivalence together with variation in the value of the vector ∂ log(fU (r(y z x)))/∂u will imply restrictions on the rank of matrices whose elements are derivatives of r and of r. The next subsection provides a rank condition on a matrix that depends also on r), and from r ∂2 r ∂2 the vector ∂ log(fU (r(y z x)))/∂u and on b(y z x; ∂r ∂ which all particular cases can be derived. 3.4. Rank Conditions for Observational Equivalence and X, or alternatively, the conThe condition for independence between U dition for observational equivalence, can be expressed in terms of a condition about the rank of a matrix. To see this, recall the equation determining the conditional on X = x. By our assumptions, this distribution distribution of U always exists. Its density is defined by the condition that for all y x, ∂ ∂r(y x) r(y x) = fU (r(y x)) ( r(y x)) fU|X=x ∂y ∂y Taking logs on both sides and differentiating the expression first with respect to y and then with respect to x, one gets that ∂ ∂ log fU|X=x ( r(y x)) ∂ r(y x) ∂ r(y x) + log (3.9) ∂ u ∂y ∂y ∂y ∂r(y x) ∂ ∂ log(fU (r(y x))) ∂r(y x) + log = ∂u ∂y ∂y ∂y
NONPARAMETRIC SIMULTANEOUS EQUATIONS
957
and (3.10)
∂ ∂ log fU|X=x ( r(y x)) ∂ r(y x) ∂ r(y x) + log ∂ u ∂x ∂x ∂y (t) ∂ log fU|X=x + ∂x t= r(yx) ∂r(y x) ∂ ∂ log(fU (r(y x))) ∂r(y x) + log = ∂u ∂x ∂x ∂y
r(y x))/∂ u and ∂ log fU (r(y x))/∂u are G × 1 vectors, ∂ r(y where ∂ log fU ( x)/∂y and ∂r(y x)/∂y are G × G matrices, whose i jth entries are, rer(y x)/∂x and ∂r(y x)/∂x are spectively, ∂ r i (y x)/∂yj and ∂r i (y x)/∂yj ; ∂ G × K matrices, whose i jth entries are, respectively, ∂ r i (y x)/∂xj and i r(y x)/∂y|)/∂y, ∂ log(|∂r(y x)/∂y|)/∂y are G × 1 vec∂r (y x)/∂xj ; ∂ log(|∂ tors, and ∂ log(|∂ r(y x)/∂y|)/∂x, ∂ log(|∂r(y x)/∂y|)/∂x are K × 1 vectors, r G ) and r = (r 1 r G ). where r = ( r 1 The critical term in these expressions, whose value determines the depen and X, is ∂ log fU|X=x (t)/∂x. Given r, fU , and r, one can view dence between U (3.9) and (3.10) as a system of equations with unknown vectors ∂ log fU|X=x ( r(y x)) ∂ u
and
∂ log fU|X=x (t) ∂x t= r(yx)
We may ask under what conditions a solution exists and satisfies for all t, ∂ log fU|X=x (t) = 0 ∂x t= r(yx) The following theorem establishes a rank condition that guarantees this, and hence it provides an alternative characterization of observational equivalence. Let ∂r(y x) ∂ r(y x) ∂ ∂ 2 2 log − log y (y x; ∂r ∂ r ∂ r) = r ∂ ∂y ∂y ∂y ∂y and ∂r(y x) ∂ ∂ r(y x) ∂ − log log r) = r ∂ x (y x; ∂r ∂ r ∂ ∂x ∂y ∂x ∂y 2
2
958
ROSA L. MATZKIN
THEOREM 3.3: Suppose that (r fU ) ∈ Γ × Φ and r ∈ Γ . There exists fU ∈ Φ such that ( r fU ) is observationally equivalent to (r fU ) if and only if for all y x, the rank of the matrix ⎞ ⎛ r) y (y x; ∂r ∂2 r ∂ r ∂2 ∂ r(y x) ∂r(y x) ∂ log(fU (r(y x))) ⎟ ⎜ ∂y ⎟ ⎜ + ⎟ ⎜ ∂y ∂u ⎟ ⎜ (3.11) ⎜ ⎟ ⎟ ⎜ 2 2 r) x (y x; ∂r ∂ r ∂ r ∂ ⎟ ⎜ r(y x) ⎠ ⎝ ∂ ∂r(y x) ∂ log(fU (r(y x))) ∂x + ∂x ∂u is G. PROOF: Let ry =
∂ r(y x) ∂y
ry =
∂r(y x) ∂y
rx =
∂ r(y x) ∂x
rx =
∂r(y x) ∂x
( r(y x)) ∂ log fU|X=x ∂ log fU (r(y x)) su = ∂u ∂u (t) ∂ log fU|X=x sx = ∂x t= r(yx) ∂ ∂r(y x) r(y x) ∂ log ∂ log ∂y ∂y y = − and ∂y ∂y ∂ ∂r(y x) r(y x) ∂ log ∂ log ∂y ∂y x = − ∂x ∂x su =
Equations (3.9) and (3.10) can be written as su ry = su ry + y rx + sx = su rx + x su or, after transposing, as (3.12)
ry su = ry su + y
(3.13)
rx su + sx = rx su + x
Equation (3.12) states that ry su + y is a linear combination of the columns of ry , with the coefficients given by su . Since ry is invertible, this vector of coefficients is unique. Suppose that sx = 0. Then equation (3.13) states that rx su + x
NONPARAMETRIC SIMULTANEOUS EQUATIONS
959
is a linear combination of the columns of rx and that the vector of coefficients is su also. Consider the (G + K) × (G + 1) matrix ry ry su + y (3.14) rx rx su + x sx = 0, The rank of this matrix must be at least G, because ry is invertible. When the last column is a linear combination of the other G columns. Hence, when sx = 0, the rank of this matrix is G. But observational equivalence implies that su , substituting the sx = 0. (This can also be seen by using (3.12) to solve for result in (3.13), and obtaining then that rx ( ry )−1 ry )su + x − rx ( ry )−1 y sx = (rx − which is exactly the transpose of the expression in (3.7).) Hence, observational equivalence implies that the rank of the matrix in (3.11) and (3.14) is G. Conversely, suppose that the matrix in (3.11) and (3.14) has rank G for all y x. Then since ry is invertible, it must be that the last column is a linear combination of the first G columns. Let λ ∈ RG be the vector of coefficients such that (3.15)
ry λ = ry su + y
su , and since Note that λ is unique. Since su satisfies (3.12), it must be that λ = the rank of the matrix being G implies that λ satisfies (3.16)
rx λ = rx su + x
it must be also that (3.17)
rx su = rx su + x
This implies that sx = 0, which, as shown above, is just (3.7). Hence, if the rank Q.E.D. of the matrix is G, ( r fU ) is observationally equivalent to (r fU ). 4. IDENTIFICATION IN LINEAR AND SEPARABLE MODELS We next provide examples that use the results derived in the previous sections to determine the identification of functionals of (r fU ). Recall that a funcr fU ) is observationally equivalent to tional of (r fU ) is identified if whenever ( r fU ) equals its value at (r fU ). (r fU ), the value of the functional at ( 4.1. A Linear Simultaneous Equations Model Suppose that r and r are specified to be linear: r(y x) = By + Cx
+ Cx and r(y x) = By
960
ROSA L. MATZKIN
are G × K are G × G nonsingular matrices, and C and C where B and B matrices. Since the functions are linear, for all y x, y = x = 0. Let denote the matrices of all coefficients. Con = [B C] F = [B C] and F sider identification of the first row, F1 , of F . Suppose that there exists a value of (y x) such that the gradient of log fU evaluated at r(y x) is (s1 (r(y x) sG (r(y x))) = (1 0 0) ; that is, ∂ log fU (r(y x))/∂u1 = 0 and for j = 2 G, ∂ log fU (r(y x))/∂uj = 0. Observational equivalence then implies that the matrix (F
F1 )
has rank G. Consider linear restrictions on F1 , denoted by φF1 = 0, where φ is a constant matrix. The rank condition for identification is then3 rank(φF ) = G − 1 is G, F = F c for some c. Then, To see this, note that since the rank of F 1 premultiplying by φ gives c = φF = 0 φF 1 says that rank(φF ) = G − 1. Since the first column The rank condition for F of φF is zero, this rank condition implies that the other G − 1 columns of F must be linearly independent, so that all the elements of c other than the first element, c1 , must be zero. Therefore, 1 F1 = c1 F 1 is equal to 1, By the usual normalization that one of the elements of F1 and F we have c = 1. That is, we must have that F1 = F1 . Hence, if Γ is the set of linear functions whose coefficients are characterized by F , the linear restrictions on the first row, φF1 = 0, satisfy rank(φF ) = G − 1, one coefficient of F1 is normalized to 1, and for some (y x), s1 (r(y x)) = 0 while for j = 2 G, sj (r(y x)) = 0, then F1 is identified. 4.2. A Demand and Supply Example Consider a demand and supply model specified as u1 = D(p q) + m(I) u2 = S(p q) + v(w) 3 I am grateful to Whitney Newey for detailed comments on this and the example in Section 4.2, which included the following new result with its proof. See Matzkin (2005, 2007a, 2007b, 2008) for other sets of conditions.
NONPARAMETRIC SIMULTANEOUS EQUATIONS
961
and an alternative model specified as (I) u1 = D(p q) + m S(p q) + v(w) u2 = where p and q are, respectively, price and quantity, and I and w are, respectively, income and wages. Suppose that for all I, mI (I) = ∂m(I)/∂I > 0, and for all w vW (w) = ∂v(w)/∂w > 0. Assume that (I w) is independent of (u1 u2 ) and the supports of (m(I) v(w)) and of (D(p q) S(p q)) are R2 . Further assume that there exists (u01 u02 ), and that for all u11 , u22 , there exist u12 u21 such that ∂fU (u01 u02 ) ∂fU (u01 u02 ) = = 0 ∂u1 ∂u2 ∂fU (u11 u12 ) = 0 ∂u1
∂fU (u11 u12 ) = 0 ∂u2
∂fU (u21 u22 ) = 0 ∂u1
∂fU (u21 u22 ) = 0 ∂u2
We will show that the derivatives of the demand and supply functions are iden+m tified up to scale. That is, for any alternative function r = (D S + v) for r fU ) is observationally equivalent to (r fU ), which there exists fU such that ( there exists λ1 λ2 ∈ R such that for all p q I w, p (p q) Dp (p q) = λ1 D q (p q) Dq (p q) = λ1 D I (I) mI (I) = λ1 m and Sp (p q) Sp (p q) = λ2 Sq (p q) Sq (p q) = λ2 vW (w) vW (w) = λ2 Note that, because of the additive separability, p and q depend only on (p q), and I = w = 0. (We suppress arguments for simplicity.) Let p q be arbitrary. By our assumptions, there exists (I 0 w0 ), and for all values I1 w2 there exist values I2 and w1 such that (4.1)
∂fU (D(p q) + m(I0 ) S(p q) + v(w0 )) = 0 ∂u1 ∂fU (D(p q) + m(I0 ) S(p q) + v(w0 )) = 0 ∂u2
962
ROSA L. MATZKIN
∂fU (D(p q) + m(I1 ) S(p q) + v(w1 )) = 0 ∂u1 ∂fU (D(p q) + m(I1 ) S(p q) + v(w1 )) = 0 ∂u2 ∂fU (D(p q) + m(I2 ) S(p q) + v(w2 )) = 0 ∂u1 ∂fU (D(p q) + m(I2 ) S(p q) + v(w2 )) = 0 ∂u2 Observational equivalence implies that for any values of (I w), ⎛ ⎞ Dp (p q) Sp (p q) p + s1 Dp (p q) + s2 Sp (p q) ⎜D Sq (p q) q + s1 Dq (p q) + s2 Sq (p q) ⎟ ⎜ q (p q) ⎟ rank ⎜ ⎟ = 2 ⎝ m ⎠ I (I) 0 s1 mI (I) 0
vW (w)
s2 vW (w)
where s1 = ∂ log fU (D(p q) + m(I) S(p q) + v(w))/∂u1 and s2 = ∂ log fU (D(p q) + m(I) S(p q) + v(w))/∂u2 Letting (I w) = (I0 w0 ), we get that ⎞ ⎛ Sp (p q) p Dp (p q) ⎜D Sq (p q) q ⎟ ⎟ ⎜ q (p q) rank ⎜ ⎟ = 2 ⎝ m I (I0 ) 0 0 ⎠ 0
vW (w0 )
0
Since, by assumption, the matrix Sp (p q) Dp (p q) q (p q) Sq (p q) D is invertible, the third column must be a linear combination of the first two. It follows that for some λ01 = λ1 (p q I0 w0 ) and λ02 = λ2 (p q I0 w0 ), I (I0 ) = 0 λ01 m
vW (w0 ) = 0 and λ02
NONPARAMETRIC SIMULTANEOUS EQUATIONS
963
I (I0 ) = 0 and vW (w0 ) = 0, it must be that λ01 = λ02 = 0. Hence, Since m p (p q) = q (p q) = 0. Since p (p q) and q (p q) do not depend on I w, it follows that for all (I w), ⎛ ⎞ Dp (p q) Sp (p q) s1 Dp (p q) + s2 Sp (p q) ⎜D Sq (p q) s1 Dq (p q) + s2 Sq (p q) ⎟ ⎜ q (p q) ⎟ rank ⎜ ⎟ = 2 ⎝ m ⎠ I (I) 0 s1 mI (I) 0
vW (w)
s2 vW (w)
Letting (I w) = (I1 w1 ), for arbitrary I1 and for w1 as in (4.1), the matrix becomes ⎛ ⎞ Dp (p q) Sp (p q) s1 Dp (p q) ⎜D Sq (p q) s1 Dq (p q) ⎟ ⎜ q (p q) ⎟ ⎜ ⎟ ⎝ m I (I1 ) 0 s1 mI (I1 ) ⎠ 0
vW (w1 )
0
Again, linear independence of the first two columns and the matrix having rank 2 implies that the third column is a linear combination of the first two. The zeroes in the fourth row imply that the coefficient of the second column is zero. Hence, for some λ11 = λ1 (p q I1 w1 ), p (p q) = Dp (p q) λ11 (p q I1 w1 )D q (p q) = Dq (p q) λ11 (p q I1 w1 )D mI (I1 ) = mI (I1 ) λ11 (p q I1 w1 ) and D are not functions of I w, the first two equaSince I1 was arbitrary and D 1 tions imply that λ1 is not a function of I w. Likewise, reaching these equations by fixing I1 , varying (p q) arbitrarely, and letting w1 satisfy (4.1), the third equation implies that λ11 is not a function of p q. Hence, λ11 is a constant. It follows that the derivatives Dp Dq , and mI are identified up to scale. An analogous argument can be used to show that Sp , Sq , and vW are also identified up to scale. 5. IDENTIFICATION IN NONPARAMETRIC NONSEPARABLE MODELS We next apply our results to two standard nonparametric models with nonadditive unobservable random terms. We first consider the single equation model, with G = 1, considered in Matzkin (1999, 2003): y = m(x u)
964
ROSA L. MATZKIN
We show below that in this model, with m strictly increasing in u, application of our theorems implies the well known result that for all u, ∂m(x u) ∂x the partial derivative of m with respect to x, for any fixed value of x and u is identified. In Section 5.2, we consider the triangular model with nonadditive unobservable random terms considered in Chesher (2003) and Imbens and Newey (2003): y1 = m1 (y2 u1 ) y2 = m2 (x u2 ) Assuming that X is distributed independently of (u1 u2 ), and that m1 and m2 are strictly increasing, respectively, in u1 and u2 , we derive the well-known result that for all u1 y2 , ∂m1 (y2 u1 ) ∂y2 is identified.4 5.1. Single Equation Model Consider the model y = m(x u) with y u ∈ R, u and x independently distributed, fU ∈ Φ, and the inverse of m belonging to Γ . Letting r denote the inverse of m with respect to u, we have the model u = r(y x) with ∂r(y x)/∂y > 0. Let r ∈ Γ be an alternative function, so that u = r(y x). The condition for observational equivalence requires that the matrix ry sry + y rx srx + x has rank 1, for all y x, where s = ∂ log fU (r(y x))/∂u. Hence, for all y x, srx ry + x ry = sry rx + y rx 4
See Matzkin (2008) for identification of ∂m1 (y2 u1 )/∂y2 when y2 = m2 (y1 x u2 ).
NONPARAMETRIC SIMULTANEOUS EQUATIONS
965
or rx − rx ry ) = x ry − y rx s(ry Note that
ryx ryy rx ∂ rx − = ∂y ry ry ry ry
Hence, x =
ryx ryx − ry ry
ryy ryy rx rx rx ∂ rx = − + − ry ry ry ry ry ∂y ry and, since y =
ryy ryy − ry ry
ry + y rx ryy − x rx rx rx rx ∂ = − − + ry ry ry ry ry ∂y ry Hence, the rank condition implies that ryy rx rx rx rx ∂ rx rx + + = 0 − − − sry ry ry ry ry ry ry ∂y ry Writing explicitly the arguments of all functions and multiplying both sides of the equality by fU (r(y x))ry (y x) gives rx (y x) rx (y x) ∂ fU (r(y x))ry (y x) − = 0 ry (y x) ry (y x) ∂y Observational equivalence then implies that the function v defined by rx (y x) rx (y x) − v(y x) = fU (r(y x))ry (y x) ry (y x) ry (y x) is a constant function of y. Since for any x, the range of r(· x) is R and fU ry → ry and rx /ry are uniformly bounded, 0 as |y| → ∞, as long as the ratios rx / it must be that for any y, x, v(y x) = 0. Since ry > 0, it follows from these conditions that for all y x at which fU (r(y x)) > 0, (5.1)
rx (y x) rx (y x) = ry (y x) ry (y x)
966
ROSA L. MATZKIN
Hence, observational equivalence implies that the ratio of the derivatives of r is identified. Since u = r(m(x u) x) it follows that rx (y x) ∂m(x u) =− ∂x ry (y x) is identified. 5.2. A Triangular Model Consider now the model y1 = m1 (y2 u1 ) y2 = m2 (x u2 ) with y1 y2 u1 u2 ∈ R, m1 strictly increasing in u1 and m2 strictly increasing in u2 . Assume that x is distributed independently of (u1 u2 ) and that the density of u belongs to Φ. Let r 1 denote the inverse of m1 with respect to u1 and let r 2 denote the inverse of m2 with respect to u2 . Hence u1 = r 1 (y1 y2 ) u2 = r 2 (y2 x) Consider the alternative model u1 = r 1 (y1 y2 ) u2 = r 2 (y2 x) r = ( r 1 r2) ∈ Γ . Assume that r = (r 1 r 2 ) ∈ Γ and Observational equivalence implies that the rank of the matrix ⎡ 1 ⎤ ry1 0 s1 ry11 + y1 ⎢ 1 ⎥ ry2 ry22 s1 ry12 + s2 ry22 + y2 ⎦ ⎣ 0
rx2
s2 rx2 + x
is 2, where s1 = ∂ log fU1 U2 (r 1 (y1 y2 ) r 2 (y2 x))/∂u1 and s2 = ∂ log fU1 U2 (r 1 (y1 y2 ) r 2 (y2 x))/∂u2
967
NONPARAMETRIC SIMULTANEOUS EQUATIONS
Since the first two columns are linearly independent, the third column must be a linear combination of the first two. Hence, for some λ1 , λ2 , λ1 ry11 = s1 ry11 +
ry11 y1 ry11
−
ry11 y1 ry11
λ1 ry12 + λ2 ry22 = s1 ry12 + s2 ry22 + rx2 = s2 rx2 + λ2
ry22 x ry22
−
ry22 x ry22
ry11 y2 ry11
−
ry11 y2 ry11
+
ry22 y2 ry22
−
ry22 y2 ry22
Solving for λ1 and λ2 from the first two equations, and substituting them into the third, one gets, after rearranging terms, the expression 1 1 y1
sr
ry12
−
ry12
r
2 2 y2 2 y2 2 x
−s r
rx2 rx2 − ry22 ry22
r ry22 ∂ rx2 ry22 ry22 y2 rx2 rx2 rx2 − 2 − 2 − 2 2 − 2 rx ∂y2 ry22 ry2 rx ry2 ry22 ry2 1 ry12 ry2 ry12 ry11 y1 ry12 ∂ − − − + ry11 ry11 ry11 ∂y1 ry11 ry11 ry11
ry11
= 0 Multiplying both sides of the equality by fU ry11 ry22 gives 2 y2
r
1 ry12 ry2 ∂fU (r 1 (y1 y2 ) u2 ) 1 ry1 1 − 1 2 ry1 ∂y1 ry1 u2 =r (y2 x) 1 ry2 ry12 ∂ 1 − 1 1 1 ry12 ry1 ∂ry1 ry2 ry1 2 2 1 − 1 + ry2 fU ry1 + ry2 fU 1 ry1 ∂y1 ry1 ∂y1 ry22 1 ∂fU (u1 r 2 (y2 x)) rx2 rx2 2 + 2 ry1 ry2 2 − 2 1 rx ry2 ry2 ∂y2 u1 =r (y1 y2 ) 2 rx rx2 ∂ 2 − 2 2 2 ry2 ry2 ∂ry2 ry22 ry22 1 rx2 rx 1 2 − f r r + + 2 ry1 fU U y y 1 2 2 2 2 rx ry2 ry2 rx ∂y2 ∂y2
=0
968
ROSA L. MATZKIN
or, after gathering terms, 1 ry2 ry12 ∂ 1 1 fU (r (y1 y2 ) u2 ) u =r 2 (y x) ry1 1 − 1 r 2 2 ry1 ∂y1 ry1 2 ry22 1 ∂ 2 rx rx2 2 + 2 ry1 fU (u1 r (y2 x)) u =r 1 (y y ) ry2 2 − 2 1 1 2 rx ∂y2 ry2 ry2 2 y2
= 0 Hence, after dividing by ry22 ry11 , it follows that observational equivalence implies that 1 ry2 ry12 1 ∂ 1 1 (5.2) fU (r (y1 y2 ) u2 ) u =r 2 (y x) ry1 1 − 1 2 2 ry1 ry11 ∂y1 ry1 2 ry22 1 ∂ 2 rx rx2 2 + 2 2 fU (u1 r (y2 x)) u =r 1 (y y ) ry2 2 − 2 1 1 2 rx ry2 ∂y2 ry2 ry2 = 0 Note that the first term does not depend on x, other than through u2 , and the second term does not depend on y1 , other than through u1 . Since the independence between x and (u1 u2 ) implies independence between x and u2 , the result, (5.1), derived in the single equation model, applied to r 2 , can be used to prove that rx2 rx2 = ry22 ry22 This means that the ratio of the derivatives of the structural function r 2 can be identified. This also implies that the second term in (5.2) equals zero. So (5.2) becomes 1 ry ry1 ∂ fU (r 1 (y1 y2 ) u2 )u =r 2 (y x) ry11 12 − 12 = 0 2 2 ry1 ry1 ∂y1 In other words, the function v defined by
v(y1 y2 u2 ) = fU (r (y1 y2 ) u2 )u 1
1 2 2 =r (y2 x) y1
r
1 ry2 ry11
−
ry12
ry11
must be constant in y1 . Since for any y2 and u2 , fU (r 1 (y1 y2 ) u2 )ry11 → 0 as |y1 | → ∞, as long as the ratios of the derivatives of r 1 and r 1 are uniformly
NONPARAMETRIC SIMULTANEOUS EQUATIONS
969
bounded, we can conclude that for all (y1 y2 u2 ), v(y1 y2 u2 ) = 0. Hence, under these conditions, observational equivalence implies that ry12 ry11
=
ry12 ry11
This shows that the ratio of the derivatives of the structural function r 1 can be identified. Since u1 = r 1 (y1 y2 ) = r 1 (m1 (y2 u1 ) y2 ) and ry11 = 0, the implicit function theorem implies that ∂r 1 (m1 (y2 u1 ) y2 ) ∂r 1 (y1 y2 ) ∂m1 (y2 u1 ) ∂y2 ∂y =− 1 =− 1 2 ∂y2 ∂r (m1 (y2 u1 ) y2 ) ∂r (y1 y2 ) y1 =m1 (y2 u1 ) ∂y1 ∂y1 Hence, the partial derivative of m1 with respect to y2 is identified. 6. OBSERVATIONAL EQUIVALENCE OF TRANSFORMATIONS OF STRUCTURAL FUNCTIONS
A stylized way of analyzing observational equivalence can be derived by considering directly the mapping from the vectors of observable and unobservable explanatory variables, X and U, to an alternative vector of unobservable ex generated by an alternative function, planatory variables, U, r.5 To define such a relationship, we note that given the function r and an alternative function r ∈ Γ , we can express r as a transformation g of U and X by defining g for all x u as (6.1)
g(u x) = r(h(x u) x)
where h(x u) is the reduced form function derived from the structural function r. By our assumptions on r and r, it follows that g is invertible in u and that ∂ r(h(x u) x) ∂h(x u) g(u x) ∂ (6.2) ∂u > 0 ∂u = ∂y 5
See Brown (1983) for an earlier development of this approach.
970
ROSA L. MATZKIN
The representation of r in terms of the transformation g implies that for all y x, (6.3)
r(y x) = g(r(y x) x)
Recall that, for any given x, ( r fU|X=x ) generates the same distribution of Y given X = x, as (r fU ) does, if and only if for all y, ∂ ∂r(y x) r(y x) (6.4) = fU (r(y x)) fU|X=x ( r(y x)) ∂y ∂y Hence, using (6.1)–(6.3), we can state that the transformation g(U X) generates the same distribution of Y given X = x as (r fU ) generates if and only if for all u, ∂ g (u x) = fU (u) fU|X=x (6.5) ( g(u x)) ∂u The analogous results to Theorems 3.1 and 3.3 are Theorems 6.1 and 6.2 below. THEOREM 6.1: Suppose that (r fU ) ∈ (Γ × Φ) and that r ∈ Γ . Define the transformation g by (6.1) and let U = g(U X) be such that for all x, fU|X=x ∈ Φ. Then ∂fU|X=x ( u)/∂x = 0 for all x and u if and only if for all u and x, (6.6)
−1 ∂ ∂ g(u x) ∂ g(u x) ∂ g(u x) ∂ log(fU (u)) + log − ∂u ∂u ∂u ∂u ∂x ∂ g(u x) ∂ = log ∂x ∂u
r ∈ Γ . Define the THEOREM 6.2: Suppose that (r fU ) ∈ (Γ × Φ) and that = transformation g by (6.1) and let U g(U X). Suppose further that for all x, fU|X=x ∈ Φ. Then (fU g(r(y x) x)) is observationally equivalent to (r fU ) if and only if for all u x, the rank of the matrix ⎞ ⎛ ∂ g (u x) ∂ log ⎜ ∂ ∂ log fU (u) g(u x) ∂u ⎟ ⎜ ⎟ − ⎜ ⎟ ∂u ∂u ∂u ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ∂ g(u x) ⎜ ⎟ ∂ log ⎝ ∂ ⎠ g(u x) ∂u − ∂x ∂x is G.
NONPARAMETRIC SIMULTANEOUS EQUATIONS
971
The proofs of Theorems 6.1 and 6.2 use arguments similar to the ones used to derive Theorems 3.1 and 3.3, and are given in the Appendix. To provide an example of the usefulness of these results, suppose that X ∈ R and G = 2, and consider evaluating the implications of observational equivalence when the relationship between r and r is given by r(y x) = B(x)r(y x) where B(x) is a 2 × 2 matrix of functions. One such example for B(x) could 6 be cos(x) sin(x) (6.7) B(x) = − sin(x) cos(x) Application of Theorem 6.2 yields the result that observational equivalence implies that for all x u, the matrix ⎛ ∂ log fU (u) ⎞ cos(x) − sin(x) ⎜ ⎟ ∂u1 ⎜ ⎟ ⎜ ⎟ (u) ∂ log f U ⎟ ⎜ sin(x) cos(x) ⎠ ⎝ ∂u2 −u1 sin(x) + u2 cos(x) −u1 cos(x) − u2 sin(x) 0 must have rank 2. This holds if and only if for all u1 u2 , ∂fU (u1 u2 )/∂u1 u1 = ∂fU (u1 u2 )/∂u2 u2 Note that this condition is satisfied by the bivariate independent standard nor mal density. Hence, if U is distributed N(0 I), B(x) is as specified above, and u = r(y x) = g(r(y x) x) = B(x)r(y x) it follows by Theorem 6.2 that ( r fU ) is observationally equivalent to (r fU ). 7. CONCLUSIONS We have developed several characterizations of observational equivalence for nonparametric simultaneous equations models with nonadditive unobservable variables. The models that we considered can be described as U = r(Y X) 6
This is the example in Benkard and Berry (2006, p. 1433, footnote 4).
972
ROSA L. MATZKIN
where U ∈ RG is a vector of unobservable exogenous variables, distributed independently of X, X ∈ RK is a vector of observable exogenous variables, Y is a vector of observable endogenous variables, and r is a function such that, conditional on X, r is one-to-one. Our characterizations were developed by considering an alternative func = tion, r, and analyzing the density of U r(Y X). We asked what restrictions on r, r, and the density, fU , of U are necessary and sufficient to guarantee that is distributed independently of X. We showed that these restrictions charU acterize observational equivalence and we provided an expression for them in terms of a restriction on the rank of a matrix. The use of the new results was exemplified by deriving known results about identification in nonadditive single equation models and triangular equations models. An example of a separable demand and supply model provided insight into the power of separability restrictions. We also developed a simplified approach to characterize observational equivalence, which is useful when the alternative function, r, is defined as a transformation of the function r. APPENDIX PROOF OF THEOREM 6.1: Define the function g : RG+K → RG by u = g( u x) where u is defined, as in Section 6, by u = g(u x) Then, since u = g( g(u x) x) we get, by differentiating this expression with respect to u and with respect to x, that (A.1)
I=
g(u x) ∂g( g(u x) x) ∂ ∂ u ∂u
0=
∂g( u x) ∂ g(u x) ∂g( u x) + ∂ u ∂x ∂x
and (A.2)
To derive (6.6), we rewrite (6.5) as ∂g( u x) (A.3) ( u) = fU (g( u x)) fU|X=x ∂ u
NONPARAMETRIC SIMULTANEOUS EQUATIONS
973
and X is equivalent to requiring that for all Independence between U u x, ( u) ∂ log fU|X=x = 0 ∂x Taking logs on both sides of (A.3) and differentiating the resulting expressions with respect to x, we get (A.4)
∂ log fU|X=x ( u) ∂x
∂g( u x) ∂ log ∂ log fU (g( u x) u x)) ∂g( ∂ u = + ∂u ∂x ∂x
We will get expressions for the terms in the right-hand side of (A.4) in terms of g, u, and x. By (A.1) and (A.2), −1 ∂ g(u x) ∂ g(u x) ∂g( u x) =− ∂x ∂u ∂x Differentiating with respect to u the expression u = g(g( u x) x) one gets I=
u x) ∂ g(g( u x) x) ∂g( ∂u ∂ u
Hence, (A.5)
∂g( g(u x) g(u x) x) ∂ 1= ∂u ∂ u
Taking logs and differentiating both sides of (A.5) with respect to x and with respect to u we get ∂g( ∂g( g(u x) x) g(u x) x) ∂ log ∂ log ∂ g(u x) ∂ u ∂ u 0= + ∂ u ∂x ∂x ∂ g(u x) ∂ log ∂u + ∂x
974 and
ROSA L. MATZKIN
∂ ∂g( g(u x) g(u x) x) ∂ log ∂ log ∂ g(u x) ∂ u ∂u + 0= ∂ u ∂u ∂u
Hence, ∂g( u x) ∂ log ∂ u ∂x ∂ ∂ g (u x) g (u x) −1 ∂ log ∂ log ∂ g(u x) ∂ g(u x) ∂u ∂u − = ∂u ∂u ∂x ∂x Substituting into (A.4), we get ( u) ∂ log fU|X=x ∂x
∂g( u x) ∂ log u x) u x)) ∂g( ∂ log fU (g( ∂ u + = ∂u ∂x ∂x −1 ∂ log fU (g( ∂ g(u x) g(u x) u x)) ∂ =− ∂u ∂u ∂x ∂ g(u x) −1 ∂ log g(u x) ∂ g(u x) ∂u ∂ + ∂u ∂u ∂x ∂ g(u x) ∂ log ∂u − ∂x
( u)/∂x = 0, we get (6.6). Substituting g( u x) by u, and setting ∂ log fU|X=x Q.E.D. PROOF OF THEOREM 6.2: Let ( u) ( u) ∂ log fU|X=x ∂ log fU|X=x s = su = x ∂ u ∂x u= u= g(ux) g(ux) su =
∂ log fU (u) ∂u
NONPARAMETRIC SIMULTANEOUS EQUATIONS
975
∂ g(u x) ∂ g(u x) gx = ∂u ∂x ∂ ∂ g (u x) g (u x) ∂ log ∂ log ∂u ∂u u = x = ∂u ∂x
gu =
Taking logs of both sides of (6.5) and differentiating with respect to u and x, we get su gu + u = su gx + x = 0 su sx + or, after transposing, (A.6)
gu su + u = su
(A.7)
su + x = 0 gx sx +
Using the first equality to solve for su and substituting the result into the second equality, one gets gx ( gu )−1 su + gx ( gu )−1 u − x sx = − is independent of X if and only if By Theorem 6.1, U sx = 0. Consider the matrix gu su − u (A.8) gx − x and X, and, by Observational equivalence implies independence between U sx = 0, equations (A.6) and (A.7) imply that Theorem 6.1, that sx = 0. When gu , and that − x is that same su − u is a linear combination of the columns of gu is invertible, this imlinear combination, but of the columns of gx . Since plies that the rank of the matrix must be G. Hence, observational equivalence implies that the rank of (A.8) is G. Conversely, suppose that the rank of the matrix in (A.8) is G. It follows by the invertibility of gu that there exists a unique λ ∈ RG such that gu λ = su − u By (A.6), gu su = su − u Hence, λ = su
976
ROSA L. MATZKIN
Since the matrix in (A.8) has rank G, it must be that − x = gx λ By solving for λ from the first equation, we get that λ = ( gu )−1 (su − u ) Then it follows that − x = gx λ gu )−1 (su − u ) = gx ( This implies that gx ( gu )−1 (su − u ) + x = 0 sx = is independent of X By the equivalence By Theorem 6.1, it follows that U between (6.4) and (6.5) and the definition of observational equivalence in (3.1), ( g(r(y x) x) fU ) is observationally equivalent to (r fU ). Q.E.D. REFERENCES AI, C., AND X. CHEN (2003): “Efficient Estimation of Models With Conditional Moments Restrictions Containing Unknown Functions,” Econometrica, 71, 1795–1843. [946] ALTONJI, J. G., AND H. ICHIMURA (2000): “Estimating Derivatives in Nonseparable Models With Limited Dependent Variables,” Mimeo, Northwestern University. [946] ALTONJI, J. G., AND R. L. MATZKIN (2001): “Panel Data Estimators for Nonseparable Models With Endogenous Regressors,” Working Paper T0267, NBER. [946] (2003): “Cross Section and Panel Data Estimators for Nonseparable Models With Endogenous Regressors,” Mimeo, Northwestern University. [946] (2005): “Cross Section and Panel Data Estimators for Nonseparable Models With Endogenous Regressors,” Econometrica, 73, 1053–1102. [946] BENKARD, C. L., AND S. BERRY (2006): “On the Nonparametric Identification of Nonlinear Simultaneous Equations Models: Comment on B. Brown (1983) and Roehrig (1988),” Econometrica, 74, 1429–1440. [946,971] BLUNDELL, R., AND J. L. POWELL (2003): “Endogeneity in Nonparametric and Semiparametric Regression Models,” in Advances in Economics and Econometrics, Theory and Applications, Eighth World Congress, Vol. II, ed. by M. Dewatripont, L. P. Hansen, and S. J. Turnovsky. Cambridge, U.K.: Cambridge University Press, 312–357. [947] BOWDEN, R. (1973): “The Theory of Parametric Identification,” Econometrica, 41, 1069–1074. [946] BROWN, B. W. (1983): “The Identification Problem in Systems Nonlinear in the Variables,” Econometrica, 51, 175–196. [946,950,956,969] BROWN, D. J., AND R. L. MATZKIN (1998): “Estimation of Nonparametric Functions in Simultaneous Equations Models, With and Application to Consumer Demand,” CFDP 1175, Cowles Foundation for Research in Economics, Yale University. [946] CHERNOZHUKOV, V., AND C. HANSEN (2005): “An IV Model of Quantile Treatment Effects,” Econometrica, 73, 245–261. [946]
NONPARAMETRIC SIMULTANEOUS EQUATIONS
977
CHERNOZHUKOV, V., G. IMBENS, AND W. NEWEY (2007): “Instrumental Variable Estimation of Nonseparable Models,” Journal of Econometrics, 139, 4–14. [946] CHESHER, A. (2003): “Identification in Nonseparable Models,” Econometrica, 71, 1405–1441. [946,964] (2005): “Nonparametric Identification Under Discrete Variation,” Econometrica, 73, 1525–1550. [947] (2007): “Endogeneity and Discrete Outcomes,” Mimeo, CEMMAP. [947] DARROLLES, S., J. P. FLORENS, AND E. RENAULT (2003): “Nonparametric Instrumental Regression,” IDEI Working Paper 228, University of Toulouse I. [946] FISHER, F. M. (1959): “Generalization of the Rank and Order Conditions for Identifiability,” Econometrica, 27, 431–447. [946] (1961): “Identifiability Criteria in Nonlinear Systems,” Econometrica, 29, 574–590. [946] (1965): “Identifiability Criteria in Nonlinear Systems: A Further Note,” Econometrica, 33, 197–205. [946] (1966): The Identification Problem in Econometrics. New York: McGraw–Hill. [946] FRISCH, R. A. K. (1934): “Statistical Confluence Analysis by Means of Complete Regression Systems,” Publication 5, Universitets Okonomiske Institutt, Oslo. [946] (1938): “Statistical versus Theoretical Relations in Economic Macrodynamics,” Memorandum prepared for a conference at Cambridge, England, July 18–20, 1938, to discuss drafts of Tinbergen’s League of Nations publications. [946] HAAVELMO, T. M. (1943): “The Statistical Implications of a System of Simultaneous Equations,” Econometrica, 11, 1. [946] (1944): “The Probability Approach in Econometrics,” Econometrica, 12, Supplement (July). [946] HALL, P., AND J. L. HOROWITZ (2005): “Nonparametric Methods for Inference in the Presence of Instrumental Variables,” Annals of Statistics, 33, 2904–2929. [946] HURWICZ, L. (1950a): “Generalization of the Concept of Identification,” in Statistical Inference in Dynamic Economic Models, Cowles Commission Monograph, Vol. 10, ed. by T. C. Koopmans. New York: Wiley, 245–257. [946] (1950b): “Systems With Nonadditive Disturbances,” in Statistical Inference in Dynamic Economic Models, Cowles Commission Monograph, Vol. 10, ed. by T. C. Koopmans. New York: Wiley, 410–418. [946] IMBENS, G. W., AND W. K. NEWEY (2003): “Identification and Estimation of Triangular Simultaneous Equations Models Without Additivity,” Mimeo, MIT. [946,964] KOOPMANS, T. C., AND O. REIERSOL (1950): “The Identification of Structural Characteristics,” Annals of Mathematical Statistics, 21, 165–181. [946] KOOPMANS, T. C., A. RUBIN, AND R. B. LEIPNIK (1950): “Measuring the Equation System of Dynamic Economics,” in Statistical Inference in Dynamic Equilibrium Models, Cowles Commission Monograph, Vol. 10, ed. by T. C. Koopmans. New York: Wiley, 53–237. [946] MATZKIN, R. L. (1999): “Nonparametric Estimation of Nonadditive Random Functions,” Mimeo, Northwestern University. [963] (2003): “Nonparametric Estimation of Nonadditive Random Functions,” Econometrica, 71, 1339–1375. [946,963] (2004): “Unobservable Instruments,” Mimeo, Northwestern University. [946,947] (2005): “Identification in Nonparametric Simultaneous Equations,” Mimeo, Northwestern University. [956,960] (2007a): “Nonparametric Identification,” in Handbook of Econometrics, Vol. 6B, ed. by J. J. Heckman and E. E. Leamer. Amsterdam: North-Holland, 5307–5368. [947,960] (2007b): “Heterogeneous Choice,” in Advances in Economics and Econometrics, Theory and Applications, Ninth World Congress, Vol. III, ed. by R. Blundell, W. Newey, and T. Persson. Cambridge, U.K.: Cambridge University Press, 75–110. 
[960] (2008): “Nonparametric Estimation in Simultaneous Equations Models,” Mimeo, UCLA. [960,964]
978
ROSA L. MATZKIN
NEWEY, W. K., AND J. L. POWELL (1989): “Instrumental Variables Estimation for Nonparametric Models,” Mimeo, Princeton University. [946] (2003): “Instrumental Variables Estimation for Nonparametric Models,” Econometrica, 71, 1557–1569. [946] NEWEY, W. K., J. L. POWELL, AND F. VELLA (1999): “Nonparametric Estimation of Triangular Simultaneous Equations Models,” Econometrica, 67, 565–603. [946] NG, S., AND J. PINSKE (1995): “Nonparametric Two-Step Estimation of Unknown Regression Functions When the Regressors and the Regression Error Are Not Independent,” Mimeo, University of Montreal. [946] PINSKE, J. (2000): “Nonparametric Two-Step Regression Estimation When Regressors and Errors Are Dependent,” Canadian Journal of Statistics, 28, 289–300. [946] ROEHRIG, C. S. (1988): “Conditions for Identification in Nonparametric and Parametric Models,” Econometrica, 56, 433–447. [946,950,956] ROTHENBERG, T. J. (1971): “Identification in Parametric Models,” Econometrica, 39, 577–592. [946] TINBERGEN, J. (1930): “Bestimmung und Deutung con Angebotskurven: Ein Beispiel,” Zeitschrift fur Nationalokonomie, 70, 331–342. [946] WALD, A. (1950): “Note on Identification of Economic Relations,” in Statistical Inference in Dynamic Economic Models, Cowles Commission Monograph, Vol. 10, ed. by T. C. Koopmans. New York: Wiley, 238–244. [946] WEGGE, L. L. (1965): “Identifiability Criteria for a System of Equations as a Whole,” The Australian Journal of Statistics, 7, 67–77. [946] WORKING, E. J. (1927): “What Do Statistical ‘Demand Curves’ Show?” Quarterly Journal of Economics, 41, 212–235. [946] WORKING, H. (1925): “The Statistical Determination of Demand Curves,” Quarterly Journal of Economics, 39, 503–543. [946] VYTLACIL, E., AND N. YILDIZ (2007): “Dummy Endogenous Variables in Weakly Separable Models,” Econometrica, 75, 757–779. [946]
Dept. of Economics, 8283 Bunche Hall, University of California, Los Angeles, Los Angeles, CA 90095, U.S.A.;
[email protected]. Manuscript received July, 2005; final revision received March, 2008.
Econometrica, Vol. 76, No. 5 (September, 2008), 979–1016
TESTING MODELS OF LOW-FREQUENCY VARIABILITY BY ULRICH K. MÜLLER AND MARK W. WATSON1 We develop a framework to assess how successfully standard time series models explain low-frequency variability of a data series. The low-frequency information is extracted by computing a finite number of weighted averages of the original data, where the weights are low-frequency trigonometric series. The properties of these weighted averages are then compared to the asymptotic implications of a number of common time series models. We apply the framework to twenty U.S. macroeconomic and financial time series using frequencies lower than the business cycle. KEYWORDS: Long memory, local-to-unity, unit root test, stationarity test, business cycle frequency, heteroskedasticity.
1. INTRODUCTION PERSISTENCE AND LOW-FREQUENCY VARIABILITY has been an important and ongoing empirical issue in macroeconomics and finance. Nelson and Plosser (1982) sparked the debate in macroeconomics by arguing that many macroeconomic aggregates follow unit root autoregressions. Beveridge and Nelson (1981) used the logic of the unit root model to extract stochastic trends from macroeconomic time series, and showed that variations in these stochastic trends were a large, sometimes dominant, source of variability in the series. Meese and Rogoff’s (1983) finding that random walk forecasts of exchange rates dominated other forecasts focused attention on the unit root model in international finance. In finance, interest in the random walk model arose naturally because of its relationship to the efficient markets hypothesis (Fama (1970)). This empirical interest led to the development of econometric methods for testing the unit root hypothesis, and for estimation and inference in systems that contain integrated series. More recently, the focus has shifted toward more general models of persistence, such as the fractional (or long-memory) model and the local-to-unity autoregression, which nest the unit root model as a special case, or the local level model, which allows an alternative nesting of the I(0) and I(1) models. While these models are designed to explain lowfrequency behavior of time series, fully parametric versions of the models have implications for higher frequency variation, and efficient statistical procedures thus exploit both low- and high-frequency variations for inference. This raises 1 The first draft of this paper was written for the Federal Reserve Bank of Atlanta conference in honor of the twenty-fifth anniversary of the publication of Beveridge and Nelson (1981), and we thank the conference participants for their comments. We also thank Tim Bollerslev, David Dickey, John Geweke, Barbara Rossi, two referees, and a co-editor for useful comments and discussions, and Rafael Dix Carneiro for excellent research assistance. Support was provided by the National Science Foundation through Grants SES-0518036 and SES-0617811. Data and replication files for this research can be found at http://www.princeton.edu/~mwatson.
© 2008 The Econometric Society
DOI: 10.3982/ECTA6814
980
U. K. MÜLLER AND M. W. WATSON
the natural concern about the robustness of such inference to alternative formulations of higher frequency variability. These concerns have been addressed by, for example, constructing unit root tests using autoregressive models that are augmented with additional lags as in Said and Dickey (1984) or by using various nonparametric estimators for long-run covariance matrices and (as in Geweke and Porter-Hudak (1983) (GPH)) for the fractional parameter. As useful as these approaches are, there still remains a question of how successful these various methods are in controlling for unknown or misspecified highfrequency variability. This paper takes a different approach. It begins by specifying the lowfrequency band of interest. For example, the empirical analysis presented in Section 4 focuses mostly on frequencies lower than the business cycle, that is, periods greater than eight years. Using this frequency cutoff, the analysis then extracts the low-frequency component of the series of interest by computing weighted averages of the data, where the weights are low-frequency trigonometric series. Inference about the low-frequency variability of the series is exclusively based on the properties of these weighted averages, disregarding other aspects of the original data. The number of weighted averages, say q, that capture the low-frequency variability is small in typical applications. For example, only q = 13 weighted averages almost completely capture the lower than business cycle variability in postwar macroeconomic time series (for any sampling frequency). This suggests basing inference on asymptotic approximations in which q is fixed as the sample size tends to infinity. Such asymptotics yield a q-dimensional multivariate Gaussian limiting distribution for the weighted averages, with a covariance matrix that depends on the specific model of lowfrequency variability. Inference about alternative models or model parameters can thus draw on the well-developed statistical theory concerning multivariate normal distributions. An alternative to the methods proposed here is to use time domain filters, such as bandpass or other moving average filters, to isolate the low-frequency variability of the data. The advantage of the transformations that we employ is that they conveniently discretize the low-frequency information of the original data into q data points, and they are applicable beyond the I(0) models typically analyzed with moving average linear filters. There are several advantages to focusing exclusively on the low-frequency variability components of the data. The foremost advantage is that many empirical questions are naturally formulated in terms of low-frequency variability. For example, the classic Nelson and Plosser (1982) paper asks whether macroeconomic series such as real gross national product (GNP) tend to revert to a deterministic trend over periods longer than the business cycle, and macroeconomic questions about balanced growth involve the covariability of series over frequencies lower than the business cycle. Questions of potential mean reversion in asset prices or real exchange rates are often phrased in terms of long “horizons” or low frequencies. Because the statistical models studied here
TESTING MODELS OF LOW-FREQUENCY VARIABILITY
981
were developed to answer these kinds of low-frequency questions, it is natural to evaluate the models on these terms. In addition, large literatures have developed econometric methods in the local-to-unity framework, and also in the fractional framework. These methods presumably provide reliable guidance for empirical analysis only if, at a minimum, their assumed framework accurately describes the low-frequency behavior of the time series under study. The tests developed here may thus also be used as specification tests for the appropriateness of these methods. Other advantages, including robustness to high-frequency misspecification and statistical convenience (because weighted averages are approximately multivariate normal), have already been mentioned. An important caveat is that reliance on low-frequency methods will result in a loss of information and efficiency for empirical questions involving all frequencies. Thus, for example, questions about balanced growth are arguably properly answered by the approach developed here, while questions about martingale difference behavior involve a constant spectrum over all frequencies, and focusing only on low frequencies entails a loss of information. In addition, because only q = 13 weighted averages of the data are required to effectively summarize the below-business-cycle variability of postwar economic time series, there are obvious limits to what can be learned from a postwar series about low-frequency variability. Thus, for example, in this 13-observation context one cannot plausibly implement a nonparametric study of low-frequency variability. That said, as the empirical analysis in Section 4 shows, much can be learned from 13 observations about whether the data are consistent with particular low-frequency models. Several papers have addressed other empirical and theoretical questions in similar frameworks. Bierens (1997) derived estimation and inference procedures for cointegration relationships based on a finite number of weighted averages of the original data, with a joint Gaussian limiting distribution. Phillips (2006) pursued a similar approach with an infinite number of weighted averages. Phillips (1998) provided a theoretical analysis of “spurious regressions” of various persistent time series on a finite (and also infinite) number of deterministic regressors. Müller (2007b) found that long-run variance estimators based on a finite number of trigonometrically weighted averages is optimal in a certain sense. All these approaches exploit the known asymptotic properties of weighted averages for a given model of low-frequency variability. In contrast, the focus of this paper is to test alternative models of low-frequency variability and their parameters. The plan of the paper is as follows. The next section introduces the three classes of models that we will consider: fractional models, local-to-unity autoregressions, and the local level model, parameterized as an unobserved components model with a large I(0) component and a small unit root component. This section discusses the choice of weights for extracting the low-frequency components and the model-specific asymptotic distributions of the resulting
982
U. K. MÜLLER AND M. W. WATSON
weighted averages. Section 3 develops tests of the models based on these asymptotic distributions and studies their properties. Section 4 uses the methods of Section 3 to study the low-frequency properties of twenty macroeconomic and financial time series. Section 5 offers some additional comments on the feasibility of discriminating between the various low-frequency models. Data and programs are provided in the supplement (Müller and Watson (2008)). 2. MODELS AND LOW-FREQUENCY TRANSFORMATIONS Let yt , t = 1 T , denote the observed time series, and consider the decomposition of yt into unobserved deterministic and stochastic components (1)
yt = dt + ut
This paper focuses on the low-frequency variability of the stochastic component2 ut ; the deterministic component is modelled as a constant dt = μ or as a constant plus linear trend dt = μ + βt, with unknown parameters μ and β. We consider five leading models used in finance and macroeconomics to model low-frequency variability. The first is a fractional (FR) or “longmemory” model; stationary versions of the model have a spectral density S(λ) ∝ |λ|−2d as λ → 0, where −1/2 < d < 1/2 is the fractional parameter. We follow Velasco (1999) and define the fractional model FR with 1/2 < d < 3/2 for ut as a model where first differences ut − ut−1 (with u0 = 0) are a stationary fractional model with parameter d − 1. The second model is the autoregressive model with largest root close to unity; using standard notation, we write the dominant autoregressive coefficient as ρT = 1 − c/T , so that the process is characterized by the local-to-unity parameter c. For this model, normalized versions of ut converge in distribution to an Ornstein–Uhlenbeck process with diffusion parameter −c, and for this reason we will refer to this as the OU model. We speak of the integrated OU model, I-OU, when ut − ut−1 (with u0 = 0) follows the OU model. The fourth model that we tconsider decomposes ut into an I(0) and I(1) component, ut = wt + (g/T ) s=1 ηs , where (wt ηt ) are I(0) with long-run covariance matrix σ 2 I2 and g is a parameter that governs the relative importance of the I(1) component. In this “local level” (LL) model (cf. Harvey (1989)) both components are important for the low-frequency variability of ut . Again, we also define the integrated LL model, I-LL, as the model for ut that arises when ut − ut−1 follows the LL model. 2.1. Asymptotic Representation of the Models As shown below, the low-frequency variability implied by each of these models can be characterized by the stochastic properties of the partial sum process 2 Formally, ut is allowed to follow a triangular array, but we omit any dependence on T to ease notation.
TESTING MODELS OF LOW-FREQUENCY VARIABILITY
983
for ut , so for our purposes it suffices to define each model in terms of the behavior of these partial sums of ut . Table I summarizes the standard conver[·T ] gence properties of the partial sum process T −α t=1 ut ⇒ σG(·) for each of the five models, where α is a model-specific constant and G is a model-specific mean-zero Gaussian process with covariance kernel k(r s) given in the final column of the table. A large number of primitive conditions have been used to justify these limits. Specifically, for the stationary fractional model, weak convergence to the fractional Wiener process W d has been established under various primitive conditions for ut by Taqqu (1975) and Chan and Terrin (1995)— see Marinucci and Robinson (1999) for additional references and discussion. The local-to-unity model and local level model rely on a functional central limit theorem applied to the underlying errors; various primitive conditions are given, for example, in McLeish (1974), Wooldridge and White (1988), Phillips and Solo (1992), and Davidson (2002); see Stock (1994) for general discussion. The unit root and I(0) models are nested in several of these models. The unit root model corresponds to the fractional model with d = 1, the OU model with c = 0, and the integrated local level model with g = 0. Similarly, the I(0) model corresponds to the fractional model with d = 0 and the local level model with g = 0. The objective of this paper is to assess how well these specifications explain the low-frequency variability of the stochastic component ut in (1). But ut is not observed. We handle the unknown deterministic component dt by restricting attention to statistics that are functions of the least-square residuals of a regression of yt on a constant (denoted uμt ) or on a constant and time trend (denoted uτt ). Because {uit }Tt=1 , i = μ τ, are maximal invariants to the groups of transformations {yt }Tt=1 → {yt + m}Tt=1 and {yt }Tt=1 → {yt + m + bt}Tt=1 , respectively, there is no loss of generality in basing inference on functions of {uit }Tt=1 for tests that are invariant to these transformations. Under the assumptions given above, [·T ] a straightforward calculation shows that for i = μ τ, T −α t=1 uit ⇒ σGi (·), where α is a model-specific constant and Gi is a model-specific mean-zero Gaussian process with covariance kernel ki (r s) given by (2) (3)
kμ (r s) = k(r s) − rk(1 s) − sk(r 1) + rsk(1 1) τ μ k (r s) = k (r s) − 6s(1 − s) kμ (r λ) dλ − 6r(1 − r)
kμ (λ s) dλ
+ 36rs(1 − s)(1 − r)
kμ (l λ) dl dλ
where k(s r) is the model’s covariance kernel given in Table I.
984
TABLE I
Process
Parameter
1a. FR
− 12 < d <
1b. FR
1 2
2a. OU
c>0
2b. OU
c=0
3. I-OU
c>0
4. LL
g≥0
5. I-LL
g≥0
3 2
1 2
Partial Sum Convergence
Covariance Kernel k(r s), s ≤ r
d t=1 ut ⇒ W (·) • d−1 [·T ] −1/2−d −1 T σ (l) dl t=1 ut ⇒ 0 W • [·T ] T −3/2 σ −1 t=1 ut ⇒ 0 J c (l) dl [·T ] T −3/2 σ −1 t=1 ut ⇒ 0 W (l) dl •λ [·T ] ut ⇒ 0 0 J c (l) dl dλ T −5/2 σ −1 t=1 • [·T ] T −1/2 σ −1 t=1 ut ⇒ W1 (·) + g 0 W2 (l) dl • •λ [·T ] T −3/2 σ −1 t=1 ut ⇒ 0 W1 (l) dl + g 0 0 W2 (l) dl dλ
1 (r 2d+1 + s2d+1 − (r − s)2d+1 ) 2 (r−s)2d+1 +(1+2d)(rs2d +r 2d s)−r 2d+1 −s2d+1 4d(1+2d) 2cs−1+e−cs +e−cr −e−c(r−s) 2c 3 1 (3rs2 − s3 ) 6 3−sc(3+c 2 s2 )+3rc(1−cs+c 2 s2 )−3e−cs (1+cr)−3e−cr (1+cs−ecs ) 6c 5 s + 16 g2 (3rs2 − s3 ) 1 1 (3rs2 − s3 ) + 120 g2 (10r 2 s3 − 5rs4 + s5 ) 6
T −1/2−d σ −1
[·T ]
a Notes. W , W , and W are independent standard Wiener processes, W d is a “Type I” fractional Brownian motion defined as W d (s) = A(d) 0 [(s − l)d − (−l)d ] dW (l) + 1 2 −∞ √ 1 + ∞ [(1 + l)d − ld ]2 dl)−1/2 and J c is the stationary Ornstein–Uhlenbeck process J c (s) = Ze−sc / 2c + s e−c(s−l) dW (l) with A(d) 0s (s − l)d dW (l), where A(d) = ( 2d+1 0 0
Z ∼ N(0 1) independent of W .
U. K. MÜLLER AND M. W. WATSON
ASYMPTOTIC PROPERTIES OF PARTIAL SUMS OF POPULAR TIME SERIES MODELSa
TESTING MODELS OF LOW-FREQUENCY VARIABILITY
985
2.2. Asymptotic Properties of Weighted Averages We extract the information about the low-frequency variability of ut using a fixed number (q) of weighted averages of uit , i = μ τ, where the weights are known and deterministic low-frequency trigonometric series. We discuss and evaluate specific choices for the weight functions below, but first summarize the joint asymptotic distribution of these q weighted averages. Thus, let Ψ : [0 1] → Rq denote a set of q weight functions Ψ = (Ψ1 Ψq ) 1 with derivatives ψ = (ψ1 ψq ) , let XTj = T −α+1 0 Ψj (s)ui[sT ]+1 ds = T −α × t/T T ˜ i ˜ t=1 ΨTtj ut , where ΨTtj = T (t−1)/T Ψj (s) ds denotes the jth weighted aver1 [·T ] age, and let XT = (XT 1 XT q ) = T −α+1 0 Ψ (s)ui[sT ]+1 ds. If T −α t=1 uit = GiT (·) ⇒ σGi (·), by integration by parts and the continuous mapping theorem, 1 (4) GiT (s)ψ(s) ds XT = GiT (1)Ψ (1) − 0
⇒
1
X = −σ
Gi (s)ψ(s) ds 0
= −σ
1
Ψ (s) dGi (s) ∼ N (0 σ 2 Σ)
0
since GiT (1) = 0. The covariance matrix Σ depends on the weight function Ψ and the covariance kernel for Gi , with j lth element equal to 11 ψj (r)ψl (s)ki (r s) dr ds for i = μ τ. 0 0 The convergence in distribution of XT in (4) is an implication of the stan[·T ] dard convergence T −α t=1 uit ⇒ σGi (·) for the five models discussed above. While, as a formal matter, (4) holds for any fixed value of q, it may provide a poor guide to the small sample behavior of XT for a given sample size T if q is chosen very large. As an example, consider the case of a demeaned I(0) model (so that √ Gμ is the demeaned Wiener process W μ and α = 1/2) and suppose Ψj (s) = 2 cos(πjs). As we show below, Σ in (4) then becomes Σ = Iq , leading q to the asymptotic approximation of {XTj }j=1 being independent and identically 2 ]/(2π) is (almost) equal distributed (i.i.d.) N (0 σ 2 ) for any fixed q. But E[XTj to the spectrum of ut at frequency j/2T , so that this approximation implies a flat spectrum for frequencies below q/2T . Thus, for a given sample size (such as 60 years of quarterly data), it may make little sense to use (4) as an approximation for values of q that are large enough to incorporate business cycle (or higher) frequencies. Indeed, in this context, a reasonable definition for an I(0) process (or any other of the five processes discussed above) in a macroeconomic context might thus be that (4) provides reasonable approximations for a choice of Ψ that captures below-business-cycle frequency variability. If XT captures the information in yt about the low-frequency variability of ut , then the question of model fit for a specific low-frequency model becomes the question whether XT is approximately distributed N (0 σ 2 Σ). For the models
986
U. K. MÜLLER AND M. W. WATSON
introduced above, Σ depends only on the model type and parameter value, so that Σ = Σi (θ) for i ∈ {FR, OU, I-OU, LL, I-LL} and θ ∈ {d c g}. The parameter σ 2 is an unknown constant governing the low-frequency scale of the process—for example, σ 2 is the long-run variance of the errors in the localto-unity model. Because q is fixed (that is, our asymptotics keep q fixed as T → ∞), it is not possible to estimate σ 2 consistently using the q elements in XT . This suggests restricting attention to scale invariant tests of XT . Imposing scale invariance has the additional advantage that the value of α in 1 XT = T −α+1 0 Ψ (s)ui[sT ]+1 ds does not need to be known. Thus, consider the following maximal invariant to the group of transformation XT → aXT , a = 0: vT = XT / XT XT √ By the continuous mapping theorem and (4), vT ⇒ X/ X X. The density of √ v = (v1 vq ) = X/ X X with respect to the uniform measure on the surface of a q-dimensional unit sphere is given by (see, for instance, Kariya (1980) or King (1980)) (5)
fv (Σ) = C|Σ|−1/2 (v Σ−1 v)−q/2
where the positive constant C = 12 (q/2)π −q/2 and (·) is the gamma function. For a given model for ut , the asymptotic distribution of vT depends only on the q × q matrix Σi (θ), which is known for each model i and parameter θ. Our strategy therefore is to assess the model fit for a specific stochastic model i and parameter θ by testing whether vT is distributed (5) with Σ = Σi (θ). 2.3. Choice of Weights and the Resulting Covariance Matrices Our choice of Ψ = (Ψ1 Ψq ) is guided by two goals. The first goal is that Ψ should extract low-frequency variations of ut and, to the extent possible, be uncontaminated by higher frequency variations. The second goal is that Ψ should produce a diagonal (or nearly diagonal) covariance matrix Σ, as this facilitates the interpretation of XT because the models’ implications for persistence in ut become implications for specific forms of heteroskedasticity in XT One way to investigate how well a candidate Ψ extracts low-frequency variability is to let ut be exactly equal to a generic periodic series ut = sin(πϑt/T + φ), where ϑ ≥ 0 and φ ∈ [0 π). The variability captured by XT can then be measured by the R2 of a regression of ut on the demeaned/detrended weight functions. For T not too small, this R2 is well approximated by the R2 of a continuous time regression of sin(πϑs + φ) on Ψ1i (s) Ψqi (s) on the unit interval, where Ψji , i = μ τ, are the residuals of a continuous time regression of Ψj (s) on 1 and (1 s), respectively. Ideally, the R2 should equal unity for ϑ ≤ ϑ0 and zero for ϑ > ϑ0 for all phase shifts φ ∈ [0 π), where ϑ0 corresponds to the
TESTING MODELS OF LOW-FREQUENCY VARIABILITY
987
prespecified cutoff frequency. Standard sets of orthogonal trigonometric functions, such as the cosine expansion or the Fourier expansion with frequency smaller or equal to ϑ0 are natural candidates for Ψ . The left panels of Figure 1 plot R2 as a function of ϑ for a cutoff frequency ϑ0 = 14 in the demeaned case, so √ that Ψ consists of the q = 14 elements3 of the cosine expansion Ψj (s) = 2 cos(πjs) (denoted eigenfunc-
FIGURE 1.—R2 regression of sin(πϑs + φ) onto Ψ1 (s) Ψ q (s). These figures show the R2 of a continuous time regression of a generic periodic series sin(πϑs + φ) onto the demeaned (column A) or detrended (column B) weight functions Ψ1 (s) Ψq (s), with q chosen such that Ψj (s), j = 1 q, has frequency smaller or equal to ϑ0 = 14. Panels (i) show the R2 value averaged over values of φ ∈ [0 π), panels (ii) show the R2 maximized over these values of φ for each ϑ, and panels (iii) show the R2 minimized over these values of φ for each ϑ. The solid μ curves in the first column (labeled Demeaned) show results using the eigenfunctions ϕl (s) from Theorem 1, and the dashed curves show results using Fourier expansions. The solid curves in the second column (labeled Detrended) show results using the eigenfunctions ϕτl (s) from Theorem 1, and the dashed curves show results using detrended Fourier expansions. 3 For postwar data in our empirical analysis, below-business-cycle variability is captured by q = 13 weighted averages in the demeaned case. We choose an even number here to ensure that in the demeaned case, the Fourier and cosine expansions have an equal number of elements.
988
U. K. MÜLLER AND M. W. WATSON
√ tions) and the Fourier expansion Ψ 2 sin(π(j + 1)s) for j odd and j (s) = √ Ψj (s) = 2 cos(πjs) for j even. In the top panel, for each value of ϑ, R2 is averaged over all values for the phase shift φ ∈ [0 π); in the middle panel, R2 is maximized over φ; in the bottom panel, R2 is minimized over φ. Both choices for Ψj come reasonably close to the ideal of extracting all information about cycles of frequency ϑ ≤ ϑ0 (R2 = 1) and no information about cycles of frequency ϑ > ϑ0 (R2 = 0). In general, orthogonal functions Ψji only lead to a diagonal Σ in the I(0) model, but not in persistent models—see, for instance, Akdi and Dickey (1998) for an analysis of the unit root model using the Fourier expansion. It is not possible to construct Ψj that lead to diagonal Σ for all models we consider, but consider a choice of Ψj as the eigenfunctions of the covariance kernel kμW (r s) and kτW (r s) of a demeaned and detrended Wiener process, respectively: THEOREM 1: Let √ ϕμj (s) = 2 cos(πjs) for j ≥ 1 ⎧√ ⎪ 2 cos(πs(j + 1)) for odd j ≥ 1 ⎪ ⎪ ⎪ ⎨
2ωj/2 ϕτj (s) = (−1)(j+2)/2 sin(ωj/2 (s − 1/2)) ⎪ ⎪ ω − sin(ω ) j/2 j/2 ⎪ ⎪ ⎩ for even j ≥ 2 √ ϕμ0 (s) = ϕτ−1 (s) = 1, and ϕτ0 (s) = 3(1 − 2s), where π(2l + 1) − π/6 < ωl < π(2l + 1) is the l th positive root of cos(ω/2) = 2 sin(ω/2)/ω. The sets of orμ τ ∞ thonormal functions {ϕμj }∞ j=0 and {ϕj }j=−1 are the eigenfunctions of kW (r s) and μ ∞ τ τ ∞ kW (r s) with associated eigenvalues {λj }j=0 and {λj }j=−1 , respectively, where λμ0 = 0 and λμj = (jπ)−2 for j ≥ 1, and λτ−1 = λτ0 = 0, λτj = (jπ + π)−2 for odd j ≥ 1, and λτj = (ωj/2 )−2 for even j ≥ 2 √ Theorem 1 identifies the cosine expansion 2 cos(πjs), j = 1 2 as the eigenfunctions of kμW (r s) that correspond to nonzero eigenvalues and also, in the detrended case, the eigenfunctions ϕτj (s) are trigonometric functions. A natural choice for Ψ in the trend case is thus ϕτj , j ≥ 1, with frequency smaller than or equal to ϑ0 . By construction, eigenfunctions result in a diagonal Σ for both the I(1) and I(0) models, and thus yield a diagonal Σ for all values of g in the local level model. For the fractional model and the OU model, the eigenfunctions produce a diagonal Σ only for d = 0 and d = 1, and for c = 0 and c → ∞, respectively. Table II summarizes the size of the off-diagonal elements of Σ for various values of θ ∈ {d c g} in the FR, OU, and LL models using the eigenfunctions. It presents the average absolute correlation when ϑ0 = 14, a typical value in the empirical analysis. The average
TESTING MODELS OF LOW-FREQUENCY VARIABILITY
989
TABLE II AVERAGE ABSOLUTE CORRELATIONS FOR Σ(θ)a d = −025
d = 000
d = 025
d = 075
d = 100
d = 125
Demeaned Detrended
0.03 0.03
0.00 0.00
0.01 0.01
0.01 0.01
0.00 0.00
0.03 0.02
OU Model
c = 30
c = 20
c = 15
c = 10
c=5
c=0
Demeaned Detrended
0.02 0.02
0.02 0.02
0.02 0.02
0.02 0.02
0.02 0.01
0.00 0.00
Local Level Model
g=0
g=2
g=5
g = 10
g = 20
g = 30
Demeaned Detrended
0.00 0.00
0.00 0.00
0.00 0.00
0.00 0.00
0.00 0.00
0.00 0.00
Fractional Model
a Notes. Entries in the table are the average values of the absolute values of the correlations associated with Σ(θ) with q = 14 for the demeaned model and q = 13 for the detrended model.
absolute correlation is zero or close to zero for all considered parameter values. What is more, the eigenfunctions ϕτj corresponding to nonzero eigenvalues are orthogonal to (1 s), so that with Ψj = ϕτj , the detrending to Ψjτ leaves Ψj unaltered and thus orthogonal. This is not the case for the Fourier expansion. The choice of Ψj as the Fourier expansion of frequency smaller than or equal to ϑ0 might thus inadvertently lead to more leakage of higher frequencies, as some linear combination of the detrended Fourier expansion approximates a higher frequency periodic series. This effect can be seen in the right panels of Figure 1, which contains R2 plots for the eigenfunctions and the Fourier expansion in the detrended case with frequencies less than or equal to ϑ0 = 14 (so that q = 13 for the eigenfunctions and q = 14 for the Fourier expansion). We conclude that the eigenfunctions ϕij , j = 1 2 of Theorem 1 of frequency below the cutoff do a good job both at the extraction of low-frequency information with little leakage and at yielding approximately diagonal Σ for i = μ τ, and the remainder of the paper is based on this choice. With this choice, the covariance matrix Σ is close to diagonal, so the models can usefully be compared by considering the diagonal elements of Σ only. Figure 2 plots the square roots of these diagonal elements for the various models considered in Table II in the demeaned case. Evidently, more persistent models produce larger variances for low-frequency components, a generalization of the familiar “periodogram” intuition that for stationary ut , the variance of T 2/T t=1 cos(πjt/T )ut is approximately equal to 2π times the spectral density at frequency j/2T . For example, for the unit root model (d = 1 in the fractional model or c = 0 in the OU model), the standard deviation of X1 is 14 times larger than the standard deviation of X14 . In contrast, when d = 025 in the fractional model, the relative standard deviation of X1 falls to 18, and
990
U. K. MÜLLER AND M. W. WATSON
FIGURE 2.—Standard deviation of Xl in different models. These figures show the square roots of the diagonal elements of Σi (θ) for different values of the parameter θ = (d c g) i denotes the covariance matrix for the fractional (panel A), OU (panel B), and LL models (panel C), computed using ϕμ . Larger values of d and g, and smaller values of c yield relatively larger standard deviations of X1 .
when c = 5 in the OU model, the relative standard deviation of X1 is 63. In the I(0) model (d = 0 in the fractional model or g = 0 in the local level model), Σ = Iq , and all of the standard deviations are unity. 2.4. Continuity of the Fractional and Local-to-Unity Models It is useful to briefly discuss the continuity of Σi (θ) for two of the models. In the local-to-unity model, there is a discontinuity at c = 0 in our treatment of the initial condition and this leads to different covariance kernels in Table I; similarly, in the fractional model there is a discontinuity at d = 1/2 as we move from the stationary to the integrated version of the model. As it turns out, these discontinuities do not lead to discontinuities of the density of v in (5) as a function of c and d. This is easily seen in the local-to-unity model. Translation invariance implies that it suffices to consider the asymptotic distribution of T −1/2 (u[·T ] − u1 ). As
TESTING MODELS OF LOW-FREQUENCY VARIABILITY
991
) ⇒ J c (·) − J c (0) = noted by Elliott in the stable model T −1/2 (u[·T ] − u1√ √ (1999), s −c(s−λ) −sc −sc dW (λ) and limc↓0 (e − 1)/ 2c = 0, so that the Z(e − 1)/ 2c + 0 e asymptotic distribution of T −1/2 (u[·T ] − u1 ) is continuous at c = 0. The calculation for the fractional model is somewhat more involved. Note that the density (5) of v remains unchanged under reparameterizations Σ → aΣ for any a > 0. Because ΣFR (d) is a linear function of ki (r s), it therefore suffices to show that (6)
lim ↓0
kiFR(1/2−) (r s) kiFR(1/2+) (r s)
=b
for some constant b > 0 that does not depend on (r s), where kiFR(d) is the covariance kernel of the demeaned (i = μ) or detrended (i = τ) fractional model with parameter d. As shown in the Appendix, (6) holds with b = 2, so that the density of v is continuous at d = 1/2.4 3. TEST STATISTICS This section discusses several test statistics for the models. As √ discussed above, when (4) holds, the transformed data satisfy vT ⇒ v = X/ X X with X ∼ N (0 Σ). The low-frequency characteristics of the models are summarized by the covariance matrix Σ = Σi (θ), which is known for a given model i ∈ {FR, OU, I-OU, LL, I-LL} and model parameter θ. A test of adequacy of a given model and parameter value can therefore be conducted by testing H0 : Σ = Σ0 against H1 : Σ = Σ0 . This section derives optimal tests for this problem based on v (or, equivalently, optimal scale invariant tests based on X). Because a uniformly most powerful test does not exist, one must specify the alternatives for which the tests have relatively high power. We consider four optimal tests that direct power to different alternatives. The first two tests are low-frequency versions of point-optimal unit root and “stationarity” tests: these tests focus on two specific null hypotheses (the I(1) and the I(0) models) and maximize power against the local-to-unity and local level models, respectively. The final two tests are relevant for any null model: the first maximizes weighted average power against alternatives that correspond to misspecification of the persistence in ut , and the second maximizes weighted average power against alternatives that correspond to misspecification of the second moment of ut . For all four tests, we follow King (1988) and choose the distance from the null so that 4 This result suggests a definition of a demeaned or detrended fractional process with d = 1/2 as any process whose partial sums converge to a Gaussian process with covariance kernel that is μ given by an appropriately scaled limit of kFR or kτFR as d ↑ 1/2; see equations (11) and (12) in the Appendix. The possibility of a continuous extension across all values of d renders Velasco’s (1999) definition of fractional processes with d ∈ (1/2 3/2) as the partial sums of a stationary fractional process with parameter d − 1 considerably more attractive, as it does not lead to a discontinuity at the boundary d = 1/2, at least for demeaned or detrended data with appropriately chosen scale.
992
U. K. MÜLLER AND M. W. WATSON
a 5% level test has approximately 50% power at the alternative for which it is optimal. The tests we derive are optimal scale invariant tests based on X, the limiting random variable in XT ⇒ X. As shown by Müller (2007a), these tests, when applied to XT (i.e., vT ), are optimal in the sense that they maximize (weighted average) power among all scale invariant tests whose asymptotic rejection probability is smaller than or equal to the nominal level for all data generating processes that satisfy XT ⇒ X ∼ N (0 Σ0 ). In other words, if the convergence XT ⇒ X of (4) completely summarizes the implications for data yt generated by a given low-frequency model, then the test statistics derived in this section applied to vT are asymptotically most powerful (in a weighted average sense) among all scale invariant asymptotically valid tests. 3.1. Low-Frequency I(1) and I(0) Tests We test the I(1) and I(0) null hypotheses using low-frequency point-optimal tests. Specifically, in the context of the local-to-unity model we test the unit root model c = c0 = 0 against the alternative model with c = c1 using the likelihood ratio statistic LFUR = v ΣOU (c0 )−1 v/v ΣOU (c1 )−1 v where the value of c1 is chosen so that the 5%-level test has power of approximately 50% when c = c1 for the model with q = 13 (a typical value in our empirical analysis). This yields c1 = 14 for demeaned series and c1 = 28 for detrended series. We label the statistic LFUR as a reminder that it is a lowfrequency unit root test statistic. We similarly test the I(0) null hypothesis against the point alternative of a local level model with parameter g = g1 > 0 (which is the same nesting of the I(0) model as employed in Nyblom (1989) and Kwiatkowski, Phillips, Schmidt, and Shin (1992)). A calculation shows that the likelihood ratio statistic rejects for large values of
q
q vj2 2 vj LFST = 1 + g12 λj j=1 j=1 where λj are the eigenvalues defined in Theorem 1 and LFST denotes lowfrequency stationarity. The 50% power requirement imposed for q = 13 yields approximately g1 = 10 in the mean case and g1 = 20 in the trend case. 3.2. Testing for Misspecified Persistence in ut As discussed in Section 2, low-frequency persistence in ut leads to heteroskedasticity in X, so that misspecification of the persistence for ut translates
TESTING MODELS OF LOW-FREQUENCY VARIABILITY
993
3.2. Testing for Misspecified Persistence in ut

As discussed in Section 2, low-frequency persistence in ut leads to heteroskedasticity in X, so that misspecification of the persistence of ut translates into misspecification of the heteroskedasticity function for X. This motivates a specification test that focuses on the diagonal elements of Σ. Thus, let Λ denote a diagonal matrix and consider an alternative of the form Σ = ΛΣ0Λ. The relative magnitudes of the diagonal elements of Λ distort the relative magnitudes of the diagonal elements of Σ0 and produce values of Σ associated with processes that are more or less persistent than the null model. For example, decreasing diagonal elements of Λ represent ut with more persistence (more very low-frequency variability) than under the null model. More complicated patterns for the diagonal elements of Λ allow more subtle deviations from the null model in the persistence features of ut.

To detect a variety of departures from the null, we consider several different values of Λ and construct a test with best weighted average power over these alternatives. Letting F denote the weight function for Λ, the best test is simply the Neyman–Pearson test associated with a null in which v has density fv(Σ0) and an alternative in which the density of v is the F-weighted mixture of fv(ΛΣ0Λ). The details of the test involve the choice of values of Λ and their associated weights. A simple and flexible way to specify the values of Λ and corresponding weights F is to represent Λ as Λ = diag(exp(δ1), . . . , exp(δq)), where δ = (δ1, . . . , δq) is a mean-zero Gaussian vector with covariance matrix γ²Ω. Specifically, the empirical analysis has δj following a random walk: δj = δj−1 + εj with δ0 = 0 and εj ∼ iid N(0, γ²). For this choice, the weighted average power maximizing test seeks to detect misspecification in the persistence of ut against a wide range of alternatives, while maintaining that the implied heteroskedasticity in X is relatively smooth. The weighted average power maximizing test is the best test of the simple hypotheses

(7)    H0: v has density fv(Σ0)    vs.    H1: v has density Eδ fv(ΛΣ0Λ),

where Eδ denotes integration over the measure of δ and fv is defined in (5). By the Neyman–Pearson lemma and the form of fv, an optimal test of (7) rejects for large values of

    S = Eδ[|ΛΣ0Λ|^{−1/2}(v′(ΛΣ0Λ)^{−1}v)^{−q/2}] / (v′Σ0^{−1}v)^{−q/2}.
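A minimal Monte Carlo sketch of S follows, assuming v is the length-q vector of weighted averages and Sigma0 the null covariance matrix; the draw count is an illustrative choice. The helper fv_log_kernel, the log of |Σ|^{−1/2}(v′Σ^{−1}v)^{−q/2}, is reused in later sketches.

    import numpy as np

    def fv_log_kernel(v, Sigma):
        # log of |Sigma|^{-1/2} (v' Sigma^{-1} v)^{-q/2}, the kernel of the
        # scale-invariant density fv(Sigma)
        q = len(v)
        _, logdet = np.linalg.slogdet(Sigma)
        return -0.5 * logdet - 0.5 * q * np.log(v @ np.linalg.solve(Sigma, v))

    def s_stat(v, Sigma0, gamma, n_draws=5000, seed=0):
        rng = np.random.default_rng(seed)
        q = len(v)
        logs = np.empty(n_draws)
        for i in range(n_draws):
            delta = np.cumsum(rng.normal(0.0, gamma, q))   # random-walk deltas
            lam_diag = np.exp(delta)                        # diagonal of Lambda
            logs[i] = fv_log_kernel(v, Sigma0 * np.outer(lam_diag, lam_diag))
        m = logs.max()                                      # stable log-mean-exp
        return np.exp(m + np.log(np.mean(np.exp(logs - m))) - fv_log_kernel(v, Sigma0))

Averaging in logs avoids underflow of the |ΛΣ0Λ|^{−1/2} terms for larger q.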
Because the power of the S test does not depend on Σ0 when Σ0 is diagonal (which is approximately true for all of the models considered), the same value of γ² can be used to satisfy the 50% power requirement for all models for a given q. Numerical analysis shows γ = 5/q to be a good choice for values of q ranging from 5 to 30.

3.3. Testing for Low-Frequency Heteroskedasticity in ut

Limiting results for partial sums like those shown in Table I are robust to time varying variances of the driving disturbances as long as the time variation is a stationary short-memory process; this implies that the values of Σ are similarly robust to such forms of heteroskedasticity. However, instability in the second moment of financial and macroeconomic data is often quite persistent (e.g., Bollerslev, Engle, and Nelson (1994), Andersen, Bollerslev, Christoffersen, and Diebold (2007), Balke and Gordon (1989), Kim and Nelson (1999), and McConnell and Perez-Quiros (2000)), so it is interesting to ask whether second moments of ut exhibit enough low-frequency variability to invalidate limits like those shown in Table I. To investigate this, we nest each of the models considered thus far in a more general model that allows for such low-frequency heteroskedasticity, derive the resulting value of Σ for the more general model, and construct an optimal test against such alternatives.

For each of the low-frequency models, we consider a version of the model with low-frequency heteroskedastic driving disturbances in their natural moving average (MA) representations. For example, for the I(0) model, consider models for {ut} that satisfy T^{−1/2} Σ_{t=1}^{[·T]} ut ⇒ σ∫_0^· h(λ) dW1(λ), where h : [0, 1] → R is a continuous function. When h(s) = 1 for all s, this yields the I(0) model in Table I, but a nonconstant h yields different limiting processes and different values of Σ. Indeed, a calculation shows that the (l, j)th element of Σ is Σlj = ∫_0^1 Ψl(s)Ψj(s)h(s)² ds. Because the Ψ functions are orthogonal, Σlj = 0 for l ≠ j when h is constant, but a nonconstant h leads to nonzero values of Σlj. Said differently, low-frequency heteroskedasticity in ut leads to serial correlation in X. The form of this serial correlation depends on h. For example, in the mean case when h(s) = √(1 + 2a cos(πs)) with |a| < 1/2, Σ is the covariance matrix of an MA(1) process with first-order autocorrelation equal to a.

The same device can be used to generalize the other models. Thus, using the notation from Table I, consider {ut} that satisfy T^{−α} Σ_{t=1}^{[·T]} ut ⇒ σG̃(·), where for the different models:
FR:

    G̃(s) = A(d)[∫_{−∞}^{0} ((s − λ)^d − (−λ)^d) dW(λ) + ∫_0^s (s − λ)^d h(λ) dW(λ)],    d ∈ (−1/2, 1/2),

    G̃(s) = (A(d − 1)/d)[∫_{−∞}^{0} ((s − λ)^d − (−λ)^{d−1}(sd − λ)) dW(λ) + ∫_0^s (s − λ)^d h(λ) dW(λ)],    d ∈ (1/2, 3/2);

OU:

    G̃(s) = (1/c)∫_{−∞}^{0} (e^{cλ} − e^{−c(s−λ)}) dW(λ) + (1/c)∫_0^s (1 − e^{−c(s−λ)}) h(λ) dW(λ),    c > 0,

    G̃(s) = ∫_0^s (s − λ)h(λ) dW(λ),    c = 0;

LL:

    G̃(s) = ∫_0^s h(λ) dW1(λ) + g∫_0^s (s − λ)h(λ) dW2(λ),    g ≥ 0;

I-OU:

    G̃(s) = c^{−2}∫_{−∞}^{0} (e^{−c(s−λ)} − (1 − cs)e^{cλ}) dW(λ) + c^{−2}∫_0^s (e^{−c(s−λ)} + c(s − λ) − 1) h(λ) dW(λ),    c > 0;

I-LL:

    G̃(s) = ∫_0^s (s − λ)h(λ) dW1(λ) + (g/2)∫_0^s (s − λ)²h(λ) dW2(λ),    g ≥ 0.
In these representations the function h only affects the stochastic component of G̃(s) that stems from the in-sample innovations, but leaves unaffected terms associated with initial conditions, such as c^{−1}∫_{−∞}^{0}(e^{−c(s−λ)} − e^{cλ}) dW(λ) in the local-to-unity model. The idea is that h(t/T) describes the square root of the time varying long-run variance of the in-sample driving disturbances at date t ≥ 1, while maintaining the assumption that stable models were stationary prior to the beginning of the sample. This restriction means that G̃(s) is a sum of two pieces, and the piece that captures the pre-sample innovations remains unaffected by h. Especially in the fractional model, such a decomposition is computationally convenient, as noted by Davidson and Hashimadze (2006). As in the I(0) example, for any of the models and any continuous function h, it is possible to compute the covariance kernel for G̃ and the resulting covariance matrix of X. Let Σi(θ0, h) denote the value of Σ associated with model i with parameter θ0 and heteroskedasticity function h. The homoskedastic versions of the models from Table I then yield Σ = Σi(θ0, 1), while their heteroskedastic counterparts yield Σ = Σi(θ0, h).
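For the I(0) model the heteroskedastic covariance matrix can be computed directly from the formula Σlj = ∫_0^1 Ψl(s)Ψj(s)h(s)² ds. A minimal quadrature sketch follows; it assumes the demeaned-case weight functions are Ψl(s) = √2 cos(lπs), as in Section 2, and the even grid is an illustrative discretization.

    import numpy as np

    def sigma_i0(h_vals, q):
        # Sigma_lj = int_0^1 Psi_l(s) Psi_j(s) h(s)^2 ds by Riemann sum;
        # h_vals holds h evaluated at the midpoints of an even grid on [0, 1].
        n = len(h_vals)
        s = (np.arange(n) + 0.5) / n
        Psi = np.sqrt(2.0) * np.cos(np.pi * np.outer(np.arange(1, q + 1), s))
        return (Psi * (h_vals**2 / n)) @ Psi.T   # reduces to I_q when h = 1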
The goal therefore is to look for departures from the null hypothesis Σ = Σi(θ0, 1) in the direction of alternatives of the form Σ = Σi(θ0, h). Because there is no uniformly most powerful test over all functions h, we consider a test with best weighted average power for a wide range of h functions. The details of the test involve the choice of values of h and their associated weights. Similar to the choice of values of Λ for the S test, we consider a flexible model for the values of h that arise as realizations from a Wiener process. In particular, we consider functions generated as h = e^{ηW*}, where W* is a standard Wiener process on the unit interval independent of G, and η is a parameter. The test with best weighted average power over this set of h functions is the best test associated with the hypotheses
(8)    H0: v has density fv(Σ(θ0, 1))    vs.    H1: v has density EW* fv(Σ(θ0, e^{ηW*})),
where EW* denotes integration over the distribution of W*. The form of fv and the Neyman–Pearson lemma imply that the optimal test of (8) rejects for large values of

    H = EW*[|Σ(θ0, e^{ηW*})|^{−1/2}(v′Σ(θ0, e^{ηW*})^{−1}v)^{−q/2}] / (v′Σ(θ0, 1)^{−1}v)^{−q/2}.
A choice of η = 6q^{−1/2} satisfies the 50% power requirement for a wide range of values of q for both the I(0) and I(1) models.

3.4. Some Properties of the Tests

This section takes up the issues of the asymptotic power of the various tests for various alternatives and the accuracy of the asymptotic approximations in finite samples. The numerical results will be presented for 5%-level tests, q = 13, and demeaned data.

3.4.1. Asymptotic Power

The asymptotic rejection frequency of the LFUR and LFST tests is shown in Figure 3 for a range of values of d in the fractional model (panel A), c in the OU model (panel B), and g in the local level model (panel C). For example, panel B shows that the LFST test has power of approximately 90% for the unit root alternative (c = 0), but power of less than 25% for values of c greater than 20 in the OU model. Applying this asymptotic approximation to an autoregression using T = 200 observations, the LFST test will reject the I(0) null with high probability when the largest autoregressive (AR) root is unity, but is unlikely to reject the I(0) null when the largest root is less than 1 − 20/200 = 0.9. Similarly, in the local level model studied in panel C, the LFUR test has power of over 90% for the I(0) model, but power of less than 25% for values of g greater than 20. This asymptotic approximation suggests that in an MA model for yt − yt−1 and with T = 200 observations, the LFUR test will reject the I(1) null with high probability when the MA polynomial has a unit root (that is, when the level of yt is I(0)), but is unlikely to reject when the largest MA root is less than 0.9.
FIGURE 3.—Asymptotic rejection frequencies for 5%-level LFST and LFUR tests (q = 13, demeaned data).
Figure 3 also allows power comparisons between the optimal low-frequency tests and tests that use all frequencies. For example, from panel B, the q = 13 low-frequency unit root test has approximately 50% power when c = 14. This is the best power that can be achieved when yt exhibits low-frequency OU behavior with c = 0 under the null. If, instead, yt followed an exact Gaussian AR(1) model with a unit AR coefficient under the null and a local-to-unity coefficient under the alternative, then it would be appropriate to use all frequencies to test the null, and the best all-frequency test has approximately 75% power when c = 14 (cf. Elliott (1999)). This 25% difference in power is associated with the relatively weaker restriction on the null model of the LFUR test, which assumes only that the I(1) model provides an accurate description of low-frequency behavior, while allowing for unrestricted behavior of the series at higher frequencies.

Figure 4 compares the power of the S test to the power of the LFUR and LFST tests, with alternatives of the form Σ1 = ΛΣ0Λ, where Λ = diag(exp(δ1), . . . , exp(δq)).
FIGURE 4.—Power of 5%-level S, LFST, and LFUR tests (q = 13, demeaned data). The alternatives have the form Σ1 = ΛΣ0Λ, where Λ = diag(exp(δ1), . . . , exp(δ13)), with δi = κ(i − 1)/(q − 1) in panel A, and δi = κ(i − 1)/6 for i ≤ 7 and δi = δ14−i for 8 ≤ i ≤ 13 in panel B. The LFST results are for the I(0) model for Σ0, the LFUR results are for the I(1) model, and the S results are for a model with diagonal Σ0.
Because Λ is diagonal, the power of the S test does not depend on Σ0 when Σ0 is diagonal, and because Σ0 is exactly or approximately diagonal for all of the models considered in Table II, the power results for S apply to each of these models. In contrast, the LFST and LFUR tests utilize particular values of Σ0, so the results for LFST apply to the I(0) null and the results for LFUR apply to the I(1) null. In panel A, δi follows the linear trend δi = κ(i − 1)/(q − 1), where κ = 0 yields Σ1 = Σ0, κ < 0 produces models with more persistence than the null model, and κ > 0 produces models with less persistence. In panel B, {δi} has a triangular shape: δi = κ(i − 1)/6 for i ≤ 7 and δi = δ14−i for 8 ≤ i ≤ 13. As in panel A, κ = 0 yields Σ1 = Σ0, but now nonzero values of κ correspond to nonmonotonic deviations in the persistence of ut across frequencies. Because the LFST test looks for alternatives that are more persistent than the null hypothesis, it acts as a one-sided test for κ < 0 in panel A and has power less than its size when κ > 0. Similarly, LFUR acts as a one-sided test for alternatives that are less persistent than the null and is biased when κ < 0. In contrast, the S test looks for departures from the null in several directions (associated with realizations from draws of a demeaned
random walk), and panel A indicates that it is approximately unbiased with a symmetric power function that is roughly comparable to the one-sided LFST and LFUR tests under this alternative. Panel B, which considers the triangular alternative, shows a power function for S that is similar to the one for the trend alternative, while the power functions for LFST and LFUR indicate bias, and (because of the nonmonotonicity of the alternative) these tests have one-sided power that is substantially less than the one-sided power for the trend alternative shown in panel A.

Figure 5 presents the power of the H test. Because low-frequency heteroskedasticity in ut leads to serial correlation in X, we compare the power of the H test to two tests for serial correlation in X: let ρi = (Σ_{j=1}^{q−i} Xj Xj+i)(Σ_{j=1}^{q} Xj²)^{−1}; the first test statistic is |ρ1| (and thus checks for first-order serial correlation), while the second is Σ_{i=1}^{q−1} |ρi|/i (and checks for serial correlation at all lags).
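A short sketch of these two diagnostics, with X a length-q NumPy vector:

    import numpy as np

    def rho(X, i):
        # sample autocorrelation of X at lag i, as defined in the text
        return (X[:len(X) - i] @ X[i:]) / (X @ X)

    def serial_correlation_stats(X):
        q = len(X)
        abs_rho = np.abs([rho(X, i) for i in range(1, q)])
        return abs_rho[0], np.sum(abs_rho / np.arange(1, q))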
FIGURE 5.—Power of 5%-level H, |ρ1|, and Σ_{i=1}^{q−1} |ρi|/i tests for the I(0) model (q = 13, demeaned data). The alternatives are generated by the I(0) model with ln h(s) = κs in panel A, and ln h(s) = κs for s ≤ 1/2 and ln h(s) = κ(1 − s) for s > 1/2 in panel B.
Figure 5 shows results for the I(0) null model where ln h(s) follows a linear trend model in panel A (ln h(s) = κs) and a triangular model in panel B (ln h(s) = κs for s ≤ 1/2 and ln h(s) = κ(1 − s) for s > 1/2). In panel A, the power of H is slightly smaller than the power of the |ρ1| test for values of κ near zero and slightly larger for more distant alternatives, and the |ρ1| test appears to dominate the Σ_{i=1}^{q−1} |ρi|/i test. All of the tests are approximately unbiased with symmetric power functions. In panel B, where the alternative involves a nonmonotonic heteroskedasticity function h(s), the H test remains approximately unbiased with a power function that is symmetric, but the two other tests are biased and show better power performance for κ < 0.

3.4.2. Finite Sample Performance

There are three distinct issues related to the finite sample performance of the tests. First, the data used in the tests (XT) are weighted averages of the original data (yt) and, by virtue of the central limit theorem, the probability distribution of XT is approximated by the normal distribution. Second, as we implement the tests in the empirical section below, the covariance matrix of XT is approximated by the covariance matrix of X, that is, by the expression below equation (4). Finally, our analysis is predicated on the behavior of the process over a set of low frequencies, but as the R² functions shown in Figure 1 indicate, there is some contamination in XT caused by leakage from higher frequencies.

The first of these issues—the quality of the normal approximation to the distribution of a sample average—is well studied, and we say nothing more about it except to offer the reminder that, because XT is a weighted average of the underlying data, it is exactly normally distributed when the underlying data yt are normal. As for the second issue—the approximation associated with using the asymptotic form of the covariance matrix for XT—we have investigated the quality of the approximation for empirically relevant values of T and found it to be very good. For example, using T = 200, q = 13, and i.i.d. Gaussian data, the size of the asymptotic 5%-level LFUR test is 0.05 and the power for the stationary AR(1) model (1 − 0.95L)yt = εt is 0.36, which can be compared to the asymptotic power for c = 10, which is 0.35.

To investigate the third issue—leakage of high frequency variability into XT—consider two experiments. In the first experiment stationary Gaussian data are generated from a stochastic process with spectrum s(ω) = 1 for |ω| ≤ 2π/R and s(ω) = κ for |ω| > 2π/R, where R is a cutoff period used to demarcate low-frequency variability. When κ = 1 the spectrum is flat, but when κ ≠ 1, the spectrum is a step function with a discontinuity at |ω| = 2π/R. With R = 32 and T = 208 this corresponds to 52 years of quarterly data with a cutoff frequency corresponding to a period of 8 years and implies that q = 13 (and the choice with T/R a natural number maximizes potential leakage). Since the spectrum is constant for low frequencies independent of the value of κ, one would want the small sample rejection probability of the LFST, S, and H tests under the I(0) null hypothesis to be equal to the nominal level. A second experiment uses partial sums of the data from the first experiment to compute the small sample rejection probability of the LFUR, S, and H tests under the I(1) null.
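A minimal sketch of the data generating process in the first experiment, using circular spectral simulation (Gaussian white noise filtered in the frequency domain, so the spectrum is only approximately the target step function); all names are illustrative.

    import numpy as np

    def simulate_step_spectrum(T, R, kappa, seed=0):
        # spectrum proportional to 1 for |w| <= 2*pi/R and to kappa otherwise
        rng = np.random.default_rng(seed)
        w = 2.0 * np.pi * np.fft.fftfreq(T)
        s = np.where(np.abs(w) <= 2.0 * np.pi / R, 1.0, kappa)
        z = rng.standard_normal(T)
        return np.fft.ifft(np.sqrt(s) * np.fft.fft(z)).real

    y_i0 = simulate_step_spectrum(208, 32, kappa=3.0)   # first experiment, I(0) null
    y_i1 = np.cumsum(y_i0)                              # partial sums: second experiment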
In both experiments, size distortions of 5%-level tests based on asymptotic critical values are small for 0 ≤ κ ≤ 3: the largest size is 7.1% for the S test in the I(1) model, and the smallest is 3.6% for the LFST test in the I(0) model, both with κ = 3.

4. EMPIRICAL RESULTS

In this section we use the low-frequency tests to address four empirical questions. The first is the Nelson–Plosser question: after accounting for a deterministic linear trend, is real gross domestic product (GDP) consistent with the I(1) model? The second is a question about the cointegration of long term and short term interest rates: is the term spread consistent with the I(0) model? We answer both of these questions using postwar quarterly U.S. data and focus the analysis on periods greater than 32 quarters (that is, frequencies lower than the business cycle). The third question involves the behavior of real exchange rates, where a large literature has commented on the connection between the persistence of real exchange rates and deviations from purchasing power parity. Here we use the LFST test to determine whether a long-annual series on real exchange rates is consistent with the I(0) model over any set of low frequencies, and this allows us to construct a confidence set for the range of low frequencies consistent with the I(0) model. Finally, we use the S and H tests to construct confidence sets for the parameters of the five low-frequency models for below-business-cycle variability in twenty U.S. macroeconomic and financial time series.

4.1. Testing the I(0) and I(1) Null for Real GDP and the Term Spread

Table III shows selected empirical results for quarterly values (1952:1–2005:3) of the logarithm of (detrended) real GDP and the (demeaned) term spread—the difference between interest rates for 10 year and 1 year U.S. Treasury bonds. Panel A shows results computed using standard methods: p-values for the DFGLS unit root test of Elliott, Rothenberg, and Stock (1996), the stationarity test of Nyblom (1989) (using a heteroskedasticity-autocorrelation robust (HAC) variance estimator as suggested in Kwiatkowski, Phillips, Schmidt, and Shin (1992)), and the estimated values of d and standard errors from Geweke and Porter-Hudak (1983) or "GPH regressions" as described in Robinson (2003). Panel B shows p-values for the LFST, LFUR, S, and H tests under the I(0) and I(1) nulls.

Looking first at the results for real GDP, the traditional statistics shown in panel A suggest that the data are consistent with the I(1) model, but not the I(0) model: the p-value for the "Dickey–Fuller generalized least squares" (DFGLS) test is 0.16, while the Nyblom/KPSS test has a p-value less than 1%; the GPH regressions produce point estimates of d close to the unit root null, and the implied confidence intervals for d include the I(1) model but exclude the I(0) model.
TABLE III
RESULTS FOR REAL GDP AND TERM SPREAD^a

A. DFGLS, Nyblom/KPSS, and GPH Results

                                            GPH Regressions: d̂ (SE)
                                            Levels                        Differences
Series         DFGLS     Nyblom/KPSS       [T^0.5]       [T^0.65]        [T^0.5]        [T^0.65]
               p-Value   p-Value
Real GDP       0.16      <0.01             1.00 (0.17)   0.98 (0.11)     −0.19 (0.17)   −0.09 (0.11)
Tbond spread   <0.01     0.01              0.18 (0.17)   0.61 (0.11)     −0.80 (0.17)   −0.41 (0.11)

B. p-Values for I(0) and I(1) Models

                         I(0)                            I(1)
Series         LFST      S         H           LFUR      S         H
Real GDP       <0.01     0.09      0.12        0.37      0.84      0.59
Tbond spread   0.25      0.46      0.14        <0.01     <0.01     0.05

^a Notes. The entries in the column labeled DFGLS are p-values for the DFGLS test of Elliott, Rothenberg, and Stock (1996). The entries in the column labeled Nyblom/KPSS are p-values for the Nyblom (1989) I(0) test (using a HAC covariance matrix as suggested in Kwiatkowski, Phillips, Schmidt, and Shin (1992)). Results are computed using a Newey–West HAC estimator with [0.75 × T^{1/3}] lags. The results in the columns labeled GPH Regressions are the estimated values of d and standard errors computed from regressions using the lowest [T^{0.5}] or [T^{0.65}] periodogram ordinates. The GPH regressions and standard errors were computed as described in Robinson (2003): specifically, the GPH regressions are of the form ln(pi) = β0 + β1 ln(ωi) + error, where pi is the ith periodogram ordinate and ωi is the corresponding frequency; the estimated value of d is d̂ = −β̂1/2, where β̂1 is the ordinary least squares estimator, and the standard error of d̂ is SE(d̂) = π/√(24m), where m is the number of periodogram ordinates used in the regression.
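A minimal sketch of the GPH regression described in the note, assuming y is the series as a NumPy vector and m the number of periodogram ordinates:

    import numpy as np

    def gph(y, m):
        # log-periodogram regression ln(p_i) = b0 + b1 ln(w_i) + error;
        # d_hat = -b1_hat/2 and SE(d_hat) = pi/sqrt(24m), as in the note
        T = len(y)
        w = 2.0 * np.pi * np.arange(1, m + 1) / T          # Fourier frequencies
        J = np.fft.fft(y - np.mean(y))[1 : m + 1]
        p = np.abs(J) ** 2 / (2.0 * np.pi * T)             # periodogram ordinates
        X = np.column_stack([np.ones(m), np.log(w)])
        beta = np.linalg.lstsq(X, np.log(p), rcond=None)[0]
        return -beta[1] / 2.0, np.pi / np.sqrt(24.0 * m)   # (d_hat, SE)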
The low-frequency results shown in panel B reach the same conclusion: the p-value for the LFUR test is 37%, while the p-value for the LFST test is less than 1%. Moreover, the H statistic indicates that the well known post-1983 decline in the volatility of real GDP (the "Great Moderation") is not severe enough to reject the homoskedastic low-frequency I(1) model. Thus, for real GDP, low-frequency inference and inference based on traditional methods largely coincide.

This is not the case for the term spread. Panel A indicates that the DFGLS statistic rejects the I(1) model, the Nyblom/KPSS test rejects the I(0) model, and the results from the GPH regressions depend critically on whether [T^{0.5}] or [T^{0.65}] observations are used in the GPH regression. In contrast, panel B shows that the low-frequency variability in the series is consistent with the I(0) model but not with the I(1) model. Thus, traditional inference methods paint a murky picture, while the low-frequency variability of the series is consistent with the hypothesis that long rates and short rates are cointegrated.
4.2. A Low-Frequency Confidence Interval for I(0) Variability in Real Exchange Rates

A large empirical literature has examined the unit root or near unit root behavior of real exchange rates. The data used here—annual observations on the dollar/pound real exchange rate from 1791 to 2004—come in large part from one important empirical study in this literature: Lothian and Taylor (1996). A natural question to ask in the context of the framework developed here is whether real exchange rates are consistent with I(0) behavior over any low-frequency band. That is, is there any value of q such that the low-frequency transformed data XT are consistent with the I(0) model and, more generally, what is the largest value of q (or the highest frequency) consistent with the I(0) model? Figure 6 plots the p-value of the LFST test applied to the logarithm of the (demeaned) real exchange rate for values of q ≤ 30, corresponding to periods longer than approximately 14 years. The figure shows that the I(0) model is rejected at the 5% level for values of q > 7 (periods shorter than 61 years) and at the 1% level for values of q > 10 (periods shorter than 43 years). Equivalently, a 95% confidence interval for periods for which the real exchange rate behaves like an I(0) process includes periods greater than 61 years, while a 99% confidence interval includes periods greater than 43 years. In either case, the results suggest very long departures from purchasing power parity.
FIGURE 6.—p-values for the LFST test as a function of q. The figure shows the p-value for LFST computed using (X1T, . . . , XqT) constructed from demeaned values of the logarithm of the real $/£ exchange rate using annual observations from 1791–2004. The cutoff period (in years) corresponding to q can be computed as period = 2T/q, with T = 214.
4.3. Confidence Intervals for Model Parameters

Confidence sets for model parameters can be formed by inverting the S and H tests. Table IV presents the resulting confidence sets for twenty macroeconomic and financial time series that include postwar quarterly versions of important macroeconomic aggregates (real GDP, aggregate inflation, nominal and real interest rates, productivity, and employment) and longer annual versions of related series (real GNP from 1869 to 2004, nominal and real bond yields from 1900 to 2004, and so forth). We also study several cointegrating relationships by analyzing differences between series (such as the long–short interest rate spread discussed above) or logarithms of ratios (such as consumption–income or dividend–price ratios). A detailed description of the data is given in the Appendix. As usual, several of the data series are transformed by taking logarithms and, as discussed above, the deterministic component of each series is modelled as a constant or a linear trend. Table A.I summarizes these transformations for each series. The postwar quarterly series span the period 1952:1–2005:3, so that T = 215, and q = [2T/32] = 13 for the demeaned series and q = 12 for the detrended series. Each annual time series is available for a different sample period (real GNP is available from 1869 to 2004, while bond rates are available from 1900 to 2004 and real exchange rates from 1791 to 2004, for example), so the value of q is series-specific. One series (returns on the SP500) contains daily observations from 1928 to 2005, and for this series, q = 17.

Looking at the table, it is clear that the relatively short sample (less than 60 years of data for many of the series) means that confidence sets often contain a wide range of values for d, c, and g. That said, looking at the individual series, several results are noteworthy.

The unit root model for inflation is not rejected using the postwar quarterly data, while the I(0) model is rejected. Results are shown for inflation based on the GDP deflator, but similar conclusions follow from the personal consumption expenditure (PCE) deflator and the consumer price index (CPI). Stock and Watson (2007) documented instability in the "size" of the unit root component (corresponding to the value of g in the local level model) over the postwar period, but apparently this instability is not so severe that it leads to rejections based on the tests considered here. Different results are obtained from the long-annual (1869–2004) series, which shows less persistence than the postwar quarterly series. Labor productivity is very persistent: the I(0) model is rejected but the I(1) model is not, and the S test rejects values of c > 5 in the OU model and d < 0.84 in the fractional model.
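A sketch of the inversion that generates the sets in Table IV, assuming sigma_model(theta) returns Σi(θ) for the model at hand, s_stat is the Monte Carlo statistic sketched in Section 3.2, and crit is a 5% critical value obtained by simulating the null distribution of the statistic; all of these names are illustrative.

    import numpy as np

    def confidence_set(v, sigma_model, theta_grid, gamma, crit):
        # 95% confidence set: parameter values not rejected by the 5%-level S test
        return [th for th in theta_grid
                if s_stat(v, sigma_model(th), gamma) <= crit]

    # e.g., the fractional model on the grid used in the table (sigma_fr assumed):
    # d_set = confidence_set(v, sigma_fr, np.arange(-0.49, 1.50, 0.01), 5/13, crit)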
TABLE IV
95% CONFIDENCE SETS FOR MODEL PARAMETERS USING S AND H TESTS^a

For each series the table reports, side by side, the sets obtained from the S test and from the H test for each of the five models: the fractional model (d), the OU model (c), the local level model (g), the integrated OU model (c), and the integrated local level model (g).

Postwar quarterly time series: Real GDP, Tbond spread, Inflation, Productivity, Hours, 10yr Tbond, 1yr Tbond, 3mth Tbill, Real Tbill rate, Unit labor cost, Real C-GDP, Real I-GDP.

Long-annual time series: Real GNP, Inflation, Real ex. rate, Bond rate, Real bond rate, Earnings/price, Div/price.

Daily time series: Abs. returns.

^a Notes. The entries in the table show 95% confidence sets for the model parameters constructed by inverting the S and H tests. The sets were constructed by computing the tests for a grid of parameter values, where −0.49 ≤ d ≤ 1.49, 0 ≤ c ≤ 30, and 0 ≤ g ≤ 30. The confidence sets are shown as intervals (a, b), where disconnected sets include more than one interval, and empty sets are denoted by blanks.
The behavior of employee hours per capita has received considerable attention in the recent vector autoregression (VAR) literature (see Gali (1999), Christiano, Eichenbaum, and Vigfusson (2003), Pesavento and Rossi (2005), and Francis and Ramey (2005)). The results shown here are consistent with unit root but not I(0) low-frequency behavior. Francis and Ramey (2006) discussed demographic trends that are potentially responsible for the high degree of persistence in this series. Postwar nominal interest rates are consistent with a unit root but not an I(0) process, and the S statistic similarly rejects the I(0) model for the long-annual nominal bond rates. In contrast, the I(0) model is not rejected using the S test for real interest rates.

Several of the data series, such as the logarithm of the ratio of consumption to income, represent error correction terms from putative cointegrating relationships. Under the hypothesis of cointegration, these series should be I(0). Table IV shows that real unit labor costs (the logarithm of the ratio of labor productivity to real wages, y − n − w in familiar notation) exhibit limited persistence: the I(1) model is rejected but the I(0) model is not rejected. The "balanced growth" cointegrating relationship between consumption and income (e.g., King, Plosser, Stock, and Watson (1991)) fares less well: the I(1) model is not rejected, but the I(0) model is rejected. The apparent source of this rejection is the large increase in the consumption–income ratio over the 1985–2004 period, a subject that has attracted much recent attention (for example, see Lettau and Ludvigson (2004) for an explanation based on increases in asset values). The investment–income relationship also appears to be at odds with the null of cointegration (although, in results not reported in the table, this rejection depends in part on the particular series used for investment and its deflator).

Finally, the stability of the logarithm of the earnings–stock price ratio or dividend–price ratio, and the implication of this stability for the predictability of stock prices, has been an ongoing subject of controversy (see Campbell and Yogo (2006) for a recent discussion). Using Campbell and Yogo's (2006) annual earnings–price data for the SP500 from 1880 to 2002, both the I(0) and I(1) models are rejected (and similar results are found for their dividend–price data over the same sample period). Confidence intervals constructed using the S statistic suggest less persistence than a unit root (for example, the confidence interval for the fractional model includes 0.28 ≤ d ≤ 0.90). The shorter (1928–2004) Center for Research in Security Prices (CRSP) dividend yield (also from Campbell and Yogo (2006)) is consistent with more low-frequency persistence, and the I(1) model is not rejected.

Ding, Granger, and Engle (1993) analyzed the absolute value of daily returns on the SP500 and showed that the autocorrelations decayed in a way that was remarkably consistent with a fractional process. The low-frequency characteristics of the data are consistent with this finding. Both the unit root and I(0) models are rejected by the S statistic, but models with somewhat less persistence than the unit root, such as the fractional model with 0.08 < d < 0.88, are not rejected.
A striking finding from Table IV is the number of models that are rejected by the H test. For example, the low-frequency heteroskedasticity in the long-annual GDP series (in large part associated with the post-WWII decline in volatility) leads to a rejection of all of the models. Several of the other series show similar rejections or produce confidence intervals for model parameters with little overlap with the confidence intervals associated with the S tests. These results suggest that for many of the series, low-frequency heteroskedasticity is so severe that it invalidates the limiting results shown in Table I.

Two main findings stand out from this empirical analysis. First, despite the focus on the narrow below-business-cycle frequency band, very few of the series are compatible with the I(0) model. This holds true even for putative cointegration error correction terms involving consumption, income, and investment, and stock prices and earnings. Most macroeconomic series and relationships thus exhibit pronounced nontrivial dynamics below business-cycle frequencies. In contrast, the unit root model is often consistent with the observed low-frequency variability. Second, perhaps the most important empirical conclusion is that for many series there seems to be too much low-frequency variability in the second moment for any of the models to provide a good fit. From an economic perspective, this underlines the importance of understanding the sources and implications of such low-frequency volatility changes. From a statistical perspective, this finding motivates further research into methods that allow for substantial time variation in second moments.

5. ADDITIONAL REMARKS

The analysis in this paper has focused on tests of whether a low-frequency model with a specific parameter value is a plausible data generating mechanism for the transformed data vT. Alternatively, one might ask whether a model as such, with unspecified parameter value, is rejected in favor of another model. A large number of inference procedures have been developed for specific low-frequency models, such as the local-to-unity model and the fractional model. Yet, typically there is considerable uncertainty about the appropriate low-frequency model for a given series. A high-power discrimination procedure would therefore have obvious appeal.

Consider then the problem of discriminating between the three continuous bridges between the I(0) and I(1) models: the fractional model with 0 ≤ d ≤ 1, the local-to-unity model with c ≥ 0, and the local level model with g ≥ 0. These models are obviously similar in the sense that they all nest (or arbitrarily well approximate) the I(0) and I(1) models. More interestingly, recent literature has pointed out that regime switching models and fractional models are similar along many dimensions—see, for example, Parke (1999), Diebold and Inoue (2001), and Davidson and Sibbertsen (2005). Since the local level model can be viewed as a short-memory model with a time varying
mean, this question is closely related to the similarity of the fractional model with 0 < d < 1 and the local level model with g > 0.

This suggests that it will be challenging to discriminate between low-frequency models using the information contained in vT. A convenient way to quantify the difficulty is to compute the total variation distance between the models. Recall that the total variation distance between two probability measures is defined as the largest absolute difference the two probability measures assign to the same event, maximized over all events. Let Σ0 and Σ1 be the covariance matrices of X induced by two models and specific parameter values. Using a standard equality (see, for instance, Pollard (2002, p. 60)), the total variation distance (TVD) between the two probability measures described by the densities fv(Σ0) and fv(Σ1) is given by

    TVD(Σ0, Σ1) = (1/2)∫ |fv(Σ0) − fv(Σ1)| dη(v),

where η is the uniform measure on the surface of a q-dimensional unit sphere. There is no obvious way to solve this integral analytically, but it can be evaluated using Monte Carlo integration. To see how, write

(9)    TVD(Σ0, Σ1) = ∫ 1[fv(Σ1) < fv(Σ0)](fv(Σ0) − fv(Σ1)) dη(v)
                   = ∫ 1[LRv < 1](1 − LRv)fv(Σ0) dη(v),

where LRv = fv(Σ1)/fv(Σ0). Thus, TVD(Σ0, Σ1) can be approximated by drawing v's under fv(Σ0) and averaging the resulting values of 1[LRv < 1](1 − LRv).⁵

⁵It is numerically advantageous to rely on (9) rather than on the more straightforward expression TVD(Σ0, Σ1) = (1/2)∫ |1 − LRv| fv(Σ0) dη(v) for the numerical integration, since 1[LRv < 1](1 − LRv) is bounded and thus possesses all moments, which is not necessarily true for |1 − LRv|.
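A minimal Monte Carlo sketch of this approximation, reusing fv_log_kernel from the Section 3.2 sketch. Draws with density fv(Σ0) are obtained as X/‖X‖ with X ∼ N(0, Σ0); since the norm cancels from LRv, the unnormalized draws can be used directly.

    import numpy as np

    def tvd_mc(Sigma0, Sigma1, n_draws=100000, seed=0):
        rng = np.random.default_rng(seed)
        q = Sigma0.shape[0]
        X = rng.multivariate_normal(np.zeros(q), Sigma0, size=n_draws)
        log_lr = np.array([fv_log_kernel(x, Sigma1) - fv_log_kernel(x, Sigma0)
                           for x in X])
        lr = np.exp(log_lr)
        return np.mean(np.where(lr < 1.0, 1.0 - lr, 0.0))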
Let Σi(θ) denote the covariance matrix of X for model i ∈ {FR, OU, LL} with parameter value θ, and consider the quantity

    Dij(θ) = min_{γ∈Γ} TVD(Σi(θ), Σj(γ)),

where Γ = [0, 1] for j = FR and Γ = [0, ∞) for j ∈ {OU, LL}. If Dij(θ) is small, then there is a parameter value γ0 ∈ Γ for which the distribution of v with Σ = Σj(γ0) is close to the distribution of v with Σ = Σi(θ), so it will be difficult to discriminate model i from model j if indeed Σ = Σi(θ). More formally, consider any model discrimination procedure between models i and j based on v which correctly chooses model i when Σ = Σi(θ) with probability p. By definition of the total variation distance, the probability of the event "procedure selects model i" under Σ = Σj(γ) is at least p − TVD(Σi(θ), Σj(γ)). If Dij(θ) is small, then either the probability p of correctly selecting model i is small or the probability of correctly selecting model j is small for some Σ = Σj(γ0), γ0 ∈ Γ. In the language of hypothesis tests, for any test of the null hypothesis that Σ = Σi(θ), θ ∈ Θ, against Σ = Σj(γ), γ ∈ Γ, the sum of the probabilities of Type I and Type II errors is bounded below by 1 − max_{θ∈Θ} Dij(θ).

The value of Dij(θ) is an (increasing) function of q. Figure 7 plots Dij(θ) for each of the model pairs for q = 13, which corresponds to 52 years of data with interest focused on frequencies lower than 8-year cycles. Panel A plots D_{FR,OU}(d) and D_{FR,LL}(d), and panels B and C contain similar plots for the OU and LL models. Evidently Dij(θ) is small throughout. For example, for all values of d, the largest distance of the fractional model to the local-to-unity and local level models is less than 25%, and the largest distance between the OU and LL models is less than 45%. For comparison, the total variation distance between the I(0) and I(1) models for q = 13 is approximately 90%. Total variation distance using detrended data is somewhat smaller than the values shown in Figure 7. Evidently, then, it is impossible to discriminate between these standard models with any reasonable level of confidence using sample sizes typical in macroeconomic applications, at least based on the below-business-cycle variability in the series summarized by vT.
FIGURE 7.—Total variation distance. Results are shown for the demeaned case with q = 13.
Indeed, to obtain, say, max_{0≤d≤1} D_{FR,OU}(d) ≈ 0.9, one would need a sample size of 480 years (corresponding to q = 120). These results imply that it is essentially impossible to discriminate between these models based on low-frequency information using sample sizes typically encountered in empirical work. When using any one of these one-parameter low-frequency models for empirical work, either one must rely on extraneous information to argue for the correct model choice or one must take these models seriously over a much wider frequency band. Neither of these two options is particularly attractive for many applications, which raises the question whether econometric techniques can be developed that remain valid for a wide range of low-frequency models.

APPENDIX

A.1. Proof of Theorem 1

By standard calculations,

(10)    k^μ_W(r, s) = min(r, s) + 1/3 − (r + s) + (1/2)(r² + s²),

        k^τ_W(r, s) = min(r, s) + Σ_{l=1}^{4} ς_{5−l}(r)ς_l(s),
where ς1(r) = 1/15 − (11/10)r + 2r² − r³, ς2(r) = (3/5)r − 3r² + 2r³, ς3(r) = r, and ς4(r) = 1. Noting that for any real λ ≠ 0, s > 0, and φ,

    ∫_0^s sin(λu + φ)u du = [sin(λs + φ) − λs cos(λs + φ) − sin(φ)]/λ²,

    ∫_0^s sin(λu + φ)u² du = [2λs sin(λs + φ) + (2 − λ²s²)cos(λs + φ) − 2cos(φ)]/λ³,

    ∫_0^s sin(λu + φ)u³ du = [3(λ²s² − 2)sin(λs + φ) + λs(6 − λ²s²)cos(λs + φ) + 6 sin(φ)]/λ⁴,
it is straightforward, but very tedious, to confirm that ∫_0^1 k^i_W(r, s)ϕij(s) ds = λij ϕij(r) for j = 0, 1, . . . when i = μ and for j = −1, 0, 1, 2, . . . when i = τ.
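The eigen-relation can also be checked numerically by discretizing the kernel. A minimal sketch for the demeaned case follows; it assumes the Theorem 1 values λμj = (jπ)^{−2}, so the leading eigenvalues of the discretized operator should be close to 1/π², 1/(2π)², and so on.

    import numpy as np

    n = 400
    s = (np.arange(n) + 0.5) / n
    R, S = np.meshgrid(s, s, indexing="ij")
    K = np.minimum(R, S) + 1.0/3.0 - (R + S) + 0.5*(R**2 + S**2)  # k_W^mu in (10)
    eig = np.sort(np.linalg.eigvalsh(K / n))[::-1]   # integral-operator eigenvalues
    print(eig[:4], 1.0 / (np.pi * np.arange(1, 5))**2)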
Note that {ϕ^μ_j}_{j=0}^{∞} is necessarily the complete set of eigenfunctions, since the cosine expansion is a basis of L²[0, 1]. For the detrended case, it is not hard to see that the two functions ϕ^τ_{−1} and ϕ^τ_0 are the only possible eigenfunctions of k^τ_W(r, s) that correspond to a zero eigenvalue. Furthermore, Nabeya and Tanaka (1988) showed that eigenfunctions of kernels of the form (10) corresponding to nonzero eigenvalues, that is, functions f satisfying ∫_0^1 k^τ_W(r, s)f(s) ds = λf(r) with λ ≠ 0, are the solutions of a second-order differential equation λf″(s) + f(s) = a1ς3(s) + a2ς4(s) under some appropriate boundary conditions. Since ς3 and ς4 are linear, we conclude that f is of the form f(s) = c1 cos(λ^{−1/2}s) + c2 sin(λ^{−1/2}s) + c3 + c4 s. It thus suffices to show that ∫_0^1 f(s)ϕ^τ_j(s) ds = 0 for j ≥ −1 implies c_l = 0 for l = 1, . . . , 4. As ϕ^τ_{−1}(s) and ϕ^τ_0(s) span {1, s}, and ϕ^τ_j, j ≥ 1, are orthogonal to ϕ^τ_{−1} and ϕ^τ_0, this is equivalent to showing that ∫_0^1 f(s)ϕ^τ_j(s) ds = 0 for j ≥ 1 implies c0 = 0 in the parameterization f(s) = c0 sin(ω(s − 1/2) + φ), ω > 0 and φ ∈ (−π, π). A calculation yields that ∫_0^1 f(s)ϕ^τ_1(s) ds = 0 and ∫_0^1 f(s)ϕ^τ_{2[ω/2π]−1}(s) ds = 0 imply φ = 0 or c0 = 0, and c0∫_0^1 sin(ω(s − 1/2))ϕ^τ_{2[ω/2π]}(s) ds = 0 implies c0 = 0.

A.2. Continuity of the Fractional Process at d = 1/2

From Table I and (2), for −1/2 < d < 1/2 and s ≤ r,
    k^μ_{FR(d)}(r, s) = (1/2)[s^{1+2d} + r^{1+2d} − (r − s)^{1+2d} + 2rs − s(1 − (1 − r)^{1+2d} + r^{1+2d}) − r(1 − (1 − s)^{1+2d} + s^{1+2d})],

and for 1/2 < d < 3/2,

    k^μ_{FR(d)}(r, s) = (1/(4d(1 + 2d)))[−(1 − s)r^{1+2d} − s(s^{2d} + (r − s)^{2d} + (1 − r)^{2d} − 1) + r(s^{1+2d} + 1 − (1 − s)^{2d} + (r − s)^{2d}) + sr((1 − s)^{2d} + (1 − r)^{2d} − 2)].
Now, for 0 < s < r, using that for any real a > 0, lim_{x↓0}(a^x − 1)/x = ln a, we find

(11)    lim_{d↑1/2} k^μ_{FR(d)}(r, s)/(1/2 − d) = −(1 − r)²s ln(1 − r) − r²(1 − s) ln r − r(1 − s)² ln(1 − s) + (r − s)² ln(r − s) + (r − 1)s² ln s

and

(12)    lim_{d↑1/2} k^μ_{FR(d)}(r, r)/(1/2 − d) = 2(1 − r)r(−(1 − r) ln(1 − r) − r ln r).
Performing the same computation for lim_{d↓1/2} k^μ_{FR(d)}(r, s)/(d − 1/2) yields the desired result in the demeaned case. The detrended case follows from these results and (3).

A.3. Data Appendix

Table A.I lists the series used in Section 4 and, for each series, the sample period, data frequency, transformation, and data source and notes.

TABLE A.I
DATA DESCRIPTION AND SOURCES^a

Series                       Sample Period        F   Tr       Source and Notes
Real GDP                     1952:1–2005:3        Q   ln, τ    DRI: GDP157
Real GNP (long annual)       1869–2004            A   ln, τ    1869–1928: Balke and Gordon (1989); 1929–2004: BEA (series are linked in 1929)
Inflation                    1952:1–2005:3        Q   lev, μ   DRI: 400 × ln(GDP272(t)/GDP272(t − 1))
Inflation (long annual)      1870–2004            A   lev, μ   GNP deflator (PGNP): 1869–1928: Balke and Gordon (1989); 1929–2004: BEA (series are linked in 1929). Inflation series is 100 × ln(PGNP(t)/PGNP(t − 1))
Productivity                 1952:1–2005:2        Q   ln, τ    DRI: LBOUT (output per hour, business sector)
Hours                        1952:1–2005:2        Q   ln, τ    DRI: LBMN(t)/P16(t) (employee hours/population)
10yr Tbond                   1952:1–2005:3        Q   lev, μ   DRI: FYGT10
1yr Tbond                    1952:1–2005:3        Q   lev, μ   DRI: FYGT1
3mth Tbill                   1952:1–2005:2        Q   lev, μ   DRI: FYGM3
Bond rate                    1900–2004            A   lev, μ   NBER: M13108 (1900–1946); DRI: FYAAAI (1947–2004)
Real Tbill rate              1952:1–2005:2        Q   lev, μ   DRI: FYGM3(t) − 400 × ln(GDP273(t + 1)/GDP273(t))
Real bond rate               1900–2004            A   lev, μ   R(t) − 100 × ln(PGNP(t)/PGNP(t − 1)), where R(t) is the bond rate and PGNP the GNP deflator (both described above)
Dollar/pound real ex. rate   1791–2004            A   ln, μ    1791–1990: Lothian and Taylor (1996); 1991–2004: FRB (nominal exchange rate), BLS (U.S. PPI finished goods), IFS (U.K. PPI manufactured goods)
Unit labor cost              1952:1–2005:2        Q   ln, μ    DRI: LBLCP(t)/LBGDP(t)
Tbond spread                 1952:1–2005:3        Q   lev, μ   DRI: FYGT10 − FYGT1
Real C-GDP                   1952:1–2005:3        Q   lnr, μ   DRI: GDP158/GDP157
Real I-GDP                   1952:1–2005:3        Q   lnr, μ   DRI: GDP177/GDP157
Earnings/price (SP500)       1880–2002            A   lnr, μ   Campbell and Yogo (2006)
Div/price (CRSP)             1926–2004            A   lnr, μ   Campbell and Yogo (2006)
Abs. returns (SP500)         1/3/1928–1/22/2005   D   lnr, μ   SP: SP500(t) is the closing price at date t. Absolute returns are |ln[SP500(t)/SP500(t − 1)]|

^a Notes. The column labeled F shows the data frequency (A, annual; Q, quarterly; D, daily). The column labeled Tr (transformation) shows the transformation: demeaned levels (lev, μ), detrended levels (lev, τ), demeaned logarithms (ln, μ), and detrended logarithms (ln, τ); lnr denotes the logarithm of the indicated ratio. In the column labeled Source and Notes, DRI denotes the DRI economics database (formerly Citibase) and NBER denotes the National Bureau of Economic Research historical data base.
REFERENCES

AKDI, Y., AND D. A. DICKEY (1998): "Periodograms of Unit Root Time Series: Distributions and Tests," Communications in Statistics: Theory and Methods, 27, 69–87. [988]
ANDERSEN, T., T. BOLLERSLEV, P. CHRISTOFFERSEN, AND F. X. DIEBOLD (2007): Volatility: Practical Methods for Financial Applications. Princeton, NJ: Princeton University Press (forthcoming). [994]
BALKE, N. S., AND R. J. GORDON (1989): "The Estimation of Prewar Gross National Product: Methodology and New Evidence," Journal of Political Economy, 97, 38–92. [994,1013]
BEVERIDGE, S., AND C. R. NELSON (1981): "A New Approach to Decomposition of Economic Time Series Into Permanent and Transitory Components With Particular Attention to Measurement of the Business Cycle," Journal of Monetary Economics, 7, 151–174. [979]
BIERENS, H. J. (1997): "Nonparametric Cointegration Analysis," Journal of Econometrics, 77, 379–404. [981]
BOLLERSLEV, T., R. F. ENGLE, AND D. B. NELSON (1994): "ARCH Models," in Handbook of Econometrics, Vol. IV, ed. by R. F. Engle and D. McFadden. Amsterdam: Elsevier Science. [994]
CAMPBELL, J. Y., AND M. YOGO (2006): "Efficient Tests of Stock Return Predictability," Journal of Financial Economics, 81, 27–60. [1007,1014]
CHAN, N. H., AND N. TERRIN (1995): "Inference for Unstable Long-Memory Processes With Applications to Fractional Unit Root Autoregressions," Annals of Statistics, 23, 1662–1683. [983]
CHRISTIANO, L., M. EICHENBAUM, AND R. VIGFUSSON (2003): "What Happens After a Technology Shock," Working Paper 9819, NBER. [1007]
DAVIDSON, J. (2002): "Establishing Conditions for the Functional Central Limit Theorem in Nonlinear and Semiparametric Time Series Processes," Journal of Econometrics, 106, 243–269. [983]
DAVIDSON, J., AND N. HASHIMADZE (2006): "Type I and Type II Fractional Brownian Motions: A Reconsideration," Working Paper, University of Exeter. [995]
DAVIDSON, J., AND P. SIBBERTSEN (2005): "Generating Schemes for Long Memory Processes: Regimes, Aggregation and Linearity," Journal of Econometrics, 128, 253–282. [1008]
DIEBOLD, F. X., AND A. INOUE (2001): "Long Memory and Regime Switching," Journal of Econometrics, 105, 131–159. [1008]
DING, Z., C. W. J. GRANGER, AND R. F. ENGLE (1993): "A Long Memory Property of Stock Market Returns and a New Model," Journal of Empirical Finance, 1, 83–116. [1007]
ELLIOTT, G. (1999): "Efficient Tests for a Unit Root When the Initial Observation Is Drawn From Its Unconditional Distribution," International Economic Review, 40, 767–783. [991,997]
ELLIOTT, G., T. J. ROTHENBERG, AND J. H. STOCK (1996): "Efficient Tests for an Autoregressive Unit Root," Econometrica, 64, 813–836. [1001,1002]
FAMA, E. F. (1970): "Efficient Capital Markets: A Review of Theory and Empirical Work," Journal of Finance, 25, 383–417. [979]
FRANCIS, N., AND V. A. RAMEY (2005): "Is the Technology-Driven Business Cycle Hypothesis Dead? Shocks and Aggregate Fluctuations Revisited," Journal of Monetary Economics, 52, 1379–1399. [1007]
——— (2006): "Measures of Per Capita Hours and Their Implications for the Technology–Hours Debate," Working Paper, U.C. San Diego. [1007]
GALI, J. (1999): "Technology, Employment, and the Business Cycle: Do Technology Shocks Explain Aggregate Fluctuations?" American Economic Review, 89, 249–271. [1007]
GEWEKE, J., AND S. PORTER-HUDAK (1983): "The Estimation and Application of Long Memory Time Series Models," Journal of Time Series Analysis, 4, 221–238. [980,1001]
HARVEY, A. C. (1989): Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge, U.K.: Cambridge University Press. [982]
KARIYA, T. (1980): "Locally Robust Test for Serial Correlation in Least Squares Regression," Annals of Statistics, 8, 1065–1070. [986]
KIM, C.-J., AND C. R. NELSON (1999): "Has the Economy Become More Stable? A Bayesian Approach Based on a Markov-Switching Model of the Business Cycle," Review of Economics and Statistics, 81, 608–616. [994]
KING, M. L. (1980): "Robust Tests for Spherical Symmetry and Their Application to Least Squares Regression," Annals of Statistics, 8, 1265–1271. [986]
——— (1988): "Towards a Theory of Point Optimal Testing," Econometric Reviews, 6, 169–218. [991]
KING, R., C. I. PLOSSER, J. H. STOCK, AND M. W. WATSON (1991): "Stochastic Trends and Economic Fluctuations," American Economic Review, 81, 819–840. [1007]
KWIATKOWSKI, D., P. C. B. PHILLIPS, P. SCHMIDT, AND Y. SHIN (1992): "Testing the Null Hypothesis of Stationarity Against the Alternative of a Unit Root," Journal of Econometrics, 54, 159–178. [992,1001,1002]
LETTAU, M., AND S. C. LUDVIGSON (2004): "Understanding Trend and Cycle in Asset Values: Reevaluating the Wealth Effect on Consumption," American Economic Review, 94, 276–299. [1007]
LOTHIAN, J. R., AND M. P. TAYLOR (1996): "Real Exchange Rate Behavior: The Recent Float From the Perspective of the Past Two Centuries," Journal of Political Economy, 104, 488–509. [1003,1013]
MARINUCCI, D., AND P. M. ROBINSON (1999): "Alternative Forms of Fractional Brownian Motion," Journal of Statistical Planning and Inference, 80, 111–122. [983]
MCCONNELL, M. M., AND G. PEREZ-QUIROS (2000): "Output Fluctuations in the United States: What Has Changed Since the Early 1980's," American Economic Review, 90, 1464–1476. [994]
MCLEISH, D. L. (1974): "Dependent Central Limit Theorems and Invariance Principles," Annals of Probability, 2, 620–628. [983]
MEESE, R. A., AND K. ROGOFF (1983): "Empirical Exchange Rate Models of the Seventies: Do They Fit Out of Sample?" Journal of International Economics, 14, 3–24. [979]
MÜLLER, U. K. (2007a): "An Alternative Sense of Asymptotic Efficiency," Working Paper, Princeton University. [992]
——— (2007b): "A Theory of Robust Long-Run Variance Estimation," Journal of Econometrics, 141, 1331–1352. [981]
MÜLLER, U. K., AND M. W. WATSON (2008): "Supplement to 'Testing Models of Low-Frequency Variability'," Econometrica Supplemental Material, 76, http://www.econometricsociety.org/ecta/Supmat/6814_data and programs.zip. [982]
NABEYA, S., AND K. TANAKA (1988): "Asymptotic Theory of a Test for Constancy of Regression Coefficients Against the Random Walk Alternative," Annals of Statistics, 16, 218–235. [1012]
NELSON, C. R., AND C. I. PLOSSER (1982): "Trends and Random Walks in Macroeconomic Time Series—Some Evidence and Implications," Journal of Monetary Economics, 10, 139–162. [979,980]
NYBLOM, J. (1989): "Testing for the Constancy of Parameters Over Time," Journal of the American Statistical Association, 84, 223–230. [992,1001,1002]
PARKE, W. R. (1999): "What Is Fractional Integration?" Review of Economics and Statistics, 81, 632–638. [1008]
PESAVENTO, E., AND B. ROSSI (2005): "Do Technology Shocks Drive Hours Up or Down? A Little Evidence From an Agnostic Procedure," Macroeconomic Dynamics, 9, 478–488. [1007]
PHILLIPS, P. C. B. (1998): "New Tools for Understanding Spurious Regression," Econometrica, 66, 1299–1325. [981]
——— (2006): "Optimal Estimation of Cointegrated Systems With Irrelevant Instruments," Discussion Paper 1547, Cowles Foundation. [981]
PHILLIPS, P. C. B., AND V. SOLO (1992): "Asymptotics for Linear Processes," Annals of Statistics, 20, 971–1001. [983]
POLLARD, D. (2002): A User's Guide to Measure Theoretic Probability. Cambridge, U.K.: Cambridge University Press. [1009]
ROBINSON, P. M. (2003): "Long-Memory Time Series," in Time Series With Long Memory, ed. by P. M. Robinson. London: Oxford University Press, 4–32. [1001,1002]
SAID, S. E., AND D. A. DICKEY (1984): "Testing for Unit Roots in Autoregressive-Moving Average Models of Unknown Order," Biometrika, 71, 599–607. [980]
STOCK, J. H. (1994): "Unit Roots, Structural Breaks and Trends," in Handbook of Econometrics, Vol. 4, ed. by R. F. Engle and D. McFadden. New York: North-Holland, 2740–2841. [983]
STOCK, J. H., AND M. W. WATSON (2007): "Why Has Inflation Become Harder to Forecast?" Journal of Money, Credit, and Banking, 39, 3–34. [1004]
TAQQU, M. S. (1975): "Convergence of Integrated Processes of Arbitrary Hermite Rank," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 50, 53–83. [983]
VELASCO, C. (1999): "Non-Stationary Log-Periodogram Regression," Journal of Econometrics, 91, 325–371. [982,991]
WOOLDRIDGE, J. M., AND H. WHITE (1988): "Some Invariance Principles and Central Limit Theorems for Dependent Heterogeneous Processes," Econometric Theory, 4, 210–230. [983]
Dept. of Economics, Princeton University, Princeton, NJ 08544, U.S.A., and Dept. of Economics and Woodrow Wilson School, Princeton University, Princeton, NJ 08544, U.S.A.; [email protected].

Manuscript received November, 2006; final revision received November, 2007.
Econometrica, Vol. 76, No. 5 (September, 2008), 1017–1074
LIMITED INFORMATION AND ADVERTISING IN THE U.S. PERSONAL COMPUTER INDUSTRY

BY MICHELLE SOVINSKY GOEREE¹

Traditional discrete-choice models assume buyers are aware of all products for sale. In markets where products change rapidly, the full information assumption is untenable. I present a discrete-choice model of limited consumer information, where advertising influences the set of products from which consumers choose to purchase. I apply the model to the U.S. personal computer market, where top firms spend over $2 billion annually on advertising. I find estimated markups of 19% over production costs, where top firms advertise more than average and earn higher than average markups. High markups are explained to a large extent by informational asymmetries across consumers; full information models predict markups of one-fourth the magnitude. I find that estimated product demand curves are biased toward being too elastic under traditional models. I show how to use data on media exposure to improve estimated price elasticities in the absence of micro ad data.

KEYWORDS: Advertising, information, discrete-choice models, product differentiation, personal computer industry.

¹This paper is based on my 2002 dissertation. Special thanks to my advisors, Steven Stern and Simon Anderson. I am grateful to Costas Meghir and three anonymous referees for their detailed comments, which substantially improved the paper. The paper has benefited from comments of seminar participants at Amsterdam, Arizona, Claremont McKenna, Edinburgh, KU Leuven, Southern California, Tilburg, UC Irvine, Virginia, Warwick, Yale, EARIE meetings, and IIOC meetings, and discussions with Dan Ackerberg, Steve Berry, Greg Crawford, Jacob Goeree, Phil Haile, Mike Keane, Aviv Nevo, Margaret Slade, Matt Shum, Frank Verboven, and Michael Waterson. I thank Gartner Inc. and Sandra Lahtinen for making the data available. I am grateful for financial support from the University of Virginia's Bankard Fund for Political Economy.

© 2008 The Econometric Society    DOI: 10.3982/ECTA4158
1. INTRODUCTION IN 1998 OVER 36 MILLION PERSONAL COMPUTERS (PCs) were sold in the United States, generating over $62 billion in revenues—over $2 billion of which was spent on advertising. The PC industry is one in which products change rapidly, with approximately 200 new products introduced by the top 15 firms every year. Due to the large number of PCs available and the frequency with which new products are brought into the market, consumers are unlikely to be aware of all PCs for sale. Furthermore, it is reasonable to suspect consumers have limited information in many industries. Traditional random coefficient discrete-choice models are estimated under the assumption that buyers are aware of all available products. Within the full information framework, Berry, Levinsohn, and Pakes (hereafter BLP) (1995) showed that it is important to allow for consumer taste heterogeneity so as to obtain realistic estimates of demand elasticities. This paper adds to BLP and shows that it is just as important to allow for heterogeneity in consumer information in industries with a rapidly changing product line. Indeed, in rapidly 1 This paper is based on my 2002 dissertation. Special thanks to my advisors, Steven Stern and Simon Anderson. I am grateful to Costas Meghir and three anonymous referees for their detailed comments, which substantially improved the paper. The paper has benefited from comments of seminar participants at Amsterdam, Arizona, Claremont McKenna, Edinburgh, KU Leuven, Southern California, Tilburg, UC Irvine, Virginia, Warwick, Yale, EARIE meetings, and IIOC meetings, and discussions with Dan Ackerberg, Steve Berry, Greg Crawford, Jacob Goeree, Phil Haile, Mike Keane, Aviv Nevo, Margaret Slade, Matt Shum, Frank Verboven, and Michael Waterson. I thank Gartner Inc. and Sandra Lahtinen for making the data available. I am grateful for financial support from the University of Virginia’s Bankard Fund for Political Economy.
This paper presents a model of limited information in which the imperfect substitutability between different brands may arise from limited consumer information about product offerings as well as from idiosyncratic brand preferences. The limited information model incorporates three important sources of consumer heterogeneity: choice sets, tastes, and advertising media exposure. Following the data combining approach of Petrin (2002), I show how to estimate a model of limited information in the absence of microlevel advertising data, which are difficult to obtain in many industries.2

The results suggest that traditional models, which rule out nonrandom informational asymmetries a priori, can yield estimates of product-specific demand curves that are biased toward being too elastic. The estimates indicate that advertising has very different informative effects across individuals and media, and that allowing for heterogeneity in consumer information yields more realistic estimates of demand elasticities. The results show that (i) limited information about a product is a contributing factor to differences in purchase outcomes and (ii) information is distributed across households in a nonrandom way. An implication of these findings is that assuming full information may lead to incorrect conclusions regarding the intensity of competition. Indeed, I find high estimated median markups in the PC industry in 1998, about 19%, whereas traditional full information models suggest the industry was more competitive, with estimated markups of only 5%. Furthermore, the results suggest top firms benefit from limited consumer information, with the top firms earning higher than average markups and engaging in higher than average advertising. These implications are of particular importance when addressing policy issues.

The paper proceeds as follows: in the next section I describe the data. I discuss the model and identification in Sections 3 and 4. Estimation is discussed in Section 5. The results from preliminary regressions and from the full model are presented in Sections 6 and 7, respectively. I describe the specification tests and conclude in the final Sections 8 and 9.

2. DATA

Product Level Data

The product level data were provided by Gartner Inc. and consist of quarterly shipments and dollar sales of all PCs sold between 1996 and 1998.3

2 Recent structural studies of advertising utilizing micro purchase and advertising exposure data include Erdem and Keane (1996), Ackerberg (2003), and Anand and Shachar (2004). Shum (2004) matched aggregate advertising data to micro purchase data.
3 Prices are dollar sales divided by units sold and are deflated using the Consumer Price Index from the Bureau of Labor Statistics.
TABLE I
SUMMARY STATISTICS FOR MARKET SHARES, ADVERTISING, PRICES, AND MARKUPSa

                      % Dollar Home         Avg. Annual  Ad-to-   Median Price,  Median % Markup, Home Sector
Manufacturer          Market Share          Ad Expend.   Sales    Home Sector    Over Marginal  Including
                      1996   1997   1998    ($M)         Ratio                   Costs          Ad Costs
Acer                  6.20   6.02   4.37    117          5.4%     $1708          9%
Apple                 6.66   5.79   9.16    161          5.3%     $1859          9%
AST                   3.08   1.53   —
Compaq               11.89  16.29  16.43    208          2.4%     $2070
Dell                  2.46   2.87   2.57    150          2.1%     $2297
Gateway               8.94  11.77  16.43    277          5.6%     $2767
Hewlett–Packard       4.02   5.52  10.05    651         17.7%     $2203
IBM                   8.49   7.42   6.85   1189         20.1%     $2565
Micron                3.26   4.05   1.68
NEC                   3.22   —      —
Packard–Bell         23.48   —      —
Packard–Bell NEC      —     21.02  16.33
Texas Instruments     1.40   —      —
Top 6 firms          65.67  68.31  75.26    469          9.1%     $2172         17%            12%
15 included          83.11  82.27  83.88    327          7.2%     $2075         11%             7%
Industry                                                 3.4%     $2239         15%            10%

a Notes: Others in the 15 included are AT&T (NCR), DEC, and Epson, each of which held less than 1% of the home (and total) market share in 1996 and 1997. AST and Micron held less than 1% total market share on average. In 1997 three mergers occurred: Packard Bell–NEC–ZDS; Acer–Texas Instruments; Gateway–Advanced Logic Research. Ad expenditures (in $M) and ad-to-sales ratios are annual averages from LNA and include all sectors (home, business, education, government). Percentage markups are the median (price − marginal cost)/price across all products. The last column is the percentage total markup per unit after including advertising; these are determined from estimated markups and estimated effective product advertising in the home sector.
The majority of firms sell to the home market, businesses, educational institutions, and the government. Since the focus of this research is on consumer behavior, I use the home market data to estimate the model.4 Sales to the home market comprise over 30% of all PCs sold.

As can be seen from Table I, the PC industry is concentrated, with the top six firms accounting for over 69% (71%) of the dollar (unit) home market share on average. The major market players did not change over the period, although there was significant change in some of their market shares. The top ten firms, based on home market share (Acer, Apple, Compaq, Dell, Gateway, Hewlett–Packard, IBM, Micron, NEC, and Packard–Bell), account for over 80% of PC sales to the home market. The analysis includes the top ten firms and five others (AST, AT&T/NCR, DEC, Epson, and Texas Instruments) to make full use of the micro-purchase data.5 The 15 included firms account for over 85% (83%) of the dollar (unit) home market share on average.

I have data on five main PC attributes: manufacturer (e.g., Dell), brand (e.g., Latitude LX), form factor (e.g., desktop), CPU type (e.g., Pentium II), and CPU speed (megahertz (MHz)). I define a model as a manufacturer–brand–CPU type–CPU speed–form factor combination. Due to data limitations, I do not include some essential product characteristics (such as memory or hard disk) or product peripherals (such as CD-ROM or modem). However, the ease with which consumers can add on after purchase (by buying RAM or a CD-ROM, for instance) would make it difficult to determine consumer preferences over these dimensions. The data I use consist of a more limited set of attributes, but ones which cannot be easily altered after purchase.

The Gartner data still allow for a very narrow model definition. For example, the Compaq Armada 6500 and the Armada 7400 are two separate models. Both have Pentium II 300/366 processors, 64 MB standard memory, a 56 Kb/s modem, an expansion bay for peripherals, and full-size displays and keyboards. The 7400 is lighter, although somewhat thicker; it has a larger standard hard drive and more cache memory. In both models the hard drive and memory are expandable up to the same limit. Similarly, the Apple Macintosh Power PC 604 180/200 desktop and deskside are two separate models; they differ only in their form factor.

Treating a model/quarter as an observation, the sample size is 2112,6 representing 723 distinct models. The majority of the PCs offered to home consumers were desktop PCs (70%), and over 83% of the processors were Pentium based. The number of models offered by each firm varied: Compaq had the largest selection, with 138 different choices, while Texas Instruments offered only five. On average, each firm offered a model for three quarters.

The market size is the number of U.S. households in a given period, as reported by the Census Bureau. Market shares are unit sales of each model divided by market size. The outside good market share is one minus the share of the inside goods.

4 I use the non-home sector data in the supply side of the model (see Section 3.3).
5 While all firms were active in 1996, by 1998 Texas Instruments had merged with Acer, DEC had merged with Compaq, and the other three firms had disappeared from the home market. I treat changes in the number of products and firms as exogenous variation, a common assumption in this literature. I discuss the impact of including the smaller firms on the results in Section 8.
6 This is the sample size after eliminating observations with negligible quarterly market shares.
Advertising Data

Due to data limitations, previous studies were unable to consider the differential effects of advertising across media. I use advertising data from Competitive Media Reporting's (CMR) LNA/Multi-Media publication, which includes quarterly ad expenditures across ten media. Some of the media channels are not used frequently by PC firms; for example, outdoor advertising for PCs is rare (on average less than 0.3% of ad expenditures). I aggregate the media into newspaper, magazine, television (TV), and radio categories.7 These broader channels contain more nonzero observations, aiding identification of media-specific parameters.

These data are not broken down by sector (e.g., home, business, etc.). CMR categorizes advertising across product types, which, in some instances, allows me to isolate non-home expenditures. For example, some expenditures are reported in detail (e.g., IBM RS/6000 server), while others are reported generally (e.g., IBM various computers). As a result, the ad measure includes some expenditures on non-PC systems intended for non-home sectors (such as mainframe servers and UNIX workstations).

Total ad expenditures by the top firms in the computer industry grew from $1.4 billion in 1995 to over $2 billion in 1998 (an average annual rate close to 13%). As Table I shows, there is much variation across firms. The industry ad-to-sales ratio is 3.4%, but the top firms spend on average over 9% of their sales revenue on advertising. Notably, the majority of the top-firm expenditures are by IBM, whose ad-to-sales ratio is over 20%. IBM's large relative ad expenditures may be due to its non-PC interests (servers, mainframes, UNIX workstations, etc.); to examine this hypothesis, in the model I allow the position of the firm in the non-PC sector to affect the non-home sector marginal revenue of advertising. Excluding IBM's expenditures, the remaining top firms spend an average of 6.5% of their revenue on advertising. In contrast, Compaq's ad-to-sales ratio is only 2.4%.

It is common for PC firms to advertise products simultaneously in groups. For example, in 1996, one of Compaq's ad campaigns involved all Presarios (of which there are 12). One possibility is that group advertising provides as much information about the products in the group as product-specific advertising. However, if group advertising were as effective as product advertising, we would observe only group advertising (the most efficient use of resources). An alternative possibility is that group advertising merely informs the consumer about the firm. If this were the case, we should observe either firm-level (the largest possible group) or product-specific advertising. In reality, firms use a combination of product-specific and group advertising (with groups of varying sizes). I therefore need a measure of ad expenditures by product that incorporates all advertising done for the product. I construct "effective" ad expenditures by adding observed product-specific expenditures to a weighted average of all group expenditures for that product, where the weights are estimated.

7 The magazine medium includes Sunday magazines. The television medium includes network, spot, cable, and syndicated TV. The radio medium includes network and spot radio. There are many zero observations for outdoor advertising, so I add it to the radio medium.
Let $G_j$ be the set of all product groups that include product $j$ (I suppress the time subscript). Let $ad_H$ be (observed) total ad expenditures for group $H \in G_j$, where the average expenditure per product in the group is $\overline{ad}_H \equiv ad_H / |H|$. Then effective ad expenditures for product $j$ are given by

(1)  $ad_j = \sum_{H \in G_j} \bigl( \pi_1\, \overline{ad}_H + \pi_2\, \overline{ad}_H^{\,2} \bigr),$

where the sum is over the different groups that include product $j$.8 This specification allows for increasing or decreasing returns to group advertising. If there is only one product in the group (i.e., the advertising is product-specific), I restrict $\pi_1$ to unity and $\pi_2$ to zero.

8 I call these effective product ad expenditures to indicate they are constructed from observed group and product-specific advertising. To give an idea of the level of detail in the data: in the first quarter of 1998, there were 18 group advertisements for Apple computers. The groups advertised ranged from various computers to PowerBook to Macintosh Power PC G3 Portable (the latter being a specific model). In this quarter the Apple Macintosh Power PC G3 Portable computer belonged to seven different product groups.
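To make the construction in (1) concrete, the following minimal sketch (Python; the group names, expenditures, and weight values are invented for illustration, not estimates) computes effective ad expenditures for one product:

```python
# A hypothetical illustration of equation (1): "effective" product advertising
# built from observed group advertising. Spending figures are invented;
# pi1 and pi2 stand in for the weights estimated in the model.

def effective_ad(groups, pi1, pi2):
    """Sum pi1*avg + pi2*avg**2 over the groups containing the product.

    `groups` is a list of (total_group_expenditure, group_size) pairs, one
    entry per group H in G_j. Product-specific ads form a group of size 1,
    for which the model restricts pi1 = 1 and pi2 = 0.
    """
    total = 0.0
    for ad_H, size in groups:
        avg = ad_H / size          # average expenditure per product in H
        if size == 1:              # product-specific advertising
            total += avg
        else:
            total += pi1 * avg + pi2 * avg ** 2
    return total

# Example: a product advertised alone ($0.3M) and in two groups of 12 and 5.
groups_j = [(0.3, 1), (2.4, 12), (1.0, 5)]
print(effective_ad(groups_j, pi1=0.8, pi2=0.05))
```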
Consumer Level Data

The consumer-level data come from the Survey of Media and Markets conducted by Simmons Market Research Bureau. Simmons collects data on consumers' media habits, product usage, and demographics from about 20,000 households annually. Ideally, one would have individual-level purchase, ad exposure, and demographic data. Unfortunately, these data are not available for the PC industry. However, I am able to use the Simmons data to link demographics with purchases and to control for household variation in advertising media exposure. I use two years of the survey, 1996 and 1997 (data from 1998 were not publicly available). Descriptive statistics are given in Table II.9

The Simmons respondents were asked about their media habits. I use the self-reported media exposure information to control for variation in advertising media exposure across households. I combine the Simmons data with (separate) information on market shares and product characteristics, which enables me to obtain a more precise picture of how media exposure and demand are related. I use these data to construct "media exposure" moments. In addition, Simmons collects information on PC ownership, including whether the individual purchased in the past year and the manufacturer.

9 The Simmons survey oversamples in large metropolitan areas. This causes no estimation bias because residential location is treated as exogenous. To reduce the sample to a manageable size, I select 6700 respondents randomly from each year. The final sample size is 13,400.
TABLE II
DESCRIPTIVE STATISTICS FOR SIMMONS DATAa

                                                Sample              Population
Variable Description                            Mean     Std. Dev.  Mean     Std. Dev.
Male                                            0.663    0.474      0.661    0.473
White                                           0.881    0.324      0.881    0.324
Age (years)                                     47.38    15.68      46.87    15.13
30 to 50 (= 1 if 30 < age < 50)                 0.443    0.497      0.449    0.497
Education (years)                               13.98    2.54       14.00    2.35
Married                                         0.564    0.496      0.572    0.495
Household size                                  2.633    1.429      2.631    1.428
Employed                                        0.695    0.460      0.693    0.461
Income ($)                                      56,745   45,246     56,340   44,465
Inclow (= 1 if income < $60,000)                0.667    0.471      0.669    0.471
Inchigh (= 1 if income > $100,000)              0.107    0.309      0.106    0.308
Own PC (= 1 if own a PC)                        0.466    0.499      0.470    0.499
PCnew (= 1 if PC bought in last 12 months)      0.113    0.317      0.112    0.316

Media Exposure                                  Mean     Std. Dev.  Min      Max
Cable (= 1 if receive cable)                    0.749    0.434      0        1
Hours cable (per week)                          3.607    2.201      0        7
Hours noncable (per week)                       3.003    2.105      0        62
Hours radio (per day)                           2.554    2.244      0        65
Magazine (= 1 if read last quarter)             0.954    0.170      0        1
Number magazines (read last quarter)            6.870    6.141      0        95
Weekend newspaper (= 1 if read last quarter)    0.819    0.318      0        1
Weekday newspaper (= 1 if read last quarter)    0.574    0.346      0        1

a Notes: Unless units are specified, the variable is a dummy. The number of observations in the survey is 39,931; the sample size is 13,400. Media exposure summary statistics are based on reports published by Simmons Market Research.
Approximately 11% of the households purchased a PC in the last 12 months. Respondents were not asked any specifics regarding their PC other than the manufacturer, and only the 15 firms used in estimation were listed separately. I use these data to construct "firm choice" moments. Finally, I use data on the distribution of consumer characteristics from the Current Population Survey (CPS) in the macromoments. Unlike Simmons, the CPS data are available from 1996 to 1998.10 I discuss the media exposure micromoments, the firm choice micromoments, and the macromoments in Section 5.1.
10 For each year I drew 3000 individuals from the March CPS. Quarterly income was constructed from annual data and deflated using the Consumer Price Index. I dropped a few households whose annual income was below $5000. Simmons data indicate that no purchases were made by households with income below $5000; hence eliminating these households should not affect the group of interest.
3. ECONOMIC MODEL

The model primitives are product attributes, consumer preferences, and the notion of equilibrium. In the product- and ad-level data, I observe price, quantity, other measurable product attributes, and ad expenditures across media. In the consumer-level data, I observe consumer attributes, including media exposure, and firm choice. The structural estimation strategy requires me to specify a model of consumer choice and firm behavior, and to derive the implied relationships among choice probabilities.11

3.1. Utility and Demand

An individual chooses from $J$ products, indexed $j = 1, \ldots, J$, where a product is a PC model defined as a manufacturer–brand–CPU type–CPU speed–form factor combination. Product $j$ characteristics are price ($p$), nonprice observed attributes ($x$) (CPU speed, Pentium CPU, firm, laptop form factor, etc.), and attributes unobserved to the researcher but known to consumers and producers ($\xi$).12 The indirect utility consumer $i$ obtains from $j$ at time $t$ is

$u_{ijt} = \delta_{jt} + \mu_{ijt} + \varepsilon_{ijt},$

where $\delta_{jt} = x_j'\beta + \xi_{jt}$ captures the base utility every consumer derives from $j$, and mean preferences for $x_j$ are captured by $\beta$.13 The composite random shock, $\mu_{ijt} + \varepsilon_{ijt}$,14 captures heterogeneity in consumers' tastes for product attributes, and $\varepsilon_{ijt}$ is a mean zero stochastic term distributed independent and identically distributed (i.i.d.) type I extreme value across products and consumers. The $\mu_{ijt}$ term includes interactions between observed consumer attributes ($D_{it}$), unobserved (to the econometrician) consumer tastes ($\nu_i$), and $x_j$. Specifically,

(2)  $\mu_{ijt} = \alpha \ln(y_{it} - p_{jt}) + x_j'(\Omega D_{it} + \Sigma \nu_i), \qquad \nu_i \sim N(0, I_k).$
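As a concrete illustration, the following minimal sketch (Python; all dimensions and parameter values are made up, not estimates, and $\xi$ is omitted for brevity) evaluates the indirect utility in (2) for a single consumer:

```python
import numpy as np

rng = np.random.default_rng(0)
J, K, d = 4, 3, 2                        # products, attributes, demographics
x = rng.random((J, K))                   # observed attributes x_j
delta = x @ np.array([1.0, -0.5, 0.3])   # base utility x_j'beta (xi omitted)
p = np.array([1.2, 2.0, 1.5, 3.0])       # prices (in $000)
y, alpha = 50.0, 0.8                     # income and the coefficient alpha
D = np.array([1.0, 0.4])                 # consumer demographics D_i
Omega = rng.normal(size=(K, d)) * 0.1    # taste shifts from demographics
Sigma = np.diag([0.5, 0.5, 0.5])         # scaling of unobserved tastes
nu = rng.standard_normal(K)              # nu_i ~ N(0, I_K)

mu = alpha * np.log(y - p) + x @ (Omega @ D + Sigma @ nu)
eps = rng.gumbel(size=J)                 # type I extreme value shocks
u = delta + mu + eps                     # u_ijt as in (2)
print(u.argmax())                        # utility-maximizing product
```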
11 The model is static, primarily due to lack of microdata on purchases and ad exposure. A static model does not capture long-term advertising effects, such as brand building. While brand building is important, the majority of PC firms did not change over the period and most had been in existence for many years prior to 1996. These firms would not have as much need to establish a brand image as to spread information about new products. The static framework permits me to focus on the influence of advertising on the choice set absent the additional structure and complications of a dynamic setting. Also, the nature of advertising in the PC industry lends itself to a static framework: products change rapidly, and the effects of advertising today on future information provision are minimal, since the same products are no longer for sale.
12 I do not include brand fixed effects because there are over 200 brands.
13 Note that this indirect utility can be derived from a Cobb–Douglas utility function (see BLP).
14 Choices are invariant to multiplication by a person-specific constant, so I fix the standard deviation of $\varepsilon_{ijt}$. Since there are over 2000 products, estimating an unrestricted covariance matrix is not feasible.
The $\Omega$ matrix measures how tastes vary with $x_j$. I assume that the $\nu_i$ are independently normally distributed with a variance to be estimated; $\Sigma$ is a scaling matrix. Income is $y_{it}$. Consumers have an "outside" option, which includes nonpurchase, purchase of a used PC, or purchase of a new PC from a firm not among the 15 included firms. Normalizing $p_{0t}$ to zero, the indirect utility from the outside option is

$u_{i0t} = \alpha \ln(y_{it}) + \xi_{0t} + \varepsilon_{i0t}.$

I also normalize $\xi_{0t}$ to zero, because I cannot identify relative utility levels.

3.2. Information Technology

In industries where new product introductions are frequent, the full information assumption is not innocuous. This paper considers a model of random choice sets, where the probability that consumer $i$ purchases product $j$ depends upon the probability she is aware of $j$, the probability she is aware of the other products competing with $j$, and the probability she would buy $j$ given her choice set.15 Assuming consumers are aware of the outside option with probability 1, the (conditional) probability that consumer $i$ purchases $j$ is
(3)  $s_{ijt} = \sum_{S \in C_j} \left[ \prod_{l \in S} \phi_{ilt} \prod_{k \notin S} (1 - \phi_{ikt}) \right] \frac{\exp\{\delta_{jt} + \mu_{ijt}\}}{y_{it}^{\alpha} + \sum_{r \in S} \exp\{\delta_{rt} + \mu_{irt}\}},$
where $C_j$ is the set of all choice sets that include product $j$. The $\phi_{ijt}$ term is the probability that $i$ is informed about $j$. The $y_{it}^{\alpha}$ term arises from the presence of the outside good. The outside sum is over all the choice sets that include product $j$.

One could consider calculating (3) directly for each individual, which would require computing all purchase probabilities that correspond to each possible choice set. If there were three products, one could easily calculate, for each individual, the four purchase probabilities corresponding to the $2^{J-1} = 4$ choice sets that include a given product. Given the large number of products in the PC industry ($J = 2112$), it is not feasible to calculate the $2^{J-1}$ purchase probabilities that correspond to each choice set for each individual and product. Obviously, if one observed the choice set, then the computational burden would be substantially eased. Unfortunately, these data are not available. A solution to the computational problem is to simulate the choice set facing $i$, thereby making only one purchase probability computation per individual necessary: the one corresponding to $i$'s simulated choice set. I implement this solution and provide details in Section 5.2. Therefore, the choice set facing an individual is a simulated one and hence is not observed directly from the data; rather, the data used to form the choice sets are those used to construct the $\phi_{ijt}$ term, which I now discuss.

15 Leslie (2004) presented a discrete-choice model with random choice sets. In his model consumers choose seat quality at a Broadway play. Patrons receive a coupon, which gives them the opportunity to purchase a high quality ticket at a discount, with a certain probability.
The information technology, $\phi_{ijt}$, describes the effectiveness of advertising at informing consumers about products. Suppressing time notation, it is given by
(4)  $\phi_{ij}(\theta_\phi) = \frac{\exp(\gamma_j + \lambda_{ij})}{1 + \exp(\gamma_j + \lambda_{ij})},$
which is a function of medium advertising, where the $m = 1, \ldots, M$ media are magazines, newspapers, television, and radio. The $m$th element of the $M \times 1$ vector $a_j$ is the number of ads for $j$ in medium $m$.16 The components of $\phi_{ij}$ that are the same for all consumers are given by

$\gamma_j = a_j'(\varphi + \rho a_j + \iota_m \Psi_f) + \vartheta x_j^{age},$

where the vectors $\varphi$ and $\rho$ measure the effectiveness of advertising media at informing consumers. I include fixed effects for those firms that offered a product every quarter (the $\Psi_f$), but do not estimate a fixed effect for each medium, so $\iota_m$ is a column vector of 1's. Finally, consumers may be more likely to know a product the longer it has been on the market: this is captured by $\vartheta$, where $x_j^{age}$ is the PC age measured in quarters.

Ideally, one would have individual ad exposure data. Unfortunately, these data are not available for many industries, including the PC industry. I control for variation in household ad exposure (as it is related to observables) by using media exposure information from Simmons. The $\lambda_{ij}$ captures consumer information heterogeneity:

$\lambda_{ij} = a_j'(\Upsilon D_i^s \varsigma + \kappa_i) + \widetilde{D}_i' \widetilde{\lambda}, \qquad \ln \kappa_i \sim N(0, I_m).$
The $\Upsilon$ matrix captures how advertising media's effectiveness varies by observed consumer characteristics. Simmons data are used to identify $\Upsilon$, where $D^s$ is a larger set of demographic characteristics from the Simmons data.17 Thus $\Upsilon_m' D_i^s$ is the exposure of individual $i$ to medium $m$, and $a_j'\Upsilon D_i^s$ is the exposure of $i$ to ads for product $j$. The parameter $\varsigma$ measures the effect of this ad exposure on the information set. The $\kappa_i$ vector denotes unobserved (to the econometrician) consumer heterogeneity with regard to ad medium effectiveness.18 I assume $\kappa$ is independent of the other unobservables.

16 The number of advertisements in medium $m$ is advertising expenditures, $ad_{jm}$, divided by the weighted average price of an advertisement in medium $m$. Recall from equation (1) that $ad_{jm}$ is a weighted sum of model-specific and group advertising, where the weights, $\pi_1$ and $\pi_2$, are to be estimated.
17 There are 11 demographic characteristics included in $D^s$. These are measures of age, household size, marital status, income, sex, race, and education.
18 To limit the number of parameters to estimate, I normalize the variance of the $\kappa$ to 1 for all media.
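The following sketch (Python; every dimension, parameter value, and draw is hypothetical, chosen only to show the mechanics) evaluates $\gamma_j$, $\lambda_{ij}$, and the logistic form in (4) for one consumer–product pair:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4                                        # magazines, newspapers, TV, radio
a_j = np.array([5.0, 2.0, 8.0, 0.0])         # ads for product j by medium
varphi = np.array([0.05, 0.03, 0.04, 0.01])  # media effectiveness
rho = np.array([-1e-3, -1e-3, -5e-4, -1e-3]) # returns to scale by medium
psi_f = 0.2                                  # firm fixed effect Psi_f
vartheta, age_j = 0.1, 3.0                   # age effect; PC age in quarters
D_s = np.array([1.0, 0.5, 0.2])              # Simmons demographics (3 shown)
Upsilon = rng.random((M, 3)) * 0.1           # media exposure loadings
varsigma = 0.5                               # effect of ad exposure
kappa = np.exp(rng.standard_normal(M))       # ln kappa_i ~ N(0, I_M)
D_tilde = np.array([1.0, 0.0, 1.0])          # no-advertising covariates
lam_tilde = np.array([-0.5, 0.2, 0.1])

# gamma_j = a_j'(varphi + rho a_j + iota Psi_f) + vartheta * age_j
gamma_j = a_j @ (varphi + rho * a_j) + a_j.sum() * psi_f + vartheta * age_j
# lambda_ij = a_j'(Upsilon D_s varsigma + kappa_i) + D_tilde' lam_tilde
lambda_ij = a_j @ (Upsilon @ D_s * varsigma + kappa) + D_tilde @ lam_tilde
phi_ij = 1.0 / (1.0 + np.exp(-(gamma_j + lambda_ij)))  # logistic form of (4)
print(phi_ij)
```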
In the absence of advertising, consumers still may be (differentially) informed (i.e., $\phi(a = 0) > 0$). The $\widetilde{D}$ (a subset of $D$) proxy for the opportunity costs of acquiring information.19 The magnitude of $\phi_{ij}$ when no advertising occurs depends on $\widetilde{D}_i'\widetilde{\lambda} + \vartheta x_j^{age}$.

Notice that $\phi_{ij}$ depends on own-product advertising only. Allowing informational spillovers would greatly complicate the model. First, the theoretical framework would have to address free-riding in advertising choices across firms. Second, one would need adequate variation in the data to empirically identify the spillover effect across products. For these reasons, I assume the probability a consumer is informed about a product is (conditional on her attributes) independent of the probability she is informed about any other product. Information provided (via advertising) for one product (or by one firm) cannot "spill over" to another product (or to another firm). That is, I assume product or group advertising for product $r \neq j$ provides no information about $j$.

Let $\varkappa_i = (y_i, D_i, \nu_i, \kappa_i)$ be the vector of individual characteristics. I assume that the consumer purchases at most one good per period,20 namely that which provides the highest utility, $U$, from all the goods in her choice set. Let

$R_j \equiv \{\varkappa : U(\varkappa, p_j, x_j, a_j, \xi_j, \varepsilon_{ij}) \geq U(\varkappa, p_r, x_r, a_r, \xi_r, \varepsilon_{ir}) \;\; \forall r \neq j\}$

define the set of variables that results in the purchase of $j$ given the parameters of the model. The home market share of product $j$ is

(5)  $s_j = \int_{R_j} dG(y, D, \nu, \kappa) = \int_{R_j} s_{ij}\, dG_{yD}(y, D)\, dG_{\nu}(\nu)\, dG_{\kappa}(\kappa),$

where $G(\cdot)$ denotes the respective distribution functions. The second equality follows from independence assumptions. The conditional probability that $i$ purchases $j$, $s_{ij}$, is given in (3). Market share is a function of the prices and advertising of all products. The smaller is $\phi_{ij}$, the smaller is product market share. If $\phi_{ij}$ were equal to 1 for all products, market share would be the standard full information choice probability.21 Demand for $j$ at time $t$ is $M_t s_{jt}$, where $M_t$ is the market size, given by the number of households in the United States.
19 These consist of dummies for high school graduate, income < $60,000, and income > $100,000.
20 This assumption may be unwarranted for some products for which multiple purchase is common. However, it is not unreasonable to restrict a consumer to purchasing one computer per quarter. Hendel (1999) examined purchases of PCs by businesses and presented a multiple-choice model of PC purchases.
21 Grossman and Shapiro (1984) (GS) presented a theoretical circle model in which ad messages provide information about product availability. The empirical model presented here differs along several dimensions: (i) I allow for a more flexible model of differentiation and estimate a discrete-choice model (Anderson, de Palma, and Thisse (1989)); (ii) unlike GS, consumers may be informed even if there is no advertising; (iii) I do not observe individual-specific ad messages, which are central to GS; (iv) once a consumer is aware of the product, she is also aware of its attributes. Hence, the information technology (and market shares) differ from GS.
3.3. Firm Behavior

I include the supply side for a few reasons. First, firms often advertise products in groups. The model of demand requires a measure of product advertising that incorporates all advertising done for the product. I construct effective product ad expenditures as a weighted average of group ads for that product with estimated weights ($\pi_1$ and $\pi_2$); supply side moments are used to identify the weights. Second, following BLP, I use information from the first-order conditions to estimate marginal costs, which allows me to calculate markups. Finally, I compare my model to benchmark cases; the supply side helps to more precisely estimate some of the parameters in the benchmark models.

I assume there are $f = 1, \ldots, F$ noncooperative Bertrand–Nash competitors. Each firm produces a subset of the $J$ products, $J_f$. Suppressing time notation, profits of firm $f$ are

(6)  $\sum_{j \in J_f} (p_j - mc_j)\, M s_j(p, a) + \sum_{j \in J_f} \Pi_j^{nh}(p^{nh}) - \sum_{j \in J_f} \sum_m mc_{jm}^{ad}\, a_{jm} - C_f,$
where $s_j$ is home market share given in (5), $mc_j$ is marginal cost of production, $\Pi_j^{nh}$ is gross profit (before advertising) from the non-home sectors, $p^{nh}$ is price in the non-home sector, $mc_{jm}^{ad}$ is marginal cost of advertising in medium $m$, and $C_f$ are fixed costs of production.

Following BLP, I assume $mc_j$ are log-linear and composed of unobserved ($\omega_j$) and observed ($w_j$) cost characteristics and parameters to be estimated ($\eta$). I expect $\omega_j$ to be correlated with $\xi_j$ because PCs with high unobserved quality might be more expensive to produce; I account for this correlation in estimation. The (log) marginal cost function is

(7)  $\ln(mc_j) = w_j'\eta + \omega_j.$
I assume $mc_{jm}^{ad}$ are composed of observed components, $w_{jm}^{ad}$ (such as the average price of an ad),22 and unobserved components, $\tau_j$. The (log) marginal cost of advertising in $m$ is

(8)  $\ln(mc_{jm}^{ad}) = w_{jm}^{ad\prime}\psi + \tau_j, \qquad \tau_j \sim N(0, I_m),$
where $\psi$ is to be estimated. I set the variance of $\tau_j$ to 1 for all media channels.23

22 The CMR data consist of ad expenditures across ten media. The quarterly average ad price in media group $m$ is a weighted average of ad prices in the original categories comprising group $m$. The weights are firm-specific and are determined by the distribution of the firm's advertising across the original media.
23 Computational constraints dictate that I choose which are the more interesting parameters to estimate.
Given their products and the advertising, prices, and attributes of competing products, firms choose prices and advertising media levels simultaneously to maximize profits. Product attributes that affect demand ($x_j, \xi_j$) and those that affect marginal costs ($w_j, \omega_j, w_{jm}^{ad}, \tau_j$) are treated as exogenous to price and advertising decisions.24 Firms may sell to home and non-home sectors. Constant marginal costs imply pricing decisions are independent across sectors.25 Any product sold in the home sector will have prices that satisfy

(9)  $s_j(p, a) + \sum_{r \in J_f} (p_r - mc_r) \frac{\partial s_r(p, a)}{\partial p_j} = 0.$
However, an advertisement intended to reach a home consumer may affect sales in other sectors. Optimal advertising choices must equate the marginal revenue of an additional advertisement in all sectors with the marginal cost. Advertising medium choices satisfy

(10)  $M \sum_{r \in J_f} (p_r - mc_r) \frac{\partial s_r(p, a)}{\partial a_{jm}} + mr_j^{nh} = mc_{jm}^{ad},$

where $mr_j^{nh}$ is the marginal revenue of advertising in non-home market sectors.26 Specifically, $mr_j^{nh} = \theta_p^{nh} p_j^{nh} + x_j^{nh\prime}\theta_x^{nh}$. Characteristics of product $j$ sold in the non-home sector are price ($p_j^{nh}$) and other observable characteristics ($x_j^{nh}$),27 including advertising, CPU speed, and non-PC firm sales. The $\theta^{nh}$ are parameters to be estimated. Let $\eta^{AD} = \{\mathrm{vec}(\psi), \mathrm{vec}(\theta^{nh})\}$.
24 Adequately addressing the issue of endogenous product characteristics would require a dynamic model of the process that generates product characteristics. This topic is beyond the scope of this paper.
25 Pricing decisions may not be independent across sectors (if the price of a particular laptop is lower for business, a consumer might buy the laptop from her business account for use at home). Identification of a model which includes pricing decisions across all sectors would require richer data for the non-home sectors. Also, education, business, and government groups usually purchase multiple PCs, which greatly complicates the model (Hendel (1999)). While the assumptions that I impose imply independent pricing decisions, the estimates are sensible, and goodness-of-fit tests suggest the model fits the data reasonably well.
26 Ideally, one would construct $mr^{nh}$ in a structural framework. Identification would require much richer data, and one should allow for multiple purchases. The $mr^{nh}$ could also depend on rivals' prices and advertising; this would increase the estimation burden and require more of the advertising data. Since my focus is on the home sector, I approximate the $mr^{nh}$ with the simplified specification above.
27 Non-PC sales are constructed by subtracting quarterly PC sales from quarterly total manufacturer sales (as recorded in firm quarterly reports). Therefore, non-home sales include sales of computer systems such as mainframes, servers, and UNIX workstations.
4. IDENTIFICATION

Following the literature, I assume that the demand and pricing unobservables (evaluated at the true parameter values, $\Theta_0$) are mean independent of a set of exogenous instruments, $z$:

(11)  $E[\xi_j(\Theta_0) \mid z] = E[\omega_j(\Theta_0) \mid z] = 0.$
I do not observe $\xi_j$ or $\omega_j$, but market participants do. This leads to endogeneity problems because prices and ad choices are most likely functions of unobserved characteristics. If price is positively correlated with unobserved quality, price coefficients (in absolute value) will be understated (as the preliminary estimates in Section 6 indicate), whereas if advertising is positively correlated with quality, its effect will be overstated.28 A solution involves instrumental variables.29

BLP showed that variables that shift markups are valid instruments for price in differentiated products models. In a limited information framework, the components of $z$ include the characteristics of all the products marketed (the $x$), variables that determine production costs (the components of the $w$ that are not in $x$), and variables that determine advertising costs (the components of $w^{ad}$).30 The value of the instrument for any given product can be any function of $z$.

The intuition motivating the advertising instruments is similar to that used by BLP to motivate the price instruments. Products which face more competition (due to many rivals offering similar products) will tend to have lower markups relative to more differentiated products. Advertising for $j$ depends on $j$'s markup: as the advertising first-order conditions (FOCs) in (10) indicate, a firm will advertise a product more, the more it makes on the sale of the product, ceteris paribus. The pricing FOCs in (9) show the optimal price (and hence markup) for $j$ depends on the characteristics of all of the products offered. Therefore, the optimal price and advertising depend on the characteristics, prices, and advertising of all products offered. Note also that the level of advertising for $j$ in medium $m$ depends on the marginal cost of advertising in that medium. Thus the instruments will be functions of the attributes, product cost shifters, and advertising cost shifters of all other products.

Given (11) and regularity conditions, the optimal instrument for any disturbance–parameter pair is the expected value of the derivative of the disturbance with respect to the parameter, evaluated at $\Theta_0$ (Chamberlain (1987)).
28 See Milgrom and Roberts (1986).
29 Berry (1994) was the first to discuss the implementation of instrumental variables methods to correct for endogeneity between unobserved characteristics and prices. BLP provided an estimation technique. My model and estimation strategy are in this spirit, but adapted to also correct for advertising endogeneity.
30 Variables that determine production costs that are not in $x$ include a time trend. Hence, production cost shifters do not play a large role in identifying demand in the model presented in Section 3.
Optimal instruments are functions of advertising and prices. To use the optimal instruments, I would have to calculate the price and advertising equilibrium for different $\{\xi_j, \omega_j\}$ sequences, compute the derivatives at the equilibrium values, and integrate out over the distribution of the $\{\xi_j, \omega_j\}$ sequences. This is computationally demanding and requires additional assumptions on the joint distribution of $(\xi, \omega)$. I form approximations to the optimal instruments, following BLP (1999), by evaluating the derivatives at the expected value of the unobservables ($\xi = \omega = 0$). The instruments will be biased, since the derivatives evaluated at the expected values are not the expected value of the derivatives. However, the approximations are functions of exogenous data and are constructed such that they are highly correlated with the relevant functions of prices and advertising; hence the exogenous instruments will be consistent estimates of the optimal instruments.31 Details are given in Appendix A.

There is a potential endogeneity problem in the microdata. If a consumer with an a priori higher tendency to purchase a particular product chooses which media to consult in the decision process, then media exposure will be correlated with the unobservables. To the extent that exposure is driven by the intention to buy, exposure and purchase decisions will be correlated even if ad exposure has no impact on the purchase decision. To account for the dependence of media exposure on the decision to buy, I would have to model the decision to engage with a particular medium and define the joint probability of purchase and media exposure as a function of observables and unobservables.32 Estimation would require richer data and additional assumptions on the distribution of unobservables. Instead, I test for the exogeneity of media exposure (see Smith and Blundell (1986), Rivers and Vuong (1988)), using purchase and media exposure data from Simmons.33 As instruments for media exposure I use the cost of access (subscription price) to various media. Details are given in Appendix B. The tests indicate media exposure endogeneity is not an issue in the data: I cannot reject the null hypothesis that exposure to newspapers, magazines, and cable television is exogenous to the PC purchase decision. Given this motivation, I treat media exposure as exogenous to the purchase decision in the structural model.

I next present an informal discussion of how variation in the data identifies the parameters. I begin with the demand side. Associated with each PC is a

31 One could use a series approximation (BLP) to construct exogenous instruments. I use the more direct approximation (BLP (1999)) since it is more closely tied to the model. Results from logit instrumental variable (IV) regressions indicate the instruments are strong and that they address the endogeneity issues.
32 Anand and Shachar (2004) used microlevel data to estimate a model of TV viewing choices and showed how to overcome the exposure endogeneity problem when consumption decisions also determine ad exposures.
33 Rivers and Vuong (1988) developed a two-step test for the exogeneity of regressors in limited dependent variable models. Wooldridge (2002) showed the exogeneity test is valid when the regressor is a binary variable.
mean utility, which is chosen to match observed and predicted market shares. If consumers were identical, then all variation in sales would be driven by variation in product attributes. Variation in product market shares corresponding to variation in the observable attributes of those products (such as CPU speed) is used to identify the parameters of mean utility ($\beta$).

While a PC may have attributes that are preferred by many consumers (high $\beta$'s), it may also have attributes that appeal to certain types of consumers. For instance, if children like to play PC games, then consumers from large households may place a higher valuation on CPU speed relative to smaller households. Identification of the taste distribution parameters ($\Sigma, \Omega$) relies on information on how consumers substitute (see (2)). There are two issues that merit attention. First, new product introductions are common in the PC industry. Variation of this sort is helpful for identification of $\Sigma$: the distribution of unobserved tastes, $\nu_i$, is fixed over time, but the set of available products is changing over time, so variation in sales patterns as the set of available products changes allows for identification of $\Sigma$. Second, I augment the market level data with microdata on firm choice. The extra information in the microdata allows variation in choices to mirror variation in tastes for product attributes. Correlation between $x_j D_i$ and choices identifies the $\Omega$ parameters.

If consumers were identical, then all variation in the information technology, and the induced variation in shares, would be driven by variation in advertising or the age of the PC. Variation in sales corresponding to variation in PC age identifies $\vartheta$. Variation in sales corresponding to variation in advertising identifies the other parameters of $\gamma_j$. Returns to scale in media advertising ($\rho_m$) are identified by covariation of sales with the second-order (squared) terms in $a_{jm}$.34 Identification of the firm fixed effects ($\Psi_f$) comes from two sources: in the macromoments they are identified by the total variation in sales of all products sold by the firm corresponding to variation in firm advertising; in the micromoments they are identified by observed variation in firm sales patterns corresponding to variation in firm advertising.

One major drawback of aggregate ad data is that I do not observe variation across households. Normally, observed variation in market shares corresponding to variation in household ad media exposure would be necessary to identify $\Upsilon$ and $\varsigma$. The Simmons data contain useful information on media exposure across households. Variation in choices of media exposure corresponding to variation in observable consumer characteristics ($D_i^s$) identifies $\Upsilon$. Variation in sales and ad exposure ($a_j'\Upsilon D_i^s$) identifies the effect of ad exposure on the information set ($\varsigma$). Thus, the Simmons data allow me to sidestep the need for observed ad variation across households. The other parameters of $\lambda_{ij}$, which do not interact with advertising ($\widetilde{\lambda}$), are separately identified from $\Omega$ due to nonlinearities.
34 There is not enough variation in the ad data to estimate $\varphi$ and $\rho$ effects for all media separately. I estimate these parameters for the TV medium and for the combination of the newspaper and magazine media.
Finally, the parameters on group advertising ($\pi_1$ and $\pi_2$) are identified by observed variation in expenditures on group advertisements ($ad_m$) with the number of products in the group, and by functional form.

Variation in prices and shares corresponding to variation in observed cost attributes identifies the cost attributes' effects on production costs. Covariation in ad prices, advertising, and the generalized residuals identifies the effect of ad prices on ad costs.

5. THE ESTIMATION TECHNIQUE

The econometric technique follows recent studies of differentiated products, such as BLP (1995, 2004) and Nevo (2000). The parameters are $\beta$, $\theta = \{\alpha, \Sigma, \Omega, \theta_\phi\}$, $\eta$, and $\eta^{AD}$, where $\theta_\phi = \{\pi_1, \pi_2, \varphi, \rho, \Psi, \vartheta, \widetilde{\lambda}, \Upsilon, \varsigma\}$. Under the assumption that the observed data are the equilibrium outcomes, I estimate the parameters simultaneously using generalized method of moments (GMM). There are five "sets" of moments: (i) from demand, which match the predicted market shares to observed shares; (ii) from pricing decisions, which express an orthogonality between the cost side unobservable and instruments; (iii) from advertising media decisions, which express an orthogonality between the advertising residuals and instruments; (iv) from purchase decisions, which match the model's predictions for the probability individuals purchase from firm $f$ (conditional on observed characteristics) to observed purchases; (v) from media exposure decisions, which match the model's predictions for exposure to medium $m$ (conditional on observed characteristics) to observed exposure.

5.1. The Moments

I use macro product data, ad data, and the CPS consumer data in the first three sets of moments, and micro consumer data in the last two. The strategy of combining micro- and macrodata follows work by Petrin (2002) and BLP (2004).

BLP-Type Macromoments

Following BLP, I restrict the model predictions for $j$'s market share to match observed shares. I solve for $\delta(S, \theta)$, the implicit solution to

$S_t^{obs} - s_t(\delta, \theta) = 0,$
where $S_t^{obs}$ and $s_t$ are vectors of observed and predicted shares, respectively. I substitute $\delta(S, \theta)$ for $\delta$ when calculating the moments.35 The first moment unobservable is

(12)  $\xi_{jt} = \delta_{jt}(S, \theta) - x_j'\beta.$
I use the demand system estimates to compute marginal costs (Bresnahan (1989)). In vector form, the $J$ FOCs from (9) imply

(13)  $mc = p - \Delta(\theta, \delta)^{-1} s(\theta, \delta),$
where $\Delta_{jr} = -(\partial s_r/\partial p_j)\, I_{jr}$, with $I_{jr}$ an indicator function equal to 1 when $j$ and $r$ are produced by the same firm. Combining (13) and (7) yields the second moment unobservable:

(14)  $\omega = \ln\bigl(p - \Delta(\theta, \delta)^{-1} s(\theta, \delta)\bigr) - w'\eta.$
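A minimal sketch of the markup computation in (13)–(14), using made-up prices, shares, share derivatives, and an illustrative ownership structure (one two-product firm and two single-product firms):

```python
import numpy as np

p = np.array([1.0, 1.2, 1.5, 2.0])           # prices
s = np.array([0.10, 0.08, 0.05, 0.04])       # shares s(theta, delta)
dsdp = -np.array([[0.30, -0.02, -0.01, -0.01],
                  [-0.02, 0.25, -0.01, -0.01],
                  [-0.01, -0.01, 0.20, -0.03],
                  [-0.01, -0.01, -0.03, 0.18]])  # ds_r/dp_j (invented values)
owner = np.array([0, 0, 1, 2])               # firm producing each product
same = owner[:, None] == owner[None, :]      # ownership indicator I_jr
Delta = -dsdp * same                         # Delta_jr = -(ds_r/dp_j) I_jr
mc = p - np.linalg.solve(Delta, s)           # equation (13)
omega_plus_weta = np.log(mc)                 # less w'eta gives omega in (14)
print(mc, (p - mc) / p)                      # marginal costs and markups
```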
Advertising Macromoments

Some firms choose not to advertise some products in some media. To allow for corner solutions, I use the method of generalized residuals proposed by Gourieroux et al. (1987). The method is best illustrated by an example. For ease of exposition, I suppress the time subscript. Let $y_i^* = x_i'\beta + u_i$. We observe $y_i^*$ if $y_i^* \geq 0$ and zero otherwise. The errors, $u_i(\beta)$, are linked with $y_i^*$. The errors cannot be used to construct moments because they depend on unobserved variables. Gourieroux et al. suggested an alternative: replace the errors by their best prediction conditional on the observable variables, $E[u_i(\beta) \mid y_i]$, and use these to construct moments.

In this paper the latent variables are optimal advertising levels (denoted $a_{jm}^*$). Due to nonlinearities, the application is more complex, but the technique is the same. We observe $a_{jm}^*$ if $\partial \Pi_j/\partial a_{jm}|_{a_{jm}=a_{jm}^*} = 0$, and $a_{jm} = 0$ if $\partial \Pi_j/\partial a_{jm}|_{a_{jm}=0} < 0$, where $\Pi_j$ is product $j$'s profit from (6). Rewrite the advertising medium FOC as

(15)  $\ln\bigl(mr_{jm}(a_{jm})\bigr) - w_{jm}^{ad\prime}\psi = \tau_{jm},$
where $mr_{jm}$ is medium marginal revenue (the left-hand side of (10)). The latent variable is the implicit solution to (15), so the errors, $\tau_{jm}$, will depend on

35 I use a contraction mapping suggested by BLP to compute $\delta(S, \theta)$. Goeree (2008) showed that the function used in the fixed point algorithm is a contraction mapping; the proof parallels the proof for the full information case.
$a_{jm}^*$. I use the best prediction of $\tau_{jm}$, conditional on observed advertising, to construct moments. In estimation, I fix $\sigma_\tau = 1$. Using ad marginal costs (8) and the interior FOCs (10), the likelihood function is

$\mathcal{L} = \prod_{j:\, a_{jm} > 0} \phi_{normal}(\widetilde{mr}_{jm}) \prod_{j:\, a_{jm} \leq 0} \bigl[1 - \Phi(\widetilde{mr}_{jm})\bigr],$

where $\widetilde{mr}_{jm} \equiv \ln(mr_{jm}(a_{jm})) - w_{jm}^{ad\prime}\psi$, $\phi_{normal}$ is the standard normal probability density function, and $\Phi$ is the cumulative standard normal. The generalized residual for the $j$th observation is

(16)  $\hat{\tau}_{jm}(\hat{\Xi}) = E[\tau_{jm}(\hat{\Xi}) \mid a_{jm}] = \widetilde{mr}_{jm}\,\mathbf{1}(a_{jm} > 0) - \frac{\phi_{normal}(\widetilde{mr}_{jm})}{1 - \Phi(\widetilde{mr}_{jm})}\,\mathbf{1}(a_{jm} = 0),$

where $\Xi$ are the parameters of (15) and $\hat{\Xi}$ is its maximum likelihood estimator.
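A minimal sketch of the generalized residual in (16), assuming the $\widetilde{mr}$ values and observed advertising levels below are given (they are invented for illustration):

```python
import numpy as np
from scipy.stats import norm

# mr_tilde stands in for ln(mr_jm(a_jm)) - w_jm' psi at the estimates;
# a_jm is observed advertising in medium m (zero at corner solutions).
mr_tilde = np.array([0.4, -0.3, 1.1, -0.8])
a_jm = np.array([2.0, 0.0, 5.0, 0.0])

pos = a_jm > 0
tau_hat = np.where(
    pos,
    mr_tilde,                                          # interior: residual is mr_tilde
    -norm.pdf(mr_tilde) / (1.0 - norm.cdf(mr_tilde)))  # corner: inverse Mills ratio term
print(tau_hat)
```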
The (third set of) moments express an orthogonality between the generalized residuals and the instruments. For instance, the $\Xi$ that solves

$\frac{1}{J} \sum_j \frac{\partial \widetilde{mr}_{jm}}{\partial \Xi}\, \hat{\tau}_{jm} = 0$

is the method of moments estimator, where $\partial \widetilde{mr}_{jm}/\partial \Xi$ are the appropriate instruments. Let $T(\delta, mc, \theta, \eta^{AD})$ be the vector of residuals stacked over media and products.

Firm Choice Micromoments

I combine micro firm choice data from Simmons with macro product level data (à la Petrin (2002)).36 The Simmons data connect consumers to firms, thus associating consumer and average product attributes (across firms). These moments allow me to obtain more precise estimates of the parameters of the taste distribution ($\Omega$ and $\Sigma$) and advertising effectiveness ($\Psi_f$). The demographic characteristics for these moments (denoted $D^s$) are not given by the CPS, but are linked directly to purchases. Let $B_i$ be an $F \times 1$ vector of firm choices for individual $i$, and let $b_i$ be a realization of $B_i$, where $b_{if} = 1$ if a brand produced by $f$ was chosen. Define the residual as the difference between the vector of observed choices and the model prediction given $(\delta, \theta)$:
(17)  $B_i(\delta, \theta) = b_i - E_{\nu\kappa} E[B_i \mid D_i^s, \delta, \theta].$
36 Petrin (2002) showed how to combine macrodata with data that link average consumer attributes to product attributes to obtain more precise estimates.
For example, the element of $E_{\nu\kappa} E[B_i \mid D_i^s, \delta, \theta]$ corresponding to firm 2 for consumer $i$ is

$\sum_{j \in J_2} \int \sum_{S \in C_j} \prod_{l \in S} \phi_{ilt} \prod_{k \notin S} (1 - \phi_{ikt}) \times \frac{\exp\{\delta_{jt} + \mu_{ijt}\}}{y_{it}^{\alpha} + \sum_{r \in S} \exp\{\delta_{rt} + \mu_{irt}\}}\, dG_{\nu}(\nu)\, dG_{\kappa}(\kappa),$
where the first summand is over the products sold by firm 2, the integral is over the assumed distributions of $\nu$ and $\kappa$, and the second summand is over all the different choice sets that include product $j$.37 The population restriction for the micromoment is $E[B_i(\delta, \theta) \mid (x, \xi)] = 0$. Let $B(\delta, \theta)$ be the vector formed by stacking the residuals $B_i(\delta, \theta)$ over individuals.

Media Exposure Micromoments

The fifth set of moments is used to estimate $\Upsilon$. These allow me to control for variation in ad exposure across households (as related to observables) via variation in media exposure. The Simmons respondents were ranked according to how often they watched TV, read newspapers, and so forth, relative to others in the surveyed population. I have information on the ranges of respondents' answers, but the survey reports only the quintile to which the consumer belongs. I therefore construct moments arising from an ordered-response likelihood. Let $h_{im}^*$ be the amount of exposure of $i$ to medium $m$:

$h_{im}^* = D_i^{s\prime} \Upsilon_m + \varepsilon_{im},$

where $\varepsilon_{im}$ is a mean zero term distributed i.i.d. standard normal. Defining the first quintile as the highest, $i$ belongs to the $q$th quintile in medium $m$ if $c_{qm} < h_{im}^* < c_{(q-1)m}$, where the $c$ are cutoff values. Let $H_{im}$ be the vector of quintiles for $i$ in $m$, and let $h_{im}$ be a realization of $H_{im}$, where the $q$th element $h_{imq} = 1$ if $i$'s level of exposure falls in $q$. If $\Phi$ is the cumulative standard normal and $\Phi_{iqm} = \Phi(c_{qm} - D_i^{s\prime}\Upsilon_m)$, then

$\Pr(h_{iqm} = 1) = \Phi_{i,q-1,m} - \Phi_{iqm}.$

The maximum likelihood estimate of $\Upsilon_m$ solves

$\sum_i \sum_q h_{iqm}\, \frac{\partial \ln \Pr(h_{iqm} = 1 \mid D_i^s)}{\partial \Upsilon_m} = 0.$
LIMITED INFORMATION AND ADVERTISING
1037
The difference between the vector of observed quintiles and the prediction given Υm , (18)
Him (Υm ) = him − E[Him | Dsi Υm ]
is the residual where the qth element of E[Him | Dsi Υm ] = "iqm − "iq−1m and Zmediaim =
∂ ln Pr(hiqm = 1 | Dsi ) ∂Υmd
are the appropriate instruments. Let Hi (Υ ) be the residuals stacked over media. 5.2. The GMM Estimator I use the GMM to find the parameter values that minimize the objective function Λ ZA−1 Z Λ, where A is a weighting matrix, which is a consistent estimate of E[Z ΛΛ Z], and Z are instruments orthogonal to the composite error term Λ. Specifically, if Zξ , Zω , Zad , Zmicro , and Zmedia are the respective instruments for each disturbance/residual, the sample moments are J ⎤ ⎡ 1 j=1 Zξj ξj (δ β) J ⎢ 1 J Z ω (δ θ η) ⎥ ⎢ J j=1 ωj j ⎥ ⎢ 1 m∗J ⎥ ⎥ ZΛ = ⎢ ⎢ J j=1 Zadj Tj (δ θ ηAD ) ⎥ ⎢ 1 N ⎥ ⎣ N i=1 Zmicroi Bi (δ θ) ⎦ N 1 i=1 Zmediai Hi (Υ ) N where Zξj is column j of Zξ . Joint estimation takes into account the crossequation restrictions on the parameters that affect both demand and supply, which yields more efficient estimates. This comes at the cost of increased computation time since joint estimation requires a non-linear search over all the parameters of the model.38 Simulation As in BLP, the distribution of consumer demographics is empirical. As a result there is no analytical solution for predicted market shares, making simulation of equation (5) necessary. Furthermore, consumers may not know all 38 I restrict the nonlinear search to a subset of the parameters Ω = {θ ηAD }. This restriction is possible since the FOCs with respect to β and η can be expressed in terms of θ. (See Nevo (2000).) I could separately estimate Υ and substitute predicted for actual exposure when estimating the remaining parameters. This would decrease computational time, but, due to the nonlinear nature of the model, would not yield consistent estimates except under specific distributional assumptions.
1038
MICHELLE SOVINSKY GOEREE
products for sale, but I do not observe the choice set facing any one consumer. As I discussed in Section 3.2, a solution is to simulate the choice set.39 An outline of the simulation technique follows. Details are given in Appendix C. I sample a set of “individuals,” where each consists of (vi1 vik ) taste parameters drawn from a multivariate normal, demographic characteristics, (yi Di1 Did ), drawn from the CPS for use in the macromoments, and unobserved advertising medium effectiveness draws, (κi1 κim ), from a multivariate log normal. Simulating individual i’s choice set is a two-step process. I begin by drawing J uniform variables for each individual. First, I compute the probability individual i knows product j for a given value of the parameters. That is, I compute the information technology for each person–product combination (the φij from equation (4) evaluated at the parameter values). Second, I compare i’s uniform draw for each product with the computed φij . If the computed probability i knows product k (i.e., the value of φik ) is larger than the corresponding uniform draw for k, product k is in i’s choice set. I repeat this comparison for all products and form i’s simulated choice set. Note that i’s choice set may change as the parameter values change. I simulate the choice set for the remaining individuals analogously. Given the simulated choice set, I compute choice probabilities for each individual for each product and construct an importance sampler to smooth the simulated choice probabilities.40 The market share simulator is the average over individuals of the smoothed choice probabilities. The process is similar for the micromoments, but I take R draws for each product–individual. The individual product choice probability simulator is the average over the R draws. Individual firm choice probabilities are the sum over the products offered by each firm. 39
Chiang, Chib, and Narasimhan (1999) use micro purchase data for ketchup to model “consideration set” formation. A consideration set is a subset of the 2J−1 choice sets. Due to the stable nature of the industry, the consumer’s consideration set does not change over time, allowing the authors to eliminate choice sets which do not contain all previously purchased brands. Also, there are only four brands for a consumer to consider. The PC industry is much different: it is rapidly changing and there are a large number of products. Therefore, I use a very different approach in modeling (and estimating) choice set heterogeneity. While the approach I take does not a priori limit the potential set of products available to the consumer, the Chiang, Chib, and Narasimhan approach is more flexible in the sense that it does not impose conditional independence among products in a particular consumer’s consideration set. Recent papers addressing consideration sets are Mehta, Rajiv, and Srinivasan (2003), Nierop et al. (2005), and Ching, Erdem, and Keane (2008). 40 I construct an importance sampler by using the initial choice set weight to smooth the simulated choice probabilities. The initial choice set weight is the product over the φs for products in the choice set (computed at initial parameter values) multiplied by the product of (1 − φ) for all products not in the choice set.
LIMITED INFORMATION AND ADVERTISING
1039
The Estimation Algorithm and Properties of the Estimator First calculate the instruments and keep them fixed for the duration of the estimation. Then, given a value of the parameters, Θ, take the following steps: (i) Compute the simulated market shares and solve for the vector δ that equates simulated and observed shares. (ii) Calculate β and compute the demand unobservables, ξ (see (12)). Calculate η and compute the cost side unobservables, ω (see (14)). Compute the ad residual, T . (iii) Simulate the firm purchase probabilities and calculate the microresidual (see (17)). (iv) Compute the media residual (see (18)). (v) Search for the parameter values that minimize the objective function: where Λ is the composite error term resulting from simulated Λ ZA−1 Z Λ, moments. If the parameters do not minimize the moments (according to some criteria), make a new guess of the parameters. Repeat until moments are close to zero. The estimator is consistent and asymptotically normal (Pakes and Pollard (1989)). As the number of pseudorandom draws used in simulation R → ∞, the method of simulated moments covariance matrix approaches the method of moments covariance matrix. To reduce the variance due to simulation, I employ antithetic acceleration (see Stern (1997, 2000)). Geweke (1988) showed that if antithetic acceleration is implemented during simulation, then the loss in precision is of order 1/N (where N are the number of observations), which requires no adjustment to the asymptotic covariance matrix. The reported (asymptotic) standard errors are derived from the inverse of the simulated information matrix which allows for possible heteroskedasticity.41 6. PRELIMINARY ANALYSIS First, I estimate a series of probit models of the decision to purchase a PC (using the Simmons data).42 These regressions establish that advertising exposure impacts demand and guide the choice of variables to include in the structural model. I started by allowing for many explanatory variables including interactions between consumer attributes, education and income splines, and media exposure variables (see Appendix D, Table D.I for selected results). 41
6. PRELIMINARY ANALYSIS
First, I estimate a series of probit models of the decision to purchase a PC (using the Simmons data).42 These regressions establish that advertising exposure impacts demand and guide the choice of variables to include in the structural model. I started by allowing for many explanatory variables, including interactions between consumer attributes, education and income splines, and media exposure variables (see Appendix D, Table D.I, for selected results).
42 While reduced-form estimation is computationally easy, structural analysis has many advantages. It provides estimates that are invariant to changes in policy or competitive factors. It also allows one to specify the effects of advertising: if advertising affects a consumer's choice set, we would expect changes in behavior as advertising changes. This effect is not captured in reduced-form models because it is not possible to be specific about how advertising affects demand. We would also expect changes in firm behavior as variables relating to advertising change, which will have an impact on markups and prices.
The estimates suggest media exposure affects the decision to buy a PC, after controlling for observed consumer covariates.43 Results from likelihood ratio tests reject the hypothesis that media exposure has no effect on PC purchase (at the 1% significance level) and indicate that exposure to the television and magazine media impacts the purchase decision the most.44 I found the consumer attributes that matter most are age, education, and marital status. Household income and size also significantly affect the probability of purchase, although including the presence and/or number of kids does not improve the fit.
Next, I estimate models of firm choice that illustrate the need to instrument for price and advertising in the structural model. As discussed in Section 4, advertising may be endogenous. Due to data limitations, I cannot examine the effects of product advertising on product choice without estimating the structural model. Instead, I examine the effects of firm advertising on firm choice using Simmons data and CMR advertising data combined with data on observable product characteristics. Suppose a consumer who buys a computer first chooses a firm and then a model. Let the consumer's indirect utility be a function of observed attributes that vary by model and firm (price, CPU speed, form factor, etc.), of observed attributes that vary only by firm (firm advertising), and of a generalized extreme value term.
Table D.II in Appendix D presents results of the nested logit regressions. In all specifications, price coefficient estimates are positive and significant. The most obvious explanation is that prices are correlated with quality. After including CPU speed, Pentium, and laptop as explanatory variables (specification 2), the price coefficient is still positive, suggesting there are other product attributes that are positively correlated with prices. Specification 3, which includes total advertising expenditures as an explanatory variable, fits better even though it has fewer explanatory variables. Without indicating how advertising affects demand, the coefficient estimates indicate that advertising may be correlated with higher quality. This obtains from comparing estimates from specifications 1 and 3: price coefficients in the specification with advertising are smaller. Advertising may be capturing some of the effect of unobserved product attributes.45 The results suggest advertising's effect differs across media (specification 4). Finally, after including consumer covariates (specification 6), advertising still influences the decision of firm choice. I account for the possibility that unobserved attributes are correlated with prices and correct for the possible correlation with advertising in the structural model.
43 Unobserved consumer attributes may influence media effectiveness at providing information. The full model allows for unobserved consumer heterogeneity in media effectiveness (the κi; see Section 3.2).
44 I cannot reject the hypothesis that all other media have no impact on purchase probabilities.
45 Comparing specifications 2 and 5 suggests that advertising may impact choice as much as observable product characteristics. However, these results should be interpreted with caution, since the coefficients on product characteristics are estimable only up to a scale factor and are identified due to nonlinearities.
Previous papers (Berry (1994), BLP (1995, 1999), Nevo (2000), and many others) have shown that BLP-type instruments (which I use) can account for the possible correlation between prices and unobserved characteristics and result in a more reasonable estimate of the coefficient on price.
Finally, I estimate a logit model to show that the instruments I use in the full model address the endogeneity issues. Table D.III in Appendix D presents the results. As previous studies have shown, logit demand estimates can be obtained from an ordinary least squares (OLS) regression of ln(sj) − ln(s0) on price, other product characteristics, and firm dummy variables. The included product characteristics are the same as those in the full specification. The first two columns report OLS results. As expected, the price coefficient is negative but small in magnitude. The second column reports results with firm dummy variables, which improve the fit of the model but do not significantly change the price coefficient estimate. Columns (iii) and (iv) present results using BLP (1995) instruments: the sums of the values of the same characteristics of other products offered by the same firm, the sums of the values of the same characteristics of all products offered by rival firms, and the numbers of own-firm and rival-firm products. The remaining columns present results from instrumental variables (IV) regressions using a more direct (but computationally burdensome) approximation to the efficient IV estimator in the spirit of BLP (1999); see Appendix A for details. Both sets of instruments appear to address the endogeneity of price and result in estimates of the price coefficient that are significantly higher in absolute value. Other parameter estimates are similar across specifications, one exception being the sign change on the coefficient for laptop. This is consistent with the idea that price is endogenous: laptops are more portable and hence better (all else constant), and they command higher prices. The first-stage F-statistics for the IV regressions are high, suggesting the instruments have power. While the results suggest both sets of instruments are reasonable candidates for the full model, I chose the more direct approximation to the optimal instruments (based on BLP (1999)) since they are more closely tied to the structure of the model.
7. STRUCTURAL ESTIMATION RESULTS
Product Differentiation
There is much variation in tastes across consumers with respect to product attributes. I estimate the means and the standard deviations of the taste distribution for CPU speed, Pentium, and laptop. In all tables the (asymptotic) standard errors are in parentheses. The mean coefficients (β) are given in the first column and panel of Table III. Estimates of heterogeneity around these means are presented in the next columns. The means of CPU speed and laptop are positive and significant. The results imply that CPU speed and laptop have a significant positive effect on the distribution of utility.
TABLE III
STRUCTURAL ESTIMATES OF UTILITY AND COST PARAMETERS

Utility Coefficients
Variable | Coefficient (Std. Error) | Standard Deviation (Std. Error)
Constant | −12.026** (0.796) | 0.044 (0.558)
CPU speed (MHz) | 9.288** (1.599) | 0.156** (0.017)
Pentium | 1.236* (0.890) | 0.209 (0.886)
Laptop | 2.974** (0.525) | 0.953
ln(income − price) | 1.211** (0.057) |
Acer | 2.624 (4.900) |
Apple | 3.070** (1.032) |
Compaq | 2.662 (18.009) |
Dell | 2.658** (0.301) |
Gateway | 7.411 (14.615) |
Hewlett–Packard | 1.309 (3.905) |
IBM | 2.514** (0.712) |
Micron | −1.159 (6.011) |
Packard–Bell | 4.372* (4.002) |
Interaction with demographics: CPU speed × household size 4.049** (0.674)

Cost Side Parameters
In marginal cost of production:
Constant | 7.427** (0.212)
ln(CPU speed) | 0.462** (0.044)
Pentium | −0.250** (0.007)
Laptop | 1.204** (0.071)
Quarterly trend | −0.156** (0.027)
In marginal cost of advertising:
Constant | 2.631 (7.087)
Price of advertising | 1.051** (0.074)

Non-Home Sector Marginal Revenue
Constant | 11.085 (278.374)
Non-home sector price | 1.815** (0.354)
CPU speed | 0.010** (0.004)
Non-PC sales | 3.688* (1.881)

Notes: ** indicates t-stat > 2; * indicates t-stat > 1. Standard errors are given in parentheses.
In addition, the marginal valuation for CPU speed is (significantly) increasing in household size (4.05). This is intuitive, as children often use the PC to play games (which require higher CPU speeds). Coefficients for the Pentium dummy are not significant at the 5% level. This suggests that once one controls for CPU speed
(and other product attributes), consumers do not place extra value on whether the chip is a Pentium. During this time period 80% of PCs had a Pentium chip; in that light the results may not be so surprising.
The nonrandom coefficient results are also presented in the first panel. The coefficient on ln(y − p) is of the expected sign and is highly significant (1.2). Firm fixed effect estimates indicate that the marginal valuation for a product is (significantly) higher if it is produced by Apple, Dell, IBM, or Packard–Bell. This could capture prestige effects of owning a computer produced by one of the top firms (Apple, IBM, and Packard–Bell). Apple operates on a different platform, so the Apple fixed effect could reflect the extra valuation consumers, on average, place on the Apple platform. Finally, the fixed effects could capture extra valuation consumers place on enhanced services offered by the firms (for instance, Dell is known for its excellent consumer service) or other reputational effects.
The cost and non-home sector estimates are given in the lower panel. Most of the coefficients (η) are of the expected sign and are significantly different from zero. The estimates indicate marginal costs are declining over time, and that increases in CPU speed or producing a laptop increase marginal costs. The only variable with an unexpected sign is Pentium (−0.25), indicating that PCs with a Pentium chip are cheaper to produce. The coefficient on the (log) price of advertising (ψ) is highly significant and indicates that there are not many product-specific cost characteristics that affect the cost of advertising.
The parameter estimates for non-home marginal revenue are given in the bottom panel. All coefficients are positive and significant. Recall that the majority of industry advertising expenditures are by IBM. My conjecture that the high expenditures are due to IBM's non-PC enterprises seems to be supported. I included non-PC sales in the non-home marginal revenue to adjust for the fact that the measure of advertising includes some advertising for non-PCs. The coefficient on non-PC sales (3.7) is significant (at the 5% level) and positive, but the interaction term between IBM and advertising in the information technology (0.9) indicates that advertising by IBM is still more effective relative to some other firms, after controlling for non-PC enterprises. If the IBM fixed effect in the information technology were not significantly different from zero, then I would have concluded that the presence of IBM in the non-PC sector fully explained its large advertising expenditures.
Consumer Information Heterogeneity and Advertising Effectiveness
Not surprisingly, the results indicate that advertising has very different effects across individuals and that exposure to advertising significantly impacts the information set. The first panel of Table IV presents estimates of how media exposure varies with observed demographic characteristics (Υ). These coefficients proxy for the effectiveness of ads in reaching consumers through various media. The results indicate magazines are most effective at reaching high-income individuals, where the effectiveness is increasing in household size.
TABLE IV
STRUCTURAL ESTIMATES OF INFORMATION TECHNOLOGY PARAMETERS

Consumer Information Heterogeneity Coefficients

Media and demographic interactions (Υ); entries are coefficient (std. error) for each medium:
Variable | Magazine (mag) | Newspaper (np) | Television (TV) | Radio
Constant | −1.032** (0.040) | −0.973** (0.040) | −1.032** (0.041) | −1.000** (0.043)
30 to 50 (= 1 if 30 < age < 50) | −0.042* (0.025) | 0.207** (0.025) | 0.019 (0.025) | −0.030* (0.025)
50 plus (= 1 if age > 50) | 0.005 (0.025) | 0.541** (0.025) | 0.193** (0.025) | −0.245** (0.025)
Married (= 1 if married) | −0.022* (0.018) | 0.187** (0.018) | 0.075** (0.018) | −0.011 (0.018)
hh size (household size) | 0.040** (0.006) | −0.038** (0.006) | 0.018** (0.006) | 0.012* (0.006)
inclow (= 1 if income < $60,000) | −0.194** (0.021) | −0.251** (0.021) | 0.114** (0.021) | −0.117** (0.022)
inchigh (= 1 if income > $100,000) | 0.153** (0.029) | 0.127** (0.028) | −0.025 (0.030) | 0.069** (0.030)
malewh (= 1 if male and white) | −0.078** (0.018) | 0.002 (0.018) | −0.019* (0.018) | 0.006 (0.018)
eduhs (= 1 if highest edu 12 years) | −0.102** (0.026) | −0.338** (0.026) | 0.296** (0.027) | 0.076** (0.027)
eduad (= 1 if highest edu 1–3 college) | 0.032* (0.028) | −0.166** (0.027) | 0.278** (0.028) | 0.115** (0.029)
edubs (= 1 if highest edu college grad) | −0.024 (0.025) | −0.063** (0.024) | 0.145** (0.025) | 0.081** (0.026)
edusp (education if <11) | −0.028** (0.003) | −0.069** (0.003) | 0.034** (0.003) | −0.014** (0.003)

Advertising media exposure (ς): media exposure × advertising 0.948** (0.059)

Demographics (λ):
Constant | 0.104** (0.004)
High school graduate | 0.834** (0.028)
Income < $60,000 | 0.687** (0.009)
Income > $100,000 | 0.139 (0.318)

Information Technology Coefficients Common Across Consumers

Age of PC | 0.159** (0.005)
Media advertising (φ, ρ):
np and mag advertising | 0.720* (0.488)
TV advertising | 1.078** (0.418)
(np and mag advertising)² | −0.013 (0.014)
(TV advertising)² | −0.049** (0.004)
Firm total advertising (Ψ): Acer 0.520; Apple 0.163; Compaq 0.504**; Dell 0.497*; Gateway 0.918**; Hewlett–Packard 0.199; IBM 0.926**; Micron 0.029; Packard–Bell 0.231*
Group advertising (π):
Group advertising | 0.891** (0.007)
(Group advertising)² | 0.104** (0.011)

Notes: ** indicates t-stat > 2; * indicates t-stat > 1. Unless units are specified, the variable is a dummy. Standard errors are given in parentheses.
Newspapers are most effective at reaching high-income, married individuals who are above the age of 30, although newspaper advertising is less likely to reach a family the larger is its household (−0.04). Hence, newspaper advertising targeted at large households would not be effective in increasing the probability of being informed for this particular cohort. Perhaps not surprisingly, TV advertising is the most effective medium for reaching low-income households. Television advertising is also effective at reaching married individuals over 50, although not as effective as newspapers. Interestingly, most advertising in the PC industry is in magazines, suggesting PC firms target high-income households.
The results confirm that variation in ad media exposure across households is an important source of consumer heterogeneity. The variation in ad exposure translates into variation in information sets, as evidenced by the positive and highly significant estimate for ς. The estimates highlight the importance of considering the differential effects of advertising both across households and across media. Most of the literature does not incorporate consumer information heterogeneity, which has implications for markups, as discussed shortly.
Parameter estimates of λ suggest other means of information provision, such as word-of-mouth or experience, play a role in informing certain types of consumers. The coefficient on income less than $60,000 (0.69) indicates these individuals are likely to be informed about 41% of the products without seeing an ad, whereas having a high income is not significantly different from having a middle income in terms of being informed without seeing an ad. This could arise because low-income individuals are likely to have lower opportunity costs and thus more time to search for information. In addition, the probability of being informed without seeing any advertising is higher for high school graduates relative to nongraduates.
The lower panel presents estimates of the parameters that are the same across households (the γj parameters). Consumers are significantly more likely to know a PC the longer it has been on the market (0.16). This is intuitive, for the longer it has been on the market, the more opportunity consumers have had to learn of it by word-of-mouth or through advertising. There are decreasing returns to advertising in the TV (−0.05) and newspaper and magazine (−0.01) media, but they are decreasing at a faster rate for TV. Estimates of firm fixed effects interacted with total advertising (Ψ) indicate that some firms are more effective at informing consumers through advertising. Most notably, ads by Compaq, Dell, Gateway, IBM, and Packard–Bell are significantly more effective, which could be due to differences in advertising techniques across firms.
Some products are advertised in groups while others are advertised individually. The coefficient estimates on group advertising (π1) and group advertising squared (π2) are given in the last rows of Table IV. These (unrestricted) estimates predict that we will observe both group and product-specific advertising, which is supported by the data. There are economies of scope in group advertising (0.1). The estimates imply that if average group ad expenditures (ad)
for a particular product group are above a threshold level of $1.05 million per quarter46 (either the expenditures for a group are high or the groups are small), the firm will find it worthwhile to engage in group advertising to capitalize on the economies of scope. To put this into context, in the first quarter of 1998 Apple's advertising strategy involved 17 group advertisements. The estimates suggest we would observe 17 group ads only if Apple's home-sector advertising budget was at least 17 × $1.05 million ≈ $18 million. Apple spent over $180 million on advertising in 1998 and more than $20 million in the first quarter, consistent with the model's prediction.
Substitution Patterns and Information Provision
The estimated parameters have important implications for pricing and advertising behavior and for markups. The markups earned by firms are determined, in part, by the substitution behavior of consumers. Substitution could be induced by changes in prices or choice sets, the latter of which is significantly impacted by advertising, with varying effects across consumers. When advertising changes, the impact on the choice set is more pronounced for those consumers who are more sensitive to advertising. The firms' decisions of what prices to charge and how much information to provide through advertising depend on the price and advertising elasticities of demand.
The top panel of Table V presents a sample from 1998 of own- and cross-price elasticities of demand.47 The table shows all negative elements on the diagonal. Consistent with oligopolistic conduct, the results indicate that the products are priced in the elastic portion of the demand curve. The results show that products are more sensitive to changes in the prices of computers with similar characteristics. For example, Apple computers are most sensitive to changes in the prices of other Apple computers, implying there is less substitution across platforms. Among PCs that have a Windows operating system, form factor plays a strong role in substitution patterns. For example, the Compaq Armada laptop is most sensitive to changes in the prices of other laptops rather than to changes in the prices of other, non-laptop Compaq computers. These intuitive substitution patterns are consistent across the data.
Estimated advertising demand elasticities indicate that, for some firms, advertising for one product has negative effects on other products sold by that firm, but the effect is less negative than for some of the rival products.48
46 The ad threshold is (1 − π1)/π2 = (1 − 0.891)/0.104 ≈ 1.05. If there is only one product in the group, I restrict π1 = 1 and π2 = 0.
47 Elasticities are computed by multiplying the numerical derivative of estimated demand by price and dividing by actual sales.
48 The model does not allow advertising for one product (or by one firm) to have positive spillovers to another product. Hence, the cross-product advertising effects (the off-diagonals in the lower panel of Table V) are all negative. The diagonal elements report the increase in market share from own advertising. For example, an increase of $1000 in advertising for the Dell Latitude results in an increased market share of 0.02%.
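The computation in footnote 47 is simple to sketch. The following minimal example numerically differentiates a demand function and converts the derivatives to elasticities; the toy logit demand system and its parameter values are assumptions for illustration only:

```python
import numpy as np

def price_elasticities(demand, prices, sales, h=1e-4):
    """e[i, j] = (dq_i / dp_j) * p_j / q_i, with the derivative computed by
    central finite differences of the estimated demand function."""
    J = len(prices)
    E = np.zeros((J, J))
    for j in range(J):
        up, dn = prices.copy(), prices.copy()
        up[j] += h * prices[j]
        dn[j] -= h * prices[j]
        dq = (demand(up) - demand(dn)) / (2.0 * h * prices[j])
        E[:, j] = dq * prices[j] / sales
    return E

# Toy logit demand system, purely for illustration.
delta = np.array([1.0, 0.5, -0.2])
alpha = 1.5
def demand(p):
    e = np.exp(delta - alpha * p)
    return e / (1.0 + e.sum())

p0 = np.array([1.0, 1.2, 0.9])
E = price_elasticities(demand, p0, sales=demand(p0))
```

The diagonal of E gives own-price elasticities and the off-diagonals give cross-price elasticities, mirroring the layout of Table V.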
TABLE V
A SAMPLE FROM 1998 OF ESTIMATED PRICE AND ADVERTISING ELASTICITIES

Columns: Apple PowerBook* | Apple Power Mac | Compaq Armada* | Compaq Presario | Dell Latitude* | HP Omnibook* | HP Pavilion | IBM PC | IBM Thinkpad*

Price Elasticities
PowerBook* | −1.2861 | 0.0692 | 0.0243 | 0.0287 | 0.0170 | 0.0219 | 0.0213 | 0.0182 | 0.0165
Power Mac | 0.0856 | −1.1097 | 0.0202 | 0.0222 | 0.0196 | 0.0202 | 0.0248 | 0.0298 | 0.0364
Armada 7xxx* | 0.0150 | 0.0107 | −5.7066 | 0.0193 | 0.0606 | 0.0209 | 0.0203 | 0.0162 | 0.0426
Presario 2xxx | 0.0122 | 0.0272 | 0.0125 | −3.6032 | 0.0230 | 0.0272 | 0.0308 | 0.0348 | 0.0385
Latitude XPI* | 0.0263 | 0.0274 | 0.0357 | 0.0261 | −5.5701 | 0.0225 | 0.0217 | 0.0394 | 0.0453
Omnibook 4xxx* | 0.0179 | 0.0147 | 0.0363 | 0.0298 | 0.0228 | −5.6501 | 0.0269 | 0.0222 | 0.0499
Pavilion 6xxx | 0.0118 | 0.0212 | 0.0153 | 0.0336 | 0.0167 | 0.0227 | −5.1178 | 0.0396 | 0.0359
PC 3xxx | 0.0137 | 0.0322 | 0.0137 | 0.0381 | 0.0153 | 0.0148 | 0.0325 | −3.2626 | 0.0215
Thinkpad 7xxx* | 0.0330 | 0.0192 | 0.0376 | 0.0195 | 0.0304 | 0.0425 | 0.0297 | 0.0291 | −6.9745

Advertising Semielasticities
PowerBook* | 0.0076 | −0.0057 | −0.0142 | −0.0110 | −0.0044 | −0.0139 | −0.0166 | −0.0072 | −0.0097
Power Mac | −0.0057 | 0.0215 | −0.0147 | −0.0273 | −0.0179 | −0.0136 | −0.0243 | −0.0263 | −0.0213
Armada 7xxx* | −0.0616 | −0.0564 | 0.0017 | −0.0057 | −0.0314 | −0.0625 | −0.0441 | −0.0684 | −0.0948
Presario 2xxx | −0.0779 | −0.0827 | −0.0060 | 0.0120 | −0.0208 | −0.1092 | −0.1413 | −0.0825 | −0.0830
Latitude XPI* | −0.0233 | −0.0114 | −0.0278 | −0.0274 | 0.0230 | −0.0380 | −0.0239 | −0.0199 | −0.0438
Omnibook 4xxx* | −0.0034 | −0.0042 | −0.0039 | −0.0043 | −0.0064 | 0.0054 | −0.0021 | −0.0030 | −0.0044
Pavilion 6xxx | −0.0036 | −0.0045 | −0.0038 | −0.0082 | −0.0051 | −0.0066 | 0.0101 | −0.0143 | −0.0054
PC 3xxx | −0.0076 | −0.0085 | −0.0082 | −0.0161 | −0.0182 | −0.0127 | −0.0194 | 0.0095 | −0.0029
Thinkpad 7xxx* | −0.0107 | −0.0088 | −0.0168 | −0.0164 | −0.0185 | −0.0127 | −0.0196 | −0.0020 | 0.0089

Notes: A * indicates a laptop. For price elasticities, cell entries (i, j), where i indexes the row and j the column, give the percentage change in market share of brand i with a 1% change in the price of j. Each entry represents the median of the elasticities from 1998. For advertising semielasticities, cell entries (i, j) give the percent change in the market share of i with a $1000 increase in the advertising of j.
The lower panel presents a sample from 1998. Each semielasticity gives the percentage change in the market share of the row computer associated with a $1000 increase in the (estimated) advertising of the column computer. For instance, a $1000 increase in advertising for the Apple Power Mac results in a decreased market share of around 0.1% for the Compaq Presario, but has very little effect on the market share of the Apple PowerBook. In contrast, an increase in advertising for the HP Omnibook has a large effect (relative to the increase in its own market share) on the market share of the HP Pavilion.
To gain more insight into firms' advertising choices, I use estimated demand to infer marginal costs and markups. Summary statistics are in Table I. The median markup charged by PC firms is 15% over marginal costs of production and 10% over per unit production and (estimated) advertising costs. As the first two rows show, the top firms have higher than average markups and advertising expenditures relative to the industry. Indeed, the non-top firms' average median markup is much lower, 12%, with an ad-to-sales ratio of about 2%. The final column shows that, even after controlling for the fact that the top firms advertise more, they continue to earn higher than average markups. In 1998 the median industry markup was 19% over costs, with the top firms earning a 22% markup. Overall industry and top firm markups were increasing over the period.
The bottom portion of the table gives details for the top firms. Firms' advertising choices are determined by their markups and their advertising elasticities of demand. IBM has one of the highest ad-to-sales ratios. IBM's demand is not more sensitive to advertising relative to other top firms; however, IBM markups are higher than average. The results indicate that IBM advertises more than the average non-top firm because it earns more per product than the average non-top firm. Compaq, on the other hand, has one of the highest markup margins (23%) but still advertises less than average (although not less than the average non-top firm). As expected, Compaq's demand is less sensitive to advertising relative to other firms, which is the driving factor in its advertising decision. In addition, Gateway has the highest median price of the top firms but earns lower than average markups. The lower markups are due to higher costs, as reflected in a higher than average cost unobservable (ω), suggesting Gateway is not as cost-effective at making its computers.
Effects of Limited Information
The high estimated markups are explained in part by the fact that consumers know only some of the products for sale, due in part to the advertising decisions of firms. If all consumers had full information (the assumption made in the literature to date), the market would appear very different. Table VI compares the markups resulting from a model of limited information to those predicted by traditional models.
TABLE VI
ESTIMATED PERCENTAGE MARKUPS UNDER LIMITED AND FULL INFORMATION

Median Percentage Markup
 | Under Limited Information | Under Full Information | Change in Markups
Total industry | 15% | 5% | 67%
Apple | | 2.5% | 84%
  iMac | 22.1% | 3.1% |
  Power Mac | 13.7% | 2.0% |
  PowerBook* | 10.0% | 1.6% |
Compaq | | 7.0% | 69%
  Armada 7xxx* | 41.4% | 3.5% |
  Presario 2xxx | 18.1% | 2.6% |
  Presario 1xxx* | 15.2% | 2.0% |
  ProLinea | 23.3% | 7.0% |
Dell | | 1.8% | 82%
  Latitude XPI* | 7.0% | 1.4% |
  Dimension | 15.5% | 2.4% |
  Inspiron | 9.4% | 1.6% |
Gateway | | 1.7% | 86%
  Gateway Desk Series | 12.8% | 1.9% |
  Gateway Portable Series* | 8.1% | 1.5% |
HP | | 4.5% | 72%
  OmniBook 4xxx* | 8.3% | 5.7% |
  Pavilion 6xxx | 22.7% | 3.1% |
  Vectra 5xx | 15.8% | 6.8% |
IBM | | 2.0% | 88%
  Aptiva | 16.0% | 2.3% |
  Thinkpad 7xxx* | 7.4% | 1.6% |
  IBM PC 3xx | 26.1% | 2.1% |
Packard–Bell NEC | | 3.0% | 81%
  Versa* | 11.1% | 1.6% |
  NEC Desk Series | 17.6% | 2.5% |

Notes: Percentage markups are defined as (price − marginal cost)/price. Full information is the traditional model in which consumers know all products; under limited information the choice set is estimated. A * indicates a laptop.
I estimated a benchmark BLP model49 (the baseline model), which allows me to examine the additional markup firms earn as a result of limited consumer information.
49 I include the micromoments in the BLP model to obtain as precise estimates of the parameters of the taste distribution as possible (see Petrin (2002)). Parameter estimates are given in Goeree (2008).
The estimates indicate median markups would be 5% under full information, one-third the magnitude of those under limited information. The bottom rows present markup comparisons broken down by top firms, with some representative products for each firm. The model of limited information suggests there is a larger markup gap between the top firms and the industry average relative to the prediction under full information. Not surprisingly, the firm with the largest percentage change in markups is IBM, the firm that currently spends the most on advertising.
The extent to which a firm can exercise market power depends on the elasticity of its products' demand curves. The greater the number of competitors, or the larger the cross-elasticity of demand with the products of other firms, the greater the elasticity of the firm's demand curve and the less its market power. A comparison of estimated product price elasticities for a sample of products is given in Table VII. The model of full information (bottom panel) presents an image of an industry that is quite competitive and indicates markups are similar across products sold by the top firms.50 In addition, demand is very sensitive to price changes, and cross-elasticities imply the products are somewhat substitutable. However, if we remove the full information assumption, the industry looks very different. Firms have much more market power, as evidenced by the elasticities given along the diagonal in the top panel. Also, cross-price elasticities indicate products are not as substitutable. This is intuitive: if consumers know of fewer products, then products effectively face fewer competitors, resulting in a less competitive industry.
The results suggest that (i) limited information about a product is a contributing factor to differences in purchase outcomes and (ii) information is distributed across households in a nonrandom way. Traditional full information models capture all differences in information through the additive unbounded i.i.d. term or the unobserved product characteristics term (ξ), both of which are independent across households. Information heterogeneity indirectly captured by the i.i.d. error is restricted such that each consumer–product pair has its own realization that is independent of consumer and product attributes (such as advertising) and of all other consumer–product pairs. This does not permit correlation in information across consumers, nor does it permit informational advantages to depend on consumer and product observables. Alternatively, information heterogeneity can be indirectly captured via unobserved product characteristics. In the model of limited information, a product with little advertising is unlikely to be in many consumers' choice sets and will have a low market share. In the BLP model, a small market share could be explained by a low value of ξj.51 Again, the unobserved term is independent of consumer attributes.
50 Bajari and Benkard (2005) estimated PC demand and found high implied demand elasticities (median own-price elasticity −100), consistent with those I obtained from the BLP full information model. I discuss the Bajari and Benkard model in the next section and compare their model to the limited information model.
51 I thank an anonymous referee for this point.
TABLE VII
MEDIAN PRODUCT PRICE ELASTICITIES UNDER LIMITED AND FULL INFORMATION

Median own-price elasticities:
Product | Under Limited Information | Under Full Information (BLP benchmark)
Performa | −8.119 | −28.648
PowerBook Duo* | −11.568 | −31.654
Contura* | −8.929 | −31.721
Presario 4xxx | −3.508 | −29.491
Latitude* | −8.344 | −29.547
Gateway Desk Series | −3.955 | −34.213
Gateway Portable Series* | −6.757 | −34.453
Pavilion 4xxx | −5.173 | −35.362
Vectra XU | −5.534 | −39.009
IBM PC 7xx | −3.687 | −20.780
Thinkpad 6xx* | −5.209 | −39.809
Packard–Bell Desk Series | −3.317 | −26.327

Notes: A * indicates a laptop. Each entry gives the percentage change in a product's market share with a 1% change in its own price and represents the median of the product elasticities over all quarters during which the PC was sold. The BLP benchmark is the BLP model with micromoments.
Not explicitly allowing for informational asymmetries is particularly restrictive in rapidly changing markets where consumers are likely to have limited information, and hence where heterogeneity in the distribution of information across consumers and products explains (perhaps a significant) part of the variation in sales across products. The results indicate that relying on an additive unbounded i.i.d. error term or on unobserved product characteristics to explain differences in information across consumer–product pairs can generate inconsistent estimates of product-specific demand curves that are biased toward being too elastic.
Consider as an example a market that consists of three products, each produced by a different firm. These products have identical characteristics, but the firms are each monopolists due to limited consumer information. That is, there are three groups of consumers, and each group knows only one product. In this world, each of the three firms would earn monopoly markups. Let us consider how the full and limited information models would address the data generated by such a world. First, assume the consumers are identical. In the data we would observe identical individuals purchasing different products with identical observed characteristics. The model would need to make the products differ somehow to match the data. Traditional models would rely on the i.i.d. term to explain the observed purchase patterns. (The model could not use different ξ to match the data, since all consumers would buy the product with the highest ξ.) Would this result change if consumers were heterogeneous; that is, could the model explain different purchases through different consumer tastes? No, because observed product characteristics are the same (i.e., a consumer with a large taste for CPU speed has to choose among three products with identical CPU speed that are also identical in every other observable respect). There are two points here: (i) the i.i.d. terms would allow the model to match the purchase patterns but would use random consumer–product variation to do so, and (ii) the estimated elasticities would be more elastic than the true elasticities. Estimated markups would be much lower than true markups.
The limited information framework could explain differences in choices among otherwise identical products through differences in consumer information across products. First, in the case of no advertising, the model permits information heterogeneity due to differences in consumer attributes. Second, household information heterogeneity could arise if firms advertise products using different media, where certain media are more effective at informing certain types of consumers. The limited information model allows consumers to be nonrandomly differentially informed, which may explain differences in purchase patterns observed in the data. There are two points here: (i) the limited information model matches purchase patterns using nonrandom information heterogeneity across consumers and products (relying less on the i.i.d. term), and (ii) the estimated elasticities would be more inelastic than the traditional elasticities.
In this example, the markups estimated from the limited information model would be higher relative to those obtained under traditional models.
Consider another example. The market again consists of three products with identical observed characteristics, but product 1 has a low market share relative to the others. Again, for the sake of illustration, assume product 1 is a high-quality product and the unequal distribution of market shares is due to limited information: few consumers know product 1. The BLP model can match the data in one of two ways: (i) through the i.i.d. error term or (ii) through unobserved product characteristics (ξ). Since mean utility is chosen to match market shares, the model will force product 1's mean utility to be lower through a low value of ξ. This has the implication that consumers are more sensitive to price changes for product 1, ceteris paribus, since their mean utility is lower. However, in truth product 1 is a high-quality product and hence should have a high ξ value. The limited information model would allow for the following conclusion: few consumers are informed about the existence of product 1 (perhaps due to low advertising for the product), implying it has a low market share. High quality implies a high value of ξ, resulting in higher utility for the consumers who know the product, ceteris paribus, and hence more inelastic demand among fewer consumers. The limited information model would predict that product 1 would have higher markups than those predicted by traditional models.
This is best illustrated by examining the differences in the value of the unobserved product characteristics terms when the parameters are estimated via the BLP full information model versus the limited information model. Apple's PowerBook G3 was introduced in November 1997 and was designed to use a high-speed "backside" cache, which could interact with the processor at much faster speeds than a standard L2 cache (which was restricted by the motherboard speed). At the time, the PowerBook G3 was considered the fastest notebook in the world. It received very favorable reviews for its speed, weight, size, design, and overall performance.52 The PowerBook G3 had a very small share of the market, both because Apple's market share was low during this period (around 6%) and because the PowerBook G3 was on the market for only 5 months. To match the low market shares, the BLP model generates a low value of ξ (relative to other products in the quarter). In contrast, the limited information model generates a low average value for φ but a significantly higher ξ value than in the BLP model. The anecdotal evidence seems to support the limited information results: few consumers knew the PowerBook G3, but among the informed subset mean utility was increased by buying it, ceteris paribus. This suggests Apple could earn a high markup on the PowerBook G3 from the subset of consumers who knew it.
52 apple-history.com; pcworld.com/article/id.11954/article.html; epinions.com; and consumerreports.com.
Indeed, the estimated limited information markups for the PowerBook G3 are on the order of 11%, while the BLP estimates suggest this is a product with low markups (around 1%). The results suggest that traditional models, which rule out nonrandom informational asymmetries across households and products a priori, yield inconsistent estimates of product-specific elasticities that are biased toward being too elastic.53
8. SENSITIVITY ANALYSIS
I examined the robustness of the limited information model by conducting goodness-of-fit tests. First, I tested whether all the moments were satisfied. The objective function is a Wald statistic distributed chi-squared with degrees of freedom equal to the number of moment restrictions less the number of parameters. This test is conditional on all assumptions of the model and tests the overidentifying moment restrictions together with all functional form and distributional assumptions. The test is stringent and generally rejects in large samples. It is not surprising, then, given the large sample size and stylized nature of the model, that the model is rejected by the data.
Second, I conducted goodness-of-fit tests focused on various aspects of the model. I partitioned the region in which the response variables (and in some cases covariates) lie into disjoint cells. I calculated a quadratic form based on the difference between the observed number of outcomes in each cell and the expected number (given the observed covariates). If the model is correct, the normalized quadratic form converges in distribution to a chi-squared random variable as the sample size increases.54 Formal tests were not able to reject the null that predicted values for market shares are the same as the observed values.55 I also constructed test statistics based on the average value of shares that fall into specified cells. Again, the test statistic is below the critical value at the 10% significance level: the null hypothesis is not rejected. Controlling for product attributes, the model does a good job of predicting average market shares across cells, although it tends to miss more among non-Pentiums.
Third, I compared the limited information model (hereafter LIM) to three alternatives. The first is the BLP model (with micromoments). The second is a full information model in which advertising affects the utility function directly; I refer to this as the uninformative model (hereafter UN).
53 See Goeree (2002, 2008) for details concerning why full information and limited information models will (most likely) result in different estimates of price elasticities of demand.
54 These tests are based on those presented in Andrews (1988). The predetermined number of cells is centered at the mean of the response variable with a width proportional to its standard deviation.
55 The test statistic is chi-squared with 7 degrees of freedom. The realized value (4.7) is below the 10% critical value (12). The model fits well but misses more among lower market share products.
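The cell-based tests in footnotes 54 and 55 can be sketched as follows. This is a rough, Pearson-style simplification rather than the covariance-normalized quadratic form of Andrews (1988), and all inputs in the example are synthetic:

```python
import numpy as np
from scipy.stats import chi2

def cell_gof_test(observed, predicted, n_cells=7, level=0.10):
    """Cells are centered at the mean of the response with width proportional
    to its standard deviation; the statistic is asymptotically chi-squared."""
    mu, sd = predicted.mean(), predicted.std()
    inner = mu + sd * np.linspace(-1.5, 1.5, n_cells - 1)       # interior cell edges
    edges = np.concatenate([[-np.inf], inner, [np.inf]])
    obs = np.histogram(observed, bins=edges)[0].astype(float)
    exp = np.maximum(np.histogram(predicted, bins=edges)[0].astype(float), 1e-8)
    stat = np.sum((obs - exp) ** 2 / exp)
    # Pearson-style degrees of freedom; the paper's statistic uses 7 df.
    return stat, chi2.ppf(1.0 - level, df=n_cells - 1)

rng = np.random.default_rng(1)
stat, crit = cell_gof_test(rng.normal(size=500), rng.normal(size=500))
reject = stat > crit   # fail to reject when the model fits the binned data
```

The realized statistic of 4.7 against a 10% critical value of 12 reported in footnote 55 corresponds to exactly this kind of comparison.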
The third is a modification of the BLP model proposed by Bajari and Benkard (2005; hereafter BB). They estimated PC demand and found high estimated own-price elasticities, and they (independently) attribute their unrealistically high estimates to the full information assumption. They estimated a modified BLP model limited to those products with large market shares, the intuition being that consumers are more likely to know these products since it is easier to obtain information about them.56
I would prefer to be able to test the relative fit of the models parametrically. Unfortunately, a formal test of nonnested hypotheses (Vuong (1989)) would require additional assumptions on the distribution of the errors. While the data suggest no natural assumptions for the error distributions, I present analysis that highlights the strengths and weaknesses of the fit of LIM relative to the other models. For instance, both LIM and UN predict a threshold level of average group ad expenditures (above which products will be advertised in groups and below which they will be advertised individually). We should never observe group (product-specific) expenditures below (above) this level. The LIM and UN models predict different threshold levels. These predictions are presented in the second panel of Table VIII. The LIM model misses about 3% of the time, while UN misses more than twice as often, 8%. Most of the misses for both models are among Apple products (2.4% for LIM and 8% for UN), while both models' predictions match the data for HP and Packard–Bell. In addition, both models miss more among TV advertisements (1.5% for LIM and 5.5% for UN). The fact that UN fits worse in this dimension is not surprising, since UN predicts a higher threshold level (so we expect to observe a larger percentage of group expenditures below the predicted threshold). It is surprising that LIM does no worse than UN regarding the proportion of product-specific expenditures above the predicted threshold. Both models miss less than 1% on average, with all the misses coming among Apple and Compaq products. This anecdotal evidence suggests, at the very least, that the LIM model fits no worse than the UN model.
Another dimension along which the models can be compared is the role of unobserved product attributes. In all models mean utility is chosen such that predicted shares match observed shares. While there is no explicit role for advertising in the BLP model or the BB modification, one can interpret the unobserved product heterogeneity terms (ξj) as containing product advertising.57 Using the parameter estimates from the respective models, I restricted ξj to zero and recalculated the predicted "pseudo" shares. These "pseudo" predicted shares are presented in the first panel of Table VIII. They provide insight into the importance of unobserved product attributes in each model, as well as indicating how well the model fits market shares based solely on observables and the form of the model.
56 The parameter estimates for the alternative models can be found in Goeree (2008).
57 In the LIM model, a product with little advertising is unlikely to be in many consumers' choice sets and will have a low market share. In the BLP and BB models, a small market share would be explained by a low value of ξj.
TABLE VIII
GOODNESS OF FIT

Prediction for Different Models
Response Variable | Observed | Limited Information | Full Information, Uninformative Advertising | Full Information, No Advertising (BLP) | Large Market Shares Only (Bajari–Benkard)

Average Annual Percent Unit Market Shares
Apple | 6.45% | 8.87% | 8.96% | 5.15% | 6.54%**
Compaq | 16.17% | 17.75%* | 17.98% | 19.74% | 22.16%
Gateway | 10.76% | 11.32%** | 10.99%** | 13.07% | 13.34%
HP | 6.53% | 6.86%** | 5.99%* | 1.98% | 7.85%
IBM | 7.60% | 8.51%* | 8.59% | 9.38% | 8.10%*
Packard–Bell | 22.61% | 20.37%* | 24.34%* | 27.41% | 27.00%

Mean Industry Elasticity | | −0.439% | −0.441% | −0.441% | −0.438%

Group and Product-Specific Advertising
Predicted threshold value ($ millions) | | 1.05 | 1.66 | NA | NA

Percent group expenditures below predicted threshold value
All products | | 2.7% | 8.2% | |
Apple | | 2.4% | 8.2% | |
Compaq | | 1.4% | 4.4% | |
Gateway | | 1.1% | 2.6% | |
HP | | 0.0% | 0.0% | |
IBM | | 1.1% | 3.8% | |
Packard–Bell | | 0.0% | 0.0% | |
Newspaper | | 0.0% | 0.8% | |
Magazine | | 0.1% | 0.9% | |
Television | | 1.5% | 5.5% | |
Radio | | 0.9% | 0.9% | |

Percent product-specific expenditures above predicted threshold value
All products | | 0.8% | 0.8% | |
Apple | | 0.9% | 0.9% | |
Compaq | | 0.9% | 0.9% | |
Gateway | | 0.0% | 0.0% | |
HP | | 0.0% | 0.0% | |
IBM | | 0.0% | 0.0% | |
Packard–Bell | | 0.0% | 0.0% | |
Newspaper | | 0.0% | 0.0% | |
Magazine | | 0.8% | 0.8% | |
Television | | 0.1% | 0.1% | |
Radio | | 0.8% | 0.8% | |

Notes: Predicted market shares are evaluated at parameter estimates with unobserved product attributes restricted to zero. ** indicates predicted values within 5% of the observed value; * indicates within 10% of the observed value. The predicted group advertising expenditure threshold is in millions of dollars. Advertising expenditures are computed using equation (1) evaluated at the optimal parameter values. Firm percentages are calculated as the percent of product/medium advertising by that firm. The BLP model includes micromoments. The Bajari–Benkard (BB) model includes only those products which sold more than 5000 units. NA denotes not applicable.
The BLP model's predicted pseudoshares do not come within 10% of the observed market shares for any of the top firms (second-to-last column). The BB modification (last column) fits the market shares of the top firms better than the BLP model: the Apple shares are within 5% of the observed shares and the IBM shares within 10%. This is not surprising, since BB restrict estimation to the larger firms. Both BLP and BB provide a worse fit than the models in which advertising plays an explicit role. Again, this is not a surprise, as the ξj play a larger role in the BLP and BB models relative to the advertising models. The LIM model fits the market shares better than the UN model. For Gateway and HP, the pseudo market shares are within 5% of observed shares, and for Compaq, IBM, and Packard–Bell, the pseudoshares are within 10%. The UN model comes within 5% of the observed market shares for Gateway and within 10% for HP and Packard–Bell. Neither model predicts Apple market shares within 10%. This is perhaps not so surprising, given that the firm for which the advertising predictions miss the most is Apple. These results suggest the model of limited information does a good job of predicting advertising and market shares in the PC industry relative to models in which consumers are assumed to be aware of all products.
While the alternative models present different pictures of product elasticities, they are consistent in their predictions of industry elasticities. For all models, I simulated a 1% increase in the price of all (inside) goods and calculated the percentage change in total home market share. Mean industry elasticities are given in the second panel. Industry demand is more inelastic, an intuitive result given the relative scarcity of products which are substitutable for PCs (particularly over this time frame).
Due to the difficulty of obtaining ad data for some industries, a comparison of BLP and BB may be useful. If LIM is believed to be the correct model, then the BB modification may be preferred in that it generates estimates of product demand curves that are less elastic (relative to BLP) and closer in magnitude to those of LIM.58 However, the ξj still play a large role in BB: only for Apple is the reliance on the ξj small enough to provide an adequate fit of market shares based on observables. To the extent that the role (or number) of smaller firms is an important dimension of industry competition, the BB modification will not be preferred to other models of full information.
Recall that LIM restricts attention to the top ten firms plus five other small firms. This sample selection could affect estimated margins in two ways. First, the smaller products not included in the sample are likely to have higher own-price elasticities and hence lower markups (relative to similar included products). Estimated markups for the included products will be higher the more smaller firms (or less-advertising-intensive firms) are excluded.
58 BB found estimated product-specific demand elasticities ranging from −4 to −72, with a median elasticity of −11 for their modified model.
This effect would be largest for the BB modification, which limits the sample to large firms. Indeed, BB found evidence of much higher markups (less elastic demand curves). This effect is less pronounced for LIM in that five of the firms are small. The other effect of limiting the sample has to do with the impact on the "outside" good. The fewer products are included among the "inside" goods (the larger is the outside good), the lower will be the estimated markups for the inside goods. Under full information, when a product is added to the sample, that product is a competitor with every other product. The overall impact on markups will depend on the substitution patterns among the inside goods and the size of the outside good. However, when a product is added to the LIM model, it may not be a competitor with every other product (some consumers may not know it exists). Hence, LIM markups will not be as sensitive to adding new firms to the included sample as will models of full information.
Modeling advertising as affecting a consumer's choice set requires significant computation time, since the choice sets must be simulated. To test whether the benefits of simulating choice sets are worth the costs of increased computation time, I performed a Monte Carlo experiment. Consider a market consisting of two products and one outside good. Denote the probability that consumers are aware of product j by φj. The limited information market share of product 1 is

s1 = φ1(1 − φ2) · D1/(1 + D1) + φ1φ2 · D1/(1 + D1 + D2),

where Dj = exp(δj) is the exponentiated mean utility from product j (the share of product 2 is defined analogously). A version of the market share that would not require simulating choice sets is

s1* = φ1D1/(1 + φ1D1 + φ2D2).
I calculated the values of sj and sj* for different values of φ and D. The resulting value of sj* was within 5% of the value of sj only 2% of the time. Notice also that the specification for sj* is not separately identifiable from a model in which advertising enters the utility function directly (or a model in which advertising is included in ξj): this obtains by defining φ* = ln(φ) and D̃ = exp(δ + φ*), so that φD = D̃. These results suggest that the more computationally demanding LIM model cannot be replaced easily by a simplified version. Second, advertising which influences consumers' choice sets has very different effects from advertising which shifts demand directly through utility. That is, the standard BLP model and models in which advertising is one of the observed product attributes are not observationally equivalent to the model presented in this research.59
59 A model that includes both effects of advertising, through the choice set and directly in utility, is, theoretically, separately identifiable. However, in practice, one would like identification to be driven by variation in the data. See Ackerberg (2001, 2003), who used microdata to estimate a model which allows for informative and uninformative effects of advertising.
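A comparison of this kind takes only a few lines to replicate. In the sketch below, the sampling ranges for φ and D are assumptions (the paper does not report its grid), so the exact fraction of draws within 5% will depend on them:

```python
import numpy as np

rng = np.random.default_rng(2)

def s1_exact(phi1, phi2, D1, D2):
    # Enumerate the consumer's possible choice sets (the limited information share).
    return phi1 * (1 - phi2) * D1 / (1 + D1) + phi1 * phi2 * D1 / (1 + D1 + D2)

def s1_shortcut(phi1, phi2, D1, D2):
    # Awareness-weighted logit that avoids simulating choice sets.
    return phi1 * D1 / (1 + phi1 * D1 + phi2 * D2)

n = 100_000
phi = rng.uniform(0.05, 0.95, size=(n, 2))          # assumed awareness range
D = np.exp(rng.normal(0.0, 1.0, size=(n, 2)))       # assumed mean-utility range
exact = s1_exact(phi[:, 0], phi[:, 1], D[:, 0], D[:, 1])
approx = s1_shortcut(phi[:, 0], phi[:, 1], D[:, 0], D[:, 1])
close = np.abs(approx - exact) / exact < 0.05
print(close.mean())   # fraction of draws where s1* is within 5% of s1
```

With J products the exact share requires summing over up to 2^(J−1) choice sets containing product 1, which is what makes the shortcut tempting and the full simulation costly.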
9. CONCLUSIONS
In markets characterized by rapid change, such as the PC industry, it is probable that consumers know only a subset of all available products. Models estimated under the assumption of full information present an image of the PC industry that is quite competitive. For example, a BLP full information model yields modest estimated median markups of 5%. When we remove the full information assumption, the industry looks very different. Indeed, estimated cross-price elasticities indicate products are not as substitutable as full information estimates suggest. I estimated a model of limited consumer information, where firms provide information through advertising. I found that estimated median markups in the PC industry are high: 19% over production costs in 1998, where the top firms engage in higher than average advertising and earn higher than average markups. The results suggest firms have significant market power due in part to limited consumer information.
The differences in estimated price elasticities (and implied markups) across the approaches reflect the inconsistency of the full information model, which does not allow consumers to be differentiated in terms of information. I extended the BLP framework to permit systematic (nonrandom) differences in information based on consumer observables. I tried to capture potential correlation in information across consumers using differences in information exposure across consumer types based on media exposure choices. The model allows for the possibility that the imperfect substitutability between different brands of consumer products is due to consumers having limited information about product offerings as well as to consumer-idiosyncratic brand preferences. I showed how to use additional data on media exposure to improve estimated price elasticities, à la BLP, in the absence of micro ad data.
The results suggest that (i) allowing for heterogeneity in consumers' choice sets yields more realistic estimates of substitution patterns between goods, (ii) assuming full information may result in incorrect conclusions regarding the intensity of industry competition, and (iii) firms benefit from limited consumer information. I find that exposure to advertising significantly impacts consumers' information sets, but that advertising has very different informative effects across individuals and across media. The estimates suggest that some firms are more effective at informing consumers through advertising. For some firms, advertising one product can have a negative effect on the market share of other products sold by that firm, but the effect is less negative than it is for most of the rivals' products. There are economies of scope in group advertising, and some firms find it worthwhile to engage in group advertising for some product lines to capitalize on the increasing returns.
Considering the implications of limited information is particularly important when addressing policy issues. In the PC industry, models estimated under the
assumption that consumers are aware of all products generate estimates of product-specific demand curves that are biased toward being too elastic. The results of this paper suggest that antitrust authorities may reach different conclusions regarding the welfare implications of mergers depending on their assumptions regarding consumer information.60
60 See Goeree (2002, 2003).
APPENDIX A: APPROXIMATIONS TO THE OPTIMAL INSTRUMENTS
To motivate the instruments discussed in Section 4, it is easiest to first consider a simpler context. The following text borrows heavily from the Appendix to BLP (1999). The full information linear model has the estimating equation
ln(sj ) − ln(s0 ) ≡ δj = xj β − αpj + ξj
The optimal instruments are E(x | z) and E(p | z), assuming ξ is i.i.d. Given that x is an element of z, E(x | z) = x. If the demand and cost unobservable have some known density which is independent of z, then (A.2) E(p | z) = p(x w θ ξ ω)f (ξ ω) dξ dω where p(x w θ ξ ω) is the equilibrium pricing function, which has as arguments the observed (x w) and unobserved (ξ ω). BLP suggested using a series of basis functions to form a semiparametric approximation to E(p | z). BLP (1999) suggested an approach that makes greater use of the functional form of equilibrium prices as implied by the model. I use this approach and outline it below. In the case of (A.1), they proposed to replace the expected equilibrium price in (A.2) with the equilibrium price at the expected value of the unobservables (i.e., at ξ = ω = 0). The instrument for price is then = p(x w p θ ξ ω)|ξ=ω=0 for some initial estimate of θ. Note that if the x characteristics imply, given θ and ξ = 0, that product j has close rivals, then the predicted markup for product j will be low and its predicted price will be close to predicted marginal . Otherwise, if a good is predicted to have no close rivals, the instrucost, wj η ment associated with price may be well above predicted marginal cost. As BLP noted, rivals’ characteristics have an effect on the calculated instrument that is motivated by the model. It is trivial to extend the simple model in (A.1) to one in which advertising enters linearly. The corresponding instrument for advertising would be θ ξ ω)|ξ=ω=0 a = a(x wad 60
See Goeree (2002, 2003).
1062
MICHELLE SOVINSKY GOEREE
for some initial estimate of θ. Firms’ advertising choices depend on their markup, their advertising elasticities of demand, and the cost of advertising in different media. If the product has low predicted markups (due to many close rivals), then marginal revenue from advertising will be lower, ceteris paribus, and our predicted advertising (in each media) will be lower as well.61 Otherwise, if a good is predicted to have no close rivals predicted advertising will be higher, ceteris paribus. Note also that the level of predicted advertising for j in media m depends on the predicted marginal cost of advertising in that media ad ψ). (wjm The estimator will be biased since the price (advertising) evaluated at the expected values are not the expected value of price (advertising). However, the approximation is consistent since it is a function of exogenous data and is constructed to be highly correlated with the relevant functions of prices (advertising). Applying the method to the nonlinear limited information model is more complex, but the instruments are still functions of the same exogenous data and are constructed in a way that makes use of the functional form of equilibrium prices and advertising implied by the model. The efficient set of instruments when we have only moment restrictions is ∂ξj (Θ0 ) ∂ωj (Θ0 ) E z T (zj ) ∂θ ∂θ where T (zj ) is the matrix that normalizes the error matrix (Chamberlain (1987)).62 BLP (1999) proposed to replace the expectation with the appropriate derivatives evaluated at the expectation of the unobservables. Below are the steps I take to construct such derivatives for the limited information model: (i) Construct initial instruments for prices ( pinitial ) and advertising.63 (ii) Use the initial instruments to obtain an initial estimate of the parame ters, Θ. ln( (iii) Construct estimates of δ, mc, and mcad . I used δ = xβ, mc) = w η, and ln( mcad ) = wad ψ. (iv) Solve the first-order conditions for equilibrium advertising, a, as a ad c, m c p initial x). function of (Θ δ m , (v) Solve the first-order conditions of the model for equilibrium prices, p c, as a function of (Θ δ m a, x). (vi) These solutions imply a value for predicted market shares, s, which is p a function of (Θ δ, a x). 61
Indeed, if products are identical, Bertrand competitors will find it optimal not to advertise. In the linear model of (A.1), ∂ξj /∂(β α) = (xj pj ). 63 I constructed a distance variable based on observables and used kernel estimates for prices and advertising as the initial instruments. 62
LIMITED INFORMATION AND ADVERTISING
1063
(vii) This implication gives the unobservables evaluated at the exogenous c x θ). Calpredictions: ξ(θ) = ξ( p a s δ x θ) and ω (θ) = ω ( p a s δ m culate the required disturbance–parameter pair derivatives. initial is replaced by (viii) Repeat steps (iv)–(vii), where each time the new p found from the previous round. the p (ix) Form approximations to the optimal instruments by taking the average of the exogenous derivatives found in step (vii). APPENDIX B: MEDIA EXPOSURE EXOGENEITY TESTS I used data from the Simmons survey to test whether media exposure is endogenous to the purchase decision. These data include information on whether the individual purchased a PC, demographic characteristics, and media exposure information. For ease of exposition, assume there is only one advertising medium.64 The value to i of purchasing a PC is yi∗ = z1i δ1 + αEim + ui which depends on exogenous control variables, z1i (potentially endogenous), media exposure variables Eim , and parameters. The explanatory variables included in z1i are measures of age, education, marital status, household size, gender, race, and income. We observe a purchase yi = 1(yi∗ > 0). The amount of exposure of i to medium m is ∗ = z1i δ21 + z2m δ22 + εim = Zim δ2 + εim Eim
The instruments, z2m , are variables that impact exposure to medium m but do not affect the probability you buy a PC, conditional on exposure. They consist of the price of media access and are discussed below. I assume (ui εim ) has a mean zero, bivariate normal distribution and is independent of Zim . Simmons reported the quintile into which the consumer falls with regard to media exposure. Defining quintile 1 as the highest, i belongs to the qth ∗ < c(q−1)m , where c are cutoff values. I conquintile in medium m if cqm < Eim structed a binary variable equal to 1 if i falls in one of the top two quintiles for medium m.65 Smith and Blundell (1986) and Rivers and Vuong (1988) developed a twostep test for the exogeneity of regressors in limited dependent variable models. Wooldridge (2002) showed that the test of exogeneity is valid without assuming normality or homoskedasticity of εim and can be applied broadly even if the 64
It is straightforward to extend the framework to allow for multiple endogenous variables. Probit results suggest falling into the lower three quintiles do not significantly impact PC purchase probabilities, conditional on observed covariates. However, I conducted the same tests for exposure to each quintile separately and the results do not change. 65
1064
MICHELLE SOVINSKY GOEREE
endogenous regressor is a binary variable. I present the results from the twostep regressions and the exogeneity tests after discussing the instruments. If individuals consult a magazine or newspaper prior to purchase, they may buy a single copy off the newstand. Hence, I gathered data on two measures of the price of access: single-copy price and the per-issue price based on an annual subscription. I collected access prices for over 140 magazines and over 100 newspapers in 1996 and 1997 from the Audit Bureau of Circulations. Newspaper prices are averaged within 12 geographic regions. Television viewers who fall into the highest quintiles may have access to more channels than those provided by a basic cable subscription (i.e., expanded basic cable). I use two measures of the cost of access to cable: the monthly fee for basic cable and the monthly fee for expanded basic cable. In some geographic regions, consumers can purchase pay service stations at an additional fee (HBO, Showtime, etc.). Since these pay service stations are typically commercial-free, I do not include the additional monthly fee associated with pay service access. The cable price data are from the Television and Cable Factbook, 1996 and 1997. I gathered data on access prices for over 250 cable carriers in 12 geographic regions. Cable prices are averaged within geographic region. Table B.I presents the results with single-copy magazine, single-copy newspaper, and the monthly fee for expanded basic cable as instruments for exposure to magazines, newspapers, and television, respectively. The results indicate high income married individuals are more likely to be in the top quintiles of newspaper and magazine readership, where readership of newspapers is increasing in household size. Not surprisingly, low income individuals are more likely to be in the top 40% of exposure to television, where these individuals are less likely to be in the top quintiles for newspaper or magazine readership. The first part of the lower panel reports the χ2 test statistic of the restriction that access prices have no impact on exposure falling in the top 40% for each media.66 As indicated by the last row, the instruments have power in all specifications. The second panel presents the results of a Wald test of the null hypothesis that exposure is exogenous for each media. In all specifications, the test statistic is not significant, indicating that I cannot reject the null that exposure to newspapers, magazines, and cable television is exogenous to the PC purchase decision. I conducted the exogeneity tests for exposure to each quintile separately and for the alternative access prices mentioned above. The results do not change. APPENDIX C: SIMULATION DETAILS A general outline for simulation follows. I omit the time subscript for clarity. First prepare random draws, which, once drawn, do not change throughout estimation. 66 The price coefficient for newspaper is positive. This could reflect the higher quality (and prices) of newspapers read by individuals in the top quintiles.
TABLE B.I TWO-STEP PROBIT EXOGENEITY TESTSa Dependent Variable: Top 40% of Exposure to Potentially Endogenous Regressor
agesq (age squared) edusp (education if <11) eduhs (= 1 highest edu 12 years) eduad (= 1 highest edu 1–3 college) edubs (= 1 highest edu college grad) married (= 1 married) hhsize (household size) inclow (= 1 income < $60,000) inchigh (= 1 income > $100,000) malewh (= 1 male and white)
−1386 (0143) 004∗∗ (0005) 0000∗∗ (0000) −0077∗∗ (0005) −0396∗∗ (0038) −0217∗∗ (0039) −0089∗∗ (0038) 0244∗∗ (0028) −0036∗∗ (0009) −0258∗∗ (0029) 0083∗∗ (0042) −0016 (0026)
Television ∗∗
−0821 (0158) 0002 (0005) 0000 (0000) 0035∗∗ (0005) 0286∗∗ (0036) 0272∗∗ (0038) 0146∗∗ (0037) 0074∗∗ (0027) 0012 (0009) 0118∗∗ (0028) −0034 (0041) −0004 (0025)
Magazines ∗∗
2462 (0799) 0014∗∗ (0005) 0000 (0000) −0023∗∗ (0006) −0089∗∗ (0043) −0002 (0046) 001 (0044) 0097∗∗ (0031) 0010 (0010) −0096∗∗ (0033) 0136∗∗ (0051) −0045 (0029)
Any Media ∗∗
3079 (0865) 0014∗∗ (0005) 0000 (0000) −0023∗∗ (0006) −0087∗∗ (0043) 0001 (0046) 0012 (0044) 0098∗∗ (0031) 0010 (0010) −0092∗∗ (0033) 0137∗∗ (0051) −0046 (0029)
Newspaper ∗∗
−1501 (0211) 0005 (0019) 0000∗∗ (0000) −0022 (0037) −0134 (0197) −002 (0119) 0016 (0065) 0029 (0121) 0069∗∗ (0020) −0094 (0134) 005 (0065) 0085∗∗ (0036)
Television ∗∗
−0988 (0427) 0019∗∗ (0007) 0000∗∗ (0000) −0031 (0020) −0116 (0168) 0052 (0159) 0067 (0096) 017∗∗ (0064) 0061∗∗ (0013) −0125 (0078) 0057 (0059) 008∗∗ (0037)
Magazines
−1474 (1066) 0018∗ (0010) 0000∗∗ (0000) −005∗∗ (0011) −0281∗∗ (0054) −0104∗∗ (0049) −002 (0047) 0116∗∗ (0053) 0057∗∗ (0011) −0193∗∗ (0043) 0079 (0065) 008∗∗ (0037)
Any Media
−0927 (2968) 0006 (0017) 0000∗∗ (0000) −0026 (0058) −0122 (0300) 0036 (0128) 004 (0089) 0041 (0206) 0082 (0104) −0141 (0456) 0078 (0257) 0043 (0333) (Continues)
1065
age
∗∗
LIMITED INFORMATION AND ADVERTISING
Constant
Newspapers
Purchased PC in Last 12 Months Top 40% of Exposure to
1066
TABLE B.I—Continued Dependent Variable: Top 40% of Exposure to Potentially Endogenous Regressor
Newspapers
0372∗∗ (0125)
Television
0014∗∗ (0005)
Magazines
−0710∗∗ (0264)
Any Media
Newspaper
Television
1032 (1322)
Top 40% television exposure
Test of Power of Instruments
655 0011
721 0007
0121 (1620)
0965 (1110) −0453 (1048) −0845 (6434)
−1267 (1218)
Top 40% magazine exposure
881 0003
Any Media
0394∗∗ (0150) −0007 (0006) −093∗∗ (0278)
Instrumented Media Exposure Top 40% newspaper exposure
Chi squared test statistic Prob > chi2
Magazines
Wald Test of Exogeneity of Media Exposure
385 0050
062 0430
114 0290
001 0930
190 0590
a Notes: Purchase data from Simmons 1996, 1997; sample size 13,400. Subscription prices are monthly; newspaper/magazine prices from Audit Bureau of Circulation; Cable prices from Television and Cable Factbook. Any media includes newspaper, magazine, and TV. Standard errors in parentheses. ∗∗ indicates significant at 5% and ∗ indicates significant at 10%.
MICHELLE SOVINSKY GOEREE
Instruments for Media Exposure Newspaper access price (price of single copy) TV cable access price (monthly fee expanded basic) Magazine access price (price of single copy)
Purchased PC in Last 12 Months Top 40% of Exposure to
LIMITED INFORMATION AND ADVERTISING
1067
1. In the case of the macro moments: (a) Draw i = 1 ns consumers from the joint distribution of characteristics and income given by the CPS, G(D y), and corresponding draws from multivariate normal distribution of unobservable consumer characteristics, G(ν), one for each product characteristic. Call these νik (where I drew a sample of 3000 for each year; ns = 9000). (b) Draw log normal variables, one for each medium combination. Call these κim (where m = 1 4). (c) Draw uniform random variables, one for each product–individual pair. Call these uij . 2. For the micromoments: (a) For each Simmons consumer i = 1 ncons, draw R times from multivariate normal distribution of unobservable consumer characteristics, G(ν), one for each product characteristic. Call these νikr (where ncons = 13400). (b) Draw R uniform random variables for each product–individual combination. Call these uijr . (c) Draw R log normal variables, one for each medium–individual combination, call these κimr . 3. Choose an initial value of the parameters θ0 . 4. For the macromoments, do for i = 1 ns: (a) Calculate φij (θ) for each product j = 1 J for each period φij (θ) = τij =
exp(τij ) 1 + exp(τij ) age D + id λ + ϑxj
d
+ Ψf
ϕm ajm +
m
ajm + ς
m
m
ρm a2jm
m s id jm
Υmd D a
d
+
ajm κim
m
(b) Given φij (θ) and uij , construct a J dimensional Bernoulli vector, bi (θ). This defines the choice set S , where the jth element is determined according to 1 if φij (θ) > uij , bij = 0 if φij (θ) ≤ uij . Define b0i to be the Bernoulli vector generated from the initial choice of parameters, θ0 . (c) Calculate Pij (θ) =
exp{δj + μij } y + k : b0 =1 exp{δk + μik } α i
ik
1068
MICHELLE SOVINSKY GOEREE
where μij is value of α ln(yi − pj ) + draw and θ. (d) Calculate sij (θ) =
φil
l∈S
k
xjk (σk νik +
d
Ωkd Did ) given the ith
Pij (θ) (1 − φik ) 0 φ i (θ0 ) k∈ /S
where φ0i (θ0 ) is the value of l∈S0 φil k∈/ S0 (1 − φik ) using the initial value of the parameters and the initial choice set. During estimation the parameter values will be updated, so the simulated product over the φij will differ from the initial φ0i (θ0 ) in all but the first simulation. 5. Calculate the simulator for the market share: 1 sj = sij ns i 6. For the micromoments, for each consumer, i = 1 ncons, calculate τij , age s D τij = + ϕm ajm + ρm a2jm id λd + ϑxj d
+ Ψf
m
ajm + ς
m
m
m
Υmd Dsid ajm
d
Do for r = 1 R draws: (a) Calculate φijr (θ) exp(τijr ) 1 + exp(τijr ) τijr = τij + ajm κimr φijr (θ) =
m
(b) Given φijr (θ) and uijr , construct a J dimensional Bernoulli vector, bir (θ). This defines the choice set Sr for the rth loop, where the jth element is determined according to 1 if φijr (θ) > uijr , bijr = 0 if φijr (θ) ≤ uijr . Define b0ir to be the Bernoulli vector generated from the initial choice of parameters, θ0 . (c) Calculate Pijr (θ) =
y + α i
exp{δj + μijr } k : b0 =1 exp{δk + μikr }
irk
LIMITED INFORMATION AND ADVERTISING
where μijr is value of α ln(yi − pj ) + draw and θ. (d) Calculate sijr (θ) =
φil
l∈Sr
k∈ / Sr
(1 − φik )
k
xjk (σk νikr +
d
1069
Ωkd Dsid ) given the rth
Pijr (θ) φ0ir (θ0 )
where φ0ir (θ0 ) is the value of l∈Sr φil k∈/ Sr (1 − φik ) using the initial choice set evaluated at the initial value of the parameters, b0ir . 7. Calculate the simulator for the choice probability: sij =
1 sijr R r
The firm choice probability (used in the micromoments) is if = sij B j∈Jf
APPENDIX D: PRELIMINARY REGRESSIONS This appendix contains preliminary probit and logit regressions.
1070
TABLE D.I PROBIT ESTIMATES OF PURCHASE PROBABILITIESa Dependent Variable: Purchased PC in Last 12 Months Explanatory Variable
Log likelihood Likelihood ratio test statistic Prob > test statistic
∗∗
−15549 00141∗∗ −00002∗∗ −00585∗∗ −03427∗∗ −01735∗∗ −01028∗∗ 01082∗∗ 00660∗∗ −01436∗∗ 01067∗∗ 00834∗∗ −00383 00482 00176 −00059 −01264∗∗ −00664∗∗ 00856 00116 −6479
Std. Error
Coefficient
Std. Error
Coefficient
(01399) (00058) (00001) (00075) (00503) (00466) (00398) (00307) (00093) (00305) (00406) (00283) (00325) (00306) (00308) (00334) (00627) (00314) (00549) (00264)
∗∗
−15133 00140∗∗ −00002∗∗ −00588∗∗ −03441∗∗ −01715∗∗ −01008∗∗ 01067∗∗ 00660∗∗ −01438∗∗ 01093∗∗ 00828∗∗ −00338 00497∗
(01376) (00058) (00001) (00075) (00502) (00465) (00398) (00306) (00093) (00303) (00405) (00283) (00321) (00304)
∗∗
−01240∗∗ −00657∗∗
(00626) (00314)
−6481 −47 04538
−14907 00132∗∗ −00002∗∗ −00609∗∗ −03579∗∗ −01838∗∗ −01023∗∗ 01036∗∗ 0063∗∗ −01586∗∗ 01042∗∗ 00927∗∗
Std. Error
(01383) (00058) (000006) (00074) (00500) (00463) (00396) (00304) (00092) (00301) (00403) (00282)
−6536 −1146 00000
a Note: These results use the complete Simmons data set; sample size 20,100. The first specification is the unrestricted model to which I compare the other specifications. ∗∗ indicates significant at the 5% level; ∗ indicates significant at the 10% level.
MICHELLE SOVINSKY GOEREE
Constant age age squared edusp (education if <11) eduhs (= 1 if highest edu 12 years) eduad (= 1 if highest edu 1–3 college) edubs (= 1 if highest edu college grad) married (= 1 if married) hh size (household size) inclow (= 1 if income < $60,000) inchigh (= 1 if income > $100,000) malewh (= 1 if male and white) mag 1 (= 1 if magazine quintile = 1) mag 2 (= 1 if magazine quintile = 2) np 1 (= 1 if newspaper quintile = 1) np 2 (= 1 if newspaper quintile = 2) TV 1 (= 1 if television quintile = 1) TV 2 (= 1 if television quintile = 2) radio 1 (= 1 if radio quintile = 1) radio 2 (= 1 if radio quintile = 2)
Coefficient
TABLE D.II
Specification 1 Variable
Std. Err.
Std. Err.
Specification 5
Specification 6
Coef.
Std. Err.
Coef.
Std. Err.
Coef.
Std. Err.
0023∗
(0012)
0160∗ 0324∗∗ 0515
0026∗ 1029∗∗
(0019) (0030)
0020∗ 1038∗∗
(0013) (0030)
(0103) (0051) (1119)
−37,843
Coef.
Specification 4
Std. Err.
−38,961
Coef.
Specification 3
Priceb 0017∗ (0009) 0010∗∗ (0005) 0016∗ (0010) Total advertising 1002∗∗ (0026) Newspaper advertising Magazine advertising Television advertising Constantb 6627∗ (6479) 2206∗ (1729) CPU speedb (MHz) −4628∗ (2542) Pentiumb −2230∗∗ (1041) Laptopb Inclusive value 0491∗∗ (0033) 0262∗∗ (0038) 0486∗∗ (0039) Consumer attributes Not included Not included Not included Log likelihood
Coef.
Specification 2
−37,348
6169 (1995) 5898 (1488) 4892 (6528) 4041 (5463) −5778 (6789) −5805 (7319) −3541∗ (3122) −3311∗ (2956) 0411∗∗ (0040) 0431∗∗ (0041) 0413∗∗ (0041) Not included Not included Included −38,144
−36,645
−36,574
a Notes: ∗∗ indicates t -stat > 2; ∗ indicates t -stat > 1. All specifications were estimated using Simmon’s microlevel firm choice data, Gartner product characteristics data, and
CMR advertising data. b The coefficients on the product characteristics are estimable only up to a scale factor (1 minus the inclusive value coefficient).
LIMITED INFORMATION AND ADVERTISING
PRELIMINARY NESTED LOGIT ESTIMATESa
1071
1072
MICHELLE SOVINSKY GOEREE
TABLE D.III LOGIT IV RESULTSa
Variable
Price CPU Speed Pentium Laptop
OLS (i)
−005∗ (004) 016∗∗ (008) −036∗∗ (009) −127∗∗ (009)
Acer Apple Compaq Dell Gateway HP IBM Micron Pbell Constant First Stage Adjusted R2 F-statistic Prob > F
−1264∗∗ (089) 013
Instruments BLP 95 series approximation BLP 99 direct approximation
OLS (ii)
−004∗ (004) 017∗∗ (008) 004 (014) −125∗∗ (009) 164∗∗ (016) 181∗∗ (020) 177∗∗ (016) 072∗∗ (016) 198∗∗ (017) −013 (017) 101∗∗ (016) 055∗∗ (017) 224∗∗ (017) −1383∗∗ (089) 030
IV (iii)
−071∗∗ (013) 007 (008) −068∗∗ (011) 199∗∗ (016)
IV (iv)
IV (v)
−1283∗∗ (090)
−107∗∗ (017) 006 (008) −085∗∗ (019) 235∗∗ (020) 165∗∗ (017) 062∗∗ (029) 172∗∗ (017) 043∗∗ (017) 143∗∗ (021) −026∗ (018) 064∗∗ (019) −071∗∗ (028) 218∗∗ (019) −1399∗∗ (090)
−1333∗∗ (132)
−246∗∗ (041) 007∗ (011) 204∗∗ (038) 383∗∗ (047) 167∗∗ (020) 098∗ (056) 166∗∗ (021) 006 (025) 069∗∗ (033) −043∗∗ (022) 013 (028) −240∗∗ (056) 210∗∗ (023) −1420∗∗ (121)
042 12647 000
053 1127 000
033 21069 000
049 14376 000
X
−241∗∗ (051) 017∗ (014) 151∗∗ (029) 382∗∗ (058)
IV (vi)
X X
X
a Notes: The dependent variable is ln(s ) − ln(s ) based on 2112 observations. All regressions include time dumj 0 mies. Asymptotically robust standard errors are in parentheses. ∗∗ indicates t -stat > 2; ∗ indicates t -stat > 1. BLP series approximation IV are the sum of the values of the same characteristics of other products offered by the same firm, the sum of the values of the same characteristics of all products offered by rival firms, the number of ownfirm products, and the number of rival firm products. The more direct approximation IV are based on BLP 1999. These are used in the full model and described in the paper.
LIMITED INFORMATION AND ADVERTISING
1073
REFERENCES ACKERBERG, D. (2001): “Empirically Distinguishing Informative and Prestige Effects of Advertising,” RAND Journal of Economics, 32, 100–118. [1060] (2003): “Advertising, Learning, and Consumer Choice in Experience Goods Markets: A Structural Empirical Examination,” International Economic Review, 44, 1007–1040. [1018, 1060] ANAND, B., AND R. SHACHAR (2004): “Advertising, the Matchmaker,” Working Paper 02-057, Harvard Business School. [1018,1031] ANDERSON, S., A. DE PALMA, AND J. F. THISSE (1989): “Demand for Differentiated Products, Discrete Choice Models, and the Characteristics Approach,” Review of Economic Studies, 56, 21–35. [1027] ANDREWS, D. (1988): “Chi-Square Diagnostic Tests for Econometric Models,” Journal of Econometrics, 37, 135–156. [1055] BAJARI, P., AND C. L. BENKARD (2005): “Demand Estimation With Heterogenous Consumers and Unobserved Product Characteristics: A Hedonic Approach,” Journal of Political Economy, 113, 1239–1276. [1051,1056] BERRY, S. (1994): “Estimating Discrete Choice Models of Product Differentiation,” Rand Journal of Economics, 25, 242–262. [1030,1041] BERRY, S., J. LEVINSOHN, AND A. PAKES (1995): “Automobile Prices in Market Equilibrium,” Econometrica, 63, 841–890. [1017,1033,1041] (1999): “Voluntary Export Restraints on Automobiles: Evaluating a Trade Policy,” American Economic Review, 89, 400–430. [1031,1041,1061,1062] (2004): “Differentiated Products Demand Systems From a Combination of Micro and Macro Data: The New Car Market,” Journal of Political Economy, 112, 68–105. [1033] BRESNAHAN, T. (1989): “Empirical Studies of Industries With Market Power,” in Handbook of Industrial Organization, ed. by R. Schmalensee and R. Willig. New York: Elsevier, Chapter 17. [1034] CHAMBERLAIN, G. (1987): “Asymptotic Efficiency in Estimation With Conditional Moment Restrictions,” Journal of Econometrics, 34, 305–344. [1030,1031,1062] CHIANG, J., S. CHIB, AND C. NARASIMHAN (1999): “Markov Chain Monte Carlo and Models of Consideration Set and Parameter Heterogeneity,” Journal of Econometrics, 89, 223–248. [1038] CHING, A., T. ERDEM, AND M. KEANE (2008): “The Price Consideration Model of Brand Choice,” Journal of Applied Econometrics (forthcoming). [1038] ERDEM, T., AND M. KEANE (1996): “Decision-Making Under Uncertainty: Capturing Dynamic Brand Choice Processes in Turbulent Consumer Goods Markets,” Marketing Science, 15, 1–20. [1018] GEWEKE, J. (1988): “Antithetic Acceleration of Monte Carlo Integration in Bayesian Inference,” Journal of Econometrics, 38, 73–89. [1039] GOEREE, M. S. (2002): “Informative Advertising and the US Personal Computer Market: A Structural Empirical Examination,” Ph.D. Dissertation, University of Virginia. [1055,1061] (2003): “Was Mr. Hewlett Right? Mergers, Advertising, and the PC Industry,” Unpublished Manuscript, Claremont McKenna College. [1061] (2008): “Supplement to ‘Limited Information and Advertising in the U.S. Personal Computer Industry,” Econometrica Supplementary Material, 76, http://www.econometricsociety. org/ecta/Supmat/4158_data.pdf; http://www.econometricsociety.org/ecta/Supmat/4158_ miscellaneous.pdf; http://www.econometricsociety.org/ecta/Supmat/4158_data and programs. zip. [1034,1050,1055,1056] GOURIEROUX, C., A. MONFORT, E. RENAULT, AND A. TROGNON (1987): “Generalized Residuals,” Journal of Econometrics, 34, 5–32. [1034] GROSSMAN, G., AND C. SHAPIRO (1984): “Informative Advertising With Differentiated Products,” Review of Economic Studies, 51, 63–82. [1027] HENDEL, I. 
(1999): “Estimating Multiple-Discrete Choice Models: An Application to Computerization Returns,” Review of Economic Studies, 66, 423–446. [1027,1029]
1074
MICHELLE SOVINSKY GOEREE
LESLIE, P. (2004): “Price Discrimination in Broadway Theater,” Rand Journal of Economics, 35, 520–541. [1025] MEHTA, N., S. RAJIV, AND K. SRINIVASAN (2003): “Price Uncertainty and Consumer Search: A Structural Model of Consideration Set Formation,” Marketing Science, 22, 58–84. [1038] MILGROM, P., AND J. ROBERTS (1986): “Price and Advertising Signals of Product Quality,” Journal of Political Economy, 94, 796–821. [1030] NEVO, A. (2000): “A Practitioners Guide to Estimation of Random Coefficients Logit Models of Demand,” Journal of Economics and Management Strategy, 9, 513–548. [1033,1037,1041] NIEROP, E., R. PAAP, B. BRONNENBERG, P. FRANSES, AND M. WEDEL (2005): “Retrieving Unobserved Consideration Sets From Household Panel Data,” Anderson Working Paper, UCLA. [1038] PAKES, A., AND D. POLLARD (1989): “Simulation and the Asymptotics of Optimization Estimators,” Econometrica, 57, 1027–1057. [1039] PETRIN, A. (2002): “Quantifying the Benefits of New Products: The Case of the Minivan,” Journal of Political Economy, 110, 705–729. [1018,1033,1035,1050] RIVERS, D., AND Q. VUONG (1988): “Limited Information Estimators and Exogeneity Tests for Simultaneous Probit Models,” Journal of Econometrics, 39, 347–366. [1031,1063] SHUM, M. (2004): “Does Advertising Overcome Brand Loyalty? Evidence From the Breakfast Cereals Market,” Journal of Economics and Management Strategy, 13, 241–272. [1018] SMITH, R. J., AND R. BLUNDELL (1986): “An Exogeneity Test for a Simultaneous Equation Tobit Model With an Application to Labor Supply,” Econometrica, 54, 679–685. [1031,1063] STERN, S. (1997): “Simulation-Based Estimation,” Journal of Economic Literature, 35, 2006–2039. [1039] (2000): “Simulation Based Inference in Econometrics: Motivation and Methods,” in Simulation-Based Inference in Econometrics: Methods and Applications, ed. by R. Mariano, M. J. Weeks, and T. Schuermann. Cambridge, U.K.: Cambridge University Press, 9–37. [1039] VUONG, Q. H. (1989): “Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses,” Econometrica, 57, 307–333. [1056] WOOLDRIDGE, J. (2002): Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: MIT Press. [1031,1063]
Dept. of Economics, University of Southern California, Los Angeles, CA 90089, U.S.A. and Claremont McKenna College, Claremont, CA 91711, U.S.A.;
[email protected]. Manuscript received July, 2002; final revision received January, 2008.
Econometrica, Vol. 76, No. 5 (September, 2008), 1075–1101
INFORMATION AND EFFICIENCY IN TENDER OFFERS BY ROBERT MARQUEZ AND BILGE YILMAZ1 We analyze tender offers where privately informed shareholders are uncertain about the raider’s ability to improve firm value. The raider suffers a “lemons problem” in that, for any price offered, only shareholders who are relatively pessimistic about the value of the firm tender their shares. Consequently, the raider finds it too costly to induce shareholders to tender when their information is positive. In the limit as the number of shareholders gets arbitrarily large, when private benefits are relatively low, the tender offer is unsuccessful if the takeover has the potential to create value. The takeover market is therefore inefficient. In contrast, when private benefits of control are high, the tender offer allocates the firm to any value-increasing raider, but may also allow inefficient takeovers to occur. Unlike the case where all information is symmetric, shareholders cannot always extract the entire surplus from the acquisition. KEYWORDS: Tender offers, shareholder information, efficiency.
1. INTRODUCTION THIS PAPER EXPLORES THE ROLE of shareholder information in takeover contests. The analysis of tender offers has been much studied in the literature, with particular emphasis on the problems associated with the free-rider problem in takeover bidding.2 Grossman and Hart (1980) established that costly takeovers may not be feasible for widely held firms since infinitesimal shareholders have an incentive to free-ride on other shareholders’ tendering decisions. Bagnoli and Lipman (1988) studied the equilibrium behavior for finitely many shareholders and then allowed the number of shareholders to get arbitrarily large.3 In a symmetric information setting, where both shareholders and the raider know the true post-takeover value of the firm, they showed that expected profits converge to zero for all possible prices when the raider has no private benefits but can increase firm value. Hence, payoff considerations for the raider get small as the number of shareholders increases. Starting from this symmetric information benchmark, we add an element of private information to the model by assuming that shareholders are privately informed about the post-takeover value of the firm. Specifically, with some 1 An earlier version of this paper was titled “The Aggregation of Information in Tender Offers.” We would like to thank Faruk Gül, Larry Samuelson (the editor), and three anonymous referees for useful suggestions, as well as seminar participants at the Wharton School, Washington University, Texas A&M University, and the 2006 Western Finance Association Meetings. This work was performed while Yılmaz was at the Wharton School, University of Pennsylvania. The usual disclaimers apply. 2 There is also a literature on competitive takeover bidding that largely abstracts from the freerider problem. See, for instance, Fishman (1988). 3 The finite shareholder case has also been analyzed by Holmstrom and Nalebuff (1992). Cornelli and Li (2002) analyzed a setting with finitely many risk arbitrageurs who participate in the tendering game.
© 2008 The Econometric Society
DOI: 10.3982/ECTA6178
1076
R. MARQUEZ AND B. YILMAZ
probability, the state is “bad” so that the value-added of the takeover is zero or negative, and with complementary probability, the state is “good” and the value-added is positive. Each shareholder observes a private noisy signal about the value created by the takeover. The raider submits an unconditional offer for the equity shares of the firm by specifying a price per share, and shareholders either accept the offer and tender their shares or reject the offer and keep their shares. We first characterize the tendering decisions of privately informed shareholders. We show that the free-rider problem associated with dispersed ownership gives rise to a “lemons problem” in that for any price offered, only those shareholders whose signals are suggestive of a relatively low post-acquisition firm value will be willing to tender. This introduces an important difference relative to the baseline model where information is symmetric since it reduces the likelihood that value-increasing takeovers will be successful: shareholders are more likely to retain their shares in the good state, but will unload them in the bad state. Hence, not only does the raider not get any surplus when the takeover adds value, he is left paying a premium for the shares when in the bad state. Moreover, this negative impact on the raider’s profit persists even as the number of shareholders increases since an increase in the number of shareholders does not help eliminate the lemons problem. We next extend the model to allow the raider to have some alternative motive for acquiring the firm, such as a private control benefit.4 The existence of a private benefit changes the implications for bidding behavior by the raider. Now, we find that, even in instances when the raider might fail to increase firm value, a positive private benefit creates an incentive for the raider to bid. Two distinct results arise as a function of the size of the private control benefit. For low values of the private benefit, the raider recognizes that with a relatively low price the offer will succeed only if the value of the shares after the takeover is expected to be low since many shareholders have observed a low signal. But then it is optimal for the raider to offer a price only slightly higher than the value of the shares, as the loss to the raider can be made arbitrarily small by reducing the price. As a consequence, a small benefit is enough to compensate the raider for the infinitesimally small loss, although it leads him to bid in such a way that the takeover is successful if and only if it adds no value. Therefore, the lemons problem that arises from shareholders’ freeriding incentives prevents the firm from being allocated in an efficient manner. Moreover, a further inefficiency arises when we allow for the possibility 4 One source can be “empire-building” incentives. A private benefit can also be obtained through oligopolistic competition, where the drive to expropriate a competing firm may produce an incentive to purchase a target firm. Since some of this expropriation accrues only to the acquirer, shareholders of the target firm that hold out are unable to reap all of the benefits associated with the takeover. Grossman and Hart (1980) discussed the relevance of private benefits of control and its impact on the market for corporate control.
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1077
that the raider may destroy value in the bad state, since then value-destroying takeovers will be more likely to be successful than those that increase value. When the private benefit of control is high, however, the raider finds it optimal to bid a higher price and induce tendering more often. Here, we find that the limiting price converges to the lowest price necessary to guarantee that the offer always succeeds. Assuming there is no value destruction upon takeover, we now have allocative efficiency, since the tender offer is always successful. Moreover, although the price converges to a value greater than the value of the shares being purchased, this price is strictly lower than the post-takeover value in the good state. While shareholders benefit from their information by allowing them to extract rents ex ante, uncertainty about firm value prevents shareholders from capturing the entire surplus generated by a value-increasing raider. As a final point, we note that we have described a rather stark dichotomy between the low and high private benefit cases in terms of the raider’s optimal bidding strategy. This is an important consequence of the limiting behavior of the probability of having a successful takeover as the number of shareholders increases, since when these get sufficiently large, this probability essentially becomes a deterministic function of the offered price. These results, for both the low and the high private benefit cases, are unlike the symmetric information setting analyzed in Bagnoli and Lipman (1988), where adding any private benefit of control leads all takeover bids to be successful at prices that fully reflect the post-takeover value and allocate all surplus to shareholders. Private information therefore affects not only the efficiency of the takeover process, but also the allocation of surplus between the raider and the shareholders. The key innovation in our work is that we allow for shareholders to be privately informed about the firm’s value. Indeed, there is a broad literature on how corporate insiders may learn about the firm’s fundamentals from outsiders (see, e.g., Holmstrom and Tirole (1993)). By analogy, target shareholders may well have information unavailable to a potential raider. For instance, while a raider may have a good estimate of the operating strategy to be developed as a result of the takeover, he may have little knowledge of how well the target firm’s corporate culture is likely to fit with that of the raider. The effect of this private shareholder information is to bias acquisitions toward those likely to create the least, if any, value, and may in fact prevent takeover bids from being made at all. Moreover, as the analysis below makes clear, while private information held by shareholders clearly drives our results, whether information is dispersed (i.e., the distribution of information across shareholders) is not important for the qualitative nature of the results.5 Indeed, similar results 5 In a related paper, Marquez and Yılmaz (2006) analyzed a similar setting in which the raider also has private information and showed that there exists a pooling equilibrium in which the tender offer does not reveal the raider’s information. Moreover, this pooling equilibrium is the only “robust” equilibrium.
1078
R. MARQUEZ AND B. YILMAZ
obtain even if all shareholders share the same information and if this information is perfect. The noisy signals case we study allows us to derive implications concerning the distribution of rents between the raider and shareholders as a function of shareholders’ private information. The rest of the paper is organized as follows. Section 2 lays out the basic framework. Section 3 presents a benchmark with symmetric information. Section 4 contains the main analysis of the role of private shareholder information. The case where the raider can destroy value upon acquiring the firm is studied in Section 5. Section 6 concludes. All omitted proofs are in the Appendix. 2. THE MODEL 2.1. Preferences There is a firm with n shareholders, each of whom owns a single share of the firm. In addition, there is a raider who wishes to purchase and run the firm, and an incumbent who currently manages the firm. Let ω ∈ Ω = {0 1} be the true state of the world. Given the true state of the world, each shareholder prefers the manager who is capable of producing higher cash flows. For simplicity, we normalize the per-share value to 0 under the incumbent management. The raider, if successful in taking over the firm, is expected to generate a firm value of Vω , with per-share value of vω = Vω /n. We assume that V1 > V0 = 0. The true state of the world is unknown to either the raider or the shareholders. Let λ ∈ (0 1) stand for the probability that ω = 1 and assume that λ is common knowledge. While the raider may add value by managing the firm, we also assume that he may like to take over partly due to a private benefit, B ≥ 0, which he receives if he acquires control, for which he needs to acquire at least half of the outstanding shares.6 We assume that everyone is risk-neutral. 2.2. Information Conditional on the true state of the world, each shareholder i receives a signal si ∈ [0 1] independently drawn from an identical distribution. Let f (s) be the density function for the probability of receiving signal s if ω = 1 is the true state of the world. Therefore, F(·) stands for the cumulative probability function of s if ω = 1. Similarly, we use g(s) as the density function for the probability of receiving signal s if ω = 0 is the true state of the world, with G(·) being the cumulative probability function in this state of the world. We 6 Allowing for supermajority rules does not qualitatively change our results, as long as the rule does not require unanimity (see Bond and Eraslan (2007), for a discussion of the role of unanimity rules in a bargaining setting). The only effect of introducing a supermajority rule is that greater supermajority rules require higher offer prices to maintain the same probability of a successful takeover. Such a higher offer price, however, also decreases the likelihood of a value-increasing takeover, since it increases the minimum size of the private benefit that is needed.
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1079
(·) is strictly increasing in assume that f (·) and g(·) are continuous, and that fg(·) the interval [0 1] (monotone likelihood ratio (MLR) condition). Furthermore, f (0) (1) = 0 and fg(1) = ∞, so that extreme signals are highly informative about the g(0) underlying state of the world. Each shareholder updates his belief given his signal s and we let β(ω|s) stand for this posterior belief.
2.3. Tender Offers The raider makes a tender offer at price per share p ∈ [0 ∞), agreeing to buy any and all shares that are tendered (an unconditional offer).7 Note that offering a price above v1 is suboptimal, so that without loss of generality we can restrict attention to p ∈ [0 v1 ]. Following a tender offer, each shareholder receives his private signal and then all shareholders decide simultaneously whether to tender or to reject the offer. Let σi : [0 1] → {tender keep} denote a (pure) strategy for shareholder i in a tendering subgame. We will focus on the symmetric Nash equilibria of this game. 3. THE SYMMETRIC INFORMATION BENCHMARK We begin our analysis by focusing on the benchmark case where there is symmetric information, so that all information is public and each participant—the raider and all shareholders—observes every signal that is available. Formally, this means that the raider’s as well as shareholder i’s information set comprises Sn = {sj }nj=1 . Beliefs are formed by Bayesian updating and, with a slight abuse of notation, we let β(ω|Sn ) stand for this posterior belief, which must be the same for everyone since there is no private information. One immediate implication is that, for ω = 1, p lim β(1|Sn ) = 1, and similarly, for ω = 0, p lim β(1|Sn ) = 0, since as n increases the probability that observing all the signals does not perfectly reveal the underlying state becomes vanishingly small. We can now analyze shareholders’ tendering decisions as well as the raider’s bidding behavior. This setting is analogous to the perfect information case studied by Bagnoli and Lipman (1988), where the true state ω is common knowledge, so that the extent of the raider’s value improvement is known by all. Bagnoli and Lipman focused primarily on symmetric equilibria and showed that the raider’s expected profit converges to zero when he has no private benefits (B = 0) but can increase firm value.8 Moreover, implicit in their analysis 7 Conditional offers in which the raider offers to purchase any tendered shares only if they are enough for him to gain control are strictly dominated by unconditional offers in this setting. See Marquez and Yılmaz (2007a) for an analysis of conditional versus unconditional offers. 8 They also show that asymmetric equilibria may exist where exactly the number of shares necessary to effect a takeover are tendered, and the takeover occurs with probability 1. In these equilibria, each shareholder is pivotal, allowing the raider to earn positive profits. However, the return to each shareholder is very different depending on whether he tenders or not in these equilibria despite these shareholders being ex ante symmetric.
1080
R. MARQUEZ AND B. YILMAZ
is the finding that the takeover bid of a value-increasing raider succeeds with a probability that is bounded away from zero as the number of shareholders increases. Therefore, under symmetric information, efficient takeovers occur with a strictly positive probability even as the raider’s profit becomes vanishingly small. The results of Bagnoli and Lipman carry over to our setting if we simply replace the known value of the firm under the raider with the expected post-takeover value given observation of the signals, E[V |Sn ]. To see this more formally, we follow Bagnoli and Lipman (1988) by noting that if tendering is to occur in any symmetric equilibrium, it must be that for any n > 2 and p ∈ (0 E[v|Sn ]) each shareholder plays a mixed strategy in the unique symmetric equilibrium of the tendering subgame. Let φ stand for the probability of a given shareholder tendering. Then the expected cost of buying all of the tendered shares is nφp given price p. Thus, the expected profit for the raider from the shares bought is E[v|Sn ]
n n
i
i=n/2
φi (1 − φ)n−i i − nφp
Note that the first term is just the expected value of the shares purchased, conditional on the takeover being successful. φ is determined by the indifference condition for shareholders between tendering and not tendering which, for any price p, is given by (1)
p = E[v|Sn ]
n−1 n−1 i=n/2
i
φi (1 − φ)n−1−i
The sum on the right-hand side corresponds to the probability that at least n2 of the remaining n − 1 shareholders tender, which corresponds to the probability that a shareholder attaches to the success of the tender offer conditional on knowing that he has not tendered. Substituting for p, the expected profit due to the shares bought is n n−1 n n − 1 (2) E[v|Sn ] φi (1 − φ)n−i i − nφ φi (1 − φ)n−1−i i i i=n/2
i=n/2
We show in the Appendix that this profit expression converges to zero for any price p ∈ (0 E[v|Sn ]) (see the proof of Proposition 1).9 There are two effects at work here that drive this result. When the takeover is successful, which occurs with positive probability, the raider offers a price lower than the full posttakeover value and captures rents. However, these rents are offset by the losses 9
If n is an odd number, then the summations start with i =
n+1 2
instead of i = n2 .
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1081
the raider incurs when an insufficient number of shares are tendered, given that the price offered must be higher than the value of the firm under current management. As n increases, these two effects exactly offset each other and the raider becomes approximately indifferent between all possible prices. While Bagnoli and Lipman (1988) did not consider the case where the raider has a private benefit, it is straightforward to see how their analysis changes when such a benefit is added. Since, in the absence of a private benefit, expected profits for the raider are zero in the limit anyway, adding a positive private benefit (B > 0) leads the raider to maximize the probability that his offer will be successful. This is achieved by always bidding the entire post-takeover value of the firm so that shareholders capture all the surplus. In other words, adding even an arbitrarily small private benefit to the symmetric information case leads to efficient takeovers: offers are always successful and at prices equal to the post-takeover value. We can now summarize the results for the symmetric information case. Since the per-share value vω = Vω /n always converges to zero as n increases, we focus instead on P = np, which denotes the total dollar value (i.e., the price) the raider offers for the entire equity stake of the firm. PROPOSITION 1: (i) For B = 0, the raider’s profit converges to 0 as n → ∞. The probability of a successful takeover, however, remains strictly between 0 and 1. (ii) For any B > 0, limn→∞ P = limn→∞ E[V |Sn ]. Furthermore, the probability of a successful takeover attempt converges to 1 for all states. It is worth noting that, since shareholders extract the entire value of the firm for B > 0, the expected value gain that shareholders receive converges to V1 when the state is ω = 1, which occurs with ex ante probability λ. When the state is ω = 0, the price converges to V0 = 0, so that shareholders obtain no increase in their wealth with probability 1 − λ. Letting W represent the ex ante wealth (or value) gain for shareholders as a result of a tender offer, we can now conclude that W = λV1 > 0, and note that W just corresponds to the ex ante value of the firm V¯ = E[V ] under the raider. 4. ASYMMETRIC INFORMATION AND TENDER OFFERS We next consider the case where each shareholder’s signal is private, so that there is asymmetric information across shareholders. We first characterize shareholders’ tendering strategies given a tender offer price p as a function of their private information. We then calculate the raider’s equilibrium profit given his optimal choice of price. 4.1. Shareholders’ Tendering Strategies The first result establishes a threshold structure in shareholders’ tendering strategies.
1082
R. MARQUEZ AND B. YILMAZ
PROPOSITION 2: For any offer price p, in the unique equilibrium of the tendering subgame there exists a cutoff signal s∗ such that σi∗ (s) = tender for all s < s∗ and σi∗ (s) = keep for all s > s∗ . PROOF: Let q(ω σ ∗ m) be the probability of having at least m shares tendered in the conjectured equilibrium σ ∗ given state ω. The expected value of a nontendered share is n vω β(ω|s)q ω σ ∗ 2 ω∈Ω On the other hand, the expected value of tendering is just the price offered p. The difference, D(s σ ∗ ), between not tendering and tendering then becomes n vω − p D(s σ ∗ ) = (3) β(ω|s)q ω σ ∗ 2 ω∈Ω n v1 − p = β(1|s)q 1 σ ∗ 2 λf (s) where β(1|s) = λf (s)+(1−λ)g(s) . Note that β(1|s) and therefore D(s σ ∗ ) are strictly increasing in s for all s ∈ (0 1) due to MLR. Furthermore, for s = 0, D(0 σ ∗ ) = −p < 0, while for s = 1, D(1 σ ∗ ) = q(1 σ ∗ n2 )v1 − p. Note that if there does not exist an s∗ < 1 such that D(1 σ ∗ ) ≥ 0, then it must be that D(s σ ∗ ) < 0 for all s. This implies that q(1 σ ∗ n2 ) must be equal to 1, implying that D(1 σ ∗ ) = v1 − p > 0 for all p ∈ (0 v1 ), thus contradicting the assumption that D(1 σ ∗ ) < 0. Therefore, for any p ∈ (0 v1 ) there exists an s∗ such that D(s σ ∗ ) = 0, verifying that σ ∗ is an equilibrium. Uniqueness of the symmetric Nash equilibrium follows immediately given that β(1|s) and q(ω σ ∗ n2 ) are increasing in s and s∗ , respectively. Q.E.D.
This proposition states that, for any price offered by the raider, only shareholders whose signals suggest a relatively low post-takeover value will be willing to tender. Shareholders with signals suggesting a high post-takeover value will prefer to keep their shares and benefit from the price appreciation following an acquisition. In essence, for given strategies of the other shareholders, any individual shareholder is more tempted to hold out when his signal is high. This gives rise to the threshold structure identified in the proposition. Proposition 2 identifies a “lemons problem” in takeover bidding. Importantly, this issue arises directly as a result of the free-rider problem associated with dispersed ownership. To see why, note that the price mechanism implied by a tender offer is essentially a “take it or leave it” mechanism. With only one shareholder, the raider could entirely avoid the lemons problem by making an
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1083
offer that is infinitesimally greater than 0. Since the single shareholder’s decision is clearly pivotal to whether the takeover succeeds or not, he would find it optimal to tender independently of his information. However, as ownership becomes more dispersed, the likelihood that any single shareholder is pivotal goes down, and shareholders have an opportunity to free-ride on their peers.10 Each shareholder will therefore only tender when his information indicates that the post-takeover value will be low. COROLLARY 1: The cutoff signal s∗ characterized by Proposition 2 is an increasing function of p, with limp→v1 s∗ = 1. This corollary establishes that increasing the offer price increases the likelihood that any given shareholder will tender his share. The intuition for this result is that by raising the price, the raider increases the cost of holding out to a shareholder, thus making him more willing to tender. At the extreme, bidding the highest possible post-takeover value (i.e., p = v1 ) leads all shareholders to tender, since there is no further gain from holding out. Given Proposition 2 and Corollary 1, we can now invert expression (3), defining the cutoff signal s∗ , DU (s∗ σ ∗ ) = 0, to express p as a function of s∗ : (4)
∗
∗
p(s ) = β(1|s )v1
n−1 n−1 i=n/2
i
F i (s∗ )(1 − F(s∗ ))n−1−i
Equation (4) gives us an indifference condition for shareholders in that a shareholder with signal s∗ is indifferent between tendering at the price p or keeping his share. It is therefore the analog of (1) for the asymmetric information case, being virtually identical for the case where λ → 0 or 1. The sum on the right-hand side corresponds to the probability that at least half of the remaining n − 1 shareholders tender, given the threshold strategy s∗ . Equivalently, this represents the probability that an individual shareholder attaches to the success of the tender offer, conditional on knowing that he has not tendered. The first term, β(1|s∗ ), is simply the posterior belief that the state is high for a shareholder with signal s = s∗ . The product is therefore the expected value of a nontendered share to a shareholder with a signal s∗ . 4.2. Equilibrium Price and Raider Profit Since from now on we will be using primarily the cutoff rule in most of our expressions, in what follows we abuse notation slightly by just using s in place 10 There are other mechanisms for making shareholders pivotal and thus reducing or possibly even eliminating the lemons problem induced by free-riding. One of the simplest is to use a 100% conditional offer, so that no shares are purchased unless every shareholder agrees to tender his share. Such an offer, however, suffers from obvious practical difficulties in that it is highly likely that there will always be some subset of shareholders who either remain uninformed about the tender offer or do not behave optimally.
1084
R. MARQUEZ AND B. YILMAZ
of s∗ wherever there is no risk of confusion. We can now calculate the expected profit to the raider, Π: (5)
Π = λv1
n n i=n/2
−p λ
i
F i (s)(1 − F(s))n−i i
n n i=0
i
+ (1 − λ)
F i (s)(1 − F(s))n−i i
n n i=0
+B λ
n n i=n/2
i
+ (1 − λ)
i
G (s)(1 − G(s)) i
n−i
i
F i (s)(1 − F(s))n−i
n n i=n/2
i
G (s)(1 − G(s)) i
n−i
This equation represents the analog of (2) for the case where each shareholder’s information is private and the raider possibly has a private benefit from acquiring the target firm. The first term reflects the expected value of the shares that are purchased, which is just the expected number of shares that are tendered times the value of each share conditional on the takeover being successful in the good state, when the takeover adds value. The second term is the price paid times the expected number of shares bought on aggregate, which are paid for whether the takeover is successful or not. The third term is the expected value of any private benefits of control, which accrue in either state (ω = 0 or 1) but only if at least half the shares are tendered so that the takeover is successful. It is useful to divide the analysis at this point into two cases: B = 0 and B > 0. We start with the former, for which we have the following result. PROPOSITION 3: Let B = 0. Then for any p, limn→∞ Π(s∗ (p)) ≤ 0. The proposition establishes that, for widely held firms, no tender offer can make positive profits. The intuition stems from the lemons problem discussed earlier. For any price that is offered, a larger fraction of the shares are tendered in the bad state than in the good state, since on average a larger number of shareholders will have sufficiently low signals (i.e., negative information) in the bad state. Given this, the share of the surplus going to the raider in the good state can never be enough to compensate him for the cost of the shares in the bad state.
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1085
Put differently, a raider who learns that his offer is successful at a given price should lower his expectation on the post-takeover value, since the offer’s success means that a majority of shareholders have observed sufficiently low signals. Consequently, he should consider lowering his bid. However, even at a lower price the raider would still face bad news if he learns that his offer is accepted by a majority of shareholders, since those shareholders must have even lower signals. In this sense, this problem can also be viewed as a “winner’s curse” in that the raider must recognize that his offer succeeding in fact represents bad news, since at least half of the shareholders must have observed relatively low signals. Consequently, he must revise downward his expected value of the firm. Moreover, this happens for any price offered, since the lower is the price, the worse is the news that the offer has gone through. Assume therefore from now on that B > 0. Using equation (9) in the Appendix, the expected value of the shares that are purchased by the raider can be written as n−1 n−1 F i (s)(1 − F(s))n−1−i λV1 F(s) i i=n/2−1
Defining n˜ = n(λF(s) + (1 − λ)G(s)) as the expected number of shares that are purchased, this translates into an expected value for each share purchased of n−1 i λF(s) i=n/2−1 n−1 F (s)(1 − F(s))n−1−i i v˜ = v1 λF(s) + (1 − λ)G(s) We denote by p∗ the price that maximizes the raider’s profit for any given B > 0. In what follows, we focus as before on P ∗ = np∗ , which represents the total dollar value the raider offers for the entire equity stake of the firm. For comparability, we normalize the expected value of each share that is tendered ˜ by the total number of shares, V˜ = nv.
s¯ Define s¯ such that 0 f (s) ds = 12 . We can now establish our main result. PROPOSITION 4: For any B > 0, there exists p ∈ (0 v1 ) and n such that, for all n > n, Π(s∗ (p)) > 0. Furthermore, let B¯ = (V1 /λ)[β(1|¯s)(λF(¯s) + (1 − λ)G(¯s)) − λF(¯s)]. Then we have the following cases: ¯ limn→∞ P ∗ = limn→∞ V˜ = 0. The probability of a successful (i) For 0 < B < B, takeover attempt converges to 1 when ω = 0 and converges to 0 when ω = 1. ¯ (ii) For B > B, lim P ∗ = β(1|¯s)V1 >
n→∞
λF(¯s) V1 = lim V˜ n→∞ λF(¯s) + (1 − λ)G(¯s)
The probability of a successful takeover attempt converges to 1 for all states.
1086
R. MARQUEZ AND B. YILMAZ
In contrast to the result in Proposition 3, this last result shows that for any private benefit B > 0, positive expected profits in the takeover of a widely held firm are possible. When the raider derives a private benefit from the acquisition, there is always a price sufficiently low that in the limit guarantees the offer will be accepted at least in the bad state and that allows the raider to enjoy his private benefit with probability 1 − λ. Therefore, a necessary and sufficient condition for a takeover bid of a widely held firm to be profitable is that B > 0. However, Proposition 4 also presents a stark dichotomy in the raider’s optimal pricing strategy as a function of the private benefit B in that his behavior ¯ For values of B below changes in a discrete way around the cutoff value B. this cutoff, the raider offers the minimum amount possible, with a price approaching 0 as n increases. For these low values of the private benefit, we find then that the limiting price as the number of shareholders increases converges to the expected value of the firm conditional on the success of the takeover. However, the outcome does not allocate the firm in an efficient manner: in the limit, the tender offer is successful if and only if the takeover adds no value. Acquisitions that increase the most value, those where V = V1 > 0, do not take place. ¯ the raider offers a strictly positive price that reFor B above the cutoff B, flects much of the value increase in the good state. Essentially, when the private benefit B is relatively large, the raider finds it optimal to bid a higher price and induce tendering more often than when B is low. We find now that the limiting price converges to the lowest price necessary to guarantee that the offer always succeeds. We therefore have allocative efficiency, since the tender offer is always successful and it is ex ante optimal for the firm to be taken over. In other words, all value-increasing acquisitions take place.11 We also observe that the equilibrium price converges to a value greater than the expected value of the tendered shares, so that the acquirer overpays for the firm. However, this price is strictly less than V1 , implying that shareholders fail to extract the full surplus from a value-increasing raider (we discuss this issue further in Section 4.4). The reason for the abrupt transition in the raider’s bidding strategy stems from the behavior of the order statistics that determine the probability of success of the tender offer in the good state. From (4) we see that for any P such that s∗ < s, as n increases, the right-hand side converges to zero, since the probability of a successful takeover in the good state (i.e., when at least 50% of the shares are tendered) becomes vanishingly small. Conversely, for P sufficiently large that s∗ ≥ s, as n increases, the right-hand side converges to β(1|s∗ )v1 , since the probability of a successful takeover in the good state approaches 1. From the raider’s perspective, therefore, as n increases, the only question is 11 However, if we allow v0 < 0, then both value-increasing and -decreasing takeover attempts may be successful. Therefore, even efficiency may not hold in more general settings. We discuss this case further in Section 5.
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1087
whether to bid high enough to induce at least half the shareholders to tender in the good state. If so, which occurs when B is high enough, the raider wishes to do so at the least cost, which implies bidding P ∗ = β(1|¯s)V1 . If not, which occurs when B is low, the raider may as well bid the lowest possible amount, which is achieved by offering a price P that gets closer and closer to zero as n increases. Given this limiting behavior of the probability of a successful takeover, we can restate the intuition for these two cases as follows. With a relatively low price the offer will succeed only if the value of the shares after the takeover is expected to be low, since many individuals have observed a low signal. When B is small, it is optimal for the raider to offer a price only slightly higher than the value of the shares. The raider loses money on each share, but this loss can be made arbitrarily small by reducing the price, taking it in the limit to 0, the post-takeover value when ω = 0. As a consequence, even a small benefit can be enough to compensate the raider for the infinitesimally small loss. While the price can never be exactly right because of the free-riding problem, it can be made “almost” right by reducing it, thus making sure that the takeover is successful only in the bad state. When B is high, the fact that a low bid leads the takeover to be unsuccessful with probability λ (the ex ante probability that the post-takeover value of the firm is high) implies too much of a loss for the raider. Therefore, the raider will want to make sure that the takeover always succeeds, even if this implies offering a price which is too high. In other words, the potential gain of the private benefit B is sufficiently large that it is no longer so crucial to minimize the loss on a per-share basis. The results above characterize the behavior of the total value offered for the firm in equilibrium, P ∗ , and show that the raider offers a higher price as his private benefit increases. In the limit, of course, this effect is pushed to the extreme, as characterized in Proposition 4. For completeness, we show that a similar result also holds for the case of finite n, in that comparative statics results show that the equilibrium share price depends on the level of private benefits in an intuitive way. PROPOSITION 5: For any finite n, the equilibrium price p∗ is increasing in B. 4.3. Comparison to Symmetric Information Benchmark From the analysis in Section 3 we know that, for the symmetric information case, allowing the raider to have even an arbitrarily small private benefit leads to takeover attempts that are always successful at prices that equal the posttakeover value. By contrast, Proposition 4 shows that this need not be the case when shareholders have private information. With private information, prices deviate from post-takeover firm value as the raider’s private benefit increases,
1088
R. MARQUEZ AND B. YILMAZ
since shareholders use their information in deciding whether to tender or not.12 We also show that the probability of realizing a value-increasing takeover converges to zero when the raider’s private benefit is strictly positive but low, implying that the efficient allocation of the firm is never achieved, quite unlike the symmetric information case. For completeness, we note that a similar result to that of the symmetric information case can be derived within the private information framework discussed above for the case where information problems become vanishingly small as a corollary to Proposition 4 by letting λ → 0 or 1. In this case, it is straightforward to see that, as λ → 1, B¯ → 0 and we are always in case (ii) of the proposition, even for an arbitrarily small private benefit: the probability of a value-increasing raider taking over converges to 1 as n goes to infinity. Moreover, the price converges to the post-takeover value, which in this case is equal to V1 > 0, so that the raider makes no profit beyond the value of his private benefit (this is similar to the symmetric information case studied in Section 3). At the other extreme, as λ → 0, we have that P ∗ → 0 as well for either case of Proposition 4 so that again the price converges to the post-takeover value. 4.4. Private Information and Value Extraction ¯ the equilibrium price conFrom Proposition 4 we obtain that for B > B, verges to a value greater than that of the tendered shares, so that the raider overpays for the shares purchased. However, this price is strictly less than V1 , implying that shareholders fail to extract the full surplus from those raiders who increase the most value. Nevertheless, shareholders gain as a result of their private information when the raider’s private benefit is sufficiently high, as described below. As in Section 3, define W to represent the ex ante wealth gain to shareholders from the presence of the raider. First, consider the case where ω = 0. For the symmetric information case we have determined that, conditional on ω = 0, W → 0 as n → ∞. By contrast, with asymmetric information, for ω = 0 ¯ since the raider will bid P ∗ > 0 and a majority we have that W > 0 for B > B, of shareholders will tender. Conversely, consider the case where ω = 1. For the symmetric information case we know that W → V1 as n → ∞, while under asymmetric information W → 12 P ∗ + 12 V1 , since half of the shareholders tender at the offered price, with the other half holding out and receiving the posttakeover value. Whether shareholders on net receive more or less than V¯ , the ex ante value of the firm, depends of course on the value of P ∗ . Note, however, ¯ total ex ante surplus must that since the firm is always purchased when B > B, be B + V¯ : the private benefit plus the ex ante value of the firm conditional on 12 This does not necessarily imply that shareholders are made worse off as a result of their private information. We analyze this issue in the next subsection.
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1089
success of the takeover. We can now establish the following result concerning the distribution of surplus. ¯ then with asymmetric information, W = λB¯ + V¯ > COROLLARY 2: If B > B, ¯ V : The value gain to shareholders is higher than the ex ante firm value under the raider. The payoff to the raider is B − λB¯ > 0. PROOF: Recall that we defined n˜ = n(λF(¯s) + (1 − λ)G(¯s)), which is the expected number of shares purchased by the raider in equilibrium. The total dollar payment to shareholders is therefore n˜
P∗ = (λF(¯s) + (1 − λ)G(¯s))β(1|¯s)V1 n
Furthermore, conditional on ω = 1, a fraction (1 − F(¯s)) of shareholders hold out at the offer price P ∗ and instead obtain the post-takeover value V1 . The ex ante expected value gain to these shareholders is λ(1 − F(¯s))V1 . Therefore, the total gain to shareholders is n˜
P∗ + λ(1 − F(¯s))V1 n = (λF(¯s) + (1 − λ)G(¯s))β(1|¯s)V1 + λ(1 − F(¯s))V1 V1 =λ β(1|¯s)(λF(¯s) + (1 − λ)G(¯s)) − λF(¯s) + λV1 λ = λB¯ + V¯
as desired. Finally, note that since the takeover attempt is always successful, the total surplus must be B + V¯ , which implies that the raider receives B − λB¯ > 0. Q.E.D. The corollary demonstrates that shareholders benefit from their private information by allowing them to extract greater value from the raider. As in the symmetric information benchmark, shareholders extract the full ex ante value of the firm, V¯ . However, their private information also allows them to extract a fraction of the raider’s private benefit B, since the raider needs to overpay for the shares he purchases to ensure a successful takeover attempt. From the corollary, it is also straightforward to see that an increase in the information value of shareholders’ signals allows them to extract greater surplus when B is large. One way to observe this is by considering an improvement in the information content of the signal that increases the threshold signal s¯ , representing the point where F(¯s) = 12 . As s¯ → 1 (or as s¯ increases), the price offered, P ∗ , converges to the value of the firm in the good state conditional on
1090
R. MARQUEZ AND B. YILMAZ
the success of the takeover offer, V1 . This increase is reflected in an increase in B¯ and thus an increase in the surplus obtained by shareholders. These rents, of course, must reduce the efficiency of the takeover market, since they reduce the return to the raider and thus the likelihood that a valueincreasing takeover will occur. This is seen very clearly by focusing on the case ¯ for which we have the following result. where B < B, ¯ then with asymmetric information the value gain COROLLARY 3: If 0 < B < B, to shareholders is zero: W = 0. The payoff to the raider is B(1 − λ) > 0. PROOF: For this case, from Proposition 4 we have that the total ex ante sur¯ plus is B(1 − λ), since the takeover occurs only for ω = 0. Moreover, for B < B, the price offered converges to 0 as n → ∞, so that shareholders obtain virtually no surplus. Q.E.D. This result points to an inefficiency created by the existence of private information in that value-increasing takeovers fail to occur when private benefits are low. In this case, the raider only receives his private benefit B in the bad state, ω = 0. Unlike the symmetric information case, the total surplus being created now is just B(1 − λ), which is strictly lower than the ex ante surplus that is available, B + V¯ . 5. ROBUSTNESS: VALUE-DESTROYING RAIDERS We now consider the case where V0 < 0, so that the raider may destroy value if the takeover is successful. As a starting point, note that there may be multiple equilibria even when information is symmetric. Consider a raider who is expected to destroy value. If he offers a negative price at least as large as the value under his management, there is an equilibrium in the tender offer subgame such that nobody tenders: Not tendering is a best response if a shareholder knows that the raider will not be able to take over. However, there is also an equilibrium in the tender offer subgame such that everyone tenders: Tendering is a best response if a shareholder knows that the value-destroying raider is going to take over. The same multiplicity problem exists in our setting as well. In what follows, we restrict attention to the equilibrium in which no tendering occurs following a negative offer price (although similar conclusions hold also for the case where the offer price is allowed to be negative). In this case, the raider’s expected profit following a negative offer price is zero, so without loss of generality we can restrict attention to offers with nonnegative prices only. We can now characterize shareholders’ tendering strategies. For V0 = 0, a shareholder’s incentive to hold out versus tendering, for any price p, is described by
n ∗ ∗ n v1 + β(0|s)q 0 σ ∗ v0 − p (6) D(s σ ) = β(1|s)q 1 σ 2 2
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1091
which is similar to expression (3), but includes the additional term β(0|s)q(0 σ ∗ n2 )v0 representing the payoff to a shareholder when he holds out and the takeover is successful in state ω = 0. Much as in the previous section, it is straightforward to show that for any positive offer price p, setting (6) equal to zero defines a unique threshold signal s∗ such that, in equilibrium, shareholders with signal s < s∗ tender, while those with signal s > s∗ do not. We can now calculate the expected profit of the raider, Π − , following a positive offer price. Much as in the previous section, we have that (7)
Π − = Π + (1 − λ)v0
n n
i
i=n/2
Gi (s)(1 − G(s))n−i i
where Π is as defined above in equation (5) for the case where V0 = 0. Equation (7) highlights the fact that the raider’s profit when V0 < 0 is similar to the case where V0 = 0 with the addition of the second term to reflect the value destruction when ω = 0. The last term therefore represents the expected value of the shares purchased when ω = 0: the per-share value v0 times the expected number of shares, all conditional on the takeover occurring. Since v0 < 0, this additional term must be negative. As in the previous section, we can invert expression (6), defining the cutoff signal s∗ , D(s∗ σ ∗ ) = 0, to express p as a function of s∗ : (8)
p(s∗ ) = β(1|s∗ )v1
n−1 n−1 i=n/2
∗
+ β(0|s )v0
i
F i (s∗ )(1 − F(s∗ ))n−1−i
n−1 n−1 i=n/2
i
Gi (s∗ )(1 − G(s∗ ))n−1−i
This expression is similar to (4) with the addition of the second term to reflect the expected value of the shares in state ω = 0. The final summation therefore corresponds to the probability that at least half of the remaining n − 1 shareholders tender, given the threshold strategy s∗ and conditional on the bad state ω = 0. The term β(0|s) is simply the posterior belief that the state is low for a shareholder with signal s = s∗ . We can now state the following result. PROPOSITION 6: There exists B > 0 such that for any B ≤ B, limn→∞ Π − ≤ 0. This results differs from the earlier case where V0 = 0 in which, for any positive private benefit B > 0, there is always a price that leads to positive expected profits. The intuition for this result once again stems from the lemons problem discussed earlier, in that the success of the tender offer signals to the raider that a majority of shareholders observed relatively low signals, suggesting he may have overpaid for the shares. However, unlike the previous case, when
1092
R. MARQUEZ AND B. YILMAZ
the raider destroys value in state ω = 0, he loses value on all the shares he purchases, and this remains true even if the price he pays is arbitrarily close to zero for those shares. The raider therefore needs a sufficiently large private benefit to compensate him for the expected losses. Much like in the previous section, it can be shown that for intermediate values of the private benefit, the raider can in fact make nonnegative expected profits. However, unlike the case where V0 = 0, an equilibrium in which the raider successfully takes over only when ω = 0 and at a price arbitrarily close to zero is no longer feasible. To see why, suppose to the contrary that the probability of taking over converges to 0 for ω = 1. In that case, the benefit of holding out for a shareholder with a sufficiently high signal becomes arbitrarily small as n goes to infinity. By contrast, the cost of not tendering remains strictly negative since V0 < 0. But this contradicts the fact that for any p > 0, there must be some cutoff signal s∗ such that D(s∗ σ ∗ ) = 0. Therefore, even for arbitrarily small prices, there must be a positive probability of a successful takeover when ω = 1, which arises because the possibility that value may be destroyed upon the success of the offer encourages shareholders to tender. In equilibrium, the minimum private benefit that just breaks even for the raider must reflect that when the tender offer is successful in state ω = 1, some value will be created and some profits will be earned by the raider. Conditional on ω = 0, however, the raider makes losses. Since the price at which the raider purchases the shares is arbitrarily small, these losses also represent overall value destruction, that is, V0 + B < 0. The following proposition formalizes this argument. s) V . There exists a > 0 such that for B ∈ PROPOSITION 7: Let |V0 | < β(1|¯ β(0|¯s) 1 (B B + ), the following cases exist: (i) The probability of a successful takeover converges to 1 when ω = 0 and is strictly positive, but less than 1, when ω = 1. (ii) The probability of a successful takeover converges to 0 when ω = 1 as V0 → 0. (iii) Conditional on ω = 0, a takeover destroys value, that is, V0 + B < 0.
This finding confirms that our main result in Proposition 4 is robust to allowing for a raider who sometimes destroy firm value: for low values of B, either no takeover occurs or the likelihood of the takeover occurring is significantly higher when ω = 0 than when ω = 1. Even though value destruction by the raider leads to greater shareholder tendering and thus a positive probability of success when ω = 1, it is still true that value-increasing takeovers are less likely to be successful than value-destroying ones. Moreover, the proposition establishes that not only do shareholders lose value, but that, conditional on ω = 0, the change in total welfare, V0 + B, is negative. The restriction on |V0 | is simply to ensure that, conditional on the cutoff signal s¯ , the takeover does not destroy firm value ex ante.
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1093
For the case where B is large, it is straightforward to see that for B large enough, it will be optimal to take over in both states of the world, since only then will the raider be assured of receiving his private benefit. The argument is similar to that in the second part of Proposition 4. Note, however, that firm value is destroyed when ω = 0, since V0 < 0, so that efficiency ex post no longer holds. As a final point, we note that our main inefficiency result from this section and the last—that value-increasing takeovers are less likely to be successful— also arises in other contexts. In particular, a similar inefficiency result holds for the case where V0 is strictly positive, thus establishing that such inefficiencies arise regardless of the value being generated by the raider. Likewise, it can be shown that the result is not restricted to the case where only shareholders have information. Rather, it is robust to the possibility that the raider has additional private information about some component of the value he may create upon acquiring the firm. For details on both of these cases, see Marquez and Yılmaz (2007b). 6. CONCLUSION This paper has analyzed takeover bidding when shareholders have private information. The main result is that a tender offer is more likely to be successful when the raider adds no value to the firm. In particular, when private benefits are small, the raider can take over only when he is not expected to create any value. Only when private benefits are large will value-increasing takeovers be profitable. The main driving force in these results stems from a lemons problem that arises as a result of the free-rider problem in takeovers, in that shareholders who expect a higher value for their shares post-takeover will be unwilling to tender at the offered price. This lemons problem implies that a raider finds it difficult to acquire the firm when doing so would increase the firm’s value. In contrast to the symmetric information case, though shareholders may be better off, they are unable to extract full surplus from a value-increasing raider. APPENDIX Mathematical Digression We would like to show that n n−1 n n−1 αi (1 − α)n−i i = αn αi (1 − α)n−1−i (9) i i i=k
i=k−1
where α ∈ [0 1]. Start by noting that n n (n − 1)! i−1 n α (1 − α)n−i i αi (1 − α)n−i i = αn i i!(n − i)! i=k
i=k
1094
R. MARQUEZ AND B. YILMAZ
Canceling out the i terms in the numerator and denominator, this expresn (n−1)! sion can be further written as αn i=k (i−1)!(n−i)! αi−1 (1 − α)n−i . By changing the limits of the summation, this expression can also be written as n−1 n−1 i (n−1)! αn i=k−1 i!(n−1−i)! αi (1 − α)n−1−i = αn i=k−1 n−1 (1 − α)n−1−i , as desired. α i A direct implication of this fact is that n n i=1
i
αi (1 − α)n−i i = αn
Omitted Proofs PROOF OF PROPOSITION 1: The proof is similar to the analysis in Bagnoli and Lipman (1988), so we only sketch the main parts of the argument. As arn−1 i gued above, for p = E[v|Sn ] i=n/2 n−1 φ (1 − φ)n−1−i and B = 0, we can write i the raider’s profits as n n−1 n n−1 i n−i i n−1−i E[v|Sn ] φ (1 − φ) i − nφ φ (1 − φ) i i i=n/2
i=n/2
n n/2 Using expression (9), the above equation simplifies to E[v|Sn ] n/2 φ (1 − φ)n/2 n2 . Note that this term is nonnegative for all φ ∈ [0 1] and furthermore it is maximized at φ = 12 . Therefore, following offer price, the expected n an1 optimal profit due to shares bought is E[V |Sn ] n/2 ( 2 )n+1 . From Sterling’s approxima n 1 n+1 tion we have that limn→∞ E[V |Sn ] n/2 ( 2 ) = 0. Therefore, the expected profit due to the shares bought must converge to zero for any price P ∈ (0 E[V |Sn ]) as n → ∞. Furthermore, note that the probability of a successful takeover is bounded away from 1. This establishes the first part of the proposition. Consider now the case where B > 0. Since in the absence of a private benefit profits converge to zero for any possible price, for any B > 0 there must exist n¯ ¯ it is optimal for the raider to offer p = E[v|Sn ]. Doing such that for any n > n, so guarantees the success of the offer and allows the raider to obtain his private benefit B. Q.E.D. PROOF OF COROLLARY 1: To show that s∗ is an increasing function of p, note simply that if D(s σ; p ) = 0 for some p = p , then D(s σ; p) < 0 for all p > p . Since β is increasing in s, the result will be true as long as q is increasing in the cutoff signal s∗ . Since q is an order statistic specifying the probability that at least a certain number of shares are tendered in equilibrium, this probability must be increasing in the value of the cutoff signal s∗ below which shareholders tender, thus establishing the first part. For the second part, note that as p goes to v1 , we need β(1|s)q(1 σ ∗ n2 ) to go to 1 as well so that D(s σ ∗ ) ≥ 0. But
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1095
note that as s and s∗ go to 1, both β(1|s) and q(1 σ ∗ n2 ) converge to 1, thus establishing the second part of the result. Q.E.D. PROOF OF PROPOSITION 3: Using expression (9), equation (5) simplifies to n−1 n−1 F i (s)(1 − F(s))n−1−i i
Π = λF(s)nv1
i=n/2−1
− pn(λF(s) + (1 − λ)G(s)) n n F i (s)(1 − F(s))n−i +B λ i i=n/2
+ (1 − λ)
n n i=n/2
i
G (s)(1 − G(s)) i
n−i
Substituting for p(s∗ ) from (4) and using the fact that V1 = nv1 , we can now write Π as Π(s) = V1 λF(s) − β(1|s)(λF(s) + (1 − λ)G(s)) (10) n−1 n−1 F i (s)(1 − F(s))n−1−i × i i=n/2
n−1 + V1 λF(s) F n/2−1 (s)(1 − F(s))n/2 n/2 − 1 n n +B λ F i (s)(1 − F(s))n−i i i=n/2
+ (1 − λ)
n n i=n/2
i
G (s)(1 − G(s)) i
n−i
Differentiating the first term in square brackets with respect to s, we obtain (11) λF − β(1|s)(λF + (1 − λ)G) =−
λ(1 − λ)[f g − fg ] (λF + (1 − λ)G) [λf + (1 − λ)g]2
The term f g − fg is strictly positive for all n given MLR. Consequently, the derivative in equation (11) is negative for all n. We also know that [λF(0) − β(1|0)(λF(0) + (1 − λ)G(0))] = 0. Therefore, [λF(s) − β(1|s)(λF(s) + (1 − λ)G(s))] must be not only a decreasing function of s, but also negative for all
1096
R. MARQUEZ AND B. YILMAZ
n−1 i F (s)(1 − F(s))n−1−i is strictly positive for any s > 0. Since the term i=n/2 n−1 i given n for all s > 0, this implies that the entire first term of Π(s) in (10) must be negative for all s > 0 and for all n. We can now calculate the limiting value for Π(s). From Sterling’s approxi n−1 n/2−1 mation it is easy to see that limn→∞ n/2−1 (s)(1 − F(s))n/2 = 0. Therefore, F limn→∞ Π(s) becomes (12) V1 λF(s) − β(1|s)(λF(s) + (1 − λ)G(s)) n−1 n − 1 i n−1−i F (s)(1 − F(s)) × lim i n→∞ i=n/2
+ B lim λ n→∞
n n i=n/2
i
+ (1 − λ)
F i (s)(1 − F(s))n−i
n n i=n/2
i
G (s)(1 − G(s)) i
n−i
From above, we know that for all n the first term is negative, so that its limit as n → ∞ must be nonpositive. Therefore, for B = 0 we can now conclude that limn→∞ Π(s) ≤ 0. Q.E.D. PROOF OF PROPOSITION 4: Assume that B > 0. Note that, for any s such that F(s) < 12 < G(s), by the law of large numbers (LLN) the first term in expression (12) converges to 0 as n gets arbitrarily large, whereas the second term converges to (1 − λ)B. In other words, for some s < s¯ , expression (12) converges by the LLN to (1 − λ)B as n → ∞. Therefore, for p(s) we must have limn→∞ Π(s∗ (p)) > 0, which establishes that there is always some price at which equilibrium profits are positive when B > 0. For the rest, note that for s > s¯ , (12) converges to V1 [λF(s) − β(1|s)(λF(s) + (1 − λ)G(s))] + B as n → ∞. Therefore, one necessary condition for the raider to find it optimal to induce a majority of shareholders to tender in both states is that there exists an s > s¯ such that V1 λF(s) − β(1|s)(λF(s) + (1 − λ)G(s)) + B > (1 − λ)B From equation (11), we know that the left-hand side is a decreasing function of s. Therefore, the cutoff private benefit of control is characterized by the equality ¯ V1 λF(¯s) − β(1|¯s)(λF(¯s) + (1 − λ)G(¯s)) + B¯ = (1 − λ)B yielding B¯ = (V1 /λ)[β(1|¯s)(λF(¯s) + (1 − λ)G(¯s)) − λF(¯s)] so that for B > B¯ the raider prefers to induce s∗ > s¯ rather than s∗ < s¯ .
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1097
˜ so that To obtain limn→∞ V˜ , recall that V˜ = nv, λF(s) lim V˜ = V1 lim n→∞ n→∞ λF(s) + (1 − λ)G(s) n−1 n−1 i n−1−i × F (s)(1 − F(s)) i i=n/2−1
For the first part, note that for p such that F(s∗ (p)) < 12 , it is immediate by the LLN that λF(s) lim V˜ = V1 lim n→∞ n→∞ λF(s) + (1 − λ)G(s) n−1 n−1 i n−1−i × F (s)(1 − F(s)) i i=n/2−1
= 0
n−1 i ∗ The equilibrium price is P ∗ = β(1|s∗ )V1 i=n/2 n−1 F (s )(1 − F(s∗ ))n−1−i , so i n−1 × that for p such that F(s∗ (p)) < 12 , limn→∞ P ∗ = V1 limn→∞ (β(1|s∗ ) i=n/2 n−1 i i ∗ ∗ n−1−i ∗ ˜ F (s )(1 − F(s )) ) = 0. Therefore, limn→∞ P = limn→∞ V = 0, as desired. For the second part, note that p must be such that F(s∗ (p)) > 12 . Then by the LLN the price converges to β(1|s∗ )V1 for all s∗ > s¯ since from equation (4) n−1 i ∗ we have that p(s∗ ) = β(1|s∗ )v1 i=n/2 n−1 F (s )(1 − F(s∗ ))n−1−i . Taking the i lowest possible price such that F(s∗ (p)) > 12 , the optimal price np∗ converges to β(1|¯s)V1 . We then have λF(¯s) lim V˜ = V1 n→∞ λF(¯s) + (1 − λ)G(¯s) as claimed. Finally, note that β(1|¯s) = >
λf (¯s) λf (¯s) + (1 − λ)g(¯s) λF(¯s) λF(¯s) + (1 − λ)G(¯s)
⇔
f (¯s) F(¯s) > g(¯s) G(¯s)
λF(¯s)V1 which is satisfied by MLR. Therefore, β(1|¯s)V1 > λF(¯s)+(1−λ)G(¯ , as desired. s) Finally, we show that offer prices P ∈ (0 β(1|¯s)V1 ) are suboptimal as n gets arbitrarily large. For any P ∈ (0 β(1|¯s)V1 ), we have n−1 P n−1 i ∗ ∗ n−1−i ∗ n = = q 1 σ F (s )(1 − F(s )) i β(1|s∗ )V1 i=n/2 2
1098
R. MARQUEZ AND B. YILMAZ
Moreover, as n → ∞, s∗ < s¯ for all P ∈ (0 β(1|¯s)V1 ). Expression (12) then converges by the LLN to V1 λF(s∗ ) − β(1|s∗ )(λF(s∗ ) + (1 − λ)G(s∗ ))
P + B(1 − λ) β(1|s∗ )V1
as n → ∞ However, we know from above that the term [λF(s∗ ) − β(1|s∗ )(λF(s∗ ) + (1 − λ)G(s∗ ))] is negative for all s > 0. Therefore, the raider’s profit is maximized by choosing P to be arbitrarily close to zero as n increases. An analogous argu¯ offer prices P < β(1|¯s)V1 are less ment can be used to show that for all B > B, Q.E.D. profitable than β(1|¯s)V1 as n gets arbitrarily large. PROOF OF PROPOSITION 5: Since the cutoff signal s∗ is monotonically increasing in p, we can focus on choosing an optimal s∗ , which is characterized by the first order condition ∂Π = 0 ∂s with second order condition ∂2 Π/∂s2 < 0. Define H = ∂Π and note that, at ∂s the optimal cutoff signal, H ≡ 0. We can therefore use the implicit function theorem to find ∂H/∂B ∂s∗ =− ∂B ∂H/∂s Since ∂H < 0 by the second order condition, we know that the sign of ∂s be the same as the sign of ∂H . From equation (5), ∂B n n ∂H ∂ = λ F i (s)(1 − F(s))n−i i ∂B ∂s i=n/2 + (1 − λ)
n n i=n/2
=
i
∂s∗ ∂B
will
G (s)(1 − G(s)) i
n−i
n! λf (s)F n/2−1 (s)(1 − F(s))n/2 n ( − 1)!( 2 )! n 2
+ (1 − λ)g(s)Gn/2−1 (s)(1 − G(s))n/2
> 0 Therefore, we have
∂H ∂B
> 0 for all s∗ ∈ (0 1), implying that
∂p∗ ∂B
> 0.
Q.E.D.
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1099
PROOF OF PROPOSITION 6: Substituting for p(s∗ ) from (8) and, as above, using expression (9), we can now write limn→∞ Π − as V1 λF(s) − β(1|s)(λF(s) + (1 − λ)G(s)) (13) n−1 n − 1 F i (s)(1 − F(s))n−1−i × lim i n→∞
i=n/2
+ V0 (1 − λ)G(s) − β(0|s)(λF(s) + (1 − λ)G(s)) n−1 n − 1 × lim Gi (s)(1 − G(s))n−1−i i n→∞
i=n/2
+ B lim λ n→∞
n n i=n/2
i
+ (1 − λ)
F i (s)(1 − F(s))n−i
n n i=n/2
i
Gi (s)(1 − G(s))n−i
Recall, from the proof of Proposition 3, that for all n the term [λF(s) − β(1|s)(λF(s) + (1 − λ)G(s))] is zero at s = 0 and strictly decreasing for all s > 0. Similarly, the term [(1 − λ)G(s) − β(0|s)(λF(s) + (1 − λ)G(s))] is zero at s = 0, and from MLR we know that this term is strictly increasing for all s > 0. Moreover, it is of equal magnitude but of opposite sign as [λF(s) − β(1|s)(λF(s) + (1 − λ)G(s))]. Therefore, V1 [λF(s) − β(1|s)(λF(s) + (1 − λ)G(s))] and V0 [(1 − λ)G(s) − β(0|s)(λF(s) + (1 − λ)G(s))] are both negative for all s > 0 and for all n. We can now conclude that for any s such that 12 < G(s), by the LLN as n gets arbitrarily large the second term in the above expression converges to a strictly negative number and the third term converges to a value which is proportional to B and no smaller than (1 − λ)B. The first term is always nonpositive. Therefore, there exists a minimum level of B such that the raider’s expected profit is positive. Q.E.D. The proof of Proposition 7 makes use of the following result. LEMMA 1: There does not exist a price p > 0 such that limn→∞
q(1σ ∗ n/2) q(0σ ∗ n/2)
= 0.
PROOF: Suppose to the contrary. Recall that D(s σ ∗ ) = β(1|s)q(1 σ ∗ n2 ) × ∗ n/2) v1 + β(0|s)q(0 σ ∗ n2 )v0 − p. If q(1σ converges to zero, then it must be that q(0σ ∗ n/2) D(s σ ∗ ) < 0 for all s. This contradicts the existence of a cutoff signal such that Q.E.D. D(s∗ σ ∗ ) = 0.
1100
R. MARQUEZ AND B. YILMAZ
PROOF OF PROPOSITION 7: From Lemma 1, we know that if V0 < 0, then for any p > 0 we must have q(1 σ ∗ n2 ) > 0. Therefore, s∗ must converge to s¯ . Consequently, we must have D(¯s σ ∗ ) = 0. Furthermore, q(0 σ ∗ n2 ) converges to 1 for any p > 0, so that q(1 σ ∗ n2 ) must converge to −(β(0|¯s)V0 )/(β(1|¯s)V1 ). This completes parts (i) and (ii) of the proposition. From equation (13), for zero expected profits we have B=
(V1 − V0 )[(1 − λ)G(¯s) − β(0|¯s)(λF(¯s) + (1 − λ)G(¯s))] λq(1 σ ∗ n2 ) + (1 − λ)
Substituting V1 = −(β(0|¯s)V0 )/(β(1|¯s)q(1 σ ∗ n2 )) and letting q(1 σ ∗ n2 ) = 1 we have (1 − λ)G(¯s) − β(0|¯s)(λF(¯s) + (1 − λ)G(¯s)) V 0 + B = V0 1 − β(1|¯s) Therefore, V0 + B < 0 as long as β(1|¯s) > (1 − λ)G(¯s) − β(0|¯s)(λF(¯s) + (1 − λ)G(¯s)). Note that β(1|¯s)[1 − (λF(¯s) + (1 − λ)G(¯s))] > −λF(¯s) since the lefthand side is positive and the right-hand side is negative. Adding and subtracting (1 − λ)G(¯s) to the right-hand side, we have β(1|¯s) 1 − (λF(¯s) + (1 − λ)G(¯s)) > (1 − λ)G(¯s) − (λF(¯s) + (1 − λ)G(¯s)) Rearranging terms we have β(1|¯s) > (1 − λ)G(¯s) + β(1|¯s)(λF(¯s) + (1 − λ)G(¯s)) − (λF(¯s) + (1 − λ)G(¯s)) Substituting β(1|¯s) = 1 − β(0|¯s) in the right-hand side gives us β(1|¯s) > (1 − λ)G(¯s) − β(0|¯s)(λF(¯s) + (1 − λ)G(¯s)) Therefore, we must have V0 + B < 0. From continuity, the inequality holds for Q.E.D. some q(1 σ ∗ n2 ) < 1, thus establishing part (iii). REFERENCES BAGNOLI, M., AND B. LIPMAN (1988): “Successful Takeovers Without Exclusion,” Review of Financial Studies, 1, 89–110. [1075,1077,1079-1081,1094] BOND, P., AND H. ERASLAN (2007): “Strategic Voting Over Strategic Proposals,” Mimeo, University of Pennsylvania. [1078] CORNELLI, F., AND D. LI (2002): “Risk Arbitrage in Takeovers,” Review of Financial Studies, 15, 837–868. [1075] FISHMAN, M. (1988): “A Theory of Preemptive Takeover Bidding,” RAND Journal of Economics, 19, 88–101. [1075]
INFORMATION AND EFFICIENCY IN TENDER OFFERS
1101
GROSSMAN, S., AND O. HART (1980): “Takeover Bids, the Free Rider Problem, and the Theory of the Corporation,” Bell Journal of Economics, 11, 42–64. [1075,1076] HOLMSTROM, B., AND B. NALEBUFF (1992): “To the Raider Goes the Surplus? A Reexamination of the Free Rider Problem,” Journal of Economics and Management Strategy, 1, 37–62. [1075] HOLMSTROM, B., AND J. TIROLE (1993): “Market Liquidity and Performance Monitoring,” Journal of Political Economy, 101, 678–709. [1077] MARQUEZ, R., AND B. YILMAZ (2006): “Takeover Bidding and Shareholder Information,” Mimeo, University of Pennsylvania. [1077] (2007a): “Conditional versus Unconditional Offers in Takeovers,” Mimeo, University of Pennsylvania. [1079] (2007b): “Information and Efficiency in Tender Offers,” Mimeo, University of Pennsylvania. [1093]
W. P. Carey School of Business, Arizona State University, Main Campus P.O. Box 873906, Tempe, AZ 85287, U.S.A.;
[email protected] and Graduate School of Business, Stanford University, 518 Memorial Way, Stanford, CA 94305-5015, U.S.A.;
[email protected]. Manuscript received November, 2005; final revision received April, 2008.
Econometrica, Vol. 76, No. 5 (September, 2008), 1103–1142
SEMIPARAMETRIC POWER ENVELOPES FOR TESTS OF THE UNIT ROOT HYPOTHESIS BY MICHAEL JANSSON1 This paper derives asymptotic power envelopes for tests of the unit root hypothesis in a zero-mean AR(1) model. The power envelopes are derived using the limits of experiments approach and are semiparametric in the sense that the underlying error distribution is treated as an unknown infinite-dimensional nuisance parameter. Adaptation is shown to be possible when the error distribution is known to be symmetric and to be impossible when the error distribution is unrestricted. In the latter case, two conceptually distinct approaches to nuisance parameter elimination are employed in the derivation of the semiparametric power bounds. One of these bounds, derived under an invariance restriction, is shown by example to be sharp, while the other, derived under a similarity restriction, is conjectured not to be globally attainable. KEYWORDS: Unit root testing, semiparametric efficiency.
1. INTRODUCTION THE UNIT ROOT TESTING PROBLEM is one of the most intensively studied testing problems in econometrics.2 During the past decade or so, considerable effort has been devoted to the construction of unit root tests enjoying good power properties.3 Asymptotic power envelopes for unit root tests in the Gaussian AR(1) model were obtained by Elliott, Rothenberg, and Stock (1996; henceforth ERS) and Rothenberg (2000), while Rothenberg and Stock (1997) derived asymptotic power envelopes under rather general distributional assumptions. Rothenberg and Stock (1997) found that significant power gains (relative to the Gaussian case) are available in cases where the underlying distribution is non-Gaussian and known, and pointed out that this finding is in perfect analogy with well known properties of the location model and the stable AR(1) model. The purpose of this paper is to investigate the extent to which departures from normality can be exploited for power purposes also in the (arguably) more realistic case where the error distribution is unknown. To 1 The author thanks Marcelo Moreira, Whitney Newey, Jack Porter, Jim Powell, Jim Stock, two anonymous referees, and seminar participants at Aarhus, Berkeley, Boston University, Brown, the 2005 CEME conference, Harvard/MIT, Michigan, Penn, UC Davis, UCLA, UCSD, Wisconsin, and the 2005 World Congress of the Econometric Society for helpful comments. Financial support from CREATES (funded by the Danish National Research Foundation) is gratefully acknowledged. 2 Important early contributions include Dickey and Fuller (1979, 1981), Phillips (1987), and Phillips and Perron (1988). For reviews, see Stock (1994), Phillips and Xiao (1998), and Haldrup and Jansson (2006). 3 In parallel with the literature exploring power issues, a different branch of the unit root literature has focused on improving the size properties of unit root tests. Noteworthy contributions in that direction include Ng and Perron (1995, 2001), Perron and Ng (1996), Paparoditis and Politis (2003), and Park (2003).
© 2008 The Econometric Society
DOI: 10.3982/ECTA6113
1104
MICHAEL JANSSON
do so, the paper develops asymptotic power envelopes that are semiparametric in the sense that they explicitly account for the fact that the underlying error distribution is known only to belong to some “big” set of error distributions. An interesting methodological conclusion emerging from the existing literature on optimality theory for unit root testing is that although there is a fundamental sense in which the unit root testing problem is nonstandard, the problem is still amenable to analysis using existing tools (such as those developed for exponential families and elegantly summarized in Lehmann and Romano (2005)). An important methodological motivation for the present work is the general question of whether semiparametric power envelopes for nonstandard testing problems can be obtained by a conceptually straightforward adaptation of semiparametric methods developed for standard problems. For a variety of reasons, the unit root testing problem seems like a natural starting point for such an investigation and although some of the results obtained in this paper are likely to be somewhat specific to unit root testing, it is hoped that interesting general methodological lessons can be learned from studying that particular problem. Semiparametric testing theory has been developed for models admitting locally asymptotically normal (LAN) likelihood ratios (e.g., Choi, Hall, and Schick (1996)). In those models, testing theory “has little more to offer than the comforting conclusion that tests based on efficient estimators are efficient” (van der Vaart (1998)). On the other hand, little (if any) work has been done for models outside the LAN class, such as the AR(1) model with a root close to, and possibly equal to, unity. The latter model, which is the model under study here, admits likelihood ratios which are locally asymptotically quadratic (LAQ) in the sense of Jeganathan (1995). No universally accepted definition of estimator efficiency exists for LAQ models.4 Moreover, the duality between point estimation and hypothesis testing typically breaks down in models whose likelihood ratios are LAQ but not LAN.5 For these reasons, it appears necessary to develop semiparametric envelopes for the unit root testing problem from first principles. As is the approach to semiparametric efficiency in standard estimation problems (e.g., Begun, Hall, Huang, and Wellner (1983), Bickel, Klaassen, Ritov, and Wellner (1998), Newey (1990)), the approach to optimality theory taken in this paper is based on Stein’s (1956) insight that a testing problem is no easier in a semiparametric model than in any parametric submodel of the semi4 Gushchin (1995) established an optimality property of maximum likelihood estimators. See also Jeganathan (1995) and Ling and McAleer (2003). 5 An exception occurs in models where the limiting experiment becomes a member of a linear exponential family upon conditioning on statistics with certain ancillarity properties. A well known example is models with locally asymptotically mixed normal (LAMN) likelihood ratios, which arise in cointegration analysis (e.g., Phillips (1991), Stock and Watson (1993)). For an example that does not belong to the LAMN class, see Eliasz (2004) and Jansson and Moreira (2006).
SEMIPARAMETRIC POWER ENVELOPES
1105
parametric model. Consequently, the semiparametric power envelope will be defined as the infimum of the power envelopes associated with smooth parametric submodels embedding the true error density. Although the unit root testing problem differs from standard testing problems in important respects, it turns out that some of the qualitative findings obtained from the least favorable submodel approach bear a noticeable resemblance to the well known results for the location model, a possibly surprising result in light of the fact that the semiparametric properties of the stable AR(1) model are substantially different from those of the location model. Specifically, it is shown in this paper that although the unit root testing problem admits adaptive procedures when the error distribution is known to be symmetric, adaptation is impossible when the error distribution is (essentially) unrestricted. Nevertheless, and in sharp contrast to the location model, the unit root model with an unrestricted error distribution has the property that although adaptation is impossible, departures from normality can be exploited for efficiency purposes. (The magnitude of the available power gains depends on the shape of the error distribution through its Fisher information for location and can be quite substantial when the error distribution has fat tails.) The paper proceeds as follows. To set the stage, Section 2 introduces the model and the testing problem under consideration, while Section 3 studies unit root testing under the assumption that the error distribution is known. Section 4 extends the results of Section 3 to parametric submodels. Employing those results, Sections 5 and 6 obtain semiparametric power envelopes for the cases of symmetric and (essentially) unrestricted error distributions, respectively. The consequences of accommodating deterministic components and/or serial correlation in the error are briefly explored in Section 7, while Section 8 offers concluding remarks. Proofs of the main results are provided in the Appendix. 2. PRELIMINARIES Suppose the observed data y1 yT are generated by the zero-mean AR(1) model (1)
yt = ρyt−1 + εt
where y0 = 0 and the εt are unobserved independent and identically distributed (i.i.d.) errors from an unknown continuous distribution with full support, zero mean, and finite variance. Let f denote the unknown error density. Furthermore, and without loss of generality, let the (unknown) error variance be normalized to 1. The objective of this paper is to develop asymptotic power envelopes for the unit root testing problem in the zero-mean AR(1) model, treating f as
1106
MICHAEL JANSSON
an unknown nuisance parameter. In other words, the testing problem under consideration is H0 : ρ = 1
vs.
H1 : ρ < 1
and it is assumed to be known that f lies in some set F of densities. The main goal of the paper is to develop sharp upper bounds on the asymptotic performance of unit root testing procedures in models of this type, with special attention being devoted to semiparametric cases in which F is infinite-dimensional. By Donsker’s theorem (e.g., Billingsley (1999)), the assumptions y0 = 0 and εt ∼ i.i.d.(0 1) ensure that if ρ = 1, then yt is I(1) in the sense that the weak limit of T −1/2 yT · is a Brownian motion, where · denotes the integer part of the argument. While Donsker’s theorem is valid without the additional assumption that the error distribution is continuous and has full support, most of the statistical analysis conducted in this paper would be invalid without an assumption of this kind.6 Specifically, the additional assumption on the error distribution implies that the distributions of {y1 yT } induced by different values of ρ are mutually absolutely continuous. Mutual absolute continuity is a finite sample counterpart of the property of (mutual) contiguity, which plays a prominent role in Le Cam’s (1972) theory of limits of experiments and will be utilized throughout this paper.7 Because the purpose of this paper is to elucidate the role of F in optimality theory for unit root tests, Sections 3–6 study the zero-mean AR(1) model, which deliberately assumes away deterministic components and serial correlation in the error.8 Section 7 explores the consequences of relaxing these (implausible) assumptions and finds, in perfect analogy with ERS’s results for the Gaussian case, that the results obtained for the zero-mean AR(1) model extend readily to a model with an unknown mean and serial correlation in the error, whereas the presence of a time trend affects the asymptotic power envelope(s). 6 If the innovation distribution has bounded support, then the conditional distribution of yt given yt−1 has parameter-dependent support, a property which introduces nontrivial complications even in models with i.i.d. data (e.g., Hirano and Porter (2003), Chernozhukov and Hong (2004)). 7 The property of mutual contiguity is useful in part because it makes it possible to derive conclusions about local asymptotic power functions from assumptions concerning the behavior of certain statistics “under the null,” an attractive feature because assumptions of the latter kind tend to be relatively easy to verify. Examples of readily verifiable assumptions required to hold “under the null” are provided by Assumptions LAQ and LAQ* in Sections 3 and 4, respectively, and the condition (15) that underlies the definition of adaptation employed in Section 5. 8 The model (1) furthermore sets the initial condition y0 equal to zero and assumes away conditional heteroskedasticity. Proceeding along the lines of Müller and Elliott (2003) and Boswijk (2005), respectively, it may be possible to relax these assumptions, but no attempts to do so will be made in this paper.
SEMIPARAMETRIC POWER ENVELOPES
1107
3. KNOWN ERROR DISTRIBUTION In an attempt to further motivate the question addressed by this paper and to facilitate the interpretation of the main results, this section discusses asymptotic optimality theory for the unit root testing problem under the counterfactual assumption that the underlying error distribution is known (i.e., that F is a singleton). Even if f is known, the unit root testing problem is nonstandard. A well known manifestation of the nonstandard nature of the unit root testing problem is that contiguous alternatives to the unit root null are of the form ρ = 1 + O(1/T ). Accordingly, the parameter of interest is henceforth taken to be c = T (ρ − 1), the associated formulation of the unit root testing problem being H0 : c = 0 vs. H1 : c < 0. Any (possibly randomized) unit root test can be represented by means of a test function φT : RT → [0 1] such that H0 is rejected with probability φT (Y ) whenever YT := (y1 yT ) = Y . The power function (with argument c) associated with φT is given by EρT (c) φT (YT ), where ρT (c) := 1 + c/T and the subscript on E indicates the distribution with respect to which the expectation is taken. Define the log likelihood ratio function f
LT (c) :=
T t=2
T c log f yt − yt−1 − log f (yt ) T t=2
For any α ∈ (0 1) and any sample size T , it follows from the Neyman–Pearson lemma that the optimal size α unit root test against the point alternative c = f ¯ The power (against the alternative c = c¯ < 0 rejects for large values of LT (c). ¯ of this point optimal test gives the value of the size α power envelope at c) ¯ c = c. Under mild assumptions on f , the finite sample power envelope has an asymptotic counterpart which depends on f only through a scalar functional. A sequence of unit root tests φT is said to have asymptotic size α if lim EρT (0) φT (YT ) = α
T →∞
The asymptotic power envelope for unit root tests of asymptotic size α will be derived under the following high-level assumption on f , in which op0f (1) is shorthand for “op (1) when H0 holds and ε has density f ” and Lf denotes the set of functions f for which E[ f (ε)] = 0, E[ε f (ε)] = 1, and 1 ≤ E[ f (ε)2 ] < ∞. ASSUMPTION LAQ: If cT is a bounded sequence, then 1 f f ff LT (cT ) = cT ST − cT2 HT + op0f (1) 2
1108
MICHAEL JANSSON
where, for some f ∈ Lf , T 1 yt−1 f (yt ) S := T t=2 f T
T Iff 2 H := 2 y T t=2 t−1 ff T
Iff := E[ f (ε)2 ]
Assumption LAQ is in the spirit of Jeganathan (1995) and implies that the likelihood ratios are LAQ at ρ = 1 in the sense of that paper. In particular, it follows from Donsker’s theorem and Chan and Wei (1988, Theorem 2.4) that (2)
1 f LT (c) →d0f Λf (c) := c Sf − c 2 Hff 2
∀c
where
1
Sf :=
W (r) dBf (r)
1
Hff := Iff
0
W (r)2 dr 0
(W Bf ) is a bivariate Brownian motion with
W (1) Var Bf (1)
=
1 1 1 Iff
and →d0f is shorthand for “→d when H0 holds and ε has density f .” Additional discussion of Assumption LAQ, including sufficient conditions for its validity, will be given at the end of this section. Prohorov’s theorem (e.g., Billingsley (1999)) and Le Cam’s third lemma (e.g., van der Vaart (2002)) can be used to show that if (2) holds, then every subsequence φT admits a further subsequence φT and a [0 1]-valued function ψ for which (3)
lim EρT (c) φT (YT ) = E ψ(Sf Hff ) exp(Λf (c)) ∀c
T →∞
If φT has asymptotic size α, then ψ in (3) satisfies E[ψ(Sf Hff )] = α and it follows from the Neyman–Pearson lemma that E[ψ(Sf Hff ) exp(Λf (c))] is bounded from above by Ψf (c α) := E ψf (Sf Hff |c α) exp(Λf (c)) where ψf (Sf Hff |c α) := 1[Λf (c) > Kα (c; Iff )]
SEMIPARAMETRIC POWER ENVELOPES
1109
1[·] is the indicator function, and Kα (c; Iff ) is the 1 − α quantile of Λf (c). These facts yield the following theorem, which generalizes a result of ERS to non-Gaussian error distributions.9 THEOREM 1: If Assumption LAQ holds and φT has asymptotic size α, then (4)
lim EρT (c) φT (YT ) ≤ Ψf (c α) ∀c < 0
T →∞
The proof of Theorem 1 given above is based on Le Cam’s (1972) theory of limits of experiments. Because f is assumed to be known, a Neyman–Pearson test exists for every T and the use of the theory of limits of experiments can be avoided (e.g., Rothenberg and Stock (1997)). On the other hand, the use of the limits of experiments approach seems unavoidable when studying the models under consideration in the following sections. Specifically, the presence of a nuisance parameter governing distributional shape makes it very difficult (if not impossible) to derive a Neyman–Pearson-type test for any given T . In contrast, the limits of experiments approach is applicable also when f depends on a nuisance parameter, because the limiting experiments associated with such models do admit Neyman–Pearson-type tests. The asymptotic power bound Ψf (c α) is attainable pointwise (in c) when f is known, limT →∞ on the left-hand side of (4) equaling limT →∞ and the inequality being sharp when φT (YT ) equals 1 2 ff f φfT (YT |c α) := 1 cST − c HT > Kα (c; Iff ) 2 the natural finite sample counterpart of ψf (Sf Hff |c α). Moreover, it was found by Rothenberg and Stock (1997) that the local asymptotic power func¯ α) is uniformly (in c) “close” to Ψf (c α) if c¯ is tion associated with φfT (·|c chosen appropriately. By implication, Ψf is a relevant benchmark. The envelope Ψf depends on f only through Iff . It can be shown that Ψf is strictly increasing in Iff and that Iff ≥ 1 with equality if and only if f is the standard normal density. Moreover, ERS’s unit root tests, based on the Gaussian (quasi-) likelihood, have local asymptotic power functions that are invariant with respect to f . As a consequence, the Gaussian power envelopes derived by ERS provide a lower bound on maximal attainable local asymptotic power in models with non-Gaussian errors. Figure 1 plots Ψf (· 005) for various values of Iff , thereby quantifying the magnitude of the potential power gains, relative to procedures based on a 9 Theorem 1 is essentially due to Rothenberg and Stock (1997), who obtained a result equivalent to (4) under the (somewhat stronger) assumptions that (i) E[|ε|k + | f (ε)|k ] < ∞ for some k > 2, where f (ε) := ∂ log f (ε − θ)/∂θ|θ=0 , and (ii) ff satisfies a linear Lipschitz condition, where ff (ε) := ∂2 log f (ε − θ)/∂θ2 |θ=0 .
1110
MICHAEL JANSSON
FIGURE 1.—Power envelopes, known error distribution.
Gaussian quasi-likelihood, available in applications with nonnormal errors.10 Evidently, substantial power gains will be available (for models with Iff well above unity) if it is possible to construct a unit root test which is computable without knowledge of f and attains Ψf for every f ∈ F . Section 5 shows that this situation occurs when F consists only of symmetric error densities. More generally, Figure 1 suggests that nontrivial power gains will be available in situations where (attainable) semiparametric power envelopes are qualitatively similar to Ψf in the sense that they lie well above the envelope corresponding to the Gaussian distribution. Section 6 shows that this occurs even when F is unrestricted. Consider the density given by fλ (ε) := Cλ0 exp(−Cλ1 |ε|λ ), where λ > 1/2 and the constants Cλ0 and Cλ1 are determined by the requirement ∞ ∞ fλ (ε) dε = ε2 fλ (ε) dε = 1 10
−∞
−∞
The values λ = 2 and λ = 1 correspond to the standard normal and rescaled double exponential distributions, respectively, and the associated values of Iff are 1 and 2, respectively. More generally, it can be shown that the value of Iff associated with fλ is given by
∞ 2
∞ [ r exp(−r λ ) dr][ 0 r 2(λ−1) exp(−r λ ) dr]
∞ Iff (λ) := λ2 0 [ 0 exp(−r λ ) dr]2 Because limλ↓1/2 Iff (λ) = ∞ and Iff (·) is continuous, the range of Iff (·) is [1 ∞). Numerical evaluation shows that Iff (07709) ≈ 5 and Iff (06818) ≈ 10, respectively.
SEMIPARAMETRIC POWER ENVELOPES
1111
Assumption LAQ holds for a wide range of error distributions. For instance, Jeganathan (1995) showed that Assumption LAQ is satisfied (with f = −f˙/f ) under the following absolute continuity condition on f . a function f˙ such that f (ε) =
∞ f admits
εASSUMPTION AC: The density 2 ˙ ˙ f (r) dr for every ε ∈ R and −∞ [f (ε) /f (ε)] dε < ∞. −∞ Under Assumption AC, f is the score function, evaluated at θ = 0, for θ in the location model (5)
Xi = θ + εi
where the εi are i.i.d. with density f . Similarly, Iff is the Fisher information for location associated with f . As a consequence, both f and Iff are readily interpretable. In the location model (5), Assumption AC serves dual purposes: it delivers the LAN property (i.e., a quadratic expansion of the log likelihood ratio function) and enables nonparametric estimation of f . Assumption AC will serve similar purposes in Theorems 4 and 6 of this paper. The following Le Cam (1970) type of assumption is implied by Assumption AC.11 ASSUMPTION DQM: The density f admits a function f such that, as |θ| → 0,
∞
−∞
2
1 f (ε − θ) − 1 − θ f (ε) f (ε) 2
f (ε) dε = o(θ2 )
For the purposes of establishing just the LAN property (in the location model), it is well known that differentiability in quadratic mean (Assumption DQM) suffices.12 It seems natural to ask if the model studied in this paper exhibits a similar feature. An affirmative answer to that question is provided by the following lemma, which therefore shows that the usefulness of Assumption DQM extends beyond the class of models whose likelihood ratios enjoy the LAN property.13 LEMMA 2: Assumption LAQ is implied by Assumption DQM. 11
Additional discussion of the relationship between assumptions of the absolute continuity (AC) and differentiability in quadratic mean (DQM) variety can be found in Le Cam (1986, Section 17.3) and Le Cam and Yang (2000, Section 7.3). 12 Indeed, van der Vaart (2002, p. 676) argued that Assumption DQM is “exactly right for getting the LAN expansion (in the location model).” For an appreciation of differentiability in quadratic mean, see Pollard (1997). 13 A proof of Lemma 2 can be found in Jansson (2007).
1112
MICHAEL JANSSON
4. PARAMETRIC SUBMODELS Relaxing the assumption that the error density is known, this section studies unit root testing in parametric submodels. In the present context, a parametric submodel is a model of the form (1) with f embedded in a parametric family F := {f (·|η) : η ∈ R} of density functions satisfying ∞ ∞ εf (ε|η) dε = 0 and ε2 f (ε|η) dε < ∞ −∞
−∞
for each value of the (nuisance) parameter η. It is assumed that f (·) = f (·|0); that is, it is assumed that the true (but unknown) value of η is zero. Moreover, the family F is assumed to be “smooth” at η = 0. Among other things, “smoothness” will √ imply that the contiguous alternatives to η = 0 are of the form η = O(1/ T ). In recognition of this fact, all subsequent formulations will employ a local reparameterization of the form √ η = ηT (h) := h/ T , where the true (but unknown) value of the local parameter h is zero. Let the log likelihood ratio function associated with F be denoted by LFT (c h) :=
T t=2
T c log f yt − yt−1 ηT (h) − log f [yt |ηT (0)] T t=2
and let Lη denote the class of functions η for which E[ η (ε)] = 0, E[ε η (ε)] = 0, and E[ η (ε)2 ] < ∞. The degree of smoothness assumed on the part of F is made precise by the following high-level assumption, which generalizes Assumption LAQ to parametric submodels. ASSUMPTION LAQ*: If (cT hT ) is a bounded sequence, then 1 LFT (cT hT ) = (cT hT )STF − (cT hT )HTF (cT hT ) + op0f (1) 2 where, for some F := ( f η ) ∈ Lf × Lη , f 1 T ST t=2 yt−1 f (yt ) T F
T := ST := √1 STη t=2 η (yt ) T Iff T 2 ff fη HT HT t=2 yt−1 T2 HTF := := fη ηη If η T HT HT t=2 yt−1 T 3/2 Iff If η := E[ F (ε) F (ε) ] IF := If η Iηη
If η
T 3/2
T t=2
Iηη
yt−1
1113
SEMIPARAMETRIC POWER ENVELOPES
The requirement E[ F (ε)] = (0 0) of Assumption LAQ* is the familiar zero-mean property of scores, while E[ε F (ε)] = e1 := (1 0) will be a con∞ sequence of the requirement −∞ εf (ε|η) dε = 0 under mild smoothness conditions. As is true of Assumption LAQ, Assumption LAQ* is in the spirit of Jeganathan (1995) and implies that the likelihood ratios are LAQ in the sense of that paper. Moreover, proceeding as in the proof of Lemma 2 it can be shown that Assumption LAQ* is implied by the following generalization of Assumption DQM. ASSUMPTION DQM*: The family F admits functions f and η such that, as |θ| + |η| → 0, 2 ∞ 1 f (ε − θ|η) − 1 − [θ f (ε) + η η (ε)] f (ε) dε f (ε) 2 −∞ = o(θ2 + η2 ) The LAQ property delivered by Assumption LAQ* makes it possible to use the limits of experiments approach to derive asymptotic power envelopes for the unit root testing problem in a model where it is assumed to be known only that f ∈ F . To describe the salient properties of the limiting experiment, let (W BF ) := (W Bf Bη ) be a trivariate Brownian motion with 1 e1 W (1) (6) = Var BF (1) e1 IF It follows from standard weak convergence results that (STF HTF ) →d0f (SF HF ) where
1 Sf W (r) dBf (r) := 0 Sη Bη (1)
1 Iff 0 W (r)2 dr Hff Hf η := HF :=
1 Hf η Hηη If η W (r) dr
SF :=
0
If η
1 0
W (r) dr
Iηη
Using Prohorov’s theorem, Le Cam’s third lemma, and the result 1 LFT (c h) →d0f ΛF (c h) := (c h)SF − (c h)HF (c h) 2
∀(c h)
it can be shown that every subsequence φT admits a further subsequence φT and a [0 1]-valued function ψ for which (YT ) = E ψ(SF HF ) exp(ΛF (c h)) (7) E φ ∀(c h) lim ρ (c)η (h) T T T T →∞
1114
MICHAEL JANSSON
Because the true, but unknown, value of h has been normalized to zero, asymptotic power envelopes provide sharp upper bounds on limT →∞ EρT (c)ηT (0) × φT (YT ). In view of (7), these bounds can be obtained by maximizing E ψ(SF HF ) exp(ΛF (c 0)) = E ψ(SF HF ) exp(Λf (c)) with respect to ψ. As in Section 3, the tests under consideration will be assumed to be such that the limiting test functions ψ satisfy (8)
E[ψ(SF HF )] = α
In an attempt to further ensure that the power envelopes account for the presence of the unknown nuisance parameter h, some additional restrictions will be placed on ψ. Two classes of test functions, motivated by two conceptually distinct approaches to nuisance parameter elimination in the limiting experiment, will be considered. The first class is motivated by the fact that ψ is α-similar in the limiting experiment if and only if (9) E ψ(SF HF ) exp(ΛF (0 h)) = α ∀h Accordingly, a sequence φT is said to be locally asymptotically α-similar (in F ) if any ψ satisfying (7) also satisfies (9). The second class is motivated by a location invariance property enjoyed by testing problems involving c in the limiting experiment. As explained in a remark following the proof of Theorem 3, any (location) invariant test in the limiting experiment admits a representation in which ψ(SF HF ) depends on (SF HF ) only through (Sfη HF ), where
Sfη := Sf −
Hf η Sη Hηη
Accordingly, a sequence φT is said to be locally asymptotically α-invariant (in F ) if any ψ satisfying (7) also satisfies (8) and can be chosen such that (10)
ψ(SF HF ) = E[ψ(SF HF )|Sfη HF ]
It is shown in the proof of Theorem 3 that if (10) holds, then E ψ(SF HF ) exp(ΛF (c h)) = E ψ(SF HF ) exp(Λfη (c)) (11) where 1 Λfη (c) := c Sfη − c 2 Hffη 2
Hffη := Hff −
Hf2 η Hηη
Because the right-hand side of (11) does not depend on h, the class of locally asymptotically α-invariant tests is contained in the class of locally asymptotically α-similar tests. Both classes of tests contain most (if not all) existing unit
SEMIPARAMETRIC POWER ENVELOPES
1115
root tests. In particular, it can be shown that both classes of tests contain the point optimal tests of ERS as well as the “robust” unit root tests based on M-estimators and/or ranks proposed by Herce (1996), Hasan and Koenker (1997), Thompson (2004), and Koenker and Xiao (2004). On the other hand, the restrictions imposed are not entirely vacuous, as it follows from Theorem 3 that they are violated by the “oracle” test based on φfT unless the submodel satisfies If η = 0. The next result generalizes Theorem 1 to parametric submodels. It is shown in part (a) that ΨFS (c α) := E ψSF (SF HF |c α) exp(Λf (c)) provides an upper bound on local asymptotic power for locally asymptotically α-similar tests, where ψSF (SF HF |c α) := 1[Λf (c) > KαS (Sη c; IF )] and KαS is the continuous function satisfying E[ψSF (SF HF |c α)|Sη ] = α.14 The envelope for locally asymptotically α-invariant tests is shown in part (b) to be given by ΨFI (c α) := E ψIF (SF HF |c α) exp(Λf (c)) where ψIF (SF HF |c α) := 1[Λfη (c) > KαI (c; IF )] and KαI (c; IF ) is the 1 − α quantile of Λfη (c). THEOREM 3: (a) If Assumption LAQ* holds and φT is locally asymptotically α-similar, then (12)
lim EρT (c)ηT (0) φT (YT ) ≤ ΨFS (c α) ∀c < 0
T →∞
(b) If, moreover, φT is locally asymptotically α-invariant, then (13)
lim EρT (c)ηT (0) φT (YT ) ≤ ΨFI (c α)
T →∞
∀c < 0
14 A more explicit characterization of KαS is given in the proof of Lemma 7 provided in Jansson (2007).
1116
MICHAEL JANSSON
The bounds derived in Theorem 3 are attainable pointwise if F is known, limT →∞ on the left-hand sides of (12) and (13) equaling limT →∞ and the inequalities becoming equalities when φT (YT ) is given by15 1 f ff φSFT (YT |c α) := 1 cST − c 2 HT > KαS (STη c; IF ) 2 and φ
I FT
1 2 ffη fη I (YT |c α) := 1 cST − c HT > Kα (c; IF ) 2
respectively, where fη
fη
f
ST := ST −
HT η S Iηη T
f η2
ffη
HT
ff
:= HT −
HT Iηη
Because the class of locally asymptotically α-invariant tests is contained in the class of locally asymptotically α-similar tests, the power envelopes satisfy ΨFI ≤ ΨFS by construction. Moreover, the inequality is strict whenever If η = 0, implying that the present model differs in an interesting way from models with LAN likelihood ratios. In a Gaussian shift experiment (the limiting experiment in a model with LAN likelihood ratios) with one element of the mean vector being the parameter of interest and the others being unknown nuisance parameters, the class of α-similar tests contains the class of size α location invariant tests. In other words, the natural counterparts of the restrictions (9) and (10) are nested in the same way as they are here. Unlike the limiting experiment of the model studied here, however, the two classes of restrictions give rise to identical power envelopes in a Gaussian shift experiment (because the best α-similar test is location invariant) and there is no ambiguity about what the “correct” power envelope is.16 In contrast, because ΨFI and ΨFS differ whenever If η = 0, it is unclear which (if any) of these envelopes is the “correct” envelope in the present context. A potential problem with the power envelope ΨFS is that it is perhaps “too local” in the sense that it fails to adequately reflect the fact that the nuisance parameter h is unknown in the limiting experiment. Specifically, whereas E[ψ(SF HF ) exp(ΛF (c h))] will depend on h in general even if ψ satisfies (9), the object being maximized, E[ψ(SF HF ) exp(Λf (c))], does not depend on h. 15 The functions φSFT and φIFT are the natural finite sample counterparts of ψSF and ψIF , respectively. 16 This observation combined with the fact that (the counterpart of) local asymptotic αsimilarity is a weaker restriction than (the counterpart of) local asymptotic α-invariance in models with LAN likelihood ratios would appear to explain why the latter restriction has received little (if any) attention in the existing literature on semiparametric testing theory.
SEMIPARAMETRIC POWER ENVELOPES
1117
One way to avoid this potential problem, which further helps clarify the relationship between (9) and (10), is to consider a minimax criterion. Using the Hunt–Stein theorem (e.g., Lehmann and Romano (2005, Theorem 8.5.1)), it can be shown that if ψ satisfies (9), then inf E ψ(SF HF ) exp(ΛF (c h)) ≤ ΨFI (c α) (14) h∈R
where the inequality becomes an equality when ψ = ψIF (·|c α). By implication, ΨFI can be interpreted as a minimax power envelope for locally asymptotically α-similar tests. Indeed, if φT is locally asymptotically α-similar, then inf lim min EρT (c)ηT (h) φT (YT ) ≤ ΨFI (c α) ∀c < 0 H T →∞ h∈H
where the inf is taken over all finite subsets H of R. (Moreover, the inequality becomes an equality and limT →∞ can be replaced by limT →∞ when φT (·) = φIFT (·|c α).) This fact, a proof of which can be based on (14) and the methods of van der Vaart (1991), would appear to support the conjecture that ΨFI is the correct power envelope. Additional substantiation of that conjecture is provided by a remark at the end of this section. Because profile likelihood procedures “work” in conventional (parametric or even semiparametric) problems (e.g., Murphy and van der Vaart (1997, 2000)), it may be worth noting that ΨFI has a profile likelihood interpretation. Specifically, the function Λfη appearing in the definition of ΨFI satisfies Λfη (c) = max ΛF (c h) − max ΛF (0 h) h
h
This is not merely a coincidence, as it follows from Lehmann and Romano (2005, Problem 6.9) that the best location invariant tests are of the profile likelihood variety whenever the log likelihood is quadratic in the location parameter. Comparing the power envelopes of Theorems 1 and 3, it is seen that ΨFS ≤ Ψf the inequality being strict unless If η = 0. Therefore, the asymptotic power bound(s) for unit root tests in a parametric submodel will be strictly lower than the power bound in the model with a known error density unless the submodel satisfies If η = 0. Because the assumption η ∈ Lη implies E[ f (ε) η (ε)] = 0 only when f (ε) = ε, the only distribution (with full support and unit variance) for which the condition If η = 0 is satisfied for every smooth submodel is the Gaussian distribution. Therefore, the point optimal tests of ERS are locally asymptotically α-similar/α-invariant in any smooth parametric submodel for which f (·|0) is the Gaussian distribution. Conversely, if f is not Gaussian, then the test φfT (·|c α) of the previous section violates (9) for some smooth parametric submodel F with f (·|0) equal to the (true) density f By implication, the concept of point optimality, which has proven successful when dealing
1118
MICHAEL JANSSON
with the curvature in the unit root model with a known density, cannot be used to handle the nuisance parameter f . Specifically, a test of the form φf¯T (·|c α) will not be “nearly efficient” even if f¯ is chosen carefully. The results mentioned in the preceding paragraph bear a noticeable resemblance to the point estimation results for the location model, but differ in one important respect from the results for the stable AR(1) model. In the location model, the Cramér–Rao bound is given by Iff−1 when f is known and by (Iff − If2η /Iηη )−1 in parametric submodels, implying that the bounds coincide if and only if If η = 0. Moreover, the sample mean, the quasimaximum likelihood estimator of θ based on the Gaussian distribution, is regular in any submodel, whereas the quasi-maximum likelihood estimator
θˆ f := arg maxθ i log f (Xi − θ) based on the true density f is regular in a submodel F with f (·|0) = f if and only if the submodel has If η = 0, a condition which is violated by some smooth submodels unless the true distribution happens to be Gaussian. In the location model, the condition that If η = 0 in all smooth submodels permitted by the set F of densities to which f is assumed to belong is simply Stein’s (1956) necessary condition for adaptation. As is well known, this condition is satisfied when f is assumed to be symmetric, but is violated when f is unrestricted. In fact, adaptive estimation is possible in the symmetric location model under Assumption AC (e.g., Beran (1974, 1978), Stone (1975)), whereas the sample average attains the semiparametric efficiency bound in the location model with an (essentially) unrestricted f (e.g., Levit (1975), Newey (1990)), implying in particular that departures from normality cannot be exploited for efficiency purposes in that model. The latter property is not shared by the stable AR(1) model, which admits adaptive estimators even when f is required only to satisfy Assumption AC (e.g., Kreiss (1987a, 1987b), Drost, Klaassen, and Werker (1997)). Therefore, although the stable AR(1) model and the location model exhibit qualitatively identical behavior when the density is known and/or symmetric, they exhibit drastically different behavior in the semiparametric case where f is treated as an unrestricted nuisance parameter. Utilizing the results of this section, the following two sections develop power envelopes in the semiparametric cases where f is either assumed to be symmetric or is left unrestricted. It will be shown in Section 5 that the unit root model admits adaptive testing procedures when the errors are assumed to be symmetric and satisfy Assumption AC. Consequently, the analogies pointed out by Rothenberg and Stock (1997, p. 278) extend in a predictable way to the semiparametric model in which only symmetry is assumed on the part of the error distributions. In contrast, it is obvious from the results cited in the previous paragraph that these analogies will not extend to the model in which the error distribution is unrestricted. Studying the unit root model with an (essentially) unrestricted f , Section 6 finds that the semiparametric properties of the unit root model are related to the semiparametric properties of both the
SEMIPARAMETRIC POWER ENVELOPES
1119
location model and the stable AR(1) model. On the one hand, a numerical evaluation of the semiparametric power envelopes will show that these can be well above the power envelope corresponding to the Gaussian distribution. By implication, the unit root model shares some of the semiparametric properties of the stable AR(1) model. On the other hand, the analytical characterization of the semiparametric power envelope for the unit root model turns out to be intimately related to the corresponding characterization of the semiparametric power envelope for the location model (and seemingly unrelated to the characterization of the semiparametric power envelope for the stable AR(1) model). REMARK: A simple heuristic argument (which can easily be made rigorous) shows that “plug-in” versions of φSFT (YT |c α) will typically fail to attain ΨFS . Indeed, consider 1 2 ff S S ˜η ˜ ˜ φFT (YT |c α) := 1 c ST − c HT > Kα (ST c; IF ) 2 where, for some estimator η˜ T of η (and assuming the derivatives exist), T 1 ˜ ST := yt−1 f (yt |η˜ T ) T t=2 T 1 S˜ Tη := √ η (yt |η˜ T ) T t=2
∂ f (yt |η) := log f (yt − θ|η) ∂θ θ=0 η (yt |η) :=
∂ log f (yt |η) ∂η
−1 η If η˜ T is asymptotically efficient (i.e., best regular), then T 1/2 η˜ T = Iηη ST + op0f (1) and f (·|η˜ T ) should be asymptotically equivalent to ∂ f (·|η) f (·) + f η (·)η˜ T f η (·) := ∂η η=0
in the sense that T 1 fη S˜ T = yt−1 [ f (yt ) + f η (yt )η˜ T ] + op0f (1) = ST + op0f (1) T t=2
Moreover, S˜ Tη should be op0f (1), so it should be the case that 1 fη ff φ˜ SFT (YT |c α) = 1 cST − c 2 HT > KαS (0 c; IF ) + op0f (1) 2 Replacing η = 0 by an asymptotically efficient estimator is seen to have a nonnegligible impact on the properties of the test. Indeed, the statistics
1120
MICHAEL JANSSON
φ˜ SFT (YT |c α) and φSFT (YT |c α) are asymptotically equivalent if and only if If η = 0. By implication, plug-in versions of φSFT (YT |c α) generally fail to attain ΨFS . In fact, it follows from the preceding display that φ˜ SFT (·|c α) is locally asymptotically α-invariant in F and therefore satisfies lim EρT (c)ηT (0) φ˜ ST (YT |c α; F) ≤ ΨFI (c α)
T →∞
The failure of the plug-in approach in this example casts serious doubt on the relevance of the power envelope ΨFS . In contrast, ΨFI is easily shown to be attained by plug-in versions of φIFT (YT |c α). It will be shown in Section 6 that this property extends to semiparametric models. 5. SYMMETRIC ERROR DISTRIBUTIONS This section studies unit root testing in the case where f is assumed to belong S , the set of symmetric densities satisfying Assumption AC. As discussed to FAC in the previous section, Stein’s (1956) necessary condition for adaptation in the location model is also necessary and sufficient for the power envelopes Ψf ΨFS , and ΨFI to coincide for every smooth submodel F permitted by the set F of densities to which f is assumed to belong. This necessary condition is S . Theorem 4, the main result of satisfied when f is assumed to belong to FAC S is also sufficient for adaptive this section, shows that the assumption f ∈ FAC unit root testing to be possible. In models with LAN likelihood ratios, the duality between point estimation and hypothesis testing in Gaussian shift experiments (e.g., Choi, Hall, and Schick (1996)) implies that associated with any “reasonable” definition of adaptation for point estimators (e.g., Bickel (1982), Begun et al. (1983)) there is a “reasonable” definition of adaptation for hypothesis tests. On the other hand, because the duality between point estimation and hypothesis testing breaks down in models with LAQ (but not LAN/LAMN) likelihood ratios, some care must be exercised when defining adaptation in the context of the model studied in this paper. In particular, although Ling and McAleer’s (2003) definition of adaptation for point estimators generalizes Bickel’s (1982) definition to models of the form considered in this paper, it is unclear whether that definition can be translated into a reasonable definition of adaptation for hypothesis tests. It is by no means difficult to give a reasonable definition of adaptation for tests of the unit root hypothesis. Nevertheless, it seems more attractive to work with a notion of adaptation that depends only on the model under consideration and makes no reference to any particular type of inference (e.g., point S estimation or hypothesis testing). Accordingly, the collection FAC is said to permit adaptive inference if there exists a pair (SˆT Hˆ T ) of statistics such that (15)
(SˆT Hˆ T ) = (ST HT ) + op0f (1) f
f
S ∀f ∈ FAC
1121
SEMIPARAMETRIC POWER ENVELOPES f
f
Because (ST HT ) is asymptotically sufficient when f is known and satisfies Assumption LAQ, the present definition is a natural formalization of the requirement that no information is lost (asymptotically) when the density is treated as S 17 an unknown nuisance parameter belonging to the set FAC . S Theorem 4 shows that FAC permits adaptive inference and uses that result to derive an adaptation result for unit root tests. To describe the latter result, suppose (SˆT Hˆ T ) satisfies (15) and let (16)
1 2 ˆ ˆ ˆ ˆ φT (YT |c α) := 1 c ST − c HT > Kα (c; IT ) 2
Iˆ T :=
1 T2
Hˆ T
T t=2
2 yt−1
If (15) holds, then a test based on φˆ T (·|c α) will be asymptotically equivalent to the oracle test based on φfT (·|c α), an adaptation property in view of the fact that φfT (·|c α) attains Ψf (·) and is locally asymptotically α-invariant in S F ⊆ FAC whenever F satisfies Assumption DQM*. f f As candidate “estimators” of ST and HT , consider (17)
T 1 SˆT := yt−1 ˆST t (yt ) T t=2
and (18)
T Iˆ T 2 y Hˆ T := 2 T t=2 t−1
T 1 ˆS Iˆ T := (yt )2 T t=1 T t
where { ˆST t : 2 ≤ t ≤ T } are estimators of f . Evidently, (SˆT Hˆ T ) satisfies (15) f S . provided the ˆST t are such that (SˆT Iˆ T ) = (ST Iff ) + op0f (1) for every f ∈ FAC This requirement is met by sample splitting estimators of the form (19)
ˆ (yt ) := S Tt
˜ST −τT (yt |yτT +1 yT ) t = 1 τT , ˜SτT (yt |y1 yτT ) t = τT + 1 T ,
where τT are integers with (20)
0 < lim τT /T ≤ lim τT /T < 1 T →∞
17
T →∞
Moreover, the definition generalizes in an obvious way to (other classes of densities and) other models with a finite-dimensional asymptotically sufficient statistic and the resulting definition agrees with standard definitions in models where the likelihood ratios happen to be LAN.
1122
MICHAEL JANSSON
and ˜ST is a sequence of estimators such that, as T → ∞, ∞ 2 S (21) ˜T (ε|ε1 εT ) − f (ε) f (ε) dε = op (1) −∞
and (22)
√ T
∞ −∞
˜ST (ε|ε1 εT )f (ε) dε = op (1)
S whenever ε1 εT are i.i.d. with density f ∈ FAC .
THEOREM 4: (a) If (SˆT Hˆ T ) is defined as in (17)–(22), then (15) holds. S (b) In particular, if f ∈ FAC and φˆ T is defined as in (16)–(22), then ¯ α) = lim EρT (c) φfT (YT |c ¯ α) lim EρT (c) φˆ T (YT |c
T →∞
T →∞
∀c ≤ 0 c¯ < 0
REMARK: (i) The main purpose of Theorem 4 is to demonstrate by example that the bound Ψf is sharp when the errors are known to be symmetric. Sample splitting estimators of f (with the null hypothesis imposed) are employed because such estimators make it possible to give a relatively elementary proof of adaptation under fairly minimal conditions on f . In practice, it may be desirable to use the full sample (along with some estimator of ρ) when estimating f . It seems plausible that the methods of Koul and Schick (1997) can be used to justify the use of such an estimator, but an investigation along these lines will not be pursued in this paper. Also left for future work is a numerical investigation of the extent to which the asymptotic power gains documented here are available in small samples when f needs to be estimated. A similar remark applies to Theorem 6 of the next section. (ii) Estimators ˜ST that satisfy (21) and (22) can be found in Bickel (1982) and Bickel et al. (1998). For further discussion of these high-level assumptions, see Schick (1986) and Klaassen (1987). 6. UNRESTRICTED ERROR DISTRIBUTIONS This section obtains semiparametric power envelopes for tests of the unit root hypothesis in the case where f is (essentially) unrestricted in the sense that is assumed to be known only that f belongs to FDQM , the class of densities satisfying Assumption DQM. In the spirit of Stein (1956), the semiparametric power envelopes will be defined as the infimum of the power envelopes associated with parametric submodels embedding the true error density. In light of the striking similarities between the results derived so far and the corresponding results for the location model, it seems plausible that these asymptotic power envelopes should admit an interpretation analogous to the interpretation of the semiparametric power envelope for tests in the location model.
SEMIPARAMETRIC POWER ENVELOPES
1123
That conjecture turns out to be correct. Specifically, it turns out that the least favorable submodels in the unit root model coincide with the least favorable submodels in the location model. In the location model, a least favorable submodel is any submodel for which −1 the associated η maximizes the squared correlation If2η Iff−1 Iηη of f (ε) and η (ε) subject to the restriction η ∈ Lη . As was seen in Section 4, this property is shared by the unit root model in the Gaussian case, where Iff = 1 and any submodel has If η = 0 (and is least favorable). Presuming that the property is shared by the unit root model also if Iff > 1, it follows that the semiparametric power envelopes for tests of the unit hypothesis should be given by the envelopes ΨFS and ΨFI associated with a submodel F for which η (ε) = f (ε) − ε. Theorem 5 makes the preceding heuristics precise. Let (W Bf ) and (Λf Sf Hff ) be as in Section 3 and define ΨfS (c α) := E ψSf (Sf Hff SfS |c α) exp(Λf (c)) I ΨfI (c α) := E ψIf (SfI Hff |c α) exp(Λf (c)) where18 ψSf (Sf Hff SfS |c α) := 1[Λf (c) > KαS (SfS c; JfLF )] I ψIf (SfI Hff |c α) := 1[ΛIf (c) > KαI (c; JfLF )]
1 I ΛIf (c) := c SfI − c 2 Hff 2
SfS := Bf (1) − W (1)
S := Sf − I f
1
JfLF :=
Iff Iff − 1 Iff − 1 Iff − 1
W (r) dr SfS
0
2
1
I Hff := Hff − (Iff − 1)
W (r) dr
0
Finally, for any f ∈ FDQM , let Jf denote the set of matrices IF associated with submodels F satisfying Assumption DQM*. THEOREM 5: If f ∈ FDQM , then (23) (24)
inf
F : IF ∈Jf
inf
ΨFS (c α) = ΨfS (c α) ∀c < 0
F : IF ∈Jf
ΨFI (c α) = ΨfI (c α)
∀c < 0
18 As defined, ψSf and ψIf are the test functions ψSF and ψIF of Section 4 that correspond to a submodel with η (ε) = f (ε) − ε.
1124
MICHAEL JANSSON
The proof of (23) first shows that (25)
inf
F : IF ∈Jf
ΨFS (c α) ≥ ΨfS (c α) ∀c < 0
The proof of (25) is constructive in the sense that it shows that the test based on 1 2 ff f fS S S LF φfT (YT |c α) := 1 cST − c HT > Kα (ST c; Jf ) 2 attains ΨfS and is locally asymptotically α-similar in any smooth submodel, where fS T
S
:= T
−1/2
T [ f (yt ) − yt ] t=2
Then, using the fact that ΨFS is continuous in IF , an inequality in the opposite direction is obtained by showing that JfLF belongs to the closure of Jf . A similar strategy is used to obtain (24). Figures 2 and 3 plot ΨfS (· 005) and ΨfI (· 005) for various values of Iff . Comparing Figures 2 and 3 to Figure 1, the semiparametric power envelopes are seen to lie well above the power envelope corresponding to the Gaussian
FIGURE 2.—Semiparametric power envelopes, similar tests.
SEMIPARAMETRIC POWER ENVELOPES
1125
FIGURE 3.—Semiparametric power envelopes, invariant tests.
distribution.19 In spite of the fact that there is no obvious connection between the analytical semiparametric efficiency results for the unit root model and those for the stable AR(1) model, the numerical results displayed in Figures 2 and 3 are therefore qualitatively similar to the well known results for the stable AR(1) model insofar as Figures 2 and 3 suggest that nonnormality can be a source of potentially substantial power gains in unit root tests even in the absence of knowledge of the error distribution. It is therefore of significant interest to investigate whether the power bounds reported in Figures 2 and 3 are sharp. The fact that completely consistent (in the terminology of Andrews (1986)) goodness of fit tests exist can be used to show that for any f¯ ∈ FDQM it is possible to construct tests that are locally efficient at f¯ in the sense that they are locally asymptotically α-similar/α-invariant in any smooth submodel F with f (·|0) ∈ FDQM and attain ΨfS /ΨfI when f = f¯. For instance, consider φ∗f¯T (YT |c α) := ϕT (YT |f¯)φSf¯T (YT |c α) + [1 − ϕT (YT |f¯)]φERS T (YT |c α) 19 The difference between the oracle bounds Ψf (· 005), ΨfS (· 005), and ΨfI (· 005) is noticeable in most of the cases considered. Numerical evaluation shows that supc |Ψf (c 005) − ΨfS (c 005)| ≈ 002, 005, 007 and supc |Ψf (c 005) − ΨfI (c 005)| ≈ 009, 015, 016 for Iff = 2, 5, 10.
1126
MICHAEL JANSSON
where φERS T (YT |c α) is the test function of ERS’s point optimal test and ϕT (·|f¯) is a (goodness of fit) test function for which ϕT (YT |f¯) = 1(f = f¯) + op0f (1)
∀f ∈ FDQM
This “shrinkage” test is asymptotically equivalent to φSf¯T (YT |c α) when f = f¯ and asymptotically equivalent to ERS’s test otherwise. In particular, the test is locally asymptotically α-similar in any smooth submodel F with f (·|0) ∈ FDQM and attains ΨfS when f = f¯. A similar construction can be used to show that ΨfI provides a (pointwise) sharp upper bound on the local asymptotic power attainable by means of tests that are locally asymptotically α-invariant in any smooth submodel F with f (·|0) ∈ FDQM . The preceding construction is of theoretical interest because it demonstrates by example that the bounds ΨfS and ΨfI are pointwise sharp. (In light of this it seems reasonable to refer to ΨfS and ΨfI as semiparametric power envelopes.) (YT |c α) is obviously not recNevertheless, the shrinkage test based on φS∗ f¯T ommended for actual use and a more interesting question is therefore whether globally (in f ) efficient testing procedures exist. On the one hand, reasoning similar to that of the remark at the end of Section 4 shows that plug-in versions of φSfT (YT |c α) generally fail to attain ΨfS even if a valid parametric submodel is postulated. In contrast, Theorem 6 will show that the assumption f ∈ FAC is sufficient for the envelope ΨfI to be globally attainable. Global attainability of ΨfI follows from arguments analogous to those used by Bickel (1982) to show feasibility of adaptive estimation of the slope coefficients in a standard regression model. The proof of (24) uses a finite sample counterpart of SfI given by (26)
fI T
S
f T
:= S −
T T 1 1 yt−1 √ [ f (yt ) − yt ] T 3/2 t=2 T t=2
Because T 1 μ y f (yt ) + S = T t=2 t−1
T ys−1 μ yt−1 := yt−1 − s=2 T −1 fI T
fI
T 1 1 yt−1 √ (yT − y1 ) 3/2 T T t=2
consistent estimation of ST turns out to be feasible even though f cannot
T μ be estimated with small bias. Specifically, the fact that t=2 yt−1 = 0 implies
SEMIPARAMETRIC POWER ENVELOPES
1127
that (the natural counterpart of) the assumption (22) can be avoided when fI constructing a consistent estimator of ST .20 To demonstrate by example that the bound ΨfI is globally (in f ) sharp, let (27)
1 2 ˆI I I I LF ˆ ˆ ˆ φT (YT |c α) := 1 c ST − c HT > Kα (c; JT ) 2
where, for some integers τT with (28)
lim τT = ∞ and
T →∞
lim τT /T = 0
T →∞
and for some estimator ˆT of f , (29)
T 1 I ˆ yt−1 ˆT (yt ) ST := T t=τ +1 T T T 1 ˆ 1 − yt−1 [ T (yt ) − yt ] √ T 3/2 t=τ +1 T t=τ +1 T
(30)
T T Iˆ T 2 1 yt−1 − (Iˆ T − 1) yt−1 Hˆ := 2 T t=τ +1 T 3/2 t=τ +1 I T
T
(31)
T
JˆTLF :=
2
T
Iˆ T Iˆ T − 1 Iˆ T − 1 Iˆ T − 1
T 1 ˆ ˆ IT := T (yt )2 T t=τ +1 T
As defined, φˆ IT (YT |c α) is a plug-in version of the test φIfT (YT |c α) used in the proof of Theorem 5(b). In the spirit of Bickel (1982), suppose (32)
ˆT (yt ) := ˜τT (yt |y1 yτT )
where ˜T is a sequence of estimators such that, as T → ∞, ∞ (33) [ ˜T (ε|ε1 εT ) − f (ε)]2 f (ε) dε = op (1) −∞
whenever ε1 εT are i.i.d. with density f ∈ FAC . 20
Because adaptive estimation is impossible in the location model with an (essentially) unre√ stricted f , it follows from Klaassen (1987) that there exists no T -unbiased estimator of f when f is (essentially) unrestricted; that is, the natural counterpart of (22) cannot hold when f is (essentially) unrestricted. In contrast, it follows from Bickel (1982) that the natural counterpart of (21) is compatible with the assumption f ∈ FAC , so (33) is not void.
1128
MICHAEL JANSSON
THEOREM 6: If f ∈ FAC and φˆ IT is defined as in (27)–(33), then I ¯ α) = E ψIf (SfI Hff ¯ α) exp(Λf (c)) |c lim EρT (c) φˆ IT (YT |c
T →∞
∀c ≤ 0 c¯ < 0 By showing that ΨfI is sharp, Theorem 6 demonstrates in particular that there is a sense in which the tests of ERS are asymptotically inadmissible if the assumption of Gaussian errors is relaxed. This result and the analogous inadmissibility result deducible from Theorem 4 can be viewed as unit root counterparts to the inadmissibility of the least squares estimator of β in the model Yi = βXi + εi where the εi are i.i.d. with density f and independent of the i.i.d. regressor Xi whose mean is assumed to be different from zero. Specifically, Bickel (1982, Example 2) showed that adaptive estimation of β is possible when f is symmetric, while Schick (1987, Example 2) presented the efficient influence function for β without assuming symmetry of f and showed in particular that departures from normality can be exploited for efficiency purposes also in that case. 7. EXTENSIONS Sections 3–6 study a model which assumes away the presence of deterministic components and/or serial correlation in the error. In the Gaussian case, the consequences of relaxing these assumptions are well understood from the work of ERS: parameters governing serial correlation in the error can be treated “as if” they are known, as can the value of a constant mean in the observed process, whereas the presence of a time trend affects the asymptotic power envelope. This section briefly explores whether these qualitative conclusions remain valid in models with non-Gaussian errors and finds that they do. In addition, and in perfect analogy with Section 4, it is found that also in models with a time trend, the properties of parametric submodels depend on whether or not Stein’s (1956) necessary condition for adaptation in the location model is satisfied. The consequences of accommodating deterministic components and/or serial correlation in the error will be explored by studying a model in which the observed data y1 yT are generated as (34)
yt = μ + δt + ut
(1 − ρL)γ(L)ut = εt
1129
SEMIPARAMETRIC POWER ENVELOPES
where μ and δ are the parameters governing the deterministic component, the lag polynomial γ(L) = 1 − γ1 L − · · · − γp Lp is of (known, finite) order p,21 the initial conditions are u0 = u−1 = · · · = u1−p = 0, and the εt are unobserved i.i.d. errors from a continuous distribution with full support, zero mean, unit variance, and density f . It is assumed that min|z|≤1 |γ(z)| > 0 so that the unit root testing problem is that of testing H0 : ρ = 1 vs. H1 : ρ < 1. As in Section 4, the density f is embedded in a smooth family of densities. As before, local reparameterizations will be employed in the asymptotic analysis. The appropriate reparameterizations of μ, δ, and γ(L) are of the form γ0 (1) δ = δT (d) := δ0 + √ d T g(L) γ(L) = γT (L; g) := γ0 (L) + √ T μ = μT (m) := μ0 + m
where μ0 and δ0 are known constants, γ0 (L) := 1 − γ10 L − · · · − γp0 Lp is a known lag polynomial with min|z|≤1 |γ0 (z)| > 0, whereas the unknown parameters are m, d, and the coefficients g := (g1 gp ) of the lag polynomial g(L) := −g1 L − · · · − gp Lp .22 Without loss of generality, it is assumed that μ0 and δ0 are equal to zero. The log likelihood ratio function associated with the chosen reparameterization is of the form LFT (c m d g h) := L0T (c m d g h) +
T
log f [εt (c m d g)|ηT (h)]
t=p+2
−
T
log f [εt (0 0 0 0)|ηT (0)]
t=p+2
where L0T (c h m d g) represents the contribution of y1 yp+1 and d εt (c m d g) := [1 − ρT (c)L]γT (L; g) yt − m − √ t T (t ≥ p + 2) 21 Adapting the methods of Jeganathan (1997), it should be possible to allow γ(L) to be a smoothly parameterized lag polynomial of infinite order. The qualitative conclusions of this section will not be affected by such an extension, so to conserve space it will not be pursued here. 22 The term γ0 (1) appears in the definition of δT (d) because the resulting definition gives rise to a limiting experiment which depends on d in a particularly simple way.
1130
MICHAEL JANSSON
If yt is generated by (34), (cT hT mT dT gT ) is a bounded sequence, and mild smoothness conditions on F hold, then LFT admits an expansion of the form (35)
LFT (cT mT dT gT hT ) f δη
= LT (cT dT hT ) + Lμ (mT ) + LγT (gT ) + op0f (1) where op0f (1) is shorthand for “op (1) when H0 holds, (m d g) = (0 0 0), and f δη ε has density f ” and the functions LT , Lμ , and LγT are given by 1 1 f δη f ff fδ LT (c d h) := cST − c 2 HT + d[STδ (c) − cHT (c)] − d 2 Hδδ (c) 2 2 1 fη + h[STη − cHT − d Hδη (c)] − h2 Hηη 2 p p log f (x1+j + γj0 m) − log f (x1+j ) (γ00 := −1) Lμ (m) := j=0
j=0
1 LγT (g) := g STγ − g Hγγ g 2 where
f
ST :=
T 1 xt−1 f (xt ) T t=p+2
ff
HT :=
T Iff 2 x T 2 t=p+2 t−1
T t −1 1 f (xt ) STδ (c) := √ ξc T T t=p+2 fδ
HT (c) :=
T t −1 Iff x ξ t−1 c T 3/2 t=p+2 T
T 1 S := √ η (xt ) T t=p+2 η T
H
fη T
T If η := 3/2 xt−1 T t=p+2
T 1 S := √ (yt−1 yt−p ) f (xt ) T t=p+2 γ T
SEMIPARAMETRIC POWER ENVELOPES
1131
( f η ) and (Iff If η Hηη ) are as in Section 4, xt := γ0 (L)yt ,23 ξc (r) := 1 − cr, Hδδ (c) := Iff (1 − c + c 2 /3), Hδη (c) := If η (1 − c/2), Hγγ := Σγγ Iff , and Σγγ is a p × p matrix with element (i j) given by E[γ0 (L)−1 εt−i γ0 (L)−1 εt−j ]. f δη Because neither Lμ (·) nor LT (·) is quadratic, the model with a deterministic component does not admit LAQ likelihood ratios. Nevertheless, the model is well suited for analysis using the limits of experiments approach, as the interesting part of the limiting experiment belongs to a curved exponential family and is amenable to analysis using existing tools. Indeed, for every (c h m d g), f δη
[LT (c d h) Lμ (m) LγT (g)] →d0f [Λf δη (c d h) Λμ (m) Λγ (g)] where →d0f is shorthand for “→d when H0 holds, (m d g) = (0 0 0), and ε has density f ” and Λf δη (c d h), Λμ (m), and Λγ (g) are mutually independent with 1 Λγ (g) := g Sγ − g Hγγ g Sγ ∼ N (0 Hγγ ) 2 p p log f (ε1+j + γj0 m) − log f (ε1+j ) Λμ (m) ∼ j=0
j=0
and 1 1 Λf δη (c d h) := c Sf − c 2 Hff + d[Sδ (c) − c Hf δ (c)] − d 2 Hδδ (c) 2 2 1 2 + h[Sη − c Hf η − d Hδη (c)] − h Hηη 2 where (Sf Hff Sη Hf η ) and (W Bf Bη ) are as in Section 4 and
1
Sδ (c) :=
ξc (r) dBf (r) 0
1
Hf δ (c) := Iff
W (r)ξc (r) dr 0
The mutual independence of Λf δη (c d h) and [Λμ (m) Λγ (g)] and the additively separable structure of the right-hand side of (35) imply that the derivation of asymptotic power envelopes for tests of the unit root hypothesis can proceed under the “as if” assumption that μ and the coefficients of γ(L) are known. Moreover, the distribution of Λf δη (c d h) does not depend on the coefficients of γ0 (L), so the power bounds developed in the previous sections 23
The (presample) values of y0 y1−p are set equal to zero in the definition of x1 xp . f ff fη η Because xt = yt when p = 0, the present definitions of ST , HT , ST , and HT are consistent with those of the previous sections.
1132
MICHAEL JANSSON
(under the assumption that d is known to equal zero) are valid also in the presence of a constant mean and/or serial correlation in the error. Furthermore, the presence of a constant mean and/or serial correlation in the error does not weaken the sense in which the bounds are sharp because T 1 f xˆ t−1 f (xˆ t ) = ST + op0f (1) T t=p+2
T Iff 2 ff xˆ = HT + op0f (1) T 2 t=p+2 t−1
√ and so on, where xˆ t := γ(L)(y ˆ ˆ μˆ := y1 , and γ(L) ˆ is a discretized, T t − μ), consistent estimator of γ(L). These qualitative conclusions, which are in perfect agreement with those obtained by ERS in the Gaussian case, show in particular that the inability to do adaptive unit root testing when f is (essentially) unrestricted is not an artifact of the assumption that the deterministic component is known. Nevertheless, it is of some interest to investigate whether the condition If η = 0 continues to play an important role also in models with a time trend. In the case where a time trend is accommodated, the relevant limiting experiment is an extended version of that studied in Section 4. The extended limiting experiment involves the three-dimensional parameter (c d h) and is characterized by log likelihood ratios of the form Λf δη (c d h). As in Section 4, a location invariance restriction can be used to remove the nuisance parameter h, the associated log likelihood ratio being given by Λf δη (c d) := max Λf δη (c d h) − max Λf δη (0 0 h) h
h
1 = c Sfη − c 2 Hffη + d[Sδη (c) − c Hf δη (c)] 2 1 − d 2 Hδδη (c) 2 where (Sfη Hffη ) is as in Section 4 and
Sδη (c) := Sδ (c) −
Hδη (c) Sη Hηη
Hδδη (c) := Hδδ (c) −
Hf δη (c) := Hf δ (c) −
Hf η Hδη (c) Hηη
Hδη (c)2 Hηη
Similarly, the remaining nuisance parameter d can be removed using the principle of invariance.24 Indeed, in perfect analogy with ERS’s analysis of the 24 The invariance condition in question is an asymptotic counterpart of the restriction that inference should be invariant under transformations of the form
yt → yt + bδ t
bδ ∈ R
SEMIPARAMETRIC POWER ENVELOPES
1133
Gaussian case, Lehmann and Romano (2005, Problem 6.9) and the fact that Λf δη (c d) is quadratic in d for any fixed c can be used to show that the power envelope associated with α-invariant tests in the extended limiting experiment is given by ΨFδ (c α) := E 1(Λfδη (c) > Kαδ (c; IF )) exp(Λf (c)) where Kαδ (c; IF ) is the 1 − α quantile of Λfδη (c) := max Λf δη (c d) − max Λf δη (0 d) d
d
1 1 [Sδη (c) − c Hf δη (c)]2 1 Sδη (0)2 = c Sfη − c 2 Hffη + − 2 2 Hδδη (c) 2 Hδδη (0) Analogous reasoning shows that if h is assumed to be known to equal zero, then the power envelope associated with α-invariant tests in the relevant limiting experiment is given by Ψ¯ Fδ (c α) := E 1(Λ¯ fδ (c) > K¯ αδ (c; Iff )) exp(Λf (c)) where K¯ αδ (c; Iff ) is the 1 − α quantile of Λ¯ fδ (c) := max Λf δη (c d 0) − max Λf δη (0 d 0) d
d
1 1 [Sδ (c) − c Hf δ (c)]2 1 Sδ (0)2 = c Sf − c 2 Hff + − 2 2 Hδδ (c) 2 Hδδ (0) By inspection, it is seen that Λ¯ fδ (·) and Λfδη (·) coincide if and only if If η = 0. In other words, Stein’s (1956) necessary condition for adaptation in the location model remains a necessary condition for adaptive unit root testing even when a time trend is included in the deterministic component. REMARK: Proceeding as in Section 6, it should be possible to give an explicit characterization of the semiparametric power envelope Ψfδ (c α) := infF : IF ∈Jf ΨFδ (c α) obtained by minimizing ΨFδ (c α) with respect to the submodel F and to demonstrate by example that the envelope is sharp. To conserve space, the details of these extensions are left for future work. 8. CONCLUSION This paper has derived asymptotic power envelopes for tests of the unit root hypothesis in a zero-mean AR(1) model. The power envelopes have been derived using the limits of experiments approach and are semiparametric in the (This transformation induces a transformation on the parameter δ of the form δ → δ + bδ , but leaves all other parameters unchanged.)
1134
MICHAEL JANSSON
sense that the underlying error distribution is treated as an unknown infinitedimensional nuisance parameter. Adaptation has been shown to be possible when the error distribution is known to be symmetric and to be impossible when the error distribution is (essentially) unrestricted. In the latter case, two conceptually distinct approaches to nuisance parameter elimination were employed in the derivation of the semiparametric power envelopes. One of these power bounds, derived under an invariance restriction, was shown by example to be sharp, while the other, derived under a similarity restriction, was conjectured not to be globally attainable. Both sets of restrictions imposed when deriving the semiparametric power envelopes have natural counterparts in models with LAN likelihood ratios and give rise to identical power envelopes in such models. The fact that the two sets of restrictions give rise to distinct power envelopes in the present context is perhaps surprising and clearly shows that not all methodological conclusions from the existing literature on semiparametrics will generalize to models not admitting LAN likelihood ratios. On the other hand, it is interesting that one approach to nuisance parameter elimination (albeit one that has not received much attention in the existing literature) “works” both in conventional models and in the model studied herein. It would be of interest to investigate whether this approach to nuisance parameter elimination also “works” in other nonstandard hypothesis testing problems involving infinite-dimensional nuisance parameters. APPENDIX: PROOFS PROOF OF THEOREM 3: Let c < 0 be given. (a) Because ΛF (0 h) = hSη − 12 h2 Iηη , it follows from the completeness properties of linear exponential families (e.g., Lehmann and Romano (2005, Theorem 4.3.1)) that ψ satisfies (9) if and only if E[ψ(SF HF )|Sη ] = α. Using this characterization of (9) and the properties of curved exponential families (e.g., Lehmann and Romano (2005, Lemma 2.7.2)), the Neyman– Pearson lemma can be used to show that if ψ satisfies (9), then E[ψ(SF HF ) × exp(Λf (c))] ≤ ΨFS (c α). (b) By the Neyman–Pearson lemma, the right-hand side in (11) is no greater than ΨFI (c α) if (8) holds. To complete the proof, it therefore suffices to show that (11) holds whenever (10) does. Now, Sη ∼ N (0 Iηη ) is independent of (Sfη HF ). Furthermore, 2 Hf η 1 Hf η c + h Sη − + h Iηη ΛF (c h) = Λfη (c) + c Hηη 2 Hηη
(36)
∀(c h)
These facts imply that E[exp(ΛF (c h))|Sfη HF ] = exp(Λfη (c)) for any (c h), from which the desired conclusion follows. Q.E.D.
SEMIPARAMETRIC POWER ENVELOPES
1135
REMARK: Using (36) and the fact that Sη ∼ N (0 Iηη ) is independent of (Sfη HF ), it can be shown that, for any c h bη and for any bounded, measurable function κ, E[κ(Sfη Sη + bη HF )ΛF (c h)] bη = E κ(Sfη Sη HF )ΛF c h + Iηη fη
fη η F S∞ H∞ ) denote the weak limit of (ST STη HTF ) (under a sequence of Let (S∞ parameterizations of the form (ρ η) = (ρT (c) ηT (h)) for some fixed (c h)).25 In view of the preceding display, any transformation of the form
(37)
fη η F fη η F S∞ H∞ ) → (S∞ S∞ + bη H∞ ) (S∞
bη ∈ R
induces a transformation of the parameter (c h) given by (c h) → (c h + bη /Iηη ). Because h is a nuisance parameter, the testing problem under consideration is invariant with respect to (location) transformations of the form fη F (37). The associated maximal invariant is (S∞ H∞ ). Condition (10) on the fη η F test function ψ asserts that the test depends on (S∞ S∞ H∞ ) only through this maximal invariant. The following simple lemma, a proof of which can be found in Jansson (2007), is used in the proofs of Theorems 4 and 5. LEMMA 7: There exists a (unique) continuous function KαS such that ψSF satisfies E[ψSF (SF HF |c α)|Sη ] = α. PROOF OF THEOREM 4: If (a) holds, then (b) holds because it follows from S , (a) and the continuity theorem (for convergence in probability) that if f ∈ FAC then ¯ α) = φfT (YT |c ¯ α) + op0f (1) φˆ T (YT |c (The continuity theorem is applicable because it follows from Lemma 7 that Kα is continuous.) To prove (a) it suffices to show that f (SˆT Iˆ T ) = (ST Iff ) + op0f (1)
S ∀f ∈ FAC
S be given. Throughout the proof, suppose H0 holds and let f ∈ FAC ˆ The result IT = Iff + op (1) is (essentially) a special case of Drost, Klaassen, and Werker (1997, Lemma 3.1) and can be proved in exactly the same way.
25
fη
η F When (c h) = (0 0), (S∞ S∞ H∞ ) ∼ (Sfη Sη HF ).
1136
MICHAEL JANSSON
f The result SˆT = ST + op (1) will be established by showing that f f f (SˆTτ SˆT − SˆTτ ) = (STτ ST − STτ ) + op (1)
τT
τT f τ where SˆTτ := T −1 t=2 yt−1 ˆST t (yt ) and STτ := T −1 t=2 yt−1 f (yt ). Let Et−1 [·] denote conditional expectation given {ε 1 εt−1 } and {ετT +1 εT }. By con√ τ struction, T Et−1 [ ˆST t (yt )] is the same for every t ≤ τT , namely √ τ √ ∞ S T Et−1 [ ˆST t (yt )] = T ˜T −τT (ε|ετT +1 εT )f (ε) dε = op (1) −∞
τ where last equality uses (22). Furthermore, Et−1 [ f (yt )] = E[ f (ε)] = 0
τthe T 3/2 and t=2 yt−1 = Op (T ), so τT 1 τ Et−1 yt−1 ( ˆST t (yt ) − f (yt )) T t=2 τT √ ∞ S 1 ˜ = y (ε|ε ε )f (ε) dε T t−1 τT +1 T T −τT T 3/2 t=2 −∞
= op (1) It now follows from Drost, Klaassen, and Werker (1997, Lemma 2.2) that f SˆTτ = STτ + op (1) because τT τT 1 1 E τ y 2 ( ˆS (yt ) − f (yt ))2 = y 2 op (1) T 2 t=2 t−1 t−1 T t T 2 t=2 t−1 = op (1) where the first equality uses the fact that, for every t ≤ τT , S τ Et−1 ( ˆT t (yt ) − f (yt ))2 ∞ [ ˜ST −τT (ε|ετT +1 εT ) − f (ε)]2 f (ε) dε = −∞
= op (1) the last equality being a consequence of (21). f f Analogous reasoning can be used to show that SˆT − SˆTτ = ST − STτ + op (1). Q.E.D. PROOF OF THEOREM 5: Let f ∈ FDQM be given.
SEMIPARAMETRIC POWER ENVELOPES
1137
Equation (23). The inequality (25) follows from the fact that if F satisfies Assumption DQM*, then φSfT (·|c α) is locally asymptotically α-similar in F and satisfies limT →∞ EρT (c)ηT (0) φST (YT |c α; f ) = ΨfS (c α) (for every c < 0); see Jansson (2007) for details. Next, because f ∈ FDQM , it can be embedded in a family F satisfying Assumption DQM* and it follows from standard spanning arguments (e.g., Newey (1990, Appendix B)) that the collection of functions η (defined as in Assumption DQM*) associated with such families F is dense in Lη . As a consequence, the set Jf is dense in the set of symmetric 2 × 2 matrices IF for which the first diagonal element equals Iff and (6) is positive semidefinite. In particular, the fact that f ∈ FDQM implies that JfLF belongs to the closure of Jf . To complete the proof of the inequality (38)
inf
F : IF ∈Jf
ΨFS (c α) ≤ ΨfS (c α) ∀c < 0
it therefore suffices to show that ΨFS (c α) is a continuous function of IF . Because KαS is continuous, the continuous mapping theorem can be used to show that if the sequence IFn is convergent, then ψSFn (SFn HFn |c α) (defined from IFn in the natural way) converges in distribution. Using this fact and the dominated convergence theorem, it can be shown that ΨFS (c α) is a continuous function of IF . Equation (24). The proof is similar to that of (23) and proceeds by showing that (39)
inf
ΨFI (c α) ≥ ΨfI (c α) ∀c < 0
inf
ΨFI (c α) ≤ ΨfI (c α) ∀c < 0
F : IF ∈Jf
and (40)
F : IF ∈Jf
Inequality (40) follows from arguments analogous to those used to prove (38). To establish (39), let c < 0 be given and let F be any submodel satisfying assumption LAQ*. Also, let (STF HTF ), (W Bf Bη ) etcetera be as in Section 4 and define 1 2 ffI fI I I LF φfT (YT |c α) := 1 cST − c HT > Kα (c; Jf ) 2 fI
where ST is defined in (26) and H
ffI T
:= H − (Iff − 1) T ff T
−3/2
T t=2
2 yt−1
1138
MICHAEL JANSSON
The statistic φIfT (YT |c α) satisfies I lim EρT (c )ηT (h) φIT (YT |c α; f ) = E ψIf (SfI Hff |c α) exp(ΛF (c h))
T →∞
for every (c h). In particular, limT →∞ EρT (c)ηT (0) φIT (YT |c α; f ) = ΨfI (c α), so the proof of (39) can be completed by showing that φIfT (·|c α) is locally asymptotically α-invariant in F . To do so, it suffices to show that I I |c α)|SF HF ] = E[ψIf (SfI Hff |c α)|Sfη HF ] E[ψIf (SfI Hff I A sufficient condition for this to hold is that (SfI Hff ) is independent of Sη conditional on (Sfη HF ). In turn, this conditional independence property follows from simple algebra and the fact that the conditional distribution of
1 (Sf SfS Sη ) given W is normal with mean ( 0 W (r) dW (r) 0 0) and variance
1
1
1 ⎛ ⎞ (Iff − 1) 0 W (r)2 dr (Iff − 1) 0 W (r) dr If η 0 W (r) dr
1 ⎜ ⎟ Iff − 1 If η ⎝ (Iff − 1) 0 W (r) dr ⎠
1 If η 0 W (r) dr If η Iηη Q.E.D.
PROOF OF THEOREM 6: Suppose H0 holds and let f ∈ FAC and c¯ < 0 be given. It suffices to show that ¯ α) = φIfT (YT |c ¯ α) + op (1) φˆ IT (YT |c ¯ α) was defined in the proof of Theorem 5. The displayed result where φIfT (·|c will follow from the convergence theorem (for convergence in probability) if it can be shown that (41)
fI fI SˆTI = STτ + op (1) = ST + op (1)
(42)
ffI ffI Hˆ TI = HTτ + op (1) = HT + op (1)
(43)
Iˆ T = Iff + op (1) fI
ffI
where ST and HT S
fI Tτ
are as in the proof of Theorem 5 and
T 1 := yt−1 f (yt ) T t=τ +1 T T T 1 1 yt−1 [ f (yt ) − yt ] − √ T 3/2 t=τ +1 T t=τ +1 T
T
SEMIPARAMETRIC POWER ENVELOPES
H
ffI Tτ
1139
2 T T Iff 2 1 := 2 y − (Iff − 1) yt−1 T t=τ +1 t−1 T 3/2 t=τ +1 T
T
The result (43) follows from Bickel (1982, Section 6.2(i)), while (42) follows from (43), limT →∞ τT /T = 0, and simple algebra. The second equality in (41) follows from limT →∞ τT /T = 0 and simple algebra. Finally, to establish the first equality in (41), let ˇT be a “bias-corrected” version of ˆT given by ∞ ˇ ˜ T (yt ) := τT (yt |y1 yτT ) − ˜τT (ε|y1 yτT )f (ε) dε −∞
Reasoning analogous to that of the proof of Theorem 4 can be used to show that T 1 yt−1 [ ˇT (yt ) − f (yt )] = op (1) T t=τ +1 T
T 1 ˇ [ T (yt ) − f (yt )] = op (1) √ T t=τT +1
The first equality in (41) can be established using these results and the fact that T 1 yt−1 ˇT (yt ) SˆTI = T t=τ +1 T T T 1 ˇ 1 yt−1 [ T (yt ) − yt ] − √ T 3/2 t=τ +1 T t=τ +1 T
T
Q.E.D.
REFERENCES ANDREWS, D. W. K. (1986): “Complete Consistency: A Testing Analogue of Estimator Consistency,” Review of Economic Studies, 53, 263–269. [1125] BEGUN, J. M., W. J. HALL, W.-M. HUANG, AND J. A. WELLNER (1983): “Information and Asymptotic Efficiency in Parametric–Nonparametric Models,” Annals of Statistics, 11, 432–452. [1104,1120] BERAN, R. (1974): “Asymptotically Efficient Adaptive Rank Estimates in Location Models,” Annals of Statistics, 2, 63–74. [1118] (1978): “An Efficient and Robust Adaptive Estimator of Location,” Annals of Statistics, 6, 292–313. [1118] BICKEL, P. J. (1982): “On Adaptive Estimation,” Annals of Statistics, 10, 647–671. [1120,1122, 1126-1128,1139] BICKEL, P. J., C. A. J. KLAASSEN, Y. RITOV, AND J. A. WELLNER (1998): Efficient and Adaptive Estimation for Semiparametric Models. New York: Springer-Verlag. [1104,1122] BILLINGSLEY, P. (1999): Convergence of Probability Measures (Second Ed.). New York: Wiley. [1106,1108]
1140
MICHAEL JANSSON
BOSWIJK, H. P. (2005): “Adaptive Testing for a Unit Root With Nonstationary Volatility,” Working Paper, University of Amsterdam. [1106] CHAN, N. H., AND C. Z. WEI (1988): “Limiting Distributions of Least Squares Estimates of Unstable Autoregressive Processes,” Annals of Statistics, 16, 367–401. [1108] CHERNOZHUKOV, V., AND H. HONG (2004): “Likelihood Estimation and Inference in a Class of Nonregular Econometric Models,” Econometrica, 72, 1445–1480. [1106] CHOI, S., W. J. HALL, AND A. SCHICK (1996): “Asymptotically Uniformly Most Powerful Tests in Parametric and Semiparametric Models,” Annals of Statistics, 24, 841–861. [1104,1120] DICKEY, D. A., AND W. A. FULLER (1979): “Distribution of the Estimators for Autoregressive Time Series With a Unit Root,” Journal of the American Statistical Association, 74, 427–431. [1103] (1981): “Likelihood Ratio Statistics for Autoregressive Time Series With a Unit Root,” Econometrica, 49, 1057–1072. [1103] DROST, F. C., C. A. J. KLAASSEN, AND B. J. M. WERKER (1997): “Adaptive Estimation in TimeSeries Models,” Annals of Statistics, 25, 786–817. [1118,1135,1136] ELIASZ, P. (2004): “Optimal Median Unbiased Estimation of Coefficients on Highly Persistent Regressors,” Working Paper, Princeton University. [1104] ELLIOTT, G., T. J. ROTHENBERG, AND J. H. STOCK (1996): “Efficient Tests for an Autoregressive Unit Root,” Econometrica, 64, 813–836. [1103] GUSHCHIN, A. A. (1995): “Asymptotic Optimality of Parameter Estimators Under the LAQ Condition,” Theory of Probability and Its Applications, 40, 261–272. [1104] HALDRUP, N., AND M. JANSSON (2006): “Improving Size and Power in Unit Root Testing,” in Palgrave Handbook of Econometrics. Vol. 1, Econometric Theory, ed. by T. C. Mills and K. Patterson. New York: Palgrave Macmillan, 252–277. [1103] HASAN, M. N., AND R. W. KOENKER (1997): “Robust Rank Tests of the Unit Root Hypothesis,” Econometrica, 65, 133–161. [1115] HERCE, M. A. (1996): “Asymptotic Theory of LAD, Estimation in a Unit Root Process With Finite Variance Errors,” Econometric Theory, 12, 129–153. [1115] HIRANO, K., AND J. R. PORTER (2003): “Asymptotic Efficiency in Parametric Structural Models With Parameter-Dependent Support,” Econometrica, 71, 1307–1338. [1106] JANSSON, M. (2007): “Supplement to ‘Semiparametric Power Envelopes for Tests of the Unit Root Hypothesis’,” Econometrica Supplemental Material, 76, http://www.econometricsociety. org/ecta/Supmat/6113-proofs.pdf. [1111,1115,1135,1137] JANSSON, M., AND M. J. MOREIRA (2006): “Optimal Inference in Regression Models With Nearly Integrated Regressors,” Econometrica, 74, 681–714. [1104] JEGANATHAN, P. (1995): “Some Aspects of Asymptotic Theory With Applications to Time Series Models,” Econometric Theory, 11, 818–887. [1104,1108,1111,1113] (1997): “On Asymptotic Inference in Linear Cointegrated Time Series Systems,” Econometric Theory, 13, 692–745. [1129] KLAASSEN, C. A. J. (1987): “Consistent Estimation of the Influence Function of Locally Asymptotically Linear Estimators,” Annals of Statistics, 15, 1548–1562. [1122,1127] KOENKER, R., AND Z. XIAO (2004): “Unit Root Quantile Autoregression Inference,” Journal of the American Statistical Association, 99, 775–787. [1115] KOUL, H. L., AND A. SCHICK (1997): “Efficient Estimation in Nonlinear Autoregressive TimeSeries Models,” Bernoulli, 3, 247–277. [1122] KREISS, J.-P. (1987a): “On Adaptive Estimation in Autoregressive Models When There Are Nuisance Functions,” Statistics and Decisions, 5, 59–76. 
[1118] (1987b): “On Adaptive Estimation in Stationary ARMA Processes,” Annals of Statistics, 15, 112–133. [1118] LE CAM, L. (1970): “On the Assumptions Used to Prove Asymptotic Normality of Maximum Likelihood Estimates,” Annals of Mathematical Statistics, 41, 802–828. [1111] (1972): “Limits of Experiments,” in Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1. Berkeley, CA: University of California Press, 245–261. [1106,1109]
SEMIPARAMETRIC POWER ENVELOPES
1141
(1986): Asymptotic Methods in Statistical Decision Theory. New York: Springer Verlag. [1111] LE CAM, L., AND G. L. YANG (2000): Asymptotics in Statistics Some Basic Concepts (Second Ed.). New York: Springer Verlag. [1111] LEHMANN, E. L., AND J. P. ROMANO (2005): Testing Statistical Hypotheses (Third Ed.). New York: Springer Verlag. [1104,1117,1133,1134] LEVIT, B. Y. (1975): “On the Efficiency of a Class of Nonparametric Estimates,” Theory of Probability and Its Applications, 3, 723–740. [1118] LING, S., AND M. MCALEER (2003): “On Adaptive Estimation in Nonstationary ARMA Models With GARCH Errors,” Annals of Statistics, 31, 642–674. [1104,1120] MÜLLER, U. K., AND G. ELLIOTT (2003): “Tests for Unit Root and the Initial Condition,” Econometrica, 71, 1269–1286. [1106] MURPHY, S. A., AND A. W. VAN DER VAART (1997): “Semiparametric Likelihood Ratio Inference,” Annals of Statistics, 25, 1471–1509. [1117] (2000): “On Profile Likelihood,” Journal of the American Statistical Association, 95, 449–485. [1117] NEWEY, W. K. (1990): “Semiparametric Efficiency Bounds,” Journal of Applied Econometrics, 5, 99–135. [1104,1118,1137] NG, S., AND P. PERRON (1995): “Unit Root Tests Is ARMA Models With Data-Dependent Methods for the Selection of the Truncation Lag,” Journal of the American Statistical Association, 90, 268–281. [1103] (2001): “Lag Length Selection and the Construction of Unit Root Tests With Good Size and Power,” Econometrica, 69, 1519–1554. [1103] PAPARODITIS, E., AND D. N. POLITIS (2003): “Residual-Based Block Bootstrap for Unit Root Testing,” Econometrica, 71, 813–855. [1103] PARK, J. Y. (2003): “Bootstrap Unit Root Tests,” Econometrica, 71, 1845–1895. [1103] PERRON, P., AND S. NG (1996): “Useful Modifications to Unit Root Tests With Dependent Errors and Their Local Asymptotic Properties,” Review of Economic Studies, 63, 435–465. [1103] PHILLIPS, P. C. B. (1987): “Time Series Regression With a Unit Root,” Econometrica, 55, 277–301. [1103] (1991): “Optimal Inference in Cointegrated Systems,” Econometrica, 59, 283–306. [1104] PHILLIPS, P. C. B., AND P. PERRON (1988): “Testing for a Unit Root in Time Series Regression,” Biometrika, 75, 335–346. [1103] PHILLIPS, P. C. B., AND Z. XIAO (1998): “A Primer on Unit Root Testing,” Journal of Economic Surveys, 12, 423–469. [1103] POLLARD, D. (1997): “Another Look at Differentiability in Quadratic Mean,” in Festschrift for Lucien Le Cam: Research Papers in Probability and Statistics, ed. by D. Pollar, E. Torgersen, and G. L. Yang. New York: Springer Verlag, 305–314. [1111] ROTHENBERG, T. J. (2000): “Testing for Unit Roots in AR and MA Models,” in Applications of Differential Geometry to Econometrics, ed. by P. Marriott and M. Salmon. New York: Cambridge University Press, 281–293. [1103] ROTHENBERG, T. J., AND J. H. STOCK (1997): “Inference in a Nearly Integrated Autoregressive Model With Nonnormal Innovations,” Journal of Econometrics, 80, 269–286. [1103,1109,1118] SCHICK, A. (1986): “On Asymptotically Efficient Estimation in Semiparametric Models,” Annals of Statistics, 14, 1139–1151. [1122] (1987): “A Note on the Construction of Asymptotically Linear Estimators,” Journal of Statistical Planning and Inference, 16, 89–105. [1128] STEIN, C. (1956): “Efficient Nonparametric Estimation and Testing,” in Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1. Berkeley, CA: University of California Press, 187–196. [1104,1118,1120,1122,1128,1133] STOCK, J. H. 
(1994): “Unit Roots, Structural Breaks and Trends,” in Handbook of Econometrics, Vol. 4, ed. by R. F. Engle and D. L. McFadden. New York: North-Holland, 2739–2841. [1103]
1142
MICHAEL JANSSON
STOCK, J. H., AND M. W. WATSON (1993): “A Simple Estimator of Cointegrating Vectors in Higher Order Integrated Systems,” Econometrica, 61, 783–820. [1104] STONE, C. J. (1975): “Adaptive Maximum Likelihood Estimators of a Location Parameter,” Annals of Statistics, 3, 267–284. [1118] THOMPSON, S. B. (2004): “Robust Tests of the Unit Root Hypothesis Should Not Be Modified,” Econometric Theory, 20, 360–381. [1115] VAN DER VAART, A. W. (1991): “An Asymptotic Representation Theorem,” International Statistical Review, 59, 97–121. [1117] (1998): Asymptotic Statistics. New York: Cambridge University Press. [1104] (2002): “The Statistical Work of Lucien Le Cam,” Annals of Statistics, 30, 631–682. [1108,1111]
Dept. of Economics, University of California at Berkeley, 508-1 Evans Hall, 3880, Berkeley, CA 94720-3880, U.S.A. and Center for Research in Econometric Analysis of Time Series (CREATES), School of Economics and Management, University of Aarhus, DK-8000 Aarhus C, Denmark;
[email protected]. edu. Manuscript received October, 2005; final revision received April, 2007.
Econometrica, Vol. 76, No. 5 (September, 2008), 1143–1166
NOTES AND COMMENTS CALIBRATION RESULTS FOR NON-EXPECTED UTILITY THEORIES BY ZVI SAFRA AND UZI SEGAL1 Rabin (2000) proved that a low level of risk aversion with respect to small gambles leads to a high, and absurd, level of risk aversion with respect to large gambles. Rabin’s arguments strongly depend on expected utility theory, but we show that similar arguments apply to general non-expected utility theories. KEYWORDS: Risk aversion, calibration, non-expected utility theories.
1. INTRODUCTION ABOUT TEN YEARS AGO, Rabin (2000) offered a very convincing argument against using expected utility theory by showing how reasonable levels of risk aversion with respect to small lotteries imply absurdly high levels of risk aversion with respect to large lotteries. Our aim is to show that this analysis does not challenge only expected utility theory: similar results can be obtained for all (smooth) preferences. We assume the following two properties of preferences throughout this paper: D1. Actions are evaluated by considering possible final-wealth levels. D2. Risk aversion: If lottery Y is a mean preserving spread of lottery X, then X is preferred to Y . Given these assumptions, we explore the quantitative relationship between the following two additional behaviors. B3. Rejection of a small actuarially favorable lottery in a certain range. For example, rejection of the lottery (−100, 12 ; 105, 12 ) at all wealth levels below 300,000. B4. Acceptance of large, very favorable lotteries. For example, acceptance of the lottery (−5000 12 ; 10000000 12 ). Specifically we explore how the size and form of actuarially favorable lotteries that are rejected and the range of wealth on which they are rejected relate to the size of large lotteries that can be accepted. Rabin (2000) showed the tension between these properties within expected utility theory. If for given small and relatively close < g, the decision maker rejects (− 12 ; g 12 ) at all wealth levels x ∈ [a b], then he also rejects (−L 12 ; G 12 ) at x∗ for some L, G, and x∗ ∈ [a b], where G is huge while L 1 We thank Larry Epstein, Simon Grant, Eran Hanany, Edi Karni, Mark Machina, Matthew Rabin, Ariel Rubinstein, Shunming Zhang, three referees, and especially Eddie Dekel for their suggestions. Zvi Safra thanks the Israel Institute of Business Research and Uzi Segal thanks the NSF for financial support (Award #0617359).
© 2008 The Econometric Society
DOI: 10.3982/ECTA6175
1144
Z. SAFRA AND U. SEGAL
is not. Rabin showed how stunning these numbers can be. For example, if a risk averse decision maker is rejecting the lottery (−100 12 ; 105 12 ) at all positive wealth levels below 300,000, then at wealth level 290,000 he will also reject the lottery (−10000 12 ; 5503790 12 ).2 Rabin’s arguments rely on the properties of expected utility theory and are understood to be a major attack on this theory.3 The hypothesis that in risky environments decision makers evaluate actions by considering possible final-wealth levels is widely used in expected utility theory as well as in its applications. Moreover, many of the new alternatives to expected utility—alternatives that were developed during the last twenty five years to overcome the limited descriptive power of expected utility—are also based on the hypothesis that only final-wealth levels matter. The final-wealth hypothesis is analytically tractable as it assumes that decision makers behave according to a unique, universal preference relation over final-wealth distributions. Suggested deviations from this hypothesis require much more elaborate and complex analysis. For example, postulating that decision makers ignore final-wealth levels and, instead, care about possible gains and losses may require using many preference relations and will necessitate a mechanism that defines the appropriate reference points. However, the poor descriptive power of some of the final-wealth models and, in particular, of final-wealth expected utility, has increased the popularity of gain–loss models such as prospect theory (Kahneman and Tversky (1979) and Tversky and Kahneman (1992)) and its offsprings. Using Rabin’s results, Cox and Sadiraj (2006) and Rubinstein (2006) concluded that the final-wealth approach should be dropped. A less radical conclusion from Rabin’s argument is that final-wealth expected utility should be replaced with more general final-wealth theories (see e.g., Rabin (2000, p. 1288) and Rabin and Thaler (2001)). For example, rankdependent utility with linear utility (Yaari (1987)) is capable of exhibiting both a relatively strong aversion to small gambles and a sensible degree of risk aversion with respect to large gambles (see Section 4 below). The present paper confronts this claim. We show that with small modifications of B3 and B4 one can still show that a rejection of small lotteries with positive expected value 2 For an earlier claim that a low level of risk aversion in the small implies huge risk aversion at the large, although without detailed numerical estimates, see Hansson (1988) and Epstein (1992). 3 Palacios-Huerta and Serrano (2006) objected to this conclusion and claimed that expected utility decision makers do not satisfy B3 on such large intervals. But Theorems 1 and 2 show that the intervals needed are much smaller than Rabin’s (see Tables I and II). In another approach, where decision makers do not maximize expected utility but are rather concerned with not going below a certain wealth level, Foster and Hart (2007) show that in Rabin’s example, B4 may not be implied by B3 if decision makers consider infinitely many repetitions of the gamble.
NON-EXPECTED UTILITY THEORIES
1145
leads to the rejection of very attractive large lotteries even if the expected utility hypothesis is dropped.4 Yaari’s model, for example, is inconsistent with our extended requirement for a moderate level of risk aversion in the small (see Proposition 2 in Section 4). The technical tool we use is local utilities (Machina (1982)). Theorem 1 shows how to use local utilities to obtain calibration results. But the conditions of this theorem are too strong in the sense that they are not satisfied by some nonexpected utility preferences and by some empirical tests. The main results of the paper show how to obtain inconsistency of the four desired properties with weaker assumptions than those used in Theorem 1. In Section 3 we show the inconsistency of the four properties under some assumptions concerning the way local utilities change from one distribution to another. In Section 4 we modify properties B3 and B4 to get general results. The paper’s analysis applies to all (Gâteaux) differentiable models, including Chew’s (1983) weighted utility, (the differentiable versions of) betweenness (Dekel (1986), Chew (1983)), quadratic utility (Machina (1982), Chew, Epstein, and Segal (1991)), and rank-dependent utility (Quiggin (1982)). In Section 4 we also show that non expected utility models satisfying constant absolute and relative risk aversion cannot satisfy our modified version of property B3. The analysis of the paper leaves us with the choice between several controversial conclusions: People do not reject small risks, people reject excellent large risk, or, explanations that seem to be more likely, people are not globally risk averse or people do not utilize just one preference relation. 2. CALIBRATION AND LOCAL UTILITIES We assume throughout that preferences over distributions are representable by a functional V which is risk averse with respect to mean-preserving spreads, monotonically increasing with respect to first-order stochastic dominance, continuous with respect to the topology of weak convergence, and Gâteaux differentiable (see below).5 Denote the set of all such functionals by V . According to the context, utility functionals are defined over lotteries (of the form X = (x1 p1 ; ; xn pn )) or over cumulative distribution functions (denoted F H). Degenerate cumulative distribution functions are denoted δx . For x, , and g, Hxg denotes the distribution of the lottery (x − 12 ; x + g 12 ). When and g are fixed, we write Hx instead. 4 Our claims hold provided some minimal degree of smoothness of preferences is assumed. Formally, we assume that preferences are Gâteaux differentiable. 5 It is of course possible to create nondifferentiable examples (see, e.g., Dekel (1986)), but all the standard models in the literature are Gâteaux (if not Fréchet) differentiable.
1146
Z. SAFRA AND U. SEGAL
The functional V is Gâteaux differentiable if for every F and H, the derivative ∂ V ((1 − ε)F + εH) ∂ε ε=0 exists and is linear in H (see Zeidler (1985)). If V is Gâteaux differentiable, then there are local utilities6 u(·; F) such that for all F and H, (1)
V ((1 − ε)F + εH) − V (F) =ε u(x; F) dH − u(x; F) dF + o(ε)
Throughout the paper, the local utilities enable us to carry over accumulated levels of risk aversion from one point to another. Proposition 1 links risk attitudes of the functional V with properties of the local utilities u(·; F). The following lemma is needed since the calibration results for expected utility rely on the concavity of the von Neumann–Morganstern (vNM) function. It extends Machina’s (1982) result for Fréchet differentiable functionals to the class of all Gâteaux differentiable functionals. LEMMA 1: All local utilities of a Gâteaux differentiable functional V ∈ V are concave. Suppose that V is expected utility with the vNM utility u. For every F , the function u is also the local utility function of V at F . If the decision maker rejects the lottery (− 12 ; g 12 ) at a given x, that is, if (2)
u(x) > 12 u(x − ) + 12 u(x + g)
then by the nature of expected utility theory, for every distribution F and ε > 0, (3)
V ((1 − ε)F + εδx ) ≥ V ((1 − ε)F + εHx )
Proposition 1 shows that the equivalence of eqs. (2) and (3) holds for the local utilities of general functionals V as well. PROPOSITION 1: Let V ∈ V and x ∈ . The following conditions are equivalent: (i) For every F , u(x; F) ≥ 12 u(x − ; F) + 12 u(x + g; F). (ii) For every F and ε > 0, V ((1 − ε)F + εδx ) ≥ V ((1 − ε)F + εHx ). 6 The concept of local utilities was introduced by Machina (1982). Machina assumed the stronger notion of Fréchet differentiability.
1147
NON-EXPECTED UTILITY THEORIES
This proposition leads to the following behavioral conclusion: THEOREM 1: Let V ∈ V , g > > 0, and G > b − a, and let (b−a)/(+g) (b−a)/(+g) 1 − g (1 − p) (4) + (G + a − b) L > ( + g) g p 1− g If for every F with support in [a − L a + G], x ∈ [a b], and ε > 0, V ((1 − ε)F + εδx ) ≥ V ((1 − ε)F + εHx ) then V (a 1) ≥ V (a − L p; a + G 1 − p) Table I offers the minimal values of L for different levels of G, b − a, and g when = 100 and p = 12 . For example, if the decision maker rejects (−100 12 ; 110 12 ) on a range of 40,000, then he also rejects the lottery (−2310 12 ; 10000000 12 ). If p = 12 , the values should be multiplied by 1−p . p 1 For 1 − p = 100000 , we obtain that this decision maker will refuse to pay even three cents for a 1:100,000 chance of winning 10 million dollars! The “if” condition of this theorem is stronger than the one used by Rabin (which is formally obtained when ε = 1), although within expected utility theory they are equivalent, as by the independence axiom, V (δx ) ≥ V (Hx ) if and only if for all F and ε > 0, V ((1 − ε)F + εδx ) ≥ V ((1 − ε)F + εHx ). This condition is quite strong and behaviorally questionable. The common ratio effect (Allais (1953)) shows that preferences may be reversed when the choice is conditioned on a small probability. For example, let F = δ0 , x = 100, = 100, and g = 110. The fact that V (100 1) > V (0 12 ; 210 12 ) does not imply, for preferences exhibiting this effect, that V (100 01; 0 09) > V (210 005; 0 095). Although the “then” part of Theorem 1 is as strong as Rabin’s original claim, we consider a weaker hypothesis at the cost of obtaining a weaker conclusion. We offer such results in the next two sections. TABLE I VALUES OF La G
1,000,000 5,000,000 10,000,000
b−a
g = 101
g = 105
g = 110
g = 125
20,000 30,000 40,000
376,873 1,141,280 1,392,440
12,662 8,241 5,035
2,421 2,316 2,310
1,125 1,125 1,125
a If the decision maker rejects (−100 1 ; g 1 ) at all wealth levels between a and b, then at a he also rejects 2 2 (−L 12 ; G 12 ).
1148
Z. SAFRA AND U. SEGAL
3. HYPOTHESIS 2 Machina (1982) introduced the following notion. DEFINITION 1: Suppose all local utilities are twice differentiable. The functional V ∈ V satisfies Hypothesis 2 if for all F and H such that F dominates H by first-order stochastic dominance and for all x, −
u (x; H) u (x; F) ≥ − u (x; F) u (x; H)
Machina (1982, 1987) showed that Hypothesis 2 conforms with many violations of expected utility like the Allais paradox, the common ratio effect (Allais (1953)), and the mutual purchase of insurance policies and lottery tickets. Hypothesis 2 implies that for given x > y > z, indifference curves in the probability triangle {(x p; y 1 −p − q; z q) : p + q ≤ 1} become steeper as one moves from δz to δx .7 Note that expected utility preferences satisfy Hypothesis 2 with equality, as all local utility functions are identical and equal to the vNM utility u. Assuming Hypothesis 2, we get the following result. THEOREM 2: Let V ∈ V and assume that it satisfies Hypothesis 2. Let 0 < < g < L and let b − a = L + g. Then there exists εˆ > 0 such that, for p ≤ εˆ and for all G satisfying g (b−a)/(+g) −1 p (5) ( + g) G< g 1−p −1 if for all x ∈ [a b], V (x 1) > V (x − 12 ; x + g 12 ), then V (b 1) ≥ V (b − g − L p; b − g + G 1 − p) Observe that if for all x ∈ [a b], V (x 1) > V (x − 12 ; x + g 12 ), then, by continuity, there exists εˆ > 0 such that for all p ∈ (0 ε] ˆ and for all x ∈ [a b], 1−p 1−p V (a p; x 1 − p) ≥ V a p; x − ; x + g 2 2 This is the value of εˆ used in the theorem. To understand the implication of the theorem, first consider expected utility preferences. By the independence ax7 The experimental evidence concerning this hypothesis, even on the probability triangle, is inconclusive. Battalio, Kagel, and Jiranyakul (1990) and Conlisk (1989) suggest that indifference curves become less steep as one moves closer to either δx or δz . But Conlisk’s lotteries do not satisfy the requirements of Hypothesis 2. Battalio, Kagel, and Jiranyakul (1990) did find some violations of Hypothesis 2, but as most of their subjects were consistent with expected utility theory, only a small minority of them violated this hypothesis. For a further discussion of violations of Hypothesis 2, see Starmer (2000, Sec. 5.1.1).
NON-EXPECTED UTILITY THEORIES
1149
TABLE II VALUES OF P a L
20,000 30,000 50,000
G
100000 1000000 100000000
g = 101
g = 105
g = 110
g = 125
0.7462 0.9357 0.9978
0.1740 0.1621 0.1421
0.0054 58 × 10−4 66 × 10−6
27 × 10−7 13 × 10−10 32 × 10−17
a If the expected utility decision maker rejects (−100 1 ; g 1 ) at all wealth levels between b − L and b, then at b 2 2 he also rejects (−L p; G 1 − p).
iom εˆ = 1 and, as is explained in the beginning of the theorem’s proof, the lottery rejected is (−L p; G 1 − p) (a rightward shift by g of the lottery (−g − L p; −g + G 1 − p) rejected by general preferences). In Table II, = 100 and the wealth level is b. The table presents, for different combinations of L = b − a, G, and g, values of p such that a rejection of (−100 12 ; g 12 ) at all x ∈ [b − L b] by an expected utility decision maker leads to a rejection at b of the lottery (−L p; G 1 − p). For example, if the decision maker rejects (−100 12 ; 110 12 ) on a range of 30,000, then he also rejects the 1 lottery (−30000 1700 ; 1000000 1699 ). 1700 For general preferences, Theorem 2 implies, for example, the following behavior (where the numbers are taken from Table II). Let g = 110 and L = 30000. If for all x ∈ [a b], V (x 1) > V (x − 100 12 ; x + 110 12 ), and if εˆ = 0006, then V (b 1) > V (b − 30110 0006; b + 106 − 110 0994) We provide here an outline of the proof of Theorem 2. In the first part of the proof we use the fact that the rejection of the small lottery (− 12 ; g 12 ) at x implies, by differentiability, that at some distribution on the line segment connecting δx and Hx , the local utility function at this distribution prefers (x 1) to the lottery (x − 12 ; x + g 12 ). Observe that if x < b − g, then δb dominates by first-order stochastic dominance all distributions on the line segment connecting δx and Hx . Hence, if (− 12 ; g 12 ) is rejected at all x ∈ [a b − g], then, by Hypothesis 2, the local utility function u(·; δb ) prefers (x 1) to the lottery (x − 12 ; x + g 12 ) for all such x. Hence u(·; δb ) rejects some of the attractive large lotteries of Table II. Next we show that a similar property holds along the line segment connecting (b 1) and (b − g − L ε; b − g + G 1 − ε). By Gâteaux differentiability, the decision maker rejects the large lottery as well. By definition, Hypothesis 2 needs differentiability of the local utility functions. This is not a trivial assumption and it is closely associated with orders of risk aversion: The functional V represents first [second] order risk aversion if the risk premium the decision maker is willing to pay to avoid playing t ⊗ X := (tx1 p1 ; ; txn pn ) for E[X] = 0 converges to zero at the same rate as t [t 2 ]. Indifference curves of preferences satisfying first order risk aversion
1150
Z. SAFRA AND U. SEGAL
are not differentiable at δx , and the local utilities u(·; δx ) are not differentiable at x (see Segal and Spivak (1990, 1997), Epstein and Zin (1990), and Safra and Segal (2002)). The next section deals with general functionals, including those that do not satisfy Hypothesis 2. 4. STOCHASTIC B3 The previous section dealt with preferences that satisfy the restrictive Hypothesis 2. Even when local utilities are differentiable, not all functionals satisfy this assumption (for example, some versions of Chew’s (1983) weighted utility theory and Gul’s (1991) disappointment aversion theory). In addition, there are many models where local utilities are not differentiable, among them rank-dependent utility (Quiggin (1982)), the most popular
alternative to expected utility theory. This functional is given by V (F) = u(x) df (F). For finite lotteries with x1 ≤ · · · ≤ xn , the value of this functional is given by V (x1 p1 ; ; xn pn ) = u(x1 )f (p1 ) +
n i=2
i−1 i u(xi ) f pj − f pj j=1
j=1
Risk aversion implies
x concave f and the local utilities of this functional are given by u(x; F) = u (z)f (F(z)) dz (see Chew, Karni, and Safra (1987)). Hence for δa we have z ≤ a, u(z)f (0) u(z; δa ) = u(a)f (0) + (u(z) − u(a))f (1) z ≥ a. This local utility is not differentiable at a unless f (0) = f (1). Concavity now implies f ≡ 1 and V is reduced to expected utility. Hypothesis 2 is violated because at a, u(a; δa ) represents an infinite level of risk aversion, while for a > a, u(a; δa ) represents a finite level of risk aversion. A special case of the rank-dependent utility family is Yaari’s (1987) dual
theory, given by V (F) = x df (F). This functional seems to solve the problem raised by Rabin’s analysis. Assume that f is concave (hence risk aversion) and that f ( 12 ) = 11 . Clearly, the decision maker rejects (− 12 ; g 12 ) at 21 all wealth levels for all g < 11, but accepts (−L 12 ; G 12 ) at all wealth levels for all G > 11L. Properties D1 and D2 are satisfied, and although small lotteries are rejected (B3), attractive large lotteries are accepted even for modest gains (B4). Moreover, the dual theory is a member of the larger class of constant risk aversion preferences (that is, constant absolute and constant relative risk aversion) which display similar behavior. DEFINITION 2: The functional V satisfies constant risk aversion (CRA) if for all α > 0, β, F , and H, V (F) ≥ V (H)
⇐⇒
V (α ⊗ F ⊕ β) ≥ V (α ⊗ H ⊕ β)
NON-EXPECTED UTILITY THEORIES
1151
where F ⊕ β is obtained from F by adding β to all its outcomes and, as before, α ⊗ F is obtained from F by multiplying all outcomes by α. If for some wealth level the CRA decision maker is indifferent between accepting and rejecting the lottery (− 12 ; g 12 ), then for all ε > 0, (i) he rejects the lottery (− 12 ; g − ε 12 ) at all wealth levels and (ii) he accepts the lottery (−K 12 ; Kg + ε 12 ), K > 0, at all wealth levels. The four properties D1–B4 are thus satisfied. Like risk averse rank-dependent preferences, CRA preferences have differentiable local utilities only when they are reduced to expected utility (see Safra and Segal (1998); in the case of CRA, expected utility means expected value). To analyze general preferences—preferences with nondifferentiable local utilities and preferences that violate Hypothesis 2—we consider a stochastic version of B3 where the decision maker rejects the lottery (− 12 ; g 12 ) at both deterministic and stochastic wealth levels. DEFINITION 3—Stochastic B3: The functional V satisfies ( g) stochastic B3 on [a b] if for all F with support in [a b], V (F) > V ( 12 [F ] + 12 [F ⊕ g]).8 The distribution F in the above definition serves as background risk—risk to which the binary lottery (− 12 ; g 12 ) is added. Since initial wealth is usually stochastic, an observed rejection of the lottery (− 12 ; g 12 ) indicates behavior according to the stochastic version of B3. For functionals satisfying stochastic B3 we have the following result. THEOREM 3: Let V ∈ V satisfy ( g) stochastic B3 on [a b] and let n = Then there is F ∗ with support in [a b] and ε∗ > 0 such that for all ε < ε∗ ,
b−a +g
.
V ((1 − ε)F ∗ + εδa ) ≥ V ((1 − ε)F ∗ + εH) for all H = (a − L 12 ; a + G 12 ), where (6)
L≥
( + g) +1 g−
and G =
L−1 [(n − 1)(g − ) + g]
Table III offers some examples for values of n and G that satisfy the conditions of Theorem 3 for = 100 and L = 25000. For example, if the decision maker rejects the stochastic risk (−100 12 ; 110 12 ) at all lotteries with final outcomes between 100,000 and 325,000, then there is a distribution F ∗ on this support such that for a sufficiently small ε he prefers the distribution (1 − ε)F ∗ + εδ100000 to (1 − ε)F ∗ + εH, where H is the distribution of the lottery (100000 − 25000 12 ; 100000 + 2703571 12 ). 8
F = F ⊕ (−).
1152
Z. SAFRA AND U. SEGAL TABLE III VALUES OF Ga
b−a
45,000 112,500 225,000 450,000
g = 101
g = 105
g = 110
g = 125
80,970 164,925 304,850 584,701
299,390 710,975 1,396,951 2,768,902
560,714 1,364,285 2,703,571 5,382,142
1,275,000 3,150,000 6,275,000 12,525,000
a If the decision maker rejects 1 [F 100] + 1 [F ⊕ g] for all distributions F with outcomes between a and b, then he 2 2 also prefers (1 − ε)F ∗ + εδa to (1 − ε)F ∗ + εHa25000G (Ha25000G is the distribution of (a − 25000 12 ; a + G 12 )) for some distribution F ∗ and for a sufficiently small ε.
Clearly, Stochastic B3, the “if” condition of Theorem 3, is weaker than the “if” condition of Theorem 1: if, for every x, the decision maker objects to replacing the single outcome x with the distribution Hx , then, by constructing a sequence of such changes, it is evident that he objects to replacing all outcomes x by the corresponding lotteries Hx . Empirical evidence seems to support stochastic B3 (see, e.g., Guiso, Jappelli, and Terlizzese (1996), Paiella and Guiso (2001), and Hochguertel (2003)).9 Note that for expected utility functionals the stochastic version of B3 is equivalent to the deterministic one: A rejection of (− 12 ; g 12 ) at all deterministic wealth levels implies its rejection at all stochastic wealth levels. Likewise, this definition is satisfied by risk averse rank-dependent functionals with a sufficiently concave utility function u. Rabin’s logic does not imply rejection of large attractive lotteries, but that the decision maker cannot simultaneously reject small actuarially favorable lotteries and accept all large attractive ones. Theorem 3 provides similar results. It does not suggest that all functionals reject attractive lotteries—Yaari’s (1987) theory, for example, accepts even moderately attractive large lotteries (−L 12 ; G 12 ), provided G > f ( 12 )L/[1 − f ( 12 )]. But then preferences cannot satisfy stochastic B3 for small values of and g. In fact, the next proposition shows that CRA functionals do not satisfy stochastic B3 for any g > . PROPOSITION 2: If V ∈ V satisfies constant risk aversion and is continuously Gâteaux differentiable,10 then for g > , V cannot exhibit ( g) stochastic B3 on [a b] for a sufficiently large b − a.
Consider the CRA functional V (F) = x df (F(x)) (Yaari (1987)). Let f (p) = pη and let F be the uniform distribution over [a b]. For η = 05, = 9 Rabin and Thaler (2001) on the other hand seem to claim that a rejection of a small lottery is likely only when the decision maker is unaware of the fact that he is exposed to many other risks. ∂ 10 That is, for every F and H, ∂α V ((1 − α)F + αH)|α=0 exists, is linear in H, and continuous in F .
NON-EXPECTED UTILITY THEORIES
1153
100, and g = 110, the decision maker enjoys the additional risk (− 12 ; g 12 ) whenever b − a > 19461 and for = 100, g = 105, and η = 07, whenever b − a > 7482. APPENDIX there exist H ∗ PROOF OF LEMMA 1: Suppose u(·; F) is not concave. Then
∗ ∗
and H such that H is a mean preserving spread of H , but u(x; F) dH (x) < u(x; F) dH(x). For every ε, (1 − ε)F + εH is a mean preserving spread of (1 − ε)F + εH ∗ , hence, by risk aversion, V ((1 − ε)F + εH) ≤ V ((1 − ∂ ε)F + εH ∗ ). As this inequality holds for all ε, it follows that
∂ε V ((1 − ε)F + ∂ ∗
εH)|ε=0 ≤ ∂ε V∗ ((1−ε)F +εH )|ε=0 . Hence, by equation (1), u(x; F) dH(x) ≤ u(x; F) dH (x), a contradiction. Q.E.D. PROOF OF PROPOSITION 1: (i) ⇒ (ii) Let Fα = (1 − ε)F + ε[(1 − α)δx + αHx ]. For ζ ≥ 0 we obtain ζ ζ Fα + ((1 − ε)F + εHx ) Fα+ζ = 1 − 1−α 1−α From eq. (1) we have11 ∂ V (Fα ) ∂α 1 = lim [V (Fα+ζ ) − V (Fα )] ζ→0 ζ ζ 1 ζ V 1 − F ((1 − ε)F + εH + ) − V (F ) = lim α x α ζ→0 ζ 1−α 1−α ζ 1 u(y; Fα ) d((1 − ε)F + εHx ) − u(y; Fα ) dFα = lim ζ→0 ζ 1 − α ζ +o 1 − α 1 1 1 = lim ζε u(x − ; Fα ) + u(x + g; Fα ) − u(x; Fα ) + o(ζ) ζ→0 ζ 2 2 1 1 = ε u(x − ; Fα ) + u(x + g; Fα ) − u(x; Fα ) 2 2 ≤ 0 11
In eq. (1) notation,
ζ 1−α
is ε, Fα is F , and (1 − ε)F + εHx is H.
1154
Z. SAFRA AND U. SEGAL
Hence V ((1 − ε)F + εδx ) = V (F0 ) ≥ V (F1 ) = V ((1 − ε)F + εHx ). (ii) ⇒ (i): By eq. (1), V ((1 − ε)F + εδx ) ≥ V ((1 − ε)F + εHx ) ⇐⇒ ε u(y; F) dδx − u(y; F) dF + o(ε) ≥ε
u(y; F) dHx −
u(y; F) dδx −
u(y; F) dF
≥ ⇐⇒
u(y; F) dF + o(ε)
⇒
u(y; F) dHx −
u(y; F) dF
1 1 u(x; F) ≥ u(x − ; F) + u(x + g; F) 2 2
The third line follows from the second by letting ε → 0.
Q.E.D.
PROOF OF THEOREM 1: We first show that if an expected utility maximizer with utility u rejects at all wealth levels x between a and b the lottery (− 12 ; g 12 ), then when his wealth level is a, he will also reject any lottery of the form (−L p; G 1 − p), G > b − a, provided that inequality (4) is satisfied. Rejecting the lottery (− 12 ; g 12 ) at a + implies u(a + ) − u(a) > u(a + + g) − u(a + ). By concavity, u (a) ≥ [u(a + ) − u(a)]/ and u (a + + g) ≤ ≤
u(a + + g) − u(a + ) u(a + ) − u(a) < g g u (a) g
Similarly, suppose for simplicity that b = a + k( + g), where k = integer, and obtain (7)
b−a +g
is an
(b−a)/(+g) u (b) < u (a) g
Concavity implies that for every c, u(c + + g) ≤ u(c) + ( + g)u (c), hence
i−1 g
(b−a)/(+g)
(8)
u(b) ≤ u(a) + ( + g)u (a)
i=1
1155
NON-EXPECTED UTILITY THEORIES
Normalizing u(a) = 0 and u (a) = 1, we obtain from eqs. (7) and (8) (9)
(b−a)/(+g) u (b) ≤ g
and
u(b) ≤ ( + g)
1−
(b−a)/(+g) g
1−
g
For concave u we now obtain that for every x ∈ / [a b], ⎧ ⎨ −(a − x)
(10)
x < a, (b−a)/(+g) u(x) ≤ ⎩ u(b) + (x − b) x > b. g
Inequalities (9) and (10) imply that if for all wealth levels x between a and b the decision maker rejects the lottery (− 12 ; g 12 ), then when his wealth level is a, he will also reject any lottery of the form (−L p; G 1 − p), G > b − a, provided that inequality (4) is satisfied. Let H be the distribution of the lottery (a − L p; a + G 1 − p) and denote Fα = (1 − α)δa + αH. Similarly to the proof of Proposition 1, we obtain that ∂ V (Fα ) = pu(a − L; Fα ) + (1 − p)u(a + G; Fα ) − u(a; Fα ) ∂α Since for all x ∈ [a b], V ((1 − ε)Fα + εδx ) ≥ V ((1 − ε)Fα + εHx ), then similarly to the proof of (ii)→(i) in Proposition 1, for all x ∈ [a b] and α, u(x; Fα ) ≥ 12 u(x − ; Fα ) + 12 u(x + g; Fα ). Therefore, it follows by the statement at the beginning of this proof that the expression pu(a − L; Fα ) + (1 − p)u(a + G; Fα ) − u(a; Fα ) is nonpositive. As F0 = δa and F1 = H, it follows that V (a 1) ≥ V (H). Q.E.D. PROOF OF THEOREM 2: Similarly to the proof of Theorem 1, it can be shown that if an expected utility maximizer with utility u rejects at all wealth levels the lottery (− 12 ; g 12 ), then when his wealth level is b, he also rejects the lottery (−L p; G 1 − p), provided inequality (5) is satisfied. By concavity, for every c, u(c − − g) ≤ u(c) − ( + g)u (c), hence
i−1 g
(b−a)/(+g)
(11)
u(a) ≤ u(b) − ( + g)u (b)
i=1
Normalizing u(b) = 0 and u (b) = 1, we obtain by (7) and (11) (12)
(b−a)/(+g) g u (a) ≥
and
u(a) ≤ −( + g)
1−
g (b−a)/(+g)
1−
g
1156
Z. SAFRA AND U. SEGAL
and hence, for every x ∈ / [a b], ⎧ (b−a)/(+g) g ⎨ u(a) − (a − x) (13) u(x) ≤ ⎩ x − b
x < a, x > b.
Inequalities (12) and (13) imply that if for all wealth levels x between a and b the decision maker rejects the lottery (− 12 ; g 12 ), then (14)
u(b) > pu(b − L) + (1 − p)u(b + G)
provided that p and G satisfy inequality (5) and L ≥ b − a. Examples of numbers satisfying this inequality are given in Table II. Similarly, the decision maker rejects the same lottery at b − g, a fact that is used in inequality (16) below. Let εˆ be as in the discussion following Theorem 2 and let L, G, and p satisfy the requirements of the theorem. Consider x ∈ [a b − g]. As V (x 1) > V (x − 12 ; x + g 12 ), it follows by Gâteaux differentiability that there is q∗ ∈ (0 1) such that ∂q∂ V (x 1 − q; x − q2 ; x + g q2 ) is strictly negative at q∗ . Hence (15)
1 1 u(x; Kx ) > u(x − ; Kx ) + u(x + g; Kx ) 2 2 ∗
∗
where Kx is the distribution of (x 1 − q∗ ; x − q2 ; x + g q2 ). Obviously, δb dominates Kx by first-order stochastic dominance whenever b ≥ x + g. Hence, by Hypothesis 2, u(x; δb ) > 12 u(x − ; δb ) + 12 u(x + g; δb ) for all x ∈ [a b − g]. The increasing monotonicity of u(·; δb ) and inequality (14) now imply (16)
u(b; δb ) > u(b − g; δb ) > pu(b − g − L; δb ) + (1 − p)u(b − g + G; δb )
By Gâteaux differentiability and continuity, this implies that for sufficiently small μ, the decision maker with wealth level b prefers not to participate in the lottery X(μ) = (−g − L μp; 0 1 − μ; −g + G μ(1 − p)). We now show that μ = 1, which is the claim of the theorem. Let μ¯ = max{μ : V (b 1) ≥ V (b + X(μ))} and suppose that μ¯ < 1. Denote by F¯ the distribution of b + X(μ). ¯ Our next step is to show that for all x ∈ [b − g − L b − g], (17)
¯ > 1 u(x − ; F) ¯ + 1 u(x + g; F) ¯ u(x; F) 2 2
We defined a = b − g − L, therefore, as μp ¯ < p and by the construction of ¯ x 1 − μp) ¯ and X˜ x = (b − ε, ˆ V (Xˆ x ) > V (X˜ x ), where Xˆ x = (b − g − L μp; 1−μp ¯ 1−μp ¯ g − L μp; ¯ x − 2 ; x + g 2 ). Let Fˆx and F˜x denote the distributions of Xˆ x and X˜ x , respectively. Similarly to the derivation of eq. (15), it follows by
NON-EXPECTED UTILITY THEORIES
1157
Gâteaux differentiability that there exists F on the line segment connecting Fˆx and F˜x for which 1 1 u(x; F) > u(x − ; F) + u(x + g; F) 2 2 As F¯ dominates both Fˆx and F˜x by first-order stochastic dominance, it dominates F as well and eq. (17) follows by Hypothesis 2. Similarly to the derivation of eq. (16), the local utility at F¯ satisfies (18)
¯ > pu(b − g − L; F) ¯ + (1 − p)u(b − g + G; F) ¯ u(b; F)
Let H denote the cumulative distribution function of (b − g − L p; b − g + G 1 − p). Then, by Gâteaux differentiability and eq. (18), ∂ V ((1 − t)F¯ + tH) ∂t ¯ + (1 − p)u(b − g + G; F) ¯ − u(b; F)] ¯ = (1 − μ)[pu(b ¯ − g − L; F) < 0 But this means that ∃μ ∈ (μ ¯ 1) such that V (b − g − L μp; b 1 − μ; b − g + G μ(1 − p)) < V (b − g − L μp; ¯ b 1 − μ; ¯ b − g + G μ(1 ¯ − p)) ≤ V (b 1); a contradiction. Hence μ¯ = 1 and V (b 1) ≥ V (b − g − L p; b − g + G 1 − p)
Q.E.D.
PROOF OF THEOREM 3: Equation (6) is equivalent to (19)
G ≥ (n − 1)( + g) +
g( + g) g−
and L =
G + 1 (n − 1)(g − ) + g
In the proof of the theorem we will use the following lemma. Its proof appears after the proof of Theorem 3. LEMMA 2: Let u be a concave vNM function such that u(a − ) = 0 and u(a) = . Let X = (a n1 ; ; a + (n − 1)( + g) n1 ). If E[u(X)] ≥ E[u(X − 12 ; X + g 12 )], then for G satisfying inequality (19) we obtain u(a + G) ≤
G + (n − 1)(g − ) + g
Let F be the distribution of X of Lemma 2 and let F = 12 [F ] + 12 [F ⊕ g] be the distribution of (X − 12 ; X + g 12 ). By stochastic B3, V (F) > V (F ).
1158
Z. SAFRA AND U. SEGAL
There is therefore F ∗ = (1 − α)F + αF such that V ((1 − α)F + αF ) is strictly decreasing in α at F ∗ , hence E[u(F; F ∗ )] > E[u(F ; F ∗ )] Normalize u(·; F ∗ ) such that u(a − ; F ∗ ) = 0 and u(a; F ∗ ) = . As u(·; F ∗ ) is concave (see Lemma 1), it follows that u(a − L; F ∗ ) ≤ − L. By Lemma 2 and eq. (19), 1 1 E[u(H; F ∗ )] = u(a − L; F ∗ ) + u(a + G; F ∗ ) 2 2 1 1 G ≤ ( − L) + + 2 2 (n − 1)(g − ) + g =−
1 < u(a; F ∗ ) 2
For sufficiently small ε it thus follows by Gâteaux derivative that V ((1 − ε)F ∗ + Q.E.D. εδa ) ≥ V ((1 − ε)F ∗ + εH), hence the theorem. PROOF OF LEMMA 2: Observe first that 1 1 X − ; X + g 2 2 1 1 1 = a − ; a + g ; ; a + (n − 1)( + g) − ; 2n n n 1 a + (n − 1)( + g) + g 2n Denote ai = a + (i − 1)( + g), i = 1 n, bi = ai − , i = 1 n, and bn+1 = an + g. Let ci = u(ai ) and di = u(bi ). We assumed that d1 = 0 and c1 = ; hence c1 − d 1 = 1 As u is concave, it has at each point x right and left derivatives denoted u− (x) and u+ (x). By concavity, u− (b1 ) ≥ u+ (b1 ) ≥ 1. Also, u+ (bn+1 ) ≤ u− (bn+1 ) ≤
dn+1 − cn g
and u(a + G) ≤ dn+1 + (a + G − bn+1 )
dn+1 − cn g
NON-EXPECTED UTILITY THEORIES
1159
FIGURE 1.—The function u and its value at a + G.
(see Figure 1). Our aim is to solve (20)
max
c2 cn d2 dn+1
dn+1 + (a + G − bn+1 )
1 1 1 1 d1 + ci ≥ di + dn+1 n i=1 2n n i=2 2n n
(21)
s.t.
dn+1 − cn g
n
(22)
cn ≤ dn+1
(23)
c1 − d1 dn+1 − cn ≥ ··· ≥ g
Constraint (21) represents the rejection of the lottery (− 12 ; g 12 ) that is added to the original lottery. Constraint (22) follows by the monotonicity of u, while constraints (23) represent the concavity of u. Observe that (22) and (23) imply d1 ≤ c1 ≤ · · · ≤ cn . The target function is linear and there are 2n − 1 variables. As d1 = 0 and c1 = , constraint (23) consist of 2n − 1 inequalities, hence, together with (21) and (22), there are 2n + 1 linear constraints. At least one of the inequalities of line (23) must be strict; otherwise u is linear and inequality (21) is not satisfied. Since the constraints yield a bounded feasible set, at least 2n − 1 of them must be satisfied with equality. In other words, no more than 2 of the constraints are satisfied with strict inequality, and one of them belongs to (23).
1160
Z. SAFRA AND U. SEGAL
Suppose that the constraint (21) is satisfied with a strict inequality. One can then multiply by γ > 1 all the values of c and d from a strict inequality of (23) on without violating any of the constraints, thus increasing dn+1 , dn+1 − cn , and ultimately the value of the target function. Therefore, the first constraint must be satisfied with equality. Let uˆ be an increasing concave function that obtains the values of ci and di solving the optimization problem (20) at the points ai and bi . CASE 1: Assume first that the function uˆ is strictly increasing. Then constraint (22) is satisfied with strict inequality and, therefore, no more than one of the constraints (23) is strict. In other words, the function uˆ on [b1 a + G] is linear on both sides of one of the points ai or bi . Its slope is first 1 and then s, such that constraint (21) is satisfied with equality. Obviously such a kink cannot happen at a point bi , or the lottery (− 12 ; g 12 ) is not rejected. When the kink is at the point aj we obtain: • ci = + (i − 1)( + g), i = 1 j. • ci = + (j − 1)( + g) + s(i − j)( + g), i = j + 1 n. • di = (i − 1)( + g), i = 1 j. • di = + (j − 1)( + g) − s + s(i − j)( + g), i = j + 1 n + 1. The equality in line (21) now yields 2
n
ci = d 1 + 2
i=1
⇒
n
di + dn+1
i=2
2n + [(2n − j)(j − 1) + s(n − j)(n − j + 1)]( + g) = [(2n − j + 1)(j − 1) + s(n − j + 1)2 ]( + g) + [2(n − j) + 1](1 − s)
⇒
2n = [(j − 1) + s(n − j + 1)]( + g) + [2(n − j) + 1](1 − s)
⇒
s= =
2n − (j − 1)( + g) − [2(n − j) + 1] (n − j + 1)( + g) − [2(n − j) + 1] j − jg + g j − jg + g = −n + j + ng − jg + g (n − j)(g − ) + g
We thus obtain that (24)
ˆ + G) u(a = j + (j − 1)g + s(a + G − aj ) j − jg + g (G − (j − 1)( + g)) = j + (j − 1)g + (n − j)(g − ) + g
NON-EXPECTED UTILITY THEORIES
1161
Differentiate this last expression with respect to j to obtain ⎡ ⎤ (g − )[(n − j)(g − ) + g] + g − ⎣ +[j(g − ) − g](g − ) ⎦ × [G − (j − 1)( + g)] [(n − j)(g − ) + g]2 j − jg + g × ( + g) (n − j)(g − ) + g −(g − )[G − (j − 1)( + g)] n(g − ) × + ( + g)[(n − j)(g − ) + g] = [(n − j)(g − ) + g]2 −(g − )[G + + g] n(g − ) × + ( + g)[n(g − ) + g] = [(n − j)(g − ) + g]2 −
≤ 0 where the last inequality follows from G ≥ (n − 1)( + g) +
g( + g) g−
ˆ (which was assumed by the lemma). It follows that u(G) is decreasing in j. For j = 1 we obtain from eq. (25) that ˆ + G) = u(a
G + ; (n − 1)(g − ) + g
hence the claim of the lemma. CASE 2: Assume now that the function uˆ is not strictly increasing, hence constraint (22) is satisfied with equality. uˆ is strictly increasing up to some point and it must be flat from that point on. By constraint (21), this point must be one of the ai points. Suppose not, that is, suppose that there is a point bi such that ci−1 < di = ci . Define ⎧ ˆ u(y) y ≤ bi , ⎪ ⎪ ⎪ ⎪ 1 ⎨ d + (y − b )(d − c ) b < y < a , i i i i−1 i i ˜ u(y) = g ⎪ ⎪ 1 ⎪ ⎪ ⎩ di + (ai − bi )(di − ci−1 ) ai ≤ y. g ˜ and obviously As the function uˆ satisfies constraint (21), so does the function u, ˜ + G) > u(a ˆ + G). u(a
1162
Z. SAFRA AND U. SEGAL
As before, we must assume that constraint (21) is satisfied with equality, otherwise uˆ can be replaced with a higher function still satisfying this constraint. Therefore, we now have to solve the optimization problem under the constraints max
c2 ci∗ d2 di∗
ci ∗
i∗ −1
(25)
s.t.
i∗
1 1 1 1 ci + ci∗ = d1 + di n i=1 2n 2n n i=2
(26)
di ∗ < c i ∗
(27)
c1 − d1 ci ∗ − d i ∗ ≥ ··· ≥
This problem has 2(i∗ − 1) variables (recall that d1 = 0 and c1 = ) and 2(i∗ − 1) + 2 constraints. Of these, constraint (26) is satisfied with strict inequality. Therefore, at most one of the constraints of (27) is strict. In other words, uˆ has slope 1 up to either a point bi or a point ai , then slope s up to point ai∗ , and slope zero thereafter. It is easy to verify that the highest value of ci∗ is obtained g when s = 1, in which case, by constraint (25), ci∗ is bounded by g− . For G satisfying inequality (19) it is easy to verify that for > 1, G g +> ; (n − 1)(g − ) + g g− Q.E.D.
hence the lemma. PROOF that is,
OF
PROPOSITION 2: Let F be the uniform distribution over [a b],
F(x) =
⎧ 0 ⎪ ⎨ x−a ⎪ ⎩ b−a 1
x < a, a ≤ x < b, b ≤ x.
Let F˜ = 12 (F ⊕ (−)) + 12 (F ⊕ g). Then ⎧ 0 ⎪ ⎪ ⎪ x−a+ ⎪ ⎪ ⎪ ⎪ ⎪ 2(b − a) ⎪ ⎪ ⎨ x−a−g +g ˜ + F(x) = b−a 2(b − a) ⎪ ⎪ ⎪ ⎪ x b+g ⎪ ⎪ ⎪ +1− ⎪ ⎪ 2(b − a) ⎪ ⎩ 2(b − a) 1
x < a − , a − ≤ x < a + g, a + g ≤ x < b − , b − ≤ x < b + g, b + g ≤ x.
NON-EXPECTED UTILITY THEORIES
1163
Also define H and Hb by ⎧ x
a+1
a+1+g/(b−a)
u(x; Kb ) dH(x) ≥ a
u(x; Kb ) dHb (x) a−/(b−a)
a+/(b−a)
u(x; Kb ) d[H − Hb ](x)
⇐⇒ a−/(b−a)
FIGURE 2.—H and Hb .
1164
Z. SAFRA AND U. SEGAL
a+1+g/(b−a)
≥
u(x; Kb ) d[Hb − H](x) a+/(b−a)
a+/(b−a)
[Hb − H](x) du(x; Kb )
⇐⇒ a−/(b−a)
a+1+g/(b−a)
[H − Hb ](x) du(x; Kb )
≥ a+/(b−a)
where the last inequality is obtained by integration by parts of the second line. Since u(·; Kb ) is concave, this last inequality implies that g ; Kb D1 ≥ u− a + 1 + ; Kb D2 u+ a − (28) b−a b−a where D1 and D2 are the sizes of the areas S1 and S2 marked in Figure 2. We obtain 2 1 D1 = 2 b−a +g g− 1− D2 = 2(b − a) b−a Clearly, as b → ∞, D1 /D2 → 0, hence, by inequality (28), ; Kb u+ a − b−a = ∞ lim b→∞ g ; Kb u− a + 1 + b−a By concavity of the local utility functions, for a given b∗ where u (a − b∗−a ; H) and u (a + 1 + b∗g−a ); H) both exist,12 ; Kb u+ a − b−a (29) ∞ = lim b→∞ g ; Kb u− a + 1 + b−a ; Kb ;H u+ a − ∗ u a− ∗ b −a b −a = ≤ lim b→∞ g g ; Kb ;H u− a + 1 + ∗ u a+1+ ∗ b −a b −a 12
As u(·; H) is concave, u (·; H) exists almost everywhere.
NON-EXPECTED UTILITY THEORIES
1165
The last equation sign follows by the continuity of the Gâteaux derivative and by the fact that all local utilities are concave. Since a − b∗−a is not the left end of the domain of u(·; H), concavity implies that u (a − b∗−a ; H) < ∞. On the other hand, as a + 1 + b∗g−a is not the right end of the domain on that function, first-order stochastic dominance and concavity imply that u (a + 1 + b∗g−a ; H) > 0, a contradiction to the fact that in eq. (29) the limit in ∞. Hence V (Hb ) > ˜ > V (F) V (H) and V (F) Q.E.D. REFERENCES ALLAIS, M. (1953): “Le Comportement de l’Homme Rationnel Devant le Risque: Critique des Postulates et Axiomes de l’Ecole Americaine,” Econometrica, 21, 503–546. [1147,1148] BATTALIO, R. C., J. H. KAGEL, AND K. JIRANYAKUL (1990): “Testing Between Alternative Models of Choice Under Uncertainty: Some Initial Results,” Journal of Risk and Uncertainty, 3, 25–50. [1148] CHEW, S. H. (1983): “A Generalization of the Quasilinear Mean With Applications to the Measurement of Income Inequality and Decision Theory Resolving the Allais Paradox,” Econometrica, 51, 1065–1092. [1145,1150] CHEW, S. H., L. G. EPSTEIN, AND U. SEGAL (1991): “Mixture Symmetry and Quadratic Utility,” Econometrica, 59, 139–163. [1145] CHEW, S. H., E. KARNI, AND Z. SAFRA (1987): “Risk Aversion in the Theory of Expected Utility With Rank-Dependent Probabilities,” Journal of Economic Theory, 42, 370–381. [1150] CONLISK, J. (1989): “Three Variants on the Allais Example,” American Economic Review, 79, 392–407. [1148] COX, J. C., AND V. SADIRAJ (2006): “Small- and Large-Stakes Risk Aversion: Implications of Concavity Calibration for Decision Theory,” Games and Economic Behavior, 56, 45–60. [1144] DEKEL, E. (1986): “An Axiomatic Characterization of Preferences Under Uncertainty: Weakening the Independence Axiom,” Journal of Economic Theory, 40, 304–318. [1145] EPSTEIN, L. G. (1992): “Behavior Under Risk: Recent Developments in Theory and Applications,” in Advances in Economic Theory, Vol. II. ed. by J. J. Laffont. Cambridge, U.K.: Cambridge University Press, 1–63. [1144] EPSTEIN, L. G., AND S. ZIN (1990): “‘First Order’ Risk Aversion and the Equity Premium Puzzle,” Journal of Menetary Economics, 26, 386–407. [1150] FOSTER, D. P., AND S. HART (2007): “An Operational Measure of Riskiness,” Paper, Center for Rationality, The Hebrew University of Jerusalem. [1144] GUISO, L., T. JAPPELLI, AND D. TERLIZZESE (1996): “Income Risk, Borrowing Constraints, and Portfolio Choice,” American Economic Review, 86, 158–172. [1152] GUL, F. (1991): “A Theory of Disappointment Aversion,” Econometrica, 59, 667–686. [1150] HANSSON, B. (1988): “Risk Aversion as a Problem of Conjoint Measurment,” in Decision, Probability, and Utility, ed. by P. Gardenfors and N.-E. Sahlin. Cambridge, U.K.: Cambridge University Press, 136–158. [1144] HOCHGUERTEL, S. (2003): “Precautionary Motives and Portfolio Decisions,” Journal of Applied Econometrics, 18, 61–77. [1152] KAHNEMAN, D., AND A. TVERSKY (1979): “Prospect Theory: An Analysis of Decision Under Risk,” Econometrica, 47, 263–291. [1144] MACHINA, M. J. (1982): “‘Expected Utility’ Analysis Without the Independence Axiom,” Econometrica, 50, 277–323. [1145,1146,1148] (1987): “Choice Under Uncertainty: Problems Solved and Unsolved,” Journal of Economic Perspectives, 1, 121–154. [1148] PAIELLA, M., AND L. GUISO (2001): “Risk Aversion, Wealth and Background Risk,” Mimeo. [1152]
1166
Z. SAFRA AND U. SEGAL
PALACIOS-HUERTA, I., AND R. SERRANO (2006): “Rejecting Small Gambles Under Expected Utility,” Economics Letters, 91, 250–259. [1144] QUIGGIN, J. (1982): “A Theory of Anticipated Utility,” Journal of Economic Behavior and Organization, 3, 323–343. [1145,1150] RABIN, M. (2000): “Risk Aversion and Expected Utility Theory: A Calibration Result,” Econometrica, 68, 1281–1292. [1143,1144] RABIN, M., AND R. H. THALER (2001): “Risk Aversion,” Journal of Economic Perspectives, 15, 219–232. [1144,1152] RUBINSTEIN, A. (2006): “Dilemmas of an Economic Theorist,” Econometrica, 74, 865–883. [1144] SAFRA, Z., AND U. SEGAL (1998): “Constant Risk Aversion,” Journal of Economic Theory, 83, 19–42. [1151] (2002): “On the Ecomomic Meaning of Machina’s Fréchet Differentiability Assumption,” Journal of Economic Theory, 104, 450–461. [1150] SEGAL, U., AND A. SPIVAK (1990): “First Order Versus Second Order Risk Aversion,” Journal of Economic Theory, 51, 111–125. [1150] (1997): “First-Order Risk Aversion and Non-differentiability,” Economic Theory, 9, 179–183. [1150] STARMER, C. (2000): “Developments in Non-Expected Utility Theory: The Hunt for a Descriptive Theory of Choice Under Risk,” Journal of Economic Literature, 38, 332–382. [1148] TVERSKY, A., AND D. KAHNEMAN (1992): “Advances in Prospect Theory: Cumulative Representation of Uncertainty,” Journal of Risk and Uncertainty, 5, 297–323. [1144] YAARI, M. E. (1987): “The Dual Theory of Choice Under Risk,” Econometrica, 55, 95–115. [1144, 1150,1152] ZEIDLER, E. (1985): Nonlinear Functional Analysis and Its Applications, Vol. III. New York: Springer. [1146]
The College of Management and Tel Aviv University, Israel; [email protected] and Dept. of Economics, Boston College, 21 Campanella Way, Chestnut Hill, MA 02467-3806, U.S.A.; [email protected]. Manuscript received November, 2005; final revision received March, 2008.
Econometrica, Vol. 76, No. 5 (September, 2008), 1167–1190
SUBJECTIVE BELIEFS AND EX ANTE TRADE BY LUCA RIGOTTI, CHRIS SHANNON, AND TOMASZ STRZALECKI1 We study a definition of subjective beliefs applicable to preferences that allow for the perception of ambiguity, and provide a characterization of such beliefs in terms of market behavior. Using this definition, we derive necessary and sufficient conditions for the efficiency of ex ante trade and show that these conditions follow from the fundamental welfare theorems. When aggregate uncertainty is absent, our results show that full insurance is efficient if and only if agents share some common subjective beliefs. Our results hold for a general class of convex preferences, which contains many functional forms used in applications involving ambiguity and ambiguity aversion. We show how our results can be articulated in the language of these functional forms, confirming results existing in the literature, generating new results, and providing a useful tool for applications. KEYWORDS: Common prior, absence of trade, ambiguity aversion, general equilibrium.
1. INTRODUCTION IN A MODEL WITH RISK-AVERSE AGENTS who maximize subjective expected utility, betting occurs if and only if agents’ priors differ. This link between common priors and speculative trade in the absence of aggregate uncertainty is a fundamental implication of expected utility for risk-sharing in markets. A similar relationship holds when ambiguity is allowed and agents maximize the minimum expected utility over a set of priors, as in the model of Gilboa and Schmeidler (1989). In this case, purely speculative trade occurs when agents hold no priors in common; full insurance is Pareto optimal if and only if agents have at least one prior in common, as Billot, Chateauneuf, Gilboa, and Tallon (2000) showed. This note develops a more general connection between subjective beliefs and speculative trade applicable to a broad class of convex preferences, which encompasses as special cases not only the previous results for expected utility and maxmin expected utility, but all the models central in studies of ambiguity in markets, including the convex Choquet model of Schmeidler (1989), the smooth second-order prior models of Klibanoff, Marinacci, and Mukerji (2005) and Nau (2006), the second-order expected utility model of Ergin and Gul (2004), the confidence preferences model of Chateauneuf and Faro (2006), the multiplier model of Hansen and Sargent (2001), and the variational preferences model of Maccheroni, Marinacci, and Rustichini (2006). 1 Tomasz Strzalecki is extremely grateful to Eddie Dekel for very detailed comments on earlier versions of this note and to Itzhak Gilboa, Peter Klibanoff, and Marciano Siniscalchi for very helpful discussions. We thank Larry Samuelson (the co-editor) and three anonymous referees for very useful suggestions; we have also benefited from comments from Wojciech Olszewski, Marcin Peski, Jacob Sagi, Uzi Segal, Itai Sher, Jean-Marc Tallon, Jan Werner, Asher Wolinsky, and audiences at RUD 2007 and WIEM 2007.
© 2008 The Econometric Society
DOI: 10.3982/ECTA7660
1168
L. RIGOTTI, C. SHANNON, AND T. STRZALECKI
By casting our results in the general setting of convex preferences, we are able to focus on several simple underlying principles. We identify a notion of subjective beliefs based on market behavior and show how it is related to various notions of belief that arise from different axiomatic treatments. We highlight the close connection between the fundamental welfare theorems of general equilibrium and results that link common beliefs and risk-sharing. Finally, by establishing these links for general convex preferences, we provide a framework for studying ambiguity in markets while allowing for heterogeneity in the way ambiguity is expressed through preferences. The generality of this approach identifies the forces underlying betting without being restricted to any one particular representation, and in so doing unifies our thinking about models of ambiguity aversion in economic settings. The note is organized as follows. Section 2 studies subjective beliefs and behavioral characterizations, with illustrations for various familiar representations. Section 3 studies trade between agents with convex preferences. Appendix A develops an extension of these results to infinite state spaces, while Appendix B collects some proofs omitted in the text. 2. BELIEFS AND CONVEX PREFERENCES 2.1. Convex Preferences Let S be a finite set of states of the world. The set of consequences is R+ , which we interpret as monetary payoffs. The set of acts is F = RS+ with the natural topology. Acts are denoted by f , g, and h, while f (s) denotes the monetary payoff from act f when state s obtains. For any x ∈ R+ we abuse notation by writing x ∈ F , which stands for the constant act with payoff x in each state of the world. Let be a binary relation on F . We say that is a convex preference relation if it satisfies the following axioms: AXIOM 1—Preference: is complete and transitive. AXIOM 2—Continuity: For all f ∈ F , the sets {g ∈ F | g f } and {g ∈ F | f g} are closed. AXIOM 3—Monotonicity: For all f g ∈ F , if f (s) > g(s) for all s ∈ S, then f g. AXIOM 4—Convexity: For all f ∈ F , the set {g ∈ F | g f } is convex. These axioms are standard, and well-known results imply that a convex preference relation is represented by a continuous, increasing, and quasiconcave
SUBJECTIVE BELIEFS AND EX ANTE TRADE
1169
function V : F → R.2 Convex preferences include as special cases many common models of risk aversion and ambiguity aversion. In many of these special cases, one element of the representation identifies a notion of beliefs. In what follows, we adopt the notion of subjective probability suggested in Yaari (1969) to define subjective beliefs for general convex preferences. We then study characterizations of this concept in terms of market behavior, and illustrate particular special cases including maxmin expected utility, Choquet expected utility, and variational preferences. 2.2. Supporting Hyperplanes and Beliefs The decision-theoretic approach of de Finetti, Ramsey, and Savage identifies a decision maker’s subjective probability with the odds at which he is willing to make small bets. In this spirit, Yaari (1969) identified subjective probability with a hyperplane that supports the upper contour set.3 If this set has kinks, for example because of nondifferentiabilities often associated with ambiguity, there may be multiple supporting hyperplanes at some acts. To encompass such preferences, we consider the set of all (normalized) supporting hyperplanes.4 DEFINITION 1—Subjective Beliefs: The set of subjective beliefs at an act f is π(f ) := {p ∈ S | p · g ≥ p · f for all g f } Given the interpretation of the elements of π(f ) as beliefs, we will write Ep g instead of p · g. For any convex preference relation, π(f ) is nonempty, compact, and convex, and is equivalent to the set of (normalized) supports to the upper contour set of at f . In the next section we explore behavioral implications of this definition, including willingness or unwillingness to trade, and their market consequences. 2.3. Market Behavior and Beliefs We begin with a motivating example, set in the maxmin expected utility (MEU) model of Gilboa and Schmeidler (1989). The agent’s preferences are 2 Axiom 4 captures convexity in monetary payoffs. For Choquet expected utility agents, who evaluate an act according to the Choquet integral of its utility with respect to a nonadditive measure (capacity), the relation between payoff convexity and uncertainty aversion has been studied by Chateauneuf and Tallon (2002). Dekel (1989) studied the relation between payoff convexity and risk aversion. 3 In the finance literature this is commonly called a risk-neutral probability or risk-adjusted probability. 4 Alternatively, Chambers and Quiggin (2002) defined beliefs using superdifferentials of the benefit function. Their definition turns out to be equivalent to ours.
2.3. Market Behavior and Beliefs

We begin with a motivating example, set in the maxmin expected utility (MEU) model of Gilboa and Schmeidler (1989). The agent's preferences are
represented using a compact, convex set of priors P ⊆ Δ(S) and a utility index u : ℝ_+ → ℝ that is concave and differentiable. The utility of an act f is given by the minimum expected utility over the set of priors P,

V(f) := min_{p∈P} ∑_{s∈S} p_s u(f(s)) = min_{p∈P} E_p u(f),
where we abuse notation by writing u(f) for (u(f(1)), …, u(f(S))).

Imagine that the agent is initially endowed with a constant act x. First, consider an act g such that E_p g = x for some p ∈ P, as depicted in the left panel of Figure 1 (the shaded area collects all such acts). One can see that the agent will have zero demand for g. Second, consider an act g such that E_p g > x for all p ∈ P, as depicted in the right panel of Figure 1. One can see that there exists ε > 0 sufficiently small such that εg + (1 − ε)x ≻ x.

FIGURE 1.—Behavioral properties of beliefs in the MEU model.

In the MEU model, the set P captures two important aspects of market behavior (both evident in Figure 1). First, agents are unwilling to trade from a constant bundle to a random one if the two have the same expected value for some prior in the set P. In particular, the set P is the largest set of beliefs revealed by this unwillingness to trade based on zero expected net returns. Second, agents are willing to trade from a constant bundle to (a possibly small fraction of) a random one whenever the random act has greater expected value according to every prior in the set P. In particular, the set P is the smallest set of beliefs revealing this willingness to trade based on positive expected net returns.

We introduce two notions of beliefs revealed by market behavior that attempt to capture these properties for general convex preferences. The first notion collects all beliefs that reveal an unwillingness to trade from a given act f.
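Both properties can be checked numerically. The sketch below is a hypothetical instance of ours (a one-parameter family of two-state priors and u(c) = √c), not the authors' construction; it verifies zero demand for an act whose expected value matches the endowment under some prior, and acceptance of a small position in an act that dominates under every prior.

```python
import numpy as np

# Hypothetical MEU agent: priors P = {(p, 1-p) : p in [0.4, 0.6]} on a grid,
# concave utility u(c) = sqrt(c), constant endowment x = 10 in both states.
priors = [np.array([p, 1 - p]) for p in np.linspace(0.4, 0.6, 21)]

def V(f):                                        # MEU: min_p E_p u(f)
    return min(p @ np.sqrt(f) for p in priors)

x = np.full(2, 10.0)
g_zero = np.array([12.0, 8.0])    # E_p g = 10 for p = (0.5, 0.5) in P
g_good = np.array([13.0, 9.0])    # E_p g > 10 for every p in P

# zero demand: no tested position in g_zero improves on the endowment
assert all(V(e * g_zero + (1 - e) * x) <= V(x) for e in (0.5, 0.1, 0.01))
# willingness to trade: a small enough position in g_good is accepted
assert V(0.01 * g_good + 0.99 * x) > V(x)
print("both MEU properties verified on this example")
```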
DEFINITION 2—Unwillingness-to-Trade Revealed Beliefs: The set of beliefs revealed by unwillingness to trade at f is

π^u(f) := {p ∈ Δ(S) | f ≿ g for all g such that E_p g = E_p f}.

This set gathers all beliefs for which the agent is unwilling to trade assets with zero expected net returns. It can also be interpreted as the set of Arrow–Debreu prices at which the agent endowed with f will have zero net demand. For a convex preference, it is straightforward to see that this gives a set of beliefs equivalent to the set of subjective beliefs in Definition 1.

Our second notion collects beliefs revealed by a willingness to trade from a given act f. To formalize this, let 𝒫(f) denote the collection of all compact, convex sets P ⊆ Δ(S) such that if E_p g > E_p f for all p ∈ P, then εg + (1 − ε)f ≻ f for sufficiently small ε.5 We define the willingness-to-trade revealed beliefs as the smallest such set.6

DEFINITION 3—Willingness-to-Trade Revealed Beliefs: The set of beliefs revealed by willingness to trade at an act f is

π^w(f) := ⋂_{P ∈ 𝒫(f)} P.

The following proposition establishes the equivalence between the different notions of belief presented in this section and, therefore, gives behavioral content to Definition 1. Subjective beliefs are related to observable market behavior in terms of willingness or unwillingness to make small bets or trade small amounts of assets.

PROPOSITION 1: If ≿ is a convex preference relation, then π(f) = π^u(f) = π^w(f) for every strictly positive act f.
2.4. Special Cases

In this section we explore the relationships between our notion of subjective belief and those arising in several common models of ambiguity. For the benchmark case of classical subjective expected utility (SEU), as observed by Yaari (1969), our subjective beliefs coincide with the local trade-offs or risk-neutral probabilities that play a central role in many applications involving risk. If we restrict attention to constant acts, then subjective beliefs will coincide with the unique prior of the subjective expected utility representation. This property generalizes beyond SEU: the subjective beliefs we calculate at a constant act, at which risk and ambiguity are absent, coincide with the beliefs identified axiomatically in particular representations.
5 Notice that 𝒫(f) is always nonempty, because Δ(S) ∈ 𝒫(f) by Axiom 3.
6 The proof of Proposition 1 shows that 𝒫(f) is closed under intersection.
Maxmin Expected Utility Preferences

We begin with MEU preferences, represented by a particular set of priors P and utility index u.7 These preferences also include the convex case of Choquet expected utility, for which P has additional structure as the core of a convex capacity. To derive a simple characterization of the set π(f) for MEU preferences, let U : ℝ_+^S → ℝ^S be the function U(f) := (u(f(1)), …, u(f(S))) giving ex post utilities in each state. For any f ∈ ℝ_++^S, DU(f) is the S × S diagonal matrix with diagonal given by the vector of ex post marginal utilities (u′(f(1)), …, u′(f(S))). For each f ∈ ℝ_+^S, let

M(f) := argmin_{p∈P} E_p u(f)
be the set of minimizing priors realizing the utility of f. Note that V(f) = E_p u(f) for each p ∈ M(f). Using a standard envelope theorem, we can express the set π(f) as follows.

PROPOSITION 2: Let ≿ be a MEU preference represented by a set of priors P and a concave, strictly increasing, and differentiable utility index u. Then ≿ is a convex preference and

π(f) = { q/‖q‖ : q = pDU(f) for some p ∈ M(f) }.

In particular, π(x) = P for all constant acts x.

7 The MEU model is a special case of the model of invariant biseparable preferences in Ghirardato and Marinacci (2001). Ghirardato, Maccheroni, and Marinacci (2004) introduced a definition of beliefs for such preferences and proposed a differential characterization. For invariant biseparable preferences that are also convex, their differential characterization is equivalent to ours when calculated at a constant bundle. The only invariant biseparable preferences that are convex are actually MEU preferences, however, so these are already included in our present discussion.
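The Proposition 2 formula reads directly as an algorithm. The sketch below (our hypothetical numbers, with u(c) = log c) computes the minimizing priors M(f), pushes each through the diagonal matrix DU(f) of ex post marginal utilities, and normalizes.

```python
import numpy as np

# Sketch of the Proposition 2 formula for a hypothetical MEU agent with
# u(c) = log(c): pi(f) = { pDU(f)/||pDU(f)|| : p in M(f) }.
priors = [np.array([p, 1 - p]) for p in np.linspace(0.3, 0.7, 41)]
f = np.array([6.0, 2.0])

exp_util = [p @ np.log(f) for p in priors]
vmin = min(exp_util)
M = [p for p, v in zip(priors, exp_util) if np.isclose(v, vmin)]  # M(f)

for p in M:
    q = p * (1.0 / f)                 # q = p DU(f); DU(f) has u'(f(s)) = 1/f(s)
    print(q / q.sum())                # an element of pi(f)

# At a constant act every prior is minimizing and DU(f) is a multiple of the
# identity, so the loop above would simply reproduce P: pi(x) = P.
```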
Variational Preferences

Introduced and axiomatized by Maccheroni, Marinacci, and Rustichini (2006), variational preferences have the representation

V(f) = min_{p∈Δ(S)} [E_p u(f) + c(p)],

where c : Δ(S) → [0, ∞] is a convex, lower semicontinuous function such that c(p) = 0 for at least one p ∈ Δ(S). The function c is interpreted as the cost of choosing a prior.
As special cases, this model includes MEU preferences, when c is 0 on the set P and ∞ otherwise; the multiplier preferences of Hansen and Sargent (2001), when c(p) = R(p‖q) is the relative entropy between p and some fixed reference distribution q; and the mean–variance preferences of Markowitz and Tobin, when c(p) = G(p‖q) is the relative Gini concentration index between p and some fixed reference distribution q. For each f ∈ ℝ_+^S, let

M(f) := argmin_{p∈Δ(S)} {E_p u(f) + c(p)}
be the set of minimizing priors realizing the utility of f. Note that V(f) = E_p u(f) + c(p) for each p ∈ M(f). The set π(f) can be characterized as follows.

PROPOSITION 3: Let ≿ be a variational preference for which u is concave, increasing, and differentiable. Then ≿ is a convex preference and

π(f) = { q/‖q‖ : q = pDU(f) for some p ∈ M(f) }.

In particular, π(x) = {p ∈ Δ(S) | c(p) = 0} for all constant acts x.

The set of subjective beliefs at a constant act x, π(x), is equal to the set of probabilities for which c, the cost of choosing a prior, is zero. An interesting implication of this result is that at a constant act, the subjective beliefs of an agent with Hansen and Sargent (2001) multiplier preferences are equal to the singleton {q} consisting of the reference probability, since R(p‖q) = 0 if and only if p = q.8 A similar result holds for mean–variance preferences.
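For multiplier preferences the minimizing prior is available in closed form: with c(p) = θR(p‖q), the minimizer exponentially tilts the reference measure, p*(s) ∝ q(s) exp(−u(f(s))/θ). The sketch below (our hypothetical numbers; u is taken to be the identity so that DU(f) is the identity matrix) uses this standard fact to illustrate Proposition 3 and the singleton π(x) = {q} at constant acts.

```python
import numpy as np

# Sketch for multiplier preferences, c(p) = theta * R(p||q). The minimizing
# prior tilts the reference measure q against high-utility states:
# p*(s) proportional to q(s) * exp(-u(f(s)) / theta). Here u(c) = c, so
# DU(f) is the identity and pi(f) = {p*}. Numbers are hypothetical.
q = np.array([0.25, 0.25, 0.5])          # reference distribution
theta = 2.0

def min_prior(f):
    w = q * np.exp(-f / theta)
    return w / w.sum()

f = np.array([4.0, 1.0, 2.0])
print(min_prior(f))                      # pi(f): tilted toward the bad state

x = np.full(3, 2.0)                      # constant act: exp(-x/theta) cancels,
print(min_prior(x))                      # leaving q, so pi(x) = {q}
```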
Confidence Preferences

Chateauneuf and Faro (2006) introduced and axiomatized a class of preferences in which ambiguity is measured by a confidence function ϕ : Δ(S) → [0, 1]. The value of ϕ(p) describes the decision maker's confidence in the probabilistic model p; in particular, ϕ(p) = 1 means that the decision maker has full confidence in p. By assumption, the set of such full confidence measures is nonempty; moreover, the function ϕ is assumed to be upper semicontinuous and quasiconcave. Preferences in this model are represented by

V(f) = min_{p∈L_α} (1/ϕ(p)) E_p u(f),

where L_α = {q ∈ Δ(S) | ϕ(q) ≥ α} is the set of measures with confidence of at least α.

8 This result also follows from an alternate representation V(f) = −E_q exp(−θ⁻¹ · u(f)) of those preferences. Strzalecki (2007) obtained an axiomatization of multiplier preferences along these lines.
As before, for each f ∈ ℝ_+^S, let

M(f) := argmin_{p∈L_α} (1/ϕ(p)) E_p u(f)

be the set of minimizing priors realizing the utility of f. Note that V(f) = (1/ϕ(p)) E_p u(f) for each p ∈ M(f). By standard envelope theorems, π(f) can be characterized in this case as follows.

PROPOSITION 4: Let ≿ be a confidence preference for which u is concave, increasing, and continuously differentiable. Then ≿ is a convex preference and

π(f) = { q/‖q‖ : q = pDU(f) for some p ∈ M(f) }.
In particular, π(x) = {p ∈ Δ(S) | ϕ(p) = 1} for all constant acts x.

Smooth Model

The smooth model of ambiguity developed in Klibanoff, Marinacci, and Mukerji (2005) allows preferences to display nonneutral attitudes toward ambiguity, but avoids kinks in the indifference curves.9 This model has a representation of the form

V(f) = E_μ φ(E_p u(f)),

where μ is interpreted as a probability distribution on the set of possible probability measures, φ : ℝ → ℝ, and u : ℝ_+ → ℝ. When the indexes φ and u are concave, increasing, and differentiable, this utility represents a convex preference relation, and the set of subjective beliefs is a singleton consisting of a weighted mixture of all probabilities in the support of the measure μ.

PROPOSITION 5: Let ≿ be a smooth model preference for which u and φ are concave, increasing, and differentiable. Then ≿ is a convex preference and
π(f) = { q/‖q‖ },   where q = E_μ[φ′(E_p u(f)) pDU(f)].
In particular, π(x) = {E_μ p} for all constant acts x.
9 For similar models, see Segal (1990), Nau (2006), and Ergin and Gul (2004).
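The Proposition 5 formula is easy to evaluate. The sketch below (a hypothetical two-point μ of ours; u(c) = log c and φ(t) = −e^(−t), so φ′(t) = e^(−t)) computes the unique belief at a nonconstant act and checks that at a constant act it collapses to E_μ p.

```python
import numpy as np

# Sketch of the Proposition 5 formula for the smooth model under a
# hypothetical two-point mu. u(c) = log(c); phi(t) = -exp(-t), phi'(t) = exp(-t).
supports = [np.array([0.2, 0.8]), np.array([0.6, 0.4])]  # priors in supp(mu)
weights = [0.5, 0.5]                                     # mu-weights

f = np.array([3.0, 1.5])
DU = 1.0 / f                                             # u'(f(s)) for u = log
q = sum(w * np.exp(-(p @ np.log(f))) * (p * DU)          # phi'(E_p u(f)) p DU(f)
        for w, p in zip(weights, supports))
print(q / q.sum())                                       # the singleton pi(f)

x = np.full(2, 2.0)                                      # at a constant act the
mix = sum(w * p for w, p in zip(weights, supports))      # phi' and DU factors are
print(mix)                                               # common: pi(x) = {E_mu p}
```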
Ergin–Gul Model

Ergin and Gul (2004) introduced a model in which the state space takes the product form S = S_a × S_b. This model permits different decision attitudes toward events in S_a and S_b, thereby inducing Ellsberg-type behavior. Consider a product measure p = p_a ⊗ p_b on S; for any f ∈ ℝ^S, let E_a f be the vector of conditional expectations of f computed for all elements of S_b (thus E_a f ∈ ℝ^{S_b}), and for any g ∈ ℝ^{S_b}, let E_b g denote the expectation of g according to p_b. The preferences are represented by

V(f) = E_b φ(E_a u(f)).

To express subjective beliefs, let U(f) and DU(f) be defined as before, with the convention that the states in S are ordered lexicographically first by a, then by b. Analogously, for each f define the vector φ′(E_a u(f)) ∈ ℝ^{S_b} and the diagonal matrix D(φ′(E_a u(f))).

PROPOSITION 6: Let ≿ be an Ergin–Gul preference for which u and φ are concave, increasing, and differentiable. Then ≿ is a convex preference and
π(f) = { q/‖q‖ },   where q = pDU(f)[I_a ⊗ D(φ′(E_a u(f)))],
where I_a is the identity matrix of order S_a and ⊗ is the tensor product. In particular, π(x) = {p} for all constant acts x.

REMARK 1: Our notion of beliefs may not agree with the beliefs identified by some representations, in part because we have focused on beliefs revealed by market behavior rather than those identified axiomatically. An illustrative case in point is the rank-dependent expected utility (RDEU) model of Quiggin (1982) and Yaari (1987), in which probability distributions are distorted by a transformation function. When the probability transformation function is concave, this model reduces to Choquet expected utility with a convex capacity, a special case of MEU. Using the MEU representation, beliefs would be identified with a set of priors P, in general not a singleton. As we showed above, this set P coincides with the set π(x), the subjective beliefs given by any constant act x. However, RDEU preferences are also probabilistically sophisticated in the sense of Machina and Schmeidler (1992), with respect to some measure p∗.10 Using the alternative representation arising from probabilistic sophistication, beliefs would instead be identified with this unique measure p∗ rather than with the set P. Although p∗ ∈ P, these different representations nonetheless lead to different ways to identify subjective beliefs, each justified by differing behavioral axioms.11
10 For more on probabilistic sophistication, RDEU, and MEU, see Grant and Kajii (2005).
This indeterminacy could lead to different ways to attribute market behavior to beliefs. For example, Segal and Spivak (1990) attributed unwillingness to trade to probabilistic first-order risk aversion, while Dow and Werlang (1992) instead attributed unwillingness to trade to nonprobabilistic ambiguity aversion.

3. EX ANTE TRADE

In this section, we use subjective beliefs to characterize efficient allocations. As our main result, we show that in the absence of aggregate uncertainty, efficiency is equivalent to full insurance under a "common priors" condition. While we maintain the assumption of a finite state space for simplicity, all of these results extend directly to the case of an infinite state space with appropriate modifications; for details, see Appendix A.

We study a standard two-period exchange economy with one consumption good in which uncertainty at date 1 is described by the set S. There are m agents in the economy, indexed by i. Each agent's consumption set is the set of acts F. The aggregate endowment is e ∈ ℝ_++^S. An allocation f = (f_1, …, f_m) ∈ F^m is feasible if ∑_{i=1}^m f_i = e. An allocation f is interior if f_i(s) > 0 for all s and for all i. An allocation f is a full insurance allocation if f_i is constant across states for all i; any other allocation will be interpreted as betting. An allocation f is Pareto optimal if there is no feasible allocation g such that g_i ≿_i f_i for all i and g_j ≻_j f_j for some j.

PROPOSITION 7: Suppose ≿_i is a convex preference relation for each i. An interior allocation (f_1, …, f_m) is Pareto optimal if and only if ⋂_i π_i(f_i) ≠ ∅.

PROOF: First, suppose (f_1, …, f_m) is an interior Pareto optimal allocation. By the second welfare theorem, there exists p ∈ ℝ^S, p ≠ 0, supporting this allocation, that is, such that p · g ≥ p · f_i for all g ≿_i f_i and each i. By monotonicity, p > 0, thus after normalizing we may take p ∈ Δ(S). By definition, p ∈ π_i(f_i) for each i, hence ⋂_i π_i(f_i) ≠ ∅. For the other implication, take p ∈ ⋂_i π_i(f_i). By standard arguments, (f_1, …, f_m; p) is a Walrasian equilibrium in the exchange economy with endowments (f_1, …, f_m). By the first welfare theorem, (f_1, …, f_m) is Pareto optimal. Q.E.D.

11 A similar issue arises in the differing definitions of ambiguity found in the ambiguity aversion literature. One definition of ambiguity, owing to Ghirardato and Marinacci (2002), takes the SEU model as a benchmark and attributes all deviations from SEU to nonprobabilistic uncertainty aversion. Another definition, owing to Epstein (1999), uses the probabilistic sophistication model as a benchmark and hence attributes some deviations from SEU to probabilistic first-order risk aversion rather than nonprobabilistic uncertainty aversion.
This result provides a helpful tool to study mutual insurance and contracting between agents, regardless of the presence of aggregate uncertainty. The following example illustrates. Consider an exchange economy with two agents. The first agent has MEU preferences with set of priors P_1 and a linear utility index, while the second agent has SEU preferences with prior p_2, also with a linear utility index. Assume p_2 belongs to the relative interior of P_1 (and hence that P_1 has a nonempty relative interior).12 Thus this is an economy in which one agent is risk and ambiguity neutral, while the other is risk neutral but strictly ambiguity averse; moreover, the MEU agent is more ambiguity averse than the SEU agent, using the definition of Ghirardato, Maccheroni, and Marinacci (2004). In this case, an interior allocation is Pareto optimal if and only if it fully insures the ambiguity averse agent. This is because Proposition 7 implies an interior allocation f can be Pareto optimal if and only if p_2 ∈ π_1(f_1). If f_1 does not involve full insurance for agent 1, then π_1(f_1) will be the convex hull of a strict subset of the extreme points of P_1 and, in particular, will not contain p_2. Alternatively, at any constant bundle x_1, π_1(x_1) = P_1 ∋ p_2 and π_2(e − x_1) = {p_2}, so any such allocation is Pareto optimal. This result can be easily extended to the case in which agent 2 is also ambiguity averse, with MEU preferences given by the same utility index and a set P_2, provided P_2 is contained in the relative interior of P_1. Similarly, risk aversion can be introduced, although for given beliefs the result will fail for sufficiently high risk aversion.
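A numerical companion to this example (our hypothetical numbers; with a linear utility index, DU(f) is the identity, so π_1(f) is the convex hull of the priors in P_1 minimizing E_p f):

```python
import numpy as np

# Hypothetical two-state version of the example: agent 1 is MEU with linear
# utility and prior set P1; agent 2 is SEU with prior p2 in the relative
# interior of P1. With linear utility, pi_1(f) is the convex hull of the
# minimizers of E_p f over P1; the grid below tracks those minimizers.
P1 = [np.array([p, 1 - p]) for p in np.linspace(0.3, 0.7, 41)]
p2 = np.array([0.5, 0.5])

def pi_1(f):
    vals = [p @ f for p in P1]
    m = min(vals)
    return [p for p, v in zip(P1, vals) if np.isclose(v, m)]

x1 = np.full(2, 5.0)                               # agent 1 fully insured:
print(any(np.allclose(p, p2) for p in pi_1(x1)))   # True, pi_1(x1) = P1 owns p2

f1 = np.array([6.0, 4.0])                          # agent 1 bears risk: only the
print(any(np.allclose(p, p2) for p in pi_1(f1)))   # extreme prior (0.3, 0.7)
# supports f1, so p2 is excluded and, by Proposition 7, the allocation is
# not Pareto optimal.
```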
12 By relative interior, here we mean relative to the affine hull of P_1.

Our main results seek to characterize the desire for insurance and the willingness to bet as a function of shared beliefs alone. To isolate the effects of beliefs, we first rule out aggregate uncertainty by taking the aggregate endowment e to be constant across states. In addition, we must rule out pure indifference to betting, as might occur in an SEU setting with risk-neutral agents. The following two axioms guarantee that such indifference to betting is absent.

AXIOM 5—Strong Monotonicity: For all f ≠ g, if f ≥ g, then f ≻ g.

AXIOM 6—Strict Convexity: For all f ≠ g and α ∈ (0, 1), if f ≿ g, then αf + (1 − α)g ≻ g.

Finally, we focus on preferences for which local trade-offs in the absence of uncertainty are independent of the (constant) level of consumption. These preferences are characterized by the fact that the directions of local improvement, starting from a constant bundle at which uncertainty is absent, are independent of the particular constant.

AXIOM 7—Translation Invariance at Certainty: For all g ∈ ℝ^S and all constant bundles x, x′ > 0, if x + λg ≿ x for some λ > 0, then there exists λ′ > 0 such that x′ + λ′g ≿ x′.
This axiom will be satisfied by all of the main classes of preferences we have considered. A simple example violating this axiom is the SEU model with state-dependent utility; in this case, the slopes of indifference curves can change along the 45° line. In fact, in the class of SEU preferences, Axiom 7 is equivalent to a state-independent and differentiable utility function. We show below that for a convex preference relation, translation invariance at certainty suffices to ensure that subjective beliefs are constant across constant bundles.

PROPOSITION 8: Let ≿ be a convex preference relation satisfying Axiom 7. Then π(x) = π(x′) for all constant acts x, x′ > 0.

By this result, we can write π in place of π(x) when translation invariance at certainty is satisfied; we maintain this notational simplification below. Our main result follows. For any collection of convex preferences satisfying translation invariance at certainty, the sets π_i of subjective beliefs contain all of the information needed to predict the presence or absence of purely speculative trade. Regardless of other features of the representation of preferences, the existence of a common subjective belief, understood to mean ⋂_i π_i ≠ ∅, characterizes the efficiency of full insurance. Moreover, these results can be understood as straightforward consequences of the basic welfare theorems.

PROPOSITION 9: If the aggregate endowment is constant across states and ≿_i satisfies Axioms 1–7 for each i, then the following statements are equivalent:
(i) There exists an interior full insurance Pareto optimal allocation.
(ii) Any Pareto optimal allocation is a full insurance allocation.
(iii) Every full insurance allocation is Pareto optimal.
(iv) ⋂_i π_i ≠ ∅.

PROOF: We show the sequence of implications (i) ⇒ (iv) ⇒ (ii) ⇒ (iii) ⇒ (i).
(i) ⇒ (iv) Suppose that x = (x_1, …, x_m) is an interior full insurance allocation that is Pareto optimal. By the second welfare theorem, there exists p ≠ 0 such that p supports the allocation x, that is, such that for each i, p · f ≥ p · x_i for all f ≿_i x_i. By monotonicity, p > 0, so after normalizing we can take p ∈ Δ(S). By definition, p ∈ π_i for all i; hence ⋂_i π_i ≠ ∅.
(iv) ⇒ (ii) Let p ∈ ⋂_i π_i and suppose f is a Pareto optimal allocation such that f_j is not constant for some j. Define x_i := E_p f_i for each i. By strict monotonicity, p ≫ 0. Thus x_i ≥ 0 for all i and x_i = 0 ⟺ f_i = 0. Since p ∈ ⋂_{i : x_i > 0} π_i(x_i) = ⋂_{i : x_i > 0} π_i^u(x_i), x_i ≿_i f_i for all i and, by strict convexity, x_j ≻_j f_j. Then the allocation x = (x_1, …, x_m) is feasible, and Pareto dominates f, which is a contradiction.
(ii) ⇒ (iii) Suppose that x is a full insurance allocation that is not Pareto optimal. Then there is a Pareto optimal allocation f that Pareto dominates x. By (ii), f must be a full insurance allocation, which is a contradiction.
(iii) ⇒ (i) The allocation ((1/m)e, …, (1/m)e) is an interior full insurance allocation. By (iii) it is Pareto optimal. Q.E.D.

FIGURE 2.—Full insurance and common subjective beliefs.

Figure 2 illustrates Proposition 9 using an Edgeworth box: x is a full insurance allocation, and the two individuals' preferences and subjective beliefs are drawn in black and gray. One can easily verify that x is Pareto optimal and that the intersection of the subjective beliefs is not empty in this case.

REMARK 2: Billot, Chateauneuf, Gilboa, and Tallon (2000) derived a version of this result for the particular case of maxmin preferences using an ingenious separation argument.13 In this case, the common prior condition (iv) becomes the intuitive condition ⋂_i P_i ≠ ∅.14 Billot, Chateauneuf, Gilboa, and Tallon (2000) also considered the case of an infinite state space. In the Appendix, we show that our result can be similarly extended to an infinite state space, although the argument is somewhat more delicate.

We view a main contribution of our result (and its extension to the infinite state space case) not as establishing the link between efficiency and notions of common priors per se, but as illustrating that these results are a simple consequence of the welfare theorems linking Pareto optimality to the existence of linear functionals providing a common support to agents' preferred sets, coupled with the particular form these supports take for various classes of preferences.

13 In Billot, Chateauneuf, Gilboa, and Tallon (2000) there is an imprecision in the proof that (ii) ⇒ (iii), which implicitly uses condition (iv).
14 See Kajii and Ui (2006) for related results regarding purely speculative trade and no-trade theorems.
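As a concrete check of the content of condition (iv), consider the simplest possible case, two SEU agents with log utility (a hypothetical design of ours): when the agents' priors differ, the full insurance allocation is dominated by a bet; when they agree, no bet helps.

```python
import numpy as np

# Hypothetical two-agent, two-state check of Proposition 9 for SEU agents
# with log utility. Aggregate endowment is constant at (2, 2); the candidate
# full insurance allocation gives each agent (1, 1), with utility log(1) = 0.
def seu(p, f):
    return p @ np.log(f)

def improving_bet(p1, p2):
    for t in np.linspace(0.01, 0.99, 99):       # agent 1 takes (1+t, 1-t),
        f1 = np.array([1 + t, 1 - t])           # agent 2 the opposite side
        f2 = np.array([1 - t, 1 + t])
        if seu(p1, f1) > 0 and seu(p2, f2) > 0:
            return t
    return None

print(improving_bet(np.array([0.7, 0.3]),
                    np.array([0.3, 0.7])))   # a bet exists: (iv) fails
print(improving_bet(np.array([0.5, 0.5]),
                    np.array([0.5, 0.5])))   # None: with a common prior,
                                             # full insurance is Pareto optimal
```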
Proposition 9 can be articulated in the language of the specific functional forms discussed in Section 2.4. For SEU preferences, condition (iv) becomes the standard common prior assumption, whereas for MEU preferences, we recover the result of Billot, Chateauneuf, Gilboa, and Tallon (2000). For the smooth model of Klibanoff, Marinacci, and Mukerji (2005), condition (iv) means that the expected measures have to coincide, while for the variational preferences of Maccheroni, Marinacci, and Rustichini (2006) the sets of measures with zero cost have to intersect. Interestingly, it follows that for Hansen and Sargent (2001) multiplier preferences condition (iv) means that the reference measures coincide. Finally, we note that extending Propositions 7 and 9 to allow for incomplete preferences is fairly straightforward, after appropriately modifying Axioms 1 and 2.15

APPENDIX A: INFINITE STATE SPACE

Now we imagine that the state space S may be infinite, and let Σ be a σ-algebra of measurable subsets of S. Let B(S, Σ) be the space of all real-valued, bounded, and measurable functions on S, endowed with the sup-norm topology. Let ba(S, Σ) be the space of bounded, finitely additive measures on (S, Σ), endowed with the weak∗ topology, and let Δ be the subset of finitely additive probabilities. As in the finite case, we let F denote the set of acts, which is now B(S, Σ)_+. We continue to use x ∈ ℝ_+ interchangeably for the constant act delivering x in each state s. For an act f, a constant x ∈ ℝ_+, and an event E ⊂ S, let xEf denote the act such that

(xEf)(s) = x if s ∈ E,   and   (xEf)(s) = f(s) if s ∉ E.

The goal of this section is to establish an analogue of our main result regarding the connection between the efficiency of full insurance and the existence of shared beliefs, Proposition 9, for infinite state spaces. Our work in Section 3 renders this analogue fairly straightforward by highlighting the close link between these results and the fundamental welfare theorems, appropriate versions of which hold in infinite-dimensional settings as well. Because topological issues are often subtle in infinite-dimensional spaces due to the multiplicity of nonequivalent topologies, we begin by emphasizing the meaning of our basic continuity axiom in this setting.

15 A similar observation was made by Rigotti and Shannon (2005), while a recent paper by Mandler (2006) studied Pareto optima for general incomplete preferences.
AXIOM 2—Continuity: For all f ∈ F, the sets {g ∈ F | g ≿ f} and {g ∈ F | f ≿ g} are closed in the sup-norm topology.

To accommodate an infinite state space, we will need several additional axioms that serve to restrict agents' beliefs, first by ensuring that beliefs are countably additive, and second by ensuring that beliefs are all mutually absolutely continuous, both for a given agent and between different agents. To that end, consider the following axioms:

AXIOM 8—Countable Additivity: For each f, each p ∈ π(f) is countably additive.

AXIOM 9—Mutual Absolute Continuity: If xEf ∼ f for some event E and some acts x, f with x > sup f, then yEg ∼ g for every y and every act g.

PROPOSITION 10: Let ≿ be monotone, continuous, convex, and satisfy mutual absolute continuity. If f, g are acts such that inf f, inf g > 0, then π(f) and π(g) contain only measures that are mutually absolutely continuous.

PROOF: Suppose, by way of contradiction, that acts f, g with inf f, inf g > 0, an event E, and measures p ∈ π(f), p̄ ∈ π(g) are such that p(E) = 0 while p̄(E) > 0. Choose x > sup f. By monotonicity, x ≻ f and xEf ≿ f. Since p(E) = 0,

p · (xEf) = p · f.

Together with p ∈ π(f) this implies xEf ∼ f. Choose y such that y < inf g. By mutual absolute continuity, yEg ∼ g. Since p̄(E) > 0,

p̄ · (yEg) < p̄ · g.

But p̄ ∈ π(g), which yields a contradiction.
Q.E.D.
The same argument will show that if mutual absolute continuity holds across agents, then all beliefs of all agents are mutually absolutely continuous. We say that a collection {≿_i : i = 1, …, m} of preference orders on F satisfies mutual absolute continuity if whenever xEf ∼_i f for some agent i, some event E, and some x > sup f, then yEg ∼_j g for every agent j, every y, and every act g.

PROPOSITION 11: Let ≿_i be monotone, continuous, and convex for each i, and let {≿_i : i = 1, …, m} satisfy mutual absolute continuity. Then for every i, j and any acts f, g such that inf f, inf g > 0, π_i(f) and π_j(g) contain only measures that are mutually absolutely continuous.
Mutual absolute continuity is a strong assumption, and is close to the desired conclusion of mutual absolute continuity of agents' beliefs. Without more structure on preferences, however, it does not seem possible to weaken it: absent the additional structure available in various representations, nothing ties together beliefs at different acts, which gives us very little to work with for general convex preferences. In contrast, in particular special cases, much weaker conditions would suffice to deliver the same conclusion. For example, Epstein and Marinacci (2007) showed that a version of the modularity condition of Kreps (1979) is equivalent to mutual absolute continuity of priors in the MEU model.

For a complete analogue of our main result regarding the connection between common priors and the absence of betting, we must ensure that individually rational Pareto optimal allocations exist given any initial endowment allocation. This is needed to show that (ii) ⇒ (iii) in Proposition 9 without the additional assumption of a common prior, that is, to show that if every Pareto optimal allocation must involve full insurance, then all full insurance allocations are in fact Pareto optimal. Since no two full insurance allocations can be Pareto ranked, this conclusion will follow immediately from the existence of individually rational Pareto optimal allocations. Instead, Billot, Chateauneuf, Gilboa, and Tallon (2000) used the existence of a common prior, condition (iv), to argue that any Pareto improvement must itself be Pareto dominated by the full insurance allocation with consumption equal to the expected values, computed with respect to some common prior. In the finite state space case, it is straightforward to give an alternative argument that does not make use of the common prior condition. If a full insurance allocation is not Pareto optimal, then there must exist a Pareto optimal allocation that dominates it, as a consequence of the existence of individually rational Pareto optimal allocations. When all Pareto optimal allocations involve full insurance, this leads to a contradiction that establishes the desired implication.

With an infinite state space, the existence of individually rational Pareto optimal allocations is more delicate. Typically, this existence is derived from continuity of preferences in some topology in which order intervals, and hence sets of feasible allocations, are compact. In our setting, such topological assumptions are problematic, as order intervals in B(S, Σ) fail to be compact in topologies sufficiently strong to make continuity a reasonable and not overly restrictive assumption. Instead we give a more subtle argument that makes use of countable additivity and mutual absolute continuity to give an equivalent formulation of the problem recast in L^∞(S, Σ, μ) for an appropriately chosen measure μ.

More precisely, suppose that {≿_i : i = 1, …, m} satisfy mutual absolute continuity. Choose a measure μ ∈ π_1(x) for some constant x. We can extend each ≿_i to L^∞(S, Σ, μ)_+ in the natural way, first by embedding B(S, Σ)_+ in L^∞(S, Σ, μ)_+ via the identification of an act f with its equivalence class [f] ∈ L^∞(S, Σ, μ)_+, and then by noticing that a preference order satisfying our basic axioms will be indifferent over any acts f, f′ ∈ B(S, Σ)_+ such that f′ ∈ [f]. This
allows us to extend each preference order ≿_i to L^∞(S, Σ, μ)_+ in the natural way, by defining [f] ≿_i [g] ⟺ f ≿_i g for any f, g ∈ B(S, Σ)_+. Similarly, given a utility representation V_i of ≿_i on B(S, Σ)_+, define V̄_i : L^∞(S, Σ, μ)_+ → ℝ by V̄_i([f]) = V_i(f) for each f ∈ B(S, Σ)_+. With this recasting of the problem, the existence of individually rational Pareto optimal allocations follows from an additional type of continuity.

AXIOM 10—Countable Continuity: There exist x̄ and μ ∈ π(x̄) such that for all g, f, x ∈ F, if {f^α} is a net in F with f^α ≿ x and f^α ≤ g for all α, and q · f^α → q · f for all q ∈ ca(S, Σ) such that q ≪ μ, then f ≿ x.

PROPOSITION 12: Let ≿_i be monotone, continuous, countably continuous, countably additive, and convex for each i, and let {≿_i : i = 1, …, m} satisfy mutual absolute continuity. For any initial endowment allocation (e_1, …, e_m), individually rational Pareto optimal allocations exist.

PROOF: Fix a constant act x > 0 and choose a measure μ ∈ π_1(x). If f and g are μ-equivalent, so that μ({s : f(s) ≠ g(s)}) = 0, then f ∼_i g for each i. To see this, fix μ-equivalent acts f and g, and an agent i. Without loss of generality suppose g ≿_i f. First suppose that inf f, inf g > 0. In this case, every p ∈ π_i(f) is absolutely continuous with respect to μ, so

p · g = p · f   for all p ∈ π_i(f).
Thus f ≿_i g and we conclude f ∼_i g as desired. For the general case, consider the sequence of constant acts {x_n} with x_n = 1/n for each n: inf x_n > 0 for each n while x_n → 0 in the sup-norm topology. For each n, the acts f + x_n and g + x_n are μ-equivalent, and inf(f + x_n), inf(g + x_n) > 0. By the previous argument, f + x_n ∼_i g + x_n for each n, and by continuity, f ∼_i g as desired.

For each i, extend V_i to L^∞(S, Σ, μ)_+ using this observation, by defining V̄_i([f]) := V_i(f) for each f ∈ B(S, Σ)_+. Fix an initial endowment allocation (e_1, …, e_m), and set e := ∑_i e_i. By the Banach–Alaoglu theorem, the order interval [0, e] is weak∗ compact in L^∞(S, Σ, μ)_+, and by mutual absolute continuity and countable continuity, V̄_i is weak∗ upper semicontinuous on [0, e]. From this it follows by standard arguments that for every initial endowment allocation (e_1, …, e_m), an individually rational Pareto optimal allocation exists; for completeness we reproduce an argument from Boyd (1995); see also Theorem 1.5.3 in Aliprantis, Brown, and Burkinshaw (1989). Define a preorder ≽ on the compact set of feasible allocations

A := { f ∈ [L^∞(S, Σ, μ)_+]^m : ∑_i f_i = e }
as follows. Given feasible allocations (f_1, …, f_m) and (g_1, …, g_m), define f ≽ g if f_i ≿_i g_i for each i. Set

B(g) := {f ∈ A : f ≽ g}

and

S := B((e_1, …, e_m)) = {f ∈ A : f ≽ (e_1, …, e_m)}.

Let R be a chain in S. For any finite subset R̄ of R, ⋂_{g∈R̄} B(g) = B(max_≽ R̄) is nonempty, by transitivity. Thus {B(g) : g ∈ R} has the finite intersection property. Each B(g) is weak∗ closed; hence, by compactness of A, ⋂_{g∈R} B(g) ≠ ∅, and any element of ⋂_{g∈R} B(g) provides an upper bound for R. By Zorn's lemma for preordered sets (see, e.g., Megginson (1998, p. 6)), S has a maximal element, which is then an individually rational Pareto optimal allocation. Q.E.D.

With this in place, we turn to the infinite version of Proposition 9. The proof is analogous, making use of an infinite-dimensional version of the second welfare theorem and our previous result establishing the existence of individually rational Pareto optimal allocations in our model. As in the finite case, the aggregate endowment e is constant, with e > 0; hence inf e > 0. We say that f = (f_1, …, f_m) ∈ F^m is a norm-interior allocation if inf f_i > 0 for i = 1, 2, …, m.

PROPOSITION 13: Let {≿_i : i = 1, …, m} satisfy Axioms 1–10 and mutual absolute continuity. Then the following statements are equivalent:
(i) There exists a norm-interior full insurance Pareto optimal allocation.
(ii) Any Pareto optimal allocation is a full insurance allocation.
(iii) Every full insurance allocation is Pareto optimal.
(iv) ⋂_i π_i ≠ ∅.

PROOF: As in the proof of Proposition 9, we show the sequence of implications (i) ⇒ (iv) ⇒ (ii) ⇒ (iii) ⇒ (i).
(i) ⇒ (iv) Suppose that x = (x_1, …, x_m) is a norm-interior full insurance allocation that is Pareto optimal. Each x_i is contained in the norm interior of B(S, Σ)_+; hence by the second welfare theorem, there exists p ∈ ba(S, Σ) with p ≠ 0 such that p supports the allocation x, that is, such that for each i, p · f ≥ p · x_i for all f ≿_i x_i. By monotonicity, p > 0, so after normalizing we can take p ∈ Δ. By definition p ∈ π_i for all i; hence ⋂_i π_i ≠ ∅.
(iv) ⇒ (ii) Let p ∈ ⋂_i π_i and suppose f is a Pareto optimal allocation such that f_j is not constant for some j. Define x_i := E_p f_i for each i. By strict monotonicity, p is strictly positive, that is, p · g > 0 for any act g > 0. Together with countable additivity, this yields x_i ≥ 0 for all i, and x_i = 0 ⟺ f_i = 0.
Since p ∈ ⋂_{i : x_i > 0} π_i(x_i) = ⋂_{i : x_i > 0} π_i^u(x_i), x_i ≿_i f_i for all i, and by strict convexity, x_j ≻_j f_j. Then the allocation x = (x_1, …, x_m) is feasible and Pareto dominates f, which is a contradiction.
(ii) ⇒ (iii) Suppose that x is a full insurance allocation that is not Pareto optimal. Using Proposition 12, there must be a Pareto optimal allocation f that Pareto dominates x. By (ii), f must be a full insurance allocation, which is a contradiction.
(iii) ⇒ (i) The allocation ((1/m)e, …, (1/m)e) is a norm-interior full insurance allocation. By (iii), it is Pareto optimal. Q.E.D.

We close with an example illustrating how the additional axioms arising in the infinite state space case might naturally be satisfied. We consider the version of the MEU model studied by Billot, Chateauneuf, Gilboa, and Tallon (2000). They considered an MEU model in which each agent i has a weak∗ closed, convex set of priors P_i ⊂ ba(S, Σ) consisting only of countably additive measures, and a utility index u_i : ℝ_+ → ℝ that is strictly increasing, strictly concave, and differentiable. In addition, they assumed that all measures in P_i and P_j are mutually absolutely continuous for all i and j. It is straightforward to verify that P_i = π_i for each i, as in the finite state case, and that the model satisfies countable additivity. To verify mutual absolute continuity, suppose that x > sup f but xEf ∼_i f for some event E and some agent i. Using Theorems 3 and 5 of Epstein and Marinacci (2007), there must exist p ∈ P_i such that p(E) = 0. Because all measures in P_i and P_j for any other j are assumed to be mutually absolutely continuous, it must be the case that p(E) = 0 for any p ∈ P_j for any agent j, which guarantees that yEg ∼_j g for all j and any other acts y, g.

To see that continuity and countable continuity are also satisfied, first take {f^n}, f in F with ‖f^n − f‖ → 0. Then

|V_i(f^n) − V_i(f)| = |min_{p∈π_i} E_p(u_i(f^n)) − min_{p∈π_i} E_p(u_i(f))| ≤ max{ |E_{p_n^∗}(u_i(f^n) − u_i(f))|, |E_{p^∗}(u_i(f^n) − u_i(f))| },

where p_n^∗ ∈ M(f^n) and p^∗ ∈ M(f).16 Since ‖u_i(f^n) − u_i(f)‖ → 0, |V_i(f^n) − V_i(f)| → 0 and the desired conclusion follows.

Next, to see that countable continuity is also satisfied, fix μ ∈ π_1 and an agent i. Take g, f, x ∈ F and a net {f^α} in F with f^α ≿_i x and f^α ≤ g for all α. Notice that it suffices to show that the set {f ∈ L^∞(S, Σ, μ)_+ : f ≿_i x, f ∈ [0, g]} is σ(L^∞(S, Σ, μ), L^1(S, Σ, μ))-closed, with ≿_i and acts recast in L^∞(S, Σ, μ) as in Proposition 12. Using convexity, this is equivalent to showing that this set is closed in the Mackey topology τ := τ(L^∞(S, Σ, μ), L^1(S, Σ, μ)). Thus
16 As in the finite state space case, M(f) := argmin_{p∈π_i} E_p(u_i(f)).
suppose f^α →_τ f. By way of contradiction, suppose that x ≻_i f; thus V_i(x) = E_{p^∗}(u_i(x)) > E_{p^∗}(u_i(f)), where as above p^∗ ∈ M(f). Then for every α,

E_{p^∗}(u_i(f^α)) ≥ V_i(f^α) ≥ E_{p^∗}(u_i(x)) > E_{p^∗}(u_i(f)),

while

0 < E_{p^∗}(u_i(x)) − E_{p^∗}(u_i(f)) ≤ E_{p^∗}(u_i(f^α)) − E_{p^∗}(u_i(f)) = E_{p^∗}(u_i(f^α) − u_i(f)) ≤ E_{p^∗}|u_i(f^α) − u_i(f)| ≤ E_{p^∗}(K|f^α − f|)

for some K > 0, where the last inequality follows from the assumption that u_i is strictly concave, strictly increasing, and differentiable, hence Lipschitz continuous. Since τ is locally solid, |f^α − f| →_τ 0, from which it follows that |f^α − f| →_{w∗} 0 as well. Since p^∗ ≪ μ and p^∗ is countably additive, by appealing to the Radon–Nikodym theorem, E_{p^∗}(K|f^α − f|) → 0. As this yields a contradiction, f ≿_i x as desired.

APPENDIX B: PROOFS

We will use the facts that {g | g ≻ f} = int{g | g ≿ f} and {g | g ≿ f} = cl{g | g ≻ f}. Let ⟨f, g⟩ denote the inner product of f and g, and let ∂I be the superdifferential of a concave function I.

PROOF OF PROPOSITION 1: Using continuity, monotonicity, and convexity, standard arguments yield the equivalence of π(f) and π^u(f) for any strictly positive act f. To show that π(f) = π^w(f) as well, we first observe that by definition, the set π(f) is the set of normals to the convex upper contour set B(f) := {g ∈ ℝ^S : g ≿ f} at f, normalized to lie in Δ(S). Let T_{B(f)}(f) denote the tangent cone to B(f) at f, which is given by

T_{B(f)}(f) = {g ∈ ℝ^S : f + λg ≿ f for some λ > 0}.

From standard convex analysis results, π(f) is also the set of normals to T_{B(f)}(f), again normalized to lie in Δ(S). Thus π(f) = {p ∈ Δ(S) : p · g ≥ 0 for all g ∈ T_{B(f)}(f)} and g ∈ T_{B(f)}(f) ⟺ p · g ≥ 0 for all p ∈ π(f). Then

g ∈ T_{B(f)}(f) + {f} = {h ∈ ℝ^S : (1 − ε)f + εh ≿ f for some ε > 0} ⟺ p · g ≥ p · f for all p ∈ π(f).

Thus π(f) = π^w(f). Q.E.D.
For many of the results in the section on special cases, we make use of the following lemma.

LEMMA 1: Assume that ≿ satisfies Axioms 1–4 and the representation V of ≿ is concave. Then π(f) = π^∂(f) := { q/‖q‖ | q ∈ ∂V(f) }.

PROOF: First, we show that π^∂(f) ⊆ π(f). Let p = q/‖q‖ for some q ∈ ∂V(f). Let V(g) ≥ V(f). We have 0 ≤ V(g) − V(f) ≤ ⟨q, g − f⟩, hence ⟨q, f⟩ ≤ ⟨q, g⟩, so E_p g ≥ E_p f.

Second, we show that π^∂(f) ∈ 𝒫(f), thus π^w(f) ⊆ π^∂(f). Let g be such that E_p g > E_p f for all p ∈ π^∂(f). We need to find ε > 0 with V(εg + (1 − ε)f) > V(f). The one-sided directional derivatives V′(f; h) exist for all h ∈ ℝ^S, and V′(f; h) = min{⟨l, h⟩ | l ∈ ∂V(f)}.17 Hence, for some q ∈ ∂V(f),
V(εg + (1 − ε)f) = V(f + ε(g − f)) = V(f) + εV′(f; g − f) + o(ε) = V(f) + ε min{⟨l, g − f⟩ | l ∈ ∂V(f)} + o(ε) = V(f) + ε⟨q, g − f⟩ + o(ε) = V(f) + ε[⟨q, g − f⟩ + o(1)].

Because q = ‖q‖p for some p ∈ π^∂(f), ⟨q, g − f⟩ = ‖q‖ E_p(g − f) > 0. Therefore, there exists a δ > 0 such that for all ε ∈ (0, δ), ε[E_p(g − f) + o(1)] > 0, hence V(εg + (1 − ε)f) > V(f). Q.E.D.

PROOF OF PROPOSITION 3: It follows from the proof of Theorem 3 in Maccheroni, Marinacci, and Rustichini (2006) that I(ξ) = min_{p∈Δ(S)}(E_p ξ + c(p)) is concave. This, together with concavity of u, yields the concavity of V. Continuity and monotonicity follow from the fact that I is monotonic and sup-norm Lipschitz continuous. By Theorem 18 of Maccheroni, Marinacci, and Rustichini (2006),

∂V(f) = {q ∈ ℝ^S : q = pDU(f) for some p ∈ M(f)}.

The result follows from Lemma 1.
Q.E.D.
PROOF OF PROPOSITION 2: This follows from Proposition 3 by noting that MEU is the special case of variational preferences for which c(p) = 0 if p ∈ P and c(p) = ∞ if p ∉ P. Q.E.D.

17 Theorem 23.4 of Rockafellar (1970) implies that V′(f; h) = inf{⟨l, h⟩ | l ∈ ∂V(f)} for all h. Because V is a proper concave function, ∂V(f) is a compact set, hence the infimum is achieved.
PROOF OF PROPOSITION 4: It follows from Lemma 8 in Chateauneuf and Faro (2006) that I(ξ) = min_{p∈L_α} (1/ϕ(p)) E_p ξ is concave. This, together with concavity of u, yields the concavity of V. Continuity and monotonicity follow from the fact that I is monotonic and sup-norm Lipschitz continuous (see Lemma 6 in Chateauneuf and Faro (2006)). By Clarke (1983, Sect. 2.8, Corollary 2),

∂V(f) = {q ∈ ℝ^S : q = pDU(f) for some p ∈ M(f)}.

The result follows from Lemma 1.
Q.E.D.
PROOF OF PROPOSITION 5: Continuity, monotonicity, and convexity are routine. When u and φ are concave and differentiable, it is straightforward to see that V is also concave and differentiable, and that ∂V(f) = {DV(f)} = {E_μ[φ′(E_p u(f)) pDU(f)]}. Q.E.D.

PROOF OF PROPOSITION 6: Continuity, monotonicity, and convexity are routine. When u and φ are concave and differentiable, it is straightforward to see that V is also concave and differentiable. A direct calculation of directional derivatives reveals that ∂V(f) = {DV(f)} = {pDU(f)[I_a ⊗ D(φ′(E_a u(f)))]}. Q.E.D.

PROOF OF PROPOSITION 8: Fix constant acts x, x′ > 0 and let B(x) := {f ∈ ℝ_+^S : f ≿ x} denote the upper contour set of ≿ at x. As in the proof of Proposition 1, let T_{B(x)}(x) denote the tangent cone to B(x) at x:
T_{B(x)}(x) = {g ∈ ℝ^S : x + λg ≿ x for some λ > 0}.

Again as in the proof of Proposition 1, π(x) is the normal cone to T_{B(x)}(x) (normalized to lie in Δ(S)), and analogously for π(x′). By translation invariance at certainty, T_{B(x)}(x) = T_{B(x′)}(x′), from which we conclude that π(x) = π(x′). Q.E.D.

REFERENCES

ALIPRANTIS, C. D., D. J. BROWN, AND O. BURKINSHAW (1989): Existence and Optimality of Competitive Equilibria. New York: Springer. [1183]
BILLOT, A., A. CHATEAUNEUF, I. GILBOA, AND J.-M. TALLON (2000): "Sharing Beliefs: Between Agreeing and Disagreeing," Econometrica, 68, 685–694. [1167,1179,1180,1182,1185]
BOYD, J. (1995): "The Existence of Equilibrium in Infinite-Dimensional Spaces: Some Examples," Mimeo, University of Rochester. [1183]
CHAMBERS, R. G., AND J. QUIGGIN (2002): "Primal and Dual Approaches to the Analysis of Risk Aversion," Mimeo, Australian National University. [1169]
CHATEAUNEUF, A., AND J. H. FARO (2006): "Ambiguity Through Confidence Functions," Mimeo, Université de Paris. [1167,1173,1188]
CHATEAUNEUF, A., AND J.-M. TALLON (2002): "Diversification, Convex Preferences and Nonempty Core in the Choquet Expected Utility Model," Economic Theory, 19, 509–523. [1169]
CLARKE, F. (1983): Optimization and Nonsmooth Analysis. New York: Wiley. [1188]
DEKEL, E. (1989): "Asset Demand Without the Independence Axiom," Econometrica, 57, 163–169. [1169]
DOW, J., AND S. R. WERLANG (1992): "Uncertainty Aversion, Risk Aversion, and the Optimal Choice of Portfolio," Econometrica, 60, 197–204. [1176]
EPSTEIN, L. G. (1999): "A Definition of Uncertainty Aversion," Review of Economic Studies, 66, 579–608. [1176]
EPSTEIN, L. G., AND M. MARINACCI (2007): "Mutual Absolute Continuity of Multiple Priors," Journal of Economic Theory, 137, 716–720. [1182,1185]
ERGIN, H., AND F. GUL (2004): "A Subjective Theory of Compound Lotteries," Mimeo, Princeton University. [1167,1174,1175]
GHIRARDATO, P., AND M. MARINACCI (2001): "Risk, Ambiguity, and the Separation of Utility and Beliefs," Mathematics of Operations Research, 26, 864–890. [1172]
——— (2002): "Ambiguity Made Precise: A Comparative Foundation," Journal of Economic Theory, 102, 251–289. [1176]
GHIRARDATO, P., F. MACCHERONI, AND M. MARINACCI (2004): "Differentiating Ambiguity and Ambiguity Attitude," Journal of Economic Theory, 118, 133–173. [1172,1177]
GILBOA, I., AND D. SCHMEIDLER (1989): "Maxmin Expected Utility With Non-Unique Prior," Journal of Mathematical Economics, 18, 141–153. [1167,1169]
GRANT, S., AND A. KAJII (2005): "Probabilistically Sophisticated Multiple Priors," Working Paper 608, KIER. [1175]
HANSEN, L. P., AND T. J. SARGENT (2001): "Robust Control and Model Uncertainty," American Economic Review: Papers and Proceedings, 91, 60–66. [1167,1172,1173,1180]
KAJII, A., AND T. UI (2006): "Agreeable Bets With Multiple Priors," Journal of Economic Theory, 128, 299–305. [1179]
KLIBANOFF, P., M. MARINACCI, AND S. MUKERJI (2005): "A Smooth Model of Decision Making Under Ambiguity," Econometrica, 73, 1849–1892. [1167,1174,1180]
KREPS, D. (1979): "A Representation Theorem for 'Preference for Flexibility'," Econometrica, 47, 565–578. [1182]
MACCHERONI, F., M. MARINACCI, AND A. RUSTICHINI (2006): "Ambiguity Aversion, Robustness, and the Variational Representation of Preferences," Econometrica, 74, 1447–1498. [1167,1172,1180,1187]
MACHINA, M. J., AND D. SCHMEIDLER (1992): "A More Robust Definition of Subjective Probability," Econometrica, 60, 745–780. [1175]
MANDLER, M. (2006): "Welfare Economics With Status Quo Bias: A Policy Paralysis Problem and Cure," Working Paper, Royal Holloway College. [1180]
MEGGINSON, R. E. (1998): An Introduction to Banach Space Theory. New York: Springer. [1184]
NAU, R. F. (2006): "Uncertainty Aversion With Second-Order Utilities and Probabilities," Management Science, 52, 136–145. [1167,1174]
QUIGGIN, J. (1982): "A Theory of Anticipated Utility," Journal of Economic Behavior and Organization, 3, 323–343. [1175]
RIGOTTI, L., AND C. SHANNON (2005): "Uncertainty and Risk in Financial Markets," Econometrica, 73, 203–243. [1180]
ROCKAFELLAR, T. (1970): Convex Analysis. Princeton, NJ: Princeton University Press. [1187]
SCHMEIDLER, D. (1989): "Subjective Probability and Expected Utility Without Additivity," Econometrica, 57, 571–587. [1167]
SEGAL, U. (1990): "Two-Stage Lotteries Without the Reduction Axiom," Econometrica, 58, 349–377. [1174]
SEGAL, U., AND A. SPIVAK (1990): "First Order versus Second Order Risk Aversion," Journal of Economic Theory, 51, 111–125. [1176]
STRZALECKI, T. (2007): "Axiomatic Foundations of Multiplier Preferences," Mimeo, Northwestern University. [1173]
YAARI, M. E. (1969): "Some Remarks on Measures of Risk Aversion and on Their Uses," Journal of Economic Theory, 1, 315–329. [1169,1171]
——— (1987): "The Dual Theory of Choice Under Risk," Econometrica, 55, 95–115. [1175]
Fuqua School of Business, Duke University, 1 Towerview Drive, Durham, NC 27708, U.S.A.; [email protected], Dept. of Economics, University of California, Berkeley, Evans Hall, Berkeley, CA 94720-3880, U.S.A.; [email protected], and Dept. of Economics, Northwestern University, 2001 Sheridan Road, Evanston, IL 60208, U.S.A.; [email protected]. Manuscript received January, 2008; final revision received April, 2008.
Econometrica, Vol. 76, No. 5 (September, 2008), 1191–1206
IDENTIFICATION OF TREATMENT EFFECTS USING CONTROL FUNCTIONS IN MODELS WITH CONTINUOUS, ENDOGENOUS TREATMENT AND HETEROGENEOUS EFFECTS

BY J. P. FLORENS, J. J. HECKMAN, C. MEGHIR, AND E. VYTLACIL1

We use the control function approach to identify the average treatment effect and the effect of treatment on the treated in models with a continuous endogenous regressor whose impact is heterogeneous. We assume a stochastic polynomial restriction on the form of the heterogeneity, but unlike alternative nonparametric control function approaches, our approach does not require large support assumptions.

KEYWORDS: Continuous treatments, endogenous treatments, heterogeneous treatment effects, identification, nonseparable models, control function.
1. INTRODUCTION

THERE IS A LARGE AND GROWING theoretical and empirical literature on models where the impacts of discrete (usually binary) treatments are heterogeneous in the population.2 The objective of this paper is to analyze nonparametric identification of treatment effect models with continuous treatments when the treatment intensity is not randomly assigned. This generally leads to models that are nonseparable in the unobservables and produces heterogeneous treatment intensity effects. Imposing a stochastic polynomial assumption on the heterogeneous effects, we use a control function approach to obtain identification without large support assumptions. Our approach has applications in a wide variety of problems, including demand analysis, where price elasticities may differ across individuals; labor supply, where wage effects may be heterogeneous; or production functions, where the technology may vary across firms.

Other recent papers on semiparametric and nonparametric models with nonseparable error terms and an endogenous, possibly continuous, covariate include papers using quantile instrumental variable methods such as Chernozhukov and Hansen (2005) and Chernozhukov, Imbens, and Newey

1 We thank two anonymous referees and Whitney Newey for their comments. We also thank participants at the Berkeley–Stanford (March 2001) workshop on nonparametric models with endogenous regressors as well as participants at the University College London empirical microeconomics workshop for useful comments. J. Heckman would like to thank the NIH (Grant R01-HD043411) and the NSF (Grant SES-024158) for research support. C. Meghir would like to thank the Centre for Economics of Education and the ESRC for funding through the Centre for Fiscal Policy at the IFS and his ESRC Professorial Fellowship. E. Vytlacil would like to thank the NSF for financial support (Grant SES-05-51089). The views expressed in this paper are those of the authors and not necessarily those of the funders listed here. All errors are our own.
2 See, for example, Roy (1951), Heckman and Robb (1985, 1986), Björklund and Moffitt (1987), Imbens and Angrist (1994), Heckman (1997), Heckman, Smith, and Clements (1997), Heckman and Honoré (1990), Card (1999, 2001), and Heckman and Vytlacil (2001, 2005, 2007a, 2007b), who discussed heterogeneous response models.
(2007), and papers using a control variate technique such as Altonji and Matzkin (2005), Blundell and Powell (2004), Chesher (2003), and Imbens and Newey (2002, 2007). Chesher (2007) surveyed this literature. The analysis of Imbens and Newey (2002, 2007) is perhaps the most relevant to our analysis, with the key distinction between our approach and their approach being a trade-off between making a stochastic polynomial assumption on the outcome equation versus assuming large support. We discuss the differences between our approach and their approach further in Section 3.2.

2. THE MODEL, PARAMETERS OF INTEREST, AND THE OBSERVABLES

Let Y_d denote the potential outcome corresponding to level of treatment intensity d. When the treatments are discrete, this notation represents the two possible outcomes for a particular individual in the treated and nontreated state. In this paper, there is a continuum of alternatives as the treatment intensity varies. Define ϕ(d) = E(Y_d) and U_d = Y_d − ϕ(d), so that, by construction,
(1)    Y_d = ϕ(d) + U_d.
We restrict attention to the case where the stochastic process U_d takes the polynomial form

(2)    U_d = ∑_{j=0}^K d^j ε_j,   with E(ε_j) = 0, j = 0, …, K,
where K < ∞ is known.3 Let D denote the realized treatment, so that the realized outcome Y is given by Y = Y_D. We do not explicitly denote observable regressors that directly affect Y_d. All of our analysis implicitly conditions on such regressors. We make the following assumption.

A-1: ϕ(D) is K times differentiable in D (almost surely (a.s.)) and the support of D does not contain any isolated points (a.s.).

This allows for heterogeneity in a finite set of derivatives of Y_d. This specification can be seen as a nonparametric, higher order generalization of the random coefficient model analyzed by Heckman and Vytlacil (1998) and Wooldridge (1997, 2003, 2007). The normalization E(ε_j) = 0, j = 0, …, K, implies that (∂^j/∂d^j)E(Y_d) = (∂^j/∂d^j)ϕ(d).4

3 As discussed later, we can test for the order of the polynomial as long as a finite upper bound on K is known. The question of identification with K infinite is left for future work.
4 To see that E(ε_j) = 0, j = 0, …, K, is only a normalization, note that ϕ(d) + ∑_{j=0}^K d^j ε_j = [ϕ(d) + ∑_{j=0}^K d^j E(ε_j)] + ∑_{j=0}^K d^j (ε_j − E(ε_j)) = ϕ̃(d) + ∑_{j=0}^K d^j ε̃_j.
Equations (1) and (2) can be restated as follows to emphasize that we analyze a nonseparable model:

(3)    Y = h(D, ε) = ϕ(D) + ∑_{j=0}^K D^j h_j(ε),
where ε need not be a scalar random variable. The notation of equation (3) can be mapped into the notation of equations (1) and (2) by setting ε_j = h_j(ε). Notice that we do not assume that ε is a scalar random variable, and h need not be monotonic in ε.

One parameter of interest in this paper is the average treatment effect (ATE),
(4)    ∆^ATE(d) = lim_{∆d→0} E(Y_{d+∆d} − Y_d)/∆d ≡ (∂/∂d)E(Y_d) = (∂/∂d)ϕ(d),
which is the average effect of a marginal increase in treatment if individuals were randomly assigned to base treatment level d. Note that the average treatment effect depends on the base treatment level, and for any of the continuum of possible base treatment levels we have a different average treatment effect. The average treatment effect is the derivative of the average structural function of Blundell and Powell (2004). We also consider the effect of treatment on the treated (TT), given by

∆^TT(d) = lim_{∆d→0} E(Y_{d+∆d} − Y_d | D = d)/∆d ≡ E[(∂/∂d_1)Y_{d_1} | D = d_2]|_{d=d_1=d_2} = (∂/∂d)ϕ(d) + ∑_{j=1}^K j d^{j−1} E(ε_j | D = d),
which is the average effect of treatment, for those currently choosing treatment level d, of an incremental increase in the treatment holding their unobservables fixed at baseline values. This parameter corresponds to the local average response parameter considered by Altonji and Matzkin (2001, 2005).
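The distinction between the two parameters is easy to see in a simulation. The following sketch (a hypothetical K = 1 design of ours, not the authors') builds a model in which agents with larger gains ε_1 select higher treatment intensities, so that ∆^TT(d) − ∆^ATE(d) = E(ε_1 | D = d) is positive for d above the mean of D.

```python
import numpy as np

# Hypothetical K = 1 design: Y_d = phi(d) + eps0 + eps1 * d with
# phi(d) = 2d - 0.25 d^2, and treatment intensity D positively correlated
# with the gain eps1, so that TT and ATE differ away from the mean of D.
rng = np.random.default_rng(0)
n = 200_000
eps1 = rng.normal(0, 1, n)
D = 1.0 + 0.5 * eps1 + rng.normal(0, 0.5, n)    # selection on the gain

def ate(d):
    return 2 - 0.5 * d                          # phi'(d); E(eps1) = 0 drops out

def tt(d, width=0.05):
    near = np.abs(D - d) < width                # condition on D close to d
    return ate(d) + eps1[near].mean()           # phi'(d) + E(eps1 | D = d)

print(ate(1.0), tt(1.0))    # coincide: E(eps1 | D = 1) is roughly 0 here
print(ate(2.0), tt(2.0))    # TT exceeds ATE: high-gain agents choose high d
```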
We denote the choice equation (the assignment mechanism to treatment intensity) as

(5)    D = g(Z, V),
where Z are observed covariates that enter the treatment choice equation but are excluded from the equation for Y_d, and V is a scalar unobservable. We make the following assumption:
A-2: V is absolutely continuous with respect to Lebesgue measure; g is strictly monotonically increasing in V; and Z ⊥⊥ (V, ε_0, …, ε_K).

As long as D is a continuous random variable (conditional on Z), we can always represent D as a function of Z and a continuous scalar error term, with the function increasing in the error term and the error term independent of Z. To see this, set V = F_{D|Z}(D|Z) and g(Z, V) = F_{D|Z}^{-1}(V|Z). Thus, D = g(Z, V), where g is strictly increasing in the scalar V, which is distributed unit uniform and independent of Z. However, the assumption that g(Z, V) is monotonic in a scalar unobservable V with Z ⊥⊥ (V, ε_0, …, ε_K) is restrictive: the constructions V = F_{D|Z}(D|Z) and D = F_{D|Z}^{-1}(V|Z) = g(Z, V) do not guarantee Z ⊥⊥ (V, ε_0, …, ε_K).

Given assignment mechanism (5) and assumption A-2, without loss of generality we can impose the normalization that V is distributed unit uniform. Given these assumptions and the normalization of V, we can follow Imbens and Newey (2002, 2007) and recover V from V = F_{D|Z}(D|Z) and the function g from g(Z, V) = F_{D|Z}^{-1}(V|Z). Assignment mechanism (5) and assumption A-2 will not be directly used to prove identification. However, we use them to clarify the primitives underlying our identification assumptions.

2.1. Education and Wages: A Simple Illustration

To illustrate the type of problem we analyze in this paper, consider a simple model of educational choice. Suppose that the agent receives wages Y_d at direct cost C_d if schooling choice d is made. We work with discounted annualized earnings flows. We write wages for schooling level d, Y_d, as

Y_d = ϕ_0 + (ϕ_1 + ε_1)d + (1/2)ϕ_2 d² + ε_0,

and the cost function for schooling as
Cd = C0 (Z) + (C1 (Z) + v1 )d + 12 C2 (Z)d 2 + v0
where εs and vs (s = 0 1) are, respectively, unobserved heterogeneity in the wage level and in the cost of schooling. These unobserved heterogeneity terms are the source of the identification problem considered in this paper. We impose the normalizations that E(εs ) = 0 and E(vs ) = 0 for s = 0 1. We implicitly condition on variables such as human capital characteristics that affect both wages and the costs of schooling. The Z are factors that only affect the cost of schooling, such as the price of education. Assume that agents choose their level of education to maximize wages minus costs. Let D denote the resulting optimal choice of education. D solves the first order condition (ϕ1 − C1 (Z)) + (ϕ2 − C2 (Z))D + ε1 − v1 = 0
IDENTIFICATION OF TREATMENT EFFECTS
1195
Assuming that ϕ2 − C2 (Z) < 0 for all Z, the second order condition for a maximum will be satisfied. This leads to an education choice equation (assignment to treatment intensity rule) D=
ϕ1 − C1 (Z) + ε1 − v1 C2 (Z) − ϕ2
This choice equation is produced as a special case of the model given by equations (1), (2), and (5) with 1 ϕ(d) = ϕ0 + ϕ1 d + ϕ2 d 2 2 Ud = ε0 + ε1 d g(z v) =
ϕ1 − C1 (z) + Fε−1 (v) 1 −v1 C2 (z) − ϕ2
where V = Fε1 −v1 (ε1 − v1 ) with Fε1 −v1 the cumulative distribution function of ε1 − v1 . The goal is to identify the average return to education ATE (d) = ϕ1 + ϕ2 d or TT, which is TT (d) = (ϕ1 + E(ε1 |D = d)) + ϕ2 d. In this example, the treatment intensity is given by equation (5) with g strictly increasing in a scalar error term V = Fε1 −v1 (ε1 − v1 ). The structure of the treatment intensity mechanism is sensitive to alternative specifications. Consider the same example as before, except now the second derivative of Yd is also stochastic: Yd = ϕ0 + (ϕ1 + ε1 )d + 12 (ϕ2 + ε2 )d 2 + ε0 . The choice equation becomes D=
ϕ1 − C1 (Z) + ε1 − v1 C2 (Z) − ϕ2 − ε2
In this case, the structural model makes D a function of V = (ε1 − v1 ε2 ), which satisfies Z ⊥⊥ (V ε0 ε1 ε2 ), but V is not a scalar error. We can still −1 ˜ define V˜ = FD|Z (D|Z) and the function g˜ by g(Z V˜ ) = FD|Z (V˜ |Z). With this construction, D is strictly increasing in a scalar error term V˜ that is independent of Z. However, Z is not independent of (V˜ ε0 ε1 ε2 ). To see why, note that Pr(V˜ ≤ v|Z ε0 ε1 ε2 ) ϕ1 − C1 (Z) + ε1 − v1 −1 ≤ FD|Z (v)Z ε0 ε1 ε2 = Pr v1 : C2 (Z) − ϕ2 − ε2 = Pr(V˜ ≤ v|ε0 ε1 ε2 ) This is a case where assumption A-2 does not hold. The fragility of the specification of equation (5), where g is strictly increasing in a scalar error term,
1196
FLORENS, HECKMAN, MEGHIR, AND VYTLACIL
arises in part because, under rational behavior, heterogeneity in response to treatment (heterogeneity in the Yd model) generates heterogeneity in selection into treatment intensity. This heterogeneity is absent if agents do not know their own treatment effect heterogeneity, which can happen if agents are uncertain at the time they make participation decisions (see Abbring and Heckman (2007)). 3. IDENTIFICATION ANALYSIS An instrumental variable estimator (IV) does not identify ATE in the case of binary treatment with heterogeneous impacts (Heckman (1997), Heckman and Robb (1986)) unless one imposes covariance restrictions between the errors in the assignment rule and the errors in the structural model. Following Newey and Powell (2003) and Darolles, Florens, and Renault (2002), consider a nonparametric IV strategy based on the identifying assumption that E(Y − ϕ(D)|Z) = 0. Suppose K = 0, which is the special case of no treatment effect heterogeneity. In this case, UD = ε0 so that YD = ϕ(D) + ε0 . We obtain the standard additive-in-unobservables model considered in the cited papers. The identification condition is E(ε0 |Z) = 0. However, in the general case of treatment effect heterogeneity (K > 0), the IV identification restriction implies special covariance restrictions between the error terms. For example, suppose K = 1 and that D = g(Z) + V . Then E(Y − ϕ(D)|Z) = 0 requires E(ε0 |Z) = 0 and E(ε1 D|Z) = 0, with the latter restriction generically equivalent to E(ε1 |Z) = 0 and E(ε1 V |Z) = 0. In other words, in addition to the more standard type of condition that ε0 be mean independent of the instrument, we now have a new restriction in the heterogeneous case that the covariance between the heterogeneous effect and the unobservables in the choice equation conditional on the instrument does not depend on the instrument.5 Instead of following an instrumental variables approach, we explore identification through a control function.6 We assume the existence of a (known or identifiable) control function V˜ that satisfies the following conditions: A-3 —Control Function Condition: E(εj | D Z) = E(εj |V˜ ) = rj (V˜ ).7 5
See Heckman and Vytlacil (1998) and Wooldridge (1997, 2003, 2007). See Newey, Powell, and Vella (1999) for a control function approach for the case of separable models (K = 0). See Heckman and Vytlacil (2007b) for a discussion of the distinction between control functions and control variables. Technically “control function” is a more general concept. We adopt the recent nomenclature even though it is inaccurate. See the Matzkin (2007) paper for additional discussion. 7 Note that our normalization E(εj ) = 0, j = 0 K, implies the normalization that E(rj (V˜ )) = 0, j = 0 K. 6
IDENTIFICATION OF TREATMENT EFFECTS
1197
A-4 —Rank Condition: D and V˜ are measurably separated, that is, any function of D almost surely equal to a function of V˜ must be almost surely equal to a constant. A necessary condition for assumption A-4 to hold is that the instruments Z affect D.8 We return later in this section to consider sufficient conditions on the underlying model that implies the existence of such a control variate V˜ . Under these assumptions, ATE and TT are identified. THEOREM 1: Assume equations (1) and (2) hold with finite K ≥ 1. Under assumptions A-3 (control function condition), A-4 (rank condition), and the smoothness and support condition A-1, ATE and TT are identified. See the Appendix for the proof. The control function assumption gives the basis for an empirical determination of the relevant degree of the polynomial in (2). If the true model is defined by a polynomial of degree , we have that for any k > , k ∂k ˜ = v) = ∂ ϕ(d) E(Y |D = d V ∂d k ∂d k
which does not depend on v and thus is only a function of d. This property can be verified by checking whether the following equality holds almost surely: k ∂ ∂k E(Y |D V˜ ) as ˜ =E E(Y |D V )D k > ∂Dk ∂Dk 3.1. Primitive Conditions Justifying the Control Function Assumption In the previous section a control function is assumed to exist and satisfy certain properties. The analysis in the previous section did not use assignment rule (5) or condition A-2. In this section, we use assignment rule (5) and condition A-2 along with the normalization that V is distributed unit uniform. Under these conditions, consider using V = FD|Z (D|Z) as the control function. This leads to the following corollary to Theorem 1: COROLLARY 3.1: Assume equations (1) and (2) hold with finite K ≥ 1, and assume smoothness and support condition A-1. If D is generated by assignment equation (5) and condition A-2 holds, and if V and D are measurably separable A-4, then ATE and TT are identified. 8
Measurable separability, which we maintain in this paper, is just one way to achieve identification. Alternatively, one could restrict the space of functions ϕ(D) not to contain rj (V˜ ) functions; this in turn can be achieved, for example, by assuming that ϕ(D) is linear in D and rj is nonlinear as in the Heckman (1979) selection model. See also Heckman and Robb (1985, 1986), who discussed this condition.
1198
FLORENS, HECKMAN, MEGHIR, AND VYTLACIL
For the conditions of Theorem 1 to be satisfied, it is sufficient to verify that under the conditions in the corollary the control function assumption A-3 is satisfied. Given that D satisfies assignment equation (5) and condition A-2, from Imbens and Newey (2002, 2007) we obtain that V = FD|Z (D|Z) is a control variate satisfying assumption A-3. Next consider measurable separability condition A-4. Measurable separability is a relatively weak condition, as illustrated by the following theorem. THEOREM 2: Assume that (D V ) has a density with respect to Lebesgue measure in R2 and denote its support by S, and let S 0 be the interior of the support. Further, assume that (i) any point in S 0 has a neighborhood such that the density is strictly positive within it and (ii) any two points within S 0 can be connected by a continuous curve that lies strictly in S 0 . Then measurable separability between D and V (A-4) holds. For the proof, see the Appendix. Measurable separability is a type of rank condition. To see this, consider the following heuristic argument. Consider a case where the condition is violated at some point in the interior of the support of (D V ), that is, h(D) = l(V ). Hence h(g(Z V )) = l(V ). Differentiating both sides of this expression with respect ∂g = 0. If measurable separability fails, ∂h = 0 and hence to Z, we obtain ∂h ∂g ∂Z ∂g ∂g = 0, which means that g does not vary with Z. Note that the conditions in ∂Z Theorem 2 are not very restrictive. For example, the conditional support of D can depend on V and vice versa. Assignment rule (5) and condition A-2 do not imply measurable separability (A-4). To show this, we consider two examples where equation (5) and condition A-2 hold, but D and V are not measurably separable. In the first example, Z is a discrete random variable. In the second example, g(z v) is a discontinuous function of v. First, suppose Z = 0 1 and suppose that D = g(z v) = z + v, with V Unif[0 1]. Then A-4 fails, that is, D and V are not measurably separable. To see this, let m1 (t) = t and let m2 (t) = 1[t ≤ 1]t + 1[t > 1](t − 1). Then m1 (V ) = m2 (D), but m1 and m2 are not a.s. equal to a constant. Now consider a second example. Suppose that D = g1 (z) + g2 (v), where g2 (t) = 1[t ≤ 05]t + 1[t > 05](1 + t). Let g1max and g1min denote the maximum and minimum of the support of the distribution of g1 (Z), and suppose that g1max − g1min < 1. Then A-4 fails, that is, D and V are not measurably separable. To see this, let m1 (t) = 1[t ≤ 05], let m2 (t) = 1[t ≤ 05 + g1max ], and note that m1 (V ) = m2 (D) but that m1 and m2 do not (a.s.) equal a constant. Assignment rule (5), condition A-2, and regularity conditions that require Z to contain a continuous element and that g be continuous in v are sufficient to imply that measurable separability (A-4) holds. We prove the following theorem.
IDENTIFICATION OF TREATMENT EFFECTS
1199
THEOREM 3: Suppose that D is determined by equation (5). Suppose that g(z v) is a continuous function of v. Suppose that, for any fixed v, the support of the distribution of g(Z v) contains an open interval. Then, under assumption A-2, D and V are measurably separated (A-4 holds). For the proof, see the Appendix. Note that, for any fixed v, for the support of the distribution of g(Z v) to contain an open interval requires that Z contains a continuous element. A sufficient condition for the support of the distribution of g(Z v) to contain an open interval is that (a) Z contains an element whose distribution conditional on the other elements of Z contains an open interval and (b) g is a continuous monotonic function of that element. Thus, under the conditions of Theorem 3, V is identified by V = F(D|Z) and both the control function condition A-3 and the rank condition A-4 hold with V˜ = V . 3.2. Alternative Identification Analyses A general analysis of identification using the control function assumption without the polynomial structure is related to the work of Heckman and Vytlacil (2001) on identifying the marginal treatment effect (MTE). That paper considers a binary treatment model, but the analysis may be extended to the continuous treatment case. Similar approaches for semiparametric models with a continuous treatment D that is strictly monotonic in V was pursued for the average structural function (ASF) by Blundell and Powell (2004) and for the local average response (LAR) by Altonji and Matzkin (2001, 2005). The derivative of the ASF corresponds to our ATE, and the LAR corresponds to treatment on the treated. Most relevant to this note is the analysis of Imbens and Newey (2002, 2007). They invoked the same structure as we do on the first stage equation for the endogenous regressor as our assignment mechanism (5) and they also invoked assumption A-2. The control variate is V , with V identified and with a distribution that can be normalized to be unit uniform. By the same reasoning, we as have E(Yd |D = d V = v) = E(Yd |V = v). Furthermore, they assumed that the support of (D V ) is the product of the support of the two marginal distributions, that is, they assumed rectangular support. Their assumption implies that the conditional support of D given V does not depend on V (and vice versa). It is stronger than the measurable separability assumption we previously used to establish identification. From these assumptions it follows that E(Y |D = d V = v) = E(Yd |D = d V = v) = E(Yd |V = v) and
E(Yd ) =
E(Yd |V = v) dF(v)
1200
FLORENS, HECKMAN, MEGHIR, AND VYTLACIL
Then ϕ(d) = E(Y |D = d V = v) dF(v) and is identified. Identification of ϕ(d) in turn implies identification of ATE = ∂d∂ ϕ(d). The rectangular support condition is needed to replace E(Yd |V = v) by E(Y |D = d V = v) for all v in the unconditional support of V in the previous integral. The rectangular support condition may not be satisfied and, in general, requires a large support assumption as illustrated by the following example. Suppose D = g1 (Z) + V . Let G1 denote the support (Supp) of the distribution of g1 (Z). If Z and V are independent, then Supp(V |D = d) = Supp(V |g1 (Z) + V = d) = Supp(V |V = d − g1 (Z)) = {d − g : g ∈ G1 } where the last equality uses the condition that Z ⊥⊥ V . {d − g : g ∈ G1 } does not depend on d if and only if G1 = . For example, if G1 = [a b], then {d − g : g ∈ G1 } = {d − g : g ∈ [a b]} = [d − b d − a], which does not depend on d if and only if a = −∞ and b = ∞, that is, if and only if G1 = . Instead of imposing E(Yd |D = d V = v) = E(Yd |V = v), one could instead impose ∂ ∂ (7) E(Y |D = d V = v) = E Yd D = d V = v ∂d ∂d ∂ =E Yd V = v ∂d E( ∂d∂ Yd |V = v) is the marginal treatment effect of Heckman and Vytlacil (2001), adapted to the case of a continuous treatment. Instead of integrating E(Yd |V = v) to obtain ϕ(d), one could instead integrate E( ∂d∂ Yd |V = v) to obtain ATE or TT:
∂ E(Y |D = d V = v) dF(v) ∂d
∂ = E(Yd |V = v) dF(v) = ATE (d) ∂d
∂ E Yd1 |D = d2 V = v dF(v|D = d2 ) ∂d1 d=d1 =d2 ∂ =E Yd D = d2 ∂d1 1 d=d1 =d2
∂ = E Yd1 |D = d2 = TT (d) ∂d1 d=d1 =d2 This is the identification strategy followed in Heckman and Vytlacil (2001), adapted to the case where D is a continuous treatment. As discussed in
IDENTIFICATION OF TREATMENT EFFECTS
1201
Heckman and Vytlacil (2001), a rectangular support condition is required to integrate up MTE to obtain ATE. Note that one does not require the rectangular support condition to integrate up ∂d∂ E(Y |D = d V = v) to obtain TT. For TT, one only needs to evaluate ∂d∂ E(Y |D = d V = v) for v in the support of V conditional on D = d, not in the unconditional support of V . While a rectangular support condition is not required to integrate MTE to recover TT, a support condition is required for equation (7) to hold. That equation requires that E(Y |D = d V = v) can be differentiated with respect to d while keeping v fixed. This property is closely related to measurable separability between D and V . Assume that there exists a (differentiable) function of D, h(D) equal (a.s.) to a function of V m(V ), which is not constant. Then we obtain as
E(Y |D = d V = v) = E(Yd |V = v) + h(d) − m(V ) and ∂ ∂ as ∂ E(Y |D = d V = v) = E(Yd |V = v) + h(d) ∂d ∂d ∂d which implies that equation (7) is violated. Thus, for TT, we still need measurable separability between D and V for equation (7) to hold. There are trade-offs between the approach presented in this note versus an approach that identifies MTE/MTE-like objects and then integrates them to obtain the object of interest. The approach developed here requires a stochastic polynomial structure on UD of equation (2) and higher order differentiability. These conditions are not required by Imbens and Newey (2002, 2007) or Heckman and Vytlacil (2001). The approach of this note does not require the large support assumption required to implement these alternative approaches. As shown by Theorem 2, measurable separability between D and V is a relatively mild restriction on the support of (D V ). As shown by Theorem 3, measurable separability between D and V follows from assignment mechanism (5) and assumption A-2 combined with a relatively mild regularity condition. 4. ESTIMATION Under the control function assumption we have E(Y |D = d Z = z) = E(Y |D = d V = v) = ϕ(d) +
K
d j hj (v)
j=0
The method we propose is an extension of Newey, Powell, and Vella (1999) and may also be viewed as an extension of estimation of additive models in
1202
FLORENS, HECKMAN, MEGHIR, AND VYTLACIL
a nonparametric context. The estimation is carried out in two steps: first estimate the residual vi from the nonparametric regression D = E(D|Z) + V ; then estimate ϕ and the hj ’s. Define the estimation criterion (8)
˜ h(v)]2 min E[Y − ϕ − D
ϕh0 hK
˜ = [1 D D2 DK ] and h = [h0 h1 hK ] . The first order condiwhere D tions for the minimization are (9)
E(Y |D = d) = ϕ(d) + d˜ E(h(V )|D = d) ˜ |V = v) = E(Dϕ(D)|V ˜ ˜D ˜ |V = v)h E(DY = v) + E(D
where d˜ = [1 d d K ] . This linear system in ϕ and h can easily be solved if the conditional expectations are replaced by their estimators (by kernels for example). In that case it is easily seen that (9) generates a linear system with respect to the ϕ(di ) and the hj (vi ) (i = 1 n; j = 0 K), and this system may be solved by usual methods of linear equations. The equations in (9) are then used to compute ϕ(d) and hj (v) at any point of evaluation. If we only wish to focus attention on ϕ, the vector h may be eliminated from (9) and we obtain ˜D ˜ −1 )E(ϕ(D)|V = v)|D = d ϕ(d) − d˜ E E(D ˜D ˜ −1 )E(DY ˜ |V = v)|D = d = E(Y |D = d) − d˜ E E(D This equation has the form (I − T )ϕ = ψ, where T is, under very general conditions, a compact operator and ψ may be estimated. It is a Fredholm equation of type II which may be analyzed using the methods in Carrasco, Florens, and Renault (2007, Sec. 7). The original system (9) is also a Fredholm equation of type II and both systems generate well posed inverse problems. The asymptotic theory developed in Carrasco, Florens, and Renault (2007) applies with the exception that the vi are now estimated. A precise analysis of this approach and some applications will be developed in future work. 5. CONCLUSIONS This paper considers the identification and estimation of models with a continuous endogenous regressor and nonseparable errors when continuous instruments are available. We present an identification result using a control function technique. Our analysis imposes a stochastic, finite order polynomial restriction on the outcome model, but does not impose a large support assumption.
IDENTIFICATION OF TREATMENT EFFECTS
1203
APPENDIX: PROOFS OF THEOREMS PROOF OF THEOREM 1: Suppose that there are two sets of parameters (ϕ1 rK1 r01 ) and (ϕ2 rK2 r02 ) such that E(Y |D = d V˜ = v) = ϕi (d) +
K
d k rki (v)
i = 1 2
k=0
where the conditional expectation on the left-hand side takes this form as a result of the control function assumption A-3. Then (10)
[ϕ1 (d) − ϕ2 (d)] +
K
d k [rk1 (v) − rk2 (v)] = 0
k=0
Given smoothness assumption A-1, this implies ∂K 2 ∂K 1 ϕ (d) − ϕ (d) + (K!)(rK1 (v) − rK2 (v)) = 0 ∂d K ∂d K Measurable separability assumption A-4 implies that if any function of d is equal to a function of v (a.s.) then this must be a constant (a.s.). Hence, rK1 (v) − rK2 (v) is a constant a.s. Hence, rK1 (v) − rK2 (v) = E[rK1 (V˜ ) − rK2 (V˜ )] This expression equals zero given our normalization that E(εK ) = 0. Hence, as
rK1 (v) − rK2 (v) = 0 Considering the (K − 1)st derivative of equation (10), we find that ∂K−1 2 ∂K−1 1 ϕ (d) − ϕ (d) + (K!)d[rK1 (v) − rK2 (v)] ∂d K−1 ∂d K−1 1 2 (v) − rK−1 (v)] = 0 + ((K − 1)!)[rK−1
We have already shown that rK1 (v) = rK2 (v) and thus ∂(K−1) 1 ∂(K−1) ∂ 2 1 2 ϕ (d) + ((K − 1)!)(rK−1 ϕ (d) − (v) − rK−1 (v)) ∂d (K−1) ∂d (K−1) ∂d = 0 as
1 2 Using the logic of the previous analysis, we can show that rK−1 (v)−rK−1 (v) = 0. as 1 Iterating this procedure for k = K − 2 0, it follows that rk (v) − rk2 (v) =
1204
FLORENS, HECKMAN, MEGHIR, AND VYTLACIL
0 for all k = 0 K. Again appealing to equation (10), it follows that as ϕ1 (d) − ϕ2 (d) = 0 and thus ATE is identified. Using the fact that ϕ1 (d) − as as ϕ2 (d) = 0 and rk1 (v) − rk2 (v) = 0 for all k = 0 K, we also have that K K ∂ ϕ1 + k=1 kd k−1 E[rk1 (v)|d] = ∂d∂ ϕ2 + k=1 kd k−1 E[rk2 (v)|d] = 0 and thus TT ∂d is identified. Q.E.D. PROOF OF THEOREM 2: Let (d v) be a point of the interior of the support S 0 . Let N d denote a neighborhood of d and let N v denote a neighborhood of v such that N d × N v is included in S 0 . The distribution of (D V ) restricted to N d × N v is equivalent to Lebesgue measure (i.e., has the same null sets). Then using Theorem 5.2.7 of Florens, Mouchart, and Rolin (1990), (D V ) restricted to N d × N v are measurably separated. This implies that if within that as neighborhood h(D) = l(V ), then h(D) and l(V ) are a.s. constants. We need to show that this is true everywhere in the interior of the support. Consider any two points (d v) and (d v ) in S 0 . The theorem will be true if h(d) = h(d ). As S 0 satisfies the property (ii) in the theorem and is open by definition, there exists a finite number of overlapping open sets with nonempty overlaps, that is, there exists a finite sequence of neighborhoods Njd × Njv , j = 1 J, such d = ∅, and similarly for Njv . The first that each Njd × Njv ⊂ S 0 and Njd ∩ Nj+1 point (d v) is in N1d × N1v and the second point (d v ) is in NJd × NJv . Take d1 ∈ N1d and in the next overlapping neighborhood d2 ∈ N2d . From the previous result, (D V ) are measurably separated on N1d × N1v and on N2d × N2v . Thus h(di ), i = 1 2, is constant on each and thus constant on the union, implying h(d1 ) = h(d2 ). Iterating in this way along the sequence of neighborhoods until NJd × NJv , it follows that h(d) = h(d ). Hence h(D) is a.s. constant and, because as Q.E.D. h(D) = l(V ), l(v) is a.s. constant. PROOF OF THEOREM 3: Let Z denote the support of the distribution of Z. Consider any two functions m1 and m2 such that m1 (D) = m2 (V ) a.s. For (almost every (a.e.) FV ) fixed v0 , using the assumption that Z and V are independent, it follows that m1 (g(z v0 )) = m2 (v0 ) for a.e. z conditional on V = v0 implies that m1 is (a.s. FZ ) constant on {g(z v0 ) : z ∈ Z }. Likewise, for a v1 close to v0 , we have m1 is constant on {g(z v1 ) : z ∈ Z }. Using the fact that g(z v) is continuous in v and that {g(z v) : z ∈ Z } contains an open interval for any v, we can pick v1 sufficiently close to v0 so that {g(z v0 ) : z ∈ Z } and {g(z v1 ) : z ∈ Z } have a nonnegligible intersection, and we thus conclude that m1 is constant on {g(z v) : z ∈ Z v = v0 v1 }. Proceeding in this fashion, we conclude that m1 is (a.s.) constant on {g(z v) : z ∈ Z v ∈ [0 1]}, and thus that m1 is a.s. equal to a constant. Q.E.D. REFERENCES ABBRING, J. H., AND J. J. HECKMAN (2007): “Econometric Evaluation of Social Programs, Part III: Distributional Treatment Effects, Dynamic Treatment Effects, Dynamic Discrete
IDENTIFICATION OF TREATMENT EFFECTS
1205
Choice, and General Equilibrium Policy Evaluation,” in Handbook of Econometrics, Vol. 6B, ed. by J. Heckman and E. Leamer. Amsterdam: Elsevier, 5145–5303. [1196] ALTONJI, J. G., AND R. L. MATZKIN (2001): “Panel Data Estimators for Nonseparable Models With Endogenous Regressors,” Technical Working Paper t0267, NBER. [1193,1199] (2005): “Cross Section and Panel Data Estimators for Nonseparable Models With Endogenous Regressors,” Econometrica, 73, 1053–1102. [1192,1193,1199] BJÖRKLUND, A., AND R. MOFFITT (1987): “The Estimation of Wage Gains and Welfare Gains in Self-Selection,” Review of Economics and Statistics, 69, 42–49. [1191] BLUNDELL, R., AND J. POWELL (2004): “Endogeneity in Semiparametric Binary Response Models,” Review of Economic Studies, 71, 655–679. [1192,1193,1199] CARD, D. (1999): “The Causal Effect of Education on Earnings,” in Handbook of Labor Economics, Vol. 5, ed. by O. Ashenfelter and D. Card. New York: North-Holland, 1801–1863. [1191] (2001): “Estimating the Return to Schooling: Progress on Some Persistent Econometric Problems,” Econometrica, 69, 1127–1160. [1191] CARRASCO, M., J.-P. FLORENS, AND E. RENAULT (2007): “Linear Inverse Problems in Structural Econometrics Estimation Based on Spectral Decomposition and Regularization,” in Handbook of Econometrics, Vol. 6B, ed. by J. J. Heckman and E. Leamer. Amsterdam: Elsevier, 5633–5751. [1202] CHERNOZHUKOV, V., AND C. HANSEN (2005): “An IV Model of Quantile Treatment Effects,” Econometrica, 73, 245–261. [1191] CHERNOZHUKOV, V., G. W. IMBENS, AND W. K. NEWEY (2007): “Instrumental Variable Estimation of Nonseparable Models,” Journal of Econometrics, 139, 4–14. [1191,1192] CHESHER, A. (2003): “Identification in Nonseparable Models,” Econometrica, 71, 1405–1441. [1192] (2007): “Identification of Non-Additive Structural Functions,” in Advances in Economics and Econometrics: Theory and Applications, Ninth World Congress, Vol. 3, ed. by R. Blundell, W. K. Newey, and T. Persson. New York: Cambridge University Press, Chapter 1. Presented at the Econometric Society Ninth World Congress, 2005, London, England. [1192] DAROLLES, S., J.-P. FLORENS, AND E. RENAULT (2002): “Nonparametric Instrumental Regression,” Working Paper 05-2002, Centre Interuniversitaire de Recherche en Economie Quantitative (CIREQ). [1196] FLORENS, J.-P., M. MOUCHART, AND J. ROLIN (1990): Elements of Bayesian Statistics. New York: Dekker. [1204] HECKMAN, J. J. (1979): “Sample Selection Bias as a Specification Error,” Econometrica, 47, 153–162. [1197] (1997): “Instrumental Variables: A Study of Implicit Behavioral Assumptions Used in Making Program Evaluations,” Journal of Human Resources, 32, 441–462; addendum, 33, 247 (1998). [1191,1196] HECKMAN, J. J., AND B. E. HONORÉ (1990): “The Empirical Content of the Roy Model,” Econometrica, 58, 1121–1149. [1191] HECKMAN, J. J., AND R. ROBB (1985): “Alternative Methods for Evaluating the Impact of Interventions,” in Longitudinal Analysis of Labor Market Data, Vol. 10, ed. by J. Heckman and B. Singer. New York: Cambridge University Press, 156–245. [1191,1197] (1986): “Alternative Methods for Solving the Problem of Selection Bias in Evaluating the Impact of Treatments on Outcomes,” in Drawing Inferences From Self-Selected Samples, ed. by H. Wainer. New York: Springer Verlag, 63–107. Reprinted in 2000, Mahwah, NJ: Lawrence Erlbaum Associates. [1191,1196,1197] HECKMAN, J. J., AND E. J. 
VYTLACIL (1998): “Instrumental Variables Methods for the Correlated Random Coefficient Model: Estimating the Average Rate of Return to Schooling When the Return Is Correlated With Schooling,” Journal of Human Resources, 33, 974–987. [1192,1196] (2001): “Local Instrumental Variables,” in Nonlinear Statistical Modeling, Proceedings of the Thirteenth International Symposium in Economic Theory and Econometrics: Essays in Honor
1206
FLORENS, HECKMAN, MEGHIR, AND VYTLACIL
of Takeshi Amemiya, ed. by C. Hsiao, K. Morimune, and J. L. Powell. New York: Cambridge University Press, 1–46. [1191,1199-1201] (2005): “Structural Equations, Treatment Effects and Econometric Policy Evaluation,” Econometrica, 73, 669–738. [1191] (2007a): “Econometric Evaluation of Social Programs, Part I: Causal Models, Structural Models and Econometric Policy Evaluation,” in Handbook of Econometrics, Vol. 6B, ed. by J. Heckman and E. Leamer. Amsterdam: Elsevier, 4779–4874. [1191] (2007b): “Econometric Evaluation of Social Programs, Part II: Using the Marginal Treatment Effect to Organize Alternative Economic Estimators to Evaluate Social Programs and to Forecast Their Effects in New Environments,” in Handbook of Econometrics, Vol. 6B, ed. by J. Heckman and E. Leamer. Amsterdam: Elsevier, 4875–5144. [1191,1196] HECKMAN, J. J., J. A. SMITH, AND N. CLEMENTS (1997): “Making the Most Out of Programme Evaluations and Social Experiments: Accounting for Heterogeneity in Programme Impacts,” Review of Economic Studies, 64, 487–536. [1191] IMBENS, G. W., AND J. D. ANGRIST (1994): “Identification and Estimation of Local Average Treatment Effects,” Econometrica, 62, 467–475. [1191] IMBENS, G. W., AND W. K. NEWEY (2002): “Identification and Estimation of Triangular Simultaneous Equations Models Without Additivity,” Technical Working Paper 285, National Bureau of Economic Research. [1192,1194,1198,1199,1201] (2007): “Identification and Estimation of Triangular Simultaneous Equations Models Without Additivity,” Unpublished Manuscript, Harvard University and MIT. [1192,1194,1198, 1199,1201] MATZKIN, R. L. (2007): “Nonparametric Identification,” in Handbook of Econometrics, Vol. 6B, ed. by J. Heckman and E. Leamer. Amsterdam: Elsevier, 5307–5368. [1196] NEWEY, W. K., AND J. L. POWELL (2003): “Instrumental Variable Estimation of Nonparametric Models,” Econometrica, 71, 1565–1578. [1196] NEWEY, W. K., J. L. POWELL, AND F. VELLA (1999): “Nonparametric Estimation of Triangular Simultaneous Equations Models,” Econometrica, 67, 565–603. [1196,1201] ROY, A. (1951): “Some Thoughts on the Distribution of Earnings,” Oxford Economic Papers, 3, 135–146. [1191] WOOLDRIDGE, J. M. (1997): “On Two Stage Least Squares Estimation of the Average Treatment Effect in a Random Coefficient Model,” Economics Letters, 56, 129–133. [1192,1196] (2003): “Further Results on Instrumental Variables Estimation of Average Treatment Effects in the Correlated Random Coefficient Model,” Economics Letters, 79, 185–191. [1192, 1196] (2007): “Instrumental Variables Estimation of the Average Treatment Effect in Correlated Random Coefficient Models,” in Advances in Econometrics: Modeling and Evaluating Treatment Effects in Econometrics, Vol. 21, ed. by D. Millimet, J. Smith, and E. Vytlacil. Amsterdam: Elsevier, 93–117. [1192,1196]
Institut d’Economie Industrielle, Université Toulouse 1, 21 Allée de Brienne, 3100 Toulouse, France, Dept. of Economics, University of Chicago, 1126 E. 59th Street, Chicago, IL 60637, U.S.A and School of Economics, University College Dublin, Belfield, Dublin, Ireland; [email protected], Institute for Fiscal Studies and Dept. of Economics, University College London, Gower Street, London WC1E 6BT, U.K.; [email protected], and Dept. of Economics, Yale University, 30 Hillhouse Avenue, New Haven, CT 06511, U.S.A.; [email protected]. Manuscript received July, 2004; final revision received March, 2008.
Econometrica, Vol. 76, No. 5 (September, 2008), 1207–1217
COMMENT ON: THRESHOLD AUTOREGRESSIONS WITH A UNIT ROOT BY JEAN-YVES PITARAKIS1 In this paper we revisit the results in Caner and Hansen (2001), where the authors obtained novel limiting distributions of Wald type test statistics for testing for the presence of threshold nonlinearities in autoregressive models containing unit roots. Using the same framework, we obtain a new formulation of the limiting distribution of the Wald statistic for testing for threshold effects, correcting an expression that appeared in the main theorem presented by Caner and Hansen. Subsequently, we show that under a particular scenario that excludes stationary regressors such as lagged dependent variables and despite the presence of a unit root, this same limiting random variable takes a familiar form that is free of nuisance parameters and already tabulated in the literature, thus removing the need to use bootstrap based inferences. This is a novel and unusual occurrence in this literature on testing for the presence of nonlinear dynamics. KEYWORDS: Threshold autoregressive models, unit roots, nonlinear time series.
1. INTRODUCTION IN AN INFLUENTIAL PAPER Caner and Hansen (2001), CH henceforth, provided a new limit theory for exploring the presence of threshold effects in autoregressive models with a unit root. This limit theory involves a combination of unit root asymptotics and empirical process theory. It is used to derive the limiting distributions of Wald type statistics for testing the null hypothesis of no threshold effects as well as the joint null hypothesis of no threshold effects and a unit root in the parameters of a threshold autoregressive process (Theorem 4 and Theorems 5 and 6 in CH). Due to the existence of an underlying unit root, these limiting distributions are shown to have components that differ from existing results on testing for threshold effects obtained in a similar context but assuming stationarity of regressors in Hansen (1996). In this paper we wish to raise two important issues and offer appropriate corrections in relation to the distributional results presented within the main theorem of the paper (Theorem 4), in which CH obtained the limiting distribution of the Wald statistic for testing the null hypothesis of no threshold effects when the null model has an underlying unit root. First, the derivation of the limiting distribution of the Wald statistic in Theorem 4 of CH relies on a reparameterization of the original threshold autoregressive model (TAR) which is expected to give the same Wald statistic as the one obtained in the original TAR model presented in their equation (1). Here we obtain the limiting distribution of the same test statistic without applying 1
I wish to thank Bruce Hansen for numerous constructive suggestions and comments on an earlier draft. I also wish to thank Jesus Gonzalo, Grant Hillier, and Raymond O’Brien for very helpful comments. All errors are mine. © 2008 The Econometric Society
DOI: 10.3982/ECTA6979
1208
JEAN-YVES PITARAKIS
any such transformation and we subsequently explain why the transformation used in CH could not have led to the correct distribution. It is important to note at this stage that the practical inference procedures and modelling recommendations of CH remain unaffected by the wrong formulation of the asymptotic distribution of their Wald statistic. This is because the above mentioned model transformation and the limiting representation of their test statistic presented in Theorem 4 are not used directly for simulating critical values and conducting bootstrap based inferences. Second, CH show that because of the presence of a unit root regressor, the limiting distribution of the Wald statistic obtained in their Theorem 4 is nonstandard and different from what typically occurs in a stationary setup as obtained in Hansen (1996), for instance. Here we further establish that this portion of the limiting distribution that arises due to the presence of a unit root regressor is in fact equivalent to a well known limiting distribution that occurs under pure stationarity in the context of testing for structural breaks. Specifically, it is equivalent to the normalized squared Brownian bridge limit obtained in Andrews (1993). Thus, under the special scenario of a specification that includes solely the unit root regressor and deterministic components, and excludes stationary regressors such as lagged dependent variables (e.g., a TAR(1) model), the practical implementation of inferences becomes much simpler and no longer requires any bootstrapping. The distribution is well known and already tabulated in the literature. This is an unusual and novel result in the literature on testing for nonlinear dynamics where limiting distributions are known to be nonstandard and characterized by the presence of model specific parameters. 2. THE MODEL AND TEST STATISTIC We adopt the same notation and assumptions as in CH, but here recall the main model and general notation for clarity of exposition. Our first concern is to establish the limiting behavior of the Wald statistic for testing H0 : θ1 = θ2 in the model given by (1)
y = X1 θ1 + X2 θ2 + e
where y = ( y1 · · · yT ) , X1 stacks the elements of ( yt−1 rt wt−1 )× I(Zt−1 ≤ λ), and X2 stacks those of ( yt−1 rt wt−1 ) I(Zt−1 > λ). Here rt is a vector of deterministic components (e.g., rt = ( 1 t ) ) and wt−1 contains lagged dependent stationary regressors (i.e., wt−1 = (yt−1 yt−k ) ). The threshold parameter λ is unknown with λ ∈ Λ = [λ1 λ2 ], and λ1 and λ2 selected such that P(Zt ≤ λ1 ) = π1 > 0 and P(Zt ≤ λ2 ) = π2 < 1. We partition the parameter vectors as θi = (ρi βi αi ) for i = 1 2, with βi having the same dimension as rt and αi having the same dimension as wt−1 for i = 1 2. For later use we also define X ≡ X1 + X2 with X stacking the elements of ( yt−1 rt wt−1 ).
THRESHOLD AUTOREGRESSIONS
1209
The threshold variable is denoted Zt = yt − yt−m with m ≥ 1. Following CH, we also replace the threshold variable by a uniformly distributed random variable using the equality I(Zt−1 ≤ λ) = I(G(Zt−1 ) ≤ λ) ≡ I(Ut−1 ≤ u), where G(·) is the marginal distribution of Zt and Ut denotes a uniformly distributed random variable on [0 1]. Throughout this paper and for greater convenience we will also be using the notation I1t−1 and I2t−1 to refer to the two indicator functions I(Ut−1 ≤ u) and I(Ut−1 > u). Letting V = ( X1 X2 ) and θ = (θ1 θ2 ) , (1) above can also be reformulated as y = V θ + e. The Wald statistic (for given u or λ) for testing the null hypothˆ [R(V V )−1 R ]−1 (Rθ)/ ˆ σˆ e2 , esis H0 : θ1 = θ2 can now be written as WT (u) = (Rθ) where θˆ i = (Xi Xi )−1 Xi y for i = 1 2, σˆ e2 is the residual variance obtained from (1), and R is the restriction matrix given by R = [ I −I ] with I denoting an identity matrix of the same dimension as X. Given the orthogonality of X1 and X2 , the Wald statistic can also be expressed as (2)
WT (u) = (θˆ 1 − θˆ 2 ) [X1 X1 − X1 X1 (X X)−1 X1 X1 ](θˆ 1 − θˆ 2 )/σˆ e2
and in practice, since the threshold parameter is unidentified under the null hypothesis, inferences are conducted using the well known “sup-Wald” formulation expressed as WT = supu∈[π1 π2 ] WT (u), where π1 = G(λ1 ) and π2 = G(λ2 ) (see CH, p. 1562). 3. THE LIMITING BEHAVIOR OF THE WALD STATISTIC Our initial objective is to derive the limiting distribution of the Wald statistic under the restriction θ1 = θ2 in (1) and the same assumptions as in CH. Proposition 1 below presents the correct limiting distribution obtained through the direct use of (2) above. We subsequently discuss why the CH based model transformation could not have led to the same distribution. This also allows us to confirm that unlike Theorem 4, the distributional results presented in Theorems 5 and 6 of CH, which involve testing the joint null hypothesis that ρ1 = ρ2 = 0 in (1), remain unaffected by the use of the same model transformation provided that T (u) in Theorem 5 of CH is replaced with expression (3) in Proposition 1 below. 3.1. Direct Approach As in CH (see Theorem 3, p. 1561) we let h(u) = E[I1t−1 wt−1 ] and H(u) = E[I1t−1 wt−1 wt−1 ]. We also let W2 (u) denote a k-dimensional zero mean Gaussian process with covariance kernel H(u1 ∧ u2 ); W (s u) is a univariate two parameter Brownian motion defined as in CH and W (s) ≡ W (s 1).
1210
JEAN-YVES PITARAKIS
PROPOSITION 1: Under the assumptions and setup of Theorem 4 in CH we have (3)
sup WT (u) ⇒ sup S ∗ (u) M ∗ (u)−1 S ∗ (u)
u∈[π1 π2 ]
where (4)
u∈[π1 π2 ]
∗
S (u) =
X(s) dK(s u) − X(s)h(u) H(1)−1 W2 (1) W2 (u) − H(u)H(1)−1 W2 (1) − h(u)W (1)
M ∗ (u) u(1 − u) XX − Xh(u) H(1)−1 h(u) X = −1 ((1 − u)h(u) − H(u)H(1)
h(u)) X
X((1 − u)h(u) − h(u) H(1)−1 H(u))
H(u) − h(u)h(u) − H(u)H(1)−1 H(u)
with X(s) = (W (s) r(s) ) and K(s u) = W (s u) − uW (s 1). The expression in (3) provides the limiting distribution of the Wald statistic for testing for threshold effects in model (1) under the assumptions 1 and 2 of CH. Comparing (3) with the expression T (u) obtained in Theorem 4 of CH, it is clear that the asymptotic distribution of WT (u) appears more complicated and does not lend itself to obvious simplifications. Inverting M ∗ (u) and rearranging with (4) also does not appear to lead to a simpler looking formulation. The model specific population moments such as h(u) and H(u) enter (4) in a complicated manner, and this presence of the moments of the lagged dependent regressors clearly appears to create a more complicated asymptotic structure than what appeared in equation (9) of CH. 3.2. The CH Transformation and Its Implications In CH, instead of operating directly from (2) and for the sole purpose of deriving the limiting distribution of the Wald statistic, the authors intro1 θ1 + X 2 θ2 + duced the modified specification (see CH, p. 1586) y = X 1 = ( yt−1 r w 2 = ( yt−1 r w e with X ) I1t−1 , and X 1t−1 t 2t−1 ) I2t−1 , where t wit−1 = wt−1 − ( Iit−1 wt−1 / Iit−1 ) for i = 1 2, arguing that the Wald statistic computed from this transformed specification will remain equivalent to 2 and making use of the prop=X 1 + X that obtained from (1). Letting X erty X X1 = X1 X1 , it is easy to see here that for given u, the Wald statistic for testing H0 : θ1 = θ2 in this transformed model, say WTch (u), can be formu X −1 ˜ ˜ ˜ e2 , where lated as WTch (u) = (θ˜ 1 − θ˜ 2 ) [X 1 1 − X1 X1 (X X) X1 X1 ](θ1 − θ2 )/σ −1 2 X θ˜ 1 = (X ˜ e is the corresponding residual variance. i i ) Xi y for i = 1 2 and σ The following proposition summarizes the consequences of this model transformation on the distributions of the Wald statistic for testing H0 : θ1 = θ2 as well as H0 : ρ1 = ρ2 = 0 in (1).
THRESHOLD AUTOREGRESSIONS
1211
PROPOSITION 2: (i) Under the same assumptions as in CH and H0 : θ1 = θ2 , we have WT (u) = WTch (u) ∀T and ∀u. (ii) Under H0 : ρ1 = ρ2 = 0, we have WT (u) = WTch (u) ∀T u. Proposition 2 formally establishes that under the assumptions and general framework of CH, the limiting distribution of the Wald statistic obtained under the transformed model cannot coincide with that obtained under the original model in (1). Note that our result is valid for any u and naturally applies to the supremum of WT (u). Proposition 2 above also highlights the fact that the remaining results of CH that involve testing the joint null of linearity and a unit root are unaffected. The only minor correction that needs to be mentioned is that in Theorem 5 of CH, the T (u) process defined in their equation (7) needs to be replaced by the process S ∗ (u) M ∗ (u)−1 S ∗ (u) that appears in Proposition 2 above. REMARK: It is worth mentioning that although CH based their inferences on the Wald statistic obtained from the above transformed regression, their implementation of the algebra surrounding the use of WTch (u) does not really correspond to a Wald statistic aiming to test H0 : θ1 = θ2 in the transformed model, that is, it is not equivalent to our expression of WTch (u) presented above and the motivation underlying the formulation referred to as WT∗ (u) on page 1587 of CH is unclear. 4. ASYMPTOTICS UNDER UNIT ROOT REGRESSORS ONLY At this stage it is important to highlight the fact that the difficulties in CH arose due to the inclusion of stationary (lagged dependent) regressors in the fitted model (1). Indeed, it is easy to observe that the transformed regression model used in CH and the original model in (1) coincide if no lagged dependent regressors are included. In Theorem 4 of CH, the authors have formulated the limiting distribution of their sup-Wald statistic as the supremum of a sum expressed as Q1 (u) + Q2 (u), where Q1 (u) and Q2 (u) are independent stochastic processes defined in equations (8) and (9) of CH, and where Q2 (u) arises solely because of the presence of lagged dependent regressors in (1). Clearly, if the fitted model contained no lagged dependent regressors so that X1 = ( yt−1 rt ) I(Ut−1 ≤ u) and X2 = ( yt−1 rt ) I(Ut−1 > u), the CH limiting distribution would be given solely by the supremum of Q1 (u), formulated exactly as in CH, and the issues we raised about the erroneous model transformation and its consequences would not come into play. Indeed, from Proposition 1 above it is easy to see that when removing the presence of lagged dependent regressors the limiting distribution of the sup-
1212
JEAN-YVES PITARAKIS
Wald statistic takes the simpler form given by (5)
sup WT (u)
u∈[π1 π2 ]
1 X(s) dK(s u) ⇒ sup u(1 − u) u −1 X(s) dK(s u) × X(s)X(s)
Recalling that K(s u) = W (s u) − uW (s 1), it is then straightforward to observe that our formulation in (5) is identical to the limiting process denoted as Q1 (u) in CH. In CH, the authors further argued that this Q1 (u) component expressed as a normalized quadratic form in stochastic integrals is nonstandard and arises due to the presence of nonstationary regressors (see CH, pp. 1562– 1563). However, a closer look at the expression appearing in (8) of CH or (5) above suggests instead that Q1 (u) is in fact equivalent to a random variable given by a quadratic form in normalized Brownian bridges, identical to the one that occurs when testing for structural breaks in a purely stationary framework. The following proposition summarizes our result. PROPOSITION 3: Under the same assumptions as in CH but with X1 = ( yt−1 rt ) I(Ut−1 ≤ u) and X2 = (yt−1 rt )I(Ut−1 > u) so that the models contain no lagged dependent regressors, we have supu∈[π1 π2 ] WT (u) ⇒ supu∈[π1 π2 ] B0 (u) × B0 (u)/u(1 − u) with B0 (u) denoting a standard Brownian bridge of the same dimension as θ. The result in Proposition 3 is unusual and interesting for a variety of reasons. It highlights an environment in which the null distribution of the Wald statistic no longer depends on any nuisance parameters as is typically the case in the purely stationary setup of Hansen (1996) and thus no bootstrapping is needed for conducting inferences provided that the fitted model excludes the stationary lagged dependent regressors. Perhaps more interestingly and despite the underlying unit root, we have a limiting distribution that typically occurs in a stationary environment when one wishes to test for the presence of structural breaks. In fact, the distribution presented in Proposition 3 is extensively tabulated in Andrews (1993), and Hansen (1997) also provided p-value approximations which can be used for inference purposes. From the results in Hansen (1996), it is well known that in a purely stationary threshold model, the limiting distribution of the same Wald statistic depends on model specific quantities such as the population means and variances of regressors, among other nuisance parameters. Proposition 3 establishes that this is no longer the case in the present unit root environment provided that no lagged dependent regressors are included. It is also interesting to recall that
THRESHOLD AUTOREGRESSIONS
1213
within the stationary environment of Hansen (1996) there is one special scenario under which the limiting distribution in question also simplifies to that of Proposition 3. This scenario occurs if the threshold variable is independent of all the regressors included in the fitted specification. Since here the threshold variable appearing in the indicator function is always stationary while the regressor is a unit root process, as pointed out by a referee, an intuitive analogy comes through asymptotic uncorrelatedness arguments between the two processes whose variances are characterized by different orders of magnitude. From a practical point of view, it is important to mention that the asymptotic distribution of Proposition 3 depends on the number of parameters being tested under the null hypothesis (i.e., the dimension of θ) and the trimming percentages π1 and π2 since we operate under u ∈ [π1 π2 ]. In practice it is common to implement the test by setting π2 = 1 − π1 and using π1 = 010 or π1 = 015. Once the magnitude of the sup-Wald statistic has been computed it then suffices to obtain the relevant critical values from Table I in Andrews (1993). Alternatively, given the magnitude of the sup-Wald statistic, the corresponding approximate p-value can be evaluated using the pvsup routine provided in Hansen (1997). APPENDIX PROOF OF PROPOSITION 1: Our starting point is the formulation of the Wald statistic in (2) which can equivalently be written as (6)
WT (u) = [e X1 − e X(X X)−1 X1 X1 ][X1 X1 − X1 X1 (X X)−1 X1 X1 ]−1 × [X1 e − (X1 X1 )(X X)−1 X e]/σˆ e2
by using the null model y = Xθ + e in (2). Following CH (see p. 1587), we take as our null model a(L)yt = et and let a(1) ≡ 1 − α1 − · · · − αk . Note that as in CH we impose a zero mean on our chosen Data Generating Process with no loss of generality since the fitted model contains a trend component. Since the value of the Wald statistic is unaffected by rescaling yt−1 by a constant, it is also convenient to replace, as in CH, the regres sor matrix X with X ∗ = ( yt−1 /(σe /a(1)) rt wt−1 ). Replacing X by X ∗ in (6) leaves the Wald statistic unchanged. It will also be convenient to define xt−1 = ( yt−1 /(σe /a(1)) rt ) and write X ∗ = ( xt−1 wt−1 ) . Our next objective is to obtain the limiting behavior of the properly normalized version of [X1∗ X1∗ − X1∗ X1∗ (X ∗ X ∗ )−1 X1∗ X1∗ ]. We write −1 xt−1 wt−1 D−1 D−1 D1T xt−1 xt−1 D−1 1T 1T 2T −1 ∗ ∗ −1 (7) DT X X DT = wt−1 xt−1 D−1 wt−1 wt−1 D−1 D−1 D−1 2T 1T 2T 2T where the normalizing matrices are such that DT = diag(D1T D2T ) with D2T = √ T Ik . The dimension of D1T depends on the dimension of the vector of de-
1214
JEAN-YVES PITARAKIS
√ terministic components. If rt = ( 1 t ) , for instance, D1T = diag(T T T 3/2 ). Letting X(s) = ( W (s) r(s) ) , Theorem 3 in CH directly implies that 1 X(s)X(s) 0 −1 ∗ ∗ −1 0 (8) ≡ M DT X X DT ⇒ 0 H(1) ∗ ∗ −1 We next deal with the limiting behavior of D−1 T X1 X1 DT . We write
(9)
∗ ∗ −1 D−1 T X1 X1 DT −1 D1T xt−1 xt−1 I1t−1 D−1 1T = −1 w D−1 x I D t−1 t−1 1t−1 1T 2T
D−1 1T D−1 2T
and from Theorem 3 of CH we have
1 u 0 X(s)X(s) −1 ∗ ∗ −1 (10) DT X1 X1 DT ⇒ 1 h(u) 0 X(s)
xt−1 wt−1 I1t−1 D−1 2T wt−1 wt−1 I1t−1 D−1 2T
1 0
X(s)h(u) H(u)
≡ M(u)
It now follows from the continuous mapping theorem that (11)
∗ ∗ −1 −1 ∗ ∗ ∗ ∗ −1 ∗ ∗ −1 [D−1 T X1 X1 DT − DT X1 X1 (X X ) X1 X1 DT ]
⇒ [M(u) − M(u)M −1 M(u)] ≡ M ∗ (u) −1 We next focus on the limiting behavior of D−1 T X e and DT X1 e. At this stage, we can write the Wald statistic as
(12)
∗ −1 −1 ∗ −1 WT (u) = [e X1∗ D−1 T − e X DT M M(u)]M (u) ∗ −1 −1 ∗ × [D−1 ˆ e2 + op (1) T X1 e − M(u)M DT X e]/σ
Looking at each component separately, setting σe2 = 1 for simplicity and no loss of generality, and using Theorem 2 in CH, we have 1 −1 D1T xt−1 et X(s) dW (s 1) −1 ∗ 0 ⇒ DT X e = (13) ≡ S(1) √1 wt−1 et W2 (1) T and (14)
−1 T
∗ 1
D X e=
1 D−1 xt−1 et I1t−1 X(s) dW (s u) 1T 0 ⇒ ≡ S(u) √1 wt−1 et I1t−1 W2 (u) T
where W2 (u) denotes a zero mean Gaussian process with covariance kernel H(u1 ∧ u2 ). The above allows us to formulate the limiting behavior of ∗ −1 −1 ∗ D−1 T X1 e − M(u)M DT X e as (15)
∗ −1 −1 ∗ −1 ∗ D−1 T X1 e − M(u)M DT X e ⇒ S(u) − M(u)M S(1) ≡ S (u)
THRESHOLD AUTOREGRESSIONS
1215
X(s) ( X(s)X(s) )−1 × Taking r(s) = (1s) , for instance, we have X(s) dW (s 1) = dW (s 1) ≡ W (1) and the expression in (3) follows. Q.E.D. THEOREM 1: Consider the multiple regression model y = Xθ + e and the null hypothesis H0 : Rθ = r. Replacing the regressor matrix X with X ∗ = XA, where |A| =
0, leaves the Wald statistic for testing H0 unchanged iff R is replaced with R∗ = RA in the transformed model. PROOF OF PROPOSITION 2: The original model y = Xorig θ + e has regres + e has sors Xorig = ( X1 X2 ) while the CH transformed model y = Xγ X = ( X1 X2 ). We first obtain the expression of matrix A, say Ach , which that is, X = Xorig Ach . For simplicity and no loss when applied to Xorig gives X, of generality, we set rt = (1 t) and take wt−1 to be a scalar so that Xorig = ( yt−1 I1t−1 I1t−1 tI1t−1 wt−1 I1t−1 yt−1 I2t−1 I2t−1 tI2t−1 wt−1 I2t−1 ) and = ( yt−1 I1t−1 I1t−1 tI1t−1 w1t−1 I1t−1 yt−1 I2t−1 I2t−1 tI2t−1 w2t−1 I2t−1 ) X with w1t−1 and w2t−1 defined as in the text. Accordingly, the parameter vectors θ and γ are both 8 × 1 vectors and we let θ = (θ1 θ2 ) with θi = (θi1 θi2 θi3 θi4 ), i = 1 2. Similarly, γ = (γ1 γ2 ) with γi = (γi1 γi2 γi3 γi4 ), i = 1 2. It is now = Xorig Ach with Ach given by easy to note that X ⎛
(16)
1 ⎜0 ⎜ ⎜0 ⎜ ⎜0 ⎜ Ach = ⎜ ⎜0 ⎜ ⎜0 ⎜ ⎝0 0
0 1 0 0 0 0 0 0
0 0 0 −w¯ 1 1 0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0
0 0 0 0 0 1 0 0
⎞ 0 0 0 0 ⎟ ⎟ 0 0 ⎟ ⎟ 0 0 ⎟ ⎟ ⎟ 0 0 ⎟ ⎟ 0 −w¯ 2 ⎟ ⎟ 1 0 ⎠ 0 1
T T T T where w¯ 1 = t=1 wt−1 I1t−1 / t=1 I1t−1 and w¯ 2 = t=1 wt−1 I2t−1 / t=1 I2t−1 . We next recall that our original goal was to test H0 : Rθ = 0 in y = Xorig θ + e, where R = [ I −I ] and with I denoting a 4 × 4 identity matrix (i.e., H0 : θ1 = θ2 ). It now follows directly from Theorem 1 above that this is equivalent to + e with R∗ = RAch . This then establishes that testing H0 : R∗ γ = 0 in y = Xγ testing H0 : θ1 = θ2 in the original model is not equivalent to testing H0 : γ1 = γ2 in the CH transformed model since R∗ = R and the null hypothesis H0 : θ1 = θ2
1216
JEAN-YVES PITARAKIS
does not translate into H0 : γ1 = γ2 in the CH transformed model. The restriction matrix in the CH transformed model is different from R and is given by ⎞ ⎛ 1 0 0 0 −1 0 0 0 ⎜ 0 1 0 −w¯ 1 0 −1 0 w¯ 2 ⎟ ⎟ (17) R∗ = RAch = ⎜ ⎝0 0 1 0 0 0 −1 0 ⎠ 0 0 0
1
0
0
0
−1
from which it follows that the explicit null hypothesis tested in CH’s framework is given by ⎞ ⎛ ⎞ ⎛ 0 γ11 − γ21 ⎜ ⎟ ⎜ γ12 − w¯ 1 γ14 − γ22 + w¯ 2 γ24 ⎟ ⎜ 0 ⎟ ⎟ R∗ γ = ⎜ (18) ⎠ = ⎝0⎠ ⎝ γ13 − γ23 0 γ14 − γ24 which is clearly different from testing H0 : γ1 = γ2 in the CH transformed model since from (18) above we can see, for instance, that γ12 = γ22 . This can also be seen more explicitly by writing the second row of the column vector in (18) as γ12 − γ22 = (w¯ 1 − w¯ 2 )γ14 = (w¯ 1 − w¯ 2 )γ24 since γ14 = γ24 . For part (ii) of the proof, the result is obtained by following identical lines. Since here R = [ 1 0 0 0 −1 0 0 0 ], from the expression of Ach in (17) we immediately have that testing Rθ = 0 in the original model (1) (i.e., H0 : ρ1 = ρ2 = 0) is the same thing as testing Rγ = 0 in the transformed model. Q.E.D. PROOF OF PROPOSITION 3: The proof follows by writing J1∗ (u) defined in 1 equation (8) of CH as J1∗ (u) = 0 X(s) dK(s u). Here X(s) = ( W (s) r(s) ) as in CH and K(s u) = W (s u) − uW (s 1). This makes it clear that X(s) and K(s u) are independent for any u, and the result then follows from Lemma 5.1 in Park and Phillips (1988) and the definition of a Brownian bridge. Specifically, since X(s) is Gaussian and independent of K(s u), we 1 have 0 X(s) dK(s u) ≡ N(0 u(1 − u) X(s)X(s) ) conditionally on a real1 ization of X(s). Normalizing by 0 X(s)X(s) , we obtain the Brownian bridge limit process, which is also the unconditional distribution since it is not dependent on a realization of X(s). Q.E.D. REFERENCES ANDREWS, D. W. K. (1993): “Tests for Parameter Instability and Structural Change With Unknown Change Point,” Econometrica, 61, 821–856. [1208,1212,1213] CANER, M., AND B. E. HANSEN (2001): “Threshold Autoregression With a Unit Root,” Econometrica, 69, 1555–1596. [1207] HANSEN, B. E. (1996): “Inference When a Nuisance Parameter Is Not Identified Under the Null Hypothesis,” Econometrica, 64, 413–430. [1207,1208,1212,1213]
THRESHOLD AUTOREGRESSIONS
1217
(1997): “Approximate Asymptotic p-Values for Structural Change Tests,” Journal of Business and Economic Statistics, 15, 60–67. [1212,1213] PARK, J. Y., AND P. C. B. PHILLIPS (1988): “Statistical Inference in Regressions With Integrated Processes: Part 1,” Econometric Theory, 4, 468–497. [1216]
Division of Economics, School of Social Sciences, University of Southampton, Highfield, Hampshire, Southampton SO17 1BJ, United Kingdom; j.pitarakis@ soton.ac.uk. Manuscript received February, 2007; final revision received January, 2008.
Econometrica, Vol. 76, No. 5 (September, 2008), 1219–1222
ANNOUNCEMENTS 2008 EUROPEAN WINTER MEETINGS
THE 2008 EUROPEAN WINTER MEETINGS of the Econometric Society will take place at Churchill College, University of Cambridge, Cambridge, United Kingdom, from October 31 to November 1. The Programme Committee consists of the Regional Consultants, with Professor Richard J. Smith as Programme Committee Chair. Senior members of the Econometric Society may propose candidates to the consultant of their region. Candidates are PhD students or recent PhD graduates who intend to go on the job market during the academic year 2008–2009. Eighteen participants are selected by the Regional Consultants to present their research and to discuss the research of the other participants at the meetings. The selection of participants proceeds in two stages. In the first stage each Regional Consultant selects one participant from his/her region and proposes a shortlist of candidates. In the second stage the remaining ten participants are selected by the Regional Consultants from these shortlists.
Regional Consultants:
Chairperson: Prof. Jean-Marc Robin; Paris School of Economics, Université de Paris 1, and University College London; e-mail: [email protected]; Areas of expertise: Labor economics, Microeconometrics; Region: France, Portugal, and Spain
Prof. Giuseppe Bertola; Università di Torino; e-mail: [email protected]; Areas of expertise: Macroeconomics, Labor economics, Financial theory and policy; Region: Greece, Italy, and Switzerland
Prof. Georg Kirchsteiger; Université Libre de Bruxelles; e-mail: [email protected]; Areas of expertise: Microeconomic theory, Experimental economics, Public economics; Region: Benelux
Prof. Tom Krebs; University of Mannheim; e-mail: [email protected]; Areas of expertise: Macroeconomics, Financial economics, Economic theory; Region: Austria and Germany
Prof. Zvika Neeman; Boston University and Tel Aviv University; e-mail: [email protected]; Areas of expertise: Microeconomic theory, Game theory, Mechanism design; Region: Israel and Other Areas
Prof. Richard J. Smith; University of Cambridge; e-mail: [email protected]; Areas of expertise: Econometric theory, Estimation and inference in econometrics, Hypothesis testing, Model selection; Region: Great Britain and Ireland
Prof. Kjetil Storesletten; University of Oslo; e-mail: [email protected]; Areas of expertise: Macroeconomics, Political economy; Region: Scandinavia
Prof. Ákos Valentinyi; University of Southampton; e-mail: [email protected]; Areas of expertise: Macroeconomic theory, Economic growth, Transition economics; Region: Eastern European countries
Proposals of candidates should be sent by e-mail to the Consultant of the relevant region for the candidate before September 5, 2008. The e-mail should include the candidate's CV and the complete paper s/he would like to present, together with an introduction detailing the qualities of the candidate and, if the paper is co-authored, the contribution of the candidate. Successful candidates will be informed by October 3, 2008. Information concerning these meetings will be made available in due course on the conference website http://www.econ.cam.ac.uk/esewm08/.

2008 LATIN AMERICAN MEETING
THE 2008 LATIN AMERICAN MEETING of the Econometric Society (LAMES) will take place in Rio de Janeiro, Brazil, November 20–22. The meetings are jointly hosted by Fundação Getulio Vargas (FGV/EPGE) and Instituto Nacional de Matemática Pura e Aplicada (IMPA) and will take place at IMPA. The 2008 LAMES will run in parallel with the meeting of the Latin American and Caribbean Economic Association (LACEA). Registered participants will be welcome to attend all sessions of both meetings.
Economists and professionals working in related fields, including those who are not currently members of the Econometric Society, are invited to submit theoretical and applied papers in all areas of economics for presentation at the Congress. Each author may submit only one paper to the LAMES and only one paper to the LACEA, and the same paper may not be submitted to both Congresses. At least one co-author must be a member of the corresponding Society or Association.
The conference website is now available for submissions at http://lacealames2008.fgv.br. The submission deadline is September 30, 2008. Authors may join the Econometric Society at http://www.econometricsociety.org.
The Program Committee Chair is Aloisio Araujo. The Program Committee Co-Chairs are Humberto Moreira (EPGE/FGV) for Theory and Applied Economics and Sergio Firpo (PUC-Rio) for Econometrics. For further information please visit the conference website at http://lacealames2008.fgv.br or contact us via e-mail at [email protected].

2009 NORTH AMERICAN WINTER MEETING
THE 2009 NORTH AMERICAN WINTER MEETING of the Econometric Society will be held in San Francisco, CA, from January 3 to 5, 2009, as part of the annual meeting of the Allied Social Science Associations. The program will consist of contributed and invited papers. It is hoped that the research presented will represent a broad spectrum of applied and theoretical economics and econometrics. The program committee will be chaired by Steven Durlauf of the University of Wisconsin–Madison.
Program Committee:
Steven Durlauf, University of Wisconsin–Madison (Program Chair)
David Austen-Smith, Northwestern University (Political Economy)
Dirk Bergemann, Yale University (Information Economics)
Lawrence Blume, Cornell University (Game Theory)
Moshe Buchinsky, University of California, Los Angeles (Applied Econometrics)
Dennis Epple, Carnegie Mellon University (Public Economics)
Oded Galor, Brown University (Economic Growth)
Jinyong Hahn, University of California, Los Angeles (Econometric Theory)
Caroline Hoxby, Stanford University (Social Economics)
Guido Kuersteiner, University of California, Davis (Time Series)
Jonathan Levin, Stanford University (Industrial Organization)
Shelly Lundberg, University of Washington (Labor Economics)
James Rauch, University of California, San Diego (International Trade)
Hélène Rey, Princeton University (International Finance)
Manuel Santos, University of Miami (Computational Economics)
Chris Shannon, University of California, Berkeley (Mathematical Economics)
Steven Tadelis, University of California, Berkeley (Market Design)
Petra Todd, University of Pennsylvania (Microeconometrics/Empirical Microeconomics)
Toni Whited, University of Wisconsin (Finance)
Noah Williams, Princeton University (Macroeconomics)
Justin Wolfers, Wharton (Behavioral Economics/Experimental Economics)
Tao Zha, Federal Reserve Bank of Atlanta (Macroeconomics)
Lin Zhou, Arizona State University (Social Choice Theory/Microeconomic Theory)

2009 NORTH AMERICAN SUMMER MEETING
THE 2009 NORTH AMERICAN SUMMER MEETING of the Econometric Society will be held June 4–7, 2009, hosted by the Department of Economics, Boston University, in Boston, MA. The program will be composed of a selection of invited and contributed papers. The program co-chairs are Barton Lipman and Pierre Perron of Boston University. The local arrangements chair is Marc Rysman of Boston University.
Program Committee:
Daron Acemoglu, Massachusetts Institute of Technology (Macroeconomics: Growth and Political Economy)
John Campbell, Harvard University (Financial Economics)
Yeon-Koo Che, Columbia University (Auctions and Contracts)
Francis X. Diebold, University of Pennsylvania (Financial Econometrics)
Jean-Marie Dufour, McGill University (Theoretical Econometrics)
Jonathan Eaton, New York University (International Trade)
Glenn Ellison, Massachusetts Institute of Technology (Theoretical Industrial Organization)
Charles Engel, University of Wisconsin (International Finance)
Larry Epstein, Boston University (Plenary Sessions)
Hanming Fang, Duke University (Theoretical Public Economics)
Jesus Fernandez-Villaverde, University of Pennsylvania (Macroeconomics: Dynamic Models and Computational Methods)
Simon Gilchrist, Boston University (Plenary Sessions)
Wojciech Kopczuk, Columbia University (Empirical Public Economics)
Thomas Lemieux, University of British Columbia (Empirical Microeconomics)
Dilip Mookherjee, Boston University (Plenary Sessions)
Kaivan Munshi, Brown University (Development)
Muriel Niederle, Stanford University (Experimental Economics and Market Design)
Edward O'Donoghue, Cornell University (Behavioral Economics)
Claudia Olivetti, Boston University (Empirical Labor/Macroeconomics)
Christine Parlour, University of California, Berkeley (Corporate Finance/Microeconomic Foundations of Asset Pricing)
Zhongjun Qu, Boston University (Plenary Sessions)
Lucrezia Reichlin, London School of Economics (Applied Macroeconomics/Factor Models: Theory and Application)
Marc Rysman, Boston University (Empirical Industrial Organization)
Uzi Segal, Boston College (Decision Theory)
Chris Shannon, University of California, Berkeley (General Equilibrium and Mathematical Economics)
Balazs Szentes, University of Chicago (Economic Theory)
Julia Thomas, Ohio State University (Macroeconomics: Business Cycles)
Timothy Vogelsang, Michigan State University (Time Series Econometrics)
Adonis Yatchew, University of Toronto (Micro-Econometrics and Non-Parametric Methods)
Muhammet Yildiz, Massachusetts Institute of Technology (Game Theory)
Econometrica, Vol. 76, No. 5 (September, 2008), 1223
FORTHCOMING PAPERS
THE FOLLOWING MANUSCRIPTS, in addition to those listed in previous issues, have been accepted for publication in forthcoming issues of Econometrica.
CHAMBERLAIN, GARY, AND MARCELO J. MOREIRA: “Decision Theory Applied to a Linear Panel Data Model.”
CONLON, JOHN R.: “Two New Conditions Supporting the First-Order Approach to Multi-Signal Principal-Agent Problems.”
ELLISON, GLENN, AND SARA FISHER ELLISON: “Search, Obfuscation, and Price Elasticities on the Internet.”
FORTNOW, LANCE, AND RAKESH V. VOHRA: “The Complexity of Forecast Testing.”
GONÇALVES, SÍLVIA, AND NOUR MEDDAHI: “Bootstrapping Realized Volatility.”
GOVINDAN, SRIHARI, AND ROBERT WILSON: “On Forward Induction.”
HANSEN, LARS PETER, AND JOSÉ SCHEINKMAN: “Long-Term Risk: An Operator Approach.”
HEYDENREICH, BIRGIT, RUDOLF MÜLLER, MARC UETZ, AND RAKESH VOHRA: “Characterization of Revenue Equivalence.”
KASAHARA, HIROYUKI, AND KATSUMI SHIMOTSU: “Nonparametric Identification of Finite Mixture Models of Dynamic Discrete Choices.”
RINCÓN-ZAPATERO, JUAN PABLO, AND CARLOS RODRÍGUEZ-PALMERO: “Corrigendum to ‘Existence and Uniqueness of Solutions to the Bellman Equation in the Unbounded Case’.”
SIEGEL, RON: “All-Pay Contests.”
SINCLAIR-DESGAGNÉ, BERNARD: “Ancillary Statistics in Principal-Agent Models.”
Econometrica, Vol. 76, No. 5 (September, 2008), 1225–1226
THE ECONOMETRIC SOCIETY ANNUAL REPORTS, 2007
REPORT OF THE PRESIDENT
1. THE SOCIETY
IT HAS BEEN MY PLEASURE to serve as President of the Econometric Society in 2007. Fisher, Frisch, and the others who initiated the Society articulated a vision for formal, quantitative, and empirical research that continues to play out in exciting ways, albeit on a dramatically larger scale. In Frisch's opening editorial in Econometrica he argued as follows:
Experience has shown that each of . . . three viewpoints, that of statistics, economic theory, and mathematics is a necessary, but not by itself a sufficient, condition for real understanding of quantitative relationships in modern economic life. It is the unification that is powerful. And it is the unification that constitutes econometrics.
In current research this unification may take place across articles and projects, but the vision remains a powerful one for this Society. The Econometric Society thrives through its journal, Econometrica, and its monograph series, along with the many conferences that it organizes. The editor of Econometrica, Stephen Morris, continues a long line of distinguished editors and is supported by excellent co-editors and associate editors. Throughout 2007, Andrew Chesher and Matt Jackson continued to recruit and edit high-quality monographs. Since Matt stepped down at the end of the year, I specifically thank him for his hard work and welcome his replacement, George Mailath.
During the course of the year, I attended Society meetings in six different regions. Each of these meetings had its own unique character. The research activity and intensity that I observed continues the long and healthy tradition of our Society.
North American Winter Meeting, Chicago, Illinois, January 5–7, 2007
North American Summer Meeting, Durham, North Carolina, June 21–24, 2007
Australasian Meeting, Brisbane, Queensland, Australia, July 3–6, 2007
Far Eastern Meeting, Taipei, Taiwan, July 11–13, 2007
European Summer Meeting, Budapest, Hungary, August 27–31, 2007
Latin American Meeting, Bogotá, Colombia, October 4–6, 2007
I appreciate the important efforts of the local organizers and the chairs of the scientific committees that made these meetings such big successes. On behalf of the Society, I thank them for their hard work.
2. WORLD CONGRESS
The Executive Committee considered proposals to host the 2010 World Congress by (i) Shanghai Jiao Tong University (SJTU), (ii) Shanghai University of Finance and Economics (SUFE), and (iii) the Singapore Management University (SMU) in cooperation with the National University of Singapore, the Nanyang Technological University, and INSEAD. All three proposals provided us with attractive alternatives, and I very much appreciate the work involved in preparing them. I am delighted that the World Congress will be held in Shanghai and that it will involve a cooperative effort of SJTU, SUFE, and three other institutions. The World Congress is the most important meeting of our Society, and I look forward to a truly exciting event in 2010.
3. NEW JOURNALS
There has been a continuing discussion within the Executive Committee and throughout the Society about the possible creation of new journals. At the Executive Committee meeting in August 2007, Richard Blundell, Torsten Persson, and I suggested that the Econometric Society consider creating two journals: one with a focus on economic theory and applications, and another with a focus on quantitative methods and applications broadly defined. The Executive Vice-President, Rafael Repullo, described how the Society could fund such journals. The Executive Committee agreed to initiate the creation process.
A lingering concern has been what the quantitative journal might look like, how it might support Frisch's vision of the role of the Econometric Society, and whether there is sufficient interest in this journal to recruit the talent needed to make it a successful venture. As a result of these concerns, I appointed a committee chaired by Jean-Marc Robin that included Manuel Arellano, Orazio Attanasio, Stephen Durlauf, Robert Porter, and Thomas Sargent to propose what a successful new quantitative journal would look like and whether its creation is warranted. They prepared a report at the end of the year, and it is posted on the Econometric Society web page. The Executive Committee will discuss the next steps to be taken at its August 2008 meeting.
4. COMMITTEES
The Econometric Society has three nominating committees: for Fellows, for Officers, and for Council Members. I gratefully acknowledge the members of these committees for their important work for the Society.
FELLOWS COMMITTEE: Orazio Attanasio, Chair; Matthew Jackson; Rosa Matzkin; Larry Samuelson; Harald Uhlig.
OFFICERS COMMITTEE: Ariel Rubinstein, Chair; Richard Blundell; Lars Peter Hansen; Eric Maskin; Roger Myerson; Torsten Persson; Thomas Sargent.
COUNCIL COMMITTEE: Richard Blundell, Chair; Aloisio Araujo; Manuel Arellano; David Card; Mathias Dewatripont; Joon Park; Alvin Roth.
The selection committee for the 2008 Frisch Medal comprised Gary Chamberlain (Chair), John Cochrane, and Jean-Marc Robin. I offer (again) my congratulations to David Card and Dean Hyslop for receiving this well-deserved medal.
5. FINAL WORD
My job as President was made considerably easier by the continued advice and support of the Executive Vice-President, Rafael Repullo. The General Manager, Claire Sashi, does an admirable job making sure that the Society runs smoothly. It has been a privilege to work with and learn from the previous presidents, Thomas Sargent and Richard Blundell. The Society has a great group of people to lead it over the next few years in Torsten Persson, Roger Myerson, and John Moore.
Lars Peter Hansen
PRESIDENT IN 2007